"Testifying While Black" is now available online!

A few months ago, I wrote about my article “Testifying while black”, written with Taylor Jones, Ryan Hancock, and Robin Clark, which received quite a bit of media attention. That article has just been published on Project MUSE ahead of print for the June edition of Language.

Here is a repost of my summary of the work from January:

Following four years of work [EDIT: I realized it's actually been 6 years!], a project I did with several colleagues has just been accepted for publication in the journal Language. Together with Taylor Jones (U Penn Linguistics, www.languagejones.com), Ryan Hancock (Philadelphia Lawyers for Social Equity, WWD), and Robin Clark (U Penn Linguistics), I conducted a study to test Philadelphia court reporters' transcription accuracy and comprehension of African American English (see Taylor Jones' explainer on this language variety).

In order to work as an official court reporter in Philadelphia, candidates have to be certified by the court system at 95% accuracy. In other words, they have to be able to correctly transcribe 95% of all words they hear in a test at specific speeds (words per minute) that vary by type of speech: there are different required speeds for testimony versus question and answer, etc. However, based on informal interviews with court reporters, we determined that their certification tests and training are based on Mainstream American English (MAE) as spoken by lawyers, judges, and broadcasters. We know from previous research that there are issues of cross-dialect comprehension between speakers of African American English (AAE) and MAE (some examples outlined here), so we decided to test court reporters' ability to accurately transcribe AAE as spoken by native speakers from the Philadelphia area.

We recruited 9 native speakers of AAE (5 men and 4 women of varying ages) from West Philadelphia, North Philadelphia, Harlem, and Jersey City and recorded them reading 83 sentences in AAE that were chosen from actual heard speech (we did not create the sentences from imagination). These sentences included syntactic features of AAE both alone and in combination. We then randomized the voices and sentences. We played the audio for 27 court reporters, one third of the official court reporting pool in Philadelphia. Reporters were given a 220 Hz warning tone, followed by a sentence repeated twice, and then ten seconds of silence. The sentences were played at 70-80 decibels at 10 feet (more than loud enough for the court reporters to hear) and at speeds much slower than their certification tests. We asked the court reporters to transcribe each sentence and then paraphrase it in "classroom" English. While we were aware that paraphrasing is not part of their normal job, we were curious whether miscomprehension contributed to mistranscription.

The results showed that the court reporters in our sample could not transcribe spoken AAE at their required level of accuracy. When measured at the sentence level (was the sentence right or wrong?), our sample transcribed 59.5% of sentences accurately on average. When measured at the word level, our court reporters transcribed 82.9% of words accurately on average. In 31% of transcriptions, errors changed the who, what, when, or where of the sentence. Accuracy was not related to race, age, where reporters got their training, or the number of years on the job. Additionally, court reporters paraphrased the sentences correctly only 33% of the time. Surprisingly, reporters' individual paraphrase and transcription accuracy were not systematically related.
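To make the two measures concrete, here is a minimal scoring sketch. The sentence pairs are invented for illustration (they are not stimuli or data from our study), and the word-level scorer uses a simple position-by-position match where a real analysis would align the word sequences, e.g. with edit distance:

```python
# Toy scoring sketch: sentence-level vs. word-level transcription accuracy.
# The (reference, transcription) pairs below are invented examples,
# not stimuli or data from the study.

def word_accuracy(reference, transcription):
    """Fraction of reference words matched position-by-position.

    A real analysis would align the two word sequences (e.g. with
    edit distance) rather than compare positions directly.
    """
    ref = reference.lower().split()
    hyp = transcription.lower().split()
    matches = sum(1 for r, h in zip(ref, hyp) if r == h)
    return matches / len(ref)

pairs = [
    ("he been working every day", "he been working every day"),  # exact
    ("he been working every day", "he is working every day"),    # 1 word off
]

# Sentence-level: the whole transcription is either right or wrong.
sentence_acc = sum(ref == hyp for ref, hyp in pairs) / len(pairs)

# Word-level: partial credit for each correctly transcribed word.
word_acc = sum(word_accuracy(r, h) for r, h in pairs) / len(pairs)

print(f"sentence-level: {sentence_acc:.1%}")  # 50.0%
print(f"word-level: {word_acc:.1%}")          # 90.0%
```

The gap between the two numbers mirrors the pattern in our results: a transcript can get most individual words right while still getting many whole sentences wrong.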

From post-test conversations with the study participants, it was clear that these court reporters wanted the tools to perform better and did not hold explicit malice towards the speakers or the individuals they transcribe in court. They did, however, express the opinion that speakers like the ones we played for them were speaking incorrectly and that the difficulties they had with transcription were the fault of the speaker. In our paper, we contend instead that court reporters are not given appropriate training in the other varieties of English they are likely to encounter in their day-to-day work. Given that the court reporter is responsible for the official court record, and that the official record has consequences in terms of cross-examination, appeals, etc., it would behoove the court system to ensure that certification standards are related to the task at hand.

You can read more here on the study, its implications, and what we think comes next.

What's the deal with cashless businesses?

So what is the deal with cashless businesses? I've noticed more and more of them popping up in cities like New York and Philadelphia. In my experience, they are mostly casual, take-out food spots: fancy salad joints, down-home American cuisine meal assembly lines, overpriced burrito cafes, and even some third-wave coffee shops.

These businesses claim that they are cashless for positive reasons. They say it makes the flow of service go faster, that it is cleaner (no dirty dollars and coins passing from hand to hand; who knows where they have been...), and that it is safer (no stacks of cash means no burglars or armed robbers, right?).

So this all sounds great! Right? Well, it depends on who you ask.

For people with credit cards and bank accounts, in theory there is no problem with frequenting cashless businesses. All they have to do is swipe their card — or pay with an app connected to the business they want to buy from after giving the app access to their credit card number. And everyone has credit cards, right?

Wrong. According to a study done by the Federal Reserve Bank of Boston, as of 2015 only about 75% of Americans had a charge card, a credit card, or both. That means that 25% of the US population (about 82 million people) can't buy things at cashless businesses.
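As a back-of-the-envelope check on that figure (the total population number below is my own rough assumption for 2015, not a number from the Boston Fed study):

```python
# Sanity check of the "~82 million" figure cited above.
# The total population is a rough 2015 estimate (my assumption),
# not a number taken from the Federal Reserve Bank of Boston study.
us_population_2015 = 325_000_000
share_without_cards = 0.25  # neither a credit card nor a charge card

people_without_cards = us_population_2015 * share_without_cards
print(f"about {people_without_cards / 1e6:.0f} million people")
# → about 81 million people, in the same ballpark as the 82 million above
```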

Think about that: 82 million people can't enjoy the fancy salad or the third-wave coffee at these new cashless businesses simply because the business will not accept their hard-earned money in the form of physical currency. And access to a credit or charge card is not evenly distributed throughout the population. Sure, maybe there are some rich people who don't have credit cards, but the majority of people without access to credit cards are poor, working-class Americans. African-Americans, Hispanics, and immigrants are more likely than White Americans to lack a credit card. (Remember, the 25% figure is an average: everyone is lumped together to make that statistic. In reality, the percentage of Whites who don't have credit cards is much lower than the percentage of Blacks and Hispanics who don't have access to credit.)

So by refusing to accept cash, businesses are systematically shaping their clientele. By not accepting cash, businesses are not so subtly telling poor people, and especially people of color, “you can’t shop or eat here.” Which can sound a lot like “we don’t want you to shop or eat here.”

These issues are why politicians in a lot of major cities have recently started crafting legislation to ban cashless businesses. In Philadelphia, for example, the mayor has just signed new legislation that makes it illegal for businesses to refuse cash payment, with a few exceptions. It seems that New Jersey, New York, Chicago, Washington, and San Francisco are also looking into using legislative means to make refusing cash payment illegal.

So the next time you go into a store that doesn’t accept cash, take a minute to look around. See who is and isn’t present in that space. Notice who walks by, looks in the window, and then doesn’t come in. For the moment, cashless businesses offer a space to observe social stratification in action. Let’s hope, for the sake of equity, that moment doesn’t last very much longer.

Social Influence

I’ve been reading Paluck & Shepherd’s 2012 paper about social influence and salient referents, i.e. salient people within a social network. In it, they describe a very interesting study they conducted in a high school. They were able to document the social network of the whole school, which in and of itself is pretty awesome.

Based on the network, they picked out what they called “salient referents.” For them, “salient referents” were individuals who were highly connected in the network in important ways. In the case of the high school, the “salient referents” were two types of students: widely connected students (basically the type of student who knows and is known by many other students in the school) and clique leaders (students who are the leaders of densely connected subgroups within the student body). From the group of students that met these criteria, they randomly selected half to be “treatment” and half to be “control.” Then they asked the “treatment” students to participate in a sort of intervention (a few declined to participate but most agreed).
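The selection logic can be sketched in code. The friendship network below is a toy example I invented for illustration (Paluck and Shepherd worked from the full school network with richer criteria): widely connected students are those with the most ties overall, and a clique leader is the member with the most ties inside their own subgroup.

```python
import random

# Toy friendship network (invented for illustration; each tie is mutual).
friendships = {
    "Ana": {"Ben", "Cy", "Dee", "Flo"},
    "Ben": {"Ana", "Cy", "Dee"},
    "Cy":  {"Ana", "Ben"},
    "Dee": {"Ana", "Ben"},
    "Flo": {"Ana", "Gus", "Hal"},
    "Gus": {"Flo"},
    "Hal": {"Flo"},
}

# "Widely connected" students: the most ties across the whole network.
degree = {student: len(friends) for student, friends in friendships.items()}
widely_connected = sorted(degree, key=degree.get, reverse=True)[:2]
print(widely_connected)  # ['Ana', 'Ben']

# "Clique leaders": within each (hand-picked) subgroup, the member with
# the most ties to other members of that subgroup.
cliques = [{"Ben", "Cy", "Dee"}, {"Flo", "Gus", "Hal"}]
leaders = [max(c, key=lambda s: len(friendships[s] & c)) for c in cliques]
print(leaders)  # ['Ben', 'Flo']

# Randomly split the salient referents into treatment and control halves.
random.seed(0)
candidates = sorted(set(widely_connected) | set(leaders))
treatment = set(random.sample(candidates, k=len(candidates) // 2))
control = set(candidates) - treatment
```

With both groups identified, half are randomly assigned to receive the intervention, mirroring the treatment/control split described above.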

It turns out the school had been having a lot of trouble with bullying, so Paluck and Shepherd asked the chosen students to model anti-bullying behavior and participate in an anti-bullying assembly. At the assembly they publicly spoke about their experiences with bullying, either as someone who was bullied, someone who did the bullying, or as a silent bystander. They also wrote and acted out a skit about the consequences of bullying.

Paluck and Shepherd had surveyed all the students at the very beginning of the study period about their attitudes towards bullying, their perceptions about bullying, and their experiences with it. They also surveyed the teachers about disciplinary action and which students they thought created problems in the school. Then, they surveyed everyone again after the intervention and at the end of the school year.

They found that the "treatment" students seem to have changed the other students' perceptions of the descriptive norms at the school. After the "intervention", students who were connected in some way to the treatment students were more likely to report that bullying was bad, that they didn't participate, and that they intervened on behalf of others.

Let's put this in more concrete terms. This would be like asking the blonde clique leader in Mean Girls, the head of the football team, and the popular student body president to model anti-bullying behavior. The idea is that if these well-connected and popular people (what we might call "influencers": people that others look up to and want to be like, or, in the case of social media marketing, the people whose stuff you want to buy) model anti-bullying behavior, then their actions will signal to the rest of the student body that the "what we do here" norm at the school is NOT bullying. Of course, it's a whole different story if the clique leader and the captain of the football team are the main bullies to begin with ...

This got me thinking about how "salient referents" could be leveraged in other kinds of organizations to change descriptive norms (or perceptions of descriptive norms) in order to positively affect organizational culture. For example, let's say a company has a "climate problem", which is in effect a kind of grown-up bullying problem. Could that company identify "salient referents" within the organization and enlist them to model behavior that demonstrates descriptive norms around being equitable and inclusive, and so fix the "climate problem"? Of course, there is the issue of identifying the "salient referents" and deciding the most effective way for them to exert social influence through modeling good behavior. But it definitely seems like something worth exploring.

Testifying While Black -- New media coverage

[EDIT 3: Pod Save the People talked about our research this week! Start listening at 11 minutes, 58 seconds into the episode. It's a really good discussion of the research and its implications. And so cool to have this group of people discussing what we did!]

[EDIT 2: Taylor was on Radio Times with Marty Moss-Coane on WHYY Philly this morning. The other guests were Cassie Owens who wrote about our research in the Philadelphia Inquirer and Kami Chavis, a Professor of Law and Director of the Criminal Justice Program at Wake Forest School of Law. They had an awesome discussion about the research and its implications — with a nice little shout out to me and other co-authors at the top of the program! You can listen to the whole episode here.]

[EDIT: Taylor went on the CBC Radio One program As It Happens to talk about the research two days ago. You can listen here. Also, either he or both of us will be on Radio Times on WHYY Philly on Friday morning.]

We have been getting a lot of media attention for our forthcoming paper in Language about court reporter mistranscription of African American English. (See my previous blog post, or this excellent post from my co-author, Taylor Jones, to read a little about what the research was and what we found.)

Here is the Philadelphia Inquirer’s coverage of the research.

And here is the coverage by John Eligon at the New York Times. The article made the Sunday Times print edition!

We are still getting inquiries, so watch this space for more media related to this research.

Testifying While Black: (Mis)transcription of African American English in the Court Room

[EDIT: The article is now available online, open access(!) via Project MUSE: https://muse.jhu.edu/article/725984/pdf]

Following four years of work [EDIT: I realized it's actually been 6 years!], a project I did with several colleagues has just been accepted for publication in the journal Language. Together with Taylor Jones (U Penn Linguistics, www.languagejones.com), Ryan Hancock (Philadelphia Lawyers for Social Equity, WWD), and Robin Clark (U Penn Linguistics), I conducted a study to test Philadelphia court reporters' transcription accuracy and comprehension of African American English (see Taylor Jones' explainer on this language variety).

In order to work as an official court reporter in Philadelphia, candidates have to be certified by the court system at 95% accuracy. In other words, they have to be able to correctly transcribe 95% of all words they hear in a test at specific speeds (words per minute) that vary by type of speech: there are different required speeds for testimony versus question and answer, etc. However, based on informal interviews with court reporters, we determined that their certification tests and training are based on Mainstream American English (MAE) as spoken by lawyers, judges, and broadcasters. We know from previous research that there are issues of cross-dialect comprehension between speakers of African American English (AAE) and MAE (some examples outlined here), so we decided to test court reporters' ability to accurately transcribe AAE as spoken by native speakers from the Philadelphia area.

We recruited 9 native speakers of AAE (5 men and 4 women of varying ages) from West Philadelphia, North Philadelphia, Harlem, and Jersey City and recorded them reading 83 sentences in AAE that were chosen from actual heard speech (we did not create the sentences from imagination). These sentences included syntactic features of AAE both alone and in combination. We then randomized the voices and sentences. We played the audio for 27 court reporters, one third of the official court reporting pool in Philadelphia. Reporters were given a 220 Hz warning tone, followed by a sentence repeated twice, and then ten seconds of silence. The sentences were played at 70-80 decibels at 10 feet (more than loud enough for the court reporters to hear) and at speeds much slower than their certification tests. We asked the court reporters to transcribe each sentence and then paraphrase it in "classroom" English. While we were aware that paraphrasing is not part of their normal job, we were curious whether miscomprehension contributed to mistranscription.

The results showed that the court reporters in our sample could not transcribe spoken AAE at their required level of accuracy. When measured at the sentence level (was the sentence right or wrong?), our sample transcribed 59.5% of sentences accurately on average. When measured at the word level, our court reporters transcribed 82.9% of words accurately on average. In 31% of transcriptions, errors changed the who, what, when, or where of the sentence. Accuracy was not related to race, age, where reporters got their training, or the number of years on the job. Additionally, court reporters paraphrased the sentences correctly only 33% of the time. Surprisingly, reporters' individual paraphrase and transcription accuracy were not systematically related.

From post-test conversations with the study participants, it was clear that these court reporters wanted the tools to perform better and did not hold explicit malice towards the speakers or the individuals they transcribe in court. They did, however, express the opinion that speakers like the ones we played for them were speaking incorrectly and that the difficulties they had with transcription were the fault of the speaker. In our paper, we contend instead that court reporters are not given appropriate training in the other varieties of English they are likely to encounter in their day-to-day work. Given that the court reporter is responsible for the official court record, and that the official record has consequences in terms of cross-examination, appeals, etc., it would behoove the court system to ensure that certification standards are related to the task at hand.

You can read more here on the study, its implications, and what we think comes next.