Social Influence

I’ve been reading Paluck & Shepherd’s 2012 paper about social influence and salient referents, i.e. salient people within a social network. In it, they describe a very interesting study they conducted in a high school. They were able to document the social network of the whole school, which in and of itself is pretty awesome.

Based on the network, they picked out what they called “salient referents”: individuals who were highly connected in the network in important ways. In the case of the high school, the “salient referents” were two types of students: widely connected students (basically the type of student who knows and is known by many other students in the school) and clique leaders (students who lead densely connected subgroups within the student body). From the group of students who met these criteria, they randomly selected half to be “treatment” and half to be “control.” Then they asked the “treatment” students to participate in a sort of intervention (a few declined to participate, but most agreed).
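To make the selection procedure concrete, here is a minimal sketch on a made-up toy network (this is not the authors’ actual method; in particular, it ranks by degree only, whereas the study also identified clique leaders within dense subgroups). It picks the most-connected students and randomly splits them into treatment and control:

```python
import random

# Toy friendship network (invented for illustration): student -> students they know.
network = {
    "Ana": {"Ben", "Cal", "Dee", "Eli"},
    "Ben": {"Ana", "Cal"},
    "Cal": {"Ana", "Ben", "Dee"},
    "Dee": {"Ana", "Cal", "Eli"},
    "Eli": {"Ana", "Dee"},
    "Fay": {"Gus"},
    "Gus": {"Fay"},
}

def salient_referents(network, top_k=4):
    """Rank students by degree (how many others they are connected to)
    and keep the top_k as candidate "salient referents"."""
    return sorted(network, key=lambda s: len(network[s]), reverse=True)[:top_k]

referents = salient_referents(network)

# Randomly assign half of the referents to treatment, the rest to control.
random.seed(0)  # fixed seed so the sketch is reproducible
treatment = set(random.sample(referents, k=len(referents) // 2))
control = set(referents) - treatment
```

In a real school-sized network you would also want to detect the densely connected cliques and include their leaders, but the core idea (rank by connectedness, then randomize the chosen few) is the same.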

It turns out the school had been having a lot of trouble with bullying, so Paluck and Shepherd asked the chosen students to model anti-bullying behavior and participate in an anti-bullying assembly. At the assembly they publicly spoke about their experiences with bullying, either as someone who was bullied, someone who did the bullying, or as a silent bystander. They also wrote and acted out a skit about the consequences of bullying.

Paluck and Shepherd had surveyed all the students at the very beginning of the study period about their attitudes towards bullying, their perceptions about bullying, and their experiences with it. They also surveyed the teachers about disciplinary action and which students they thought created problems in the school. Then, they surveyed everyone again after the intervention and at the end of the school year.

They found that the “treatment” students seem to have changed the other students’ perceptions of the descriptive norms at the school. After the “intervention,” students who were connected in some way to the treatment students were more likely to report that bullying was bad, that they didn’t participate in it, and that they intervened on behalf of others.

Let’s put this in more concrete terms. This would be like asking the blonde clique leader in Mean Girls, the head of the football team, and the popular student body president to model anti-bullying behavior. These well-connected and popular students are what we might call “influencers” (i.e. people that others look up to and want to be like, or, in the case of social media marketing, the people whose stuff you want to buy). The idea is that if they model anti-bullying behavior, their actions will signal to the rest of the student body that the “what we do here” norm at the school is NOT bullying. Of course, it’s a whole different story if the clique leader and the captain of the football team are the main bullies to begin with …

This got me thinking about how “salient referents” could be leveraged in other kinds of organizations to change descriptive norms (or perceptions of descriptive norms) in order to positively affect organizational culture. For example, let’s say a company has a “climate problem,” which is in effect a kind of grown-up bullying problem. Could that company identify “salient referents” within the organization and enlist them to model behavior that demonstrates descriptive norms around being equitable and inclusive, thereby fixing the “climate problem”? Of course, there is the issue of identifying the “salient referents” and deciding the most effective way for them to exert social influence through modeling good behavior. But it definitely seems like something worth exploring.

Testifying While Black: (Mis)transcription of African American English in the Court Room

[EDIT: The article is now available online, open access(!) via Project MUSE:]

Following four years of work [EDIT: I realized it’s actually been 6 years!], a project I did with several colleagues has just been accepted for publication in the journal Language. Together with Taylor Jones (U Penn Linguistics), Ryan Hancock (Philadelphia Lawyers for Social Equity, WWD), and Robin Clark (U Penn Linguistics), I conducted a study to test Philadelphia court reporters’ transcription accuracy and comprehension of African American English (see Taylor Jones’ explainer on this language variety).

In order to work as an official court reporter in Philadelphia, court reporters have to be certified by the court system at 95% accuracy. In other words, they have to correctly transcribe 95% of all the words they hear in a test, at specific speeds (words per minute) that vary by the type of speech (there are different required speeds for testimony versus question and answer, etc.). However, based on informal interviews with court reporters, we determined that their certification tests and training are based on Mainstream American English as spoken by lawyers, judges, and broadcasters. We know from previous research that there are issues of cross-dialect comprehension between speakers of AAE and MAE (some examples are outlined here), so we decided to test court reporters’ ability to accurately transcribe AAE as spoken by native speakers from the Philadelphia area.

We recruited 9 native speakers of AAE (5 men and 4 women of varying ages) from West Philadelphia, North Philadelphia, Harlem, and Jersey City and recorded them reading 83 sentences in AAE. The sentences were drawn from actually attested speech (we did not invent them) and included syntactic features of AAE both alone and in combination. We then randomized the voices and sentences and played the audio for 27 court reporters, one third of the official court reporting pool in Philadelphia. Reporters were given a 220 Hz warning tone, followed by the sentence repeated twice and then ten seconds of silence. The sentences were played at 70-80 decibels at 10 feet (more than loud enough for the court reporters to hear) and at speeds much slower than their certification tests. We asked the court reporters to transcribe each sentence and then paraphrase it in “classroom” English. While we were aware that paraphrasing is not part of their normal job, we were curious whether miscomprehension contributed to mistranscription.

The results showed that the court reporters in our sample could not transcribe spoken AAE at their required level of accuracy. When measured at the sentence level (was the sentence right or wrong?), our sample transcribed 59.5% of sentences accurately on average. When measured at the word level, our court reporters transcribed 82.9% of the words accurately on average. In 31% of transcriptions, errors changed the who, what, when, or where of the sentence. Accuracy was not related to race, age, where reporters got their training, or the number of years on the job. Additionally, court reporters paraphrased the sentences correctly only 33% of the time. Surprisingly, reporters’ individual paraphrase and transcription accuracy were not systematically related.
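To illustrate the difference between the two scoring levels (this is a hypothetical sketch with made-up sentences, not the scoring code we actually used), sentence-level accuracy demands an exact match, while word-level accuracy gives credit for every reference word the transcript aligns with:

```python
from difflib import SequenceMatcher

def word_accuracy(reference, transcript):
    """Fraction of reference words correctly transcribed,
    based on a simple alignment of the two word sequences."""
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    matched = sum(block.size
                  for block in SequenceMatcher(a=ref, b=hyp).get_matching_blocks())
    return matched / len(ref)

def sentence_accuracy(pairs):
    """Fraction of (reference, transcript) pairs that match exactly."""
    exact = sum(r.lower().split() == t.lower().split() for r, t in pairs)
    return exact / len(pairs)

# Hypothetical example: habitual "be" mistranscribed as past-tense "was".
pairs = [
    ("he be working every day", "he was working every day"),
    ("she been married", "she been married"),
]
print(sentence_accuracy(pairs))   # one of the two sentences is exactly right
print(word_accuracy(*pairs[0]))   # four of five reference words are right
```

Under this kind of scoring, a single changed word sinks the whole sentence at the sentence level but costs only one word at the word level, which is why the two percentages can diverge so much.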

From post-test conversations with the study participants, it was clear that these court reporters wanted the tools to perform better and did not hold explicit malice toward the speakers or the individuals they transcribe in court. They did, however, express the opinion that speakers like the ones we played for them were speaking incorrectly and that the difficulties they had with transcription were the fault of the speakers. In our paper, we contend instead that court reporters are not given appropriate training for the other varieties of English they are likely to encounter in their day-to-day work. Given that the court reporter is responsible for the official court record, and that the official record has consequences in terms of cross-examination, appeals, etc., it would behoove the court system to ensure that certification standards are related to the task at hand.

You can read more here on the study, its implications, and what we think comes next.