How People Tawk Affects How Well You Listen


People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of a speaker’s dialect can fundamentally affect how you process and remember what that person says.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three different female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about it. If they had processed the spoken word deeply, it should have brought related words to mind and sped up their judgment about the related written word. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply by the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by one of the speakers of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic task that consistently generates false memories. People tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened and understood the words irrespective of speaker. Yet listeners falsely recalled more lures for the word sets read by the NYC speaker than by either the standard American or British speakers.


Figure from Sumner & Kataoka (2013) showing more false recalls from lists spoken with a NYC dialect than those spoken in standard American or British dialects.

The authors offer an explanation for the two findings: on some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown to both decrease semantic priming and increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that people with a standard American dialect, as well as those with a NYC dialect, later had better memory for –er words that they originally heard in a standard American dialect than for words heard in a NYC dialect. These results also fit with the idea that speakers of standard American English (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review of a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.

_____

Photo credit: Melvin Gaal, used via Creative Commons license

Sumner M, & Kataoka R (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner M, Kim S K, King E, & McGowan K B (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

Sandy, Science, and a New Campaign

As Tuesday’s election approaches and news coverage of super storm Sandy recedes, I’m struck by the absurdity of our current situation. While cities on the East Coast are still pumping water out of tunnels and salvaging belongings from ruined homes, we get back to talking about the economy. That and reproductive rights.

Yet we are surrounded by evidence of climate change, even beyond our recent run-ins with Sandy and Irene. We have seen increases in the frequency and severity of storms, droughts, and wildfires. Already, drought has affected food prices here in the U.S. and caused widespread famine in Africa. Massive ice shelves in Antarctica are melting and crumbling into the sea, demonstrably raising sea levels worldwide. And this past year brought us record-breaking temperatures, one after another, as we watched a freakishly warm winter give way to a sweltering summer.

Despite the mountain of scientific evidence that climate change is real and ample demonstrations of the devastation it can wreak, the topic has not been an issue in this year’s presidential election. It wasn’t discussed in any of the three presidential debates. This is not an oversight on the part of the candidates and the moderators. Americans are simply not worried about climate change. In a Gallup poll from September, only 2% of respondents ranked environmental issues as the most important problem facing our country today. Most ranked unemployment and our lagging economy as the nation’s greatest woe.

While people are certainly suffering in today’s economy, the dismissal of climate change is terribly shortsighted. Climate change is an economic threat. It has already raised (and will probably continue to raise) the cost of food. We have also faced steep costs as a result of extreme weather. New York State’s economy alone lost as much as 18 billion dollars due to Sandy, and fortifying New York against future flooding could cost upwards of 20 billion dollars. Those figures don’t include the damage in other states, and they don’t include the expense to homeowners who are rebuilding or who will try to insure their homes in the wake of this storm. And of course they can’t include the personal devastation and loss of life.

So why aren’t we talking more about climate change? And why aren’t we doing more, both in our own lives and in our voting choices, to try to stem the tide?

It seems to me that we are witnessing a human psychology experiment on the grandest scale. How can we ignore (and in fact perpetuate) an impending disaster of such magnitude? In fact, humans have quite a bit of practice at ignoring future doom. After all, we live out our lives with the certainty that we will die and we function in large part by not thinking about it. Death? What death? Climate change? What change?

I wrote before about how our disappearing glaciers may be suffering from a PR problem. They need a spokesman or a mascot – something that might tug at our heartstrings and make people care. Now I think we need a similar approach for climate change itself. The climatologists have done their job and demonstrated that climate change is real. But our first and greatest obstacle in fixing it may lie within ourselves or, more specifically, our skulls.

I think it’s time to call in the psychologists, the marketing specialists and the public relations gurus. Through years of research, we already know the many ways that human beings are illogical and we know how to persuade and manipulate them. Beer has bikini-clad women. Cigarettes have cowboys. Viagra and Cialis have politicians and quarterbacks. Why can’t we do the same for our planet? It’s time we held focus groups and raised ad dollars. It’s time for a climate campaign.

Popular opinion has always driven political will. We need to use every resource we have to raise awareness and change minds. So let’s bring in the psychologists. Let’s bring in the bikini-clad women if need be. (After all, it’s going to be hot!) But before we can influence others, we have to begin by changing ourselves. By changing our lifestyles. By changing our priorities. By changing our minds and then voting our minds. And there’s no better time to start than this Tuesday.

I’ll see you at the ballot box!

The Demise of the Expert

These days, I find myself turning off the news while thinking the same questions. When did we stop valuing knowledge and expertise? When did impressive academic credentials become a political liability? When did the medical advice of celebrities like Jenny McCarthy and Ricki Lake become more trusted than that of government safety panels, scientists, and physicians? When did running a small business or being a soccer mom qualify a person to hold the office of president and make economic and foreign policy decisions?

As Rick Perry, the Republican front-runner for president, recently told us, “You don’t have to have a PhD in economics from Harvard to really understand how to get America back working again.” Really? Why not? It certainly seems to me that some formal training would help. And yet many in Congress pooh-poohed economists’ warnings about the importance of raising the debt ceiling and have insisted on decreasing regulations despite the evidence that this won’t help to improve our economy (and will further harm our environment). Meanwhile, man-made climate change is already affecting our planet. Natural disasters such as droughts and hurricanes are on the rise, just as scientists predicted. But we were slow to accept their warnings and have been slow to enact any meaningful policies to stem the course of this calamity.

The devaluation of expertise is puzzling enough, but perhaps more puzzling still is the timing. Never before in human history have we witnessed the fruits of expertise as we do today. Thanks to scientists and engineers, we rely on cell phones that wirelessly connect us to the very person we want to talk to at the moment we want to talk. In turn, these cell phones operate through satellites that nameless experts have set spinning in precise orbits around Earth. We keep in touch with friends, do our banking and bill-paying, and make major purchases using software written in codes we don’t understand and transmitted over a network whose very essence we struggle to comprehend. (I mean, what exactly is the Internet?) Meanwhile, physicians use lasers to excise tumors and correct poor vision. They replace damaged livers and hearts. They fit amputees with hi-tech artificial limbs, some with feet that flex and hands that grasp.

Obviously none of this would have been possible without experts. You need more than high school math and a casual grasp of physics or anatomy to develop these complex systems, tools, and techniques. So why on Earth would we discount experts now, when we have more proof than ever of their worth?

My only guess is education. Our national public education system is in shambles. American children rank 25th globally in math and 21st in science. At least two-thirds of American students cannot read at grade level. But there is something our students score high on. As the documentary Waiting for Superman highlighted, American students rank number one in confidence. This may stem from the can-do culture of the United States or from the success our nation has enjoyed over the last 65 years. But it makes for a dangerous combination. We are churning out students with inadequate knowledge and skills, but who believe they can intuit and accomplish anything. And if you believe that, then why not believe you know better than the experts?

I think the only remedy for this situation is better education, but not for the reasons you might think. In my opinion, the more a person learns about any given academic subject, the more realistic and targeted his or her self-confidence becomes.

The analogy that comes to mind is of a blind man trying to climb a tree. When he’s still at the base of the tree, all he can feel is the trunk. From there, he has little sense of the size or shape of the rest of the tree. But suppose he climbs up on a limb and then out to even smaller branches. He still won’t know the shape of the rest of the tree, but from his perch on one branch, he can feel the extensive foliage. He’ll know that the tree must be large and he can presume that the other branches are equally long and intricate. He can appreciate how very much there must be of the tree that’s beyond his reach.

I think the same principle applies to knowledge. The more we know, the more we can appreciate how much else there is out there to know – things about which we haven’t got a clue. As we climb out on our tiny branches, acquiring knowledge, we also gain an awareness of our profound ignorance. Unfortunately, many of America’s children (and by now, adults too) aren’t climbing the tree at all; they’re still lounging at the base, enjoying a picnic in the shade.

Should it surprise us, then, to learn that they don’t see the value in expertise? That they can support political candidates who disparage the advice of specialists and depict academic achievement as a form of elitism? Why shouldn’t they trust the advice of a neighbor, a talk show host, or an actor over the warnings of the ‘educated elite’?

No single person can know everything there is to know in today’s world, so the sum of human knowledge must be dispersed among millions of specialized experts. Human progress relies on these people, dangling from their obscure little branches, to help guide our technology, our public policy, our research and governance. Our world has no shortage of experts. Now if only people would start listening to them.
