How People Tawk Affects How Well You Listen


People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of people’s dialects can fundamentally affect how you process and remember what they say.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about it. If they had processed the spoken word deeply, it should have brought related words to mind and allowed them to respond faster to a related written word. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply by the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.
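For readers who like to see the arithmetic, semantic priming is typically quantified as the speedup in responding to a target word after a related prime versus an unrelated one. Here is a minimal sketch of that calculation; all of the reaction times below are made-up illustrations, not data from the study:

```python
# Sketch of how a semantic priming effect is computed:
# faster responses to a target (e.g., "thin") after a related
# spoken prime ("slender") than after an unrelated one.
# All numbers are hypothetical, for illustration only.
from statistics import mean

rts_related = [520, 535, 510, 545]    # reaction times (ms), related prime
rts_unrelated = [580, 570, 595, 560]  # reaction times (ms), unrelated prime

# A positive difference means the related prime sped up recognition,
# i.e., the spoken word was processed deeply enough to activate
# related words in memory.
priming_effect = mean(rts_unrelated) - mean(rts_related)
print(f"Priming effect: {priming_effect:.2f} ms")  # → 48.75 ms
```

In the study's terms, listeners showed a priming effect like this for standard American and British speech, but not for NYC-dialect speech.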

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by a speaker of one of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This classic task (the Deese-Roediger-McDermott paradigm) reliably generates false memories: people tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened to and understood the words irrespective of speaker. Yet listeners falsely recalled more lures from the word sets read by the NYC speaker than from those read by either the standard American or British speakers.
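As a concrete illustration, here is how recall in such a task might be scored. The word list and the recall responses below are hypothetical stand-ins, not the study's actual stimuli or data:

```python
# Sketch of scoring in a false-memory (DRM-style) recall task.
# The study list omits the related lure word "sleep"; falsely
# "remembering" the lure is the key measure.
STUDY_LIST = {"rest", "bed", "dream", "awake", "tired"}  # lure excluded
LURE = "sleep"

def score_recall(recalled, study_list=STUDY_LIST, lure=LURE):
    """Return (number of true recalls, whether the lure was falsely recalled)."""
    true_hits = len(set(recalled) & study_list)
    false_lure = lure in recalled
    return true_hits, false_lure

# Hypothetical recall protocols from one listener per dialect condition.
recalls = {
    "standard American": ["bed", "rest", "tired"],
    "British": ["bed", "dream", "rest"],
    "NYC": ["bed", "rest", "sleep"],  # lure falsely recalled
}

for dialect, words in recalls.items():
    hits, lured = score_recall(words)
    print(f"{dialect}: {hits} true recalls, lure recalled: {lured}")
```

In the actual experiment, true recalls were roughly equal across dialects; only the false recall of lures differed by speaker.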


Figure from Sumner & Kataoka (2013) showing more false recall of lures from lists spoken in a NYC dialect than from lists spoken in standard American or British dialects.

The authors offer a single explanation for both findings: on some level, the listeners are paying less attention to the words spoken with a NYC dialect. Indeed, decreased attention has been shown both to decrease semantic priming and to increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that people with a standard American dialect, as well as those with a NYC dialect, had better memory later for –er words they originally heard in a standard American dialect than for words heard in a NYC dialect. These results also fit with the idea that speakers of standard American (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review of a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.

_____

Photo credit: Melvin Gaal, used via Creative Commons license

Sumner M, & Kataoka R (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner M, Kim S K, King E, & McGowan K B (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

A New America of Mutts?

I recently wrote about my biracial daughter and public assumptions about inheritance for the blog DoubleXScience. Nearly the same day, columnist David Brooks’ op-ed piece, “A Nation of Mutts,” appeared in The New York Times. As you might imagine, I read it with interest.

In his column, Brooks writes about how Americans with long European roots are being outnumbered by immigrants from elsewhere in the world. Add to that racial intermarriage and mixed-race offspring, and you’re left with what Brooks calls the coming New America. What will this New America look like? Brooks is happy to venture guesses, predicting how the complex forces of socioeconomics, education, ethnicity, and heritage may play out in coming generations. Among these predictions: that America will become “a nation of mutts, a nation with hundreds of fluid ethnicities from around the world, intermarrying and intermingling.” The piece sparked an outcry from readers and an online conversation via social media, much of it over his use of the term mutt. While that was obviously an unwise and insensitive word choice, I think it was the lesser of the problems with his piece.

According to New York Times public editor Margaret Sullivan, columnists are supposed to stir things up. But in “A Nation of Mutts,” Brooks merely takes a centuries-old argument, injects it with Botox, squeezes it into skinny jeans, and calls it something new.

He begins by telling us that “American society has been transformed” as increasing numbers of immigrants have come to the U.S. in recent decades. He adds that, “up until now, America was primarily an outpost of European civilization” with immigrants who came from Northern, Western, Southern, or Central Europe (depending on the era) but all “with European ideas and European heritage.” That is now changing. Brooks tells us that European-American five-year-olds are already a minority. We have thirty years, tops, before Caucasians will be the minority in America overall.

What strikes me is Brooks’ simplistic picture of racial and cultural differences. He portrays America’s past immigrants from Europe as a monolithic, homogeneous bunch (against which he will compare the diverse immigrants of today) when of course this is a straw man. Ask anyone at a European soccer championship match or at the Eurozone bailout negotiations whether Europeans all have the same “European ideas and European heritage.” On second thought, probably better you don’t.

Americans haven’t historically thought of all Europeans as similar or even equal. Take the 19th-century Nativists, or Know-Nothings, who thought that immigrants were ruining America. What exotic nation supplied these immigrants? The Tropics of Ireland.

Our country’s long history of interracial children aside, Caucasians from all over Europe have been intermarrying in America for centuries. I have American ancestors dating back to colonial times and am part German and part British with at least one Frenchman and one Scot thrown in for good measure. Why does David Brooks consider my daughter a mutt yet doesn’t consider me one? Because my ancestors all had more or less the same skin color while my husband is several shades darker than me.

So what is David Brooks really recounting when he writes about the coming nation of mutts? What’s so different about the immigrants of today? Mostly superficial details of appearance. Brooks’ New America is based on the preponderance of pigments in skin, the shape and slope of eyes, the texture of hair. His seemingly profound comment is about the spread of a handful of genes that create innocuous (but visible) proteins like melanin. Big frickin’ deal.

Ultimately, David Brooks is guilty of recycling the same tired old tune: immigrants are changing America and who knows what it might become when they’re done with it? Of course immigrants change our national demographics and  cultural melange. Each generation has wrestled with this self-evident fact for different reasons and in different ways. But if there is anything constant about America’s history, it is the presence of immigration and continuous change. Which means that Brooks was wrong when he said that a New America is coming. It has been here all along.

___

Photo credit: Steve Baker on Flickr

Sandy, Science, and a New Campaign

As Tuesday’s election approaches and news coverage of Superstorm Sandy recedes, I’m struck by the absurdity of our current situation. While cities on the East Coast are still pumping water out of tunnels and salvaging belongings from ruined homes, we get back to talking about the economy. That and reproductive rights.

Yet we are surrounded by evidence of climate change, even beyond our recent run-ins with Sandy and Irene. We have seen increases in the frequency and severity of storms, droughts, and wildfires. Already, drought has affected food prices here in the U.S. and caused widespread famine in Africa. Massive ice shelves in Antarctica are melting and crumbling into the sea, speeding the slide of the land-based ice that raises sea levels worldwide. And this past year brought us record-breaking temperatures, one after another, as we watched a freakishly warm winter give way to a sweltering summer.

Despite the mountain of scientific evidence that climate change is real and ample demonstrations of the devastation it can wreak, the topic has not been an issue in this year’s presidential election. It wasn’t discussed in any of the three presidential debates. This is not an oversight on the part of the candidates and the moderators. Americans are simply not worried about climate change. In a Gallup poll from September, only 2% of respondents ranked environmental issues as the most important problem facing our country today. Most ranked unemployment and our lagging economy as the nation’s greatest woe.

While people are certainly suffering in today’s economy, the dismissal of climate change is terribly shortsighted. Climate change is an economic threat. It has already raised (and will probably continue to raise) the cost of food. We have also faced steep costs as a result of extreme weather. New York State’s economy alone lost as much as 18 billion dollars due to Sandy, and fortifying New York against future flooding could cost upwards of 20 billion dollars. Those figures don’t include the damage in other states, and they don’t include the expense to homeowners who are rebuilding or who will try to insure their homes in the wake of this storm. And of course they can’t capture the personal devastation and loss of life.

So why aren’t we talking more about climate change? And why aren’t we doing more, both in our own lives and in our voting choices, to try to stem the tide?

It seems to me that we are witnessing a human psychology experiment on the grandest scale. How can we ignore (and in fact perpetuate) an impending disaster of such magnitude? In fact, humans have quite a bit of practice at ignoring future doom. After all, we live out our lives with the certainty that we will die and we function in large part by not thinking about it. Death? What death? Climate change? What change?

I wrote before about how our disappearing glaciers may be suffering from a PR problem. They need a spokesman or a mascot – something that might tug at our heartstrings and make people care. Now I think we need a similar approach for climate change itself. The climatologists have done their job and demonstrated that climate change is real. But our first and greatest obstacle in fixing it may lie within ourselves or, more specifically, our skulls.

I think it’s time to call in the psychologists, the marketing specialists and the public relations gurus. Through years of research, we already know the many ways that human beings are illogical and we know how to persuade and manipulate them. Beer has bikini-clad women. Cigarettes have cowboys. Viagra and Cialis have politicians and quarterbacks. Why can’t we do the same for our planet? It’s time we held focus groups and raised ad dollars. It’s time for a climate campaign.

Popular opinion has always driven political will. We need to use every resource we have to raise awareness and change minds. So let’s bring in the psychologists. Let’s bring in the bikini-clad women if need be. (After all, it’s going to be hot!) But before we can influence others, we have to begin by changing ourselves. By changing our lifestyles. By changing our priorities. By changing our minds and then voting our minds. And there’s no better time to start than this Tuesday.

I’ll see you at the ballot box!

The Demise of the Expert

These days, I find myself turning off the news while thinking the same question. When did we stop valuing knowledge and expertise? When did impressive academic credentials become a political liability? When did the medical advice of celebrities like Jenny McCarthy and Ricki Lake become more trusted than that of government safety panels, scientists, and physicians? When did running a small business or being a soccer mom qualify a person to hold the office of president and make economic and foreign policy decisions?

As Rick Perry, the Republican front-runner for president, recently told us, “You don’t have to have a PhD in economics from Harvard to really understand how to get America back working again.” Really? Why not? It certainly seems to me that some formal training would help. And yet many in Congress pooh-poohed economists’ warnings about the importance of raising the debt ceiling and have insisted on decreasing regulations despite the evidence that this won’t help to improve our economy (and will further harm our environment). Meanwhile, man-made climate change is already affecting our planet. Natural disasters such as droughts and hurricanes are on the rise, just as scientists predicted. But we were slow to accept their warnings and have been slow to enact any meaningful policies to stem the course of this calamity.

The devaluation of expertise is puzzling enough, but perhaps more puzzling still is the timing. Never before in human history have we witnessed the fruits of expertise as we do today. Thanks to scientists and engineers, we rely on cell phones that wirelessly connect us to the very person we want to talk to at the moment we want to talk. In turn, these cell phones operate through satellites that nameless experts have set spinning in precise orbits around Earth. We keep in touch with friends, do our banking and bill-paying, and make major purchases using software written in codes we don’t understand and transmitted over a network whose very essence we struggle to comprehend. (I mean, what exactly is the Internet?) Meanwhile, physicians use lasers to excise tumors and correct poor vision. They replace damaged livers and hearts. They fit amputees with high-tech artificial limbs, some with feet that flex and hands that grasp.

Obviously none of this would have been possible without experts. You need more than high school math and a casual grasp of physics or anatomy to develop these complex systems, tools, and techniques. So why on Earth would we discount experts now, when we have more proof than ever of their worth?

My only guess is education. Our national public education system is in shambles. American children rank 25th globally in math and 21st in science. At least two-thirds of American students cannot read at grade level. But there is something our students score high on. As the documentary Waiting for Superman highlighted, American students rank number one in confidence. This may stem from the can-do culture of the United States or from the success our nation has enjoyed over the last 65 years. But it makes for a dangerous combination. We are churning out students with inadequate knowledge and skills who nevertheless believe they can intuit and accomplish anything. And if you believe that, then why not believe you know better than the experts?

I think the only remedy for this situation is better education, but not for the reasons you might think. In my opinion, the more a person learns about any given academic subject, the more realistic and targeted his or her self-confidence becomes.

The analogy that comes to mind is of a blind man trying to climb a tree. When he’s still at the base of the tree, all he can feel is the trunk. From there, he has little sense of the size or shape of the rest of the tree.  But suppose he climbs up on a limb and then out to even smaller branches. He still won’t know the shape of the rest of the tree, but from his perch on one branch, he can feel the extensive foliage. He’ll know that the tree must be large and he can presume that the other branches are equally long and intricate. He can appreciate how very much there must be of the tree that’s beyond his reach.

I think the same principle applies to knowledge. The more we know, the more we can appreciate how much else there is out there to know – things about which we haven’t got a clue. As we climb out on our tiny branches, acquiring knowledge, we also gain an awareness of our profound ignorance. Unfortunately, many of America’s children (and by now, adults too) aren’t climbing the tree at all; they’re still lounging at the base, enjoying a picnic in the shade.

Should it surprise us, then, to learn that they don’t see the value in expertise? That they can support political candidates who disparage the advice of specialists and depict academic achievement as a form of elitism? Why shouldn’t they trust the advice of a neighbor, a talk show host, or an actor over the warnings of the ‘educated elite’?

No single person can know everything there is to know in today’s world, so the sum of human knowledge must be dispersed among millions of specialized experts. Human progress relies on these people, dangling from their obscure little branches, to help guide our technology, our public policy, our research and governance. Our world has no shortage of experts. Now if only people would start listening to them.

Unsheltered


Over the past several weeks, I have found myself riveted by the protests in Egypt and across the Middle East, just as I was by the Iranian protests of 2009. I’ve watched the footage of chanting citizens and marauding thugs and remained glued to the television and the Internet for details about tear gas, imprisonments, and casualties. I would like to claim that I’m always this engaged in world events, but that’s not true. Political protest and revolution are like crack to me. I crave updates and, without them, I experience withdrawal.

But any old update won’t do. I read the written reports, but they leave me unsatisfied. I need video. I have to see the faces and hear the voices, and that got me wondering why.

I’m sure that one of the reasons has to do with imagination. When I see the settings, the throngs of people, the barricades and overturned cars, I can better imagine what it might be like to roam those streets and risk my life for the sake of a nation or a way of life. It makes me wonder: in the face of danger and oppression, would I dare to step out of my house and join the cause? Watching the footage of the protests has helped me realize that it’s a question I can’t answer. I don’t know what I would risk because I’ve never experienced oppression. I’ve been sheltered and lucky. I am wholly unfamiliar with this type of human drama. So on one level, the video coverage provides me with some vicarious taste of a different way of thinking and being. It unshelters me a little, even for a moment, even from the safety of my sofa, and for that I am grateful.

The other reason I watch is more basic, even primal. On some level, what I take from the video has less to do with governments or uprisings than with faces and emotions. Rarely is such a range of emotions expressed in such a short period of time. The faces I saw in the footage coming out of Egypt expressed desperation, anger, and fear, but also hope and unadulterated exultation. As I watched coverage of the celebrations after Egypt’s president stepped down, I was struck by how rarely we see expressions of strong emotion in general, and unbridled joy in particular, during the course of our everyday lives.

I’ll ask you the same question I asked myself. How often do you see people truly joyous? Not just laughing at a joke or having a good time with friends, but reveling in life-changing happiness? Almost never. In the U.S., most of us have what we need, if not always what we want. We have our freedom and our rights in a democracy. We usually have access to shelter, medical services, and plentiful (maybe too plentiful) food. Even if we’re sometimes unhappy with our government, we know we had a voice in its election and we know we only have to wait a few years to usher in a new one. The stakes are so much lower, and so is the potential for our experience of joy.

Social psychologists have theories about emotional contagion, or the idea that when we see a person experiencing a strong emotion we can ‘catch’ that emotion ourselves. I’m no social psychologist and I would rather not think of emotions as analogous to disease, but I will say that some of Egypt’s faces and emotions really gripped me over the last few weeks. In one cable news video, three young girls described their plans to become a doctor, a lawyer, and an engineer. Online footage by the New York Times captured an elderly advocate for women’s rights who spoke of freedom with breathless excitement amid the throngs in Tahrir Square. When I saw these and other smiling faces, I ‘caught’ some serious joy. And I was reminded of how damn remarkable and good it is to simply be alive.

____

Photo credit: Darla Hueske
