The Changing Face of Science: Part Two

In my last post, I wrote about how scientists are beginning to engage with the public, particularly via social media and blogs. Here, I will use my recent experiences at the AAAS conference to illustrate how social media are changing the business of science itself.

The AAAS conference was the first science meeting I’ve attended as an active tweeter. The experience opened my eyes. Throughout the event, scientists and science writers were tweeting about interesting talks and points made in various sessions. Essentially, this gave me eyes and ears throughout the conference. For instance, during a slow moment in the session I was attending, I checked the #AAAS hashtag on Twitter and saw several intriguing tweets from people in another session:

[Screenshots of tweets from a concurrent #AAAS session]

These tweets drew my attention to a talk that I would otherwise have missed completely. I could then decide if I wanted to switch to the other session or learn more about the speaker and her work later on. Even if I did neither, I’d learned a few interesting facts with minimal effort.

Twitter can be a very useful tool for scientists. Aside from its usefulness at conferences, it’s a great way to learn about new and exciting papers in your field. Those who aren’t on Twitter might be surprised to hear that it can be a source for academic papers rather than celebrity gossip. Ultimately, the information you glean from Twitter depends entirely on the people you choose to follow. Scientists often follow other scientists in their own or related fields. Thus, they’re more likely to come upon a great review on oligodendrocytes than news on Justin Bieber’s latest antics. Scientists and science writers form their own interconnected Twitter networks through which they share the type of content that interests them.

Katie Mack, an astrophysicist at the University of Melbourne, has logged some 32,000 tweets as @AstroKatie and has about 7,300 followers on Twitter to date. She recently explained on the blog Real Scientists why she joined Twitter in the first place:

“Twitter started out as an almost purely professional thing for me — I used it to keep up with what other physicists and astronomers were talking about, what people were saying at conferences, that kind of thing. It’s great for networking as well, and just kind of seeing what everyone is up to, in your own field and in other areas of science. Eventually I realized it could also be a great tool for outreach and for sharing my love of science with the world.”

Social media and the Internet more broadly have also made new avenues of scientific research possible. They’ve spurred citizen science projects and collaborative online databases like the International Nucleotide Sequence Database Collaboration. Yet social media and online content have also affected research on a smaller scale as individual scientists discover the science diamonds in the rough. For example, Amina Khan described in a recent Los Angeles Times article how a group of scientists mined online content to compare the strategies different animals use to swim. She writes:

“They culled 112 clips from sites like YouTube and Vimeo depicting 59 different species of flying and swimming animals in action, including moths, bats, birds and even humpback whales. They wanted to see where exactly the animals’ wings (or fins) bent most, and exactly how much they bent.”

Another wonderful example of the influence of YouTube on science came to my attention at the AAAS meeting when I attended a session on rhythmic entrainment in non-human animals. Rhythmic entrainment is the ability to match your movements to a regular beat, such as when you tap your foot to the rhythm of a song. Only five years ago, it was widely believed that the ability to match a beat was unique to humans . . . that is, until Aniruddh Patel of Tufts University received an email from his friend.

As Dr. Patel described in the AAAS session, the friend wrote to share a link to a viral YouTube video of a cockatoo named Snowball getting down to the Backstreet Boys. What did Patel make of it? Although the bird certainly seemed to be keeping the beat, it was impossible to know what cues the animal was receiving off-screen. Instead of shrugging off the video or declaring it a fraud, Patel contacted the woman who posted it. She agreed to collaborate with Patel and let him test Snowball under carefully controlled conditions. Remarkably, Snowball was still able to dance to various beats. Patel and his colleagues published their results in 2009, upending the field of beat perception.

That finding sparked a string of new experiments with various species and an entertaining lineup of speakers and animal videos at the AAAS session. Among them, I had the pleasure of watching a sea lion nodding along to “Boogie Wonderland” and a bonobo pounding on a drum.

In essence, the Internet and social media are bringing new opportunities to the doorsteps of scientists. As Dr. Patel’s experience shows, it’s wise to open the door and invite them in. Like everything else in modern society, science does not lie beyond the reach of social media. And thank goodness for that.

_____

Patel, A. D., Iversen, J. R., Bregman, M. R., & Schulz, I. (2009). Experimental evidence for synchronization to a musical beat in a nonhuman animal. Current Biology, 19(10), 827–830. DOI: 10.1016/j.cub.2009.03.038

The Changing Face of Science: Part One


While waiting for the L train to attend the American Association for the Advancement of Science (AAAS) meeting this week, I came upon Nicholas Kristof’s latest New York Times op-ed: “Professors, We Need You!” In his piece, Kristof portrays professors as out-of-touch intellectuals who study esoteric fields and hide their findings in impenetrable jargon. He also says that academia crushes rebels who communicate their science to the public. I admire Mr. Kristof for his efforts to bring awareness to injustices around the world, and I agree that academic papers are often painful – if not impossible – to read. But my experience at the AAAS conference this week highlights how wrong he is, both in his depiction of academics and in his account of the driving forces within academia itself.

AAAS is the organization behind Science magazine, ScienceNOW news, Science Careers, and the AAAS fellowship programs. Among the goals in its mission statement: to enhance communications among scientists, engineers, and the public; to provide a voice for science on societal issues; and to increase public engagement with science and technology. So yes, you would expect their conference to focus on science communication. Still, the social media sessions (Engaging with Social Media and Getting Started in Social Media) were full of scientists of all ages. Another well-attended session taught listeners how to use sites and services like Google Scholar, Mendeley, ORCID, and ResearchGate to improve the visibility of their work online.

Throughout the conference, scientists were live-tweeting interesting facts and commentary from the sessions they attended using the #AAASmtg hashtag. I saw a particularly wonderful example of this at a Saturday morning symposium called Building Babies. All five of the speakers at the symposium have accounts on Twitter and four of them were live-tweeting during each other’s presentations. Three of them (Kate Clancy, Julienne Rutherford, and Katie Hinde) also have popular blogs: Context and Variation, BANDIT, and Mammals Suck, respectively. After the symposium, Dr. Hinde compiled the symposium-related tweets on Storify.

I won’t claim that this panel of speakers is representative of scientists as a whole, but I do believe they are representative of the direction in which scientists are moving. And contrary to Mr. Kristof’s claims, I would argue that their public visibility and embrace of online communication have probably helped rather than hindered their careers. Increased visibility can lead to more invitations to give talks, more coverage from the science press, and added connections outside of one’s narrow field of expertise. The first two of these can fill out a CV and attract positive public attention to a department, both pluses for a young academic who’s up for tenure. Moreover, while hiring and tenure decisions are made within departments, funding comes from organizations and institutions that typically value plain-speaking scientists who do research with societal relevance. For these reasons (and, I’m sure, others), it’s becoming obvious that scientists can benefit from clarity, accessibility, and visibility. In turn, many scientists are learning the necessary skills and making inroads into communicating with the public.

Of course, public visibility offers both promise and peril for scientists. As climate scientist and blogger Kim Cobb explained in her wonderful AAAS talk, scientists worry about appearing biased or unprofessional when they venture into the public conversation on social media. Science writer and former researcher Bethany Brookshire mentioned another potential peril: the fact that thoughtless or offensive off-the-cuff comments made on social media can come back to haunt scientists in their professional lives. It is also certainly true in academia (as it is in most spheres) that people are disdainful of peers who seem arrogant or overly self-promotional.

In short, scientists hoping to reach the public have their work cut out for them. They must learn to talk about science in clear, comprehensible terms for non-scientists. They must be engaging yet appropriate in public forums, balancing their public visibility with the hard-won research results needed to back up the attention they receive. They have good reason to tread carefully as they wade into the rapid waters of the Twitterverse, the blogosphere, and other wide-open forums. Yet in they are wading all the same.

There have already been some great responses to Kristof’s call for professors. Political scientist Erik Voeten argued that many academics already engage the public in a variety of ways. Political scientist Corey Robin pointed out that academics’ engagement with the public is often stymied by a lack of time and funding. Academics are rarely paid for the time they spend communicating with the public and may need to concentrate their efforts on academic publications and grant applications because of the troubling job market and funding situation.

Still, many academics are ready to take the plunge and engage with the public. What they need is more training and guidance. Graduate programs should provide better training in writing and communicating science. Universities and scientific societies should offer mentorship and seminars for scientists who want to improve the visibility of their research via the web. We need many more panels and discussions like the ones that took place at the AAAS meeting this week.

Oh, and while we’re at it: fewer misinformed, stereotypical descriptions of stodgy professors in ivory towers would be nice.

____

Photo credit: Ian Britton, used via Creative Commons license

How People Tawk Affects How Well You Listen


People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to worry: it now appears that the prestige of a speaker’s dialect can fundamentally affect how listeners process and remember what that speaker says.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about it. If they had processed the spoken word deeply, it should have brought related words to mind, allowing them to respond faster to a related written word. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply by the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.
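To make the logic concrete, here is a minimal sketch of how a semantic priming effect of this kind is typically scored. The reaction times below are invented for illustration; they are not data from the study:

```python
# Hypothetical illustration of scoring a semantic priming effect.
# All numbers are invented; they are not from Sumner & Kataoka (2013).

def mean(xs):
    return sum(xs) / len(xs)

# Reaction times (in ms) to a written target word (e.g., "thin") after
# hearing a semantically related spoken prime ("slender") versus an
# unrelated one.
rts_related = [520, 545, 510, 530]    # invented values
rts_unrelated = [580, 600, 565, 590]  # invented values

# A positive difference means related primes sped responses up --
# evidence that the spoken word was processed deeply enough to
# activate its semantic neighbors.
priming_effect = mean(rts_unrelated) - mean(rts_related)
print(f"Priming effect: {priming_effect:.1f} ms")
```

In the study’s terms, a dialect that produced little or no such difference (like NYC speech here) is one whose words were processed less deeply.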

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by one of the speakers of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic task that consistently generates false memories: people tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened to and understood the words irrespective of speaker. Yet listeners falsely recalled more lures for the word sets read by the NYC speaker than for those read by either the standard American or British speakers.
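For illustration only, here is a sketch of how recall might be scored in such a task. The word list and responses below are invented stand-ins, not the study’s materials:

```python
# Hypothetical scoring for a false-memory recall task of this kind.
# The list and responses are invented; they are not the study's materials.

studied = {"rest", "bed", "dream", "snore", "pillow"}
lure = "sleep"  # the related word deliberately left off the list

def score_recall(recalled):
    """Count correctly recalled studied words and check for the lure."""
    recalled = {w.lower() for w in recalled}
    true_recalls = len(recalled & studied)   # genuine memories
    false_lure = lure in recalled            # false memory of the lure
    return true_recalls, false_lure

# A participant who recalls three real words -- plus "sleep",
# which was never presented:
hits, lured = score_recall(["bed", "dream", "sleep", "pillow"])
print(hits, lured)
```

In the experiment, the true-recall counts were similar across dialects; it was the rate of falsely recalled lures that rose for lists read in the NYC dialect.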


Figure from Sumner & Kataoka (2013) showing that listeners falsely recalled more lures from lists spoken in a NYC dialect than from lists spoken in standard American or British dialects.

The authors offer an explanation for the two findings: on some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown both to decrease semantic priming and to increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel found that people with a standard American dialect, as well as those with a NYC dialect, later remembered –er words better when they had originally heard them in a standard American dialect than in a NYC dialect. These results also fit with the idea that speakers of standard American (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review presenting a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech, and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.

_____

Photo credit: Melvin Gaal, used via Creative Commons license

Sumner, M., & Kataoka, R. (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner, M., Kim, S. K., King, E., & McGowan, K. B. (2014). The socially weighted encoding of spoken words: A dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

The Demise of the Expert

These days, I find myself turning off the news while asking the same questions. When did we stop valuing knowledge and expertise? When did impressive academic credentials become a political liability? When did the medical advice of celebrities like Jenny McCarthy and Ricki Lake become more trusted than that of government safety panels, scientists, and physicians? When did running a small business or being a soccer mom qualify a person to hold the office of president and make economic and foreign policy decisions?

As Rick Perry, the Republican front-runner for president, recently told us, “You don’t have to have a PhD in economics from Harvard to really understand how to get America back working again.” Really? Why not? It certainly seems to me that some formal training would help. And yet many in Congress pooh-poohed economists’ warnings about the importance of raising the debt ceiling and have insisted on decreasing regulations despite the evidence that this won’t help to improve our economy (and will further harm our environment). Meanwhile, man-made climate change is already affecting our planet. Natural disasters such as droughts and hurricanes are on the rise, just as scientists predicted. But we were slow to accept their warnings and have been slow to enact any meaningful policies to stem the course of this calamity.

The devaluation of expertise is puzzling enough, but perhaps more puzzling still is the timing. Never before in human history have we witnessed the fruits of expertise as we do today. Thanks to scientists and engineers, we rely on cell phones that wirelessly connect us to the very person we want to talk to at the moment we want to talk. In turn, these cell phones operate through satellites that nameless experts have set spinning in precise orbits around Earth. We keep in touch with friends, do our banking and bill-paying, and make major purchases using software written in codes we don’t understand and transmitted over a network whose very essence we struggle to comprehend. (I mean, what exactly is the Internet?) Meanwhile, physicians use lasers to excise tumors and correct poor vision. They replace damaged livers and hearts. They fit amputees with hi-tech artificial limbs, some with feet that flex and hands that grasp.

Obviously none of this would have been possible without experts. You need more than high school math and a casual grasp of physics or anatomy to develop these complex systems, tools, and techniques. So why on Earth would we discount experts now, when we have more proof than ever of their worth?

My only guess is education. Our national public education system is in shambles. American children rank 25th globally in math and 21st in science. At least two-thirds of American students cannot read at grade level. But there is something our students score high on. As the documentary Waiting for Superman highlighted, American students rank number one in confidence. This may stem from the can-do culture of the United States or from the success our nation has enjoyed over the last 65 years. But it makes for a dangerous combination. We are churning out students with inadequate knowledge and skills who nonetheless believe they can intuit and accomplish anything. And if you believe that, then why not believe you know better than the experts?

I think the only remedy for this situation is better education, but not for the reasons you might think. In my opinion, the more a person learns about any given academic subject, the more realistic and targeted his or her self-confidence becomes.

The analogy that comes to mind is of a blind man trying to climb a tree. When he’s still at the base of the tree, all he can feel is the trunk. From there, he has little sense of the size or shape of the rest of the tree.  But suppose he climbs up on a limb and then out to even smaller branches. He still won’t know the shape of the rest of the tree, but from his perch on one branch, he can feel the extensive foliage. He’ll know that the tree must be large and he can presume that the other branches are equally long and intricate. He can appreciate how very much there must be of the tree that’s beyond his reach.

I think the same principle applies to knowledge. The more we know, the more we can appreciate how much else there is out there to know – things about which we haven’t got a clue. As we climb out on our tiny branches, acquiring knowledge, we also gain an awareness of our profound ignorance. Unfortunately, many of America’s children (and by now, adults too) aren’t climbing the tree at all; they’re still lounging at the base, enjoying a picnic in the shade.

Should it surprise us, then, to learn that they don’t see the value in expertise? That they can support political candidates who disparage the advice of specialists and depict academic achievement as a form of elitism? Why shouldn’t they trust the advice of a neighbor, a talk show host, or an actor over the warnings of the ‘educated elite’?

No single person can know everything there is to know in today’s world, so the sum of human knowledge must be dispersed among millions of specialized experts. Human progress relies on these people, dangling from their obscure little branches, to help guide our technology, our public policy, our research and governance. Our world has no shortage of experts. Now if only people would start listening to them.

The Little Glacier That Could


My husband and I just returned from an Alaskan cruise. Yes, life is cruel. We ate dessert at every meal, had our very own butler, and enjoyed every type of hedonistic frivolity. We also experienced Alaska for the first time and had our first encounter with a glacier. And it looked, well, cold. And hard. And not nearly as much fun as the ship’s chocolate buffet.

It seems to me that glaciers are suffering from a public relations problem. As temperatures rise, they’ll continue to disappear, altering sea levels and destabilizing ecosystems. Only idiots and corporate zealots think global warming isn’t happening or isn’t harmful. The rest of us are at least aware that glaciers are going the way of popsicles in an August sun. And after seeing a glacier firsthand, I’ve decided the problem is one of image. Glaciers simply aren’t cute.

In one of my recent posts, Six Loves Seven, I wrote about our natural inclination to personify objects. We are social animals and we naturally ascribe genders to our cars and personalities to our misbehaving gadgets. Historically, we’ve even personified nature. We had gods of the sea, of the sun, moon, and earth. And with that personification came respect, or at least awareness. We’re such social animals that we can’t make ourselves care about a hunk of rock, even if that rock happens to be our home. But call that rock Mother Earth and the guilt pours in. Guilt and maybe even the action that it engenders. When we personify, we make ourselves care.

Humans can feel some strong emotions toward inanimate objects – just think of the look of yearning on a window shopper’s face. Or how people will fight over possessions – from divorcing spouses to those divvying up a loved one’s estate. But inanimate objects can’t engender the love and guilt that seems uniquely able to spur us to philanthropy and self-sacrifice.

On an intellectual level, we may understand that glacial melt poses a serious risk to our planet and possibly ourselves. We may even feel anxiety about it. But all of that knowledge and self-interest has probably amounted to less individual action (and certainly less personal agonizing) than the reports that polar bears have been dying as a result. The image of exhausted polar bears searching in vain for sea ice evokes a personal empathy that a block of frozen water never could. If you’re like me, you feel physical discomfort when clips of hungry children flash on your TV screen or when mass mailers stuffed with sad photos arrive in the mail. We understand misery best when we see it on a face.

The solution came to me as my husband and I sailed away on our luxury ocean liner. What we need is a mascot. Maybe a new cartoon franchise featuring Glen the Baby Glacier. Little bitty Glen wants nothing more than to grow to be big like his dad. If only it weren’t so gosh darn hot! Maybe if he’s cute enough and famous enough, kids will start asking to ride their bikes to school. Adults will shell out for the energy-saving light bulbs. And next time my husband and I will opt for a more eco-friendly vacation. Maybe, if only glaciers seemed a little more, well, warm and fuzzy.

My money’s on you, Glen.

____

Photo credit: Sabin Dang

Jaded

In December 2008, I stared up at one of the great marvels of the world, the gleaming Taj Mahal. And I felt – nothing. Curiosity about its fabled history, yes. But other than that, all I felt was ambivalence about posing for pictures in its imposing foreground and a certain reluctance to leave my shoes unattended as I toured the palace itself.

I should have been awestruck. The Taj Mahal is stunning, a brilliant feat of engineering and craftsmanship, design and artistic grandeur. But the problem was, this wasn’t the first time I’d seen it, or even the second. Over the years, I’d seen the iconic structure in countless photographs, documentaries, and movies. By 2008, I’d encountered the great edifice so many times from the comfort of my couch that now, having traveled halfway around the world to gaze upon it, I was wondering what we would have for lunch.

It’s shameful, I know. But I suspect I’m not the only guilty one.

Recently, a friend told me why she couldn’t stand modern literature. “I hate the descriptions,” she said. “They’re flowery and overblown and just plain weird.” Although I enjoy contemporary fiction, I knew what she was referring to. While authors of the past could devote full paragraphs to describing fields in bloom or dank urban alleys, they generally used concrete, sensible words. Contemporary writers tend to rely heavily on metaphors, or else they describe things in odd, non-literal ways. In her novel A Gate at the Stairs, Lorrie Moore uses the phrase “a papery caramel of leaves” to describe the wet waste that lined the roads. Whoever thought of soggy, caked leaves as caramel? And yet I think the description gives us something – a sense of color, of texture, and a fresh perspective.

It occurred to me that modern writers face an interesting challenge, namely jaded readers who have seen (if not experienced) it all. Readers like me, who can look upon the Taj Mahal without being awestruck. Not only are we better traveled than in days of yore, but we’re also exposed to places all over the world by way of screens large and small. In movies and on television we have seen rainforests and polar expeditions, villages from Scotland to Africa to Guatemala, Texas rodeos, Manhattan sex clubs, Roman amphitheaters, ocean floors, mountain peaks, and even the surface of the moon. No wonder we’re jaded. And no wonder fiction writers today have to sweat and toil to describe the world in a different way if we are to take note of it at all.

I’m torn about the vicarious exposure we get to our world through TV and movies. It’s a strange sort of life without living, experience that is like reality without actually being real. On the one hand, it gives us access to other places, times, and ways of life, showing us things we may never otherwise see. It can educate us, but I think it also steals something from us – the freshness and newness of discovery. I don’t want to be jaded, so I’m going to take this as a challenge. I’m going to push myself to experience each new surrounding fully, to open my eyes and look. More than that, I’m going to challenge myself to touch, taste, and smell the world around me. As yet, technology doesn’t stimulate those senses in our living rooms and movie theaters, which means the real world has got that market cornered.

Dumb Kids

I had an unpleasant experience in the car today. For the first time, I listened carefully to the lyrics of Van Morrison’s “Brown Eyed Girl” – you know, the one requested by brown-eyed girls everywhere and played at dances as a romantic, upbeat song. I could sing all of the lyrics, but I had never really listened to what I was singing.

Here’s the last verse:

So hard to find my way,
Now that I’m all on my own.
I saw you just the other day,
My, how you have grown!
Cast my memory back there, Lord
Sometimes I’m overcome thinkin’ ‘bout
Makin’ love in the green grass
Behind the stadium
With you, my brown eyed girl
You my brown eyed girl.

It’s a break-up song! Some of you may have known this, but my fiancé and I were shocked and saddened. He said I’d ruined the song for him. We moved it to a different, sadder playlist. How could we have sung lyrics we’d never even listened to?

I had a similar discovery about the ’80s song “Second Chance” by .38 Special. The clearly enunciated bridge contains the following lyrics:

I never loved her
I never needed her
She was willing and that’s all there is to say.
Don’t forsake me;
Please don’t leave me now.

She was willing? The song went from nostalgic to disturbing. I still listen to it, but with much less glee.

It’s true of movies too. I knew all of the lines to The Breakfast Club growing up but didn’t realize until adulthood that it wasn’t cigarettes they were smoking and that they didn’t get silly just because they’d become friends. And I didn’t figure out that there was an abortion in Dirty Dancing until years afterward.

It got me to wondering: how could I have loved these movies and learned the lyrics to these songs without understanding what they were about? And not even understanding that I didn’t understand?

It occurs to me that kids can’t get too hung up on what they don’t understand. If they did, they wouldn’t be able to enjoy most of what they saw or heard. They make sense of what they can and move on, oblivious and happy. That’s why Disney can sneak adult humor into movies without children noticing. And that’s why it wasn’t until today, in the car, that I learned the truth about the brown eyed girl.
