In my last post, I wrote about how scientists are beginning to engage with the public, particularly via social media and blogs. Here, I will use my recent experiences at the AAAS conference to illustrate how social media are changing the business of science itself.
The AAAS conference was the first science meeting I’ve attended as an active tweeter. The experience opened my eyes. Throughout the event, scientists and science writers were tweeting about interesting talks and points made in various sessions. Essentially, this gave me eyes and ears throughout the conference. For instance, during a slow moment in the session I was attending, I checked the #AAAS hashtag on Twitter and saw several intriguing tweets from people in another session:
These tweets drew my attention to a talk that I would otherwise have missed completely. I could then decide if I wanted to switch to the other session or learn more about the speaker and her work later on. Even if I did neither, I’d learned a few interesting facts with minimal effort.
Twitter can be a very useful tool for scientists. Aside from its usefulness at conferences, it’s a great way to learn about new and exciting papers in your field. Those who aren’t on Twitter might be surprised to hear that it can be a source for academic papers rather than celebrity gossip. Ultimately, the information you glean from Twitter depends entirely on the people you choose to follow. Scientists often follow other scientists in their own or related fields. Thus, they’re more likely to come upon a great review on oligodendrocytes than news on Justin Bieber’s latest antics. Scientists and science writers form their own interconnected Twitter networks through which they share the type of content that interests them.
Katie Mack, an astrophysicist at the University of Melbourne, has logged some 32,000 tweets as @AstroKatie and has about 7,300 followers on Twitter to date. She recently explained on the blog Real Scientists why she joined Twitter in the first place:
“Twitter started out as an almost purely professional thing for me — I used it to keep up with what other physicists and astronomers were talking about, what people were saying at conferences, that kind of thing. It’s great for networking as well, and just kind of seeing what everyone is up to, in your own field and in other areas of science. Eventually I realized it could also be a great tool for outreach and for sharing my love of science with the world.”
Social media and the Internet more broadly have also made new avenues of scientific research possible. They’ve spurred citizen science projects and collaborative online databases like the International Nucleotide Sequence Database Collaboration. Yet social media and online content have also affected research on a smaller scale as individual scientists discover the science diamonds in the rough. For example, Amina Khan described in a recent Los Angeles Times article how a group of scientists mined online content to compare the strategies different animals use to swim. She writes:
“They culled 112 clips from sites like YouTube and Vimeo depicting 59 different species of flying and swimming animals in action, including moths, bats, birds and even humpback whales. They wanted to see where exactly the animals’ wings (or fins) bent most, and exactly how much they bent.”
Another wonderful example of the influence of YouTube on science came to my attention at the AAAS meeting when I attended a session on rhythmic entrainment in non-human animals. Rhythmic entrainment is the ability to match your movements to a regular beat, such as when you tap your foot to the rhythm of a song. Only five years ago it was widely believed that the ability to match a beat is unique to humans . . . that is, until Aniruddh Patel of Tufts University received an email from his friend.
As Dr. Patel described in the AAAS session, the friend wrote to share a link to a viral YouTube video of a cockatoo named Snowball getting down to the Backstreet Boys. What did Patel make of it? Although the bird certainly seemed to be keeping the beat, it was impossible to know what cues the animal was receiving off-screen. Instead of shrugging off the video or declaring it a fraud, Patel contacted the woman who posted it. She agreed to collaborate with Patel and let him test Snowball under carefully controlled conditions. Remarkably, Snowball was still able to dance to various beats. Patel and his colleagues published their results in 2009, upending the field of beat perception.
That finding sparked a string of new experiments with various species and an entertaining lineup of speakers and animal videos at the AAAS session. Among them, I had the pleasure of watching a sea lion nodding along to “Boogie Wonderland” and a bonobo pounding on a drum.
In essence, the Internet and social media are bringing new opportunities to the doorsteps of scientists. As Dr. Patel’s experience shows, it’s wise to open the door and invite them in. Like everything else in modern society, science does not lie beyond the reach of social media. And thank goodness for that.
Patel, A. D., Iversen, J. R., Bregman, M. R., & Schulz, I. (2009). Experimental evidence for synchronization to a musical beat in a nonhuman animal. Current Biology, 19(10), 827–830. DOI: 10.1016/j.cub.2009.03.038
While waiting for the L train to attend the American Association for the Advancement of Science (AAAS) meeting this week, I came upon Nicholas Kristof’s latest New York Times op-ed: “Professors, We Need You!” In his piece, Kristof portrays professors as out-of-touch intellectuals who study esoteric fields and hide their findings in impenetrable jargon. He also says that academia crushes rebels who communicate their science to the public. I admire Mr. Kristof for his efforts to bring awareness to injustices around the world, and I agree that academic papers are often painful – if not impossible – to read. But my experience at the AAAS conference this week highlights how wrong he is, both in his depiction of academics and of the driving forces within academia itself.
AAAS is the organization behind Science magazine, ScienceNOW news, Science Careers, and the AAAS fellowship programs. Among the goals in its mission statement: to enhance communications among scientists, engineers, and the public; to provide a voice for science on societal issues; and to increase public engagement with science and technology. So yes, you would expect their conference to focus on science communication. Still, the social media sessions (Engaging with Social Media and Getting Started in Social Media) were full of scientists of all ages. Another well-attended session taught listeners how to use sites and services like Google Scholar, Mendeley, ORCID, and ResearchGate to improve the visibility of their work online.
Throughout the conference, scientists were live-tweeting interesting facts and commentary from the sessions they attended using the #AAASmtg hashtag. I saw a particularly wonderful example of this at a Saturday morning symposium called Building Babies. All five of the speakers at the symposium have accounts on Twitter and four of them were live-tweeting during each other’s presentations. Three of them (Kate Clancy, Julienne Rutherford, and Katie Hinde) also have popular blogs: Context and Variation, BANDIT, and Mammals Suck, respectively. After the symposium, Dr. Hinde compiled the symposium-related tweets on Storify.
I won’t claim that this panel of speakers is representative of scientists as a whole, but I do believe that they are representative of the direction in which scientists are moving. And contrary to Mr. Kristof’s claims, I would argue that their public visibility and embrace of online communication have probably helped rather than hindered their careers. Increased visibility can lead to more invitations to give talks, more coverage from the science press, and added connections outside of one’s narrow field of expertise. The first two of these can fill out a CV and attract positive public attention to a department, both pluses for a young academic who’s up for tenure. Moreover, while hiring and tenure decisions are made within departments, funding comes from organizations and institutions that typically value plain-speaking scientists who do research with societal relevance. For these reasons (and, I’m sure, others), it’s becoming obvious that scientists can benefit from clarity, accessibility, and visibility. In turn, many scientists are learning the necessary skills and making inroads to communicating with the public.
Of course, public visibility offers both promise and peril for scientists. As climate scientist and blogger Kim Cobb explained in her wonderful AAAS talk, scientists worry about appearing biased or unprofessional when they venture into the public conversation on social media. Science writer and former researcher Bethany Brookshire mentioned another potential peril: the fact that thoughtless or offensive off-the-cuff comments made on social media can come back to haunt scientists in their professional lives. It is also certainly true in academia (as it is in most spheres) that people are disdainful of peers who seem arrogant or overly self-promotional.
In short, scientists hoping to reach the public have their work cut out for them. They must learn how to talk about science in clear and comprehensible terms for non-scientists. They must be engaging yet appropriate in public forums and strike the right balance between public visibility and the hard-won research results to back up the attention they receive. They have good reason to tread carefully as they wade into the rapid waters of the Twitterverse, the blogosphere, and other wide-open forums. Yet they are wading in all the same.
There have already been some great responses to Kristof’s call for professors. Political scientist Erik Voeten argued that many academics already engage the public in a variety of ways. Political scientist Corey Robin pointed out that the engagement of academics with the public is often stymied by a lack of time and funding. Academics are rarely paid for the time they spend communicating with the public and may need to concentrate their efforts on academic publications and grant applications because of the troubling job market and funding situation.
Still, many academics are ready to take the plunge and engage with the public. What they need is more training and guidance. Graduate programs should provide better training in writing and communicating science. Universities and societies should offer mentorship and seminars for scientists who want to improve the visibility of their research via the web. We need to have many more panels and discussions like the ones that took place at the AAAS meeting this week.
Oh, and while we’re at it: fewer misinformed, stereotypical descriptions of stodgy professors in ivory towers would be nice.
People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of a speaker’s dialect can fundamentally affect how deeply you process and how well you remember what that person says.
Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.
In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about it. If they had processed the spoken word deeply, it should have brought related words to mind and sped their responses to related written words. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply by the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.
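The arithmetic behind a priming analysis is simple: if hearing slender speeds the judgment of thin, the spoken word was deeply encoded. Here is a minimal sketch of that logic in Python, using made-up reaction times purely for illustration (these are not the study’s data):

```python
# Hypothetical reaction times (ms) for judgments on written target words,
# split by whether the spoken prime was semantically related and by the
# prime speaker's dialect. Illustrative numbers only, not the study's data.
rts = {
    "American": {"related": [520, 540, 510], "unrelated": [580, 600, 570]},
    "British":  {"related": [525, 545, 515], "unrelated": [585, 595, 575]},
    "NYC":      {"related": [575, 590, 580], "unrelated": [580, 595, 585]},
}

def priming_effect(dialect):
    """Mean unrelated RT minus mean related RT: a positive value means
    related primes sped responses, i.e. semantic priming occurred."""
    cond = rts[dialect]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cond["unrelated"]) - mean(cond["related"])

for dialect in rts:
    print(dialect, round(priming_effect(dialect), 1))
```

In this toy data, the American and British primes show a large effect while the NYC primes show essentially none, mirroring the pattern the paper reports.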
In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by one of the speakers of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic task that consistently generates false memories. People tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened to and understood the words irrespective of speaker. Yet listeners falsely recalled more lures for the word sets read by the NYC speaker than by either the standard American or British speakers.
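This false-memory task is the classic DRM (Deese–Roediger–McDermott) paradigm. Its scoring logic can be sketched in a few lines; the word list below is a standard illustrative example, not the study’s actual stimuli:

```python
# One DRM-style study list: every word relates to a "lure" (sleep)
# that is deliberately never presented.
study_list = ["rest", "bed", "dream", "pillow", "nap", "yawn"]
lure = "sleep"
assert lure not in study_list  # the lure must never appear in the list

def score_recall(recalled):
    """Return (number of true recalls, whether the lure was falsely recalled)
    for one participant's free-recall response."""
    hits = sum(1 for word in recalled if word in study_list)
    false_memory = lure in recalled
    return hits, false_memory

# A participant who "remembers" hearing sleep, though it was never played:
hits, false_memory = score_recall(["bed", "dream", "sleep", "rest"])
print(hits, false_memory)  # 3 True
```

In the study, true recalls (hits) were similar across dialects, while false recall of the lure was higher for the NYC speaker.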
The authors offer an explanation for the two findings. On some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown to both decrease semantic priming and increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that people with a standard American dialect as well as those with a NYC dialect showed better later memory for –er words that they originally heard in a standard American dialect compared with words heard in a NYC dialect. These results would also fit with the idea that speakers of standard American (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.
In fact, Sumner and colleagues recently published a review of a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.
If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.
Sumner, M., & Kataoka, R. (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151
Sumner, M., Kim, S. K., King, E., & McGowan, K. B. (2014). The socially weighted encoding of spoken words: A dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015
Hey there, friends. I recently contributed a post to the online Scientific American column Mind Matters. The piece is about how children develop the ability to contemplate, predict, and communicate other people’s thoughts and beliefs. You can read it here. Come for the new research findings, stay for the somewhat eerie revelation that babies as young as 10 months are predicting your thoughts and expectations.
We are great at identifying a color as blue versus yellow or a surface as scratchy, soft, or bumpy. But how do we do with a scent? Not so well, it turns out. If I presented you with several everyday scents, ranging from chocolate to sawdust to tea, you would probably name fewer than half of them correctly.
This sort of dismal performance is often chalked up to idiosyncrasies of the human brain. Compared to many mammals, we have paltry neural smelling machinery. To smell something, the smell receptors in your upper nasal cavity must detect a molecule and pass this information on to twin nubs that stick out of the brain. These nubs are called olfactory bulbs and they carry out the earliest steps of scent processing in the brain. While the size of the human brain is impressive with respect to our bodies, the human olfactory bulbs are nothing to brag about. Below, take a look at the honking olfactory bulbs (relative to overall brain size) on the dog. Compared to theirs, our bulbs look like a practical joke.
It’s easy to blame our bulbs for our smelling deficiencies. Indeed, many scientists have offered brain-based explanations for our shortcomings in the smell department. But could there be more to the story? What if we are hamstrung by a lackluster odor vocabulary? After all, we use abstract, categorical words to identify colors (e.g., blue), shapes (round), and textures (rough), but we generally identify odors by their specific sources. You might say: “This smells like coffee,” or “I detect a hint of cinnamon,” or offer up a subjective judgment like, “That smells gross.” We lack a descriptive, abstract vocabulary for scents. Could this fact account for some of our smell shortcomings?
Linguists Asifa Majid and Niclas Burenhult tackled this question by studying a group of people with a smell vocabulary quite unlike our own. The Jahai are a relatively small group of hunter-gatherers who live in Malaysia and Thailand. They use their sense of smell often in everyday life and their native language (also called Jahai) includes many abstract words for odors. Check out the first two columns in the table below for several examples of abstract odor words in Jahai.
Table from Majid & Burenhult (2014) in Cognition providing Jahai odor and color words, as well as their rough translations into English.
Majid and Burenhult tested whether Jahai speakers and speakers of American English could effectively and consistently name scents in their respective native languages. They stacked the deck in favor of the Americans by using odors that are all commonplace for Americans, while many are unfamiliar to the Jahai. The scents were: cinnamon, turpentine, lemon, smoke, chocolate, rose, paint thinner, banana, pineapple, gasoline, soap, and onion. For a comparison, they also asked both groups to name a range of color swatches.
The researchers published their findings in a recent issue of Cognition. As expected, English speakers used abstract descriptions for colors but largely source-based descriptions for scents. Their responses differed substantially from one person to the next on the odor task, while they were relatively consistent on the color task. Their answers were also nearly five times longer for the odor task than for the color task. That’s because English speakers struggled and tried to describe individual scents in more than one way. For example, here’s how one English speaker struggled to describe the cinnamon scent:
“I don’t know how to say that, sweet, yeah; I have tasted that gum like Big Red or something tastes like, what do I want to say? I can’t get the word. Jesus it’s like that gum smell like something like Big Red. Can I say that? Ok. Big Red. Big Red gum.”
Now compare that with Jahai speakers, who gave slightly shorter responses to name odors than to name colors and used abstract descriptors 99% of the time for both tasks. They were equally consistent at naming both colors and scents. And, if anything, this study probably underestimated the odor-naming consistency of the Jahai because many of the scents used in the test were unfamiliar to them.
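One simple way to quantify the consistency gap between the two groups is the share of participants who give the modal (most common) name for each stimulus. The sketch below uses invented responses and a placeholder second Jahai word; it is an illustration of the idea, not the paper’s data or its actual statistic:

```python
from collections import Counter

def modal_agreement(responses):
    """Fraction of participants giving the single most common answer."""
    counts = Counter(responses)
    return counts.most_common(1)[0][1] / len(responses)

# Illustrative answers to "what does this smell like?" for a cinnamon scent.
english_cinnamon = ["spicy gum", "Big Red", "sweet", "potpourri", "candy"]
jahai_cinnamon = ["itpit", "itpit", "itpit", "itpit", "other-odor-word"]

print(modal_agreement(english_cinnamon))  # 0.2 -- everyone disagrees
print(modal_agreement(jahai_cinnamon))    # 0.8 -- near consensus
```

On a measure like this, the English speakers’ idiosyncratic source-based answers scatter, while the Jahai speakers’ abstract odor words converge.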
The performance of the Jahai shows that odors are not inherently inexpressible, whether because of their diversity or because of the human brain’s supposed inability to do them justice. As the authors state in the paper’s abstract and title, odors are expressible in language, as long as you speak the right language.
Yet this discovery is not the final word either. The differences between Americans and Jahai don’t end with their vocabularies. The Jahai participants in the study use their sense of smell every day for foraging (their primary source of income). Presumably, their language contains a wealth of odor words because of the integral role this sense plays in their lives. While Americans and other westerners are surrounded by smells, few of us rely on them for our livelihood, safety, or well-being. Thanks to the adaptive nature of brain organization, there may be major differences in how Americans and Jahai represent odors in the brain. In fact, I’d wager that there are. Neuroscience studies have shown time and again that training and experience have very real effects on how the brain represents information from the senses.
As with all scientific discoveries, answers raise new questions. Is it the Jahai vocabulary that allows the Jahai to consistently identify and categorize odors? Or is it their lifelong experience and expertise that gave rise to their vocabulary and, separately, trained their brains in ways that alter their experience of odor? If someone magically endowed English speakers with the power to speak Jahai, would they have the smelling abilities to put its abstract odor words to use?
Would a rose by any other name smell as Itpit? The answer awaits the linguist, neuroscientist, or psychologist who is brave and clever enough to sniff it out.
Majid, A., & Burenhult, N. (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130(2), 266–270. DOI: 10.1016/j.cognition.2013.11.004
I can just hear the advertisement now.
Do you have perfect pitch? Would you like to? Then Depakote might be right for you . . .
Perfect pitch is the ability to name or produce a musical note without a reference note. While most children presumably have the capacity to learn perfect pitch, only one in about ten thousand adults can actually do it. That’s because children must receive extensive musical training as youngsters to develop it. Most adults with perfect pitch began studying music at six years of age or younger. By the time children turn nine, their window to learn perfect pitch has already closed. They may yet blossom into wonderful musicians but they will never be able to count perfect pitch among their talents.
Or might they after all?
Well no, probably not. But a new study, published in Frontiers in Systems Neuroscience, has opened the door to such questions. Its authors tested how young men learned to name notes when they were on or off of a drug called valproate (brand name: Depakote). Valproate is widely used to treat epilepsy and bipolar disorder. It’s part of a class of drugs called histone-deacetylase, or HDAC, inhibitors that fiddle with how DNA is stored and alter how genes are read out and translated into proteins.
The intricacies of how HDAC inhibitors affect gene expression and how those changes reduce seizures and mania are still up in the air. But while some scientists have been working those details out, others have been noticing that HDAC inhibitors help old mice learn new tricks. These drugs allow adult mice to adapt to visual and auditory changes in ways that are only otherwise possible for juvenile mice. In other words, HDAC inhibitors allowed mice to learn things beyond the typical window, or critical period, in which the brain is capable of that specific type of learning.
Judit Gervain, Allan Young, and the other authors of the current study set out to test whether HDAC inhibitors can reopen a learning window in humans as well. They randomly assigned their young male subjects to take valproate for either the first or the second half of the study. (Although I usually get my hackles up about the exclusion of female participants from biomedical studies, I understand their reason for doing so in this case. Valproate can cause severe birth defects. By testing men, the authors could be one hundred percent certain that their participants weren’t pregnant.) The subjects took valproate for one half of the study and a placebo for the other half . . . and of course they weren’t told which was which.
During the first half of the study, they trained twenty-four participants to learn six pitch classes. Instead of teaching them the formal names of these pitches in the twelve-tone musical system, they assigned proper names to each one (e.g., Eric, Rachel, or Francine), indicating that each is the name of a person who only plays one pitch class. The participants received this training online for up to ten minutes daily for seven days. During the second half of the study, eighteen of the same subjects underwent the same training with six new pitch classes and names. At the end of each seven-day training session, they heard the six pitch classes one at a time and, for each, answered the question: “Who played that note?”
The results? There was a whopping effect of treatment on performance in the first half of the study. The young men on valproate did significantly better than the men on placebo. That’s a remarkable result, particularly because the participants received so little training: the online sessions summed to a mere seventy minutes, and some of the participants didn’t even complete all seven of the ten-minute sessions.
As cool as the main finding is, there are some odd aspects to the study. As you can see from the figure, the second half of the experiment (after the treatments were switched) doesn’t show the same result as the first. Here, participants on valproate perform no differently from those on placebo. The authors suggest that the training in the first half of the experiment interfered with learning in the second half – a plausible explanation (and one they might have predicted in advance). Still, at this point we can’t tell if we are looking at a case of proactive interference or a failure to replicate results. Only time and future experiments will tell.
There were two other odd aspects of the study that caught my eye. The authors used synthesized piano tones instead of pure tones because the former have additional cues, like timbre, that help people without perfect pitch complete the task. They also taught the participants to associate each note with the name of the person who supposedly plays it rather than the name of the actual note or some abstract stand-in identifier. Both choices make it easier for the participants to perform well on the task but call into question how similar the participants’ learning is to the specific phenomenon of perfect pitch. Perhaps the subjects on valproate in the first half of the experiment were relying on different cues (e.g., timbre instead of frequency). Likewise, associating proper names of people with notes may help subjects learn precisely because it recruits social processes and networks that people with perfect pitch don’t use for the task. If these social processes don’t have a critical period like perfect pitch judgment does, well then valproate might be boosting a very different kind of learning.
As the authors themselves point out, this small study is merely a “proof-of-concept,” albeit a dramatic one. It is not meant to be the final word on the subject. Still, I am curious to see where this leads. Might valproate’s success with seizures and mania have something to do with its ability to trigger new learning? And if HDAC inhibitors do alter the brain’s ability to learn skills that are typically crystallized by adulthood, how has that affected the millions of adults who have been taking these drugs for years? Yet again, only time and science will tell.
I, for one, will be waiting to hear what they have to say.
Gervain, J., Vines, B. W., Chen, L. M., Seo, R. J., Hensch, T. K., Werker, J. F., & Young, A. H. (2013). Valproate reopens critical-period learning of absolute pitch. Frontiers in Systems Neuroscience, 7. PMID: 24348349