How People Tawk Affects How Well You Listen


People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of people’s dialects can fundamentally affect how you process and remember what they say.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three different female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about it. If they had processed the spoken word deeply, it should have brought related words to mind and sped up their judgment about the related written word. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply by the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.
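
If you like seeing the logic laid out, here is a minimal sketch of how a priming effect like this might be scored from reaction times. The dialect labels mirror the study, but the trial structure and every number below are invented for illustration; this is not the authors’ data or analysis.

```python
# A toy illustration of scoring semantic priming (invented numbers, not the study's data).
# Priming shows up as faster judgments of a written word when the spoken word
# that preceded it was semantically related.

from statistics import mean

# Each trial: (speaker dialect, prime type, reaction time in ms) -- all fabricated
trials = [
    ("American", "related", 512), ("American", "unrelated", 575),
    ("British",  "related", 520), ("British",  "unrelated", 570),
    ("NYC",      "related", 566), ("NYC",      "unrelated", 568),
]

for dialect in ("American", "British", "NYC"):
    related = mean(t[2] for t in trials if t[0] == dialect and t[1] == "related")
    unrelated = mean(t[2] for t in trials if t[0] == dialect and t[1] == "unrelated")
    # A large positive difference means the spoken word primed the written one
    print(f"{dialect}: priming effect = {unrelated - related:.0f} ms")
```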

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by one of the speakers of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic task that consistently generates false memories. People tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened and understood the words irrespective of speaker. Yet listeners falsely recalled more lures for the word sets read by the NYC speaker than by either the standard American or British speakers.
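
Here is a toy version of how recall might be scored in a task like this. The studied words come from the example above, but the recalled set and the code are my own invention, not the study’s materials.

```python
# A toy version of scoring recall in a false-memory task (invented responses).
studied = {"rest", "bed", "dream", "tired", "night"}  # words actually heard
lure = "sleep"                                        # related word never presented

recalled = {"bed", "dream", "sleep", "night"}         # one listener's (made-up) recall

true_recall = len(recalled & studied)  # correctly remembered studied words
false_memory = lure in recalled        # did the listener "remember" the lure?

print(f"Correct recalls: {true_recall} of {len(studied)}")
print(f"Falsely recalled the lure '{lure}': {false_memory}")
```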


Figure from Sumner & Kataoka (2013) showing more false recalls from lists spoken with a NYC dialect than those spoken in standard American or British dialects.

The authors offer an explanation for the two findings: on some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown to both decrease semantic priming and increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that people with a standard American dialect, as well as those with a NYC dialect, had better later memory for –er words they originally heard in a standard American dialect than for words heard in a NYC dialect. These results would also fit with the idea that speakers of standard American (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review of a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.

_____

Photo credit: Melvin Gaal, used via Creative Commons license

Sumner M, & Kataoka R (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner M, Kim S K, King E, & McGowan K B (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

Outsourcing Memory


Do you rely on your spouse to remember special events and travel plans? Your coworker to remember how to submit some frustrating form? Your cell phone to store every phone number you’ll ever need? Yeah, me too. You might call this time saving or delegating, but if you were a fancy psychologist you’d call it transactive memory.

Transactive memory is a wonderful concept. There’s too much information in this world to know and remember. Why not store some of it in “the cloud” that is your partner or coworker’s brain or in “the cloud” itself, whatever and wherever that is? The idea of transactive memory came from the innovative psychologist Daniel Wegner, most recently of Harvard, who passed away in July of this year. Wegner proposed the idea in the mid-80s and framed it in terms of the “intimate dyad” – spouses or other close couples who know each other very well over a long period of time.

Transactive memory between partners can be a straightforward case of cognitive outsourcing. I remember monthly expenses and you remember family birthdays. But it can also be a subtler and more interactive process. For example, one spouse remembers why you chose to honeymoon at Waikiki and the other remembers which hotel you stayed in. If the partners try to recall their honeymoon together, they can produce a far richer description of the experience than if they were to try separately.

Here’s an example from a recent conversation with my husband. It began when he mentioned that a Red Sox player once asked me out.

“Never happened,” I told him. And it hadn’t. But he insisted.

“You know, years ago. You went out on a date or something?”

“Nope.” But clearly he was thinking of something specific.

I thought really hard until a shred of a recollection came to me. “I’ve never met a Red Sox player, but I once met a guy who was called up from the farm team.”

My husband nodded. “That guy.”

But what interaction did we have? I met the guy nine years ago, not long before I met my husband. What were the circumstances? Finally, I began to remember. It wasn’t actually a date. We’d gone bowling with mutual friends and formed teams. The guy – a pitcher – was intensely competitive and I was the worst bowler there. He was annoyed that I was ruining our team score and I was annoyed that he was taking it all so seriously. I’d even come away from the experience with a lesson: never play games with competitive athletes.

Apparently, I’d told the anecdote to my husband after we met and he remembered a nugget of the story. Even though all of the key details from that night were buried somewhere in my brain, I’m quite sure that I would never have remembered them again if not for my husband’s prompts. This is a facet of transactive memory, one that Wegner called interactive cueing.

In a sense, transactive memory is a major benefit of having long-term relationships. Sharing memory, whether with a partner, parent, or friend, allows you to index or back up some of that memory. This fact also underscores just how much you lose when a loved one passes away. When you lose a spouse, a parent, a sibling, you are also losing part of yourself and the shared memory you have with that person. After I lost my father, I noticed this strange additional loss. I caught myself wondering when I’d stopped writing stories on his old typewriter. I realized I’d forgotten parts of the fanciful stories he used to tell me on long drives. I wished I could ask him to fill in the blanks, but of course it was too late.

Memories can be shared with people, but they can also be shared with things. If you write in a diary, you are storing details about current experiences that you can access later in life. No spouse required. You also upload memories and information to your technological gadgets. If you store phone numbers in your cell phone and use bookmarks and autocomplete tools in your browser, you are engaging in transactive memory. You are able to do more while remembering less. It’s efficient, convenient, and downright necessary in today’s world of proliferating numbers, websites, and passwords.

In 2011, a Science paper described how people create transactive memory with online search engines. The study, authored by Betsy Sparrow, Jenny Liu, and Wegner, received plenty of attention at the time.

In one experiment, they asked participants either hard or easy questions and then had them do a modified Stroop task: reporting the physical color of a written word rather than naming the word itself. This was a measure of priming, essentially whether a participant had been thinking about that word or similar concepts recently. Sometimes the participants were tested with the names of online search engines (Google, Yahoo) and at other times with other name brands (Nike, Target). After hard questions, the participants took much longer to do the Stroop task with Google and Yahoo than with the other brand names, suggesting that hard questions made them automatically think about searching the Internet for the answer.
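
To make the comparison concrete, here is a minimal sketch of how that interference might be computed. All of the numbers are fabricated for illustration; only the logic of the comparison comes from the study.

```python
# A toy illustration of the modified Stroop logic (all numbers fabricated).
# Naming a word's ink color takes longer when the word itself is already on
# your mind, so slower color-naming for "Google" after hard questions suggests
# participants were primed to think about searching.

from statistics import mean

# Each trial: (question difficulty, word type, color-naming time in ms) -- invented
trials = [
    ("hard", "search_engine", 712), ("hard", "other_brand", 640),
    ("easy", "search_engine", 655), ("easy", "other_brand", 648),
]

for difficulty in ("hard", "easy"):
    engine = mean(t[2] for t in trials if t[0] == difficulty and t[1] == "search_engine")
    brand = mean(t[2] for t in trials if t[0] == difficulty and t[1] == "other_brand")
    print(f"{difficulty} questions: search-engine interference = {engine - brand:.0f} ms")
```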


The other experiments described in the paper showed that people are less likely to remember trivia if they believe they will be able to look it up later. When participants thought that items of trivia were saved somewhere on a computer, they were also more likely to remember where the items were saved than they were to remember the actual trivia items themselves. Together, the study’s findings suggest that people actively outsource memory to their computers and to the Internet. This will come as no surprise to those of us who can’t remember a single phone number offhand, don’t know how to get around without the GPS, and hop on our smartphones to answer the simplest of questions.
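
One way to picture that “remember where, not what” result is as a key-value store, with our heads holding the keys and the computer holding the values. Here is a toy illustration, mine rather than the study’s:

```python
# A toy picture of "remember where, not what" as a key-value store (not from the study).
saved_trivia = {
    "FACTS/folder_a.txt": "An ostrich's eye is bigger than its brain.",
    "FACTS/folder_b.txt": "The space shuttle reentered at about Mach 25.",
}

# What sticks in the participant's head is the key (the location)...
remembered_location = "FACTS/folder_a.txt"

# ...while the value (the fact itself) is fetched on demand from the machine.
print(saved_trivia[remembered_location])
```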

Search engines, computer atlases, and online databases are remarkable things. In a sense, we’d be crazy not to make use of them. But here’s the rub: the Internet is jam-packed with misinformation or near-miss information. Anti-vaxxers, creationists, global warming deniers: you can find them all on the web. And when people want the definitive answer, they almost always find themselves at Wikipedia. While Wikipedia has valuable information, it is not written and curated by experts. It is not always the God’s-honest-truth and it is not a safe replacement for learning and knowing information ourselves. Of course, the memories of our loved ones aren’t foolproof either, but at least they don’t carry the aura of authority that comes with a list of citations.

Speaking of which: there is now a Wikipedia page for “The Google Effect” that is based on the 2011 Science article. A banner across the top shows an open book featuring a large question mark and the following warning: “This article relies largely or entirely upon a single source. . . . Please help improve this article by introducing citations to additional sources.” The citation for the first section is a dead link. The last section has two placeholders for citations, but in lieu of numbers they say, “According to whom?”

Folks, if that ain’t a reminder to be wary of outsourcing your brain to Google and Wikipedia, I don’t know what is.

_________

Photo credits:

1. Photo by Mike Baird on Flickr, used via Creative Commons license

2. Figure from “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips” by Betsy Sparrow, Jenny Liu, and Daniel M. Wegner.

Sparrow B, Liu J, & Wegner DM (2011). Google effects on memory: cognitive consequences of having information at our fingertips. Science (New York, N.Y.), 333 (6043), 776-8 PMID: 21764755

Delusions: Making Sense of Mistaken Senses


For a common affliction that strikes people of every culture and walk of life, schizophrenia has remained something of an enigma. Scientists talk about dopamine and glutamate, nicotinic receptors and hippocampal atrophy, but they’ve made little progress in explaining psychosis as it unfolds on the level of thoughts, beliefs, and experiences. Approximately one percent of the world’s population suffers from schizophrenia. Add to that the comparable numbers of people who suffer from affective psychoses (certain types of bipolar disorder and depression) or psychosis from neurodegenerative disorders like Alzheimer’s disease. All told, upwards of 3% of the population have known psychosis first-hand. These individuals have experienced how it transformed their sensations, emotions, and beliefs. Why hasn’t science made more progress explaining this level of the illness? What have those slouches at the National Institute of Mental Health been up to?

There are several reasons why psychosis has proved a tough nut to crack. First and foremost, neuroscience is still struggling to understand the biology of complex phenomena like thoughts and memories in the healthy brain. Add to that the incredible diversity of psychosis: how one psychotic patient might be silent and unresponsive while another is excitable and talking up a storm. Finally, a host of confounding factors plague most studies of psychosis. Let’s say a scientist discovers that a particular brain area tends to be smaller in patients with schizophrenia than in healthy controls. The difference might have played a role in causing the illness in these patients, it might be a direct result of the illness, or it might be the result of anti-psychotic medications, chronic stress, substance abuse, poor nutrition, or other factors that disproportionately affect patients.

So what’s a well-meaning neuroscientist to do? One intriguing approach is to study psychosis in healthy people. They don’t have the litany of confounding experiences and exposures that make patients such problematic subjects. Yet at first glance, the approach seems to have a fatal flaw. How can you study psychosis in people who don’t have it? It sounds as crazy as studying malaria in someone who’s never had the bug.

In fact, this approach is possible because schizophrenia is a very different illness from malaria or HIV. Unlike communicable diseases, it is a developmental illness triggered by both genetic and environmental factors. These factors affect us all to varying degrees and cause all of us – clinically psychotic or not – to land somewhere on a spectrum of psychotic traits. Just as people who don’t suffer from anxiety disorders can still differ in their tendency to be anxious, nonpsychotic individuals can differ in their tendency to develop delusions or have perceptual disturbances. One review estimates that 1 to 3% of nonpsychotic people harbor major delusional beliefs, while another 5 to 6% have less severe delusions. An additional 10 to 15% of the general population may experience milder delusional thoughts on a regular basis.

Delusions are a common symptom of schizophrenia and were once thought to reflect the poor reasoning abilities of a broken brain. More recently, a growing number of physicians and scientists have opted for a different explanation. According to this model, patients first experience the surprising and mysterious perceptual disturbances that result from their illness. These could be full-blown hallucinations or they could be subtler abnormalities, like the inability to ignore a persistent noise. Patients then adopt delusions in a natural (if misguided) attempt to explain their odd experiences.

An intriguing study from the early 1960s illustrates how rapidly delusions can develop in healthy subjects when expectations and perceptions inexplicably conflict. The study, run on twenty college students at the University of Copenhagen, involved a version of the trick now known as the rubber hand illusion. Each subject was instructed to trace a straight line while his or her hand was inside a box with a secret mirror. For several trials, the subject watched his or her own hand trace the line correctly. Then the experimenters surreptitiously changed the mirror position so that the subject was now watching someone else’s hand trace the straight line – until the sham hand unexpectedly veered off to the right! All of the subjects experienced the visible (sham) hand as their own and felt that an involuntary movement had sent it off course. After several trials with this misbehaving hand, the subjects offered explanations for the deviation. Some chalked it up to their own fatigue or inattention while others came up with wilder, tech-based explanations:

 . . . five subjects described that they felt something strange and queer outside themselves, which pressed their hand to the right or resisted their free mobility. They suggested that ‘magnets’, ‘unidentified forces’, ‘invisible traces under the paper’, or the like, could be the cause.

In other words, delusions may be a normal reaction to the unexpected and inexplicable. Under strange enough circumstances, anyone might develop them – but some of us are more likely to than others.

My next post will describe a clever experiment that planted a delusion-like belief in the heads of healthy subjects and used trickery and fMRI to see how it influenced some more than others. So stay tuned. In the meantime, you may want to ask yourself which members of your family and friends are prone to delusional thinking. Or ask yourself honestly: could it be you?

_______

Photo credit: MiniTar on Flickr, available through Creative Commons

Modernity, Madness, and the History of Neuroscience


I recently read a wonderful piece in Aeon Magazine about how technology shapes psychotic delusions. As the author, Mike Jay, explains:

Persecutory delusions, for example, can be found throughout history and across cultures; but within this category a desert nomad is more likely to believe that he is being buried alive in sand by a djinn, and an urban American that he has been implanted with a microchip and is being monitored by the CIA.

While delusional people of the past may have fretted over spirits, witches, demons and ghouls, today they often worry about wireless signals controlling their minds or hidden cameras recording their lives for a reality TV show. Indeed, reality TV is ubiquitous in our culture and experiments in remote mind-control (albeit on a limited scale) have been popping up recently in the news. As psychiatrist Joel Gold of NYU and philosopher Ian Gold of McGill University wrote in 2012: “For an illness that is often characterized as a break with reality, psychosis keeps remarkably up to date.”

Whatever the time or the place, new technologies are pervasive and salient. They are on the tips of our tongues and, eventually, at the tips of our fingers. Psychotic or not, we are all captivated by technological advances. They provide us with new analogies and new ways of explaining the all-but-unexplainable. And where else do we attempt to explain the mysteries of the world, if not through science?

As I read Jay’s piece on psychosis, it struck me that science has historically had the same habit of co-opting modern technologies for explanatory purposes. In the case of neuroscience, scientists and physicians across cultures and ages have invoked the innovations of their day to explain the mind’s mysteries. For instance, the science of antiquity was rooted in the physical properties of matter and the mechanical interactions between them. Around the 7th century BC, empires began constructing great aqueducts to bring water to their growing cities. The great engineering challenge of the day was to control and guide the flow of water across great distances. It was in this scientific milieu that the ancient Greeks devised a model for the workings of the mind. They believed that a person’s thoughts, feelings, intellect and soul were physical stuff: specifically, an invisible, weightless fluid called psychic pneuma. Around 200 AD, Galen, a physician and scientist of the Roman Empire (known for its masterful aqueducts), revised and clarified the theory. He believed that pneuma fills the brain cavities called ventricles and circulates through white matter pathways in the brain and nerves in the body just as water flows through a tube. As psychic pneuma traveled throughout the body, it carried sensation and movement to the extremities. Although the idea may sound farfetched to us today, this model of the brain persisted for more than a millennium and influenced Renaissance thinkers including Descartes.

By the 18th century, however, the science world was abuzz with two strange new forces: electricity and magnetism. At the same time, physicians and anatomists began to think of the brain itself as the stuff that gives rise to thought and feeling, rather than a maze of vats and tunnels that move fluid around. In the 1790s, Luigi Galvani’s experiments zapping frog legs showed that nerves communicate with muscles using electricity. So in the 19th century, just as inventors were harnessing electricity to run motors and light up the darkness, scientists reconceived the brain as an organ of electricity. It was a wise innovation and one supported by experiments, but also driven by the technical advances of the day.

Science was revolutionized once again with the advent of modern computers in the 1940s and ‘50s. The new technology sparked a surge of research and theories that used the computer as an analogy for the brain. Psychologists began to treat mental events like computer processes, which can be broken up and analyzed as a set of discrete steps. They equated brain areas to processors and neural activity in these areas to the computations carried out by computers. Just as computers rule our modern technological world, this way of thinking about the brain still profoundly influences how neuroscience and psychology research is carried out and interpreted. Today, some labs cut out the middleman (the brain) entirely: results from computer models of the brain are regularly published in neuroscience journals, sometimes without any data from an actual physical brain.

I’m sure there are other examples from the history of neuroscience in general and certainly from the history of science as a whole. Please comment and share any other ways that technology has shaped the models, themes, and analogies of science!

Additional sources:

Crivellato E & Ribatti D (2007) Soul, mind, brain: Greek philosophy and the birth of neuroscience. Brain Research Bulletin 71:327-336.

Karenberg A (2009) Cerebral Localization in the Eighteenth Century – An Overview. Journal of the History of the Neurosciences, 18:248-253.

_________

Photo Credit: dominiqueb on Flickr, available through Creative Commons

Plastic and the Developing Brain


When I was pregnant with my daughter, I had enough on my mind. I didn’t have much time to think about plastic. I knew vaguely that plastics can release estrogen-mimicking substances like bisphenol A (BPA) into our food, and I’d heard that they might cause genital defects in male fetuses. But once my husband and I had the 20-week ultrasound and knew we were having a girl, I thought I could stop searching for products in cardboard or glass. It was just too hard. Everything is packaged in plastic these days.

Apparently I jumped the gun.

Scientific papers warning about the hazards of prenatal exposure to BPA have been coming out in a steady stream, with a string of particularly damning ones appearing over the last 18 months in the Proceedings of the National Academy of Sciences. Last month one in particular caught my eye: a study of how prenatal BPA exposure changes the brain. The results were enough to make this neuroscientist pause.

While we tend to think of estrogens as the sex hormones that manage ovulation and pregnancy, these molecules also have powerful and direct effects on the brain. Many types of neurons have estrogen receptors on their outer surface. While there are several kinds of estrogen receptors in the brain, all bind to estrogens (and other molecules that resemble estrogens) and all trigger changes within their neurons as a result. These small changes can potentially add up to alter how entire neural circuits function. In fact, estrogens influence a wide range of skills and behaviors – from cognitive function to mood regulation and even fine motor control. While we don’t yet know why estrogens have such a broad and powerful influence on the brain, it does appear that we should think twice before mucking around with estrogen levels, particularly in the developing brain.

BPA and other compounds found in plastics resemble estrogens. The similarity is close enough to fool estrogen receptors, which bind to these foreign molecules and interpret them as additional estrogen. Although BPA has been used commercially as a dental sealant and liner for food containers (among many other uses) since the 1960s, the health consequences of this case of mistaken identity are just beginning to be understood.

In the PNAS paper published last month, a group of scientists headed by Dr. Frances Champagne at Columbia report the effects of prenatal BPA exposure on mice. They fed pregnant laboratory mice one of three daily doses of BPA (2, 20, or 200 μg/kg) or a control product without BPA. These are not high doses of BPA. Based on the amount of BPA found in humans, scientists estimate that we are exposed to about 400 μg/kg per day. The U.S. Food and Drug Administration reached its own estimate by testing the amount of BPA in various foods and then approximating how much of those foods people consume daily. Its calculations put the figure at around 0.19 μg/kg daily for adults. This discrepancy (400 versus 0.19) is one of many points of contention between the FDA, the packaging industry, and the scientific community on the subject of BPA.
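
For the curious, here is the arithmetic behind that discrepancy, using only the figures quoted above:

```python
# The dose figures quoted above, in micrograms of BPA per kg of body weight per day.
mouse_doses = [2, 20, 200]  # daily doses fed to the pregnant mice
human_high = 400            # estimate based on BPA levels measured in people
human_fda = 0.19            # FDA estimate based on BPA measured in foods

# The two human estimates disagree by a factor of roughly two thousand.
print(f"Estimate discrepancy: {human_high / human_fda:,.0f}x")

# Every mouse dose falls between the two human estimates.
for dose in mouse_doses:
    print(f"{dose} ug/kg = {dose / human_fda:,.0f}x the FDA estimate, "
          f"{dose / human_high:.3f}x the higher estimate")
```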

Champagne and her colleagues fed their mice BPA on each of the twenty days of mouse gestation. (That’s right, ladies: mouse pregnancies last less than three weeks.) After each mouse pup was born, the scientists either studied its behavior or sacrificed it and examined its brain.

What did they find? Prenatal BPA exposure had a noticeable impact on mouse brains, even at the lowest dose. They found BPA-induced changes in the number of new estrogen receptors being made in all three brain areas they examined: the prefrontal cortex, hypothalamus, and hippocampus. These effects were complex and differed depending on the gender of the animal, the brain area, the BPA dose, and the type of estrogen receptor. Still, in several cases the researchers found a surprising pattern. Without BPA exposure, female mice typically made more new estrogen receptors than their male counterparts. The same was true for mice given the highest BPA dose. But among pups exposed to the two lowest BPA doses, male mice made more estrogen receptors than females! This sex-difference reversal stemmed from changes in both genders; male mice made more estrogen receptors than normal at these doses while female mice made fewer than their norm.
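
Because that dose-by-sex pattern is a mouthful, here is a small sketch of the reversal. Every number is invented purely to show the shape of the result, not data from the paper:

```python
# A sketch of spotting the sex-difference reversal described above.
# All counts are fabricated to illustrate the pattern, not the paper's data.
receptor_counts = {  # (dose in ug/kg, sex) -> mean new estrogen receptors
    (0, "F"): 120,   (0, "M"): 90,    # control: females > males
    (2, "F"): 85,    (2, "M"): 110,   # low dose: reversed
    (20, "F"): 80,   (20, "M"): 105,  # low dose: reversed
    (200, "F"): 115, (200, "M"): 88,  # highest dose: back to females > males
}

for dose in (0, 2, 20, 200):
    diff = receptor_counts[(dose, "F")] - receptor_counts[(dose, "M")]
    direction = "F > M" if diff > 0 else "M > F"
    print(f"dose {dose:>3} ug/kg: {direction} (F - M = {diff:+d})")
```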

Champagne and colleagues also observed and recorded several behaviors of the mice in different circumstances. For most behaviors, males and females were naturally different from one another.  Just as human boys tend to chase each other more than girls do, male mouse pups chased more than females. Unexposed male mice sniffed a new mouse more than unexposed females did. They showed more anxiety-like behavior in an open space and were less active in their home cages. Prenatal BPA treatment reversed these natural sex differences. Exposed female mice did more sniffing, acted more anxious, and ran around less than their exposed male counterparts. And at the highest prenatal BPA dose, the male mice chased each other as rarely as the females did. In one case, BPA treatment affected the two genders similarly; both sexes were less aggressive than normal at the two lower doses and more aggressive than normal at the highest dose.

Overall, the results of the study are complex and it might be easy to ignore them because they don’t seem to tell a straightforward tale. Yet their findings can be summed up in a single sentence: BPA exposure in utero has diverse effects on the mouse brain and later behavior. Not only does the BPA ingested by the mom manage to affect the growing fetus, but those effects persist beyond the womb and past the end of the exposure to BPA.

Some will dismiss these results because they come from mice. After all, how much do we really resemble mice? Yet studies in monkeys have also found that BPA affects fetal development. And while mice and monkeys excrete BPA differently, they clear it at a similar rate to each other and to human women. Results from correlational studies in humans also suggest that BPA exposure during development affects mood, anxiety, and aggressiveness to varying degrees (depending on the child’s gender).

Still, there’s a lot we don’t know about the relevance of this study for humans. At the end of the day, mice aren’t humans and no one has agreed on how much BPA pregnant women ingest. Moreover, Champagne and colleagues examined only a small subset of the neural markers and behaviors that BPA might affect in mice. Perhaps the changes they describe are the worst of BPA’s effects, or perhaps they are only the tip of the iceberg. We don’t yet know.

What’s the upshot of all this? You may want to err on the side of caution, particularly if you’re pregnant. Avoid plastics when possible. Be aware of other sources of BPA like canned foods (which have plastic liners) and thermal receipts. Do what you can do and then try not to let it stress you out. If you’re pregnant, you already have enough on your mind.

As for my daughter, she seems to be fine despite her plasticized third trimester. While she doesn’t do much sniffing, she does occasionally slap my husband or me in the face. It could be the BPA making her aggressive. I choose to blame it on her sassy genes instead.

__

Photo credit: .imelda on Flickr


Kundakovic M, Gudsnuk K, Franks B, Madrid J, Miller RL, Perera FP, & Champagne FA (2013). Sex-specific epigenetic disruption and behavioral changes following low-dose in utero bisphenol A exposure. Proceedings of the National Academy of Sciences of the United States of America, 110 (24), 9956-61 PMID: 23716699
