Outsourcing Memory


Do you rely on your spouse to remember special events and travel plans? Your coworker to remember how to submit some frustrating form? Your cell phone to store every phone number you’ll ever need? Yeah, me too. You might call this time saving or delegating, but if you were a fancy psychologist you’d call it transactive memory.

Transactive memory is a wonderful concept. There’s too much information in this world to know and remember. Why not store some of it in “the cloud” that is your partner or coworker’s brain or in “the cloud” itself, whatever and wherever that is? The idea of transactive memory came from the innovative psychologist Daniel Wegner, most recently of Harvard, who passed away in July of this year. Wegner proposed the idea in the mid-80s and framed it in terms of the “intimate dyad” – spouses or other close couples who know each other very well over a long period of time.

Transactive memory between partners can be a straightforward case of cognitive outsourcing. I remember monthly expenses and you remember family birthdays. But it can also be a subtler and more interactive process. For example, one spouse remembers why you chose to honeymoon at Waikiki and the other remembers which hotel you stayed in. If the partners try to recall their honeymoon together, they can produce a far richer description of the experience than if they were to try separately.

Here’s an example from a recent conversation with my husband. It began when my husband mentioned that a Red Sox player once asked me out.

“Never happened,” I told him. And it hadn’t. But he insisted.

“You know, years ago. You went out on a date or something?”

“Nope.” But clearly he was thinking of something specific.

I thought really hard until a shred of a recollection came to me. “I’ve never met a Red Sox player, but I once met a guy who was called up from the farm team.”

My husband nodded. “That guy.”

But what interaction did we have? I met the guy nine years ago, not long before I met my husband. What were the circumstances? Finally, I began to remember. It wasn’t actually a date. We’d gone bowling with mutual friends and formed teams. The guy – a pitcher – was intensely competitive and I was the worst bowler there. He was annoyed that I was ruining our team score and I was annoyed that he was taking it all so seriously. I’d even come away from the experience with a lesson: never play games with competitive athletes.

Apparently, I’d told the anecdote to my husband after we met and he remembered a nugget of the story. Even though all of the key details from that night were buried somewhere in my brain, I’m quite sure that I would never have remembered them again if not for my husband’s prompts. This is a facet of transactive memory, one that Wegner called interactive cueing.

In a sense, transactive memory is a major benefit of having long-term relationships. Sharing memory, whether with a partner, parent, or friend, allows you to index or back up some of that memory. This fact also underscores just how much you lose when a loved one passes away. When you lose a spouse, a parent, a sibling, you are also losing part of yourself and the shared memory you have with that person. After I lost my father, I noticed this strange additional loss. I caught myself wondering when I’d stopped writing stories on his old typewriter. I realized I’d forgotten parts of the fanciful stories he used to tell me on long drives. I wished I could ask him to fill in the blanks, but of course it was too late.

Memories can be shared with people, but they can also be shared with things. If you write in a diary, you are storing details about current experiences that you can access later in life. No spouse required. You also upload memories and information to your technological gadgets. If you store phone numbers in your cell phone and use bookmarks and autocomplete tools in your browser, you are engaging in transactive memory. You are able to do more while remembering less. It’s efficient, convenient, and downright necessary in today’s world of proliferating numbers, websites, and passwords.

In 2011, a Science paper described how people create transactive memory with online search engines. The study, authored by Betsy Sparrow, Jenny Liu, and Wegner, received plenty of attention at the time.

In one experiment, they asked participants either hard or easy questions and then had them do a modified Stroop task that involved reporting the physical color of a written word rather than reading the word itself. This was a measure of priming, essentially a test of whether a participant had recently been thinking about that word or related concepts. Sometimes the participants were tested with the names of online search engines (Google, Yahoo) and at other times with other brand names (Nike, Target). After hard questions, the participants took much longer to do the Stroop task with Google and Yahoo than with the other brand names, suggesting that hard questions made them automatically think about searching the Internet for the answer.
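To make the logic of that comparison concrete, here is a minimal sketch with made-up numbers and hypothetical variable names (not the authors' data or analysis): if hard questions prime thoughts of searching the Internet, color-naming should be slower for search-engine words than for other brands afterward.

```python
# Illustrative simulation only -- not the study's data or analysis pipeline.
# Logic: if hard questions prime "search the Internet," color-naming should
# be slower for search-engine words (Google, Yahoo) than for other brands
# (Nike, Target) after hard questions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

# Simulated mean color-naming reaction times (ms) after HARD questions.
rt_search_brands = rng.normal(loc=680, scale=60, size=n)  # Google, Yahoo
rt_other_brands = rng.normal(loc=640, scale=60, size=n)   # Nike, Target

# Paired comparison within participants: slower responses to search-engine
# words would suggest those concepts were primed by the hard questions.
t, p = stats.ttest_rel(rt_search_brands, rt_other_brands)
print(f"mean slowdown = {np.mean(rt_search_brands - rt_other_brands):.1f} ms, "
      f"t = {t:.2f}, p = {p:.3f}")
```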

[Figure from Sparrow, Liu & Wegner (2011)]

The other experiments described in the paper showed that people are less likely to remember trivia if they believe they will be able to look it up later. When participants thought that items of trivia were saved somewhere on a computer, they were also more likely to remember where the items were saved than they were to remember the actual trivia items themselves. Together, the study’s findings suggest that people actively outsource memory to their computers and to the Internet. This will come as no surprise to those of us who can’t remember a single phone number offhand, don’t know how to get around without the GPS, and hop on our smartphones to answer the simplest of questions.

Search engines, computer atlases, and online databases are remarkable things. In a sense, we’d be crazy not to make use of them. But here’s the rub: the Internet is jam-packed with misinformation or near-miss information. Anti-vaxxers, creationists, global warming deniers: you can find them all on the web. And when people want the definitive answer, they almost always find themselves at Wikipedia. While Wikipedia has valuable information, it is not written and curated by experts. It is not always the God’s-honest-truth and it is not a safe replacement for learning and knowing information ourselves. Of course, the memories of our loved ones aren’t foolproof either, but at least they don’t carry the aura of authority that comes with a list of citations.

Speaking of which: there is now a Wikipedia page for “The Google Effect” that is based on the 2011 Science article. A banner across the top shows an open book featuring a large question mark and the following warning: “This article relies largely or entirely upon a single source. . . . Please help improve this article by introducing citations to additional sources.” The citation for the first section is a dead link. The last section has two placeholders for citations, but in lieu of numbers they say, “According to whom?”

Folks, if that ain’t a reminder to be wary of outsourcing your brain to Google and Wikipedia, I don’t know what is.

_________

Photo credits:

1. Photo by Mike Baird on Flickr, used via Creative Commons license

2. Figure from “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips” by Betsy Sparrow, Jenny Liu, and Daniel M. Wegner.

Sparrow B, Liu J, & Wegner DM (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776-778. PMID: 21764755

Neural Conspiracy Theories

Last month, a paper quietly appeared in The Journal of Neuroscience to little fanfare and scant media attention. The study revolved around a clever and almost diabolical premise: that using perceptual trickery and outright deception, its authors could plant a delusion-like belief in the heads of healthy subjects. Before you call the ethics police, I should mention that the belief wasn’t a delusion in the formal sense of the word. It didn’t cause the subjects any distress and was limited to the unique materials used in the study. Still, it provided a model delusion that scientists Katharina Schmack, Philipp Sterzer, and colleagues could use to investigate the interplay of perception and belief in healthy subjects. The experiment is quite involved, so I’ll stick to the coolest and most relevant details.

As I mentioned in my last post, delusions are not exclusive to people suffering from psychosis. Many people who are free of any diagnosable mental illness still have a tendency to develop them, although the frequency and severity of these delusions differ across individuals. There are some good reasons to conduct studies like this one on healthy people rather than psychiatric patients. Healthy subjects are a heck of a lot easier to recruit, easier to work with, and less affected by confounding factors like medication and stress.

Schmack, Sterzer, and colleagues designed their experiment to test the idea that delusions arise from two distinct but related processes. First, a person experiences perceptual disturbances. According to the group’s model, these disturbances actually reflect poor expectation signals as the brain processes information from the senses. In theory, these poor signals would make irrelevant or commonplace sights, sounds, and sensations seem surprising and important. Without an explanation for this unexpected weirdness, the individual comes up with a delusion to make sense of it all. Once the delusion is in place, so-called higher areas of the brain (those that do more complex things like ponder, theorize, and believe) generate new expectation signals based on the delusion. These signals feed back on so-called lower sensory areas and actually bias the person’s perception of the outside world based on the delusion. According to the authors, this would explain why people become so convinced of their delusions: they are constantly perceiving confirmatory evidence. Strangely enough, this model sounds like a paranoid delusion in its own right. Various regions of your brain may be colluding to fool your senses into making you believe a lie!

To test the idea, the experimenters first had to toy with their subjects’ senses. They did so by capitalizing on a quirk of the visual system: when people are shown two conflicting images separately to their two eyes, they don’t perceive both images at once. Instead, perception alternates between the two. In the first part of this experiment, the two images were actually movies of moving dots that appeared to form a 3-D sphere spinning either to the left (for one eye) or to the right (for the other). For this ambiguous visual condition, subjects were equally likely to see a sphere spinning to the right or to the left at any given moment in time, with the perceived direction switching periodically.

Now the experimenters went about planting the fake belief. They gave the subjects a pair of transparent glasses and told them that the lenses contained polarizing filters that would make the sphere appear to spin more in one of the two directions. In fact, the lenses were made of simple plastic and could do no such thing. Once the subjects had the glasses on, the experimenters began showing the same movie to both eyes. While this change allowed the scientists to control exactly what the subjects saw, the subjects had no idea that the visual setup had changed. In this unambiguous condition, all subjects saw a sphere that alternated direction (just as the ambiguous sphere had done), except that this sphere spun far more in one of the two directions. This visual trick, paired with the story about polarized lenses, was meant to make subjects believe that the glasses caused the change in perception.

After that clever setup, the scientists were ready to see how the model delusion would affect each subject’s actual perception. While the subjects continued to wear the glasses, they were again shown the two original, conflicting movies, one to each eye. In the first part of the experiment, this ambiguous condition had caused subjects to see a rotating sphere that alternated equally between spinning to the left and right. But if their new belief about the glasses biased their perception of the spinning sphere, they would now report seeing the sphere spin more often in the belief-consistent direction.

What happened? Subjects did see the sphere spin more in the belief-consistent direction. While the effect was small, it was still impressive that a planted belief could bias perception at all, considering the simplicity of the images. The researchers also found that each subject’s delusional conviction score (how convinced they were by their delusional thoughts in everyday life) correlated with this effect. The more a subject believed her real-life delusional thoughts, the more her belief about the glasses affected her perception of the ambiguous spinning sphere.
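For the curious, here is roughly what that kind of brain-behavior correlation looks like in code. It is a toy sketch with simulated numbers and hypothetical variable names, not the study’s actual analysis: each subject contributes a conviction score and a proportion of belief-consistent percepts, and we ask whether the two go together.

```python
# Toy sketch with simulated data -- not the authors' analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 30

# Hypothetical questionnaire scores and per-subject proportions of
# belief-consistent percepts during the ambiguous condition.
conviction = rng.uniform(0, 10, n_subjects)
bias = 0.5 + 0.02 * conviction + rng.normal(0, 0.05, n_subjects)

# A rank correlation asks: do more delusion-prone subjects show a larger
# belief-consistent perceptual bias?
rho, p = stats.spearmanr(conviction, bias)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```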

But there’s a hitch. What if subjects were reporting the motion bias because they thought that was what they were supposed to see and not because they actually saw it? To answer this question, the scientists recruited a new batch of participants and ran the experiment again while scanning their brains with fMRI.

Since the subjects’ task hinged on motion perception, Sterzer and colleagues first looked at the activity in a brain area called MT that processes visual motion. By analyzing the patterns of fMRI activity in this area, the scientists confirmed that subjects were accurately reporting the motion they perceived. That may sound far-fetched, but this kind of ‘mind reading’ with fMRI has been done quite successfully for basic visual properties like motion.
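If you are wondering how such ‘mind reading’ typically works, the common approach is multivariate pattern analysis: train a classifier on the pattern of activity across MT voxels and test whether it predicts the reported direction of motion on held-out trials. Below is a schematic sketch with simulated data and hypothetical variable names, not the study’s actual pipeline.

```python
# Schematic of pattern-based decoding of motion direction. Simulated data;
# not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 200

labels = rng.integers(0, 2, n_trials)             # 0 = leftward, 1 = rightward percept
patterns = rng.normal(size=(n_trials, n_voxels))  # simulated MT voxel activity
patterns[labels == 1, :20] += 0.5                 # a weak direction-selective signal

# If cross-validated accuracy is above chance (0.5), the voxel patterns
# carry information about the perceived direction of motion.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```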

The group also studied activity throughout the brain while their glasses-wearing subjects learned the false belief (unambiguous condition) and allowed the false belief to more or less affect their perception (ambiguous condition). They found that belief-based perceptual bias correlated with activity in the left orbitofrontal cortex, a region just behind the eyes that is involved in decision-making and expectation. In essence, subjects with more activity in this region during both conditions tended to also report lopsided spin directions that confirmed their expectations during the ambiguous condition. And here’s the cherry on top: subjects with higher delusional conviction scores appeared to have greater communication between left orbitofrontal cortex and motion-processing area MT during the ambiguous visual condition. Although fMRI can’t directly measure communication between areas and can’t tell us the direction of communication, this pattern suggests that the left orbitofrontal cortex may be directly responsible for biasing motion perception in delusion-prone subjects.
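As an aside, “communication between areas” in fMRI studies usually means functional connectivity: the correlation between two regions’ activity time courses. Here is a bare-bones illustration with simulated signals and made-up variable names, not the study’s method; it also shows why such a measure says nothing about the direction of influence.

```python
# Crude functional-connectivity illustration with simulated time courses.
# A correlation between two regions' signals implies coupling, not causation.
import numpy as np

rng = np.random.default_rng(3)
n_timepoints = 240

shared = rng.normal(size=n_timepoints)                       # a common fluctuation
ofc_ts = shared + rng.normal(scale=1.0, size=n_timepoints)   # left OFC ROI average
mt_ts = shared + rng.normal(scale=1.0, size=n_timepoints)    # area MT ROI average

# Higher correlation = stronger functional coupling between the two regions.
connectivity = np.corrcoef(ofc_ts, mt_ts)[0, 1]
print(f"OFC-MT functional connectivity (r): {connectivity:.2f}")
```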

All told, the results of the experiment seem to tell a neat story that fits the authors’ model about delusions. Yet there are a couple of caveats worth mentioning. First, the key finding of their study – that a person’s delusional conviction score correlates with his or her belief-based motion perception bias – is built upon a quirky and unnatural aspect of human vision that may or may not reflect more typical sensory processes. Second, it’s hard to say how clinically relevant the results are. No one knows for certain if delusions arise by the same neural mechanisms in the general population as they do in patients with illnesses like schizophrenia. It has been argued that they probably do because the same risk factors pop up for patients as for non-psychotic people with delusions: unemployment, social difficulties, urban surroundings, mood disturbances and drug or alcohol abuse. Then again, this group is probably also at the highest risk for getting hit by a bus, dying from a curable disease, or suffering any number of misfortunes that disproportionately affect people in vulnerable circumstances. So the jury is still out on the clinical applicability of these results.

Despite the study’s limitations, it was brilliantly designed and tells a compelling tale about how the brain conspires to manipulate perception based on beliefs. It also implicates a culprit in this neural conspiracy. Dare I say ringleader? Mastermind? Somebody cue the close up of orbitofrontal cortex cackling and stroking a cat.

_____

Photo credit: Daniel Horacio Agostini (dhammza) on Flickr, used through Creative Commons license

Schmack K, Gòmez-Carrillo de Castro A, Rothkirch M, Sekutowicz M, Rössler H, Haynes JD, Heinz A, Petrovic P, & Sterzer P (2013). Delusions and the role of beliefs in perceptual inference. The Journal of Neuroscience, 33(34), 13701-13712. PMID: 23966692

Delusions: Making Sense of Mistaken Senses


For a common affliction that strikes people of every culture and walk of life, schizophrenia has remained something of an enigma. Scientists talk about dopamine and glutamate, nicotinic receptors and hippocampal atrophy, but they’ve made little progress in explaining psychosis as it unfolds on the level of thoughts, beliefs, and experiences. Approximately one percent of the world’s population suffers from schizophrenia. Add to that the comparable numbers of people who suffer from affective psychoses (certain types of bipolar disorder and depression) or psychosis from neurodegenerative disorders like Alzheimer’s disease. All told, upwards of 3% of the population have known psychosis first-hand. These individuals have experienced how it transformed their sensations, emotions, and beliefs. Why hasn’t science made more progress explaining this level of the illness? What have those slouches at the National Institute of Mental Health been up to?

There are several reasons why psychosis has proved a tough nut to crack. First and foremost, neuroscience is still struggling to understand the biology of complex phenomena like thoughts and memories in the healthy brain. Add to that the incredible diversity of psychosis: how one psychotic patient might be silent and unresponsive while another is excitable and talking up a storm. Finally, a host of confounding factors plague most studies of psychosis. Let’s say a scientist discovers that a particular brain area tends to be smaller in patients with schizophrenia than healthy controls. The difference might have played a role in causing the illness in these patients, it might be a direct result of the illness, or it might be the result of anti-psychotic medications, chronic stress, substance abuse, poor nutrition, or other factors that disproportionately affect patients.

So what’s a well-meaning neuroscientist to do? One intriguing approach is to study psychosis in healthy people. They don’t have the litany of confounding experiences and exposures that make patients such problematic subjects. Yet at first glance, the approach seems to have a fatal flaw. How can you study psychosis in people who don’t have it? It sounds as crazy as studying malaria in someone who’s never had the bug.

In fact, this approach is possible because schizophrenia is a very different illness from malaria or HIV. Unlike communicable diseases, it is a developmental illness triggered by both genetic and environmental factors. These factors affect us all to varying degrees and cause all of us – clinically psychotic or not – to land somewhere on a spectrum of psychotic traits. Just as people who don’t suffer from anxiety disorders can still differ in their tendency to be anxious, nonpsychotic individuals can differ in their tendency to develop delusions or have perceptual disturbances. One review estimates that 1 to 3% of nonpsychotic people harbor major delusional beliefs, while another 5 to 6% have less severe delusions. An additional 10 to 15% of the general population may experience milder delusional thoughts on a regular basis.

Delusions are a common symptom of schizophrenia and were once thought to reflect the poor reasoning abilities of a broken brain. More recently, a growing number of physicians and scientists have opted for a different explanation. According to this model, patients first experience the surprising and mysterious perceptual disturbances that result from their illness. These could be full-blown hallucinations or they could be subtler abnormalities, like the inability to ignore a persistent noise. Patients then adopt delusions in a natural (if misguided) attempt to explain their odd experiences.

An intriguing study from the early 1960s illustrates how rapidly delusions can develop in healthy subjects when expectations and perceptions inexplicably conflict. The study, run on twenty college students at the University of Copenhagen, involved a version of the trick now known as the rubber hand illusion. Each subject was instructed to trace a straight line while his or her hand was inside a box with a secret mirror. For several trials, the subject watched his or her own hand trace the line correctly. Then the experimenters surreptitiously changed the mirror position so that the subject was now watching someone else’s hand trace the straight line – until the sham hand unexpectedly veered off to the right! All of the subjects experienced the visible (sham) hand as their own and felt that an involuntary movement had sent it off course. After several trials with this misbehaving hand, the subjects offered explanations for the deviation. Some chalked it up to their own fatigue or inattention while others came up with wilder, tech-based explanations:

 . . . five subjects described that they felt something strange and queer outside themselves, which pressed their hand to the right or resisted their free mobility. They suggested that ‘magnets’, ‘unidentified forces’, ‘invisible traces under the paper’, or the like, could be the cause.

In other words, delusions may be a normal reaction to the unexpected and inexplicable. Under strange enough circumstances, anyone might develop them – but some of us are more likely to than others.

My next post will describe a clever experiment that planted a delusion-like belief in the heads of healthy subjects and used trickery and fMRI to see how it influenced some more than others. So stay tuned. In the meantime, you may want to ask yourself which members of your family and friends are prone to delusional thinking. Or ask yourself honestly: could it be you?

_______

Photo credit: MiniTar on Flickr, available through Creative Commons

Modernity, Madness, and the History of Neuroscience


I recently read a wonderful piece in Aeon Magazine about how technology shapes psychotic delusions. As the author, Mike Jay, explains:

Persecutory delusions, for example, can be found throughout history and across cultures; but within this category a desert nomad is more likely to believe that he is being buried alive in sand by a djinn, and an urban American that he has been implanted with a microchip and is being monitored by the CIA.

While delusional people of the past may have fretted over spirits, witches, demons and ghouls, today they often worry about wireless signals controlling their minds or hidden cameras recording their lives for a reality TV show. Indeed, reality TV is ubiquitous in our culture and experiments in remote mind-control (albeit on a limited scale) have been popping up recently in the news. As psychiatrist Joel Gold of NYU and philosopher Ian Gold of McGill University wrote in 2012: “For an illness that is often characterized as a break with reality, psychosis keeps remarkably up to date.”

Whatever the time or the place, new technologies are pervasive and salient. They are on the tips of our tongues and, eventually, at the tips of our fingers. Psychotic or not, we are all captivated by technological advances. They provide us with new analogies and new ways of explaining the all-but-unexplainable. And where else do we attempt to explain the mysteries of the world, if not through science?

As I read Jay’s piece on psychosis, it struck me that science has historically had the same habit of co-opting modern technologies for explanatory purposes. In the case of neuroscience, scientists and physicians across cultures and ages have invoked the innovations of their day to explain the mind’s mysteries. For instance, the science of antiquity was rooted in the physical properties of matter and the mechanical interactions between them. Around the 7th century BC, empires began constructing great aqueducts to bring water to their growing cities. The great engineering challenge of the day was to control and guide the flow of water across great distances. It was in this scientific milieu that the ancient Greeks devised a model for the workings of the mind. They believed that a person’s thoughts, feelings, intellect and soul were physical stuff: specifically, an invisible, weightless fluid called psychic pneuma. Around 200 AD, a physician and scientist of the Roman Empire (known for its masterful aqueducts) would revise and clarify the theory. That physician, Galen, believed that pneuma fills the brain cavities called ventricles and circulates through white matter pathways in the brain and nerves in the body just as water flows through a tube. As psychic pneuma traveled throughout the body, it carried sensation and movement to the extremities. Although the idea may sound farfetched to us today, this model of the brain persisted for more than a millennium and influenced Renaissance thinkers including Descartes.

By the 18th century, however, the science world was abuzz with two strange new forces: electricity and magnetism. At the same time, physicians and anatomists began to think of the brain itself as the stuff that gives rise to thought and feeling, rather than a maze of vats and tunnels that move fluid around. In the 1790s, Luigi Galvani’s experiments zapping frog legs showed that nerves communicate with muscles using electricity. So in the 19th century, just as inventors were harnessing electricity to run motors and light up the darkness, scientists reconceived the brain as an organ of electricity. It was a wise innovation and one supported by experiments, but also driven by the technical advances of the day.

Science was revolutionized once again with the advent of modern computers in the 1940s and ’50s. The new technology soon sparked a surge of research and theories that used the computer as an analogy for the brain. Psychologists began to treat mental events like computer processes, which can be broken up and analyzed as a set of discrete steps. They equated brain areas to processors and neural activity in these areas to the computations carried out by computers. Just as computers rule our modern technological world, this way of thinking about the brain still profoundly influences how neuroscience and psychology research is carried out and interpreted. Today, some labs cut out the middleman (the brain) entirely. Results from computer models of the brain are regularly published in neuroscience journals, sometimes without any data from an actual physical brain.

I’m sure there are other examples from the history of neuroscience in general and certainly from the history of science as a whole. Please comment and share any other ways that technology has shaped the models, themes, and analogies of science!

Additional sources:

Crivellato E & Ribatti D (2007) Soul, mind, brain: Greek philosophy and the birth of neuroscience. Brain Research Bulletin 71:327-336.

Karenberg A (2009) Cerebral Localization in the Eighteenth Century – An Overview. Journal of the History of the Neurosciences, 18:248-253.

_________

Photo Credit: dominiqueb on Flickr, available through Creative Commons

We Got the Beat


It is both amusing and enlightening to hear my 21-month-old daughter sing the alphabet song. The song is her favorite, though she is years from grasping how symbols represent sound, not to mention the concept of alphabetical order. Still, if you start singing the song she will chime in. Before you think that’s impressive, keep in mind that her version of the song is more or less this: “CD . . . G . . . I . . . No P . . . S . . . V . . . Dub X . . . Z.”

Her alphabet song adds up to little more than a Scrabble hand, yet it is a surprising feat of memory all the same. My daughter doesn’t know her last name, can’t read or write, and has been known to mistake stickers for food. It turns out that her memory for the alphabet has far less to do with letters than lyrics. From Wheels on the Bus to Don’t Stop Believin’, she sings along to all of her favorite songs, piping up with every word and vowel she remembers. Her performance has nothing to do with comprehension; she has never seen or heard about a locker, yet she sings the word at just the right time in her rendition of the Glee song Loser like Me. (Go ahead and judge me. I judge myself.)

My daughter’s knack for learning lyrics is not unique to her or to toddlers in general. Adults are also far better at remembering words set to song than other strings of verbal material. That’s why college students have used music to memorize subjects from human anatomy to U.S. presidents. It’s why advertisers inundate you with catchy snippets of song. Who can forget a good jingle? To this day, I remember the phone number for a carpet company I saw advertised decades ago.

But what is it about music that helps us remember? And how does it work?

It turns out that rhythm, rather than melody, is the crucial component to remembering lyrics. In a 2008 study, subjects remembered unfamiliar lyrics far better if they heard them sung to a familiar melody (Scarborough Fair) than if they heard them sung to an unfamiliar song or merely spoken without music. But they remembered the lyrics better still if they heard the lines spoken to a rhythmic drummed arrangement of Scarborough Fair. Even an unfamiliar drummed rhythm boosted later memory for the words. But why should any of these conditions improve memory? According to the prevailing theory, lyrics have a structural framework that helps you learn and recall them. They are set to a particular melody through a process called textsetting that matches the natural beat and meter of the music and words. Composers, lyricists, and musicians do this by aligning the stressed syllables of words with strong beats in the music as much as possible. Music is also made up of musical phrases; lyrics naturally break down into lines, or “chunks,” based on these phrase boundaries. And just in case you missed those boundaries, lyricists often emphasize the ends of these lines with a rhyming scheme.

Rhythm, along with rhyme and chunking, may be enough to explain the human knack for learning lyrics. Let’s say you begin singing that old classic, Twinkle, Twinkle, Little Star. You make it to “How I wonder,” but what’s next? Since the meter of the song is BUM bah BUM bah and you ended on bah, you know that the next words must have the stress pattern BUM bah. This helps limit your mental search for these words. (Oh yeah: WHAT you!) The final word in the line is a breeze, as it has to rhyme with “star.” And there you have it. Rhythm, along with rhyme and chunking, provides a sturdy scaffold for your memory of words.
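If you like, you can think of the textsetting constraint as a tiny matching problem. Here is a toy sketch that scores how well a line’s stresses line up with a strong-weak (BUM bah) grid; the stress patterns are hand-coded for illustration rather than taken from any real pronunciation dictionary.

```python
# Toy illustration of textsetting: how well do a line's stressed syllables
# ("1") fall on the strong beats of a binary (BUM bah) meter?

def alignment_score(stress_pattern: str) -> float:
    """Fraction of syllables whose stress matches a strong-weak grid."""
    grid = "10" * ((len(stress_pattern) + 1) // 2)  # strong, weak, strong, weak...
    matches = sum(s == g for s, g in zip(stress_pattern, grid))
    return matches / len(stress_pattern)

# "TWIN-kle TWIN-kle LIT-tle STAR": stresses land squarely on strong beats.
print(alignment_score("1010101"))  # 1.0, a perfect fit to the meter
# A line whose stresses fight the meter scores much lower.
print(alignment_score("0110010"))
```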

For a more personal example of rhythm and memory, consider your own experience when you remember the alphabet. It’s worth noting that the alphabet song is set to a familiar melody (the same as Twinkle, Twinkle, Little Star and Baa, Baa, Black Sheep), a fact that surely helped you learn the alphabet lyrics in the first place. Now that you know them, ask yourself this: which comes first, the letter O or L? If you’re like me, you have to mentally run through the first half of the song to figure it out. Yet this mental rendition lacks a melody. Instead, you list the letters according to the song’s rhythm. Your list probably pauses after G and again after P and V, which each mark the end of a line in the song. The letters L, M, N, and O each last half as long as the average letter, while S sprawls out across twice the average. Centuries ago, a musician managed to squeeze the letters of the alphabet into the rhythm of an old French folk song. Today, the idiosyncratic pairing he devised remains alive – not just in kindergarten classrooms, but in the recesses of your brain. Its longevity, across generations and across the lifespan, illustrates how word and beat can be entwined in human memory.

While a rhythm-and-rhyme framework could explain the human aptitude for learning lyrics, there may be more to the story. As a 2011 study published in the Journal of Neuroscience shows, beat and meter have special representations in the brain. Participants in the study listened to a pure tone with brief beats at a rate of 144 per minute, or 2.4 Hz. Some of the participants were told to imagine one of two meters on top of the beat: either a binary meter (a march: BUM bah BUM bah BUM) or a ternary meter (a waltz: BUM bah bah BUM bah bah BUM). These meters divided the interval between beats into two or three evenly spaced intervals, respectively. A third group performed a control task that ensured subjects were paying attention to the sound without imagining a meter. All the while, the scientists recorded traces of neural activity that could be detected at the scalp with EEG.

The results were remarkable. Brain waves synchronized with the audible beat and with the imagined meters. This figure from the paper shows the combined and averaged data from the three experimental groups. The subjects in the control group (blue) heard the beat without imagining a meter; their EEGs showed strong brain waves at the frequency of the beat, 2.4 Hz. Both the march (red) and waltz (green) groups showed this 2.4 Hz rhythm plus increased brain waves at the frequency of their imagined meters (1.2 Hz and 0.8 Hz, respectively). The waltz group also showed another small peak of waves at 1.6 Hz, or twice the frequency of their imagined meter, a curiosity that may have as much to do with the mechanics of brain waves as the perception of meter and beat.

[Figure from Nozaradan et al. (2011): averaged EEG responses for the control, march, and waltz groups]

In essence, these results show that beat and meter have a profound effect on the brain. They alter the waves of activity that are constantly circulating through your brain, but more remarkably, they do so in a way that syncs activity with sound (be it real or imagined). This phenomenon, called neural entrainment, may help you perceive rhythm by making you more receptive to sounds at the very moment when the next beat is due. It can also be a powerful tool for learning and memory. So far, only one group has tried to link brain waves to the benefits of learning words with music. Their papers have been flawed and inconclusive. Hopefully some intrepid scientist will forge ahead with this line of research. Until then, stay tuned. (Or should I say metered?)
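To see how frequency tagging pulls those peaks out of the data, here is a minimal simulation (illustrative only, with none of the preprocessing a real EEG analysis requires): a noisy signal that follows a 2.4 Hz beat plus a weaker imagined 1.2 Hz meter yields sharp spectral peaks at exactly those frequencies.

```python
# Minimal frequency-tagging sketch with a simulated EEG-like signal.
import numpy as np

fs = 256.0                      # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)    # 60 seconds of signal

beat = np.sin(2 * np.pi * 2.4 * t)         # response at the beat frequency
meter = 0.5 * np.sin(2 * np.pi * 1.2 * t)  # weaker response at the imagined meter
noise = np.random.default_rng(4).normal(scale=2.0, size=t.size)
eeg = beat + meter + noise

# Amplitude spectrum: entrainment shows up as narrow peaks at 2.4 and 1.2 Hz.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
for f in (1.2, 2.4):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:.1f} Hz amplitude: {spectrum[idx]:.3f}")
```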

Whatever the ultimate explanation, the cozy relationship between rhythm and memory may have left its mark on our cultural inheritance. Poetry predated the written word and once served the purpose of conveying epic tales across distances and generations. Singer-poets had to memorize a harrowing amount of verbal material. (Just imagine: the Iliad and Odyssey began as oral recitations and were only written down centuries later.) Scholars think poetic conventions like meter and rhyme arose out of necessity; how else could a person remember hours of text? The conventions persisted in poetry, song, and theater even after the written word became more widespread. No one can say why. But whatever the reason, Shakespeare’s actors would have learned their lines more quickly because of his clever rhymes and iambic pentameter. Mozart’s opera stars would have learned their libretti more easily because of his remarkable music. And centuries later you can sing along to Cyndi Lauper or locate Fifty Shades of Grey in the library stacks – all thanks to the rhythms of music and speech.

__________

Photo credits: David Martyn Hunt on Flickr and Nozaradan, Peretz, Missal & Mouraux via The Journal of Neuroscience

Nozaradan, S., Peretz, I., Missal, M., & Mouraux, A. (2011). Tagging the neuronal entrainment to beat and meter. The Journal of Neuroscience. DOI: 10.1523/JNEUROSCI.0411-11.2011
