We Got the Beat

It is both amusing and enlightening to hear my 21-month-old daughter sing the alphabet song. The song is her favorite, though she is years from grasping how symbols represent sound, not to mention the concept of alphabetical order. Still, if you start singing the song she will chime in. Before you think that’s impressive, keep in mind that her version of the song is more or less this: “CD . . . G . . . I . . . No P . . . S . . . V . . . Dub X . . . Z.”

Her alphabet song adds up to little more than a Scrabble hand, yet it is a surprising feat of memory all the same. My daughter doesn’t know her last name, can’t read or write, and has been known to mistake stickers for food. It turns out that her memory for the alphabet has far less to do with letters than lyrics. From Wheels on the Bus to Don’t Stop Believin’, she sings along to all of her favorite songs, piping up with every word and vowel she remembers. Her performance has nothing to do with comprehension; she has never seen or heard about a locker, yet she sings the word at just the right time in her rendition of the Glee song Loser like Me. (Go ahead and judge me. I judge myself.)

My daughter’s knack for learning lyrics is not unique to her or to toddlers in general. Adults are also far better at remembering words set to song than other strings of verbal material. That’s why college students have used music to memorize subjects from human anatomy to U.S. presidents. It’s why advertisers inundate you with catchy snippets of song. Who can forget a good jingle? To this day, I remember the phone number for a carpet company I saw advertised decades ago.

But what is it about music that helps us remember? And how does it work?

It turns out that rhythm, rather than melody, is the crucial component in remembering lyrics. In a 2008 study, subjects remembered unfamiliar lyrics far better if they heard them sung to a familiar melody (Scarborough Fair) than if they heard them sung to an unfamiliar song or merely spoken without music. But they remembered the lyrics better still if they heard the lines spoken to a rhythmic drummed arrangement of Scarborough Fair. Even an unfamiliar drummed rhythm boosted later memory for the words. But why should any of these conditions improve memory? According to the prevailing theory, lyrics have a structural framework that helps you learn and recall them. They are set to a particular melody through a process called textsetting that matches the natural beat and meter of the music and words. Composers, lyricists, and musicians do this by aligning the stressed syllables of words with strong beats in the music as much as possible. Music is also made up of musical phrases; lyrics naturally break down into lines, or “chunks,” based on these phrase boundaries. And just in case you missed those boundaries, lyricists often emphasize the ends of these lines with a rhyming scheme.

Rhythm, along with rhyme and chunking, may be enough to explain the human knack for learning lyrics. Let’s say you begin singing that old classic, Twinkle, Twinkle, Little Star. You make it to “How I wonder,” but what’s next? Since the meter of the song is BUM bah BUM bah and you ended on bah, you know that the next words must have the stress pattern BUM bah. This helps limit your mental search for these words. (Oh yeah: WHAT you!) The final word in the line is a breeze, as it has to rhyme with “star.” And there you have it. Together, rhythm, rhyme, and chunking provide a sturdy scaffold for your memory of words.
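For the programmers in the audience, that pruning step can be caricatured in a few lines of Python. Everything here – the candidate words and the simplified stress codes – is invented for illustration; it is a sketch of the idea, not a model from the research.

```python
# Toy model of the retrieval constraint described above: the meter tells
# you the stress pattern of the missing words, which prunes the search.
# All words and stress codes are made up (S = stressed, w = unstressed).
candidates = {
    "what you": "Sw",
    "are": "S",
    "little": "Sw",
    "believe": "wS",
}
needed = "Sw"  # the line ended on "bah", so the next slot goes BUM bah
matches = [word for word, stress in candidates.items() if stress == needed]
print(matches)  # -> ['what you', 'little']
```

A rhyme constraint would filter the surviving candidates again, which is why the last word of a line is the easiest to recall.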

For a more personal example of rhythm and memory, consider your own experience when you remember the alphabet. It’s worth noting that the alphabet song is set to a familiar melody (the same as Twinkle, Twinkle, Little Star and Baa, Baa, Black Sheep), a fact that surely helped you learn the alphabet lyrics in the first place. Now that you know them, ask yourself this: which comes first, the letter O or L? If you’re like me, you have to mentally run through the first half of the song to figure it out. Yet this mental rendition lacks a melody. Instead, you list the letters according to the song’s rhythm. Your list probably pauses after G and again after P and V, which each mark the end of a line in the song. The letters L, M, N, and O each last half as long as the average letter, while S sprawls out across twice the average. Centuries ago, a musician managed to squeeze the letters of the alphabet into the rhythm of an old French folk song. Today, the idiosyncratic pairing he devised remains alive – not just in kindergarten classrooms, but in the recesses of your brain. Its longevity, across generations and across the lifespan, illustrates how word and beat can be entwined in human memory.

While a rhythm-and-rhyme framework could explain the human aptitude for learning lyrics, there may be more to the story. As a 2011 study published in the Journal of Neuroscience shows, beat and meter have special representations in the brain. Participants in the study listened to a pure tone with brief beats at a rate of 144 per minute, or 2.4 Hz. Some of the participants were told to imagine one of two meters on top of the beat: either a binary meter (a march: BUM bah BUM bah BUM) or a ternary meter (a waltz: BUM bah bah BUM bah bah BUM). These meters divided the interval between beats into two or three evenly spaced intervals, respectively. A third group performed a control task that ensured subjects were paying attention to the sound without imagining a meter. All the while, the scientists recorded traces of neural activity that could be detected at the scalp with EEG.

The results were remarkable. Brain waves synchronized with the audible beat and with the imagined meters. This figure from the paper shows the combined and averaged data from the three experimental groups. The subjects in the control group (blue) heard the beat without imagining a meter; their EEGs showed strong brain waves at the frequency of the beat, 2.4 Hz. Both the march (red) and waltz (green) groups showed this 2.4 Hz rhythm plus increased brain waves at the frequency of their imagined meters (1.2 Hz and 0.8 Hz, respectively). The waltz group also showed another small peak of waves at 1.6 Hz, or twice the frequency of their imagined meter, a curiosity that may have as much to do with the mechanics of brain waves as the perception of meter and beat.
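If you're curious how this kind of "frequency tagging" works in principle, here is a toy simulation: a signal containing a response at the 2.4 Hz beat plus a weaker response at the 1.2 Hz imagined march shows spectral peaks at exactly those rates. The sampling rate, duration, and amplitudes are invented, and this is only a sketch of the analysis idea – not the authors' actual pipeline.

```python
import numpy as np

# Simulate 40 s of a "recording" that responds at the 2.4 Hz beat and,
# more weakly, at the 1.2 Hz imagined march. (Invented numbers.)
fs = 250                       # hypothetical sampling rate in Hz
t = np.arange(40 * fs) / fs    # 40 seconds of samples
signal = (np.sin(2 * np.pi * 2.4 * t)            # response at the beat
          + 0.5 * np.sin(2 * np.pi * 1.2 * t))   # response at the meter

# Frequency tagging: look for peaks in the signal's spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
print([round(float(f), 1) for f in top_two])  # -> [1.2, 2.4]
```

The long recording window is what gives the spectrum enough frequency resolution to separate peaks that sit less than 1 Hz apart.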

In essence, these results show that beat and meter have a profound effect on the brain. They alter the waves of activity that are constantly circulating through your brain, but more remarkably, they do so in a way that syncs activity with sound (be it real or imagined). This phenomenon, called neural entrainment, may help you perceive rhythm by making you more receptive to sounds at the very moment when the next beat is due. It can also be a powerful tool for learning and memory. So far, only one group has tried to link brain waves to the benefits of learning words with music. Their papers have been flawed and inconclusive. Hopefully some intrepid scientist will forge ahead with this line of research. Until then, stay tuned. (Or should I say metered?)

Whatever the ultimate explanation, the cozy relationship between rhythm and memory may have left its mark on our cultural inheritance. Poetry predated the written word and once served the purpose of conveying epic tales across distances and generations. Singer-poets had to memorize a harrowing amount of verbal material. (Just imagine: the Iliad and Odyssey began as oral recitations and were only written down centuries later.) Scholars think poetic conventions like meter and rhyme arose out of necessity; how else could a person remember hours of text? The conventions persisted in poetry, song, and theater even after the written word became more widespread. No one can say why. But whatever the reason, Shakespeare’s actors would have learned their lines more quickly because of his clever rhymes and iambic pentameter. Mozart’s opera stars would have learned their libretti more easily because of his remarkable music. And centuries later you can sing along to Cyndi Lauper or locate Fifty Shades of Grey in the library stacks – all thanks to the rhythms of music and speech.

__________

Photo credits: David Martyn Hunt on Flickr and Nozaradan, Peretz, Missal & Mouraux via The Journal of Neuroscience

Nozaradan, S., Peretz, I., Missal, M., & Mouraux, A. (2011). Tagging the neuronal entrainment to beat and meter. The Journal of Neuroscience. DOI: 10.1523/JNEUROSCI.0411-11.2011

Near-Death Experiment

If you own a TV, radio, or computer, you’ve probably heard about the recent neuroscience experiment that studied after-death brain activity in rats. Perhaps you’ve seen it under titles like: Near-death experiences are ‘electrical surge in dying brain’ or Near-death experiences exposed: Surge of brain activity after the heart stops may trigger paranormal visions. You may have heard some jargon about brainwaves and frequency coupling or some such. What does it mean? Is it time to chuck your rosary, or at least your copy of Proof of Heaven? (The answer to the latter, in case you’re wondering, is yes.)

The article that caused such a stir was penned by researchers at the University of Michigan and published in the scientific journal PNAS. The experiment was simple and so obvious that I immediately wondered why no one had done it before. The scientists implanted six electrodes in the surface of the rat’s brain. They recorded from the electrodes while the rat was awake and then anesthetized. Finally, they injected a solution into the rat’s heart to make it stop beating and recorded the activity in the rat’s brain while it died. None of these steps is unique. Neuroscientists often place electrodes in the brains of living rats, and lab rats are certainly anesthetized and sacrificed on a daily basis. The crucial change these scientists made was to keep recording after the animal’s death.

What happened once its heart stopped? A lot – probably more than anyone would have expected. In the first 30 seconds, the researchers observed rapid and coordinated neural activity in the rat’s brain. Unlike under anesthesia, when the rat’s brain was quieter than its wakeful norm, the dying brain was as active as – and, by some measures, more active than – it was when fully awake and alive. We’re not talking about zombie rats here – this activity faded and disappeared beyond the 30-second window after cardiac arrest. Still, something dramatic and consistent happened in those dying moments. The brain activity was essentially the same across all nine rats that died from cardiac arrest and eight other rats that the scientists sacrificed using carbon dioxide inhalation. The results were no fluke.

Of course, these findings (and the headlines touting them in the news) raise the question: is this activity the neural basis for near-death experiences? The answer, of course, is that we don’t know. We obviously can’t ask the rats what they experienced, if they experienced anything at all. Still, the activity during the 30-second window wasn’t drastically different from the brain’s wakeful activity, at least according to some of their measures. It’s certainly possible, maybe even probable, that the rat experienced something during this time. That fact alone is intriguing. To say more, we’ll need more grants, more studies, and more dead rats.

For the time being, I’m sure people will spin these results according to their pre-existing beliefs. Some will probably say that the brain activity at death is the physiological echo of God coaxing the soul from the body. And who am I to say it ain’t so? But there are certainly other explanations. Neural rhythms arise naturally from the wiring of the brain. Neurons form an incredible number of circuits, or wiring loops, that reverberate. Each neuron is a complex little creature in its own right: electrically charged, tiny, tentacled, and bustling with messenger molecules, neurotransmitters, and ions. When neurons are deprived of oxygen and energy, their electrical charges change drastically, which can cause them to fire errant signals at each other. Without input from the outside world, these errant signals may harmonize in ways that reflect the internal wiring of the system. It’s a little like playing a trumpet. When you blow into the trumpet, your breath is a chaotic rush of air, yet it emerges as a clear and orderly tone. An organized system can make order out of chaos. The same might be said of your brain. And if it turns out that this type of coordinated brain activity actually does cause a special experience when you die, consider it an accidental symphony that plays you one last song before you go.

______

Photo credit: Paul Stocker on Flickr, used via Creative Commons license

Borjigin J, Lee U, Liu T, Pal D, Huff S, Klarr D, Sloboda J, Hernandez J, Wang MM, & Mashour GA (2013). Surge of neurophysiological coherence and connectivity in the dying brain. Proceedings of the National Academy of Sciences of the United States of America PMID: 23940340

Mother’s Ruin, Moralists, and the Circuitous Path of Science

Update: Since posting this piece, I’ve come across a paper that questions ancient knowledge about the effects of prenatal alcohol exposure. In particular, the author makes a compelling argument that the biblical story mentioned below has nothing to do with the safety of drinking wine while pregnant. Another paper (sorry, paywall) suggests that the “rhetoric of rediscovery” about the potential harm of alcohol during pregnancy was part of a coordinated attempt by “moral entrepreneurs” to sell a moralist concept to the American public in the late 1970s. All of which goes to show: when science involves controversial topics, its tortuous path just keeps on twisting.

If you ask someone to draw you a roadmap of science, you’re likely to get something linear and orderly: a one-way highway, perhaps, with new ideas and discoveries converging upon it like so many on-ramps. We like to think of science as something that slowly and deliberately moves in the right direction. It doesn’t seem like a proper place for off-ramps, not to mention detours, dead-ends, or roundabouts.

In reality, science is messy and more than a little fickle. As I mentioned in the last post, research is not immune to fads. Ideas fall in and out of fashion based on the political, financial, and social winds of the time. I’m not just talking about wacky ideas either. Even the idea that drinking during pregnancy can harm a developing fetus has had its share of rises and falls.

The belief that drinking while pregnant is harmful has been around since antiquity, popping up among the Ancient Greeks and even appearing in the Old Testament when an angel instructs Samson’s mother to abstain from alcohol while pregnant. Yet the belief was far from universal across different epochs and different peoples. In fact, it took a special kind of disaster for England and, in turn, America to rediscover this idea in the 18th century. The disaster was an epidemic . . . of people drunk on gin.

By the close of the 17th century, bickering between England and France caused the British to restrict the import of French brandy and encourage the local production of gin. Soon gin was cheap and freely available to even the poor and working classes. The Gin Epidemic was underway. Rampant drunkenness became a fact of life in England by 1720 and would persist for several decades after. During this time, gin was particularly popular among the ladies – a fact that earned it the nickname “Mother’s Ruin.”

Soon after the start of the Gin Epidemic, a new constellation of abnormalities became common in newborns. Physicians wondered if heavy prenatal exposure to alcohol disrupted fetal development. In 1726, England’s College of Physicians argued that gin was “a cause of weak, feeble and distempered children.” Other physicians noted the rise in miscarriages, stillbirths, and early infant mortality. And by the end of this gin-drenched era, Britain’s scientific community had little doubt that prenatal alcohol could irreversibly harm a developing fetus.

The notion eventually trickled across the Atlantic Ocean and took hold in America. By the early 19th century, American physicians like Benjamin Rush began to discourage the widespread use of alcohol-based treatments for morning sickness and other pregnancy-related ailments. By the middle of the century, research on the effects of prenatal alcohol exposure had become a talking point for the growing temperance movement. Medical temperance journals sprang up with names like Journal of Inebriety and Scientific Temperance Journal. Soon religious and moralistic figures were using the harmful effects of alcohol on fetal development to bolster their claims that all alcohol is evil and should be banned. They often couched the findings in inflammatory language, full of condemnations and reproach. In the end, their tactics worked. The 18th Amendment to the U.S. Constitution was ratified in 1919, outlawing the production, transportation, and sale of alcohol on American soil.

When the nation finally emerged from Prohibition more than thirteen years later, it had fundamentally changed. People were disillusioned with the temperance movement and wary of the moralistic rhetoric that had once seemed so persuasive. They discounted the old familiar lines from teetotal preachers – including those about the harms of drinking while pregnant. Scientists rejected studies published in medical temperance journals and began to deny that alcohol was harmful during pregnancy. In 1942, the prestigious Journal of the American Medical Association published a response to a reader’s question about drinking during pregnancy which said that even large amounts of alcohol had not been shown to be harmful to the developing human fetus. In 1948, an article in The Practitioner recommended that pregnant women drink alcohol with meals to aid digestion. Science was, in essence, back to square one yet again.

It wasn’t until 1973 that physicians rediscovered and named the constellation of features that characterize infants exposed to alcohol in the womb. The disease, fetal alcohol syndrome, is now an accepted medical phenomenon. Modern doctors and medical journals now caution women to avoid alcohol while pregnant. After a few political and religious detours, we’ve finally made it back to where we were in 1900. That’s the funny thing about science: it isn’t always fast or direct or immune to its cultural milieu. But if we all just have faith and keep driving, we’re bound to get there eventually. I’m almost sure of it.

______

Photo Credit: Gin Lane by William Hogarth 1751 (re-engraving by Samuel Davenport circa 1806). Image in public domain and obtained from Wikipedia.

Eyes Wide Shut

In the middle of the 20th century, experimental psychologists began to notice a strange interaction between human vision and time. If they showed people flashes of light close together in time, subjects experienced the flashes as if they all occurred simultaneously. When they asked people to detect faint images, the speed of their subjects’ responses waxed and waned according to a mysterious but predictable rhythm. Taken together, the results pointed to one conclusion: that human vision operates within a particular time window – about 100 milliseconds, or one-tenth of a second.

This discovery sparked a controversy about the nature of vision. Pretty much anyone with a pair of eyes will tell you that vision feels smooth and unbroken. But is it truly as continuous as it feels, or might it occur in discrete chunks of time? Could the cohesive experience of vision be nothing more than an illusion?

Enthusiasm for the idea of discrete visual processing faded over the years, although it was never disproven. Science is not immune to fads; ideas often fall in and out of favor. Besides, vision-in-chunks was a hard sell. It was counterintuitive and contrary to people’s subjective experience. Vision scientists set it aside and moved on to new questions and controversies instead.

The debate resurfaced in the last twenty years, sparked by the discovery of a new twist on an old optical illusion. Scientists have long known about the wagon wheel illusion, which makes it appear as if the wheels of moving cars (or wagons) in films are either turning in the wrong direction or not turning at all. The illusion is caused by a technical glitch: the interaction between the periodically rotating wheel and the frame rate of the movie. The camera doesn’t capture enough snapshots of the spinning wheel to pin down its direction and speed. But in 1996, scientists discovered that the illusion also occurred in the real world. When hubcaps, tires, and modified LPs turned at certain rates, their direction appeared to reverse. Scientists dug the idea of discrete vision out of a trunk in the attic, dusted it off, and tried it out to explain the effect. In essence, the visual system might have a frame rate of its own. Cross this frame rate with an object rotating at a certain frequency and you’re left seeing tires spin backwards. It seemed to make sense.
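The filmed version of the illusion is plain temporal aliasing, which you can sketch in a few lines: sample a wheel's rotation at a fixed frame rate and see what motion the frames imply. The numbers below are purely illustrative.

```python
# Temporal aliasing behind the filmed wagon-wheel illusion: the camera
# only records the wheel's angle once per frame, so fast rotations get
# misread as the nearest slow (or backward) motion. Illustrative numbers.

def apparent_rotation_hz(true_hz: float, frame_rate: float) -> float:
    """Rotation the frames imply, wrapped into [-0.5, 0.5) turn per frame."""
    per_frame = true_hz / frame_rate           # turns between frames
    wrapped = (per_frame + 0.5) % 1.0 - 0.5    # nearest-equivalent motion
    return wrapped * frame_rate

print(round(apparent_rotation_hz(23.0, 24.0), 3))  # -> -1.0 (spins backward)
print(round(apparent_rotation_hz(24.0, 24.0), 3))  # -> 0.0 (stands still)
```

Swap the camera's frame rate for a hypothetical frame rate in the visual system and you have the discrete-vision explanation of the real-world illusion.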

In a clever set of experiments, the neuroscientist and author David Eagleman (of Incognito and Sum fame) shot this explanation down. He and his colleague, Keith Kline, chalked the illusion up to tiring motion-processing cells instead. Still, the debate about the nature of vision was reignited. Several neuroscientists became intrigued with the notion of vision-in-chunks and began to think about it in relation to a particular type of brain rhythm that cycles at a rate of – you guessed it – about ten times per second.

In recent years, a slew of experiments have supported the idea that certain aspects of vision happen in discrete packets of time – and that these packets are roughly one-tenth of a second long. The brain rhythms that correspond to this timing – called alpha waves – have acted as the missing link. Brain rhythms essentially tamp down activity in a brain area at a regular interval, like a librarian who keeps shushing a crowd of noisy kids. Cells in a given part of the brain momentarily fall silent but, as kids will do, they start right up again once the shushing is done.

Work by Rufin VanRullen at the Université de Toulouse and, separately, by Kyle Mathewson at the University of Illinois shows how this periodic shushing can affect visual perception. For example, Mathewson and colleagues were able to predict whether a subject would detect a briefly flashed circle based on its timing relative to the alpha wave in that subject’s visual cortex. This and other studies like it demonstrate that alpha waves are not always helpful. If something appears at the wrong moment in your rhythm, you could be slower to see it or you might just miss it altogether. In other words, every tenth of a second you might be just a little bit blind.

If you’re a healthy skeptic, you may be wondering how well such experiments reflect vision in the real world. Unless your computer’s on the fritz, you probably don’t spend much time staring at circles on a screen. Does the 10-per-second frame rate apply when you’re looking at the complex objects and people that populate your everyday world?

Enter Frédéric Gosselin and colleagues from the Université de Montréal. Last month they published a simple study in the journal Cognition that tested the idea of discrete vision using pictures of human faces. They made the faces hard to see by bathing them in different amounts of visual ‘noise’ (like the static on a misbehaving television). Subjects had to identify each face as one of six that they had learned in advance. But while they were trying to identify each face, the amount of static on the face kept changing. In fact, Gosselin and colleagues were cycling the amount of static to see how its rate and phase (timing relative to the appearance of each new face) affected their subjects’ performance. They figured that if visual processing is discrete and varies with time, then subjects should perform best when their moments of best vision coincided with the moments of least static obscuring the face.

What did they find? People were best at identifying the faces when the static cycled at 10 or 15 times per second. Gosselin and colleagues suggest that the ideal rate may be somewhere between the two (a possibility that they can’t test after the fact). Their results imply that the visual alpha wave affects face recognition – a task that people do every day. But it may only affect it a little. The difference between the subjects’ best accuracy (when the static cycling was set just right) and their worst accuracy was only 7%. In the end, the alpha wave is one of many factors that determine perception. And even when these rhythms are shushing visual cortex, it’s not enough to shut down the entire area. Some troublemakers keep yapping right through it.
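To see why the rate and phase of the static should matter at all, here is a toy model (my own caricature, not the authors' analysis): treat visual sensitivity as a hypothetical 10 Hz sinusoid, treat image visibility – the inverse of the static – as a sinusoid at some cycling rate, and approximate performance as their overlap. The phase of the cycling changes predicted performance only when the static cycles at the visual rate.

```python
import numpy as np

# Caricature of the design: "performance" is the overlap between a 10 Hz
# sensitivity rhythm and the visibility of the face through the static.
# All numbers are invented for illustration.
fs = 1000
t = np.arange(fs) / fs                         # one second per trial
sensitivity = 1 + np.sin(2 * np.pi * 10 * t)   # hypothetical visual rhythm

def performance(static_hz: float, phase: float) -> float:
    visibility = 1 + np.sin(2 * np.pi * static_hz * t + phase)
    return float(np.mean(sensitivity * visibility))

phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
spreads = {}
for rate in (5, 10, 15):
    vals = [performance(rate, p) for p in phases]
    spreads[rate] = max(vals) - min(vals)   # how much phase matters
    print(rate, round(spreads[rate], 2))
```

In this caricature, the phase-dependent spread in performance is large only at the matched rate, which is the signature the experiment went looking for.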

When it comes to alpha waves and the nature of discrete visual processing, scientists have their work cut out for them. For example, while some studies found that perception was affected by an ongoing visual alpha wave, others found that visual events (like the appearance of a new image) triggered new alpha waves in visual cortex. In fact, brain rhythms are not by any means exclusive; different rhythms can be layered one upon the other within a brain area, making it harder to pull out the role of any one of them. For now it’s at least safe to say that visual processing is nowhere near as smooth and continuous as it appears. Your vision flickers and occasionally fails. It is as if your brain dims the lights: you have moments when you see less and miss more – moments that may happen tens of thousands of times each hour.

This fact raises a troubling question. Why would the brain have rhythms that interfere with perception? Paradoxically enough, discrete visual processing and alpha waves may actually give your visual perception its smooth, cohesive feel. In the last post I mentioned how you move your eyes about 2 or 3 times per second. Your visual system must somehow stitch together the information from these separate glimpses that are offset from each other both in time and space. Alpha waves allow visual information to echo in the brain. They may stabilize visual representations over time, allowing them to linger long enough for the brain, that master seamstress, to do her work.

_____

Photo credit: Tom Conger on Flickr with Creative Commons license

Blais C, Arguin M, & Gosselin F (2013). Human visual processing oscillates: Evidence from a classification image technique Cognition, 128 (3), 353-62 PMID: 23764998

Sight Unseen

Eyelids. They come in handy for sandstorms, eye shadow, and poolside naps. You don’t see much when they’re closed, but when they’re open you have an all-access pass to the visible world around you. Right? Well, not exactly. Here at Garden of the Mind, the next two posts are dedicated to the ways that you are blind – every day – and with your eyes wide open.

One of the ways you experience everyday blindness has to do with the movements of your eyes. If you stuck a camera in your retina and recorded the images that fall on your eye, the footage would be nauseating. Think The Blair Witch Project, only worse. That’s because you move your eyes about once every half a second – more often than your heart beats. You make these eye movements constantly, without intention or even awareness. Why? Because, thanks to the uneven layout of the eye and of the visual areas of the brain, your peripheral vision is abysmal. It’s true even if you have 20/20 vision. You don’t sense that you are legally blind in your peripheral vision because you compensate by moving your eyes from place to place. Like snapping a series of overlapping photographs to create a panoramic picture, you move your eyes to catch different parts of a scene and your brain stitches these ‘shots’ together.

As it turns out, the brain is a wonderful seamstress. All this glancing and stitching leaves us with a visual experience that feels cohesive and smooth – nothing like the Frankenstein creation it actually is. One reason this beautiful self-deception works is that we turn off much of our visual system every time we move our eyes. You can test this out by facing a mirror and moving your eyes quickly back and forth (as if you are looking at your right and left ears). Try as you might, you won’t be able to catch your eyes moving. It’s not because they’re moving too little for you to see; a friend looking over your shoulder would clearly see them darting back and forth. You can feel them moving yourself if you gently rest your fingers below your lower lashes.

It would be an overstatement to say that you are completely blind every time you move your eyes. While some aspects of visual processing (like that of motion) are switched off, others (like that of image contrast) seem to stay on. Still, this means that twice per second, or 7,200 times each hour, your brain shuts you out of your own sense of sight.  In these moments you are denied access to full visual awareness. You are left, so to speak, in the dark.

Photo credit: Pete Georgiev on Flickr under Creative Commons license