Flipping the Baby Switch

Rewind to last night. It was bedtime. My infant daughter was screaming and struggling in my lap while I tried to rock her to sleep. She pulled and twisted the skin on my face. She sank her tiny teeth into my shoulder and chest. Exasperated, I rose from the rocker and started pacing around the nursery. Her tense little body instantly relaxed. Within ten seconds she was quiet and still. Within two minutes she was asleep.

The scene was not unusual for our household. Even as a newborn, my daughter was easy to upset and hard to soothe. When nothing else worked and I was about to lose my mind, I’d get up and walk with her. Often the results were nothing short of miraculous. Imagine going from 100 miles per hour to zero in a snap. For those who recall the child android Vicki on the ‘80s TV show Small Wonder, think of the times someone flipped the off-switch on her back. That’s what it’s like when I walk with my daughter. Our aimless walking flips a switch somewhere inside her. But how does the switch work? And why does she have one in the first place? A study published in Current Biology last month helps to explain this curious facet of infant behavior.

The lead scientist behind the study was Dr. Kumi Kuroda at the RIKEN Brain Science Institute in Japan. As she described in an interview with ScienceNOW, she became interested in the topic when she noticed that she could calm her own newborn son by carrying him. She later tested 12 other newborns with their mothers and found that they behaved like her son. Overall, the effect was rapid and dramatic. Some babies stopped crying as soon as their mothers began to walk with them. The rest cried less and were less shrill when they did cry. The babies also moved less and had lower heart rates while they were being carried.

To study the biological mechanisms behind this remarkable calming response, Dr. Kuroda and her colleagues turned to mice. They showed that mouse pups have a similar response when carried by their mothers. Mouse moms carry their pups by the scruff of their necks. When carried, mouse pups less than 20 days old stop wriggling. Their heart rate slows and they stop crying out. (Like most mouse vocalizations, baby mouse cries are ultrasonic.) They also draw their legs in when carried, making their bodies more compact for toting around.

Kuroda and colleagues investigated several physiological aspects of the calming response in mice. Only some of these experiments are likely relevant to human infants, since babies don’t assume a compact position the way carried mouse pups do. One experiment looked for the triggers that make carried pups stop squirming. The scientists anesthetized the neck skin of baby mice and found that these animals wriggled more than untreated mouse pups when carried. They got the same result when they overdosed pups with vitamin B6 before testing. (Vitamin B6 overdose causes animals and humans to lose the sense of their body position and movement.) The upshot? For a mouse pup to stop wriggling when carried, it must 1) sense that it’s being lifted and 2) sense that something is pulling on its neck skin. Take either sense away and the calming response disappears. My daughter may draw on similar senses to trigger her miraculous stillness while carried. (Only if you replace neck pulling with the pressure of my arms around her, of course. I don’t carry her by her neck skin, I swear.)

The scientists also wondered why a baby’s heart rate drops when it’s picked up and carried. To test this in mice, they gave pups a drug that turns down the parasympathetic nervous system (the set of nerves that return the body to a calm state after arousal). Pups treated with the drug still stopped wriggling when lifted, but their heart rates didn’t drop as they do in untreated pups. So while the parasympathetic nervous system slows down the carried pup’s (and possibly infant’s) heartbeat, it can’t take credit for other features of the calming response.

Clearly this calming response is more complicated than it seems. Many of my daughter’s brain areas, neural pathways, and sensory mechanisms were working in concert to soothe her last night as I walked her in circles. But why does she have this complex reaction to carrying in the first place? Grateful parents might imagine that the calming response evolved to keep us from going crazy, but unless going crazy involves committing infanticide, this explanation doesn’t hold water. Evolution doesn’t care whether parents are happy or well rested or have time to watch Game of Thrones. It only cares whether our offspring survive.

Dr. Kuroda and her colleagues propose that the calming response helped parents escape dangerous situations while protecting their young. According to this logic, calmer carried babies meant faster escapes and higher rates of survival. Certainly, if you were running from a wild beast or a member of a rival village, holding a struggling infant might slow you down. Of course, holding any infant would slow you down, and it’s not clear that sprinting with a struggling newborn is much harder than lugging one that’s asleep. The paper’s authors present little evidence to support their proposal, particularly in the context of human evolution. They point to a minor result with their mice that doesn’t easily translate to human behavior. In effect, the jury’s still out.

There are other possible explanations for the calming response, ones that don’t involve predators outrunning parents. Shushing can calm crying babies too, probably because it simulates an aspect of their environment in the womb (in this case, physiological noise). The same could be true of walking with infants. The mothers in the Kuroda study held their babies against their chest and abdomen, which is also how I hold my daughter when I walk to soothe her. The type of movement she feels in that position is probably similar to the rocking and jostling she felt as a fetus in utero whenever I walked. If so, the calming response might be a result of early learning and comfort by association – a nice thought when you consider the gory alternative.

Each year at the end of May we find ourselves as far as possible from Thanksgiving Day. It can be something of a thankfulness drought. This May I am thankful for women in science and maternity leaves, computer-generated dragons and ’80s sitcom androids. And like Vicki’s parents, I am profoundly thankful that my daughter came furnished with an off-switch. Whatever the reason why.

___

Photo credit: Sabin Dang

Esposito G, Yoshida S, Ohnishi R, Tsuneoka Y, Rostagno Mdel C, Yokota S, Okabe S, Kamiya K, Hoshino M, Shimizu M, Venuti P, Kikusui T, Kato T, & Kuroda KO (2013). Infant Calming Responses during Maternal Carrying in Humans and Mice. Current Biology, 23(9), 739–745. PMID: 23602481

Remains of the Plague

The history of science is littered with bones. Since antiquity, humans have studied the remains of the dead to understand the living. The practice is as common now as ever; only the methods have changed. In recent years, high-tech analyses of human remains have solved mysteries ranging from our ancestors’ prehistoric mating patterns to the cause of Beethoven’s death. The latest example of this morbid scientific tradition can be found in the e-pages of this month’s PLOS Pathogens. The colorful cast of characters includes European geneticists, a handful of teeth, a 6th century plague, and the US Department of Homeland Security.

Although the word plague is often used as a synonym for disease, plague actually refers to a particular type of illness caused by the bacterium Yersinia pestis. Rampant infection by Y. pestis was responsible for the most recent plague pandemic, which spanned the 19th and 20th centuries. Before that, it caused the 14th to 17th century pandemic that included the epidemic known as the Black Death.

Yet the pestilence of pestis may have swept across human populations long before the Black Death. According to historical records, a terrible pandemic killed people from Asia to Africa to Europe between the 6th and 8th centuries. It struck the Roman Empire under the watch of Emperor Justinian I, who contracted the disease himself but survived. The pandemic now bears his name: the Justinianic Plague. But was Justinian’s malady really a plague, or has history pinned the blame on the wrong bacterium? A group of researchers in Munich decided to find out.

How?

By digging up ancient graves, of course. And helping themselves to some teeth.

The ancient graves were in an Early Medieval cemetery called Aschheim in the German state of Bavaria. The site was a strange choice; the authors reveal in their paper that the historical record shows no evidence that the Justinianic Plague reached Bavaria. However, the site was conveniently located within driving distance of most of the study’s authors. (It’s always easiest to do your gravedigging closer to home.) The authors did have solid evidence that the graves were from the 6th century and that each grave contained two or more bodies (a common burial practice during deadly epidemics). In total, the group dug up 12 graves and collected teeth from 19 bodies.

The scientists took the teeth back to their labs and tested them for a stretch of DNA unique to Y. pestis. Their logic: if the individuals died from infection by Y. pestis, their remains should contain ample DNA from the bacteria. Of course, some of this DNA would have deteriorated over the course of 1.5 millennia. The scientists would have to make do with what they found. They used three different methods to amplify and detect the bacterial DNA; however, they found a reliably large amount of it only in the teeth of one individual, a body they affectionately nicknamed A120. They genotyped the Y. pestis DNA found in A120 to see how the bacterial strain compared with other versions of the bacterium (including those that caused the Black Death and the 19th-20th century plague pandemic). The analysis showed that the Justinianic strain was an evolutionary precursor to the strain that caused the Black Death. Like the strains that sparked the second and third pandemics, this strain bore the genetic hallmarks of Y. pestis from Asia, suggesting that all three plague pandemics spread from the East.
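For the curious, the logic of that comparison can be sketched in a few lines of code. This is my own toy illustration, not the paper’s actual pipeline (which builds a phylogeny from many markers), and the SNP labels are invented. The idea: strains accumulate mutations over time, so a strain whose derived mutations form a strict subset of another’s sits earlier on the same lineage.

```python
# Toy sketch of phylogenetic placement logic. All SNP labels are invented.
strains = {
    "justinianic_A120": {"snp1", "snp2"},
    "black_death":      {"snp1", "snp2", "snp3", "snp4"},
    "third_pandemic":   {"snp1", "snp2", "snp3", "snp4", "snp5"},
}

def is_precursor(a: str, b: str) -> bool:
    """True if strain a carries a strict subset of strain b's derived SNPs."""
    return strains[a] < strains[b]  # '<' on Python sets tests proper subset

print(is_precursor("justinianic_A120", "black_death"))  # True
print(is_precursor("black_death", "justinianic_A120"))  # False
```

In this toy world, A120’s genotype reads as a precursor of the Black Death strain because it carries the older mutations but none of the newer ones.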

The authors write that they have solved their historical mystery.

“These findings confirm that Y. pestis was the causative agent of the Justinianic Plague and should end the controversy over the etiological agent of the first plague pandemic.”

Ordinarily, the discussion sections of scientific papers are littered with qualifiers and terms like might be and suggestive. Not so here, even though the authors’ conclusion explains a phenomenon that killed many millions of people worldwide based on data from the decomposing remains of a single person who lived in a region that historians haven’t connected with the pandemic. In most branches of science, sweeping conclusions can be drawn only from large and meticulously selected samples. In genetics, such rules can be swept aside. It is its own kind of magic. If you know how to read the code of life, you can peer into the distant past and divine real answers based on a handful of ancient teeth.

As it turns out, the study’s result is more than a cool addition to our knowledge of the Early Middle Ages. Plague would make a terrible weapon in the hands of a modern bioterrorist. That’s why the US Department of Homeland Security is listed as one of the funding sources for this study. So the next time you hear about your tax dollars hard at work, think of Bavarian graves, ancient teeth, and poor old A120.

_____

Photo credit: Dallas Krentzel


Harbeck M, Seifert L, Hänsch S, Wagner DM, Birdsell D, Parise KL, Wiechmann I, Grupe G, Thomas A, Keim P, Zöller L, Bramanti B, Riehm JM, & Scholz HC (2013). Yersinia pestis DNA from Skeletal Remains from the 6th Century Reveals Insights into Justinianic Plague. PLOS Pathogens. DOI: 10.1371/journal.ppat.1003349

Cuddling Up with a Scimoir

You might call it a Frankenstein genre – two quite different literary genres stitched together and brought to life. For the moment, I am calling it the scimoir. The rare science memoir can be found tucked away in the Science section, in Memoir or Biography, even sometimes in Health, Psychology, or Self Help. It defies categorization, flummoxing librarians and booksellers alike. Science and memoir, memoir and science. It just doesn’t seem right.

At first glance the two genres seem incompatible. Science is the study of the immutable and absolute, while memoir is the most personal and subjective of all genres. Yet somehow they can go together, and when done well, they resonate with honesty and relevance. They tame each other. Memoir reminds us that the whirring mechanics of science play out on the scale of our individual lives, while science reminds us that the memoirists’ struggles and stories reflect something of the universal. Moreover, the drama of memoir adds the narrative kick that science writing so desperately needs. It’s a match made in genre heaven.

Why am I waxing poetic about a literary genre? I suppose because I recently discovered that I’m drawn to this combination, both as a blogger and as a reader. The majority of my posts are amalgamations of personal experience and scientific theory. This was never my intent; somehow the combination fell out of my interests and whatever spark motivated me to write about a given topic. I’ve also discovered that I’ve read and enjoyed a number of scimoirs, even though I didn’t consciously seek them out and scimoirs are none too common.

In point of fact, I shouldn’t be surprised that book-length scimoirs are relatively rare. To write a compelling one, an author generally has to be a scientist or science writer who has also personally experienced something dramatic that is relevant to the topic. You might be both a leading researcher and a lifelong sufferer of a particular illness, like Kay Redfield Jamison in An Unquiet Mind. You might be the researcher behind an infamous experiment, like Philip Zimbardo in The Lucifer Effect. Or you might be able to approach the topic through your experience with ailing relatives. In Mapping Fate, Alice Wexler wrote about her mother’s battle with Huntington’s disease and her sister’s scientific quest to isolate the culprit gene. In Acquainted with the Night, the science writer Paul Raeburn documented his children’s struggles with mental illness in the context of the current state of juvenile psychiatric knowledge and treatment.

I am on a quest to identify other books in this wonderful Franken-genre and I need your help. Here are the other scimoirs I can think of that I’ve already read (aside from those listed above): My Stroke of Insight by Jill Bolte Taylor, The Double Helix by James Watson, A Primate’s Memoir by Robert Sapolsky, and several of Oliver Sacks’s books. I’ve come across a few more that I plan to read: Memoirs of an Addicted Brain by Marc Lewis, Moonwalking with Einstein by Joshua Foer, and What Mad Pursuit by Francis Crick.

Please let me know what other scimoirs you’ve read, want to read, or simply know are out there. And do share any other ideas for naming the genre. Scimoir sounds like a half-android, half-alien monster, and who wants to cuddle up with that?

______

Photo credit: Karoly Czifra

The End of History

I just read a wonderful little article about how we think about ourselves. The paper, which came out in January, opens with a tantalizing paragraph that I simply have to share:

“At every stage of life, people make decisions that profoundly influence the lives of the people they will become—and when they finally become those people, they aren’t always thrilled about it. Young adults pay to remove the tattoos that teenagers paid to get, middle-aged adults rush to divorce the people whom young adults rushed to marry, and older adults visit health spas to lose what middle-aged adults visited restaurants to gain. Why do people so often make decisions that their future selves regret?”

To answer this question, the study’s authors recruited nearly 20,000 participants from the website of “a popular television show.” (I personally think they should have told us which one. I’d imagine there are differences between the people who flock to the websites for Oprah, The Nightly News, or, say, Jersey Shore.)

The subjects ranged in age from 18 to 68. For the experiment, they had to fill out an online questionnaire about their current personality, core values, or personal preferences (such as favorite food). Half of the subjects—those in the reporter group—were then asked to report how they would have filled out the questionnaire ten years prior, while the other half—those in the predictor group—were asked to predict how they would fill it out ten years hence. For each subject, the authors computed the difference between the subject’s responses for his current self and those for his reported past self or predicted future self. And here’s the clever part: they could compare participants across ages. For example, they could compare how an 18-year-old’s prediction of his 28-year-old future self differed from a 28-year-old’s report of his 18-year-old self. It sounds crazy, but they did some great follow-up studies to make sure the comparison was valid.
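For the analytically minded, here is a minimal sketch of that cross-age comparison in Python. Everything in it is invented for illustration (the simulated data, the column names, the size of the effect); it just shows how a predictor’s forecast at age a lines up against a reporter’s recollection at age a + 10, since both describe the same decade of life.

```python
import numpy as np
import pandas as pd

# Toy data: each row is one subject. 'current' is a personality score for
# the current self; 'other' is the score given for the reported past self
# or the predicted future self. All values are simulated.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "age": rng.integers(18, 59, n),
    "group": rng.choice(["reporter", "predictor"], n),
    "current": rng.normal(50, 10, n),
})
# Build the illusion into the toy data: reporters recall more change
# than predictors foresee.
shift = np.where(df["group"] == "reporter",
                 rng.normal(8, 3, n), rng.normal(3, 3, n))
df["other"] = df["current"] + shift

# Change score: how different the other self is from the current self.
df["change"] = (df["other"] - df["current"]).abs()

# The cross-age trick: a predictor aged a looks ten years ahead, while a
# reporter aged a + 10 looks ten years back. Both describe ages a to a + 10,
# so index each subject by the starting age of that decade and compare.
pred = df[df["group"] == "predictor"].copy()
rep = df[df["group"] == "reporter"].copy()
pred["start_age"] = pred["age"]
rep["start_age"] = rep["age"] - 10

comparison = pd.DataFrame({
    "predicted_change": pred.groupby("start_age")["change"].mean(),
    "reported_change": rep.groupby("start_age")["change"].mean(),
}).dropna()
print(comparison.head())  # reported change exceeds predicted change
```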

The results show a remarkable pattern. People believe that they have changed considerably in the past, even while they expect to change little in the future. And while they tend to be pretty accurate in their assessment of how much they’ve changed in years past, they grossly underestimate how much they will change in the coming years. The authors call this effect The End of History Illusion. And it’s not just found in shortsighted teenagers or twenty-somethings. While the study showed that older people do change less than younger people, they still underestimate how much they will continue to change in the decade to come.

The End of History Illusion is interesting in its own right. Why are we so illogical when reasoning about ourselves – and particularly, our own minds? We all understand that we will change physically as we age, both in how well our bodies function and how they look to others. Yet we deny the continued evolution (or devolution) of our traits, values, and preferences. We live each day as though we have finally achieved our ultimate selves. It is, in some ways, a depressing outlook. As much as we may like ourselves now, wouldn’t it be more heartening to believe that we will keep growing and improving as human beings?

The End of History Illusion also comes with a cost. We are constantly making flawed decisions for our future selves. As the paper’s opening paragraph illustrated, we take actions today under the assumption that our future desires and needs won’t change. In a follow-up study, the authors even demonstrated this effect by showing that people would be willing to pay an average of $129 now to see a concert by their favorite band in ten years, while they would be willing to pay an average of only $80 now to see a concert by their favorite band from ten years back. Here, the illusion will only cost us money. In real life, it could cost us our health, our families, our future well-being.

This study reminded me of a book I read a while back called Stumbling on Happiness (written, it turns out, by the second author on this paper). The book’s central thesis is that we are bad at predicting what will make us happy, and the whole thing is written in the delightful style of this paper’s opening paragraph. For those of you with the time, it’s worth a read. For those of you without time, I can only hope you’ll have more time in the future. With any luck we’ll all have more – more insight, more compassion, more happiness – in the decade to come.

____

Photo credit: Darla Hueske


Quoidbach J, Gilbert DT, & Wilson TD (2013). The End of History Illusion. Science. DOI: 10.1126/science.1229294

Feeling Invisible Light

In my last post, I wrote about whether we can imagine experiencing a sense that we don’t possess (such as a trout’s sense of magnetic fields). Since then a study has come out that adds a new twist to our little thought experiment. And for that we can thank six trailblazing rats in North Carolina.

Like us, rats see only a sliver of the full electromagnetic spectrum. They can perceive red light with wavelengths as long as about 650 nanometers, but radiation with longer wavelengths (known as infrared, or IR, radiation) is invisible to them. Or it was before a group of researchers at Duke began their experiment. They first trained the rats to indicate with a nose poke where they saw a visible light turned on. Then the researchers mounted an IR detector to each rat’s head and surgically implanted tiny electrodes into the part of its brain that processes tactile sensations from its whiskers.

After these sci-fi surgeries, each rat was trained to do the same light detection task again – only this time it had to detect infrared instead of visible light. Whenever the IR detectors on the animal’s head picked up IR radiation, the electrodes stimulated the tactile whisker-responsive area of its brain. So while the rat’s eyes could not detect the IR lights, a part of its brain was still receiving information about them.
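The coupling itself is simple enough to sketch in code. What follows is my own toy illustration, not the authors’ implementation: the sensor, the stimulator, and every parameter are invented stand-ins. The key design choice is mapping IR intensity to stimulation rate, so that a stronger signal produces a more intense sensation.

```python
import math
import time

def read_ir_sensor(t: float) -> float:
    """Simulated head-mounted IR detector: intensity varies as the
    (imaginary) rat sweeps its head past an infrared source."""
    return max(0.0, math.sin(t))  # 0.0 (no source) to 1.0 (dead ahead)

def stimulate_s1(rate_hz: int) -> None:
    """Stand-in for stimulating the whisker-responsive cortex."""
    print(f"stimulating S1 at {rate_hz} Hz")

THRESHOLD = 0.05   # ignore background radiation
MAX_RATE_HZ = 400  # cap on the stimulation rate

# Core loop: translate IR intensity into stimulation rate.
for step in range(20):
    intensity = read_ir_sensor(step * 0.1)
    if intensity > THRESHOLD:
        stimulate_s1(int(intensity * MAX_RATE_HZ))
    time.sleep(0.01)  # ~100 updates per second
```

A graded signal like this is what lets an animal compare intensities across head positions and home in on the source.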

Could they do the new task? Not very well at first. But within a month, these adult rats learned to do the IR detection task quite well. They even developed new strategies to accomplish it; as these videos show, they learned to sweep their heads back and forth to detect and localize the infrared sources.

Overall, this study shows us that the adult brain is capable of acquiring a new or expanded sense. But it doesn’t tell us how the rats experienced this new sense. Two details from the study suggest that the rats experienced IR radiation as a tactile sensation. First, the post-surgical rats scratched at their faces when first exposed to IR radiation, just as they might if they initially interpreted the IR-related brain activity as something brushing against their whiskers. Second, when the scientists studied the activity of the touch neurons receiving IR-linked stimulation after extensive IR training, they found that the majority responded to both touch and infrared light. At least to some degree, the senses of touch and of infrared vision were integrated within the individual neurons themselves.

In my last post, I found that I was only able to imagine magnetosensation by analogy to my sense of touch. Using some fancy technology, the scientists at Duke were able to turn this exercise in imagination into a reality. The rats were truly able to experience a new sense by piggybacking on an existing sense. The findings demonstrate the remarkable plasticity of the adult brain – a comforting thought as we all barrel toward our later years – but they also provide us with a glimpse of future possibilities. Someday we might be able to follow up on our thought experiment with an actual experiment. With a little brain surgery, we may someday be able to ‘see’ infrared or ultraviolet light. Or we might just hook ourselves up to a magnificent compass and have a taste (or feel or smell or sight or sound) of magnetosensation after all.

____

Photo credit: Novartis AG


Thomson EE, Carra R, & Nicolelis MA (2013). Perceiving invisible light through a somatosensory cortical prosthesis. Nature Communications, 4. PMID: 23403583