Tooling Around

There was a time when my daughter used her hands exclusively to shovel things into her mouth. Not so anymore. For the last few months, she has been hard at work banging objects together. This simple action is setting the stage for some pretty cool neural development. She is learning to use tools.

Of course it doesn’t look too impressive right now. She might bang a ball with a block and then switch and strike the block with the ball. In one recent playtime she tore a cardboard flap out of her board book and examined it, trying different grips and holding it from different angles as she watched how it cut the air. Then, brandishing her precious flap, she went to work. She wielded it with a scooping motion to lift other flaps in the book, and later, to turn the book pages themselves. After that, she descended on her toy box with the flap. She used it to wipe her blanket, poke her stuffed animal, and finally scrape its face like she was giving it a close shave.

Although my daughter’s fun with flaps may have seemed aimless, it had an important purpose. Through experimentation and observation, she was learning how two objects can interact and how such interactions are affected by object shape, configuration, and pliability. Such details are so well known to adults that we forget there was anything to learn. But consider how often we use objects against one another. We hammer nails, rake leaves, and staple pages. When using scissors, we must apply different levels of force to cut through paper versus cardboard or fabric. When lifting a pan with a potholder, we must adjust our grip depending on the weight of the pan and whether we are lifting it by the base, side, or handle. We must know the subtle differences between holding a sponge to wash a glass and using a towel to dry it, and we must do each deftly enough that our glassware comes out clean and intact at the end.

There are also countless tools we create on the fly every day. When you use a magazine to nudge your cell phone within reach or flip a light switch with a book because your hands are full, you are devising novel tools to fit your momentary needs. To do this, our brains must store extensive knowledge about the properties of household objects. Through experimentation, like the kind my daughter is doing, we learned to predict how objects will interact and to capitalize on those predictions.

So far I’ve described the value of tools in terms of what they can do: push, pull, gather, polish, lift, etc. But there is another side to tool use that may play a role in my daughter’s little experiments: sensory information gleaned through the tool. As I watched her probe one object with another, I was reminded of research described in Sandra and Matthew Blakeslee’s book The Body Has a Mind of Its Own. The book discussed neurons in the parietal cortex that are tuned to the sight or feel of objects near a particular body part. For example, cells representing your right hand would fire if something touched your right hand, if you saw an object near your right hand, or both. Neuroscientists have discovered that experience using a tool can change the properties of these cells in monkeys (and therefore likely in us as well). They found that if monkeys used a rake to gather goodies otherwise beyond their reach, the parietal neurons that had responded to objects around the hand now fired for items located anywhere along both the hand and the rake it held. In a sense, object manipulation can temporarily extend certain neural body representations to include the tools we wield. The Blakeslees suggest that this may be how a blind person learns to perceive the contour of items encountered at the tip of his cane. In effect, the cane and the hand are one.

For now, our house is filled with smashing, scraping, banging and bending as our baby descends on toys and her parents’ belongings alike. In the midst of such havoc, it’s good to know that the destruction is part of a crucial learning process. And someday, once it slows down, I can buy her a new board book with all the flaps intact.

The Trouble with (and without) Fish

Once upon a time in a vast ocean, life evolved. And then, over many millions of years, neurons and spinal cords and eyes developed, nourished all the while in a gentle bath of nutrients and algae.

Our brains and eyes are distant descendants of those early nervous systems formed in the sea. And even though our ancestors eventually sprouted legs and waddled out of the ocean, the neural circuitry of modern humans still depends on certain nutrients that our waterlogged predecessors had in abundance.

This obscure fact about our distant evolutionary past has turned into a major annoyance for me now that I’m pregnant. In fact, whether they know it or not, all pregnant women are trapped in a no-win dilemma over what they put into their stomachs. Take, for instance, a popular guidebook for pregnant women. On one page, it advocates eating lots of seafood while pregnant, explaining that fish contain key nutrients that the developing eyes and brain of the fetus will need. A few pages later, however, the author warns that seafood contains methylmercury, a neurotoxic pollutant, and that fish intake should be strictly curtailed. What is a well-meaning pregnant lady to do?

On a visceral level, nothing sounds worse than poisoning your child with mercury, and so many women reduce their seafood intake while pregnant. I have spoken with women who cut all seafood out of their diet while pregnant, for fear that a little exposure could prove to be too much. They had good reason to be worried. Extreme methylmercury poisoning episodes in Japan and Iraq in past decades have shown that excessive methylmercury intake during pregnancy can cause developmental delays, deafness, blindness, and seizures in the babies exposed.

But what happens if pregnant women eliminate seafood from their diet altogether? Without careful supplementation of the vital nutrients found in marine ecosystems, children face neural setbacks or developmental delays on a massive scale. Consider iodine, a key nutrient readily found in seafood. Its scarcity in the modern land-based diet caused mental retardation in children – and sparked the creation of iodized salt (salt supplemented with iodine) to ensure that the nutritional need was met.

Perhaps the hardest nutrient to get without seafood is an omega-3 fatty acid known as DHA. In recent years, scientists have learned that this particular fatty acid is essential for proper brain development and functioning, yet it is almost impossible to get from non-aquatic dietary sources. At the grocery store, you’ll find vegetarian products that claim to fill those needs by supplying the biochemical precursor to DHA (alpha-linolenic acid, found in flaxseed, walnuts, and soybean oil), but we now know that the precursor simply won’t cut it. Our bodies are remarkably slow at synthesizing DHA from its precursor. In fact, we burn the vast majority of the precursor for energy before we have the chance to convert it to DHA.

So pregnant women must eat food from marine sources if they are to meet all the needs of their growing babies. Yet thanks to global practices of burning coal and disposing of industrial and medical waste, any seafood women eat will expose their offspring to some amount of methylmercury. There’s no simple solution to this problem, although recent studies suggest that child outcomes are best when women consume ample seafood while avoiding species with higher levels of methylmercury (such as shark, tilefish, walleye, pike, and some types of tuna). Of course, much is still unknown. Exactly how much DHA intake is enough? And since mercury levels vary based on where a fish was caught and what waste was released nearby, you can never be sure that any given fish is safe to eat.

Unless we start cleaning up our oceans, pregnant women will continue to face this awful decision each time they sit down at the dinner table. Far worse, we may face future generations with lower IQs and developmental delays regardless of which choice their mothers make. Thanks to shoddy environmental oversight, we may be saddling our children with brains that don’t work as well as our own. And that is something I truly can’t swallow.

Good Morning, Sleepyhead

A few weeks ago, I passed out. One moment I was standing by the door to our apartment, wishing my departing husband a good day at work. The next, my eyes rolled back in my head and I fell face-first into the wall. My forehead struck the lower hinges of the door; I bruised my cheek and arm and knee, nothing badly. My husband, who was halfway out the door when I fell, rushed to gather me up. He held me and said, “Are you all right? Are you okay?” And that was how I awoke, as if from a long dreamless sleep, on the floor beside our front door.

I was only out a few seconds, but it felt like it could have been hours. I remembered the minutes leading up to my dramatic tumble, but they felt like long ago. A bit ethereal, and separated from the present by a gap that didn’t feel odd to me in the slightest.

I’ve always tended toward low blood pressure and often felt dizzy when standing up. After the fall, doctors checked me out and said I was fine. (My prescriptions are to drink more water and maybe eat more salt.) Still, the experience got me thinking about memory and how it’s a strange and elusive creature. How we always think we’ve caught it but we never have.

Back in my grad school days, we studied the case of H.M., the famous amnesic patient who was unable to form new memories. We learned that his journal was filled with descriptions of waking up as if for the first time and having no recollection of writing any of the prior journal entries, nor of how he came to be where he was. I wonder if the feeling was something like my contradictory experience on the floor, when I lacked memory of the preceding moments and yet felt as if nothing were missing. Time felt continuous, despite the fact that my memory was not.

The experience also reminded me of a dramatic story I read in the nonfiction book Soul Made Flesh. In 1650, a young British servant named Anne Green was seduced by her master’s grandson and gave birth to a stillborn baby. Thanks to the social mores of the time, she was tried and convicted of infanticide and sentenced to death. She proclaimed her innocence to the crowd that gathered in the courtyard of Oxford Castle to watch her hanging. After her speech, the executioner kicked the ladder out from under her and she hanged for almost half an hour before they cut her down and sent her body down the street to be dissected for science. Her designated dissectors were Drs. William Petty and Thomas Willis (of the Circle of Willis). But when they opened the coffin, they heard a rattle in her throat and managed to revive her with water, heat, and herbs.

When Anne Green came to, she began reciting the speech she’d delivered at the gallows. She didn’t remember leaving the prison, climbing the ladder, or giving the speech, much less (thankfully) hanging. A pamphlet later circulated about the event described her memory as “a clock whose weights had been taken off a while and afterward hung on again.” The incident illustrated the machine-like quality of memory. Today we describe it as flipping a switch. Anne Green’s memory had been turned off and then turned on again.

As strange as the stories of H.M. and Anne Green sound, their wild memory lapses aren’t so different from what happens to us every day. We all experience time as continuous and ongoing, even though our memory is often shot through with holes. We spend a full third of our lives in unconscious slumber and remember little of our dreams. Even our waking lives are terribly preserved in the vault of our memory. How many of your breakfasts can you recall? How many birthday parties and drives to work? How many classroom lectures and airplane rides and showers can you individually call to mind?

Our recollections are mere fragments. They pepper the timeline of our past just enough to form a narrative – one’s life story. This story may feel solid and unbroken, but don’t kid yourself. Your memory is not. We are all amnesic, all a little untethered from the passing moments of our lives. We are continually rediscovering and resurrecting our past to move forward in the present. In one way or another, we have all roused from our coffin reciting a speech from the gallows or come to on the floor with a sore face and an astonished husband. We are all perpetually in the process of waking up for the very first time.

Who Am I Again?

Thanks to a recent kerfuffle over the Earth’s precession and its effect on our astrological signs, many people have spent this week questioning their personality traits. I went from being a lifelong Gemini (changeable, duplicitous) to a possible Taurus (stubborn, steady), neither of which I think describes me. I’ve never believed in astrological signs, but many people do, and this week must have been a confusing one for them.

The whole thing got me thinking about how we look outward for explanations and definitions of our inner selves. No one has a better vantage point than we do to observe our own personal thoughts, feelings, attitudes, and behaviors. How funny that we once looked to the stars in order to understand ourselves! Those of us who consider ourselves scientific and modern are no better. Although we scoff at sun signs and palm readings, increasingly we are turning to our brains and our DNA for answers that they simply can’t give.

In the 1800s, the Phrenological Fowlers (later Fowlers and Wells) founded a nationwide industry on reading people’s personalities based on the bumps on their heads. They published extensively and sent emissaries to small towns throughout the U.S. so that, for a small fee, the masses might come to know themselves better. The company and its methods were an unrivaled success; America was obsessed with phrenology. Sometime in the 1860s, a curious Mark Twain visited Fowler’s office under an assumed name. Fowler read his head and said that his skull dipped in at a particular point where it should have bulged out – a sure sign that Twain, the preeminent American humorist, utterly lacked a sense of humor.

Nowadays, many still look to their brains for answers. When I used to scan participants in fMRI experiments, they would often ask what I could tell them about their brains. I couldn’t tell them anything; all the analysis took place later, back at the lab. But as a frequent subject in pilot experiments for my own and colleagues’ studies, I’ve had unfettered access to data from my own brain. I know that I have a large and robust fusiform face area (a region thought to be critical for face recognition) and a rather dinky visual word form area (implicated in identifying letters of the alphabet). What does that mean, when I am an avid reader and often embarrass myself with my poor ability to recognize faces?

While people still look to the stars and to brains (if not skulls) in order to understand themselves, the next big thing has arrived. The age of personal genomics is upon us, and countless startups out there are eager to swap a check and a swab of our cells for a glimpse into our futures and ourselves. I have to admit, I fantasize sometimes about having my genome read. I would love the chance to pore over details about my ancestral line or learn what diseases I am predisposed to developing. But the biggest draw is to learn about myself. What forms of the anxiety genes do I have? What about genes linked to mental illness, intelligence, novelty-seeking? As a scientist, I know that complex traits are determined by a mixture of environment and numerous genes, many of which we haven’t yet discovered. Beyond that, epigenetic factors influence the expression of our genes in ways we don’t yet understand. Yet I still find myself wishing someone would hand me that printout with the secrets to myself.

The cognitive scientist Steven Pinker wrote a wonderful essay wading through the results of having his own genome sequenced. In it, he struggles with the discrepancies. His genome says he should be sensitive to bitter flavors, yet he enjoys beer, broccoli, and brussels sprouts. His genome says he has a high risk of baldness, yet he is known for his thick mane of overflowing, curly hair. Other results he believes or would like to believe. What is a person seeking direction and self-wisdom to do?

So at the end of this astrologically confusing week, I find myself at a loss. Why do we crave external guidance to help us understand our internal selves? It may be because we are less static and more changeable than we like to believe. As I alluded to in my post about our potential to do evil, psychology experiments (and history) have shown that human beings are heavily influenced by their circumstances. Because we are adaptable, we behave very differently depending on who we are with and what we are doing. Although this adaptability may be advantageous, I suspect it unsettles us. We want to believe we have a solid, stable identity, and we will look to mystics or scientists – anyone who can give us that assurance: “I know who I am and who I always will be.”

The hard (but in its own way beautiful) truth is that we are each a complex and contradictory landscape of traits, behaviors, and passions. Be wary of those who try to describe you with a handful of paltry adjectives. Know thyself. Or keep trying, anyway. It should take at least a lifetime.

Why Bigger Isn’t Always Better

One of my entertainments this holiday season was following the online buzz over a recent article in Nature Neuroscience. The authors’ findings were covered by Wired, Time, Slate, U.S. News & World Report, and the BBC, to name a few. One headline read: “Scientists Discover Facebook Center of the Brain.” Another: “How to Win Friends: Have a Big Amygdala?”

The authors of the Nature Neuroscience article report a correlation between the size of a subcortical brain structure called the amygdala and the extent of a person’s social network. In effect, people with larger amygdalas tended to have more friends and close acquaintances than those with smaller amygdalas. The popular press and the public leapt on this idea: we are predestined by our anatomy to be popular or not. If we were alone on New Year’s Eve, if our Facebook friend count is low, it’s not our fault. Chalk that one up to our brains, our genes, our parents.

All of this struck me as both amusing and sad because of a book I was reading at the time. The book, Postcards from the Brain Museum by Brian Burrell, chronicles the history of neuroscience in the context of our search for greatness (as well as criminality, idiocy, and inferiority). It tells how scientists spent most of the 19th century collecting human brains from geniuses, criminals, and the poor to try to understand why some people demonstrate remarkable abilities while others flounder and fail.

It is a sad and sordid history. At first, some believed that the sheer size or weight of one’s brain predicted greatness – that large brains were capable of better thinking. Since women’s brains (like the rest of their bodies) were on average smaller than those of their male counterparts, this provided a perfect explanation for their intellectual inferiority. Later, when the link between brain volume and intelligence was debunked, scientists suggested that the amount of folding on the brain’s surface was the marker of a brilliant brain: the more convolutions on the surface, the smarter the individual. Other scientists identified specific fissures that they deemed inferior, as they were supposedly found more often in apes and women. These lines of research would be used to justify racial and gender stereotypes and give rise to the practice of eugenics in the first half of the 20th century.

The peer review process and established statistical methods ensure that today’s science is more legitimate than it was in centuries past. But neuroimaging has allowed us to probe the living brain to a degree heretofore unimagined. With it, scientists amass enormous amounts of data that strain our standard statistical techniques and challenge our ability to distinguish between profound, universal discoveries and those that are idiosyncratic to our subject sample or functionally irrelevant. We still don’t know whether ‘bigger is better’, nor do we understand the source or functional consequences of individual differences in the size and shape of brain regions. Certainly we don’t know enough to look at a person’s brain and guess with accuracy how smart they are, how good they are, or, yes, even how many friends they have.

Just look at this graph from the Nature Neuroscience paper plotting amygdala volume on the horizontal axis and social network size on the vertical axis:

The figure above shows each subject as a black dot (for younger participants) or a gray triangle (for older ones). The diagonal line shows a mathematical correlation between amygdala volume and social network size, but look at how many dots and triangles lie away from the line. For the same amygdala volume (say, 3 cubic centimeters), there are dots that lie far above the line and others that lie far below it. No one looking at this figure can say that amygdala size determines one’s sociability. Perhaps it plays some small role, sure. But we are not slaves to our amygdala volumes, just as we’re not slaves to our overall brain size, our fissural patterns, or our cerebral convolutions. Our abilities and thoughts do come from our brains, but we have to keep in mind that those brains are far more complex than we can fathom. Who you are can never be reduced to a list of measured volumes. It’s important that we remember that, and that we never return to those days of ‘mine’s bigger than yours.’
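If you’d like to feel this point concretely, here is a minimal sketch in Python – with entirely made-up numbers, not the paper’s data – of how a correlation can be real at the group level while individual points still scatter widely around the regression line:

    import numpy as np

    # Illustrative simulation (invented numbers, not the study's data):
    # generate a weak positive relationship between "amygdala volume" and
    # "social network size", then measure how far individuals miss the line.
    rng = np.random.default_rng(0)

    n = 58  # sample size chosen arbitrarily for illustration
    volume = rng.normal(loc=3.0, scale=0.6, size=n)        # fake volumes (cm^3)
    network = 10 + 8 * volume + rng.normal(0, 15, size=n)  # fake network sizes

    r = np.corrcoef(volume, network)[0, 1]
    slope, intercept = np.polyfit(volume, network, 1)      # the "diagonal line"
    residuals = network - (slope * volume + intercept)

    print(f"correlation r = {r:.2f}")                       # modest but real
    print(f"typical miss = {residuals.std():.1f} friends")  # large individual scatter

Run it and you get a respectable correlation coefficient, yet the typical person’s network size misses the line’s prediction by a dozen friends or more – which is exactly why a group-level trend tells you so little about any one brain.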