Outsourcing Memory


Do you rely on your spouse to remember special events and travel plans? Your coworker to remember how to submit some frustrating form? Your cell phone to store every phone number you’ll ever need? Yeah, me too. You might call this time saving or delegating, but if you were a fancy psychologist you’d call it transactive memory.

Transactive memory is a wonderful concept. There’s too much information in this world to know and remember. Why not store some of it in “the cloud” that is your partner or coworker’s brain or in “the cloud” itself, whatever and wherever that is? The idea of transactive memory came from the innovative psychologist Daniel Wegner, most recently of Harvard, who passed away in July of this year. Wegner proposed the idea in the mid-80s and framed it in terms of the “intimate dyad” – spouses or other close couples who know each other very well over a long period of time.

Transactive memory between partners can be a straightforward case of cognitive outsourcing. I remember monthly expenses and you remember family birthdays. But it can also be a subtler and more interactive process. For example, one spouse remembers why you chose to honeymoon at Waikiki and the other remembers which hotel you stayed in. If the partners try to recall their honeymoon together, they can produce a far richer description of the experience than if they were to try separately.

Here’s an example from a recent conversation with my husband. It began when my husband mentioned that a Red Sox player once asked me out.

“Never happened,” I told him. And it hadn’t. But he insisted.

“You know, years ago. You went out on a date or something?”

“Nope.” But clearly he was thinking of something specific.

I thought really hard until a shred of a recollection came to me. “I’ve never met a Red Sox player, but I once met a guy who was called up from the farm team.”

My husband nodded. “That guy.”

But what interaction did we have? I met the guy nine years ago, not long before I met my husband. What were the circumstances? Finally, I began to remember. It wasn’t actually a date. We’d gone bowling with mutual friends and formed teams. The guy – a pitcher – was intensely competitive and I was the worst bowler there. He was annoyed that I was ruining our team score and I was annoyed that he was taking it all so seriously. I’d even come away from the experience with a lesson: never play games with competitive athletes.

Apparently, I’d told the anecdote to my husband after we met and he remembered a nugget of the story. Even though all of the key details from that night were buried somewhere in my brain, I’m quite sure that I would never have remembered them again if not for my husband’s prompts. This is a facet of transactive memory, one that Wegner called interactive cueing.

In a sense, transactive memory is a major benefit of having long-term relationships. Sharing memory, whether with a partner, parent, or friend, allows you to index or back up some of that memory. This fact also underscores just how much you lose when a loved one passes away. When you lose a spouse, a parent, a sibling, you are also losing part of yourself and the shared memory you have with that person. After I lost my father, I noticed this strange additional loss. I caught myself wondering when I’d stopped writing stories on his old typewriter. I realized I’d forgotten parts of the fanciful stories he used to tell me on long drives. I wished I could ask him to fill in the blanks, but of course it was too late.

Memories can be shared with people, but they can also be shared with things. If you write in a diary, you are storing details about current experiences that you can access later in life. No spouse required. You also upload memories and information to your technological gadgets. If you store phone numbers in your cell phone and use bookmarks and autocomplete tools in your browser, you are engaging in transactive memory. You are able to do more while remembering less. It’s efficient, convenient, and downright necessary in today’s world of proliferating numbers, websites, and passwords.

In 2011, a Science paper described how people create transactive memory with online search engines. The study, authored by Betsy Sparrow, Jenny Liu, and Wegner, received plenty of attention at the time.

In one experiment, they asked participants either hard or easy questions and then had them do a modified Stroop task that involved reporting the physical color of a written word rather than reading the word itself. This is a measure of priming – essentially, whether a participant has been thinking about that word or related concepts recently. Sometimes the participants were tested with the names of online search engines (Google, Yahoo); at other times, with unrelated brand names (Nike, Target). After hard questions, the participants took much longer to do the Stroop task with Google and Yahoo than with the other brand names, suggesting that hard questions made them automatically think about searching the Internet for the answer.

[Figure from Sparrow, Liu, and Wegner (2011)]

The other experiments described in the paper showed that people are less likely to remember trivia if they believe they will be able to look it up later. When participants thought that items of trivia were saved somewhere on a computer, they were also more likely to remember where the items were saved than they were to remember the actual trivia items themselves. Together, the study’s findings suggest that people actively outsource memory to their computers and to the Internet. This will come as no surprise to those of us who can’t remember a single phone number offhand, don’t know how to get around without the GPS, and hop on our smartphones to answer the simplest of questions.

Search engines, computer atlases, and online databases are remarkable things. In a sense, we’d be crazy not to make use of them. But here’s the rub: the Internet is jam-packed with misinformation or near-miss information. Anti-vaxxers, creationists, global warming deniers: you can find them all on the web. And when people want the definitive answer, they almost always find themselves at Wikipedia. While Wikipedia has valuable information, it is not written and curated by experts. It is not always the God’s honest truth and it is not a safe replacement for learning and knowing information ourselves. Of course, the memories of our loved ones aren’t foolproof either, but at least they don’t carry the aura of authority that comes with a list of citations.

Speaking of which: there is now a Wikipedia page for “The Google Effect” that is based on the 2011 Science article. A banner across the top shows an open book featuring a large question mark and the following warning: “This article relies largely or entirely upon a single source. . . . Please help improve this article by introducing citations to additional sources.” The citation for the first section is a dead link. The last section has two placeholders for citations, but in lieu of numbers they say, “According to whom?”

Folks, if that ain’t a reminder to be wary of outsourcing your brain to Google and Wikipedia, I don’t know what is.

_________

Photo credits:

1. Photo by Mike Baird on Flickr, used via Creative Commons license

2. Figure from “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips” by Betsy Sparrow, Jenny Liu, and Daniel M. Wegner.

Sparrow B, Liu J, & Wegner DM (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776-778. PMID: 21764755

Looking Schizophrenia in the Eye

More than a century ago, scientists discovered something unusual about how people with schizophrenia move their eyes. The men, psychologist and inventor Raymond Dodge and psychiatrist Allen Diefendorf, were trying out one of Dodge’s inventions: an early incarnation of the modern eye tracker. When they used it on psychiatric patients, they found that most of their subjects with schizophrenia had a funny way of following a moving object with their eyes.

When a healthy person watches a smoothly moving object (say, an airplane crossing the sky), she tracks the plane with a smooth, continuous eye movement to match its displacement. This action is called smooth pursuit. But smooth pursuit isn’t smooth for most patients with schizophrenia. Their eyes often fall behind and they make a series of quick, tiny jerks to catch up or even dart ahead of their target. For the better part of a century, this movement pattern would remain a mystery. But in recent decades, scientific discoveries have led to a better understanding of smooth pursuit eye movements – both in health and in disease.

Scientists now know that smooth pursuit involves a lot more than simply moving your eyes. To illustrate, let’s say a sexy jogger catches your eye on the street. When you first see the runner, your eyes are stationary and his or her image is moving across your retinas at some relatively constant rate. Your visual system (in particular, your visual motion-processing area MT) must first determine this rate. Then your eyes can move to catch up with the target and match its speed. If you do this well, the jogger’s image will no longer be moving relative to your retinas. From your visual system’s perspective, the jogger is running in place and his or her surroundings are moving instead. From both visual cues and signals about your eye movements, your brain can predict where the jogger is headed and keep moving your eyes at just the right speed to keep pace.

Although the smooth pursuit abnormalities in schizophrenia may sound like a movement problem, they appear to reflect a problem with perception. Sensitive visual tests show that motion perception is disrupted in many patients. They can’t tell the difference between the speeds of two objects or integrate complex motion information as well as healthy controls. A functional MRI study helped explain why. The study found that people with schizophrenia activated their motion-processing area MT less than controls while doing motion-processing tasks. The next logical question – why MT doesn’t work as well for patients – remains unanswered for now.

In my last two posts I wrote about how delusions can develop in healthy people who don’t suffer from psychosis. The same is true of not-so-smooth pursuit. In particular, healthy relatives of patients with schizophrenia tend to have jerkier pursuit movements than subjects without a family history of the illness. They are also impaired at some of the same motion-processing tests that stymie patients. This pattern, along with the results of twin studies, suggests that smooth pursuit dysfunction is inherited. Following up on this idea, two studies have compared subjects’ genotypes with the inheritance patterns of smooth pursuit problems within families. While they couldn’t identify exactly which gene was involved (a limitation of the technique), they both tracked the culprit gene to the same genetic neighborhood on the sixth chromosome.

Despite this progress, the tale of smooth pursuit in schizophrenia is more complex than it appears. For one, there’s evidence that smooth pursuit problems differ for patients with different forms of the disorder. Patients with negative symptoms (like social withdrawal or no outward signs of emotion) may have problems with the first step of smooth pursuit: judging the target’s speed and moving their eyes to catch up. Meanwhile, those with more positive symptoms (like delusions or hallucinations) may have more trouble with the second step: predicting the future movement of the target and keeping pace with their eyes.

It’s also unclear exactly how common these problems are among patients; depending on the study, as many as 95% or as few as 12% of patients may have disrupted smooth pursuit. The studies that found the highest rates of smooth pursuit dysfunction in patients also found rates as high as 19% for the problems among healthy controls. These differences may boil down to the details of how the eye movements were measured in the different experiments. Still, the studies all agreed that people with schizophrenia are far more likely to have smooth pursuit problems than healthy controls. What the studies don’t agree on is how specific these problems are to schizophrenia compared with other psychiatric illnesses. Some studies have found smooth pursuit abnormalities in patients with bipolar disorder and major depression as well as in their close relatives; other studies have not.

Despite these messy issues, a group of scientists at the University of Aberdeen in Scotland recently tried to determine whether subjects had schizophrenia based on their eye movements alone. In addition to smooth pursuit, they used two other measures: the subject’s ability to fix her gaze on a stable target and how she looked at pictures of complex scenes. Most patients have trouble holding their eyes still in the presence of distractors and, when shown a meaningful picture, they tend to look at fewer objects or features in the scene.

Taking the results from all three measures into account, the group could distinguish between a new set of patients with schizophrenia and new healthy controls with an accuracy of 87.8%. While this rate is high, keep in mind that the scientists removed real-world messiness by selecting controls without other psychiatric illnesses or close relatives with psychosis. This makes their demonstration a lot less impressive – and a lot less useful in the real world. I don’t think this method will ever become a viable alternative to diagnosing schizophrenia based on clinical symptoms, but the approach may hold promise in a similar vein: identifying young people who are at risk of developing the illness. Finding these individuals and helping them sooner could truly mean the difference between life and death.
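To see why real-world messiness matters so much, here’s a quick back-of-the-envelope sketch in Python. The sensitivity and specificity figures below are hypothetical (they simply split the reported 87.8% accuracy evenly), and the ~1% prevalence of schizophrenia is a rough population estimate; the point is only how quickly a positive result loses meaning at low base rates.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test result is a true case (Bayes' rule).
    Shows why high accuracy in a balanced lab sample can still mislead
    when the condition is rare in the screened population."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical split of the reported 87.8% accuracy, applied to a
# population where roughly 1% of people have schizophrenia:
print(round(positive_predictive_value(0.88, 0.88, 0.01), 2))
```

At these made-up numbers, only about 7% of the people flagged by the test would actually have schizophrenia – one reason a screening role makes more sense than a diagnostic one.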

_____

Photo credit: Travis Nep Smith on Flickr, used via Creative Commons License

Benson PJ, Beedie SA, Shephard E, Giegling I, Rujescu D, & St Clair D (2012). Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy. Biological Psychiatry, 72(9), 716-724. PMID: 22621999

Memory: Up in Smoke?

I recently joined a memory lab at Wayne State University. The timing seems fitting, as I’ve been doing a little memory experiment of my own of late. My father died ten years ago today and I’ve found myself wondering how my memory of him has fared over the decade. Which parts of him do I remember and which have I lost? They say we live on after we die, if nowhere else than in the memories of those we leave behind. Is it true, or does my father die a little each day as my brain cells age and adjust the strengths of their tiny connections?

I do, at least, remember how my father looked. Certain small details stick out in my memory – the wart beside his nose, his dulled gold wedding band beside a broad, flat knuckle, the remarkable definition of his calf muscles (thanks to his marathon bike rides). I can still see how he brushed his hair back from his face and how he crossed his legs – ankle to knee – and mopped up his sweat with a paper towel after a long ride. But are those the memories that matter? Do I remember how it felt to hug him? Do I remember all of the stories from his youth or any particular instance (of the many) when he said that he loved me? Not really. Not well enough to save him from oblivion.

I imagine I’m not the first person to experience the guilt of forgetting.

Unfortunately, memory loss picks up speed with the passage of time and the brain changes associated with old age. We will only ever have more to feel guilty about. But sometimes, on rare and bittersweet occasions, a chance encounter can trigger a memory we didn’t know we had. It is the psychological equivalent of finding coins wedged between the cushions of the couch, and it happened to me a couple of years back.

I was walking home from work when I smelled something. It was an odor I couldn’t identify, one that didn’t seem familiar, and yet it filled me with a sense of well-being. I stopped walking and inhaled deeply through my nose. What on earth was this compound? I spotted a man walking half a block ahead of me. He was a professor type with long white hair, a briefcase, and a trail of smoke fanning out behind him. The smell had to be coming from him, yet it was nothing like cigarette smoke.

I started walking again and then picked up the pace to get closer to the man. I’m not proud to say it, but I started to follow him, inhaling as I went. When he turned a corner I caught him in profile and saw that he was smoking a pipe. The intriguing smell was that of pipe smoke. For a moment I was confused. I didn’t recall having ever smelled someone smoking a pipe before and I find both cigar and cigarette smoke aversive.

Then I remembered hearing stories about my dad’s pipe. A professor type himself, my father smoked a pipe for many years and only gave up the habit after a triple bypass surgery. I was three years old at the time. Thanks to childhood amnesia, I don’t remember seeing or smelling my father with his pipe. Yet the memory of that smell, and the comfort I once associated it with, have been buried in my brain all these years like lost coins.

In theory, the memory isn’t a positive one. The secondhand smoke my brother and I inhaled early in life may have had something to do with the asthma we developed later in childhood. Still, my reaction to that stranger’s pipe smoke feels positive.  Precious, even. I’d like to think it reflects how I felt in those early years when I sat in my father’s lap or wrapped my fingers around those broad, flat knuckles. Contented and safe. And as a mother, I’d like to think that I’m planting the same warm feelings in my young daughter. Maybe someday after I’m gone an association will unearth them and she can revisit that innocent comfort all over again.


Even after I solved the mystery of the scent I followed the smoking stranger for a couple more blocks, inhaling and even closing my eyes as I experienced something of my father that I never knew I knew. It was hard to turn back for home. I didn’t want to lose him quite yet. I wasn’t ready. But then again no one ever is.

___

Photo credits: Sally Frye Schwarzlose

At the Gates of Sleep

Now that my daughter is about to reach her first birthday, I’m in the mood to reflect on the year that just passed. Unfortunately, my recollections of it are a little fuzzy, probably because I can count on one hand the number of times I’ve enjoyed a good night’s sleep over the past year. Some people have babies who regularly sleep through the night and I am happy for them. Truly, I am. But clearly I was not meant to be in their ranks.

Still, the never-ending parade of nighttime awakenings has taught me something about my own brain. It is precisely tuned to hear my baby. Although I sleep blithely through my husband’s thunderous snoring and the loud buzz of his alarm clock – multiple times a day, thanks to the snooze button – I awaken at the faintest sound of my daughter’s sighs, coos, or grumbles. When she cries, I am immediately awake while my husband sleeps on beside me, undisturbed.

People are generally able to sleep through minor sounds and sensations thanks to a subcortical structure in the brain called the thalamus. This structure receives incoming signals from our senses and relays them to cortical areas devoted to processing sensory information like sounds or tactile sensations. When we’re awake, the thalamus faithfully relays nearly every sensory signal on to the cortex. But when we’re asleep, neurons in the thalamus participate in strong, synchronized waves of activity that squelch incoming signals. As a result, about 70% of these signals never make it to the cortex. This process, known as sensory gating, is how we manage to sleep through the roar of rainstorms or the brush of the sheets against our skin each time we turn in bed. It is also how we sleep through our husband’s room-rattling snores.

Yet some sensory information does get through to the rest of the brain during sleep. These signals do get processed and can even wake us up if they are either intense (like a loud noise) or personally relevant. A clever study illustrated the importance of personal relevance by exposing sleeping subjects to a loud presentation (via tape recorder) of their own name spoken aloud. The scientists played the recording either normally or backwards and found that subjects awoke in less than half the time when they heard their names presented in the recognizable form.
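The logic described above – a signal reaches the sleeping cortex if it is either intense or personally relevant – can be sketched as a toy model. This is purely illustrative (not a physiological model), and all of the names, intensities, and thresholds below are made up:

```python
def reaches_cortex(signal, intensity_threshold=0.7, relevant_tags=()):
    """Toy model of thalamic sensory gating during sleep: most signals
    are squelched, but intense or personally relevant ones pass through."""
    return (signal["intensity"] >= intensity_threshold
            or signal["source"] in relevant_tags)

# Hypothetical nighttime signals, rated 0-1 for intensity:
signals = [
    {"source": "husband_snoring", "intensity": 0.6},
    {"source": "thunder",         "intensity": 0.9},
    {"source": "baby_sigh",       "intensity": 0.2},
]

# A parent's brain tags the baby's sounds as personally relevant,
# so even a faint sigh gets through while louder snoring does not.
awake_to = [s["source"] for s in signals
            if reaches_cortex(s, relevant_tags={"baby_sigh"})]
print(awake_to)  # ['thunder', 'baby_sigh']
```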

So did my daughter, in effect, sleep train me by training my brain to recognize her sounds as personally relevant? It’s a plausible explanation, but one that is ultimately lacking. It cannot explain that first night when I slept beside my baby at the hospital nearly one year ago. Although I had labored through the entire night before and had not slept in the ensuing day, I awoke constantly to every little sound my mewing newborn made, not to mention the cries that told me she wanted to nurse. She’d had no time to train me; I had come pre-trained. Just as my breasts were primed to make milk for her, my brain was primed to wake for her. We seemed to be engineered for one another, mother and child, body and brain. And we spent that first long night discovering how clever a designer Nature can be, while my husband slept peacefully on the couch.

___

Photo credit: planetchopstick

Divvying Up Baby

I recently bought my baby new pajamas with a decal that says, “50% Dad + 50% Mom = 100% Me!” I couldn’t resist an outfit that doubles as both math and biology lessons. But on further reflection, I’ve realized that this simple formula is wrong in more ways than one.

To begin with, my baby doesn’t look like she’s 50% Mom. At best, she looks about 10% Mom. I’ve written before about how our daughter would be a mixture of traits from European and Indian peoples, reflecting her mom and dad’s respective heritages. Yet she arrived looking like a wholly Indian baby. This is fine, of course. I think she’s absolutely perfect with her caramel skin and jet black eyes and hair. But it’s hard to keep a straight face when friends politely ask us who we think she resembles. And when I’m out with her in public I’m aware that I look like her nanny, if not someone who’s stolen a baby. She truly doesn’t look like she’s mine.

How else is the formula wrong? Genetically. Sure, our daughter’s nuclear genes are comprised of DNA sequences from both my husband and me. But she has another sort of DNA in her body, one that literally outweighs the conventional type. This DNA lives in her mitochondria, the bacteria-like structures that populate our every cell. Mitochondria are like tiny internal combustion engines, generating all of our energy through respiration and releasing heat that makes us warm-blooded animals. Although mitochondria don’t have many actual genes, they each carry several copies of those genes. Multiply that by the 10 million billion or so mitochondria in our bodies and you’ll find that we each contain more mitochondrial DNA by weight than nuclear DNA. And these mitochondrial genes are inherited entirely from the mother.

Mitochondrial genes can’t claim credit for your eye color, jaw shape, or intrinsic disposition. Their reach is mostly limited to details of your metabolism and your susceptibility to certain diseases. But mitochondrial DNA is significant for another reason: scientists use it to trace human lineages across the globe. After all, mitochondrial genes don’t get reshuffled in each generation the way our nuclear genes are. Mitochondrial inheritance can be traced back hundreds of thousands of years, following the maternal lineage at every generation. Unlike the historian’s genealogy, which often follows surnames passed down from fathers, the scientist’s genealogy is a tree built of mothers alone.
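This “tree built of mothers” is simple enough to sketch in code. A minimal example, with entirely hypothetical family data, just follows mother links until the records run out:

```python
def maternal_line(person, mother_of):
    """Trace the unbroken chain of mothers (the mtDNA lineage) back as
    far as records allow. `mother_of` maps each person to their mother."""
    line = [person]
    while line[-1] in mother_of:
        line.append(mother_of[line[-1]])
    return line

# Toy family records: everyone in this chain carries the same
# mitochondrial DNA, inherited from the woman at the top.
mothers = {"baby": "me", "me": "my mother", "my mother": "grandmother"}
print(maternal_line("baby", mothers))
# ['baby', 'me', 'my mother', 'grandmother']
```

Fathers never appear in the structure at all, which is exactly the biologist’s point.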

So it is through our mothers that our heritages can be traced into the distant past. In every one of her cells, my baby carries a map leading back through me and my mother and her mother and beyond . . . unbroken all the way back to our earliest origins as modern humans. And since my baby is a girl, she can continue that line. So long as she has a daughter and she has a daughter and so on, I will remain a part of that ongoing chain.

My condolences to all you men out there. Same to all you women who only had sons. You’ve passed on your nuclear genes and your child may be the spitting image of you, but your mitochondrial chain has been broken and you will be left out of the biologist’s tree. Although my daughter looks classically Indian, her mitochondrial DNA reveals only her European lineage. Despite the hair, eyes, and skin she inherited from her daddy, my baby’s mitochondria are mine all mine. She and I are links in a traceable chain of human life while my husband is nowhere to be found.

That’s something I can remember the next time I’m mistaken for the nanny.

Halfsies!

My husband spotted another one yesterday. A half-Indian, half-Caucasian blend. The woman had an Indian first and last name, but her features were more typical of a Persian ethnicity than either Indian or white. My husband overheard her describing her heritage and smiled. These days, with a half-Indian, half-white baby on the way, we’re hungry for examples of what our baby might look like. We’ve found a few examples among our acquaintances and some of my husband’s adorable nieces and nephews, not to mention the occasional Indian-Caucasian celebrity like Norah Jones. We think our baby will be beautiful and perfect, of course, although we’re doubtful that she’ll look very much like either one of us.

Many couples and parents-to-be are in the same position we are. In the United States, at least 1 in 7 marriages takes place between people of different races or ethnicities, and that proportion only seems to be increasing. It’s a remarkable statistic, particularly when you consider that interracial marriage was illegal in several states less than 50 years ago. (See the story of Loving Day for details on how these laws were finally overturned.) In keeping with the marriage rates, the number of American mixed race children is skyrocketing as well. It’s common to be, as a friend puts it, a “halfsie.” At least in urban areas like Los Angeles, being mixed race has lost the negative stigma it had decades ago and many young people celebrate their mixed heritages. Their unique combinations of facial and physical features can be worn with pride. But the mixture goes deeper than just the skin and eyes and hair.

At the level of DNA, all modern humans are shockingly similar to one another (and for that matter, to chimpanzees). However, over the hundreds of thousands of years of migrations to different climates and environments, we’ve accumulated a decent number of variant genes. Some of these differences emerged and hung around for no obvious reason, but others stuck because they were adaptive for the new climates and circumstances that different peoples found themselves in. Genes that regulate melanin production and determine skin color are a great example: peoples who stayed in Africa or settled in other locations closer to the Equator needed more protection from the sun, while those who settled closer to the poles may have benefited from lighter skin, which absorbs more of the sun’s scarce winter rays and staves off vitamin D deficiency.

In a very real way, the genetic variations endemic to different ethnic groups carry the history of their people and the environments and struggles that they faced. For instance, my husband’s Indian heritage puts him at risk for carrying a gene mutation that causes alpha thalassemia. If a person inherits two copies of this mutation (one from each parent), he or she will either die soon after birth or develop anemia. But inheriting one copy of the gene variant confers a handy benefit – it makes the individual less likely to catch malaria. (The same principle applies for beta thalassemia and sickle cell anemia found in other ethnic populations.) Meanwhile, my European heritage puts me at risk for carrying a genetic mutation linked to cystic fibrosis. Someone who inherits two copies of this gene will develop the debilitating respiratory symptoms of cystic fibrosis, but thanks to a handy molecular trick, those with only one copy may be less susceptible to dying from cholera or typhoid fever. As the theory goes, these potentially lethal mutations persist in their respective populations because they confer a targeted survival advantage.

Compared to babies born to two Indian or two Caucasian parents, our baby has a much lower risk of inheriting alpha thalassemia or cystic fibrosis, respectively, since these diseases require two copies of the mutation. But our child could potentially inherit one copy of each of these mutations, endowing her with some Superbaby immunity benefits but also putting her children at risk for either disease (depending on the ethnicity of her spouse).
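The arithmetic behind these risks is classic Mendelian inheritance, and it can be sketched as a Punnett-square enumeration. A minimal Python version (the allele names are arbitrary: 'A' for the normal allele, 'a' for the disease allele):

```python
from itertools import product

def offspring_genotype_probs(parent1, parent2):
    """Enumerate Punnett-square outcomes for one gene.
    Each parent is a 2-tuple of alleles; each of the four
    allele pairings is equally likely (probability 0.25)."""
    outcomes = {}
    for a1, a2 in product(parent1, parent2):
        genotype = "".join(sorted((a1, a2)))
        outcomes[genotype] = outcomes.get(genotype, 0) + 0.25
    return outcomes

# Two carrier parents (both from a population carrying the mutation):
print(offspring_genotype_probs(("A", "a"), ("A", "a")))
# {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25} -> 1 in 4 children affected

# One carrier parent, one non-carrier (the mixed-heritage case, where
# the other parent's population lacks this particular mutation):
print(offspring_genotype_probs(("A", "a"), ("A", "A")))
# {'AA': 0.5, 'Aa': 0.5} -> no affected children possible
```

With two carrier parents, one child in four inherits both copies; with only one carrier parent, the affected genotype cannot occur at all, which is exactly why mixed heritage lowers the risk for these recessive diseases.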

The rise in mixed race children will require changes down the road for genetic screening protocols. It will also challenge preconceived notions about appearance, ethnicity, and disease. But beyond these practical issues, there is something wonderful about this mixing of genetic variants and the many thousands of years of divergent world histories they represent. With the growth in air travel, communication, and the Internet, it’s become a common saying that the world is getting smaller. But Facebook and YouTube are only the beginning. Thanks to interracial marriage, we’ve shrunk the world to the size of a family. And now, in the form of our children’s DNA, it has been squeezed inside the nucleus of the tiny human cell.

How the Giraffe Didn’t Get His Long Neck

It’s the early 19th century, before Darwin’s Origin of Species. Before Mendel’s peas and Watson and Crick’s double helix. Scientists are struggling with the big questions of inheritance and reproduction without the aid of modern scientific methods. In this vacuum of concrete information, odd theories gain traction – some based on racial or social agendas, others on intuition or supposition.

Lamarckism, or soft inheritance, was one of the more pervasive of these ideas. According to the theory, organisms can inherit acquired traits. In the days before Darwin’s evolutionary theory, Lamarckism helped explain why organisms were so well adapted to their environments. Take the example of the giraffe’s long neck. A giraffe of yore (when giraffes had shorter necks) had to stretch its neck to reach the luscious leaves further up on tree branches. All that stretching lengthened its neck a little, and this longer neck was passed on to its offspring, who in turn stretched their necks and sired offspring who could reach even higher and munch the choicest leaves. It went on like this until giraffes were tall enough that they didn’t have to strain to reach leaves anymore.

It was a neat explanation that appealed to many 19th century scientists; even Darwin occasionally made use of it. But the theory had a nasty side as well. People applied it to humans and used it to explain differences between races or socioeconomic classes, calling the phenomenon degeneration. The mental and physical effects of years spent boozing and behaving badly would be passed down from father to son to grandson, each successively worse than his predecessor as the collective sum of each reckless lifetime added up. There was a technical term for the poor souls who wound up literally inheriting the sins of their fathers: degenerates. Certain scientists (or pseudoscientists) of the era, such as Benedict Morel and Cesare Lombroso, used the ideas of soft inheritance and degeneration to explain how violence, poverty, and criminality were heritable and could be categorized and studied.

Lamarckism, in the hands of Morel and others, offered a credible explanation of why the son of an alcoholic was more likely to be an alcoholic himself. But it did so by implying that the poor, the miserable, the suffering were inherently inferior to those with better, healthier (and probably wealthier) lifestyles. The poor were genetically degenerate, and they had no one to blame but themselves.

Thank god, thank god, Lamarckism and its corollary, degeneration, were debunked. By the 20th century, scientists knew that inheritance didn’t work that way. Our genetic information isn’t changed by what we do during our lifetimes. Besides, our sex cells are segregated from the other cells in our bodies. We don’t descend from our mothers, subject to all the stresses, strains, and yes, even boozing that their brains and bodies may have experienced. Instead, we descend from their ovaries. And thankfully, those things are well protected.

Only there’s a catch. In the last few decades, we’ve learned that while Lamarckism isn’t correct, it isn’t entirely wrong either. We’ve learned this through the field of epigenetics (literally, above genetics). This burgeoning field has helped us understand why the causes of so many heritable diseases still elude us, nearly a decade after we sequenced the human genome. Epigenetics adds untold complexity to an already complex genome. Some of its mechanisms are transient, others last a lifetime, but they all regulate gene expression and are necessary for normal growth and development. Thanks to them, females inactivate one of their X chromosomes (so women don’t get a double dose of proteins from that set of genes). Epigenetic mechanisms also oversee cellular differentiation, the process by which embryonic cells containing identical genetic material become skin cells, hepatocytes, neurons, and every other diverse cell type in the human body.

It now appears that epigenetic factors play an enormous role in human health. And what we do in our lives, the choices we make, affect our epigenome. Exposure to chemicals, stressors, or dietary changes can cause long-lasting tags to sit on our DNA or chromatin, controlling which genes are read and transcribed into proteins. For example, chronic cocaine use causes lasting epigenetic changes in the nucleus accumbens, a brain area linked to addiction. These changes boost plasticity and drug-related gene expression, which in turn probably contribute to the reinforcing effects of the drug.

But that’s not all. Epigenetic effects can span generations. No, the hardships of your parents’ lifetimes aren’t literally passed on to you in a cumulative fashion, giving you that longer neck or boozier disposition that Lamarckism might predict. Nonetheless, what your parents (and even your grandmother) did before you were born can be affecting your epigenome today.

It’s pretty wild stuff. Even if you’ve never met your maternal grandmother, even if she died long before your birth, her experiences and behavior could be affecting your health. First of all, the prenatal environment your mother experienced can have epigenetic effects on her that then propagate to the next generation (you). Moreover, all the eggs a female will ever make have already formed in her ovaries by the time she’s born. They may not be mature, but they are there, DNA and all. I think that’s a pretty amazing transgenerational link. It means that half the strands of DNA that wound up becoming you were initially made inside your grandmother’s body. As science reveals the power of the prenatal environment, evidence is mounting that even what your grandmother ate during your mother’s gestation, and whether she suffered hardships like famine, can alter your own risk for heart disease or diabetes.

Luckily, epigenetic gene regulation is softer and less absolute than its cousin Lamarckism. It is reversible and it can’t accumulate, generation upon generation, to create a degenerate class. The science of today is more humane than the old guys predicted, but it doesn’t let us off the hook. Epigenetics should remind us that we must be thoughtful in how we live. Our choices matter, for ourselves and for our offspring. We don’t yet understand how epigenetic mechanisms control our health and longevity, but that isn’t stopping our bodies from making us pay for what we do now.
