Seeds of Science

OCTOBER, 1889. Scientists flocked to Berlin for the annual meeting of the German Anatomical Society. The roster read like a who’s who of famous scientists of the day.

Into the fray marched a little-known Spaniard who’d spent years in Valencia and, later, Barcelona improving upon a method that made neurons visible under a microscope. Thanks to his patient tinkering, the Spaniard could see neurons in all their delicate, branching intricacy. He wanted to share his discoveries with other scientists. As he’d later say, he “gathered together for the purpose all my scanty savings and set out, full of hope, for the capital of the German Empire.”

In those days, scientific meetings were different from the parade of slideshows and poster sessions that they are today. The scientists at the 1889 meeting first read aloud from their papers and then took to their microscopes for demonstrations. The Spaniard unpacked his specimens and put them under several microscopes for the circulating scientists to view. Few came to see, in part because they expected little from a Spaniard. Spain was no scientific powerhouse. It lacked the scientific infrastructure and resources of countries like Germany, England, and France. What could one of its humble scientists possibly contribute to the meeting?

For the few curious gents who did stop by his demonstration, the Spaniard described his technique in broken French. Then he stepped aside and let them peer into the microscopes. Those who did became converts. The specimens spoke for themselves. Clear and complete, they revealed the intricate microarchitecture of neural structures like the retina and cerebellum.

Prominent German anatomists immediately adopted his technique and the Spaniard’s name quickly became known throughout the scientific community.

That name was Santiago Ramón y Cajal.

Ask any neuroscientist for his or her hero in the field and you are likely to hear this very name. Many consider him the founder of neurobiology as we know it today. The observations he made with his improved technique for seeing neurons allowed him to resolve a major controversy of the time and show that neurons are separate cells (as opposed to one huge, connected net). For his work, he won the Nobel Prize in Physiology or Medicine in 1906.

In short, he was an amazing guy who did amazing things – even though he wasn’t born in a wealthy nation known for science. Luckily, Cajal was able to get the tools and resources he needed to do his work. But what if he’d lived elsewhere, somewhere without the funds or equipment he needed? How far would that have set neuroscience back?

When I recently read an account of Cajal’s visit to Berlin, I found myself asking these questions. They reminded me of a Boston-based organization that is trying to equip the Cajals of today. The organization, a non-profit called Seeding Labs, partners with scientists, universities, and biomedical companies to equip stellar labs around the globe. (Full disclosure: The founder of Seeding Labs is the daughter of a family friend, which is how I first learned about the organization.)

The group’s core idea makes a lot of sense. Well-funded labs in the U.S. and other wealthy nations tend to update to newer models of their equipment often. These labs often discard perfectly functional older models that would be invaluable to scientists in developing nations. I’ve witnessed this kind of waste at major American universities. In the rush of doing science, people don’t have the time or energy to find new homes for their old autoclaves. They don’t even realize there’s a reason to try. While Seeding Labs now runs several programs to advance science in developing nations, its original aim was simply to turn one lab’s trash into another lab’s treasure.

I’m sure some struggling postdoc or assistant professor will read this post and scoff. Why devote energy to helping scientists in developing nations when we have a glut of scientists and a dearth of grants right here at home? It’s certainly true that research funding in America has tanked in recent years – a fact that needs to change. But in some countries the need is so great that a secondhand centrifuge could mean the difference between disappointment and discovery. That’s a pretty decent return on investment.

Here’s another benefit: labs in developing nations may be studying different problems than we are. They might focus on addressing local health or environmental concerns that we aren’t even aware of. So while scientists in wealthy nations find themselves racing to publish about well-trodden topics before competing labs, people in other countries may be researching crucial problems that wouldn’t otherwise be addressed.

And who knows? Perhaps these scientists are a good investment, in part, because of their relative isolation. Maybe a little distance from the scientific fray promotes ingenuity, creativity, and some good-old-fashioned tinkering. It certainly worked for Cajal.

____

Source: Stevens, Leonard A. Explorers of the Brain. Alfred A. Knopf, New York, 1971.

First photo credit: baigné par le soleil on Flickr, used via Creative Commons license

Second photo credit: Anonymous [Public domain], via Wikimedia Commons

Modernity, Madness, and the History of Neuroscience


I recently read a wonderful piece in Aeon Magazine about how technology shapes psychotic delusions. As the author, Mike Jay, explains:

Persecutory delusions, for example, can be found throughout history and across cultures; but within this category a desert nomad is more likely to believe that he is being buried alive in sand by a djinn, and an urban American that he has been implanted with a microchip and is being monitored by the CIA.

While delusional people of the past may have fretted over spirits, witches, demons and ghouls, today they often worry about wireless signals controlling their minds or hidden cameras recording their lives for a reality TV show. Indeed, reality TV is ubiquitous in our culture and experiments in remote mind-control (albeit on a limited scale) have been popping up recently in the news. As psychiatrist Joel Gold of NYU and philosopher Ian Gold of McGill University wrote in 2012: “For an illness that is often characterized as a break with reality, psychosis keeps remarkably up to date.”

Whatever the time or the place, new technologies are pervasive and salient. They are on the tips of our tongues and, eventually, at the tips of our fingers. Psychotic or not, we are all captivated by technological advances. They provide us with new analogies and new ways of explaining the all-but-unexplainable. And where else do we attempt to explain the mysteries of the world, if not through science?

As I read Jay’s piece on psychosis, it struck me that science has historically had the same habit of co-opting modern technologies for explanatory purposes. In the case of neuroscience, scientists and physicians across cultures and ages have invoked the innovations of their day to explain the mind’s mysteries.

For instance, the science of antiquity was rooted in the physical properties of matter and the mechanical interactions between objects. Around the 7th century BC, empires began constructing great aqueducts to bring water to their growing cities. The engineering challenge of the day was to control and guide the flow of water across great distances. It was in this scientific milieu that the ancient Greeks devised a model for the workings of the mind. They believed that a person’s thoughts, feelings, intellect and soul were physical stuff: specifically, an invisible, weightless fluid called psychic pneuma. Around 200 AD, a physician and scientist of the Roman Empire (known for its masterful aqueducts) would revise and clarify the theory. The physician, Galen, believed that pneuma filled the brain cavities called ventricles and circulated through white matter pathways in the brain and nerves in the body, just as water flows through a tube. As psychic pneuma traveled throughout the body, it carried sensation and movement to the extremities. Although the idea may sound farfetched to us today, this model of the brain persisted for more than a millennium and influenced Renaissance thinkers including Descartes.

By the 18th century, however, the science world was abuzz with two strange new forces: electricity and magnetism. At the same time, physicians and anatomists began to think of the brain itself as the stuff that gives rise to thought and feeling, rather than a maze of vats and tunnels that move fluid around. In the 1790s, Luigi Galvani’s experiments zapping frog legs showed that nerves communicate with muscles using electricity. So in the 19th century, just as inventors were harnessing electricity to run motors and light up the darkness, scientists reconceived the brain as an organ of electricity. It was a wise reconception, supported by experiments but also driven by the technical advances of the day.

Science was revolutionized once again with the advent of modern computers in the 1940s and ‘50s. The new technology sparked a surge of research and theories that used the computer as an analogy for the brain. Psychologists began to treat mental events like computer processes, which can be broken up and analyzed as a set of discrete steps. They equated brain areas to processors and neural activity in these areas to the computations carried out by computers. Just as computers rule our modern technological world, this way of thinking about the brain still profoundly influences how neuroscience and psychology research is carried out and interpreted. Today, some labs cut out the middleman (the brain) entirely: results from computer models of the brain are regularly published in neuroscience journals, sometimes without any data from an actual physical brain.
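
To make “computer model of the brain” a little more concrete, here is a minimal sketch of one of the simplest such models, a leaky integrate-and-fire neuron stepped forward in discrete time. It is my own illustration rather than anything from the research described above, and the parameter values are placeholders.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.054, v_reset=-0.070, resistance=1e7):
    """Return the membrane voltage trace and spike times for an input current trace (in amps)."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest and is pushed up by the input current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_thresh:              # crossing threshold counts as a "spike"
            spikes.append(step * dt)
            v = v_reset                # reset after the spike
        voltages.append(v)
    return np.array(voltages), spikes

# Example: one second of a constant 2 nA input.
trace, spike_times = simulate_lif(np.full(1000, 2e-9))
print(f"{len(spike_times)} spikes in 1 s of simulated input")
```

Toy models like this one treat a neuron as a small computation, which is exactly the kind of analogy the computer age made irresistible.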

I’m sure there are other examples from the history of neuroscience in general and certainly from the history of science as a whole. Please comment and share any other ways that technology has shaped the models, themes, and analogies of science!

Additional sources:

Crivellato E & Ribatti D (2007) Soul, mind, brain: Greek philosophy and the birth of neuroscience. Brain Research Bulletin 71:327-336.

Karenberg A (2009) Cerebral Localization in the Eighteenth Century – An Overview. Journal of the History of the Neurosciences, 18:248-253.

_________

Photo Credit: dominiqueb on Flickr, available through Creative Commons

Mother’s Ruin, Moralists, and the Circuitous Path of Science


Update: Since posting this piece, I’ve come across a paper that questions ancient knowledge about the effects of prenatal alcohol exposure. In particular, the author makes a compelling argument that the biblical story mentioned below has nothing to do with the safety of drinking wine while pregnant. Another paper (sorry, paywall) suggests that the “rhetoric of rediscovery” about the potential harm of alcohol during pregnancy was part of a coordinated attempt by “moral entrepreneurs” to sell a moralist concept to the American public in the late 1970s. All of which goes to show: when science involves controversial topics, its tortuous path just keeps on twisting.

If you ask someone to draw you a roadmap of science, you’re likely to get something linear and orderly: a one-way highway, perhaps, with new ideas and discoveries converging upon it like so many on-ramps. We like to think of science as something that slowly and deliberately moves in the right direction. It doesn’t seem like a proper place for off-ramps, not to mention detours, dead-ends, or roundabouts.

In reality, science is messy and more than a little fickle. As I mentioned in the last post, research is not immune to fads. Ideas fall in and out of fashion based on the political, financial, and social winds of the time. I’m not just talking about wacky ideas either. Even the idea that drinking during pregnancy can harm a developing fetus has had its share of rises and falls.

The belief that drinking while pregnant is harmful has been around since antiquity, popping up among the Ancient Greeks and even appearing in the Old Testament when an angel instructs Samson’s mother to abstain from alcohol while pregnant. Yet the belief was far from universal across different epochs and different peoples. In fact, it took a special kind of disaster for England and, in turn, America to rediscover this idea in the 18th century. The disaster was an epidemic . . . of people drunk on gin.

By the close of the 17th century, bickering between England and France caused the British to restrict the import of French brandy and encourage the local production of gin. Soon gin was cheap and freely available to even the poor and working classes. The Gin Epidemic was underway. Rampant drunkenness became a fact of life in England by 1720 and would persist for several decades after. During this time, gin was particularly popular among the ladies – a fact that earned it the nickname “Mother’s Ruin.”

Soon after the start of the Gin Epidemic, a new constellation of abnormalities became common in newborns. Physicians wondered if heavy prenatal exposure to alcohol disrupted fetal development. In 1726, England’s College of Physicians argued that gin was “a cause of weak, feeble and distempered children.” Other physicians noted the rise in miscarriages, stillbirths, and early infant mortality. And by the end of this gin-drenched era, Britain’s scientific community had little doubt that prenatal alcohol could irreversibly harm a developing fetus.

The notion eventually trickled across the Atlantic Ocean and took hold in America. By the early 19th century, American physicians like Benjamin Rush began to discourage the widespread use of alcohol-based treatments for morning sickness and other pregnancy-related ailments. By the middle of the century, research on the effects of prenatal alcohol exposure had become a talking point for the growing temperance movement. Medical temperance journals sprang up with names like Journal of Inebriety and Scientific Temperance Journal. Soon religious and moralistic figures were using the harmful effects of alcohol on fetal development to bolster their claims that all alcohol is evil and should be banned. They often couched the findings in inflammatory language, full of condemnations and reproach. In the end, their tactics worked. The 18th Amendment to the U.S. Constitution was ratified in 1919, outlawing the production, transportation, and sale of alcohol on American soil.

When the nation finally emerged from Prohibition more than thirteen years later, it had fundamentally changed. People were disillusioned with the temperance movement and wary of the moralistic rhetoric that had once seemed so persuasive. They discounted the old familiar lines from teetotal preachers – including those about the harms of drinking while pregnant. Scientists rejected studies published in medical temperance journals and began to deny that alcohol was harmful during pregnancy. In 1942, the prestigious Journal of the American Medical Association published a response to a reader’s question about drinking during pregnancy which said that even large amounts of alcohol had not been shown to be harmful to the developing human fetus. In 1948, an article in The Practitioner recommended that pregnant women drink alcohol with meals to aid digestion. Science was, in essence, back to square one yet again.

It wasn’t until 1973 that physicians rediscovered and named the constellation of features that characterize infants exposed to alcohol in the womb. The disease, fetal alcohol syndrome, is now an accepted medical phenomenon. Modern doctors and medical journals now caution women to avoid alcohol while pregnant. After a few political and religious detours, we’ve finally made it back to where we were in 1900. That’s the funny thing about science: it isn’t always fast or direct or immune to its cultural milieu. But if we all just have faith and keep driving, we’re bound to get there eventually. I’m almost sure of it.

______

Photo Credit: Gin Lane by William Hogarth 1751 (re-engraving by Samuel Davenport circa 1806). Image in public domain and obtained from Wikipedia.

A New America of Mutts?

I recently wrote about my biracial daughter and public assumptions about inheritance for the blog DoubleXScience. Nearly the same day, columnist David Brooks’ op-ed piece, “A Nation of Mutts,” appeared in The New York Times. As you might imagine, I read it with interest.

In his column, Brooks writes about how Americans with long European roots are being outnumbered by immigrants from elsewhere in the world. Add to that racial intermarriage and mixed-race offspring, and you’re left with what Brooks calls the coming New America. What will this New America look like? Brooks is happy to venture guesses, predicting how the complex forces of socioeconomics, education, ethnicity and heritage may play out in coming generations. Among these predictions: that America will become “a nation of mutts, a nation with hundreds of fluid ethnicities from around the world, intermarrying and intermingling.” The piece sparked an outcry from readers and an online conversation via social media, much of it over his use of the term mutt. While it was obviously an unwise and insensitive word for him to use, I think this was the lesser of the problems with his piece.

According to New York Times public editor Margaret Sullivan, columnists are supposed to stir things up. But in “A Nation of Mutts,” Brooks merely takes a centuries-old argument, injects it with Botox, squeezes it into skinny jeans, and calls it something new.

He begins by telling us that “American society has been transformed” as increasing numbers of immigrants have come to the U.S. in recent decades. He adds that, “up until now, America was primarily an outpost of European civilization” with immigrants who came from Northern, Western, Southern, or Central Europe (depending on the era) but all “with European ideas and European heritage.” That is now changing. Brooks tells us that European-American five-year-olds are already a minority. We have thirty years, tops, before Caucasians will be the minority in America overall.

What strikes me is Brooks’ simplistic picture of racial and cultural differences. He portrays America’s past immigrants from Europe as a monolithic, homogeneous bunch (against which he will compare the diverse immigrants of today) when of course this is a straw man. Ask anyone at a European soccer championship match or at the Eurozone bailout negotiations whether Europeans all have the same “European ideas and European heritage.” On second thought, probably better you don’t.

Americans haven’t historically thought of all Europeans as similar or even equal. Take the 19th century Nativists, or Know-Nothings, who thought that immigrants were ruining America. What exotic nation supplied these immigrants? The Tropics of Ireland.

Our country’s long history of interracial children aside, Caucasians from all over Europe have been intermarrying in America for centuries. I have American ancestors dating back to colonial times and am part German and part British with at least one Frenchman and one Scot thrown in for good measure. Why does David Brooks consider my daughter a mutt yet doesn’t consider me one? Because my ancestors all had more or less the same skin color while my husband is several shades darker than me.

So what is David Brooks really recounting when he writes about the coming nation of mutts? What’s so different about the immigrants of today? Mostly superficial details of appearance. Brooks’ New America is based on the preponderance of pigments in skin, the shape and slope of eyes, the texture of hair. His seemingly profound comment is about the spread of a handful of genes that create innocuous (but visible) proteins like melanin. Big frickin’ deal.

Ultimately, David Brooks is guilty of recycling the same tired old tune: immigrants are changing America and who knows what it might become when they’re done with it? Of course immigrants change our national demographics and cultural melange. Each generation has wrestled with this self-evident fact for different reasons and in different ways. But if there is anything constant about America’s history, it is the presence of immigration and continuous change. Which means that Brooks was wrong when he said that a New America is coming. It has been here all along.

___

Photo credit: Steve Baker on Flickr

Remains of the Plague

The history of science is littered with bones. Since antiquity, humans have studied the remains of the dead to understand the living. The practice is as common now as ever; only the methods have changed. In recent years, high-tech analyses of human remains have solved mysteries ranging from our ancestors’ prehistoric mating patterns to the cause of Beethoven’s death. The latest example of this morbid scientific tradition can be found in the e-pages of this month’s PLOS Pathogens. The colorful cast of characters includes European geneticists, a handful of teeth, a 6th century plague, and the US Department of Homeland Security.

Although the word plague is often used as a synonym for disease, plague actually refers to a particular type of illness caused by the bacterium Yersinia pestis. Rampant infection by Y. pestis was responsible for a recent pandemic in the 19th to 20th centuries. Before that it caused the 14th to 17th century pandemic that included the epidemic known as the Black Death.

Yet the pestilence of pestis may have swept across human populations long before the Black Death. According to historical records, a terrible pandemic killed people from Asia to Africa to Europe between the 6th and 8th centuries. It struck the Roman Empire under the watch of Emperor Justinian I, who contracted the disease himself but survived. The pandemic now bears his name: the Justinianic Plague. But was Justinian’s malady really a plague or has history pinned the blame on the wrong bacterium? A group of researchers in Munich decided to find out.

How?

By digging up ancient graves, of course. And helping themselves to some teeth.

The ancient graves were in an Early Medieval cemetery called Aschheim in the German state of Bavaria. The site was a strange choice; the authors reveal in their paper that the historical record shows no evidence that the Justinianic Plague reached Bavaria. However, the site was conveniently located within driving distance of most of the study’s authors. (It’s always easiest to do your gravedigging closer to home.) The authors did have solid evidence that the graves were from the 6th century and that each grave contained two or more bodies (a common burial practice during deadly epidemics). In total, the group dug up 12 graves and collected teeth from 19 bodies.

The scientists took the teeth back to their labs and tested them for a stretch of DNA unique to Y. pestis. Their logic: if the individuals died from infection by Y. pestis, their remains should contain ample DNA from the bacterium. Of course, some of this DNA would have deteriorated over the course of 1.5 millennia. The scientists would have to make do with what they found. They used three different methods to amplify and detect the bacterial DNA, but they found a reliably large amount of it only in the teeth of one individual, a body they affectionately nicknamed A120. They genotyped the Y. pestis DNA found in A120 to see how the bacterial strain compared with other versions of the bacterium (including those that caused the Black Death and the 19th-20th century plague pandemic). The analysis showed that the Justinianic strain was an evolutionary precursor to the strain that caused the Black Death. Like the strains that sparked the second and third pandemics, this strain bore the genetic hallmarks of Y. pestis from Asia, suggesting that all three plague pandemics spread from the East.

The authors write that they have solved their historical mystery.

“These findings confirm that Y. pestis was the causative agent of the Justinianic Plague and should end the controversy over the etiological agent of the first plague pandemic.”

Ordinarily, the discussion sections of scientific papers are littered with qualifiers and terms like might be and suggestive. Not so here, even though the authors’ conclusion explains a phenomenon that killed many millions of people worldwide based on data from the decomposing remains of a single person who lived in a region that historians haven’t connected with the pandemic. In most branches of science, sweeping conclusions can only be made based on large and meticulously selected samples. In genetics, such rules can be swept aside. It is its own kind of magic. If you know how to read the code of life, you can peer into the distant past and divine real answers based on a handful of ancient teeth.

As it turns out, the study’s result is more than a cool addition to our knowledge of the Early Middle Ages. Plague would make a terrible weapon in the hands of a modern bioterrorist. That’s why the US Department of Homeland Security is listed as one of the funding sources for this study. So the next time you hear about your tax dollars hard at work, think of Bavarian graves, ancient teeth, and poor old A120.

_____

Photo credit: Dallas Krentzel


Harbeck M, Seifert L, Hansch S, Wagner DM, Birdsell D, Parise KL, Wiechmann I, Grupe G, Thomas A, Keim P, Zoller L, Bramanti B, Riehm JM, Scholz HC (2013). Yersinia pestis DNA from skeletal remains from the 6th century reveals insights into Justinianic Plague. PLOS Pathogens. DOI: 10.1371/journal.ppat.1003349

Pb on the Brain


I’ve got lead on my mind. Lead the element, not the verb; the toxic metal that used to grace every gas tank and paint can in this grand country of ours. For the most part we’ve stopped spewing lead into our environment, but the lead of prior generations doesn’t go away. It lingers on the walls and windows of older buildings, on floors as dust, and in the soil. These days it lingers in my thoughts as well.

I started worrying about lead when my daughter became a toddler and began putting everything in her mouth. I fretted more when I learned that lead is far more damaging to young children than was previously thought. Even a tiny amount of it can irreversibly harm a child’s developing brain, leading to lower IQs, attention problems and behavioral disorders. You may never even see the culprit; lead can sit around as microscopic dust, waiting to be inhaled or sucked off of an infant’s fingers.

Public health programs use blood lead levels (BLLs) to evaluate the amount of lead in a child’s system and decide whether to take preventive or medical action. In the 1960s, only BLLs above 60 μg/dL were considered toxic in children. That number has been creeping downward ever since. In 1985 the CDC’s stated blood lead level of concern became 25 μg/dL, and in 1991 it went down to 10 μg/dL. But last year the CDC moved the cutoff down to 5 μg/dL and got rid of the term “level of concern.” That’s because scientists now believe that any amount of lead is toxic. In fact, it seems as if lead’s neurotoxic effects are most potent at BLLs below 5 μg/dL. In other words, a disproportionately large amount of the brain damage occurs at the lowest doses. Recent studies have shown subtle intellectual impairments in kids with BLLs as low as 2 μg/dL (which is roughly the mean BLL of American preschoolers today). All great reasons for parents to worry about even tiny exposures to lead, no?

Yes. Absolutely. Parents never want to handicap their children, even if only by an IQ point or two. But here’s what’s crazy: nearly every American in their fifties, forties, or late thirties today would have clocked in well over the CDC’s current cutoff when they were little. The average BLL of American preschoolers in the late ‘70s was 15 μg/dL – and 88% had BLLs greater than 10 μg/dL.
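
For what it’s worth, here is a quick back-of-the-envelope sketch using only the numbers quoted above (the 2012 date for the 5 μg/dL reference value is my addition). It simply checks how a typical late-’70s preschooler would have been judged under each era’s cutoff.

```python
# Cutoffs quoted in the text above: the approximate year each took effect,
# mapped to the blood lead level (µg/dL) that triggered concern.
CDC_CUTOFFS_UG_DL = {1960: 60, 1985: 25, 1991: 10, 2012: 5}

LATE_70S_AVERAGE = 15  # rough mean BLL of American preschoolers in the late '70s

for year, cutoff in sorted(CDC_CUTOFFS_UG_DL.items()):
    verdict = "over" if LATE_70S_AVERAGE > cutoff else "under"
    print(f"{year}: cutoff {cutoff} µg/dL -> a typical late-'70s preschooler was {verdict} it")
```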

These stats made me wonder if whole generations of Americans are cognitively and behaviorally impaired from lead poisoning as children. Have we been blaming our intellectually underwhelming workforce on a mismanaged education system, cultural complacency, or the rise of television and video games when we should have been blaming a toxic metal element?

I was sure I wasn’t the first person to wonder about the upshot of poisoning generations of Americans. And lo and behold, a quick Google search led me to this brilliant article on Mother Jones from January. The piece chronicles a rise in urban crime that began in the ‘60s and fell off precipitously in the early-to-mid ‘90s nationwide. The author, Kevin Drum, walks readers through very real evidence that lead fumes from leaded gasoline were a major cause of the rise in crime (and that increased regulation restricting lead in gasoline could be credited for the sudden drop-off).

The idea certainly sounds far-fetched: generations of city-dwellers were more prone to violence as adults because they breathed high levels of lead fumes when they were kids. It doesn’t seem possible. But when you put the pieces together it’s hard to imagine any other outcome. We know that children of the ‘50s, ‘60s, and ‘70s had BLLs high enough to cause irreversible IQ deficits and behavioral problems (of which aggression and poor impulse control are particularly common). Why is it so hard to imagine that more of these children behaved violently when they became adults?

In the end, this terrible human experiment in mass poisoning has left me pondering two particular questions. First, what does it mean for generations of children to be, in a sense, retroactively damaged by lead? At the time, our levels were considered harmless, but now we know better. Does knowing that now explain anything about recent history and current events? Does it explain the remarkable intransigence of certain politicians or the bellicosity of certain talk show hosts, athletes, or drivers with a road rage problem? Aside from the crime wave, what other sweeping societal trends might be credited to the poisoning of children past? How might history have played out differently if we had all been in our right minds?

Finally, I’ve been thinking a lot about the leads and asbestoses and thalidomides of today. Pesticides? Bisphenol A? Flame retardants? What is my daughter licking off of those toys of hers and how is it going to harm her twenty years down the line? This is not just a question for parents. Think crime waves. Think lost productivity and innovation. Today’s children grow up to be tomorrow’s adults. Someday when we are old and convalescing they’ll take the reins of our society and drive it heaven-knows-where. That makes child health and safety an issue for us all. We may never even know how much we stand to lose.

_____

Photo credit: Zara Evens

Halfsies!

My husband spotted another one yesterday. A half-Indian, half-Caucasian blend. The woman had an Indian first and last name, but her features were more typical of a Persian ethnicity than either Indian or white. My husband overheard her describing her heritage and smiled. These days, with a half-Indian, half-white baby on the way, we’re hungry for examples of what our baby might look like. We’ve found a few examples among our acquaintances and some of my husband’s adorable nieces and nephews, not to mention the occasional Indian-Caucasian celebrity like Norah Jones. We think our baby will be beautiful and perfect, of course, although we’re doubtful that she’ll look very much like either one of us.

Many couples and parents-to-be are in the same position we are. In the United States, at least 1 in 7 marriages takes place between people of different races or ethnicities, and that proportion only seems to be increasing. It’s a remarkable statistic, particularly when you consider that interracial marriage was illegal in several states less than 50 years ago. (See the story of Loving Day for details on how these laws were finally overturned.) In keeping with the marriage rates, the number of American mixed race children is skyrocketing as well. It’s common to be, as a friend puts it, a “halfsie.” At least in urban areas like Los Angeles, being mixed race has lost the negative stigma it had decades ago and many young people celebrate their mixed heritages. Their unique combinations of facial and physical features can be worn with pride. But the mixture goes deeper than just the skin and eyes and hair.

At the level of DNA, all modern humans are shockingly similar to one another (and for that matter, to chimpanzees). However, over the hundreds of thousands of years of migrations to different climates and environments, we’ve accumulated a decent number of variant genes. Some of these differences emerged and hung around for no obvious reason, but others stuck because they were adaptive for the new climates and circumstances that different peoples found themselves in. Genes that regulate melanin production and determine skin color are a great example of this; peoples who stayed in Africa or settled in other locations closer to the Equator needed more protection from the sun while those who settled in sites closer to the poles may have benefited from lighter skin to absorb more of the sun’s scarce winter rays and stave off vitamin D deficiency.

In a very real way, the genetic variations endemic to different ethnic groups carry the history of their people and the environments and struggles that they faced. For instance, my husband’s Indian heritage puts him at risk for carrying a gene mutation that causes alpha thalassemia. If a person inherits two copies of this mutation (one from each parent), he or she will either die soon after birth or develop anemia. But inheriting one copy of the gene variant confers a handy benefit – it makes the individual less likely to catch malaria. (The same principle applies for beta thalassemia and sickle cell anemia found in other ethnic populations.) Meanwhile, my European heritage puts me at risk for carrying a genetic mutation linked to cystic fibrosis. Someone who inherits two copies of this gene will develop the debilitating respiratory symptoms of cystic fibrosis, but thanks to a handy molecular trick, those with only one copy may be less susceptible to dying from cholera or typhoid fever. As the theory goes, these potentially lethal mutations persist in their respective populations because they confer a targeted survival advantage.

Compared to babies born to two Indian or two Caucasian parents, our baby has a much lower risk of inheriting alpha thalassemia or cystic fibrosis, respectively, since these diseases require two copies of the mutation. But our child could potentially inherit one copy of each of these mutations, endowing her with some Superbaby immunity benefits but also putting her children at risk for either disease (depending on the ethnicity of her spouse).
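
For readers who like to see the arithmetic, here is a minimal sketch of the Mendelian reasoning behind that comparison. The carrier probabilities are made-up placeholders, not real population frequencies; only the structure of the calculation matters.

```python
def child_risk(p_mom_carrier, p_dad_carrier):
    """Probability that a child is affected by, or silently carries, a recessive
    mutation, given each parent's probability of being an unaffected carrier."""
    # An unaffected carrier parent passes the mutation on with probability 1/2.
    p_from_mom = p_mom_carrier * 0.5
    p_from_dad = p_dad_carrier * 0.5
    affected = p_from_mom * p_from_dad                 # needs one copy from each parent
    carrier = p_from_mom + p_from_dad - 2 * affected   # exactly one copy inherited
    return affected, carrier

# Hypothetical numbers: the mutation is common in only one parent's ancestral
# population (as with alpha thalassemia or cystic fibrosis in a mixed couple),
# versus appreciably common in both parents' population.
print(child_risk(p_mom_carrier=0.0, p_dad_carrier=0.04))   # mixed couple: affected risk is zero
print(child_risk(p_mom_carrier=0.04, p_dad_carrier=0.04))  # same-population couple: small but nonzero
```

With the placeholder numbers, the mixed-heritage child has no chance of inheriting two copies of either mutation but can still end up a carrier, which is the scenario described above.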

The rise in mixed race children will require changes down the road for genetic screening protocols. It will also challenge preconceived notions about appearance, ethnicity, and disease. But beyond these practical issues, there is something wonderful about this mixing of genetic variants and the many thousands of years of divergent world histories they represent. With the growth in air travel, communication, and the Internet, it’s become a common saying that the world is getting smaller. But Facebook and YouTube are only the beginning. Thanks to interracial marriage, we’ve shrunk the world to the size of a family. And now, in the form of our children’s DNA, it has been squeezed inside the nucleus of the tiny human cell.
