Did I Do That? Distinguishing Real from Imagined Actions

If you’re like most people, you spend a great deal of your time remembering past events and planning or imagining events that may happen in the future. While these activities have their uses, they also make it terribly hard to keep track of what you have and haven’t actually seen, heard, or done. Distinguishing between memories of real experiences and memories of imagined or dreamt experiences is called reality monitoring and it’s something we do (or struggle to do) all of the time.

Why is reality monitoring a challenge? To illustrate, let’s say you’re at the Louvre standing before the Mona Lisa. As you look at the painting, visual areas of your brain are busy representing the image with specific patterns of activity. So far, so good. But problems emerge if we rewind to a time before you saw the Mona Lisa at the Louvre. Let’s say you were about to head over to the museum and you imagined the special moment when you would gaze upon Da Vinci’s masterwork. When you imagined seeing the picture, you activated the same visual areas of the brain, in a pattern much like the one they would produce when you finally saw the masterpiece itself.*

When you finally return home from Paris and try to remember that magical moment at the Louvre, how will you be able to distinguish your memories of seeing the Mona Lisa from imagining her? Reality monitoring studies have asked this very question (minus the Mona Lisa). Their findings suggest that you’ll probably use additional details associated with the memory to separate the mnemonic wheat from the chaff. You might use memories of perceptual details, like how the lights reflected off the brushstrokes, or you might use details of what you thought or felt, like your surprise at the painting’s actual size. Studies find that people activate both visual areas (like the fusiform gyrus) and self-monitoring regions of the brain (like the medial prefrontal cortex) when they are deciding whether they saw or just imagined seeing a picture.

It’s important to know what you did and didn’t see, but another crucial and arguably more important facet of reality monitoring involves determining what you did and didn’t do. How do you distinguish memories of things you’ve actually done from those you’ve planned to do or imagined doing? You have to do this every day and it isn’t a trivial task. Perhaps you’ve left the house and headed to work, only to wonder en route if you’d locked the door. Even if you thought you did, it can be hard to tell whether you remember actually doing it or just thinking about doing it. The distinction has consequences. Going home and checking could make you late for work, but leaving your door unlocked all day could mean losing your possessions. So how do we tell the possibilities apart?

Valerie Brandt, Jon Simons, and colleagues at the University of Cambridge looked into this question and published their findings last month in the journal Cognitive, Affective, and Behavioral Neuroscience. For the first part of the experiment (the study phase), they sat healthy adult participants down in front of two giant boxes – one red and one blue – that each contained 80 ordinary objects. The experimenter would draw each object out of one of the two boxes, place it in front of the participant, and tell him or her to either perform or to imagine performing a logical action with the object. For example, when the object was a book, participants were told to either open or imagine opening it.

After the study phase, participants moved to an fMRI scanner. During the scans, they were shown photographs of all 160 of the studied objects and, for each item, were asked to indicate either 1) whether they had performed or merely imagined performing an action on that object, or 2) which box the object had been drawn from.** When the scans were over, the participants saw the pictures of the objects again and were asked to rate how much specific detail they’d recalled about encountering each object and how hard it had been to bring that particular memory to mind.

The scientists compared fMRI measures of brain activation during the reality-monitoring task (Did I use or imagine using that object?) with activation during the location task (Which box did this object come from?). One of the areas they found to be more active during reality monitoring was the supplementary motor area, a region involved in planning and executing movements of the body. Just as visual areas are activated for reality monitoring of visual memories, motor areas are activated when people evaluate their action memories. In other words, when you ask yourself whether you locked the door or just imagined it, you may be using details of motor aspects of the memory (e.g., pronating your wrist to turn the key in the lock) to make your decision.

The study’s authors also found greater activation in the anterior medial prefrontal cortex when they compared reality monitoring for actions participants performed with those they only imagined performing. The medial prefrontal cortex encompasses a respectable swath of the brain with a variety of functions that appear to include making self-referential judgments, or evaluating how you feel or think about experiences, sensations, and the like. Other experiments have implicated this or nearby areas in reality monitoring of visual memories. The study by Brandt and Simons also found that activation of this medial prefrontal region during reality-monitoring trials correlated with the number of internal details the participants said they’d recalled in those trials. In other words, the more details participants remembered about their thoughts and feelings during the past actions, the busier this area appeared to be. So when faced with uncertainty about a past action, the medial prefrontal cortex may be piping up about the internal details of the memory. I must have locked the door because I remember simultaneously wondering when my package would arrive from Amazon, or because I was also feeling sad about leaving my dog alone at home.

As I read these results, I found myself thinking about the topic of my prior post on OCD. Pathological checking is a common and often disruptive symptom of the illness. Although it may seem like a failure of reality monitoring, several behavioral studies have shown that people with OCD have normal reality monitoring for past actions. The difference is that people with checking symptoms of OCD have much lower confidence in the quality of their memories than others. It seems to be this distrust of their own memories, along with relentless anxiety, that drives them to double-check over and over again.

So the next time you find yourself wondering whether you actually locked the door, cut yourself some slack. Reality monitoring ain’t easy. All you can do is trust your brain not to lead you astray. Make a call and stick with it. You’re better off being wrong than being anxious about it – that is, unless you have really nice stuff.

_____

Photo credit: Liz (documentarist on Flickr), used via Creative Commons license

* Of course, the mental image you conjure of the painting is actually based on the memory of having seen it in ads, books, or posters before. In fact, a growing area of neuroscience research focuses on how imagining the future relies on the same brain areas involved in remembering the past. Imagination seems to be, in large part, a collage of old memories cut and pasted together to make something new.

**The study also had a baseline condition, used additional contrasts, and found additional activations that I didn’t mention for the sake of brevity. Check out the original article for full details.

Brandt, V., Bergström, Z., Buda, M., Henson, R., & Simons, J. (2014). Did I turn off the gas? Reality monitoring of everyday actions. Cognitive, Affective, & Behavioral Neuroscience, 14(1), 209-219. DOI: 10.3758/s13415-013-0189-z

The Slippery Question of Control in OCD

It’s nice to believe that you have control over your environment and your fate – that is until something bad happens that you’d rather not be responsible for. In today’s complex and interconnected world, it can be hard to figure out who or what causes various events to happen and to what degree you had a hand in shaping their outcomes. Yet in order to function, everyone has to create mental representations of causation and control. What happens when I press this button? Did my glib comment upset my friends? If I belch on the first date, will it scare her off?

People often believe they have more control over outcomes (particularly positive outcomes) than they actually do. Psychologists discovered this illusion of control in controlled experiments, but you can witness the same principle in many a living room now that March Madness is upon us. Of course, wearing your lucky underwear or sitting in your go-to La-Z-Boy isn’t going to help your team win the game, and the very idea that it might shows how easily one’s sense of personal control can become inflated. Decades ago, researchers discovered that the illusion of control is not universal. People suffering from depression tend not to fall for this illusion. That fact, along with similar findings in depression research, gave rise to the term depressive realism. Two recent studies now suggest that patients with obsessive-compulsive disorder (OCD) may also represent contingency and estimate personal control differently from the norm.

OCD is something of a paradox when it comes to the concept of control. The illness has two characteristic features: obsessions based on fears or regrets that occupy a sufferer’s thoughts and make him or her anxious, and compulsions, or repetitive and unnecessary actions that may or may not relieve the anxiety. For decades, psychiatrists and psychologists have theorized that control lies at the heart of this cycle. Here’s how the NIMH website on OCD describes it (emphasis is mine):

The frequent upsetting thoughts are called obsessions. To try to control them, a person will feel an overwhelming urge to repeat certain rituals or behaviors called compulsions. People with OCD can’t control these obsessions and compulsions. Most of the time, the rituals end up controlling them.

In short, their obsessions cause them distress and they perform compulsions in an effort to regain some sense of control over their thoughts, fears, and anxieties. Yet in some cases, compulsions (like sports fans’ superstitions) seem to indicate an inflated sense of personal control. Based on this conventional model of OCD, you might predict that people with the illness will either underestimate or overestimate their personal control over events. So which did the studies find? In a word: both.

The latest study, which appeared this month in Frontiers in Psychology, used a classic experimental design to study the illusion of control. The authors tested 26 people with OCD and 26 comparison subjects. The subjects were shown an image of an unlit light bulb and told that their goal was to illuminate the light bulb as often as possible. On each trial, they could choose to either press or not press the space bar. After they made their decision, the light bulb either did or did not light up. Their job was to estimate, based on their trial-by-trial experimentation, how much control they had over the light bulb. Here’s the catch: the subjects had absolutely no control over the light bulb, which lit up or remained dark according to a fixed sequence.*

After 40 trials, subjects were asked to rate the degree of control they thought they had over the illumination of the light bulb, ranging from 0 (no control) to 100 (complete control). Estimates of control were consistently higher for the comparison subjects than for the subjects with OCD. In other words, the people with OCD believed they had less control – and since they actually had no control, that means that they were also more accurate than the comparison subjects. As the paper points out, this is a limitation of the study: it can’t tell us whether patients are generally prone to underestimating their control over events or if they’re simply more accurate than comparison subjects. To distinguish between those possibilities, the study would have needed to include situations in which subjects actually did have some degree of control over the outcomes.
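
To make the idea of “no control” concrete, here is a minimal sketch of how objective contingency between an action and an outcome can be quantified, using the ΔP (delta-P) statistic: the probability of the outcome given the action minus the probability given no action. This is my own illustration, not the authors’ analysis, and the trial sequences below are invented.

```python
# A minimal sketch (not the authors' analysis) of the Delta-P contingency statistic:
# P(outcome | action) - P(outcome | no action). Trial data below are invented.

def delta_p(trials):
    """trials: list of (pressed, lit) booleans, one pair per trial."""
    lit_when_pressed = [lit for pressed, lit in trials if pressed]
    lit_when_not = [lit for pressed, lit in trials if not pressed]
    return (sum(lit_when_pressed) / len(lit_when_pressed)
            - sum(lit_when_not) / len(lit_when_not))

# A fixed outcome sequence in the spirit of the study: here the bulb lights on half
# the trials regardless of the response, so objective control is exactly zero.
no_control = [(True, True), (True, False), (False, True), (False, False)] * 10
print(delta_p(no_control))    # 0.0 -> any reported control above zero is illusory

# By contrast, a hypothetical condition with genuine partial control would look like this:
real_control = ([(True, True)] * 15 + [(True, False)] * 5
                + [(False, True)] * 5 + [(False, False)] * 15)
print(delta_p(real_control))  # 0.5 -> pressing really does raise the odds of light
```

Including conditions like the second one is what it would take to tell whether people with OCD underestimate control in general or are simply better calibrated when there is none.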

Why wasn’t the light bulb study designed to distinguish between these alternatives? Because the authors were expecting the opposite result. They had designed their experiment to follow up on a 2008 study that found a heightened illusion of control among people with OCD. The earlier study used a different test: subjects were shown either neutral pictures of household items or disturbing pictures of distorted faces. The experimenters encouraged the subjects to try to control the presentation of images by pressing buttons on a keyboard and asked them to estimate their control over the images three times during the session. However, just like in the light bulb study, the presentation of the images was fixed in advance and could not be affected by the subjects’ button presses.

How can two studies of estimated control in OCD have opposite results? It seems that the devil is in the details. Prior studies with tasks like these have shown that healthy subjects’ control estimates depend on details like the frequency of the preferred outcome and whether the experimenter is physically in the room during testing.  Mental illness throws additional uncertainty into the mix. For example, the disturbing face images in the 2008 study might have made the subjects with OCD anxious, which could have triggered a different cognitive pattern. Still, both findings suggest that control estimation is abnormal for people with OCD, possibly in complex and situation-dependent ways.

These and other studies indicate that decision-making and representations of causality in OCD are altered in interesting and important ways. A better understanding of these differences could help us understand the illness and, in the process, might even shed light on the minor rituals and superstitions that are common to us all. Sadly, like a lucky pair of underwear, it probably won’t help your team get to the Final Four.

_____

Photo by Olga Reznik on Flickr, used via Creative Commons license

*The experiment also manipulated reinforcement (how often the light bulb lit up) and valence (whether the lit bulb earned them money or the unlit bulb cost them money) across different testing sections, but I don’t go into that here because the manipulations didn’t affect the results.

Gillan, C.M., Morein-Zamir, S., Durieux, A.M., Fineberg, N.A., Sahakian, B.J., & Robbins, T.W. (2014). Obsessive-compulsive disorder patients have a reduced sense of control on the illusion of control task. Frontiers in Psychology, 5. PMID: 24659974

Known Unknowns

Why no one can say exactly how much is safe to drink while pregnant

I was waiting in the dining car of an Amtrak train recently when I looked up and saw that old familiar sign:

“According to the Surgeon General, women should not drink alcoholic beverages during pregnancy because of the risk of birth defects.”

One finds this warning everywhere: printed on bottles and menus or posted on placards at restaurants and even train cars barreling through Midwestern farmland in the middle of the night. The warnings are, of course, intended to reduce the number of cases of fetal alcohol syndrome in the United States. To that end, the Centers for Disease Control and Prevention (CDC) and the American Congress of Obstetricians and Gynecologists (ACOG) recommend that women avoid drinking any alcohol throughout their pregnancies.

Here’s how the CDC puts it:

“There is no known safe amount of alcohol to drink while pregnant.”

And here’s ACOG’s statement in 2008:

“. . . ACOG reiterates its long-standing position that no amount of alcohol consumption can be considered safe during pregnancy.”

Did you notice what they did there? These statements don’t actually say that no amount of alcohol is safe during pregnancy. They say that no safe amount is known and that no amount can be considered safe, respectively. Ultimately, these are statements of uncertainty. We don’t know how much is safe to drink, so it’s best if you don’t drink any at all.

Lest you think this is merely a reflection of America’s puritanical roots, check out the recommendations of the U.K.’s National Health Service. While they make allowances for the fact that some women choose to drink, they still advise pregnant women to avoid alcohol altogether. As they say:

“If women want to avoid all possible alcohol-related risks, they should not drink alcohol during pregnancy because the evidence on this is limited.”

Yet it seems odd that the evidence is so limited. The damaging effects of binge drinking on fetal development were known in the 18th century and the first modern description of fetal alcohol syndrome was published in a French medical journal nearly 50 years ago. Six years later, in 1973, a group of researchers at the University of Washington documented the syndrome in The Lancet. Even then, people knew the cause of fetal alcohol syndrome: alcohol. And in the forty years since, fetal alcohol syndrome has become a well-known and well-studied illness. NIH alone devotes more than $30 million annually to research in the field. So how come no one has answered the most pressing question (at least for pregnant women): How much is safe to drink?

One reason is that fetal alcohol syndrome isn’t like HIV. You can’t diagnose it with a blood test. Doctors rely on a characteristic pattern of facial abnormalities, growth delays and neural or mental problems – often in addition to evidence of prenatal alcohol exposure – to diagnose a child. Yet children exposed to and affected by alcohol during fetal development don’t always show all of these symptoms. Doctors and agencies now define fetal alcohol syndrome as the extreme end of a spectrum of disorders caused by prenatal alcohol exposure. The full spectrum, called fetal alcohol spectrum disorders (FASD), includes milder forms of the illness that involve subtler cognitive or behavioral problems and lack the classic facial features of the full-blown syndrome.

As you might imagine, milder cases of FASD are hard to identify. Pediatricians can miss the signs altogether. And there’s a fundamental difficulty in diagnosing the mildest cases of FASD. To put it crudely, if your child is slow, who’s to say whether the culprit is a little wine during pregnancy, genetics, too much television, too few vegetables, or god-knows-what-else? Unfortunately, identifying and understanding the mildest cases is crucial. These are the cases that worry pregnant women who drink lightly. They lie at the heart of the uncertainty voiced by the CDC, ACOG, and others. Most pregnant women would like to enjoy the occasional merlot or Sam Adams, but not if they thought it would rob their children of IQ points or otherwise limit their abilities – even just a little – down the line.

While it’s hard to pin down the subtlest cases in the clinic, scientists can still detect them by looking for differences between groups of children with different exposures. The most obvious way of testing this would be to randomly assign pregnant women to drink alcohol at different doses, but of course that experiment would be unethical and should never be done. Instead, researchers capitalize on the variability in how much women choose to drink during pregnancy (or at least how much they report that they drank, which may not always be the same thing). In addition to interviewing moms about their drinking habits, the scientists test their children at different ages and look for correlations between prenatal alcohol exposure and test performance.

While essential, these studies can be messy and hard to interpret. When researchers do find correlations between moderate prenatal alcohol exposure and poor test performance, they can’t definitively claim that the former caused the latter (although it’s suggestive). A mysterious third variable (say, maternal cocaine use) might be responsible for them both. On the flip side, interpreting studies that don’t find correlations is even trickier. It’s hard to show that one thing doesn’t affect another, particularly when you are interested in very small effects. To establish this with any confidence, scientists must show that it holds with large numbers of people and that they are using the right outcome measure (e.g., IQ score). FASD impairments can span language, movement, math skills, goal-directed behaviors, and social interactions. Any number of measures from wildly different tests might be relevant. If a given study doesn’t find a correlation between prenatal alcohol exposure and its outcome measure, it might be because the study didn’t test enough children or didn’t choose the right test to pick up the subtle differences between groups.
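
To see why a null result from a modest sample says little about a small effect, here is a toy simulation. It is purely illustrative: the exposure levels, effect size, and noise are invented and are not drawn from any of the studies discussed.

```python
# A toy simulation (illustrative only; all numbers invented) of why small studies
# often fail to detect a real but small effect of prenatal exposure on test scores.
import random

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

def simulate_study(n_children):
    """Each extra weekly drink shaves 2 points off an IQ-like score; noise dominates."""
    drinks = [random.uniform(0, 5) for _ in range(n_children)]
    scores = [100 - 2 * d + random.gauss(0, 15) for d in drinks]
    return pearson_r(drinks, scores)

random.seed(1)
for n in (50, 500, 5000):
    rs = sorted(simulate_study(n) for _ in range(200))
    wrong_sign = sum(r >= 0 for r in rs) / len(rs)
    print(f"n={n:>4}: observed r from {rs[0]:+.2f} to {rs[-1]:+.2f}, "
          f"studies finding no harm or the wrong sign: {wrong_sign:.0%}")

# With only 50 children per study, the observed correlation bounces around so much
# that some simulated studies see essentially no harm at all, even though a real
# (small) harmful effect is built into the data.
```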

When studies in humans get tricky, scientists often turn to animal models. FASD research has been no exception. These animal studies have helped us understand the physiological and biochemical mechanisms behind fetal alcohol syndrome, but they can’t tell us how much alcohol a pregnant woman can safely drink. Alcohol metabolism rates vary quite a bit between species. The sensitivity of developing neurons to alcohol may differ too. One study used computational modeling to predict that the blood alcohol level of a pregnant rat must be 10 times that of a pregnant human to wreak the same neural havoc on the fetus. Yet computational models are far from foolproof. Scientists simply don’t know precisely how a dose in a rat, monkey, or other animal would translate to a human mother and fetus.

And here’s the clincher: alcohol’s prenatal effects also differ between humans. Thanks to genetic differences, people metabolize alcohol at very different rates. The faster a pregnant woman clears alcohol from her system, the lower the exposure to her fetus. Other factors make a difference, too. Prenatal alcohol exposure seems to take a heavier toll on the fetuses of older mothers. The same goes for poor mothers, probably because of confounding factors like nutrition and stress. Taken together, these differences mean that if two pregnant women drink the same amount of alcohol at the same time, their fetuses might experience very different alcohol exposures and have very different outcomes. In short, there is no single limit to how much a pregnant woman can safely drink because every woman and every pregnancy is different.

As organizations like the CDC point out, the surest way to prevent FASD is to avoid alcohol entirely while pregnant. Ultimately, every expecting mother has to make her own decision about drinking based on her own understanding of the risk. She may hear strong opinions from friends, family, the blogosphere and conventional media. Lots of people will seem sure of many things and those are precisely the people that she should ignore.

When making any important decision, it’s best to know as much as you can – even when that means knowing how much remains unknown.

_____

Photo Credit: Uncalno Tekno on Flickr, used via Creative Commons license

Hurley, T.D., & Edenberg, H.J. (2012). Genes encoding enzymes involved in ethanol metabolism. Alcohol Research: Current Reviews, 34(3), 339-44. PMID: 23134050

Stoler, J.M., & Holmes, L.B. (1999). Under-recognition of prenatal alcohol effects in infants of known alcohol abusing women. The Journal of Pediatrics, 135(4), 430-6. PMID: 10518076

The End of History

I just read a wonderful little article about how we think about ourselves. The paper, which came out in January, opens with a tantalizing paragraph that I simply have to share:

“At every stage of life, people make decisions that profoundly influence the lives of the people they will become—and when they finally become those people, they aren’t always thrilled about it. Young adults pay to remove the tattoos that teenagers paid to get, middle-aged adults rush to divorce the people whom young adults rushed to marry, and older adults visit health spas to lose what middle-aged adults visited restaurants to gain. Why do people so often make decisions that their future selves regret?”

To answer this question, the study’s authors recruited nearly 20,000 participants from the website of “a popular television show.” (I personally think they should have told us which one. I’d imagine there are differences between the people who flock to the websites for Oprah, The Nightly News, or, say, Jersey Shore.)

The study subjects ranged in age from 18 to 68. For the experiment, they had to fill out an online questionnaire about their current personality, core values, or personal preferences (such as favorite food). Half of the subjects—those in the reporter group—were then asked to report how they would have filled out the questionnaire ten years prior, while the other half—those in the predictor group—were asked to predict how they would fill it out ten years hence. For each subject, the authors computed the difference between the subject’s responses for his current self and those for his reported past self or predicted future self. And here’s the clever part: they could compare participants across ages. For example, they could compare how an 18-year-old’s prediction of his 28-year-old future self differed from a 28-year-old’s report of his 18-year-old self. It sounds crazy, but they did some great follow-up studies to make sure the comparison was valid.
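
To make that clever part concrete, here is a minimal sketch of the comparison. It is my own illustration, not the authors’ code: the ratings are invented and the scoring is simplified to a mean absolute difference, whereas the real study used full personality, values, and preference measures.

```python
# A minimal sketch (not the authors' code) of the cross-sectional comparison:
# how much an 18-year-old PREDICTS changing by 28 versus how much a 28-year-old
# REPORTS having changed since 18. Ratings below are invented.

def change_score(current_self, other_self):
    """Mean absolute difference between current-self ratings and the
    reported-past-self or predicted-future-self ratings."""
    return sum(abs(c - o) for c, o in zip(current_self, other_self)) / len(current_self)

# (age, group, current-self ratings, past/future-self ratings) on a 1-5 scale
participants = [
    (18, "predictor", [4, 2, 5, 3], [4, 2, 4, 3]),  # expects to barely change by 28
    (28, "reporter",  [3, 4, 2, 5], [5, 2, 4, 2]),  # says she changed a lot since 18
]

for age, group, current, other in participants:
    print(f"{age}-year-old {group}: change score = {change_score(current, other):.2f}")

# If predicted change for the 18 -> 28 transition is reliably smaller than the change
# reported by people who have actually lived it, that gap is the End of History Illusion.
```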

The results show a remarkable pattern. People believe that they have changed considerably in the past, even while they expect to change little in the future. And while they tend to be pretty accurate in their assessment of how much they’ve changed in years past, they grossly underestimate how much they will change in the coming years. The authors call this effect The End of History Illusion. And it’s not just found in shortsighted teenagers or twenty-somethings. While the study showed that older people do change less than younger people, they still underestimate how much they will continue to change in the decade to come.

The End of History Illusion is interesting in its own right. Why are we so illogical when reasoning about ourselves – and particularly, our own minds? We all understand that we will change physically as we age, both in how well our bodies function and how they look to others. Yet we deny the continued evolution (or devolution) of our traits, values, and preferences. We live each day as though we have finally achieved our ultimate selves. It is, in some ways, a depressing outlook. As much as we may like ourselves now, wouldn’t it be more heartening to believe that we will keep growing and improving as human beings?

The End of History Illusion also comes with a cost. We are constantly making flawed decisions for our future selves. As the paper’s opening paragraph illustrated, we take actions today under the assumption that our future desires and needs won’t change. In a follow-up study, the authors even demonstrate this effect by showing that people would be willing to pay an average of $129 now to see a concert by their favorite band in ten years, while they would only be willing to pay an average of $80 now to see a concert by their favorite band from ten years back. Here, the illusion will only cost us money. In real life, it could cost us our health, our families, our future well-being.

This study reminded me of a book I read a while back called Stumbling on Happiness (written, it turns out, by the second author on this paper). The book’s central thesis is that we are bad at predicting what will make us happy and the whole thing is written in the delightful style of this paper’s opening paragraph. For those of you with the time, it’s worth a read. For those of you without time, I can only hope you’ll have more time in the future. With any luck we’ll all have more – more insight, more compassion, more happiness – in the decade to come.

_____

Photo credit: Darla Hueske

Quoidbach, J., Gilbert, D.T., & Wilson, T.D. (2013). The End of History Illusion. Science. DOI: 10.1126/science.1229294

The Demise of the Expert

These days, I find myself turning off the news while asking myself the same question. When did we stop valuing knowledge and expertise? When did impressive academic credentials become a political liability? When did the medical advice of celebrities like Jenny McCarthy and Ricki Lake become more trusted than that of government safety panels, scientists, and physicians? When did running a small business or being a soccer mom qualify a person to hold the office of president and make economic and foreign policy decisions?

As Rick Perry, the Republican front-runner for president, recently told us, “You don’t have to have a PhD in economics from Harvard to really understand how to get America back working again.” Really? Why not? It certainly seems to me that some formal training would help. And yet many in Congress pooh-poohed economists’ warnings about the importance of raising the debt ceiling and have insisted on decreasing regulations despite the evidence that this won’t help to improve our economy (and will further harm our environment). Meanwhile, man-made climate change is already affecting our planet. Natural disasters such as droughts and hurricanes are on the rise, just as scientists predicted. But we were slow to accept their warnings and have been slow to enact any meaningful policies to stem the course of this calamity.

The devaluation of expertise is puzzling enough, but perhaps more puzzling still is the timing. Never before in human history have we witnessed the fruits of expertise as we do today. Thanks to scientists and engineers, we rely on cell phones that wirelessly connect us to the very person we want to talk to at the moment we want to talk. In turn, these cell phones operate through satellites that nameless experts have set spinning in precise orbits around Earth. We keep in touch with friends, do our banking and bill-paying, and make major purchases using software written in codes we don’t understand and transmitted over a network whose very essence we struggle to comprehend. (I mean, what exactly is the Internet?) Meanwhile, physicians use lasers to excise tumors and correct poor vision. They replace damaged livers and hearts. They fit amputees with hi-tech artificial limbs, some with feet that flex and hands that grasp.

Obviously none of this would have been possible without experts. You need more than high school math and a casual grasp of physics or anatomy to develop these complex systems, tools, and techniques. So why on Earth would we discount experts now, when we have more proof than ever of their worth?

My only guess is education. Our national public education system is in shambles. American children rank 25th globally in math and 21st in science. At least two-thirds of American students cannot read at grade level. But there is something our students score high on. As the documentary Waiting for Superman highlighted, American students rank number one in confidence. This may stem from the can-do culture of the United States or from the success our nation has enjoyed over the last 65 years. But it makes for a dangerous combination. We are churning out students with inadequate knowledge and skills, but who believe they can intuit and accomplish anything. And if you believe that, then why not believe you know better than the experts?

I think the only remedy for this situation is better education, but not for the reasons you might think. In my opinion, the more a person learns about any given academic subject, the more realistic and targeted his or her self-confidence becomes.

The analogy that comes to mind is of a blind man trying to climb a tree. When he’s still at the base of the tree, all he can feel is the trunk. From there, he has little sense of the size or shape of the rest of the tree.  But suppose he climbs up on a limb and then out to even smaller branches. He still won’t know the shape of the rest of the tree, but from his perch on one branch, he can feel the extensive foliage. He’ll know that the tree must be large and he can presume that the other branches are equally long and intricate. He can appreciate how very much there must be of the tree that’s beyond his reach.

I think the same principle applies to knowledge. The more we know, the more we can appreciate how much else there is out there to know – things about which we haven’t got a clue. As we climb out on our tiny branches, acquiring knowledge, we also gain an awareness of our profound ignorance. Unfortunately, many of America’s children (and by now, adults too) aren’t climbing the tree at all; they’re still lounging at the base, enjoying a picnic in the shade.

Should it surprise us, then, to learn that they don’t see the value in expertise? That they can support political candidates who disparage the advice of specialists and depict academic achievement as a form of elitism? Why shouldn’t they trust the advice of a neighbor, a talk show host, or an actor over the warnings of the ‘educated elite’?

No single person can know everything there is to know in today’s world, so the sum of human knowledge must be dispersed among millions of specialized experts. Human progress relies on these people, dangling from their obscure little branches, to help guide our technology, our public policy, our research and governance. Our world has no shortage of experts. Now if only people would start listening to them.
