Did I Do That? Distinguishing Real from Imagined Actions


If you’re like most people, you spend a great deal of your time remembering past events and planning or imagining events that may happen in the future. While these activities have their uses, they also make it terribly hard to keep track of what you have and haven’t actually seen, heard, or done. Distinguishing between memories of real experiences and memories of imagined or dreamt experiences is called reality monitoring and it’s something we do (or struggle to do) all of the time.

Why is reality monitoring a challenge? To illustrate, let’s say you’re at the Louvre standing before the Mona Lisa. As you look at the painting, visual areas of your brain are busy representing the image with specific patterns of activity. So far, so good. But problems emerge if we rewind to a time before you saw the Mona Lisa at the Louvre. Let’s say you were about to head over to the museum and you imagined the special moment when you would gaze upon Da Vinci’s masterwork. When you imagined seeing the picture, you were activating the same visual areas of the brain in a similar pattern to when you would look at the masterpiece itself.*

When you finally return home from Paris and try to remember that magical moment at the Louvre, how will you be able to distinguish your memories of seeing the Mona Lisa from imagining her? Reality monitoring studies have asked this very question (minus the Mona Lisa). Their findings suggest that you’ll probably use additional details associated with the memory to separate the mnemonic wheat from the chaff. You might use memory of perceptual details, like how the lights reflected off the brushstrokes, or you might use details of what you thought or felt, like your surprise at the painting’s actual size. Studies find that people activate both visual areas (like the fusiform gyrus) and self-monitoring regions of the brain (like the medial prefrontal cortex) when they are deciding whether they saw or just imagined seeing a picture.

It’s important to know what you did and didn’t see, but another crucial and arguably more important facet of reality monitoring involves determining what you did and didn’t do. How do you distinguish memories of things you’ve actually done from those you’ve planned to do or imagined doing? You have to do this every day and it isn’t a trivial task. Perhaps you’ve left the house and headed to work, only to wonder en route if you’d locked the door. Even if you thought you did, it can be hard to tell whether you remember actually doing it or just thinking about doing it. The distinction has consequences. Going home and checking could make you late for work, but leaving your door unlocked all day could mean losing your possessions. So how do we tell the possibilities apart?

Valerie Brandt, Jon Simons, and colleagues at the University of Cambridge looked into this question and published their findings last month in the journal Cognitive, Affective, & Behavioral Neuroscience. For the first part of the experiment (the study phase), they sat healthy adult participants down in front of two giant boxes – one red and one blue – that each contained 80 ordinary objects. The experimenter would draw each object out of one of the two boxes, place it in front of the participant, and tell him or her either to perform or to imagine performing a logical action with the object. For example, when the object was a book, participants were told to either open or imagine opening it.

After the study phase, the experiment moved to a scanner for fMRI. During these scans, participants were shown photographs of all 160 of the studied objects and, for each item, were asked to indicate either 1) whether they had performed or merely imagined performing an action on that object, or 2) which box the object had been drawn from.** When the scans were over, the participants saw the pictures of the objects again and were asked to rate how much specific detail they’d recalled about encountering each object and how hard it had been to bring that particular memory to mind.
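For concreteness, here’s a toy Python mock-up of that design. All the labels and the question assignment are illustrative stand-ins of my own, not the authors’ actual materials or code:

```python
# Toy mock-up of the study design described above; all labels are
# illustrative stand-ins, not the authors' materials or code.
import random

random.seed(0)
objects = [f"object_{i:03d}" for i in range(160)]   # 80 per box
study_phase = [
    {"object": obj,
     "box": "red" if i < 80 else "blue",
     "action": random.choice(["performed", "imagined"])}
    for i, obj in enumerate(objects)
]

# Test phase (in the scanner): each object is probed with one of two
# questions -- reality monitoring or source location.
test_phase = [
    {"object": trial["object"],
     "question": random.choice(["did_or_imagined", "which_box"])}
    for trial in study_phase
]
print(test_phase[0])
```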

The scientists compared fMRI measures of brain activation during the reality-monitoring task (Did I use or imagine using that object?) with activation during the location task (Which box did this object come from?). One of the areas they found to be more active during reality monitoring was the supplementary motor area, a region involved in planning and executing movements of the body. Just as visual areas are activated for reality monitoring of visual memories, motor areas are activated when people evaluate their action memories. In other words, when you ask yourself whether you locked the door or just imagined it, you may be using details of motor aspects of the memory (e.g., pronating your wrist to turn the key in the lock) to make your decision.

The study’s authors also found greater activation in the anterior medial prefrontal cortex when they compared reality monitoring for actions participants performed with those they only imagined performing. The medial prefrontal cortex encompasses a respectable swath of the brain with a variety of functions that appear to include making self-referential judgments, or evaluating how you feel or think about experiences, sensations, and the like. Other experiments have implicated this or nearby areas in reality monitoring of visual memories. Brandt, Simons, and colleagues further found that activation of this medial prefrontal region during reality-monitoring trials correlated with the number of internal details the participants said they’d recalled in those trials. In other words, the more details participants remembered about their thoughts and feelings during the past actions, the busier this area appeared to be. So when faced with uncertainty about a past action, the medial prefrontal cortex may be piping up about the internal details of the memory. I must have locked the door because I remember simultaneously wondering when my package would arrive from Amazon, or because I was feeling sad about leaving my dog alone at home.

As I read these results, I found myself thinking about the topic of my prior post on OCD. Pathological checking is a common and often disruptive symptom of the illness. Although it may seem like a failure of reality monitoring, several behavioral studies have shown that people with OCD have normal reality monitoring for past actions. The difference is that people with checking symptoms of OCD have much lower confidence in the quality of their memories than others. It seems to be this distrust of their own memories, along with relentless anxiety, that drives them to double-check over and over again.

So the next time you find yourself wondering whether you actually locked the door, cut yourself some slack. Reality monitoring ain’t easy. All you can do is trust your brain not to lead you astray. Make a call and stick with it. You’re better off being wrong than being anxious about it – that is, unless you have really nice stuff.

_____

Photo credit: Liz (documentarist on Flickr), used via Creative Commons license

* Of course, the mental image you conjure of the painting is actually based on the memory of having seen it in ads, books, or posters before. In fact, a growing area of neuroscience research focuses on how imagining the future relies on the same brain areas involved in remembering the past. Imagination seems to be, in large part, a collage of old memories cut and pasted together to make something new.

**The study also had a baseline condition, used additional contrasts, and found additional activations that I didn’t mention for the sake of brevity. Check out the original article for full details.

Brandt, V., Bergström, Z., Buda, M., Henson, R., & Simons, J. (2014). Did I turn off the gas? Reality monitoring of everyday actions. Cognitive, Affective, & Behavioral Neuroscience, 14(1), 209–219. DOI: 10.3758/s13415-013-0189-z

The Slippery Question of Control in OCD


It’s nice to believe that you have control over your environment and your fate – that is until something bad happens that you’d rather not be responsible for. In today’s complex and interconnected world, it can be hard to figure out who or what causes various events to happen and to what degree you had a hand in shaping their outcomes. Yet in order to function, everyone has to create mental representations of causation and control. What happens when I press this button? Did my glib comment upset my friends? If I belch on the first date, will it scare her off?

People often believe they have more control over outcomes (particularly positive outcomes) than they actually do. Psychologists discovered this illusion of control in controlled experiments, but you can witness the same principle in many a living room now that March Madness is upon us. Of course, wearing your lucky underwear or sitting in your go-to La-Z-Boy isn’t going to help your team win the game, and the very idea that it might shows how easily one’s sense of personal control can become inflated. Decades ago, researchers discovered that the illusion of control is not universal: people suffering from depression tend not to fall for it. That fact, along with related findings, gave rise to the term depressive realism. Two recent studies now suggest that patients with obsessive-compulsive disorder (OCD) may also represent contingency and estimate personal control differently from the norm.

OCD is something of a paradox when it comes to the concept of control. The illness has two characteristic features: obsessions based on fears or regrets that occupy a sufferer’s thoughts and make him or her anxious, and compulsions, or repetitive and unnecessary actions that may or may not relieve the anxiety. For decades, psychiatrists and psychologists have theorized that control lies at the heart of this cycle. Here’s how the NIMH website on OCD describes it (emphasis is mine):

The frequent upsetting thoughts are called obsessions. To try to control them, a person will feel an overwhelming urge to repeat certain rituals or behaviors called compulsions. People with OCD can’t control these obsessions and compulsions. Most of the time, the rituals end up controlling them.

In short, their obsessions cause them distress and they perform compulsions in an effort to regain some sense of control over their thoughts, fears, and anxieties. Yet in some cases, compulsions (like sports fans’ superstitions) seem to indicate an inflated sense of personal control. Based on this conventional model of OCD, you might predict that people with the illness will either underestimate or overestimate their personal control over events. So which did the studies find? In a word: both.

The latest study, which appeared this month in Frontiers in Psychology, used a classic experimental design to study the illusion of control. The authors tested 26 people with OCD and 26 comparison subjects. The subjects were shown an image of an unlit light bulb and told that their goal was to illuminate the light bulb as often as possible. On each trial, they could choose to either press or not press the space bar. After they made their decision, the light bulb either did or did not light up. Their job was to estimate, based on their trial-by-trial experimentation, how much control they had over the light bulb. Here’s the catch: the subjects had absolutely no control over the light bulb, which lit up or remained dark according to a fixed sequence.*
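To make the “no control” point concrete, here’s a minimal Python simulation of the setup as described. The reinforcement rate and the particular sequence are my assumptions (the actual study varied these; see the footnote):

```python
# Minimal simulation of the task: the bulb follows a fixed sequence, so
# pressing has zero objective contingency. Rates here are assumptions.
import random

random.seed(42)
n_trials = 40
fixed_outcomes = [random.random() < 0.5 for _ in range(n_trials)]  # bulb lit?
presses = [random.random() < 0.5 for _ in range(n_trials)]         # pressed?

# One standard measure of objective control is delta-P:
# P(lit | press) - P(lit | no press). With a fixed sequence it is ~0.
lit_press = [lit for lit, p in zip(fixed_outcomes, presses) if p]
lit_no_press = [lit for lit, p in zip(fixed_outcomes, presses) if not p]
delta_p = (sum(lit_press) / max(len(lit_press), 1)
           - sum(lit_no_press) / max(len(lit_no_press), 1))
print(f"objective contingency (delta-P): {delta_p:+.2f}")
```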

After 40 trials, subjects were asked to rate the degree of control they thought they had over the illumination of the light bulb, ranging from 0 (no control) to 100 (complete control). Estimates of control were consistently higher for the comparison subjects than for the subjects with OCD. In other words, the people with OCD believed they had less control – and since they actually had no control, that means they were also more accurate than the comparison subjects. As the paper points out, this is a limitation of the study: it can’t tell us whether patients are generally prone to underestimating their control over events or are simply more accurate than comparison subjects. To distinguish the two, the study would have needed to include situations in which subjects actually did have some degree of control over the outcomes.

Why wasn’t the light bulb study designed to distinguish between these alternatives? Because the authors were expecting the opposite result. They had designed their experiment to follow up on a 2008 study that found a heightened illusion of control among people with OCD. The earlier study used a different test: its authors showed subjects either neutral pictures of household items or disturbing pictures of distorted faces. The experimenters encouraged the subjects to try to control the presentation of images by pressing buttons on a keyboard and asked them to estimate their control over the images three times during the session. However, just as in the light bulb study, the presentation of the images was fixed in advance and could not be affected by the subjects’ button presses.

How can two studies of estimated control in OCD have opposite results? It seems that the devil is in the details. Prior studies with tasks like these have shown that healthy subjects’ control estimates depend on details like the frequency of the preferred outcome and whether the experimenter is physically in the room during testing. Mental illness throws additional uncertainty into the mix. For example, the disturbing face images in the 2008 study might have made the subjects with OCD anxious, which could have triggered a different cognitive pattern. Still, both findings suggest that control estimation is abnormal for people with OCD, possibly in complex and situation-dependent ways.

These and other studies indicate that decision-making and representations of causality in OCD are altered in interesting and important ways. A better understanding of these differences could help us understand the illness and, in the process, might even shed light on the minor rituals and superstitions that are common to us all. Sadly, like a lucky pair of underwear, it probably won’t help your team get to the Final Four.

_____

Photo by Olga Reznik on Flickr, used via Creative Commons license

*The experiment also manipulated reinforcement (how often the light bulb lit up) and valence (whether the lit bulb earned them money or the unlit bulb cost them money) across different testing sections, but I don’t go into that here because the manipulations didn’t affect the results.

Gillan, C. M., Morein-Zamir, S., Durieux, A. M., Fineberg, N. A., Sahakian, B. J., & Robbins, T. W. (2014). Obsessive-compulsive disorder patients have a reduced sense of control on the illusion of control task. Frontiers in Psychology, 5. PMID: 24659974

Perfect Pitch Redux


I can just hear the advertisement now.

Do you have perfect pitch? Would you like to? Then Depakote might be right for you . . .

Perfect pitch is the ability to name or produce a musical note without a reference note. While most children presumably have the capacity to learn perfect pitch, only about one in ten thousand adults can actually do it. That’s because developing it requires extensive musical training in early childhood: most adults with perfect pitch began studying music at six years of age or younger. By the time children turn nine, their window to learn perfect pitch has already closed. They may yet blossom into wonderful musicians, but they will never be able to count perfect pitch among their talents.

Or might they after all?

Well no, probably not. But a new study, published in Frontiers in Systems Neuroscience, has opened the door to such questions. Its authors tested how young men learned to name notes when they were on or off a drug called valproate (brand name: Depakote). Valproate is widely used to treat epilepsy and bipolar disorder. It belongs to a class of drugs called histone-deacetylase (HDAC) inhibitors, which fiddle with how DNA is packaged and alter how genes are read out and translated into proteins.

The intricacies of how HDAC inhibitors affect gene expression and how those changes reduce seizures and mania are still up in the air. But while some scientists have been working those details out, others have been noticing that HDAC inhibitors help old mice learn new tricks. These drugs allow adult mice to adapt to visual and auditory changes in ways that are only otherwise possible for juvenile mice. In other words, HDAC inhibitors allowed mice to learn things beyond the typical window, or critical period, in which the brain is capable of that specific type of learning.

Judit Gervain, Allan Young, and the other authors of the current study set out to test whether HDAC inhibitors can reopen a learning window in humans as well. They randomly assigned their young male subjects to take valproate for either the first or the second half of the study. (Although I usually get my hackles up about the exclusion of female participants from biomedical studies, I understand their reason for doing so in this case. Valproate can cause severe birth defects. By testing men, the authors could be one hundred percent certain that their participants weren’t pregnant.) The subjects took valproate for one half of the study and a placebo for the other half . . . and of course they weren’t told which was which.

During the first half of the study, they trained twenty-four participants to learn six pitch classes. Instead of teaching them the formal names of these pitches in the twelve-tone musical system, they assigned a proper name to each one (e.g., Eric, Rachel, or Francine), explaining that each was the name of a person who plays only one pitch class. The participants received this training online for up to ten minutes daily for seven days. During the second half of the study, eighteen of the same subjects underwent the same training with six new pitch classes and names. At the end of each seven-day training session, they heard the six pitch classes one at a time and, for each, answered the question: “Who played that note?”
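Here’s a rough Python sketch of that training logic. The pitch assignments, three of the names, and the simulated accuracy are my inventions; only Eric, Rachel, and Francine come from the paper’s examples:

```python
# Sketch of the training task: six pitch classes, each "played" by one
# person. Pitch assignments, three names, and the 60% accuracy are
# assumptions; Eric, Rachel, and Francine come from the paper.
import random

pitch_to_name = {"C": "Eric", "D": "Rachel", "E": "Francine",
                 "F#": "Alice", "G#": "Ben", "A#": "Clara"}
names = list(pitch_to_name.values())

random.seed(7)
correct = 0
test_pitches = random.sample(list(pitch_to_name), 6)  # one probe per class
for pitch in test_pitches:
    # Simulate a listener who identifies the "player" correctly 60% of
    # the time and guesses among the other names otherwise.
    if random.random() < 0.6:
        answer = pitch_to_name[pitch]
    else:
        answer = random.choice([n for n in names
                                if n != pitch_to_name[pitch]])
    correct += answer == pitch_to_name[pitch]
print(f"'Who played that note?' score: {correct}/6")
```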

[Figure: Study results showing better performance at naming tones for participants on valproate in the first half of the experiment. From Gervain et al., 2013.]

The results? There was a whopping effect of treatment on performance in the first half of the study: the young men on valproate did significantly better than the men on placebo. That finding is particularly impressive and surprising because the participants received very little training. The online sessions summed to a mere seventy minutes, and some of the participants didn’t even complete all seven of the ten-minute sessions.

As cool as the main finding is, there are some odd aspects to the study. As you can see from the figure, the second half of the experiment (after the treatments were switched) doesn’t show the same result as the first. Here, participants on valproate perform no differently from those on placebo. The authors suggest that the training in the first half of the experiment interfered with learning in the second half – a plausible explanation (and one they might have predicted in advance). Still, at this point we can’t tell if we are looking at a case of proactive interference or a failure to replicate results. Only time and future experiments will tell.

There were two other odd aspects of the study that caught my eye. The authors used synthesized piano tones instead of pure tones because the former carry additional cues, like timbre, that help people without perfect pitch complete the task. They also taught the participants to associate each note with the name of the person who supposedly plays it rather than the name of the actual note or some abstract stand-in identifier. Both choices make it easier for the participants to perform well on the task but call into question how similar the participants’ learning is to the specific phenomenon of perfect pitch. Perhaps the subjects on valproate in the first half of the experiment were relying on different cues (e.g., timbre instead of frequency). Likewise, associating proper names of people with notes may help subjects learn precisely because it recruits social processes and networks that people with perfect pitch don’t use for the task. If these social processes don’t have a critical period like perfect pitch judgment does, then valproate might be boosting a very different kind of learning.

As the authors themselves point out, this small study is merely a “proof-of-concept,” albeit a dramatic one. It is not meant to be the final word on the subject. Still, I am curious to see where this leads. Might valproate’s success with seizures and mania have something to do with its ability to trigger new learning? And if HDAC inhibitors do alter the brain’s ability to learn skills that are typically crystallized by adulthood, how has that affected the millions of adults who have been taking these drugs for years? Yet again, only time and science will tell.

I, for one, will be waiting to hear what they have to say.

_______

Photo credit: Brandon Giesbrecht on Flickr, used via Creative Commons license

Gervain, J., Vines, B. W., Chen, L. M., Seo, R. J., Hensch, T. K., Werker, J. F., & Young, A. H. (2013). Valproate reopens critical-period learning of absolute pitch. Frontiers in Systems Neuroscience, 7. PMID: 24348349

Looking Schizophrenia in the Eye

More than a century ago, scientists discovered something unusual about how people with schizophrenia move their eyes. The men, psychologist and inventor Raymond Dodge and psychiatrist Allen Diefendorf, were trying out one of Dodge’s inventions: an early incarnation of the modern eye tracker. When they used it on psychiatric patients, they found that most of their subjects with schizophrenia had a funny way of following a moving object with their eyes.

When a healthy person watches a smoothly moving object (say, an airplane crossing the sky), she tracks the plane with a smooth, continuous eye movement to match its displacement. This action is called smooth pursuit. But smooth pursuit isn’t smooth for most patients with schizophrenia. Their eyes often fall behind and they make a series of quick, tiny jerks to catch up or even dart ahead of their target. For the better part of a century, this movement pattern would remain a mystery. But in recent decades, scientific discoveries have led to a better understanding of smooth pursuit eye movements – both in health and in disease.

Scientists now know that smooth pursuit involves a lot more than simply moving your eyes. To illustrate, let’s say a sexy jogger catches your eye on the street. When you first see the runner, your eyes are stationary and his or her image is moving across your retinas at some relatively constant rate. Your visual system (in particular, your visual motion-processing area MT) must first determine this rate. Then your eyes can move to catch up with the target and match its speed. If you do this well, the jogger’s image will no longer be moving relative to your retinas. From your visual system’s perspective, the jogger is running in place and his or her surroundings are moving instead. From both visual cues and signals about your eye movements, your brain can predict where the jogger is headed and keep moving your eyes at just the right speed to keep pace.
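In control-system terms, that two-step loop can be caricatured in a few lines of Python. The gains, time constants, and saccade threshold below are all invented for illustration; this is a sketch of the idea, not a model anyone has published:

```python
# Caricature of smooth pursuit as a velocity servo on retinal slip, with
# catch-up saccades when position error grows. All constants are invented.
dt = 0.01                  # simulation step, seconds
tau = 0.1                  # how quickly eye velocity adapts, seconds
target_vel = 10.0          # deg/s, a steadily moving target
eye_vel = eye_pos = target_pos = 0.0
saccades = 0

for step in range(300):    # three simulated seconds
    target_pos += target_vel * dt
    eye_pos += eye_vel * dt
    retinal_slip = target_vel - eye_vel     # image motion on the retina
    eye_vel += (retinal_slip / tau) * dt    # smooth-pursuit velocity update
    if target_pos - eye_pos > 0.5:          # fell >0.5 deg behind:
        eye_pos = target_pos                # instant catch-up saccade
        saccades += 1

print(f"final eye velocity: {eye_vel:.1f} deg/s, "
      f"catch-up saccades: {saccades}")
```

In a healthy tracker the velocity servo does almost all the work and saccades are rare; the jerky pursuit seen in schizophrenia looks like a loop leaning much harder on the catch-up mechanism.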

Although the smooth pursuit abnormalities in schizophrenia may sound like a movement problem, they appear to reflect a problem with perception. Sensitive visual tests show that motion perception is disrupted in many patients. They can’t tell the difference between the speeds of two objects or integrate complex motion information as well as healthy controls. A functional MRI study helped explain why. The study found that people with schizophrenia activated their motion-processing area MT less than controls while doing motion-processing tasks. The next logical question – why MT doesn’t work as well for patients – remains unanswered for now.

In my last two posts I wrote about how delusions can develop in healthy people who don’t suffer from psychosis. The same is true of not-so-smooth pursuit. In particular, healthy relatives of patients with schizophrenia tend to have jerkier pursuit movements than subjects without a family history of the illness. They are also impaired at some of the same motion-processing tests that stymie patients. This pattern, along with the results of twin studies, suggests that smooth pursuit dysfunction is inherited. Following up on this idea, two studies have compared subjects’ genotypes with the inheritance patterns of smooth pursuit problems within families. While they couldn’t identify exactly which gene was involved (a limitation of the technique), they both tracked the culprit gene to the same genetic neighborhood on the sixth chromosome.

Despite this progress, the tale of smooth pursuit in schizophrenia is more complex than it appears. For one, there’s evidence that smooth pursuit problems differ for patients with different forms of the disorder. Patients with negative symptoms (like social withdrawal or no outward signs of emotion) may have problems with the first step of smooth pursuit: judging the target’s speed and moving their eyes to catch up. Meanwhile, those with more positive symptoms (like delusions or hallucinations) may have more trouble with the second step: predicting the future movement of the target and keeping pace with their eyes.

It’s also unclear exactly how common these problems are among patients; depending on the study, as many as 95% or as few as 12% of patients may have disrupted smooth pursuit. The studies that found the highest rates of smooth pursuit dysfunction in patients also found rates as high as 19% for the problems among healthy controls. These differences may boil down to the details of how the eye movements were measured in the different experiments. Still, the studies all agreed that people with schizophrenia are far more likely to have smooth pursuit problems than healthy controls. What the studies don’t agree on is how specific these problems are to schizophrenia compared with other psychiatric illnesses. Some studies have found smooth pursuit abnormalities in patients with bipolar disorder and major depression as well as in their close relatives; other studies have not.

Despite these messy issues, a group of scientists at the University of Aberdeen in Scotland recently tried to tell whether subjects had schizophrenia based on their eye movements alone. In addition to smooth pursuit, they used two other measures: the subject’s ability to fix her gaze on a stable target and how she looked at pictures of complex scenes. Most patients have trouble holding their eyes still in the presence of distractors and, when shown a meaningful picture, they tend to look at fewer objects or features in the scene.
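To give a flavor of how three such measures might feed a diagnostic score, here’s a hedged sketch with fabricated numbers; the Aberdeen group’s actual features and model differed:

```python
# Fabricated-data sketch: three eye-movement measures per subject fed to
# a simple classifier. The real study's features and model differed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
# Columns: pursuit smoothness, fixation stability, scene-scanning spread.
patients = rng.normal(loc=[0.6, 0.6, 0.4], scale=0.15, size=(n, 3))
controls = rng.normal(loc=[0.9, 0.8, 0.7], scale=0.15, size=(n, 3))
X = np.vstack([patients, controls])
y = np.array([1] * n + [0] * n)          # 1 = schizophrenia, 0 = control

acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.1%}")
```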

Taking the results from all three measures into account, the group could distinguish between a new set of patients with schizophrenia and new healthy controls with an accuracy of 87.8%. While this rate is high, keep in mind that the scientists removed real-world messiness by selecting controls without other psychiatric illnesses or close relatives with psychosis. This makes their demonstration a lot less impressive – and a lot less useful in the real world. I don’t think this method will ever become a viable alternative to diagnosing schizophrenia based on clinical symptoms, but the approach may hold promise in a similar vein: identifying young people who are at risk for developing the illness. Finding these individuals and helping them sooner could truly mean the difference between life and death.

_____

Photo credit: Travis Nep Smith on Flickr, used via Creative Commons License

Benson, P. J., Beedie, S. A., Shephard, E., Giegling, I., Rujescu, D., & St Clair, D. (2012). Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy. Biological Psychiatry, 72(9), 716–724. PMID: 22621999

Neural Conspiracy Theories

Last month, a paper quietly appeared in The Journal of Neuroscience to little fanfare and scant media attention (with these exceptions). The study revolved around a clever and almost diabolical premise: that using perceptual trickery and outright deception, its authors could plant a delusion-like belief in the heads of healthy subjects. Before you call the ethics police, I should mention that the belief wasn’t a delusion in the formal sense of the word. It didn’t cause the subjects any distress and was limited to the unique materials used in the study. Still, it provided a model delusion that scientists Katharina Schmack, Philipp Sterzer, and colleagues could study to investigate the interplay of perception and belief in healthy subjects. The experiment is quite involved, so I’ll stick to the coolest and most relevant details.

As I mentioned in my last post, delusions are not exclusive to people suffering from psychosis. Many people who are free of any diagnosable mental illness still have a tendency to develop them, although the frequency and severity of these delusions differ across individuals. There are some good reasons to conduct studies like this one on healthy people rather than psychiatric patients. Healthy subjects are a heck of a lot easier to recruit, easier to work with, and less affected by confounding factors like medication and stress.

Schmack, Sterzer, and colleagues designed their experiment to test the idea that delusions arise from two distinct but related processes. First, a person experiences perceptual disturbances. According to the group’s model, these disturbances actually reflect poor expectation signals as the brain processes information from the senses. In theory, these poor signals would make irrelevant or commonplace sights, sounds, and sensations seem surprising and important. Without an explanation for this unexpected weirdness, the individual comes up with a delusion to make sense of it all. Once the delusion is in place, so-called higher areas of the brain (those that do more complex things like ponder, theorize, and believe) generate new expectation signals based on the delusion. These signals feed back on so-called lower sensory areas and actually bias the person’s perception of the outside world based on the delusion. According to the authors, this would explain why people become so convinced of their delusions: they are constantly perceiving confirmatory evidence. Strangely enough, this model sounds like a paranoid delusion in its own right. Various regions of your brain may be colluding to fool your senses into making you believe a lie!
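The feedback step of this model has a natural Bayesian reading: a strong prior (the planted belief) pulls ambiguous sensory evidence toward the expected percept. Here’s a minimal Python sketch of that reading, with made-up numbers; the authors’ model is richer than this:

```python
# Minimal Bayesian caricature of belief feedback: the prior (the planted
# belief) tilts an otherwise 50/50 percept. Numbers are illustrative.
def posterior_left(prior_left, likelihood_left, likelihood_right):
    """P(sphere seen spinning left | belief and sensory evidence)."""
    p_left = prior_left * likelihood_left
    p_right = (1.0 - prior_left) * likelihood_right
    return p_left / (p_left + p_right)

ambiguous = (0.5, 0.5)                    # rivalry: the senses say 50/50
print(posterior_left(0.5, *ambiguous))    # no belief          -> 0.5
print(posterior_left(0.8, *ambiguous))    # belief: spins left -> 0.8
```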

To test the idea, the experimenters first had to toy with their subjects’ senses. They did so by capitalizing on a quirk of the visual system: that when people are shown two conflicting images separately to their two eyes, they don’t perceive both images at once. Instead, perception alternates between the two. In the first part of this experiment, the two images were actually movies of moving dots that appeared to form a 3-D sphere spinning either to the left (for one eye) or to the right (for the other). For this ambiguous visual condition, subjects were equally likely to see a sphere spinning to the right or to the left at any given moment in time, with it switching direction periodically.

Now the experimenters went about planting the fake belief. They gave the subjects a pair of transparent glasses and told them that the lenses contained polarizing filters that would make the sphere appear to spin more in one of the two directions. In fact, the lenses were made of simple plastic and could do no such thing. Once the subjects had the glasses on, the experimenters began showing the same movie to both eyes. While this change allowed the scientists to control exactly what the subjects saw, the subjects had no idea that the visual setup had changed. In this unambiguous condition, all subjects saw a sphere that alternated direction (just as the ambiguous sphere had done), except that this sphere spun far more in one of the two directions. This visual trick, paired with the story about polarized lenses, was meant to make subjects believe that the glasses caused the change in perception.

After that clever setup, the scientists were ready to see how the model delusion would affect each subject’s actual perception. While the subjects continued to wear the glasses, they were again shown the two original, conflicting movies, one to each eye. In the first part of the experiment, this ambiguous condition caused subjects to see a rotating sphere that alternated equally between spinning to the left and right. But if their new belief about the glasses biased their perception of the spinning sphere, they would now report seeing the sphere spin more often in the belief-consistent direction.

What happened? Subjects did see the sphere spin more in the belief-consistent direction. While the effect was small, it was still impressive that the belief could bias perception at all, considering the simplicity of the images. The experimenters also found that each subject’s delusional conviction score (how convinced they were by their delusional thoughts in everyday life) correlated with this effect. The more a subject believed her real-life delusional thoughts, the more her belief about the glasses affected her perception of the ambiguous spinning sphere.
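In analysis terms, this key result boils down to two numbers per subject and the correlation between them. A sketch with fabricated data (the slope, noise, and sample size are my inventions):

```python
# Fabricated-data sketch of the key behavioral analysis: a per-subject
# belief-consistent bias index correlated with delusional conviction.
import numpy as np

rng = np.random.default_rng(3)
conviction = rng.uniform(0.0, 1.0, 24)             # questionnaire score
# Bias = fraction of ambiguous viewing spent on the belief-consistent
# spin direction; 0.5 means no bias. Slope and noise are invented.
bias = 0.5 + 0.1 * conviction + rng.normal(0.0, 0.03, 24)

r = np.corrcoef(conviction, bias)[0, 1]
print(f"conviction vs. belief-consistent bias: r = {r:.2f}")
```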

But there’s a hitch. What if subjects were reporting the motion bias because they thought that was what they were supposed to see and not because they actually saw it? To answer this question, they recruited a new batch of participants and ran the experiment again in a scanner using fMRI.

Since the subjects’ task hinged on motion perception, Sterzer and colleagues first looked at the activity in a brain area called MT that processes visual motion. By analyzing the patterns of fMRI activity in this area, the scientists confirmed that subjects were accurately reporting the motion they perceived. That may sound far-fetched, but this kind of ‘mind reading’ with fMRI has been done quite successfully for basic visual properties like motion.
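The decoding idea itself can be sketched in a few lines. Synthetic voxel patterns stand in for real fMRI data here; the study’s actual pipeline was more involved than a bare linear classifier:

```python
# Synthetic-data sketch of 'mind reading' perceived spin direction from
# voxel patterns in MT with a linear classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 100, 50
labels = rng.integers(0, 2, n_trials)              # 0 = left, 1 = right
direction_code = rng.normal(0.0, 1.0, n_voxels)    # fixed voxel pattern
signal = np.outer(labels - 0.5, direction_code)    # +/- half the code
patterns = signal + rng.normal(0.0, 0.5, (n_trials, n_voxels))

acc = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.1%}")   # well above the 50% chance level
```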

The group also studied activity throughout the brain while their glasses-wearing subjects learned the false belief (unambiguous condition) and let that belief bias their perception to varying degrees (ambiguous condition). They found that belief-based perceptual bias correlated with activity in the left orbitofrontal cortex, a region just behind the eyes that is involved in decision-making and expectation. In essence, subjects with more activity in this region during both conditions tended to also report lopsided spin directions that confirmed their expectations during the ambiguous condition. And here’s the cherry on top: subjects with higher delusional conviction scores appeared to have greater communication between the left orbitofrontal cortex and motion-processing area MT during the ambiguous visual condition. Although fMRI can’t directly measure communication between areas and can’t tell us the direction of communication, this pattern suggests that the left orbitofrontal cortex may be directly responsible for biasing motion perception in delusion-prone subjects.

All told, the results of the experiment seem to tell a neat story that fits the authors’ model of delusions. Yet there are a couple of caveats worth mentioning. First, the key finding of the study – that a person’s delusional conviction score correlates with his or her belief-based motion perception bias – is built upon a quirky and unnatural aspect of human vision that may or may not reflect more typical sensory processes. Second, it’s hard to say how clinically relevant the results are. No one knows for certain whether delusions arise by the same neural mechanisms in the general population as they do in patients with illnesses like schizophrenia. It has been argued that they probably do, because the same risk factors pop up for patients as for non-psychotic people with delusions: unemployment, social difficulties, urban surroundings, mood disturbances, and drug or alcohol abuse. Then again, this group is probably also at the highest risk for getting hit by a bus, dying from a curable disease, or suffering any number of misfortunes that disproportionately affect people in vulnerable circumstances. So the jury is still out on the clinical applicability of these results.

Despite the study’s limitations, it was brilliantly designed and tells a compelling tale about how the brain conspires to manipulate perception based on beliefs. It also implicates a culprit in this neural conspiracy. Dare I say ringleader? Mastermind? Somebody cue the close up of orbitofrontal cortex cackling and stroking a cat.

_____

Photo credit: Daniel Horacio Agostini (dhammza) on Flickr, used through Creative Commons license

Schmack, K., Gòmez-Carrillo de Castro, A., Rothkirch, M., Sekutowicz, M., Rössler, H., Haynes, J. D., Heinz, A., Petrovic, P., & Sterzer, P. (2013). Delusions and the role of beliefs in perceptual inference. The Journal of Neuroscience, 33(34), 13701–13712. PMID: 23966692
