
Weighing the Evidence – Can We Overcome Confirmation Bias?

Of all the topics we've discussed this semester, I've been most intrigued by implicit bias and by how strongly our attention, memories, and decisions are shaped by processes beyond our control. While discussing problem solving, we learned about confirmation bias. Confirmation bias concerns how we handle evidence related to our beliefs. When we evaluate our beliefs, we're likely to encounter both confirming and disconfirming evidence. A logical approach would be to treat both types of evidence as equally valuable, and some researchers suggest that we should give disconfirming evidence even more thought because it is likely to provide more information. Confirmation bias, instead, compels us to be more sensitive to evidence that supports our existing beliefs and to neglect evidence that opposes them.

When discussing confirmation bias in class, Dr. Rettinger mentioned that the best way to overcome confirmation bias and belief perseverance (the tendency to ignore even undeniable disconfirming evidence) was by appealing to people’s emotions and engaging “System 1” (automatic, emotional) resources. This intrigued me, so I set out to find some more information about how to combat confirmation bias. Additionally, I was interested in how to make myself more aware of and resistant to my own biases.

The first article I read, "How to Get People to Overcome Their Bias" from BBC.com, discussed an experiment conducted at Princeton that explored two theories of confirmation bias and how effective different strategies are at overcoming the bias arising from each. The researchers considered two accounts of why confirmation bias happens. The motivational theory of confirmation bias suggests that people are biased because of their motivations: their job, friends, desires, or how they see themselves. To defeat motivation-based biases, we would need to change people's motivations. The second theory, the cognition theory, suggests that people are biased because they lack effective strategies for evaluating new evidence. To defeat cognition-based biases, we need to give people better methods for considering evidence.

To test these theories, researchers presented study participants with information supporting the death penalty (i.e., evidence that the death penalty lowers murder rates). Participants either strongly supported or strongly opposed the death penalty. In the first part of the experiment, the researchers simply presented the evidence and measured how participants' beliefs changed. Not surprisingly, those who already supported the death penalty felt more strongly about their beliefs after seeing the evidence. However, those who opposed the death penalty also became firmer in their opposition even after seeing evidence supporting the death penalty: belief perseverance in action. The researchers referred to this as biased assimilation.

In a follow-up study, the same researchers tested two methods for assimilating new evidence. One group was instructed to be "objective and unbiased" and to imagine themselves "as a judge or juror asked to weigh all of the evidence in a fair and impartial manner." This approach tested how changing an individual's motivations would affect the assimilation of new evidence. The second group was told to consider the methodology of the study producing the evidence that the death penalty lowered murder rates and then to imagine that the results had instead supported the opposite finding (death penalty = higher murder rates). This strategy was referred to as "consider the opposite" and tested whether changing participants' cognitive processes for evaluating evidence could affect their beliefs.

They found that the strategy to change people’s motivations did not work and results were the same as in the initial study. However, asking people to “consider the opposite” did result in participants overcoming biased assimilation: evaluating new evidence fairly even if it didn’t support their existing beliefs and not becoming more extreme in their existing views. The article concluded that the study demonstrated that wanting to be objective isn’t enough to overcome confirmation bias and instead we must learn better cognitive strategies for scrutinizing new information.

While I feel the information gained from this study could be useful for helping me overcome my own confirmation bias, I'm skeptical that it would work with others. In my experience, confirmation bias appears strongest during debates on highly controversial topics that people feel personally invested in (topics such as evolution vs. creationism, climate change, gun control, etc.). I believe both the motivational and cognitive strategies mentioned above would fail because people simply do not want to see things any other way. Outside of a lab, I believe the lack of motivation to consider evidence fairly would negate any ability of the "consider the opposite" approach to overcome biased assimilation. While the research findings shouldn't be discarded as irrelevant, I also feel they highlight how appealing to emotions and motivations is necessary in more real-life situations.

With that in mind, I read a second article from Pacific Standard online titled "Changing Anti-Vaxxers' Minds." This article discussed studies investigating how different forms of disconfirming evidence affect people opposed to vaccination. In one study, participants were asked either to read evidence debunking links between vaccines and autism, to read about bird feeders, or to read about a child with measles and look at pictures of children with vaccine-preventable diseases. This first study found that reading about vaccines and autism had the same effect as reading about bird feeders (none), while reading a story about a child with measles and seeing photos of other sick children made anti-vaxxers slightly less skeptical and more in favor of vaccinating. However, a second, similar study found that both test conditions (refuting vaccine-autism connections and stories/images of sick children) resulted in stronger anti-vaccination beliefs.

I believe the second study does indicate that appealing to emotions shows some promise if we wish to overcome confirmation bias, but I believe it also emphasizes just how strongly we cling to our beliefs. From my own non-scientific experience, it seems like the personal attachment people feel to their beliefs results in defensiveness whenever disconfirming evidence is encountered. I've felt this in myself and have to work hard to remind myself that my identity and personal values aren't attached to the evidence I choose to believe. I also realize confirmation bias is strongest when people feel they've invested more effort into believing something and defending that belief. I think some people feel as though considering disconfirming evidence or changing their opinions is like abandoning a figurative ship, a sign of not being strong enough to maintain conviction. I wonder if the key to overcoming confirmation bias is not to keep seeking ways to make people better at evaluating evidence, but instead to make people more aware that confirmation bias exists and that it's keeping them from having a truly educated opinion. If people felt they were being misled by their own unconscious tendencies, would they then feel more compelled to seek out new evidence on their own? I know I do.

If you’re interested in reading the articles I mentioned, you can click on the titles of the articles above or follow the links below:

How to Get People to Overcome Their Bias: http://www.bbc.com/future/story/20170131-why-wont-some-people-listen-to-reason

Changing Anti-Vaxxers’ Minds: https://psmag.com/social-justice/changing-anti-vaccine-minds

Bigger Than Your Bias

How would you feel if I said you are inherently biased against black people? While there are certainly individuals who are aware of and perhaps even comfortable with their own racial prejudices, I'm going to assume that most of you reading this post would disagree with and possibly be offended by that statement. Unfortunately, according to several studies conducted on this topic, it turns out implicit bias may be unavoidable.

A few weeks ago, we discussed implicit versus explicit memories and how previous encounters with stimuli can shape our memories and beliefs. We learned that priming plays a significant role in what we find familiar or believe to be true. While this can be useful in some instances and may allow us to process stimuli and access stored information more efficiently, it can also be problematic when previous associations create harmful biases. During an episode of The Hidden Brain from March 16 called "The Mind of the Village," Shankar Vedantam, along with psychologists from various universities, explores the problem of implicit social bias, how communities unconsciously shape individual minds, and what we can do to prevent our implicit biases from affecting our conscious behaviors.

Dr. Mahzarin Banaji, a professor at Harvard University, and Vedantam begin the episode by discussing the Implicit Association Test (IAT). Banaji helped develop the IAT to measure an individual's level of implicit bias. The IAT is a sorting exercise that requires faces of individuals to be grouped with words or objects that hold either positive or negative associations. Banaji explains that the IAT works well because of our unconscious tendency to group related objects together. For instance, we would be faster at grouping "bread" with "butter" than we would be at grouping "bread" with "hammer." This should sound familiar, as we've discussed extensively in class how priming and repetition relate to recognition and memory. However, instead of innocuous groupings of familiar pairs, the IAT has individuals sort black and white faces with words like heaven, hell, evil, or love. In some trials, white faces are to be sorted with negative words and black faces with positive words. In other trials, this is reversed. What Banaji found is that the majority of test participants were faster at sorting black faces with negative words and white faces with positive words. This was true not only for white participants, but also for black participants and for Banaji herself.

(Sample faces used in the IAT)
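
If you're curious how a reaction-time difference gets turned into a number, here's a rough sketch in Python. To be clear, this is just my own illustration of the general idea (faster sorting in one pairing than the other suggests an implicit association), not the actual scoring algorithm the official IAT uses; the function name and all of the numbers are made up.

```python
# Purely illustrative: a simplified IAT-style score computed from response times.
# This is NOT the official IAT scoring procedure; it only shows the core idea
# that faster sorting in one pairing than the other suggests an implicit association.

from statistics import mean, stdev

def iat_style_score(compatible_rts, incompatible_rts):
    """Return a standardized difference between the two sorting conditions.

    compatible_rts   -- response times (ms) when the pairings match the hypothesized association
    incompatible_rts -- response times (ms) when the pairings are reversed
    A positive score means the 'compatible' pairing was sorted faster.
    """
    pooled = compatible_rts + incompatible_rts
    return (mean(incompatible_rts) - mean(compatible_rts)) / stdev(pooled)

# Hypothetical example: slightly faster responses in the "compatible" condition
compatible = [620, 650, 600, 640, 610]      # e.g., white faces sorted with positive words
incompatible = [700, 720, 680, 710, 690]    # e.g., black faces sorted with positive words
print(round(iat_style_score(compatible, incompatible), 2))
```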

Dr. Joshua Correll from the University of Colorado Boulder developed another way to test for implicit bias, one a bit more direct than the IAT. Correll designed a video game, called The Police Officer's Dilemma, that required participants to assess and engage a potential threat. On a screen, a black or white man would appear holding an object. The object could be something harmless or it could be a gun. The test measured how quickly participants assessed the threat and decided whether to shoot, as well as whether their responses were correct or incorrect. Like the IAT, The Police Officer's Dilemma found that people were faster both to associate black people with threats and to shoot black targets. Participants were also more likely to make incorrect decisions in ways consistent with an implicit bias against black people.

(Sample images used in The Police Officer’s Dilemma)

While the data collected by Banaji and Correll can seem disheartening, there is a silver lining to these dark clouds. Interestingly, when Correll had police officers instead of students and laypeople take the test, he found that while the officers showed the same levels of implicit bias in their response times and threat assessments, they were less likely to shoot the wrong person. He also found that factors like lack of sleep and stress affected how well police officers controlled their implicit bias. This suggests that while we can't change the implicit biases we possess, we can learn to control how we let those biases affect our behavior. By using cognitive resources and making a point to be aware of our biases, we can avoid the mistakes that would be more likely if we relied on impulse and emotion. As we've discussed in class, engaging our "System 2" resources would allow us to combat the inescapable bias that drives our "System 1."

I must admit that I was uncomfortable acknowledging that I might be vulnerable to implicit bias. However, based on abundant research, this seems to exist in all of us to some extent. Think about it for a minute… what color do you associate with death, evil, fear, or hell? And what color do you associate with purity, heaven, peace, or good? I can write a long list of black things with negative associations and white things with positive associations, many of which I learned about as a young child. Whether we like it or not, at some point society decided that black equaled bad and white equaled good. It's unfortunate but unavoidable that those associations carry over into how we react to other people. Consciously, we might strive to treat others equally and not judge people by the color of their skin. However, until we can change the world around us, we likely won't be able to change the automatic biases that exist. What we can do, however, is admit that implicit bias exists, become aware of it, and be prepared to control for its effects. Instead of pretending it isn't there because it's uncomfortable to talk about, we are better off addressing bias head-on and saying, "I won't let it determine how I behave." It takes deliberate conscious effort, but maybe, just as those associations became automatic after years of repetition, we can make our deliberately unbiased responses automatic as well.

If you're interested, you can take the IAT online. There are a number of IATs that measure various biases, such as attractive vs. unattractive, black vs. white, thin vs. obese, old vs. young, etc. I've taken two so far, and while neither test has shown me to have any preference for one group over another, I do find that just taking the tests made me more aware of my thoughts and behaviors. I also found it difficult to keep the responses and buttons straight on the test, which does make me question its validity somewhat. However, the effect of confusing buttons and not being able to remember how to group objects should be consistent across the trials, so I suspect it wouldn't skew the results anyway.

Link to The Hidden Brain: The Mind of the Village podcast/transcript:

https://www.npr.org/templates/transcript/transcript.php?storyId=591895426

Link to the Implicit Association Test:

https://implicit.harvard.edu/implicit/takeatest.html


“We don’t perceive objects as they are, we perceive them as we are”

We’ve just started discussing how memories are processed, stored, and retrieved and how this relates to our perception of the world around us. Our lectures have focused mostly on memory and recall as they pertain to simple stimuli such as words and numbers, but we are able to apply what we know about memory processing and recall techniques to more complex stimuli. In addition to discussing the processing of memories, we’ve also addressed both how our previous experiences affect how we process stimuli (top-down processing) and how the stimuli we encounter can determine how higher processing proceeds (bottom-up processing).

I recently began re-watching The Brain with David Eagleman, a six-part documentary series that explores some of the mysteries and complexities of the human brain. The second episode, "What Makes Me?", addresses how our experiences and memories shape how we perceive the world around us. At one point, David Eagleman addresses the issue of false memories with researcher Elizabeth Loftus. Loftus has conducted a number of experiments that reveal how unreliable memories can be. One study that I found both remarkable and unsettling is similar to something we have touched upon in class as well.

To investigate just how vulnerable we are to the power of suggestion, Loftus designed an experiment to test whether individuals could be persuaded to believe an elaborate lie. Researchers contacted relatives of study participants and recorded three stories from each participant's childhood. A fourth story was completely fabricated. During the experiment, Loftus described the four stories (three true and one false) to the participants and asked them if they could recall details from those experiences. Every participant not only remembered the false story (described as an account of the participant as a young child getting lost in the mall and eventually being assisted by a kind, elderly stranger), but also, when they returned for a follow-up interview a week later, recalled additional details about the false experience (e.g., what the kind stranger looked like, what they were wearing, etc.).

While it's not a groundbreaking revelation that our memories are not accurate, I think it's incredible that not only can we be convinced to believe a false memory, but we also become so invested in that false memory that we fabricate additional details without prompting. When we explored the fallibility of memories in class and in the "False Memory" ZAPS, it was more along the lines of forgetting an experience that we actually had or confusing two different but similar experiences. Seeing such definitive evidence that we can be convinced to believe a "memory" that is completely false was disturbing to me. It not only pulled the proverbial rug out from under my confidence in my own memories and beliefs, but it also gave me more reason to doubt what other people tell me when describing past experiences.

I think, however, this also reveals another perspective that I find encouraging: no two people will ever share the same experience. As much as we try to relate and share experiences with one another, we can never have anything but an entirely unique experience. Each moment is a fusion of past experiences and present stimuli. Our memories, emotions, and beliefs all influence how we perceive the present, and because no two people can have exactly the same history, our experiences will always be unique… our one private possession. We use our current knowledge to reconstruct our memories, but we also allow our memories and beliefs to influence our perception of the present. The dynamic relationship between expectations and data produces a reality that is unique to each of us. As David Eagleman put it, "we don't perceive objects as they are, we perceive them as we are."

If you’d like to watch the episode, here’s a link:

The Brain That Changes Itself

A while back, I purchased the book The Brain That Changes Itself. I began reading it but, as I often do, got sidetracked and forgot about it. Recently, I found myself looking for a movie or documentary to watch while I worked around the house. I stumbled across the documentary The Brain That Changes Itself, which features the author and some of the research presented in the book. I thought it could provide interesting fodder for my first blog post, and I was certainly correct. The Brain That Changes Itself focuses on neuroplasticity and the power we have to change how our brains function.

The movie opens with various images from the documentary and a man (whom I believe to be Dr. Michael Merzenich) saying, "There's always a strong temptation to think of the brain or to call a brain a machine or talk about it as a computer." Throughout the documentary, different researchers and neuroplasticians explore research and case studies providing evidence that the wiring in the brain is not permanent and can be altered by changing one's thoughts or behavior. Along the way, the scientists use the various research techniques discussed in class and in our text to gain insight into the plastic nature of the brain. By combining techniques such as neuropsychology, MRI, fMRI, and TMS, researchers demonstrated not only that it's possible to change cognitive function, but also how the underlying circuitry changes.

In class this past week, we focused on the visual system and how visual information is integrated and processed. Other senses, such as touch and proprioception, are integrated similarly, by the routing of afferent information to different parts of the brain. What's amazing is how quickly and proficiently our brains can learn to reroute and repurpose incoming signals to compensate for deficiencies. The first two case studies in The Brain That Changes Itself concern this ability of the brain (called sensory substitution) to reassign or create alternate pathways when a primary neural pathway is damaged.

Roger Behm, blind for most of his adult life (38 years at the time the documentary was filmed), is fitted with a device that is held in his mouth and vibrates on his tongue, allowing him to "see." I put that in quotes even though Roger explains, "Definitely people think, 'Well, it's touch.' Well, not for me… as soon as I put that on, within a matter of seconds, I am seeing it. It's drawing pictures in my head." His brain is able to transform images drawn on his tongue with vibrations into visual images distinct enough that he can navigate his way through a path taped on the floor or point to specific features of shapes mounted on the wall.

Cheryl Schlitz lost 95-100% of the functionality in her vestibular apparatus as a side effect of a medication she was prescribed. Using the same device as Roger, Cheryl goes from wobbling and nearly falling over to standing upright and completely still within minutes. Incredibly, Cheryl's case not only demonstrates how quickly the brain adapts, but also that this adaptation is residual and cumulative. After each treatment, Cheryl's sense of equilibrium was restored for longer and longer, until she no longer required the device at all.

Later in the film, Dr. Doidge, the author of The Brain That Changes Itself, introduces what he calls the plastic paradox. We are all born with plastic potential, he explains. Our experiences and routines (or lack thereof) determine whether our brain becomes more flexible and adaptable as we age or more rigid and constrained. To emphasize this, Dr. Alvaro Pascual-Leone discusses the work he is doing to investigate neuroplasticity using TMS and visualization exercises. Dr. Pascual-Leone describes a study he conducted comparing the brain scans of individuals who sat in front of a piano and practiced moving their fingers with the scans of those instructed to simply visualize the movements. After five days, the brain area associated with the movement of those fingers had gotten larger not only in the group that was actually performing the movements, but also in the group that was simply mentally rehearsing them. Similar to the study we discussed in class illustrating the functional similarities between the brains of participants making judgments about actual pictures and those making judgments about mental images, Dr. Pascual-Leone's study shows how thinking alone can activate and change the brain. He says, "The idea is that just thinking will change your brain… and what that ultimately means is that one needs to be careful with what one thinks."

While I can’t possibly detail every case study presented in the documentary in one blog post, I’ve tried to present some of the most relevant and thought-provoking stories to hopefully inspire others to check it out. The ability of our brains to react and change in response to our thoughts, behaviors and environment makes them both resilient and vulnerable. Watching The Brain That Changes Itself certainly left me with thoughts about how I might improve my own cognitive function and teach my brain to better focus and adapt. As Dr. Doidge and Dr. Pascual-Leone point out, neuroplasticity is not a rare phenomenon, but instead an inherent quality of the brain. If we do not work to keep our brains flexible and instead allow our lives to be rigid and repetitive, our cognitive function may very well suffer.

To watch the documentary on YouTube, here’s the link:

https://youtu.be/bFCOm1P_cQQ