Dual Processes and the Elaboration Likelihood Model

As society progresses, media has gained greater influence over what we listen to, talk about, and seek more information about. However, have you ever noticed that there are times when you are completely disengaged from what the media is saying, while at other times you are so interested that you find yourself asking questions? This has to do with the elaboration likelihood model (ELM), which posits that individuals can engage one of two routes – central or peripheral – to process information.

The central route allows for a thoughtful evaluation of the pros and cons of the information and requires that the individual have both the motivation to think deeply and the cognitive resources to do so. This route is most commonly associated with the controlled-processing side of the dual-process model, in that we are consciously aware of what we are processing. The peripheral route allows us to think at a surface level of processing, where there is no real or thoughtful evaluation of the information presented – we take it at face value. Therefore, little motivation and few cognitive resources are needed. This route is most commonly associated with the automatic side of the dual-process model, because we tend to rely on heuristics and other shortcuts that let us make judgments with minimal (conscious) use of our cognition.
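
To make that distinction a bit more concrete, here is a minimal, purely illustrative Python sketch of the routing idea. The numeric threshold, the 0–1 scales, and the function name are my own assumptions for the sake of the example; the ELM itself does not define any such cutoffs.

```python
# Toy illustration of ELM route selection (not a formal model).
# Assumption: motivation and cognitive capacity are scored on a 0-1 scale.

def choose_route(motivation: float, cognitive_capacity: float,
                 threshold: float = 0.5) -> str:
    """Guess which processing route a receiver is likely to take.

    Central route: requires both the motivation to think deeply and
    the cognitive resources to do so.
    Peripheral route: the fallback, relying on surface cues and heuristics.
    """
    if motivation >= threshold and cognitive_capacity >= threshold:
        return "central"     # effortful weighing of pros and cons
    return "peripheral"      # face-value judgment from cues and heuristics

# Example: an interested but mentally exhausted receiver still ends up on
# the peripheral route, because capacity is missing even though motivation is high.
print(choose_route(motivation=0.9, cognitive_capacity=0.2))  # -> peripheral
```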

Robert and Dennis (2005) discuss this model in relation to how humans use cognition to choose which media they pay attention to and which they do not. They define media richness as how much content the source uses to convey a message, and social presence as whether or not an individual is physically present. For instance, e-mail would be considered low in social presence, while a videoconference would be considered higher in social presence. They focus in particular on how this model affects multiple aspects of media from the perspective of both the sender and the receiver.

One aspect is how decision quality is affected by social presence and by the receiver’s motivation and attention. Senders face the obstacle of getting the receiver motivated to fully listen to the content as well as to pay attention. If the receiver is not paying attention, it is hard for the sender to get his or her information across. Social presence also affects decision quality: when an individual is given a lot of content in a high-social-presence context, it may lead to poor decision-making, since they are not given enough time to fully absorb and reflect on the information. In turn, the receiver will feel a sense of information overload.

In contrast, those who are given the same amount of content in a low-social-presence context (e.g., via e-mail) have time to do more research and weigh the pros and cons before making a final decision. In the former situation, people may adopt a more simplistic style of decision-making (and engage in peripheral processing), where the receiver relies on cues or on what is readily available to them in terms of memories or experiences (the availability heuristic). The switch to a simpler style of thinking, assuming the individual did want to engage in central-route processing, may be due to the fact that working memory has a limited capacity. Therefore, when there is a lot of information to process, coupled with high social presence, it is hard for the receiver to fully process all of the information, since it cannot all be held in working memory at once.
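
As a rough, toy illustration of that capacity limit, here is a short Python sketch. Nothing in it comes from Robert and Dennis (2005); the four-item buffer size is just an assumed value to show how items get lost when more content arrives than working memory can hold at once.

```python
from collections import deque

# Toy model: working memory as a fixed-size buffer. When new items arrive
# and the buffer is full, the oldest items are silently pushed out.
WORKING_MEMORY_CAPACITY = 4  # assumed chunk limit, for illustration only

working_memory = deque(maxlen=WORKING_MEMORY_CAPACITY)

# A high-social-presence meeting fires off more points than can be held.
incoming_points = ["budget", "timeline", "staffing", "risks", "vendor", "metrics"]
for point in incoming_points:
    working_memory.append(point)  # appending to a full deque drops the oldest item

print(list(working_memory))  # ['staffing', 'risks', 'vendor', 'metrics']
```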

However, receivers who engage with high-social-presence media are already showing the motivation to use the central route of cognitive processing in order to fully understand the information being presented to them. The sender has to be mindful of the type of medium they choose, because it can determine whether and how deeply the receiver will process the message. If the sender expects a quick response, they would choose a medium with higher social presence. But if the sender expects a thorough response, they would most likely choose a medium with lower social presence, so that the receiver has time to process the information given.

Robert and Dennis (2005) also discuss the idea of reprocessability, which is the extent to which the receiver can go over the information presented more than once – to, for lack of a better word, reprocess it. Even though this is advantageous for the receiver, if the receiver has already had the chance to fully elaborate on and think through the information and the sender then redistributes the same message, it may decrease message acceptance. We can see this when an individual has to sit through the same advertisement over and over during a show they are streaming. At first the individual may be engaged with the advertisement, but after watching it more than three times, they become frustrated and disengaged.

So how should the sender present information, and in which contexts? One way is through media switching, where the sender presents information in various forms of media. For instance, even though many products are advertised on television, companies also use social media platforms, word of mouth from customers, and large posters or billboards. Media switching allows information to be presented to receivers in various forms so that they do not reject it after repeated exposure…at least at first. Of course, over-advertisement can still lead to the receiver becoming disengaged.

Another question that comes to mind is: in what situations do central and peripheral processing work best? Central processing seems best suited to changing an attitude or belief, since it involves deeper processing. Therefore, when it comes to prejudice or stereotypes, helping individuals question their automatic processes and avoid relying on heuristics can help lessen them. Peripheral processing is best used when companies want to sell products. They would most likely use surface-level qualities (“look at all these colors,” “a new camera with better resolution,” etc.) to engage recipients. For peripheral-route processing, either low or high social presence would be fine, considering there is not much information that needs to be processed, whereas low social presence would be best for central processing, because the individual can really take time to reflect. Either way, the first step is catching the receiver when they have the cognitive resources available, the motivation to learn, and the ability to pay attention to your message.

Phonological Ambiguity and Autocorrect

Your teacher hands back your paper and you realize you were marked down a few points because of a few typos. You had read it the night before turning it in, but did not catch them. What happened?

When looking for the answer, I found that one of the main findings of previous literature is that as long as the first and last letters of each word are in the correct place, we can still read the sentence or paragraph accurately. For instance:

"Aoccdrnig to a rscheeah at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae." 

But it turns out this is not the only thing that matters for deciphering ambiguous words. When talking about language, we learned that language has a hierarchical structure in which phonemes are the smallest units of words and phrases are the more complex forms of language. This article mainly explains how we actually take into account the sound of the words and the context in which we hear them to make sense of such ambiguous words.

This is the first study to specifically look at phoneme ambiguity and how it is influenced by subsequent context to ‘autocorrect’ a word when hearing a sentence. In the first experiment, participants were shown two syllables on a screen (e.g., B or P) and then played one of them. After hearing it, they were asked to choose which one they had heard. There was no limit on response time, and participants could move on when they were ready. In the second experiment, new participants listened to a word beginning with the same syllables used in the first experiment. Afterwards, they were asked to read a word on the screen and say whether it was a match or a mismatch.

What differed between the two experiments was that the first played the syllable followed by silence, while the second played the syllable followed by the rest of the word. The researchers wanted to see whether subsequent context influences how we interpret a word as ambiguous or not, and how long that takes. In both experiments, participants underwent magnetoencephalography (MEG), which recorded activity from the auditory cortex.

After looking at the participants’ MEG recordings, the researchers found the following results regarding sensitivity to phonological ambiguity in the brain, specifically in the primary auditory cortex:

[Figure: MEG source maps of auditory-cortex sensitivity to phoneme ambiguity; panel B is from Experiment 1 and panel D from Experiment 2.]

Figure B shows the location that appears to be sensitive to ambiguity in Experiment 1, where just the syllable was played followed by silence. In comparison, Figure D shows the location that appears to be sensitive to ambiguity in Experiment 2, where the syllable was followed by the rest of the word. The results showed that sensitivity to phonological ambiguity arises at an early point (around 20 ms after onset). They also suggest that there is more sensitivity to ambiguity when given just the syllable (Figure B) than when given the whole word (Figure D), since the word provides more context with which to decipher the ambiguous sound.

The results further showed that response times were significantly slower for more ambiguous items. The researchers note how the auditory system actively maintains the acoustic signal in the auditory cortex while trying to figure out what the ambiguous word is. For example, if a participant hears the word ‘parakeet’, is shown the word ‘barakeet’, and has to determine whether it is a match or a mismatch, it may take a little longer to decide, because we have to go back and re-evaluate what we heard. Only once we re-evaluate it and realize it is a mismatch do we press the ‘mismatch’ button.

Another explanation the researchers offer for the slower response times is that as we hear each subsequent sound, we re-evaluate the previous phonemes and update our interpretations as necessary.

One part of the method I found interesting was that participants were allowed to take as long as they needed to give a response. It might be interesting to see how many mistakes would be made in a time-constrained situation, given the findings of the study.

So how does this all relate to autocorrect and the brain? Going back to the example of the typos in the paper you turned in for class: when reading the paper, you probably did not read it out loud, and because of that, you did not catch the typos. By reading the paper out loud, the sounds that reach the auditory cortex are held there while the brain interprets the combination of phonemes to figure out the correct word. This would most likely make it easier for you to catch any mistakes or typos.

Can Smartphones Really Hurt More Than Help?

You are in the library, studying for a big exam tomorrow. Your phone is right next to you and you are having a hard time paying attention to the material. You move your phone to the other side of the table. You still can’t pay attention, so you put it on Do Not Disturb mode and in your backpack. Next thing you know, it has been four hours and you have gotten through a lot of your material.

We have all heard the phrase, “out of sight, out of mind.” How does this relate to smartphones, memory, and attention? A lot of research has been done since the smartphone came out on how it could actually hurt your attention and memory. I remember thinking to myself on multiple occasions, and even asking people I know, whether people had better memory before the smartphone came out. It was when I came across an article by Wilmer, Sherman, and Chein (2017) that I found something of an answer.

The article is primarily a summary of past research that has been done on this topic. Wilmer, Sherman, and Chein (2017) talk about two types of interruptions – endogenous and exogenous. Using the scenario above, an endogenous interruption is when you think about your phone while studying whereas an exogenous interruption is when you get a notification and start thinking about your phone. In other words, endogenous is thinking about your phone when you are doing a task while exogenous is when an environmental cue (e.g. a notification) captures your attention.

One study in particular relates to the scenario you read at the start. Thornton et al. (2014) had participants perform two tasks (one simple and one difficult) designed to measure executive function and attention. The researchers would “accidentally” leave either a notebook or a phone on the table while the participant performed the tasks. They found that participants’ performance was the same on the simpler task, but on the difficult task, participants exposed to the phone on the table performed worse than participants exposed to the notebook. Replication studies found very similar results – one study had participants put their own phones on the table. These studies basically found that endogenous interruptions had a very negative effect on participants’ attention.

When it comes to smartphones and their effects on attention, the results are mixed. Studies that use self-report and correlational data tend to show negative relationships. However, in more controlled research, positive relationships are found, and some studies even show that smartphones can help effectively filter out distractions. This implies two possible explanations: (1) the self-reported data are biased, and participants may not be giving accurate reports, since our own memory is not especially reliable, or (2) the findings of a positive relationship only hold in controlled settings and not in the real world.

The article then moves on to how smartphones can affect memory. It is a common notion that smartphones reduce our need and/or ability to store information, since information is made readily available to us, such as through Google. One study by Sparrow et al. (2011) had participants type up trivia facts, but told half of the participants that they would be able to access the information later, while the other half were told that the information would soon be deleted. When asked to recall the facts, the participants who were told the information would be deleted recalled them better than those who were told the information would remain available. This shows how believing that information is readily available can make us feel less inclined to encode and store it in long-term memory. However, one limitation could be the type of facts participants were given.

Research has also looked into how using GPS devices can negatively impact our memory, because we rely on them instead of our own memory. One researcher even found, in a simulation study, that a participant knowingly took a bad route just because the GPS said to, even though they knew it was wrong! So why do we do this? One explanation could be that people rely on their phones so much that they come to assume the phone knows best, even when they know a better route. I have noticed that even when people know exactly where they are going, they still depend on their phones. When I ask them why, their response is simply, “I don’t know, I just keep it on just in case.”

More recent research presented in the article has looked into how smartphones might affect our need for instant gratification. However, no definite conclusions can be drawn, and there is a need (no pun intended) for more research. While smartphone use may affect our need for instant gratification, it could also be that some people simply have a stronger tendency toward immediate gratification and, as a result, use their smartphones more often. There is currently a suspicion that smartphones are rewiring our brains to seek instant gratification; however, there is no longitudinal evidence that this is true. It could be that, over time and with constant exposure to smartphones, our brains are being rewired – maybe not to seek instant gratification, but possibly toward more multitasking or better filtering of distractions. Further research would have to be done to see if this is the case.

Improving Concentration: Electrical Shocks or Deep Work?

This Hidden Brain segment looks at working memory and how distractions can limit how much it can store at a given time. Steve Inskeep, the host, explains how working memory can be improved by limiting distractions, though the way to do that can vary. Inskeep talked to Melissa Scheldrup, a PhD student at George Mason University, and Cal Newport, an associate professor at Georgetown University. Both have found ways to improve working memory by decreasing distractions, but they go about it very differently.

Inskeep first interviewed Scheldrup, who had him play a game called “Warship Commander,” in which you have to make sure allied planes get through unscathed and listen for updates about the warship, all while attacking enemy planes. Of course, there are more rules to the game, and the planes are colored differently to indicate different statuses. The overall point of the game is to show how easily the mind can get distracted and how working memory can be overwhelmed. Scheldrup explained that someone with good working memory can bounce back easily from distraction, but not everyone has good working memory. So Scheldrup wanted to see how exactly she could improve it. Her solution? Electrical shocks.

Electrodes were attached to the side of Inskeep’s head and he was given small electrical shocks that he describes as “very, very mild tingling.” He was then asked to play the game again and found he was able to identify the enemy six out of six times. Of course, there was skepticism as to whether it was the electrodes or practice, but Scheldrup had run analyses (controlling for the effects of practice) on all her previous participants and found the same pattern – the electrical stimulation does improve working memory capacity.

But this can’t be the only way to improve working memory, right? After all, it’s not necessarily practical to walk around with electrodes attached to your head. Cal Newport believed the best way to improve working memory is to do deep work. This is when we do work that is distraction-free to allow for deep thinking.

Most people today do shallow work, which is work done amid distractions (e.g., checking e-mails, reading text messages, answering calls). This, Newport thinks, is what makes us robotic and decreases our working memory. Even by quickly checking a text message during work to confirm dinner with a friend, we are switching contexts, and this negatively impacts our cognitive performance. So what does Newport suggest doing to make deep work possible? Three simple steps:

  1. Plan out your day and have set hours to do only work – no distractions.
  2. Don’t let your mood dictate your day. Try to stick to the schedule.
  3. Don’t let yourself get distracted by e-mails, text messages, and other things that don’t require your immediate attention, so you can focus more on your deep work. (Newport joked that he has learned to be comfortable with annoying people by how slowly he responds to e-mails and messages.)

Newport and Inskeep mention how Mark Twain would go to a cabin at the edge of his property to reflect and do the kind of deep thinking Newport discusses. At the end of the day, it is up to us how we spend our time and use our working memory. Overall, I found this segment interesting considering the exercise we did in class to see how many numbers we could remember. It made me wonder which of the two methods discussed in the segment would work better.

https://www.npr.org/templates/transcript/transcript.php?storyId=580577161