Feature Nets and Word Recognition


This brain-expanding meme, also known as Galaxy Brain, has been all over social media platforms like Twitter and Facebook since 2017, and it only seems to gain popularity as time goes on. The concept behind the meme is that the brain metaphorically “grows” as the idea it is paired with becomes more intellectually involved. The irony between the images and the information each one is paired with is what makes the meme so comical.


Using the conceptual foundations behind this ironic trend, I have created a meme to simplify the feature net model and represent the intricate layers of the hierarchical system of word recognition in the brain. According to Grainger and colleagues, the feature net model, originally known as the Pandemonium model, was created by Oliver Selfridge in 1959. Grainger states that the hypothesis on which Selfridge based this work was that “letters are identified via their component features”. From this hypothesis, Selfridge built a model that is still used today in discussions of word recognition.

Even though the basics of Selfridge’s model are still used today, it has evolved with time and additional research. Recent research has identified the simplest and possibly most important layer of this complex hierarchical chain of word and letter recognition: feature detectors. Grainger and his colleagues describe these feature detectors as “the part of our word recognition system responsible for acknowledging and interpreting lines of varying curves and orientations”. The article reviews different research on these feature-based detectors and concludes that this additional layer to Selfridge’s original model is essential. Based on new research that takes a more psychophysical approach to dissecting this complicated system, Grainger claims there is strong evidence that letters are identified by the features of their lines and curves.

Building on that first layer, we currently understand word recognition as having four basic components: feature detectors, letter detectors, bigram detectors, and word detectors. Moving up from feature detectors, letter detectors are the parts of the model that combine individual features into letters. According to “How the Brain Works: Explaining Consciousness” by Ben Salzberg, this letter recognition occurs through the firing of different neurons: detectors that are used more frequently have a higher starting activation level and therefore fire more easily.

After these letters are recognized, the same process happens at the next step in our recognition system: bigram detectors. Bigram detectors connect the letters we previously recognized, based on firing frequency and threshold levels, just like letter detectors. However, as Salzberg concludes in his article, these bigram detectors also reflect the typicality of our specific language. For instance, in English, “Q” rarely ever follows “L”, so the detector for “LQ” would have a much higher threshold and fire less easily than the detector for “CL”. Finally, the outputs of the bigram detectors are strung together by word detectors, which use the same neuron-firing principles to recognize a full word.
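The layered process above can be sketched as a toy program. This is only a minimal illustration of the idea, not the actual Pandemonium or any published model: the feature sets, resting activations, and threshold values below are invented for demonstration, chosen so that a common bigram like “CL” fires while a rare one like “LQ” stays below threshold.

```python
# Toy feature net: features -> letters -> bigrams -> word.
# All numeric values are hypothetical illustrations.

# Feature detectors: the visual features each letter is built from.
LETTER_FEATURES = {
    "C": {"open_curve"},
    "L": {"vertical_line", "horizontal_line"},
    "A": {"diagonal_line", "horizontal_line"},
    "P": {"vertical_line", "closed_curve"},
}

# Bigram detectors: frequent pairs have higher resting activation,
# so they need less bottom-up input to cross the firing threshold.
BIGRAM_RESTING = {"CL": 0.6, "LA": 0.5, "AP": 0.5, "LQ": 0.05}
FIRE_THRESHOLD = 1.2

def detect_letters(features):
    """A letter detector fires when all of its component features are present."""
    return {letter for letter, needed in LETTER_FEATURES.items()
            if needed <= features}

def detect_bigrams(word, letters):
    """A bigram detector fires when resting activation plus input from
    its two letter detectors reaches the threshold."""
    fired = []
    for a, b in zip(word, word[1:]):
        bottom_up = 0.5 * ((a in letters) + (b in letters))
        if BIGRAM_RESTING.get(a + b, 0.1) + bottom_up >= FIRE_THRESHOLD:
            fired.append(a + b)
    return fired

# Run the stimulus "CLAP" through the net.
stimulus = {"open_curve", "vertical_line", "horizontal_line",
            "diagonal_line", "closed_curve"}
letters = detect_letters(stimulus)
bigrams = detect_bigrams("CLAP", letters)
word_detected = bigrams == ["CL", "LA", "AP"]  # word detector needs all bigrams
print(letters, bigrams, word_detected)

# The rare bigram "LQ" stays below threshold even when both letters fired:
print(detect_bigrams("LQ", {"L", "Q"}))  # prints []
```

Each layer only looks at the output of the layer below it, which is what makes the system hierarchical: the word detector never sees raw lines and curves, only which bigram detectors fired.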

Even though this process is complex, recruiting many different detectors and neurons at once, it happens unconsciously and at rapid speed each time we see a word. The way this complexity grows with each step is exactly what the meme I created is meant to capture: it represents the process of word recognition in a way that anyone familiar with the meme world and with word recognition can understand.

3 thoughts on “Feature Nets and Word Recognition”

  1. blogmatt21

    Feature nets are fascinating and complex to understand. I appreciate the meme, as it is comic and informative at the same time. At the beginning of the feature net discussion in class, it was tough to understand. Over time, it started to make sense. Thus, it is nice to have read this blog post, as it has reinforced my knowledge of feature nets.

  2. mocooper

    This is a great way to reinforce feature nets. When we were first learning about it in class, it was hard to follow along. I started to get it after going over it a few times and having it explained to me by other classmates. This is a good way to really drill it into my brain, especially since I was already familiar with the meme.

  3. rachelg

    This was a humorous and accurate way of describing a complicated topic! I appreciate that you provided a good visual as well as a detailed written explanation of how the process works. The memes certainly caught my attention and it was nice to have two forms of the information. Good job!
