The Stories We Tell Ourselves About Ourselves

…how problems encountered in AI research mirror our own cognitive biases

Photo by David Matos on Unsplash

The brain is an intricate organ: billions of neurons, linked by trillions of connections, form a hugely complex nonlinear dynamical system. It’s arguably the most complex structure in the universe (at least that we know of). It can solve a seemingly endless variety of tasks and lets us navigate and survive in an ever-shifting environment.

This might be a familiar rap to you, but at the same time some of its consequences are easily overlooked. Because we and our personalities are products of the brain’s behavior, we are tied to its complexities.

This article delves into the clash between the (mostly) ineffable mechanics constituting us and our constant desire to have a compelling and coherent story about what it is we are doing and thinking. The problem of making sense of our own outputs draws some parallels to what is known in AI research as explainability.

Some byproducts of this clash are known in psychology as cognitive biases, among which we find two lovely effects called cognitive dissonance and confabulation, which put into question how much we understand about what is going on within us.

The subconscious

“You didn’t notice, but your brain did.”

More than 100 years after Sigmund Freud postulated the existence of the subconscious, his idea that we are not really masters in our own house has been to some extent integrated into our identities.

Considering the complexities of the brain, you might say it is only natural that most of the time we are ignorant of what precisely we are doing. But only in the course of the twentieth century did people start to come to terms with the fact that there is much more going on inside us than we are consciously aware of. We live most of our lives on autopilot, and only when something urgent comes up does our conscious attention flock to it (as I mentioned in my previous article on attention).

The brain filters through an incredible amount of information every second, while the conscious mind is only aware of a tiny fraction of it, most of which has undergone heavy pre-filtering. Almost all processes, be they motor, cognitive, or emotional, are automated and regulated beyond active conscious control.

And not only is information intake automated: decisions are made very differently from how it appears to us on a day-to-day basis. Studies indicate that decision-making becomes next to impossible if we shut out the emotional components of our brain and rely on prefrontal activity alone (see Antonio Damasio’s book The Strange Order of Things for a great overview).

Instead, many of our decisions rely heavily on the simulating power of the body and the gut-brain, which work with distributed, parallel processing using special types of nonmyelinated neurons. Acting on your gut feeling has a much more precise scientific meaning than you might have come to believe.

And as Daniel Kahneman and Amos Tversky point out, biases induced by priming or anchoring can have strong effects on our decision-making under uncertainty, with us usually being unaware that they are taking place.

Even worse: consciously knowing that we are being anchored or primed still doesn’t make us immune to the biases they can induce.

Machine learning and explainability

Researchers in artificial intelligence and computational neuroscience face a somewhat similar problem, summarized under the term explainability. How do we explain the intelligence emerging in our AI systems? How do we get those systems to tell us how they make their decisions, and how do we trace the details of how they process information?

Instead of building algorithms to solve certain tasks and hard-wiring them into computers, modern machine learning methods allow computers to develop algorithms by themselves by working with specified inputs and desired outputs (see Pedro Domingos’ The Master Algorithm for a much more detailed overview).
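To make the contrast concrete, here is a minimal sketch (my own toy illustration, not from the book; it assumes only numpy, and the task and numbers are made up). Instead of hand-coding the rule “output 1 if the two inputs sum to more than 1”, we hand the machine examples of inputs and desired outputs and let gradient descent find weights that reproduce the relationship:

```python
import numpy as np

# Toy supervised-learning setup: specified inputs and desired outputs.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))          # specified inputs
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)   # desired outputs (the hidden rule)

# Logistic regression trained by plain gradient descent: the "algorithm"
# is developed by the machine, not hard-wired by us.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)       # logistic-loss gradient steps
    b -= 0.5 * np.mean(p - y)

print("learned weights:", w.round(2), "bias:", round(b, 2))
```

The learned weights end up approximating the rule without anyone ever writing it down.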

Photo by Sai Kiran Anagani on Unsplash

This has shown us that it is becoming increasingly hard to understand what machines are doing while, paradoxically, they become more and more successful at doing all kinds of things (image classification, speech recognition, playing Go, etc.).

Sometimes it seems they perform best without any human intervention or any attempt to teach them knowledge manually. This famously prompted Frederick Jelinek (one of the fathers of speech recognition), while building speech recognition software at IBM, to proclaim: “Every time I fire a linguist, the performance of the speech recognizer goes up”.

Computations in large neural networks are distributed over the entire network (and, in deep neural networks, also through several layers with nonlinear activation functions, the so-called hidden layers), making it next to impossible to trace the details of a computation through the layers constituting the network.

The knowledge implicit in the algorithm represented by the network is likewise distributed across its entire weight structure, so you can’t take a look at the weights and grasp intuitively that the network is good at, say, classifying cats or dogs.
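A minimal sketch of this opacity (again my own toy illustration, assuming numpy; the architecture and numbers are arbitrary): a tiny network learns XOR by backpropagation, and we then print its weights. The numbers are perfectly accessible, yet nothing about them announces “this computes XOR”; the knowledge is smeared across the whole weight structure:

```python
import numpy as np

# XOR: inputs and desired outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # hidden layer (nonlinear)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # output layer

for _ in range(10000):                            # backpropagation by hand
    h = np.tanh(X @ W1 + b1)                      # nonlinear hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))          # sigmoid output
    d_out = p - y                                 # gradient of logistic loss
    d_hid = (d_out @ W2.T) * (1 - h ** 2)         # chain rule through tanh
    W2 -= 0.1 * h.T @ d_out; b2 -= 0.1 * d_out.sum(0)
    W1 -= 0.1 * X.T @ d_hid; b1 -= 0.1 * d_hid.sum(0)

print("predictions:", p.ravel().round(2))         # typically close to [0, 1, 1, 0]
print("W1:\n", W1.round(2))                       # try reading XOR off these numbers
print("W2:\n", W2.round(2))
```

Scale this up to millions of weights across dozens of hidden layers, and you see why tracing the computation becomes hopeless.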

As I discussed in some of my other articles (e.g., Ants and the Problems With Neural Networks), this is a very different approach from the symbolic manipulation of a Turing machine, on which modern computers are based and whose step-by-step operations are much closer to rational, sequential reasoning (although you can, of course, run a neural network on a Turing machine, as we do on our computers).

Therefore, we might face principled constraints when it comes to “understanding” information processing in complex distributed systems, or at least when it comes to building intelligent systems that are both efficient and able to keep a full account of their own actions.

The need for narrative

It should come as no surprise, then, that we human beings have our own issues with explainability for some of the information processing going on in our brains.

Behaviorists treated the brain like a black box (similarly focusing on input-output relationships), but in a psychological and social context, we are in constant need of a narrative explaining what we are doing and why we are doing it.

But how reliable is this narrative, really?

Humans have evolved from animals, and animals are doing perfectly well without constantly telling each other stories about what motivates them.

But then we developed the ominous faculty of talking to each other (and, even more crucially, to ourselves). Social and sexual pressures (as I discuss at more length in my article on sexual selection) might have been at the forefront of sparking the development of higher cognitive abilities like language. Even today, a large amount of our brainpower is spent on gossip, on keeping track of what’s happening with the people around us.

Evolutionarily speaking, the neocortex, where our language faculties reside, is the most recent addition to our neural circuitry (much more on the neocortex and its architecture in my article on Why Intelligence Might Be Simpler Than We Think). It makes up most of our brain in terms of sheer mass, but that doesn’t mean it has always been in charge or always “knows” what’s going on.

This follows from a functional view of the brain: if one part of the brain (such as the language center, or the brain regions where the self-model is created) wants to know something about activity in another part, signals specifying that activity need to travel to this brain region.

It should be clear that this is neither feasible nor efficient, nor in most cases even necessary, for all brain activities at all times.

So this only adds to the problem of explainability: the brain regions trying to do the explaining are sometimes simply cut off from the information flow that would give them the necessary insights, as we will see in the case of split-brain patients.

And still, we try to explain.

Being able to keep up a constant narrative has its uses, and seems to be highly desirable for functioning members of society. But we should be mindful of the fact that this does not mean that everything we think and say about ourselves or about others has to be true.

Quite the opposite, actually.

The stories we tell ourselves about ourselves

“If a given idea has been held in the human mind for many generations, as almost all our common ideas have, it takes sincere and continued effort to remove it; and if it is one of the oldest we have in stock, one of the big, common, unquestioned world ideas, vast is the labor of those who seek to change it.”
Charlotte Perkins Gilman

Even people within stories like compelling stories. Photo by Road Trip with Raj on Unsplash

We have a strong psychological predilection for stories. We perceive the world in narratives (take a look at the Heider-Simmel illusion, which convinces us that two triangles and a circle are involved in a dramatic fight scene). As a byproduct of our hypersociality, we are constantly on the lookout for agency and causality in the outside world. And with that comes the desire to have a coherent narrative about ourselves: I am this way or that way because of this or that reason.

But as we saw, for many tasks there is a massive amount of distributed, parallel calculation taking place in the brain and body, making it tricky to understand and verbalize our motivations.

So we end up constantly guessing at the causes of our own behavior.

Further, it’s hard for the brain to implement processes that eliminate contradictory ideas. This means we all end up walking around with massive inconsistencies in our heads that can only be overcome by strenuous critical thinking, a skill not consistently practiced by even the most well-educated critical thinkers.

And to be honest, evolutionarily speaking, a perfectly coherent worldview isn’t really necessary for our survival, so our brain just lets the contradictions be as long as they don’t get in each other’s way too much.

Stories have the power to shape our perception without us even noticing it. The brain (as I alluded to in my article on the Bayesian Brain Hypothesis) constantly predicts the content of its perceptual inputs, so you see and hear things differently based on your expectations about the state of the world (which has led cognitive scientists to describe perception as controlled hallucination). And as we constantly try to press the complex and messy phenomena of the world into simple, sometimes blatantly self-contradictory stories, these stories can end up dramatically altering how we perceive reality.
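Here is a toy Bayesian-update sketch of that idea (my own illustration with made-up numbers, not a model from the article): the very same ambiguous sensory evidence yields very different conclusions depending on the prior, that is, on the story the brain already holds about the world:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(hypothesis | evidence) via Bayes' rule."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# An ambiguous rustle in the grass: the sound is only slightly more likely
# if a snake is there (60%) than if it isn't (40%).
for prior in (0.05, 0.5, 0.9):   # different prior expectations of "snake"
    print(f"prior {prior:.2f} -> posterior {posterior(prior, 0.6, 0.4):.2f}")
# prior 0.05 -> 0.07, prior 0.50 -> 0.60, prior 0.90 -> 0.93
```

The evidence never changes; only the expectations do, and with them the percept.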

Cognitive biases and cognitive dissonance

“He had very few doubts, and when the facts contradicted his views on life, he shut his eyes in disapproval.”
Hermann Hesse, The Fairy Tales of Hermann Hesse

In 1957, Leon Festinger introduced the term cognitive dissonance to signify the discomfort we feel when we experience something that contradicts aspects of our worldview. But this discomfort is often subconscious, and we tend to cope with it by coming up with alternative explanations for the events we don’t like, or by simply discarding them.

Our urge to avoid cognitive dissonance keeps us within the comfort zone of our worldview and makes us subconsciously ignore information or reframe discomforting aspects of our cognition.

Cognitive dissonance is usually coped with by self-justification, which can take place both externally and internally.

In the book Mistakes Were Made (But Not by Me), Carol Tavris and Elliot Aronson describe many cases of cognitive-dissonance-avoidance tactics having grave consequences: the moral decline of civil societies in fascist states, the US government during the Vietnam war (see Ken Burns’ and Lynn Novick’s brilliant and shocking documentary) and the Iraq war, and the biases of the legal system when faced with belated evidence for the innocence of the condemned, among many other things.

Many decisions, especially moral decisions, are made without any clear and rational understanding of their underpinnings (which becomes blatant in experiments where subjects have been subtly manipulated). But after they have been made, they can lead to excesses of self-justification, which in turn cause adjustments in the subject’s worldview in order to restore coherence and minimize cognitive dissonance.

A popular example is the smoker. He or she might be subconsciously aware of the unhealthiness and irrationality of smoking but, when asked, always justifies it in one way or another (“I just like the taste/the ritual/the sociality/being outside/it’s not that unhealthy anyway”). Many of us have been in a similar situation (substitute alcohol, or any other unhealthy habit, for smoking). We do it constantly.

Split-brains and the art of confabulation

At times, the brain’s attempts to justify its behavior after the fact become transparent to the point of parody.

Perhaps nowhere is this as clear as with split-brain patients, whose corpus callosum connecting the right and left hemisphere has been severed (in this short video, you can see Michael Gazzaniga, one of the pioneers behind these experiments, explaining the basic premise).

The left and right hemispheres can no longer communicate, so the left and right visual fields are processed independently. The language center is located in the left hemisphere, meaning it does not have access to the left visual field (keep in mind that vision is cross-wired, so the left hemisphere processes the right visual field and vice versa).

If you show something only to the left visual field, this information is processed in the right hemisphere, which can manifest in a physical reaction (a nervous blush or giggle when you show pictures of naked people, for instance, or pointing somewhere with the left hand) but cannot be directly accessed by the language-processing parts of the brain. So when asked why they giggled or pointed, patients will make up an explanation on the spot.

When we guess at explanations for our behavior but lack a crucial piece of information, we usually don’t say “I don’t know”; we simply fabricate something and then retrospectively present it with the certainty of introspection.

This is called confabulation, and everyone does it literally all the time. While in everyday life it is usually harder to show exactly to what degree we are ignorant of the causes of our behavior, it is safe to say that we never fully understand what determines our decisions (the roots of which go back to the problems of explainability in complex information-processing systems), and so we always make things up to straighten out the narrative.

What to take away from this

It can be a bit horrifying to look into cognitive biases (Wikipedia lists 157, and yes, I counted), but, as someone once said, knowledge is the first step to wisdom. It can be fascinating to take a step back, observe the constant commentary you keep up about yourself, and contrast it with the complexities of your inner life.

This can help us be more mindful of our own biases and confabulations. I try to keep track of moments when I catch myself blatantly talking nonsense about my behavior or obviously trying to justify something to myself, but I’m under no illusion that I have rid myself of my biases. They are built into us on such a deep level that it might just be another cognitive bias to think we can overcome them. If even Daniel Kahneman says that he hasn’t made much progress, it might just be impossible.

Lastly, I think the AI community should be mindful of this as well. Our biases show that we gravely overestimate explainability when it comes to our own actions, so we should be aware of potential limitations to the ability of machines to justify their own behavior. We have a strong desire to have them speak to us and become transparent in their motivations (as Lex Fridman and David Ferrucci spend some time discussing here), but we should watch out that they don’t become too much like us humans.

After all, we don’t want to have robot overlords that confabulate all day.

About the Author

Manuel Brenner studied Physics at the University of Heidelberg and is now pursuing his PhD in Theoretical Neuroscience at the Central Institute for Mental Health in Mannheim, at the intersection of AI, Neuroscience, and Mental Health. He is head of Content Creation for ACIT, for which he hosts the ACIT Science Podcast. He is interested in music, photography, chess, meditation, cooking, and many other things. Connect with him on LinkedIn at https://www.linkedin.com/in/manuel-brenner-772261191/
