On the role of uncertainty in evolution, markets, neuroscience, and AI
The world is unpredictable in many ways.
This has been the natural state of affairs for all living beings fighting an uphill battle for survival in an ever-evolving world. And while modern life has given us more certainties, with scientific theories setting out to explain the mess around us, science has also been confronted with its own limitations. Newtonian physics holds that if we have sufficient information about a system, we can predict its future perfectly. Chaos theory, however, teaches that predicting something to decent precision is very different from predicting it to perfect precision: if we err by only the smallest of margins, our predictions quickly deteriorate the further into the future we look, until they end up completely useless.
The uncertainty of our predictions is intertwined with the complexity of our environment. Given that most environments we find ourselves in are extremely complex, there is a limit to what we can forecast.
Our temporal discounting of future rewards captures this implicit assumption: we put more emphasis on the present than on the future precisely because predictions are frequently flawed, and future rewards are always more uncertain than present ones. The more uncertain the future becomes, the more we care about the present. This hedonic shift is often observed in war-torn societies, or in societies with rampant inflation. Only in stable societies do people have the luxury of believing that sacrificing current resources for far-away pension funds is a good idea.
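The logic of temporal discounting can be made concrete with a small sketch (the function and numbers below are illustrative, not from the text): a reward received t steps in the future is worth its face value scaled down by a discount factor that encodes how much we trust the future.

```python
# Toy sketch of exponential temporal discounting (illustrative example).
# A reward r received `delay` steps in the future is valued at r * gamma**delay,
# where gamma in (0, 1) encodes how reliable we believe the future to be.

def discounted_value(reward: float, delay: int, gamma: float) -> float:
    """Present value of a reward received `delay` steps from now."""
    return reward * gamma ** delay

# In a stable world (gamma close to 1) the future still counts for a lot;
# in a volatile world (small gamma) it barely counts at all.
stable = discounted_value(100.0, delay=10, gamma=0.95)   # ~59.9
volatile = discounted_value(100.0, delay=10, gamma=0.5)  # ~0.1
```

The pension-fund intuition falls out directly: the same future reward is worth hundreds of times more to someone living in a predictable world than to someone living in a chaotic one.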
Predictions take many shapes and forms. If we have skin in the game, we make predictions before staking resources on future outcomes that we expect to hold reward. But individuals are not the only ones who predict: markets make predictions about which products and companies will succeed, and evolution bets on which organisms will be successful.
Darwinian evolution is a noisy process because it takes place in a noisy, uncertain world. There is only so much you can plan for, and this insight is deeply incorporated into the structure of the reproductive machinery.
Nature operates on the principle of variation and selection. Selection is only part of the story: before traits can be selected, sexual reproduction in most species first produces a random distribution over traits by recombining the genomes of two individuals instead of simply cloning an existing one.
This injects noise into the generative process underlying reproduction and guarantees that a species does not consist of clones of a single individual, but spreads many traits and abilities across a spectrum that the population can draw on collectively.
Introducing this variation creates a more robust mechanism for dealing with unknowns and unknown unknowns. You never know when a strange trait that accidentally developed in one individual ends up being useful to an entire species, or when the environment suddenly shifts dramatically (think of a meteor strike). There are many examples of this in evolutionary history (evolutionary biologists distinguish between periods of “creeps”, with relative stability, and “jerks”, with rapid shifts in genetic makeup driven by rapid shifts in the environment).
And so injecting noise into the process is nature’s fool-proof way to deal with the uncertainty of life.
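The variation-and-selection loop described above can be caricatured in a few lines of code (a toy sketch; the bit-string "genome", fitness function, and parameters are all invented for illustration): recombination mixes two parent genomes, mutation injects noise, and selection keeps the individuals best matched to the environment.

```python
import random

# Toy sketch of variation and selection (illustrative only).
random.seed(0)
TARGET = [1] * 20  # the "environment" here simply favors all-ones genomes

def fitness(genome):
    """How well a genome matches the environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def recombine(a, b):
    """One-point crossover: mix genes from two parents instead of cloning one."""
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    """Noise injection: each gene flips with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # selection: the best-adapted survive
    population = parents + [
        mutate(recombine(random.choice(parents), random.choice(parents)))
        for _ in range(20)     # variation: noisy offspring
    ]

best = max(population, key=fitness)
```

Note that no individual "plans" anything here: the noise in recombination and mutation is precisely what lets the population as a whole find good genomes it could never have specified in advance.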
In the framework of Bayesian statistics, we think of probability distributions as encoding our knowledge about variables. A Gaussian random variable is characterized by a mean and a variance. The mean gives us an estimate of the likeliest value of the variable, while the variance (the inverse precision) gives an estimate of the uncertainty associated with it. If we want to predict a Gaussian-distributed variable, our uncertainty about the estimate is proportional to the variance of the distribution. In an information-theoretic sense, this uncertainty corresponds to the entropy of the distribution over the variable, encoding the limits of our knowledge about the value it will take. For a Gaussian, this entropy grows with the variance of the noise associated with the variable: the noise encapsulates everything we cannot predict.
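The link between variance and entropy can be written down explicitly. For a Gaussian with standard deviation sigma, the differential entropy is H = ½ ln(2πe σ²), so uncertainty is a monotonic function of the variance (a standard result; the snippet below just evaluates the formula):

```python
import math

# Differential entropy of a Gaussian: H = 0.5 * ln(2 * pi * e * sigma^2).
# The entropy (our uncertainty, in nats) grows with the variance.

def gaussian_entropy(sigma: float) -> float:
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# Doubling the standard deviation adds exactly ln(2) nats of uncertainty:
low = gaussian_entropy(1.0)
high = gaussian_entropy(2.0)
```

So "the noise encapsulates everything we cannot predict" has a precise reading: the wider the noise distribution, the more bits of information about the outcome are simply unavailable to us.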
We need to factor in this uncertainty in order to make good predictions. And if we expect the prediction to be dominated by uncertainty, we should either put less weight on it or take several guesses before drawing any conclusions.
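The "take several guesses" strategy has a simple statistical justification: averaging n independent noisy estimates reduces the variance by a factor of n. A small simulation (illustrative; the true value and noise level are made up) shows the effect:

```python
import random
import statistics

# Sketch: when a prediction is dominated by noise, averaging several
# independent guesses shrinks the error (variance falls as 1/n).
random.seed(42)
TRUE_VALUE = 10.0

def noisy_guess():
    """One estimate, corrupted by Gaussian noise."""
    return TRUE_VALUE + random.gauss(0.0, 3.0)

# Typical error of a single guess vs. the average of 25 guesses:
single_errors = [abs(noisy_guess() - TRUE_VALUE) for _ in range(1000)]
averaged_errors = [
    abs(statistics.mean(noisy_guess() for _ in range(25)) - TRUE_VALUE)
    for _ in range(1000)
]
```

With 25 guesses per estimate, the typical error drops to roughly a fifth (1/√25) of a single guess, which is exactly why polling many individuals, markets, or models beats trusting any one of them in a noisy environment.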
Volatility is truth.
— Nassim Taleb
Like evolution, markets are made up of noisy processes. Markets are places where good ideas are separated from bad ideas by trying them out in the arena of real life. As Nassim Taleb puts it, the volatility of markets is the truth of good and bad ideas being separated. But good ideas and bad ideas often only reveal their nature against the test of reality. Precisely because human economic and technological developments are so complex and unpredictable, it is difficult to know which ideas will persist and which will not. Unknown unknowns are bound to occur, and what is a terrible adaptation in one environment may turn out to be a lifesaver in another.
Thus, markets follow a similar logic to the evolutionary process: a mechanism for dealing with the uncertainty of the expected future is built into them. This is a strong argument for why central planning was so unsuccessful when practiced in its naive form in many communist societies. A governing body cannot have complete enough information to know what the future brings. A top-down approach by one centralized entity essentially makes a single prediction of a future outcome with a single suggested solution, disregarding the endless array of alternative scenarios that would require an endless array of alternative solutions.
In a way, entrepreneurs are so important for society because so many of them fail. Many things in life follow a similar evolutionary logic and are accordingly difficult to control by top-down measures. The unruly, vibrant process that is language is another such example. Many institutions have tried and failed to establish rules for what constitutes correct and incorrect language, but as soon as a pattern in a language is particularly well adapted to its environment, it propagates into the future on its own, while other more artificial creations quickly fizzle out.
Although our intellect always longs for clarity and certainty, our nature often finds uncertainty fascinating.
— Carl von Clausewitz
Neuroscientists have struggled with the fact that many of their experiments are hard to reproduce: if you send the same rat through the same maze, the neural recordings can still look significantly different, and even lead to different outcomes. This might be seen as a weakness of our neural machinery being implemented in meat, with all the analog imprecisions that come with it. But some evidence is now emerging that the noise underlying neural processing might also be a feature, and not only a bug. Connecting to the principles discussed above, this influential paper on noise in the nervous system states that “networks that have formed in the presence of noise will be more robust, which will facilitate learning and adaptation to the changing demands of a dynamic environment.”
Similarly, current AI algorithms train better with a certain amount of randomness injected into them, be it through stochastic gradient descent or through random initialization of network parameters (which effectively adds a noise term to them). This evolutionary process of training many models leads to some better and some worse outcomes, with some models ending up well adjusted and others badly adjusted. Results like the lottery ticket hypothesis illustrate that luck after random initialization plays an important role in training successful machine learning algorithms.
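Both sources of randomness mentioned above (random initialization and the stochasticity of gradient descent) fit in a few lines. The toy problem below is invented for illustration: several models start from different random weights and are updated on randomly sampled data points, yet the noisy process reliably finds the right answer.

```python
import random

# Sketch of noise injection in training (illustrative toy problem):
# random parameter initialization plus stochastic gradient descent,
# where each update uses one randomly sampled data point.
random.seed(1)
data = [(x, 2.0 * x) for x in range(-10, 11)]  # noiseless target: y = 2x

def train_one_model(lr=0.005, steps=500):
    w = random.gauss(0.0, 1.0)      # noisy starting point (a "lottery ticket")
    for _ in range(steps):
        x, y = random.choice(data)  # stochastic "minibatch" of size 1
        grad = 2 * (w * x - y) * x  # gradient of the squared error (w*x - y)^2
        w -= lr * grad
    return w

# Train several models: different random draws give slightly different
# trajectories, echoing the "some better, some worse" logic of evolution.
weights = [train_one_model() for _ in range(5)]
```

Despite starting from scattered random weights, every run converges near the true slope of 2.0; the noise does not prevent learning, it is part of how the search explores the landscape.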
The Bayesian Brain Hypothesis goes further, arguing that all neural processing is inherently stochastic and rests on the foundation of noisy predictions based on statistical models of the environment that build precision estimates right into them. This makes sense if life decomposes into known and unknown elements: if we have a good sense of how much we know and how much we do not know, we are surely better equipped to navigate and act in the world.
“Uncertainty is the only certainty there is, and knowing how to live with insecurity is the only security.”
— John Allen Paulos
In his TED talk, Max Hawkins describes how he injected noise into his life by having many of his actions dictated by a random computer program. And while his measures were a bit on the extreme end, I think there is something to be drawn from the importance of noise in finding good solutions in uncertain environments. It implies a certain “letting go” of too much top-down control in many areas of our lives, a certain Buddhist insight into the interconnectedness of all beings and the complexity of the world. It is also an interesting lesson in how to design complex systems (some of these lessons, as mentioned, are being picked up in AI, e.g. with evolutionary and other stochastic algorithms): embrace noise instead of trying to get rid of it.
About the Author
Manuel Brenner studied Physics at the University of Heidelberg and is now pursuing his PhD in Theoretical Neuroscience at the Central Institute for Mental Health in Mannheim, at the intersection of AI, Neuroscience and Mental Health. He is head of Content Creation for ACIT, for which he hosts the ACIT Science Podcast. He is interested in music, photography, chess, meditation, cooking, and many other things. Connect with him on LinkedIn at https://www.linkedin.com/in/manuel-brenner-772261191/