How self-learning AI could re-define our concepts of creativity
The triumph of Google’s AlphaGo over Go world champion Lee Sedol in 2016, by a score of 4:1, caused quite a stir that reached far beyond the Go community, with over a hundred million people watching as the match took place. It was a milestone in the development of AI: Go had long withstood the attempts of computer scientists to build algorithms that could play at a human level. And now an artificial mind had been built that, with relative ease, dominated someone who had dedicated thousands of hours of practice to honing his craft.
This was already quite the achievement, but then AlphaGoZero came along and gave AlphaGo a taste of its own medicine: it won against AlphaGo by a margin of 100:0, only a year after Lee Sedol’s defeat. This was even more spectacular, and for more than the obvious reasons. AlphaGoZero was not merely an improved version of AlphaGo. Where AlphaGo had trained with the help of expert games played by the best human Go players, AlphaGoZero had started literally from zero, working out the intricacies of the game without any supervision.
Given nothing more than the rules of the game and the winning condition, it locked itself in its virtual room and played against itself for only 34 hours. It did not combine humanity’s historically built-up understanding of the principles and aesthetics of the game with the unquestionably superior numerical power of computers; it emerged, entirely on its own, as the dominant Go force of the known universe.
How could it do this?
Chess vs. Go
Chess is far more popular in the West than Go. Part of that might be cultural, owing to a certain affinity that has been ascribed to the intellectual characters of East and West: chess is sometimes likened to the rule-based Western mind, Go to the holistic tendencies of the Eastern mind. Go is considered more open, more intuitive than chess.
This angle can help explain the big difference in the success scientists had in building a Go computer compared to a chess computer. After all, the dominance of the artificial mind is old news to the chess community.
People have been trying to build chess-playing computers for as long as there have been computers (with computer science pioneers Babbage, Turing, Shannon, and von Neumann all devising chess machines or programs), making chess the most thoroughly studied problem in the history of artificial intelligence.
As the climax of this development, Deep Blue won against world champion Garry Kasparov in 1997 to much international acclaim. Three years later, the best Go computer was still being beaten by a 9-year-old boy (even with a nine-stone handicap).
This difference is profound and stems from differences in the fundamental make-up of the two games. Understanding it helps explain why building a Go engine was so much harder than building a chess engine, and also why, once it succeeded, it was such a milestone in our pursuit of artificial intelligence.
Tactics vs. Strategy
“Chess is the art which expresses the science of logic.” – Mikhail Botvinnik
Chess and Go each combine short-term tactics with long-term strategy, as David Epstein explains in his book Range. Computers are very good at short-term tactics, because they can literally go through every possible combination in their little mechanistic heads and evaluate the best one, but they have a harder time with long-term strategic thinking.
This relates to the combinatorial explosion in the space of all moves: the tree of possibilities blows up exponentially, and after 50 moves there are more possible games than atoms in the universe, so the limitations of finite computing power kick in fairly quickly. To overcome this problem, programmers need good heuristics and decision tables to rule out obviously bad moves and prune the decision tree.
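The scale of that explosion is easy to sketch numerically. The branching factors below (roughly 35 legal moves per chess position, roughly 250 per Go position) are commonly cited textbook estimates, and the atom count is an order-of-magnitude figure, so treat the numbers as illustrative:

```python
import math

# Commonly cited average branching factors (illustrative estimates).
CHESS_BRANCHING = 35
GO_BRANCHING = 250
ATOMS_IN_UNIVERSE = 10**80  # common order-of-magnitude estimate

def tree_size(branching: int, plies: int) -> int:
    """Number of leaf positions after `plies` half-moves, with no pruning."""
    return branching ** plies

# 50 full moves = 100 half-moves: even chess's tree dwarfs the atom count.
print(tree_size(CHESS_BRANCHING, 100) > ATOMS_IN_UNIVERSE)  # True

# Go's tree at the same depth is larger by dozens of orders of magnitude.
orders_of_magnitude = 100 * math.log10(GO_BRANCHING / CHESS_BRANCHING)
print(round(orders_of_magnitude))  # ~85
```

This is why brute-force enumeration is hopeless without aggressive pruning, and why Go, with its far larger branching factor, resisted that approach so much longer than chess.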
These are more readily available in chess than in Go: a pawn is worth roughly 1, a bishop or knight 3, a rook 5, a queen 9; if you control the center, you are usually better; if your pieces are developed, that’s usually good; if your pawns are connected, that’s good, and so on (as with everything, the exceptions prove the rules).
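As a toy illustration of such heuristics, a few lines of Python can compute the material balance of a position from the piece values above. The FEN-style board string and the helper name `material_balance` are illustrative choices for this sketch, not part of any chess library:

```python
# Standard heuristic piece values; the king gets 0 since it can't be traded.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_balance(board: str) -> int:
    """Material score from White's perspective.
    `board` is the piece-placement field of a FEN string
    (uppercase = White, lowercase = Black; digits and '/' are skipped)."""
    score = 0
    for ch in board:
        if ch.lower() in PIECE_VALUES:
            value = PIECE_VALUES[ch.lower()]
            score += value if ch.isupper() else -value
    return score

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
print(material_balance(start))  # 0: material is equal in the starting position
```

Real engines layer many more heuristics (king safety, pawn structure, mobility) on top of raw material, but the principle is the same: reduce a position to a number.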
After a deep, heuristic-enhanced search of the tree of future moves, chess positions can be evaluated numerically as being worth a certain value for Black or White. The symbolic, rule-based approach to AI that was most popular and seemed most promising for substantial parts of the last century thus naturally lent itself to chess, and building better engines became a question of finding more efficient algorithms for searching the decision trees and of ramping up computing power.
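To make this search-then-evaluate idea concrete, here is a minimal negamax sketch, the symmetric formulation of minimax that classical engines build on (real engines add alpha-beta pruning, transposition tables, and rich evaluation functions). The `Nim` toy game and its interface are illustrative stand-ins, not taken from any actual engine:

```python
import math

class Nim:
    """Toy game for illustration: players alternately take 1 or 2 stones;
    whoever takes the last stone wins."""
    def __init__(self, stones):
        self.stones = stones
        self.history = []
    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]
    def play(self, move):
        self.history.append(move)
        self.stones -= move
    def undo(self):
        self.stones += self.history.pop()
    def evaluate(self):
        # From the side to move's perspective: if no stones remain,
        # the opponent took the last one, so we have lost.
        return -1.0 if self.stones == 0 else 0.0

def negamax(game, depth):
    """Search the move tree to `depth` plies and return the best
    score achievable for the side to move."""
    if depth == 0 or not game.legal_moves():
        return game.evaluate()
    best = -math.inf
    for move in game.legal_moves():
        game.play(move)
        best = max(best, -negamax(game, depth - 1))  # opponent's gain is our loss
        game.undo()
    return best

print(negamax(Nim(4), 6))  # 1.0: with 4 stones, the side to move can force a win
```

Swap `Nim` for a chess position and `evaluate` for a heuristic like the piece values above, and you have the skeleton of a classical chess engine; the hard part is searching deeply enough, which is exactly where Go's branching factor breaks the approach.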
But for Go this wasn’t working: it’s much harder to evaluate Go positions in the same way. What humans call “intuition” and long-term strategy play a larger role. Even professional Go players often can’t explain the intuitions they have about what is going on in a game. There is simply too much that can happen, and the search tree blows up too quickly to build algorithms with clear-cut criteria that can compete with humans.
And so Go needed to be cracked with a completely different approach. The deep learning revolution arrived at a convenient time, offering a large array of new techniques. As David Silver, the leader of the AlphaGo team, explains, AlphaGo was the first algorithm of its kind that did not rely on structured searches but used reinforcement learning and deep neural networks instead.
And as their success bore witness, it worked tremendously well.
“Reinforcement learning feels like what intelligence is.” – David Silver
Reinforcement learning can roughly be defined as an intelligent agent interacting with an environment. As the word “intelligent” already suggests, reinforcement learning phrases the overall problem of artificial intelligence particularly clearly. Our own intelligence is most clearly manifest in our ability to operate in the complex and uncertain environment we call the world. We can set our own goals and pursue them by exploring our environment and solving the problems that stand between us and those goals.
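That agent-environment loop can be sketched in a few dozen lines. The following is a minimal tabular Q-learning example on a toy corridor world; the environment, the hyperparameters, and all names here are illustrative choices for this sketch, far simpler than anything in the AlphaGo family, but the loop of acting, observing reward, and updating is the same idea:

```python
import random

N_STATES = 5          # corridor cells 0..4; the reward sits at the right end
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):  # episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: usually exploit current knowledge, sometimes explore.
        values = [Q[(state, a)] for a in ACTIONS]
        if random.random() < EPSILON or values[0] == values[1]:
            action = random.choice(ACTIONS)
        else:
            action = ACTIONS[values.index(max(values))]
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy should step right in every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Nobody tells the agent to walk right; it discovers the policy from reward alone, which is the same structural point the article makes about AlphaGoZero discovering Go from nothing but the rules.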
We can imagine a Go player sitting down and exploring the game playfully, can imagine what it means to discover a new and creative idea this way. And so the principles behind AlphaGoZero hit very close to home when compared to our intuitive understanding of intelligence.
This is why David Silver thinks the future of all AI could very well have its foundation in reinforcement learning.
The role of intuition in chess
While building symbolic AIs is possible for chess, it should not be forgotten that what we call “intuition” plays an enormous role for human players as well. While modern chess engines can evaluate any position by calculating through large sets of possible moves and comparing them against databases of expert games, grandmasters don’t have that luxury: while they can calculate many moves ahead, they also rely on intuitive estimates of the merits of positions and the quality of moves, especially in faster time formats such as Rapid, Blitz, or Bullet chess (with between 15 minutes and 1 minute on the clock per player).
“While all artists are not chess players, all chess players are artists.”
– Marcel Duchamp
Chess was never only a science. Chess is also a sport, and chess is also an art, a unique spectacle in which composition of timeless beauty can take place simultaneously with performance, almost like on a jazz record. And so in chess there is always an interplay between an artistic and a scientific element. These come together in every good chess player and in every chess game, although with markedly different emphasis. But as is to be expected, this emphasis has been shifting over the last twenty years.
The advent of computers has left its trace on modern chess and has led to a fascinating dynamic in the chess world. This development might foreshadow many developments in store for mankind, questioning how we define ourselves and some of what makes us most human: our art, our creativity, our sense of beauty.
How the machines have changed chess
“To play for a draw, at any rate with White, is to some degree a crime against chess.” – Mikhail Tal
Following world champion Kasparov’s defeat by Deep Blue in 1997, chess was never quite the same. If computers were better at chess than humans, then surely there was something to learn from them. And so modern chess grandmasters began studying positions with the help of engines, checking combinations, evaluating positions, and studying the best moves aided by the machines’ combinatorial power and foresight. The 2013 world championship match between Anand and Carlsen was considered by many a battle between the old generation and an entirely new generation of rising stars, the first to have trained their whole lives with the help of supercomputers. Carlsen won, ushering in a new era of chess, perhaps best encapsulated by the 2018 world championship match, in which Carlsen and Caruana drew all twelve classical games.
“Of course, errors are not good for a chess game, but errors are unavoidable and in any case, a game without errors, or as they say a ‘flawless game’, is colorless.” – Mikhail Tal
Some critics argue that high-level chess has become less about daring sacrifices and beautiful attacks and more about not making any mistakes, understanding positions well, and knowing the best moves in many openings inside out (and then winning drawn endgames, as world champion Carlsen is so well known for doing). Of course, this is also due to the extremely high quality of play required today to reach the top of the chess world.
The human element of chess
“For me, chess is more art than science.” – Mikhail Tal
But part of it is also due to engines: some in the community claim that “machines have ruined chess” and that modern chess lacks the “human” element, the inspiration and freshness found in the classic games of the early masters. Games of Paul Morphy or Adolf Anderssen like the Evergreen Game or the Night at the Opera are still being analyzed 150 years after being played, and those of masters like Capablanca or Tal continue to make people fall in love with the game. But are the days of “human” chess, of chess defined by breakneck attacks and strokes of inspiration, really over?
Mikhail Tal had many nicknames: to many in the chess community, he is simply known as the Magician from Riga. Like few others, he represents the chess player as an artist, a poet: eccentric, daring, and fun. His many dramatic sacrifices have attained legendary status in the chess community. He did not play for draws; he played so that every game was like a poem: unique, exciting, emotional, and, if everything turned out right, beautiful. Nevertheless, many think that modern grandmasters would easily win against the older masters. While dramatic sacrifices can sometimes overwhelm the opponent, destabilize positions, invite mistakes (Tal likened it to leading his opponent into a dark forest), and lead to advantages in the endgame, if they are countered carefully they can also easily lead to lost material, unstable positions, and, ultimately, defeat.
But maybe we have our dichotomies wrong. Does the machine mind have to represent mechanical rigidness, brute calculation? Does it have to take out the spirit of the game? Does the artificial mind necessarily seem artificial and stale to us? Or is there another kind of beauty to be found?
AlphaZero says yes: and it might change modern chess yet again.
AlphaZero and the Beauty of the Artificial Mind
“Machines ruined chess. Self-learning AI might save chess.” – agadmator
AlphaZero is a jack of several trades: it moved on from Go to learn chess, and after less than 24 meager hours of training, it beat Stockfish, the most powerful chess engine to date.
There is something peculiar about AlphaZero’s chess playing. It makes moves that come across as human and fundamentally creative. AlphaZero can play aggressively, pursuing long-term strategies, seemingly teasing its opponents, shutting down pieces cleverly and imaginatively. Almost like a human being, almost like those poets of the chessboard, almost like Tal, but better: like Tal on steroids.
“I always wondered how it would be if a superior species landed on Earth and showed us how they played chess. Now I know.” – Peter Heine Nielsen
Watching some of AlphaZero’s games can be an inspiring experience. It plays in a way that is almost inconceivable for human players: alien chess, even to grandmasters. And so many, among them world champion Magnus Carlsen, have been studying AlphaZero’s games. According to Carlsen, they have brought him new ideas and perspectives on the game and helped him improve his style. The way AlphaZero worked through the different “human” openings during its training phase is also studied with some interest.
Over 20 years after Deep Blue, the artificial mind is changing chess yet again.
A similar pattern has been observed in Go: in the second game against Lee Sedol, AlphaGo played the famous “move 37”. It defied all the rules Go players are taught from a young age, and none of the professional players had anticipated it. It was discovered by a computer and became part of the shared knowledge of mankind. And after the initial shock of defeat, Lee Sedol went so far as to say that AlphaGo brought the joy of the game back to him, because he realized there is so much more to explore in a game he had been exploring all his life.
Where this could lead us
“Creativity is intelligence having fun.” – Albert Einstein
Creativity means discovering something that wasn’t known before. As AlphaZero trains through a process of discovery, there is space for it to become truly creative, even putting into question what it is that we mean by creativity.
For Garry Kasparov, the advent of AlphaZero is a more profound moment in the history of AI than Deep Blue, precisely because it is based on discovery. When we think of the dawn of AI, we think of the cold rationality of the machine mind slowly taking over mankind. We usually don’t think of the creativity, of the fun, that could come from it.
We are the apes that gravity couldn’t keep from venturing out into space. We are the apes that shoot protons into each other close to the speed of light and discover the gravitational ripples of black holes billions of light-years away. We are agents in the incredible complexity and intricacy of reality, ever exploring, ever learning. AlphaZero shows us how the discoveries of the artificial mind can provide their own kind of inspiration for our constant curiosity. There are many other domains in which the interplay of intricate patterns enriches our existence. There is music (with Deepfake music starting to sound irritatingly good), poetry, art, science.
Domains in which we might be holding on too tight to our ideal of what it means to be creative and meaningful.
AlphaZero is so great precisely because it shows what AI could add to the mix: and that AI doesn’t have to take away from the joy of what we believe makes us most human.

About the Author
Manuel Brenner studied Physics at the University of Heidelberg and is now pursuing his PhD in Theoretical Neuroscience at the Central Institute for Mental Health in Mannheim at the intersection between AI, Neuroscience and Mental Health. He is head of Content Creation for ACIT, for which he hosts the ACIT Science Podcast. He is interested in music, photography, chess, meditation, cooking, and many other things. Connect with him on LinkedIn at https://www.linkedin.com/in/manuel-brenner-772261191/