There is such a thing as robophobia: an anxiety disorder defined by the irrational fear of technological advances in robotics, encompassing drones, robots and artificial intelligence (AI). As Baun et al. noted, the underlying fear is that artificially intelligent robots will take over in an insurrection of automatons, hence the perceived need to protect mankind. This is the typical dystopian plot of sci-fi movies, dramatized to express a ‘present and imminent danger’ if AI is not closely regulated, monitored and restrained by airtight codes and laws that ensure it remains subservient to mankind.
The machines may not be rising or plotting against humanity, but they are definitely taking over, and with increased efficiency. The result has been mounting job losses as automation absorbs mundane, repetitive tasks, takes on data-heavy functions, and undertakes work that would otherwise be dangerous or impossible, like clearing toxic waste, repairing live high-voltage wires, and gathering data in the depths of oceans and volcanoes. With advances in technology, AI has, according to an Ipsos survey for the World Economic Forum, made life easier through innovations in transportation, entertainment, education, the environment, safety, food and nutrition, shopping, and employment.
However, the mistrust of AI persists beyond a niggling concern. It has been reinforced by the abuse of data-collecting apps and the deployment of algorithms to not just influence but outright manipulate the decisions we make, feeding the conspiracy theory of a villainous AI monster taking over our lives. Professor Stuart Russell explains that “algorithms have learnt to manipulate people to change them so that in the future, they are more susceptible and they can be monetized at a higher rate.”
Algorithms can be abused through clickbait and disinformation to persistently nudge people online towards certain beliefs, preferences and opinions. Such schemes justify the flagging of certain keywords by law-enforcement agencies in an era in which social media has been misused as a tool for radicalization, from terrorist recruitment to the insurrection in the US. But it has also been a democratizing and social-justice tool that fanned the flames of the Arab Spring and mobilized masses protesting police brutality in Nigeria.
The algorithms on their own, however, cannot be blamed for manipulating behavior, machine learning notwithstanding. Rather, the manipulative objective is a deliberate choice by interested human actors. As Russell observes:
“The algorithms don’t care what opinions you have, they just care that you’re susceptible to stuff that they send. But of course, people do care, and they hijacked the process to take advantage of it and create polarization that suits them for their purposes… The algorithm doesn’t know that human beings exist at all and from its point of view, each person is simply a click history.”
The prospect of advanced AI doing everything that we can do, but better, is no longer a far-off ambition. When self-driving cars become a reality globally, the attendant efficiency will drastically drop fares, and 25 million taxi drivers could easily be rendered jobless, a disruption far greater than the Uber revolution.
The real deal, which has sent the anti-AI battalion into a tailspin, is machine learning, which enables AI to improve automatically through experience and independently interpret data. The significance of this technological leap in our society today cannot be gainsaid, with robots increasingly becoming a point of reference to supplement or even replace human judgment. Andrew Woods acknowledges that scholars have good reason to be concerned about the fairness, accuracy and humanity of AI systems. This is probably what inspired the likes of Steve Woolgar to propose a ‘sociology of machines’ in light of AI and its implications for understanding human behavior. Woolgar was of the opinion that “one of the more important options is to view the AI phenomenon as an occasion for reassessing the central axiom of sociology that there is something distinctively ‘social’ about human behavior.”
While recognizing the concerns regarding AI, Woods argues that they are one-sided:
“While these concerns are important, they nearly all run in one direction: we worry about robot bias against humans; we rarely worry about human bias against robots. This is a mistake. Not because robots deserve, in some deontological sense, to be treated fairly — although that may be true — but because our bias against non-human deciders is bad for us.”
Maybe humanity is inclined to close ranks in anti-AI bias, to the point of shelving innovations over ‘statistically insignificant’ flaws that endanger life, such as an accident caused by a self-driving car.
Ironically, we are likely to accommodate more perilous risks from fellow humans (be they dictatorial politicians, sex predators, or drunken drivers). It could be that we are paralyzed by a fear of forfeiting our civilization to machines, as captured by E.M. Forster in his spookily prescient short story, The Machine Stops, which he penned in 1909. As Russell notes, “… the story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it.” It is this question of intervention and of understanding machine learning that AI and computational neuroscience researchers are still grappling with. Manuel Brenner alluded to this when discussing machine learning and explainability with the poser: “How do we explain emerging intelligence in our AI systems, and make the systems explain to us how they make decisions, while still being able to explain the details of how they process their information?”
Nowhere is the dread of AI taking over more manifest than in Isaac Asimov’s three laws of robotics. It is our way of playing God with the machines through three commandments: i) thou shalt not injure a human being or, through inaction, allow a human to come to harm; ii) thou shalt obey orders given by a human being except when such orders conflict with the first commandment; and iii) thou shalt protect your existence except when doing so contradicts the first or second commandment.
We have come a long way since the days of Asimov’s ‘God code’, which reflected the predominant thinking of his time up to the 1960s. But that body of canons designed to keep machines in check is unraveling. AI is now being deployed in warfare, which renders the instructions void. The definition of harm is also subjective, as there is no definite provision for mental and emotional anguish. In any case, in emergencies or under certain medical protocols, there may be a need to carry out what could be considered painful (even harmful) procedures to save a life (e.g. amputation and abortion). And as Russell pointed out, “…a self-driving car that followed Asimov’s first law would never leave the garage because there is no way to guarantee safety on the freeway — just can’t do it because someone else can always just side-swipe you.” As such, after 80 years, Asimov’s laws, as Mark Anderson argues, need updating.
That update is ushering in the era of what Emma Hart, the British computer scientist, dubbed ‘artificial evolution.’ Russell talks about removing fixed objectives and integrating uncertainty into AI to make that a reality. Departing from the problematic tradition of assigning fixed objectives to algorithms is meant to endow AI with the ‘human flaw’ of not knowing everything and therefore having to come back for further direction. Russell explains that the problem with current AI designs is that they are built to know the full objective. Ideally, however, AI should be “systems that know they don’t know what the objective is and then they start exhibiting behaviors like asking permission” before executing tasks. This is the essence of human-compatible AI as illustrated in his book, Human Compatible.
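To make the idea concrete, here is a minimal sketch in Python of an agent that is uncertain about its objective and defers to a human when its candidate objectives disagree. The hypotheses, belief weights, deferral threshold and prompt are all invented for illustration; this is a toy rendering of the principle, not Russell’s actual formulation.

```python
# Toy sketch of an agent with an uncertain objective, in the spirit of
# human-compatible AI. Everything here (hypotheses, threshold, prompt)
# is an illustrative assumption, not an established algorithm.

def expected_utility(action, hypotheses):
    """Average an action's utility over the candidate objectives,
    weighted by the agent's belief in each."""
    return sum(p * utility(action) for utility, p in hypotheses)

def choose(actions, hypotheses, ask_human, threshold=0.95):
    best = max(actions, key=lambda a: expected_utility(a, hypotheses))
    # Share of belief mass under which `best` is actually optimal.
    agreement = sum(p for utility, p in hypotheses
                    if max(actions, key=utility) == best)
    if agreement < threshold:
        # The system "knows it doesn't know" the objective:
        # it asks permission instead of acting unilaterally.
        return best if ask_human(best) else None
    return best

# Two candidate objectives the designer might have meant: in one the
# task is worth doing, in the other it is actively harmful.
hypotheses = [
    (lambda a: {"mow_lawn": 5, "wait": 0}[a], 0.9),
    (lambda a: {"mow_lawn": -10, "wait": 0}[a], 0.1),
]
action = choose(["mow_lawn", "wait"], hypotheses,
                ask_human=lambda a: input(f"May I '{a}'? [y/n] ") == "y")
print("chosen action:", action)
```

Under the 0.9/0.1 belief split, mowing looks best on average, yet the agent still checks in, because a tenth of its belief mass says the action would be harmful.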
Hart is putting theory into practice with her work on artificial evolution, as she works on “a radical new technology which enables robots to be created, reproduce and evolve over long periods of time, a technology where robot design and fabrication becomes a task for machines rather than humans.” The technology draws inspiration from nature but goes beyond biomimicry. It is bound to be one giant technological leap for machines, entailing the mixing of “the digital DNA of the chosen robots to create a new blueprint for a child robot that inherits some of the characteristics of its parents, but occasionally also exhibits some new ones.” And just as in nature, it is anticipated that a cycle of selection and reproduction will beget new generations of better robots that exhibit optimized behavior and are better adapted to their environment, only that this will not take thousands of years as biological evolution does. The stage is thus set for evolutionary algorithms (EAs), which draw inspiration from biological evolution mechanisms like reproduction, mutation, recombination and selection. EAs are algorithms that perform optimization or learning tasks with the ability to evolve. As Xinjie Yu and Mitsuo Gen argue in Introduction to Evolutionary Algorithms:
“The natural evolution of species could be looked at as a process of learning how to adapt to the environment and optimizing the fitness of species. So we could mimic the viewpoint of modern genetics, i.e., ‘survival for the fittest’ principle, in designing optimizing or learning algorithms.”
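To illustrate the mechanics, here is a bare-bones evolutionary algorithm in Python on the classic ‘OneMax’ toy problem, where an individual is a bit string and fitness simply counts its 1-bits. The population size, mutation rate and generation count are arbitrary choices for the sketch, not values drawn from Yu and Gen.

```python
import random

# A bare-bones evolutionary algorithm on the "OneMax" toy problem:
# individuals are bit strings and fitness counts the 1-bits.
# Population size, rates and generation count are arbitrary choices.

GENES, POP, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(ind):                     # "survival for the fittest":
    return sum(ind)                   # more 1-bits, better adapted

def select(pop):                      # tournament selection of a parent
    return max(random.sample(pop, 3), key=fitness)

def recombine(mum, dad):              # one-point crossover: the child
    cut = random.randrange(1, GENES)  # inherits "digital DNA" from
    return mum[:cut] + dad[cut:]      # both parents

def mutate(ind):                      # occasional new characteristics
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):          # each cycle begets a new generation
    population = [mutate(recombine(select(population), select(population)))
                  for _ in range(POP)]

print("best fitness:", fitness(max(population, key=fitness)))
```

Selection favors fitter parents, recombination mixes their digital DNA, and mutation occasionally introduces new characteristics: the same cycle Hart envisions for physical robots, compressed into milliseconds rather than millennia.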
Ceding decisive control to AI and integrating uncertainty into algorithms is almost like granting machines free will, which has an uncanny parallel to the creation story, a story whose trajectory we are all too familiar with: mankind eventually rebels against the divinity as we evolve physically, emotionally, socially and intellectually. Should this be a cause for concern with AI evolution? That’s a story for the future!
About the author – Paul Omondi (@omondipaul)
Paul Omondi is a creative and innovative media and communications expert. He is experienced in the development and execution of communication strategies, editorial direction, creative and technical writing. He is a career journalist, business manager, and leader who is passionate about positive social impact and sustainable development, especially through the use of digital tools and innovation. Paul has a master’s degree in digital journalism.