Deus Ex Machina

I recently caught up over a cup of tea with Tim Hawken, another local award-winning author and the creator of the Hellbound Trilogy.

We got onto a subject we both love – science fiction’s influence on technology – and he offered to write the following piece, for our erudition and stupefaction, on the current state of Artificial Intelligence and the risks it poses…

The creation of Artificial Intelligence will likely be our final invention. This is, however, a double-edged sword. One side will have the ability to cut away disease, suffering, death and essentially our need to ever create anything again – it will invent everything for us. Human thinking and ingenuity will become obsolete. The other side of the blade has the capability to annihilate us completely. There is a growing movement of people who think an extinction event is the likely default outcome unless we look seriously into the problem of how to develop ‘Friendly AI’. These people are not apocalyptic yahoos shouting from street corners, wearing sandwich boards with “the end is nigh” scrawled on them. They are AI developers, futurists and Oxford professors at the top of their respective fields. This article attempts to distil the key reasons why the risk of a Malignant AI is so great, and to explain why the development of Friendly AI is a lot harder than it appears on the surface. Before you start to panic, take a deep breath. Everything is going to be okay: extinction is at least five years away.

Before delving into the sensationalism of destruction, it’s worth clarifying what is actually meant by Artificial Intelligence in this context. Many forms of AI already exist. The program Deep Blue can beat grandmasters at chess, advanced algorithms can predict weather events with impressive accuracy and Siri can tell us how to find a cheap Indian restaurant close by. What all of these applications have in common, though, is that they’re very task-specific. They aren’t creative or general in scope. The true benchmark of what some might refer to as ‘The Singularity’ or Artificial General Intelligence (AGI) is that a machine will match – or surpass – human ability at a wide range of tasks. In his recent book Superintelligence, Professor Nick Bostrom lists strategizing, social manipulation, hacking, research ability and economic productivity as just some of the powers an AGI would pack in its intellectual arsenal. In short, a ‘true’ AI would be like one of those incredibly annoying people who are just good at everything, only better.

Intelligence beyond human levels is so dangerous because our own ability to think is what gives us a decisive advantage over other animals. While we are weaker, slower and less resilient than many other creatures, our mental capability has led us to dominate the planet and its inhabitants. The survival of most other animals rests on our willingness to help them (or allow them) to continue to thrive in ‘our world’. We have become the caretakers of Earth. Think about what would happen if a new species came into being that was smarter than us. What would happen to us if that species didn’t care about our survival? What if that species saw us as a genuine threat to its own? By handing over our advantage to these creatures, we are putting the asset that protects us most into tentacles we may not control. The chilling thing to realise is that a leap in Artificial Intelligence is likely to produce an IQ gap that makes the difference between humans and mice seem small in comparison. In effect, should a Superintelligence decide it didn’t want us around, there would not be a Terminator or Matrix-like war where we struggle for survival against the machines. We don’t go to war with flies landing on our food. We swat them.

This huge gap in mental ability between humans and an AI would be produced by what has been termed an intelligence explosion. Imagine we develop whole brain emulation (essentially a human brain successfully uploaded onto a computer). To start, this ‘brain’ is hooked up to the Internet and consumes the entire sum of human knowledge in a few minutes. Uninhibited by the fleshy wetware it used to think with, the emulation is also 100,000 times faster at processing information. This means a millennium of critical thinking can happen in under four days. What kind of strategies and plans do you think it would come up with, unconstrained by time? According to Eliezer Yudkowsky from the Machine Intelligence Research Institute: “It would be physically possible to build a brain that computed a million times as fast as a human brain, without shrinking the size, or running at lower temperatures, or invoking reversible computing or quantum computing. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight and a half hours”.

Interestingly, whole brain emulation and speed intelligence are regarded as weaker forms of Superintelligence. Self-improving machine intelligence, which has the ability to iron out bugs in its software, bootstrap to other technologies, and write new code to further optimize its intelligence, has the potential to shoot beyond the stratosphere of what we can even comprehend. To us, what this kind of AI could achieve would seem like magic; similar to how harnessing the power of nuclear energy would look to prehistoric man. It’s easy to imagine how a seemingly slow development process could exponentially multiply into a detonation of growth.

Nick Bostrom believes a ‘fast take-off’ could happen in as little as days, hours or minutes. There would be scant warning before an AI could form a decisive strategic advantage, seize control of the world and shut down any competing intelligence systems that might threaten its dominance. If this is a genuine risk, then it would seem important to make sure the intelligence has our best interests at heart. There are, of course, steps we can take to make a fast take-off less likely. Government monitoring, public policy, restriction of hardware availability to burgeoning AIs and other ‘boxing’ techniques to stifle the growth of any systems being built can all help level the playing field, so that competing projects each have the chance of launching at similar times. The benefit here is that if one system malfunctions, the other AIs could be used to shut it down and protect our interests. Still, in a competitive world like ours, where there is huge potential gain in being first to create AI, do we really think that developers are going to slow down unless they have a true understanding of the risks? A lone hacker theoretically has the computing capability to produce an AI system using normal, modern-day hardware, so the ‘lone genius’ scenario – a single person triggering an intelligence explosion that sets fire to the world – is not off the table. The reality is that companies like Google are pouring millions into AI research with little focus on making it friendly. While acknowledging the risks, they are not addressing them.
At this point only the Machine Intelligence Research Institute in California and the Future of Humanity Institute at Oxford are earnestly working towards theoretical answers. They admit they aren’t yet close to a technical solution, but the horse needs to come before the superintelligent cart.
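
As an aside, the ‘speed intelligence’ figures quoted above are nothing more than arithmetic on the stated speed-up factors, and are easy to check. Here is a minimal back-of-the-envelope sketch: the 100,000× and 1,000,000× factors come from the emulation example and the Yudkowsky quote, and everything else is plain unit conversion.

```python
# Rough arithmetic behind the "speed intelligence" figures quoted above.
# Nothing here is a prediction; it simply converts a processing speed-up
# into the wall-clock time needed for a given amount of subjective thinking.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # ~31.6 million seconds

def wall_clock_seconds(subjective_years: float, speedup: float) -> float:
    """Physical seconds that pass while a mind running `speedup` times
    faster than a human does `subjective_years` worth of thinking."""
    return subjective_years * SECONDS_PER_YEAR / speedup

# The 100,000x emulation: a millennium of thought in under four days.
days = wall_clock_seconds(1000, 100_000) / (24 * 60 * 60)
print(f"1,000 subjective years at 100,000x   ~ {days:.1f} days")    # ~3.7 days

# Yudkowsky's million-fold speed-up: a subjective year every ~31 seconds,
# and a millennium in roughly eight and a half hours.
print(f"1 subjective year at 1,000,000x      ~ {wall_clock_seconds(1, 1_000_000):.1f} s")
hours = wall_clock_seconds(1000, 1_000_000) / 3600
print(f"1,000 subjective years at 1,000,000x ~ {hours:.1f} hours")  # ~8.8 hours
```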

Another risk in creating AI is that many assume it will be human-like. This anthropomorphising of a machine means we not only falsely think it will be similar in capability to us (ergo a Terminator-like struggle), but that it will automatically want to achieve goals we think are good, in ways we want those goals to be achieved. But just because we think chocolate mousse is great doesn’t mean an AI will start churning out perfect chocolate mousses, or cure AIDS, or terminate Justin Bieber. It stands to reason that any machine we design would not actually want anything unless we program it with goals in the first place. Nor would it have any moral values we do not program into it. An extension of that reasoning would lead us to believe that to solve this problem we simply instil values and goals that we know are good. Glossing over the serious issue of whether we have a perfect morality to benchmark (we don’t), or whether we know what good goals consist of (we don’t), there is always the risk of a machine perverting our intent and fulfilling even basic goals in horrible ways.

Yudkowsky, developing an idea of Bostrom’s, illustrates this particularly well when he talks about an AI tasked to maximise the building of paperclips. The AI would first work to improve its own intelligence, because being smarter means it will meet its primary goal faster and better. This would result in an intelligence explosion giving the AI the capability to convert most of the matter in the solar system into paperclips. Unfortunately, we are part of the solar system and are a resource the AI might want to use: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Yudkowsky continues: “This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfil many of our important terminal values, such as life, love, and variety. The AI won’t revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.”

Another goal we genuinely want, but which could be perverted, is happiness. Unfortunately, an AI instructed to maximise happiness could satisfy that goal by planting electrodes in our brains to stimulate the release of endorphins, or simply by replicating “smiling faces” – paralysing our facial muscles into permanent joker-like grins. Both bring about physical manifestations of ‘happiness’ while missing the point entirely. These are not just extreme examples meant to scare you. They are also there to show how seemingly harmless intent can result in great wrong. The good news is that if we do manage to get the goals and values right, it is unlikely a machine will overthrow those values for selfish interests, as some might suppose. We can program a machine to place more value on human life than its own survival.
It stands to reason that if this principal value is part of the machine’s nature, it won’t want to change it – much as Yudkowsky says he wouldn’t take a pill he knew would turn him into a murderer; he isn’t wired to want that in the first place.
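
To make the ‘utility-function-maximizer’ idea above a little more concrete, here is a deliberately toy sketch. Everything in it – the candidate plans, the numbers, the world being a three-entry dictionary – is invented purely for illustration; the point is only the structure: an optimizer ranks outcomes by the utility function it is given, and anything the function doesn’t mention counts for exactly nothing.

```python
# A deliberately toy "utility-function-maximizer" (all plans and numbers are
# invented for illustration). The optimizer ranks candidate plans purely by
# the utility function it was given; whatever the function omits has no weight.

# Hypothetical plans and the world-states they would produce.
outcomes = {
    "run a normal paperclip factory":     {"paperclips": 1e6,  "humans_alive": 7e9},
    "convert all industry to paperclips": {"paperclips": 1e12, "humans_alive": 5e9},
    "convert all matter to paperclips":   {"paperclips": 1e30, "humans_alive": 0},
}

def paperclip_utility(state: dict) -> float:
    # Only paperclips were specified as the goal. Human life, love and
    # variety were never written into the function, so they carry zero weight.
    return state["paperclips"]

best_plan = max(outcomes, key=lambda plan: paperclip_utility(outcomes[plan]))
print(best_plan)  # -> "convert all matter to paperclips"
```

Swap in a utility function that also scores human survival and the same optimizer picks a very different plan – which is exactly why getting the goals and values right matters so much.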

Fortunately, there are minds like Bostrom’s and Yudkowsky’s working in earnest to figure out how we might solve the goal/value issue. Their number one aim is to formulate pathways to Friendly AI. The question remains: will they get there before less-cautious scientists deliver a technical solution that produces an unfriendly Singularity? Or will an intelligence explosion turn us all into paperclips? A recent survey asked world experts when they think we might achieve human-level machine intelligence. The median estimate gives a 10% chance of seeing it happen by 2022, a 50% chance by 2040 and a 90% chance by 2075. Ray Kurzweil – AI expert, futurist and director of engineering at Google – is confidently predicting a 2029 result, but thinks it may be as little as five years away. It is interesting to note that most futurist predictions sit at around the twenty-year mark, because that distance in time is far enough away to be credible, but close enough to be relevant. It may be that this internal bias is skewing estimates to be closer than reality. Still, earlier this year a computer program called Eugene came close to passing the Turing Test, fooling ten out of thirty panellists in online conversation into believing it was a real human. The caveat here is that Eugene masquerades as a 13-year-old Ukrainian boy whose second language is English – hardly the type of powerful general intelligence that might change the world. Yet, from little things, big things grow.

The race is well and truly on. As the finish line draws nearer, we can only hope that an Artificial Intelligence is built with the capability to help us avert other global catastrophic risks, like climate change, pandemic disease, super-volcano eruptions and asteroid impacts. It would be the cruellest of ironies if a machine we built to save the world, and make our lives better, were in the end the cause of our destruction.

Author’s note:

The above is a highly simplified (and probably misrepresented) account of a complex and important issue. If you’re interested in further reading, I recommend Superintelligence by Nick Bostrom, Our Final Invention by James Barrat and anything by Eliezer Yudkowsky, who is particularly clear – and entertaining – in his arguments.
