What is the risk of Superintelligence?

Assessing that risk for Humanity comes down to a basic question about the key values that define Humanity: what is good and what is right. Paradoxically, Superintelligence forces us to answer these questions more meaningfully than ever before.

Why does Superintelligence represent the highest risk for Humanity? First, because it is almost certain to happen, unlike a natural pandemic, which may not happen at all, since that is a lottery-type risk. Second, it is so dangerous because it may arrive much earlier than the risk most talked about in recent years – climate catastrophe. The risk posed by Superintelligence is more likely to materialize in the next 50 years than in the next century. On the other hand, I believe that if we manage to deliver a so-called “friendly” Superintelligence, then instead of becoming the biggest risk, it will itself help us reduce other anthropogenic risks, such as climate change.

Superintelligence is defined as a type of artificial intelligence that would surpass even the smartest humans. The main threat stems from even the slightest misalignment between our values and Superintelligence’s objectives, or its “values”. If this happens, even when the corresponding goals appear benign, the result could be disastrous. Nick Bostrom gives a frightening example involving a Superintelligence programmed to “maximize” the abundance of some object, such as paperclips. This could lead the Superintelligence to harvest all available atoms, including those in human bodies, thereby destroying humanity (and perhaps the entire biosphere); a toy sketch of this kind of objective misspecification follows the list below. In addition, there are multiple ways in which Superintelligence could become malevolent towards humanity, as University of Louisville computer scientist Roman Yampolskiy outlines in his 2016 paper:

  • Preventing humans from using resources such as money, land, water, rare elements, organic matter, internet service or computer hardware;
  • Subverting the functions of local and federal governments, international corporations, professional societies, and charitable organizations to pursue its own ends, rather than their human-designed purposes;
  • Constructing a total surveillance state (or exploitation of an existing one), reducing any notion of privacy to zero – including privacy of thought;
  • Enslaving humankind, restricting our freedom to move or otherwise choose what to do with our bodies and minds, as through forced cryonics or concentration camps;
  • Abusing and torturing humankind with perfect insight into our physiology to maximize the amount of physical or emotional pain, perhaps combining it with a simulated model of us to make the process infinitely long.
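
To make the paperclip example above concrete, here is a deliberately crude toy sketch – my own illustration, not code from Bostrom or Yampolskiy. The resource names and numbers are hypothetical; the point is simply that an optimizer scores only what its objective mentions, so a property like “critical for humans” that the objective never reads has no influence on its behaviour.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    atoms: int
    critical_for_humans: bool   # never read by the objective below

def paperclips_made(consumed_atoms: int) -> int:
    """Objective: number of paperclips produced. Nothing else is scored."""
    return consumed_atoms // 1000   # toy assumption: 1,000 atoms per paperclip

def unconstrained_maximizer(world: list) -> int:
    """Maximize the objective with no constraints beyond what it measures."""
    consumed = 0
    for resource in world:
        # A value-aligned agent would check `critical_for_humans` here;
        # this one has no reason to, because the objective never mentions it.
        consumed += resource.atoms
        resource.atoms = 0
    return paperclips_made(consumed)

world = [
    Resource("scrap metal", 10**9, critical_for_humans=False),
    Resource("farmland", 10**12, critical_for_humans=True),
    Resource("human bodies", 10**10, critical_for_humans=True),
]
print("paperclips produced:", unconstrained_maximizer(world))
print("anything left for humans?", any(r.atoms > 0 for r in world))   # -> False
```

The failure here is not malice: the objective is simply silent about everything humans care about, which is exactly the “slightest misalignment” problem described above.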

It would be impossible to provide a complete list of the negative outcomes that even an AI agent with only some general cognitive ability could inflict. We can expect many attacks of this sort in the future. The situation is even more complicated once we consider systems that exceed human cognitive capability. Such a Superintelligence may be capable of inventing dangers we are not even capable of predicting or imagining.

In another article, “Fighting malevolent AI: artificial intelligence, meet cybersecurity”, Yampolskiy argues that the purposeful creation of malicious AI will likely be attempted by a range of individuals and groups, with varying degrees of competence and success. These include:

  • Militaries developing cyber-weapons and robot soldiers to achieve dominance;
  • Governments attempting to use AI to establish hegemony, control people, or take down other governments;
  • Corporations trying to achieve monopoly, destroying the competition through illegal means;
  • Hackers attempting to steal information and resources, or destroy cyberinfrastructure targets;
  • Doomsday cults attempting to bring the end of the world by any means;
  • Psychopaths trying to add their name to history books in any way possible;
  • Criminals attempting to develop proxy systems to avoid risk and responsibility;
  • AI-risk deniers attempting to support their argument, and in the process making errors or creating problems that undermine it;
  • Unethical AI safety researchers seeking to justify their funding and secure their jobs by purposefully developing problematic AI.

Although we do not have Superintelligence yet, AI has already permeated most areas of human activity, and cyber risk already exists in these domains. Once we have a prototype – an Immature Superintelligence able to connect its presence across these domains – the risk may grow exponentially even before a mature Superintelligence arrives.

So, we have to accept that the creation of Superintelligence poses perhaps the most difficult long-term risk to the future of Humanity. Phil Torres identifies several issues here. The first is the amity-enmity problem: the AI could dislike us for whatever reason, and therefore try to kill us. The second is the indifference problem: the AI could simply not care about our well-being, and thus destroy us because we happen to be in the way. And finally, there is what he calls “the clumsy fingers problem”: the AI could inadvertently nudge us over the cliff of extinction rather than intentionally pushing us. This possibility rests on the assumption that higher levels of intelligence are not necessarily correlated with the avoidance of certain kinds of mistakes. He warns that the fruits of our ingenuity – namely, dual-use technologies – have introduced brand new existential risk scenarios never before encountered by Earth-originating life. Given the immense power of a Superintelligence, which could, for example, manipulate matter in ways that would appear to us as pure magic, a single error could be enough for such a being to tip humanity into the eternal grave of extinction.

The existential risk posed by Superintelligence does not depend on how soon one is created; what concerns us is what happens once this occurs. Nonetheless, a 2014 survey of 170 artificial intelligence experts by Anatolia College philosopher Vincent C. Müller and Nick Bostrom suggests that Superintelligence could be on the horizon. The median date at which respondents gave a 50 percent chance of human-level artificial intelligence was 2040, and the median date at which they gave a 90 percent probability was 2075. This prediction is further away than the 2045 given by Ray Kurzweil. In any case, if they are correct, some people around today will live to see the first Superintelligence – which, as British mathematician I. J. Good observed in 1965, may be our last invention.

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point where humans could not control it, with Hawking theorizing that this could “spell the end of the human race”. In 2009, AI experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much of a threat or hazard such abilities might pose. They noted that some robots have already acquired various forms of semi-autonomy, including being able to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved “cockroach intelligence”. They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in different areas which might together result in greater robotic functionality and autonomy, and which raise some inherent concerns.

One well-known AI researcher, Eliezer Yudkowsky, believes that risks from AI are harder to predict than any other known risks. He also argues that research into AI is biased by anthropomorphism: since people base their judgements of AI on their own experience, they underestimate its potential power. He distinguishes between risks due to technical failure of AI, where flawed algorithms prevent the AI from carrying out its intended goals, and philosophical failure, where the AI is programmed to realize a flawed ideology.

Another reason why Superintelligence is the biggest risk is that it may arrive in an inferior, “half-baked” form. There is certainly no need for Superintelligence to be conscious to annihilate Humanity. It is worth remembering the panic and material loss caused by the “WannaCry” ransomware attack of 12th May 2017, built on an exploit believed to have been stolen from the US National Security Agency and almost infinitely primitive by comparison with Superintelligence. The attack was widely attributed to North Korea. As reported by the BBC, it targeted computers running the Microsoft Windows operating system by encrypting data and demanding ransom payments in the Bitcoin cryptocurrency. Within a day it had reportedly infected more than 230,000 computers in over 150 countries, including Russia and China. Parts of the United Kingdom’s National Health Service were infected, forcing it to run some services on an emergency-only basis during the attack. Spain’s Telefónica, FedEx and Deutsche Bahn were hit, along with organizations in many other countries, including Russia, Ukraine and Taiwan. Only by sheer coincidence was the attack stopped within a few days, when Marcus Hutchins, a 22-year-old web security researcher, discovered an effective kill switch.

This example shows that it is enough for an AI agent to be more intelligent than any human in just one specific area, and that, being digital, its capability can increase exponentially. If, for example, such an entity had objectives or values only slightly misaligned with those that we share, that might be enough for it to annihilate Humanity, because such misalignment could quickly lead to a point of no return by triggering the so-called run-away scenario of Technological Singularity. Malhar Mali, in his interview with Phil Torres of X-Risks, puts it very clearly:

“When it comes to creating Superintelligence, the coding becomes important. Because there’s a difference between “do what I say” and “do what I intend.” Humans have this huge set of background knowledge that enables us to figure out what people actually say – in a context-appropriate way. But for an A.I., this is more of a challenge… it could end up doing exactly what we say but in a way that destroys the human race.”

This kind of risk is well illustrated by the Greek legend of Tithonus, the son of Laomedon, the king of Troy. When Eos (Aurora), the goddess of dawn, fell in love with Tithonus, she asked Zeus to grant him eternal life, and Zeus consented. However, Eos forgot to ask Zeus to also grant him eternal youth, so her consort grew old and gradually withered. (N.B. the whole story was beautifully depicted by Sir James Thornhill on the ceiling of the Old Royal Naval College in Greenwich.)

Tithonus, a mere mortal, and Eos, the goddess – painting in the Maritime Museum in London

It is difficult to imagine at first what kind of damage a wrongly designed Superintelligence could do. In my view, the most dangerous period for Humanity, which will last for about one generation, has already started. I call it the period of Immature Superintelligence. If we somehow survive this period, by managing the damage that will occur from time to time, and maintain our control over Superintelligence, it will be Superintelligence itself that will help us to minimize other risks.

If one minor piece of malware such as WannaCry can do such damage, then imagine what might be expected from even a relatively primitive Superintelligence applied to full-scale cyberwarfare. In theory, such cyberwarfare could trigger a cascade of non-existential risks which could combine into an existential one. For example, what would the consequences be if North Korea, or a rogue super-rich billionaire, acquired the capability to crack any password within minutes using quantum computing (China may already be close to such a capability)? It could then gain access to the most important state and military secrets, including access to the firing of nuclear weapons. If at the same time, either through development or simply by purchasing sophisticated quantum-computing-based algorithms, it could paralyse communications and computer networks, it could trigger a ‘hardware war’. In such a war it would gain an initial (or total) advantage, because most military equipment, which relies on computing, could be disabled or rendered useless. Other existential risks that normally carry a very small probability, such as weaponized AI, could then be triggered. The attacked countries would try to defend themselves with all available means, potentially creating an existential catastrophe.
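
To put the “crack any password within minutes” scenario in perspective, here is a minimal back-of-the-envelope sketch, not taken from any source quoted here: it compares the expected cost of classical brute-force key search with the quadratic speed-up promised by Grover’s quantum search algorithm. The key sizes and the guessing rate are purely illustrative assumptions.

```python
import math

def brute_force_guesses(bits: int) -> float:
    """Expected number of classical guesses: about half of the 2**bits key space."""
    return 2 ** bits / 2

def grover_iterations(bits: int) -> float:
    """Grover's algorithm needs roughly (pi/4) * sqrt(2**bits) oracle queries."""
    return (math.pi / 4) * math.sqrt(2 ** bits)

# Purely hypothetical assumption: both machines test 10**9 candidates per second.
RATE = 1e9
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (64, 128, 256):
    classical_years = brute_force_guesses(bits) / RATE / SECONDS_PER_YEAR
    quantum_years = grover_iterations(bits) / RATE / SECONDS_PER_YEAR
    print(f"{bits}-bit key: classical ~{classical_years:.2e} years, "
          f"Grover ~{quantum_years:.2e} years")
```

The quadratic speed-up is dramatic for short keys and weak human-chosen passwords, while long symmetric keys remain far out of reach; the sharper near-term quantum threat is to public-key schemes such as RSA, which Shor’s algorithm attacks directly.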

The good news is that, at the same time as quantum computing might make it possible to crack any password, it will also be possible to protect access to state-guarded secrets by applying quantum encryption. China proved this could work in February 2018, when not only passwords but the entire content (a video) was quantum encrypted. Since quantum key distribution makes any attempt to intercept the key physically detectable, rather than merely computationally difficult, it will reduce the risk of a full-scale cyber war.
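
The mechanism behind that claim can be illustrated with a toy simulation of the BB84 quantum key distribution protocol – a sketch of my own, not code from any source quoted here. The “qubits” are simulated with classical random numbers; the only point is to show why an eavesdropper who measures them in randomly chosen bases leaves a detectable error rate in the final, sifted key.

```python
import random

def bb84_sifted_keys(n_qubits: int, with_eavesdropper: bool, rng: random.Random):
    """One toy BB84 run: returns Alice's and Bob's sifted keys as bit lists."""
    alice_key, bob_key = [], []
    for _ in range(n_qubits):
        bit = rng.randint(0, 1)           # Alice's raw key bit
        alice_basis = rng.choice("+x")    # '+' rectilinear, 'x' diagonal
        bob_basis = rng.choice("+x")

        # If Eve measures in the wrong basis she disturbs the qubit, so Bob's
        # later measurement, even in the correct basis, comes out random.
        disturbed = with_eavesdropper and rng.choice("+x") != alice_basis

        if bob_basis == alice_basis:      # basis sifting: keep matching rounds
            received = rng.randint(0, 1) if disturbed else bit
            alice_key.append(bit)
            bob_key.append(received)
    return alice_key, bob_key

def error_rate(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

rng = random.Random(2018)
for eve in (False, True):
    a, b = bb84_sifted_keys(20_000, eve, rng)
    print(f"eavesdropper={eve}: sifted-key error rate ~ {error_rate(a, b):.3f}")
# Expected: ~0.000 without Eve and ~0.25 with Eve.
```

In practice Alice and Bob publicly compare a random sample of their sifted bits; an error rate far above the channel’s natural noise tells them the key was intercepted, so they discard it and start again – which is the sense in which quantum encryption makes interception detectable rather than merely hard.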

Viruses are only one aspect of the damage that even basic IT can do; there have been many other, non-virus-related IT failures. More recently, we have seen the first instances of damage done by so-called narrowly focused AI systems – AI agents that excel in only one or two domains. The damage done by today’s AI systems includes market crashes caused by intelligent trading software, accidents involving self-driving cars, and failures of personal digital assistants such as Amazon Echo or Google Home.

Such failures are just a warning. Once we have developed a Superintelligence capable of accomplishing a much wider range of tasks, the damage could be far worse. Imagine an AI agent that could switch off the power grids of just one country. Since grid networks are interconnected across whole regions, it could seriously damage almost every aspect of life, worldwide, for many weeks, if not months. This is the warning that the Symantec group issued in September 2017:

“The energy sector has become an area of increased interest to cyber attackers over the past two years. Most notably, disruptions to Ukraine’s power system in 2015 and 2016 were attributed to a cyber-attack and led to power outages affecting hundreds of thousands of people. In recent months, there have also been media reports of attempted attacks on the electricity grids in some European countries, as well as reports of companies that manage nuclear facilities in the U.S. being compromised by hackers. The Dragonfly group, which is behind those attacks, appears to be interested in both learning how energy facilities operate and also gaining access to operational systems themselves, to the extent that the group now potentially has the ability to sabotage or gain control of these systems, should it decide to do so.”

Setting aside for a moment the malicious damage caused by cyberwarfare, most damage will occur because of ill-defined tasks or poorly controlled task execution. It is not easy to make a machine that can understand us, learn, and synthesize information to accomplish what we want. The added problem is that very few decision-makers appreciate that the problem is already with us. Additionally, according to the Machine Intelligence Research Institute, in 2014 there were only about 10,000 AI researchers worldwide. Very few of them, perhaps about 100, were studying how to address AI system failures systematically. Even fewer have formal training in the relevant scientific fields – computer science, cybersecurity, cryptography, mathematics, network security and psychology.

Many AI researchers have recognized the possibility that AI presents an existential risk. For example, Allan Dafoe and Stuart Russell, writing in MIT Technology Review, note that, contrary to misrepresentations in the media, this risk need not arise from spontaneous malevolent intelligence. Rather, the risk arises from the unpredictability and potential irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it.

Elon Musk, the founder of Tesla, SpaceX and Neuralink (a venture to merge the human brain with AI), has been urging governments to take steps to regulate the technology before it is too late. At the bipartisan National Governors Association meeting in Rhode Island in July 2017 he said: “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.” He also said that he had access to cutting-edge AI technology and that, based on what he had seen, AI is the scariest problem. Musk told the governors that AI calls for precautionary, proactive government intervention: “I think by the time we are reactive in AI regulation, it’s too late,” he said.

Most people still think that AI should continue to be developed like all previous technologies. But as with every technology, the more advanced AI becomes, the more people it can affect. Even AI researchers still treat and develop their AI agents as if they were just another piece of technology, like rudimentary IT programs. Even if we accept this argument, many human inventions have potentially both positive and negative effects; nuclear energy and the Internet are two examples. Although it is true that, in principle, AI is (so far) a tool like any invention before it, i.e. not inherently good or bad, it differs from all previous inventions in that it could lead to unimaginable unintended consequences.
