9th May 2018

Last Decade For Humans to Control their Future?

Humanity may have just about one decade to maintain control over its own destiny based on the following assumptions:

  1. The world has started to change in most areas at nearly an exponential pace. What once took a decade can now be achieved in a year or two
  2. Apart from man-made existential dangers to Humanity, such as biotechnology or nuclear war, which can strike at any time, the most imminent risk facing Humanity is Artificial Intelligence, in particular an immature form of superintelligent AI
  3. This type of AI will exceed humans in capability and in the speed of making intelligent decisions by millions of times, but only in some areas, while remaining utterly incompetent in most others
  4. However, AI’s self-learning and self-improvement capabilities, which are already available, may progressively lead to an unwanted diffusion of its superintelligent skills from some specific domains into others, of which we may not even be aware
  5. By 2030 we may already have such an immature superintelligent AI, at which point humans may no longer be able to fully control its goals and may already be outsmarted when trying to implement ad hoc new control mechanisms
  6. Therefore, any political or social changes have to be viewed from that perspective: we have just about a decade to remain in control of our own future. This risk trumps other existential risks, such as climate change or nuclear war, because of its imminent arrival and, in the extreme case, the potential total annihilation of the human species
  7. Any successful control of AI development, which includes uploading the agreed Universal Values of Humanity, can only be achieved and managed by a supranational organisation with significant powers
  8. There is no time left to create such an organisation from scratch; it therefore needs to be an existing organisation converted into a supranational power (realistically, within the Western World only)
  9. The most suitable organisation seems to be a federated European Union, with NATO coming a close second
  10. We must retain ultimate control over AI development by 2030 and then strictly control its further development until it reaches the stage of Superintelligence around 2045. Only then can we hope to reach a period of incredible wealth and unimaginable capabilities: a new dawn for the coexistence of humans and Superintelligence.

There is some good news on the horizon. In the USA there is already an AI Caucus in Congress, the purpose of which is to “inform policymakers of the technological, economic and social impacts of advances in AI and to ensure that rapid innovation in AI and related fields benefits Americans as fully as possible.” What it misses in its scope is the existential threat that AI may present. Top AI companies will fight any limitation on their AI development if it endangers their profits, as was already seen in Congress during the IBM hearing in June 2017.

In the UK, which still holds the most advanced position in ‘soft’ AI development, there is an All Party Parliamentary Group (APPG) headed by Lord Timothy Clement-Jones. One of the current items on its agenda is a proposal for an AI Global Governance Summit. If its current scope is extended to include agreement on Universal Human Values, which could be one of the controlling measures for AI development, then it might represent an important step in reducing the risk of AI. That would ensure that the continuous development of AI, which our civilisation definitely needs, is controlled by a globally agreed legal system in that area.

Before you move on, I would encourage you to read some evidence on the probability that humans may lose control over AI by 2030. Please bear in mind that the authors of these articles talk mostly of a ‘full’ Superintelligence by 2030, whereas I predict we will only have an Immature Superintelligence by that date, which can still become, especially if combined with other Global Disorder Risks, the earliest existential risk for Humanity.

  1. How the Enlightenment Ends – by Henry Kissinger – published 15/5/2018
  2. Artificial intelligence is quite likely to ‘replace humans altogether’, Stephen Hawking warns – 2/11/2017
  3. Robots will be smarter than us all by 2029 – by Ray Kurzweil, Director of Engineering at Google – published 5/10/2017
  4. Elon Musk says AI will beat humans at EVERYTHING by 2030 – by Karla Hunt – published 6/6/2017
  5. Machines Can Quickly Outsmart Humans – by Rhenn Anthony Taguiam – published 28/2/2017

Your comments are welcome.

Next: Intelligence and Superintelligence

1 thought on “Last Decade For Humans to Control their Future?”

  • Unfortunately, it looks as if the UK’s All Party Parliamentary Group focuses predominantly on the benefits that AI can deliver. It also limits its scope to the United Kingdom, rather than also considering the implications of AI development for the future of Humanity, including the existential risks it can trigger.
