The world has been changing at a nearly exponential pace for some time, and yet most politicians, governments and even scientists behave as if the pace of change were linear. Such a pace of change is unnatural for humans, which is why it may be difficult to notice that what once took a decade now takes just a year in many areas, such as medicine, communications, or culture. The exponential pace of change may on its own become an existential threat if it is combined with other risks such as a global pandemic, nuclear war, or global warming. But it is Artificial Intelligence (AI) that is our biggest and most imminent existential threat, and the area where change is fastest.
The late physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have all expressed concerns about the possibility that Artificial Intelligence (AI) could evolve to the point that humans could no longer control it, with Hawking theorizing that this could “spell the end of the human race”. Other prominent AI scientists, such as professors Allan Dafoe and Stuart Russell, also emphasize that AI presents an existential risk for humanity. At a bipartisan meeting of the US National Governors Association in Rhode Island in July 2017, Elon Musk said: “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that by the time we are reactive in AI regulation, it’s too late”.
If we consider that 99% of all species that ever lived have disappeared, why should we be an exception? To avoid extinction, we must mitigate existential risks such as climate change and pandemics, and most importantly the threat of a hostile Artificial General Intelligence (AGI) ultimately becoming a Superintelligence. It is this threat that could annihilate the human species, possibly within the next few decades.
Why then, despite the AI risk being more imminent and more profound than climate change, is there hardly any international action addressing this urgent problem? Instead of serious discussions on the consequences of losing control over self-learning AI, conferences on AI deal with relatively trivial aspects of AI control, such as the erosion of our privacy, and focus instead on AI benefits. We should of course embrace the immense benefits that AI is already delivering. However, by focusing on AI benefits and mentioning only superficial AI threats, the real dangers to which we may be exposed remain hidden.
The problem of effective AI control is similar to the difficulty of controlling global warming. For nearly 30 years it was argued that the global warming tipping point was far away, so nothing was done. Only when the Paris conference in 2015 and COP26 in Glasgow set a maximum temperature rise of 1.5°C, as recommended by the Intergovernmental Panel on Climate Change (IPCC), was a concrete global action plan finally agreed, with 2030 as the tipping point beyond which we may lose the battle for controlling climate change. Such thresholds are of course not an exact science. However, the very fact of specifying them has triggered global action. Therefore, the best way forward to effective control of AI seems to be to follow the example of the IPCC and COP26 in Glasgow.
Losing control over AI will impact every domain of human activity. Hence, we urgently need organisations such as the UN, the European Union or the OECD to co-operate on convening an international conference on AI control. The prime goal of the conference should be to declare 2030 a tipping point for AI and to agree on the warning signposts of humans losing control over AI, e.g.:
- The creation of the first cognitive AI Agent
- Humanoid robots surpassing the performance of humans in many areas
- The number of artificial neuromorphic neurons exceeding the number of neurons in a human brain
- Incidents in which a network of globally connected AI robots goes out of control, leading to global chaos.
Such thresholds measuring AI advancement should not be breached before 2030, to give us more time to prepare the transition to the period when AI will start controlling us. The date 2030 is only an example, although, as with climate change, it seems to be the most probable. There is a saying, ‘What is not measured is not done’, and merely declaring such thresholds may be enough to trigger global action.
The recently launched FutureSurge organization seems well suited to take up a role similar to the IPCC’s and to campaign for an international conference to agree on such a tipping-point date and criteria. The conference could be organized by one of the UN agencies or by the European Union, which has the most advanced legislation on AI.
Further justification for the urgency and the necessity of setting 2030 as the most likely date for humans losing control over AI is provided in the author’s article.
Tony Czarnecki, Managing Partner of Sustensis