AI as the most dangerous existential threat

Tony Czarnecki

London, 3rd July 2023

‘Don’t look up…but at AGI instead of the comet.’ That tweet by Elon Musk, in his very own style, opens this section. For those who have not seen the 2021 movie ‘Don’t Look Up’, here is a brief explanation. In the movie, scientists are convinced that a comet will hit the Earth within a few months and call for immediate action to correct its trajectory. Yet populist politicians, just hours before the comet strikes the planet, still organize huge demonstrations urging people NOT to look up and see the approaching comet. Elon Musk replaced the ‘comet’ with ‘AGI’, addressing the message to top AI developers such as OpenAI, Microsoft and Google, which behave as if there were no danger from the soon-to-arrive AGI.

There is no way in which we can stop this process; instead, we must find a pathway towards effective control of AI development until the time when it is aligned with the best human values, becoming our friend rather than our foe. But Elon Musk’s tweet has also brought to the fore the scale of that threat. Metaphorically, it compares our situation to an all-out global war, with one difference: the enemy is not visible yet. To win that war, we need to be prepared to change current laws and entitlements and accept some restrictions. We must think the unthinkable. AGI is not here yet, but it is clearly visible, just as the comet was clearly visible to all who wanted to see it. If AI is left uncontrolled, or such control is ineffective, it will most likely not leave us alone. It will progressively become our enemy, initially competing with us for limited resources, and later fighting us directly. In the worst-case scenario, it will lead to the extinction of all humans by the end of this century. Any percentages quantifying the probability of that happening this century are beside the point, since in such a situation life would become unbearable within just a few decades.

However, some AI scientists and researchers do try to estimate the chance of AI presenting an existential threat. In August 2022, the organisation AI Impacts surveyed 738 AI scientists on the probability of AI becoming an existential threat; 48% of them put that threat at 10%. In an article published in May 2023, Alberto Romero quotes similar estimates made in March and April 2023 by the well-known science authors Yuval Noah Harari, Tristan Harris and Aza Raskin, and by the physicist Max Tegmark. But his concern is not so much whether the 10% estimate of human extinction is credible. Rather, he challenges, justly in my view, the whole approach of estimating such a risk, which is unscientific because, unlike, say, the risk of a catastrophic airplane failure, it cannot be calculated in any meaningful way.

It is far better to take the view of scientists like Geoffrey Hinton, who, when asked how soon he predicts AI will become smarter than us, said: “I now predict 5 to 20 years but without much confidence. We live in very uncertain times. It’s possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows which is why we should worry now.” Scientists are worried because they simply do not know the level of risk of AI becoming an existential threat.

Since even the top AI scientists can only say that AI may be an existential threat, we should take this extremely seriously and act as if it were a proven case. Perhaps a more pragmatic approach is to view that risk from the bottom up. AI researchers know what capabilities AGI must have and how soon it may acquire them. Taking all that into account, I agree with a growing number of AI scientists that AGI will emerge by 2030. When it does emerge, it will be smarter than humans. So, what happens then? Let me quote Geoffrey Hinton once more: “If it gets to be much smarter than us, it’ll be very good at manipulation, because it would’ve learned that from us and there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program so it’ll figure out ways of getting around the restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants.”

The only way we can deal with that risk is to mitigate it. This is why, when talking about AI as an existential threat, I quite often use the word ‘must’ rather than ‘should’, to draw your attention to the limited choices the world still has. Changes in politics, the economy and the social domain will be AI-driven. Whatever we do, it is too late to avoid the greatest, truly global, chaos in this decade. Current political and legal structures are unfit for purpose. We must significantly re-invent global politics and democracy within a few years, knowing from the outset that the result will be imperfect but better than doing nothing. It is a difficult, and for some people perhaps even horrific, scenario.

But there is another scenario, in which humans may very soon experience a life of unimaginable wealth and contentment. There are only two conditions. First, we must accept that humans will be governed on entirely new principles as a planetary civilisation, which means, among other things, forsaking national sovereignty and accepting some restrictions on our freedoms for our own safety and benefit. Secondly, we must accept an even bigger challenge: we may avoid extinction only if, over a century or two, we evolve into a new species. If you accept this line of thought, it will be easier to acknowledge the need for some radical and necessary changes, which will lead us to a world of abundance and exhilarating self-fulfilment rather than to extinction.