No global AI control means human species extinction
(This is an extract from the book ‘Prevail or Fail – a Civilisational Shift to the World of Transhumans’ by Tony Czarnecki)
Some people think that having no control over AI will not affect our future negatively, or will be only a nuisance. Some might say that AI will be friendly to humans by its very ‘nature’. Unfortunately, there is no guarantee that AI will be our friend rather than foe. AI, like any technology, has the potential to cause both benefit and harm. Its impact will depend on how it is developed and deployed. If we do nothing, or if AI control is ineffective, humans will progressively fall under the greater control of a maturing Superintelligence. If it becomes hostile to humans, it may trigger an early extinction of the human species. Some recent events make it easier to understand what the consequences of having no AI control may be.
The rush to market, prevailing in every industry, is also present in the AI sector. The best example is the rising competition in Internet search. As mentioned earlier, in February 2023 Microsoft launched Bing Chat, combining its Bing search engine with an AI chatbot based on the technology behind ChatGPT, to increase its share of the search market. That was quickly followed by Google, which combined its search engine with its chatbot model LaMDA, creating Bard.
Giving AI assistants access to the Internet without thorough testing, and without implementing rigorous control methods, may already pose some danger. What was even more worrying, as disclosed in the article ‘Scientists made a mind-bending discovery about how AI actually works’, is that the developers of these advanced chatbots were not quite sure how they had managed to achieve such spectacular results.
That is how the loss of control over AI may begin. It shows that AI may already have negative consequences, currently trivial in comparison with the impact it may have in the next few years. If AI is not controlled, it may become harmful, whether through malicious intent, erroneous goal specification, or even bugs in computer hardware or software. We should be concerned that AI may be designed or trained to optimize certain objectives without considering the potential negative consequences for humans. If the market-first attitude of the major AI companies continues, then it is more likely that AI will be evil rather than benevolent.
The biggest risk is that we may become an extinct species, if AGI and its final, most advanced form, Superintelligence, becomes malevolent. That would not be totally surprising if we consider that 99% of all species that have ever lived are gone, including several hominins before us, such as Homo floresiensis (about 50,000 years ago), the Neanderthals (about 40,000 years ago) and the Denisovans (perhaps as recently as 15,000 years ago), i.e., in relatively recent times. If we want to be an exception, we must evolve, as some other species have done, like crocodiles.
I have to reiterate that if nothing is done to control AI, or if it is done ineffectively or implemented too late, then AI may become our last invention, as James Barrat warned in his book ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’.
One of the measures proposed for improving AI control, suggested by Tsedal Neeley, Professor of Business Administration at Harvard Business School, is to slow down its development as a broad, global, long-term approach. She said: “You have to slow down to ensure that the data that these systems are trained on aren’t inaccurate or biased.” I don’t think it would work, for the following reasons:
- First of all, we would need to have a powerful global organization, like the World Government, which could impose severe sanctions on companies and individuals trying to keep developing AI at the current speed in any country, including China and Russia,
- There would have to be internationally agreed control mechanisms verifying that the development of AI ceased for some time,
- Slowing it down may not be helpful because it is quite probable that even the current most advanced AI may have already discovered mechanisms for self-improvement without human intervention. After all, it can already code and write its own algorithms,
- This would be the first time in human history that we globally abandoned an advanced technology for a less advanced one. I leave aside the practical side of implementing such an idea, which would not be easy at all,
- Since it could not be implemented globally, because some countries would not agree to such strict control on their territory, even a reasonably rich billionaire could still continue developing AI clandestinely,
- Finally, even if it were possible to slow down AI development, the consequence would be a significant drop in the world’s GDP (e.g., most electric cars now use AI), causing a negative chain reaction: turbulence in the markets, unemployment, etc.
In summary, this option may be even worse than no AI control at all, because it might create the illusion that AI development had ceased or slowed down significantly, and that there is therefore no real danger.
Unless we accept that we really live at a time when the pace of change is exponential in most domains of human activity, that alone may lead to a catastrophic error in judging when AI may take total control over our future. We have just a few years to implement the mechanisms of AI control, because once AI becomes AGI, by about 2030, it may be difficult or even impossible to control it effectively. Therefore, we can no longer rely solely on the political, diplomatic, technological, or social processes we have used in the past.
We need a truly revolutionary approach, breaking almost all existing barriers in the political and social domains, by preparing for a civilisational shift with maximum human co-operation and consensus. Only then can we increase the chances of the survival of the human species and of unimaginable abundance. That is what option 2 is about.
Tony Czarnecki is the Managing Partner of Sustensis, a Think Tank on Civilisational Transition to Coexistence with Superintelligence. His latest book: ‘Prevail or Fail – a Civilisational Shift to the World of Transhumans’ is available on Amazon: https://www.amazon.co.uk/dp/B0C51RLW9B.