Option 2: Full global AI governance

London, 17/6/2023

(This is an extract from the book 'Prevail or Fail – a Civilisational Shift to the World of Transhumans' by Tony Czarnecki.)

Many AI researchers, such as Stuart Russell in his book 'Human Compatible' and Nick Bostrom in his seminal book 'Superintelligence', have argued that there is no fail-safe method of controlling AI, which is already immensely more intelligent in some areas than any human. Russell's overall conclusion is that teaching AI our values and setting it strict goals may not be the best way to control it. Why? Because what we say is not always what we mean, which illustrates the problem of interpreting our intentions. The best example comes from the Greek legend of Tithonus, son of Laomedon, the king of Troy. When Eos (Aurora), the goddess of dawn, fell in love with Tithonus, she asked Zeus to grant him eternal life, and Zeus consented. However, Eos forgot to ask Zeus to also grant him eternal youth, so her husband grew old and gradually withered.

Since we cannot guarantee that AI will always interpret and execute our intentions as we meant them, there is no fail-safe method of instilling human values in AI and expecting it to obey them, nor of specifying its goals in an unambiguous way. Therefore, Stuart Russell proposes that we should instead teach AI human preferences. Whenever it was in doubt about a major decision, it would then ask us to reconfirm our wishes.
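
To make this idea more concrete, here is a minimal, purely illustrative sketch (my own, not taken from Russell's book) of an assistant that acts autonomously only when it is confident about our preferences and the action is reversible, and otherwise asks us to reconfirm. The fields and the confirmation threshold are hypothetical placeholders.

```python
# Illustrative sketch of preference-uncertain assistance: the assistant
# defers to the human whenever its uncertainty about our preferences is
# high or the action cannot be undone. All numbers are made up.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    uncertainty: float   # how unsure the assistant is about what we want (0..1)
    reversible: bool     # can the action be undone if it was a mistake?

def decide(action: Action, confirm_threshold: float = 0.3) -> str:
    """Act only when confident and the action is reversible;
    otherwise ask the human to reconfirm the intention."""
    if action.uncertainty > confirm_threshold or not action.reversible:
        return f"ASK: you requested '{action.name}' - shall I proceed?"
    return f"DO: {action.name}"

print(decide(Action("book a taxi", uncertainty=0.05, reversible=True)))
print(decide(Action("delete all emails", uncertainty=0.50, reversible=False)))
```

The point of the sketch is the asymmetry: the cost of asking once too often is small, while the cost of irreversibly misinterpreting a wish, as Zeus did with Tithonus, can be unbounded.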

The arrival of ChatGPT has shown how unprepared our civilisation is to control AI. Neither AI researchers nor even those who created it expected the breadth and finesse of that AI assistant's responses. Only two months after the release of ChatGPT it became clear that it had taught itself to do things it was never expected to do, such as writing a sonnet about forbidden love in the time of Shakespeare, and in his style. It has taught itself new ways of interacting with people. That is a reason for grave concern.

In the next 2-3 years we shall see humanoid robots in various roles: as assistants to doctors, police officers and teachers, or as household maids, hotel staff, etc. Their human form will be fused with the growing intelligence of ever more powerful AI agents. We should also remember that the hundreds of millions of primitive assistants, such as Alexa or Siri, are already becoming fast self-learning agents. As their intelligence and overall presence grow, so will the risk of their intentional or erroneous actions, and of the intrusion into our private lives that has already started to shock us.

Therefore, we need to be prepared for serious incidents occurring quite soon, linked initially to malfunctioning self-learning robots and later to malicious actions by advanced AI systems. If such an incident, e.g. the malicious firing of nuclear missiles, coincides with other risks such as pandemics or local conventional wars, it may create an existential civilisational threat. The ensuing chaos – a Global Disorder – would also negatively affect any ongoing efforts to adjust the way we live and are governed, such as the reform of democracy or the building of a World Government.

On the positive side, such incidents may mobilise nations to reduce various existential risks. Malicious incidents, or significant material damage arising from cyber wars, may lead to street protests far exceeding those organised by Extinction Rebellion in the summer of 2019. Whatever one might think about the form of those protests, which inconvenienced a large number of people worldwide, they also brought an important message to the fore: we are all one human civilisation, and this is our only planet.

Therefore, while facing existential risks that may make the whole human race extinct, we should act as a planetary civilisation and not as a collection of countries fighting for their sovereignty. In principle, this is exactly the role envisaged for the United Nations. However, as we have seen over the decades, the lack of consensus on solving major world problems or political crises has made this organisation unsuitable for such a task; any AI control it attempted could not be truly global, because it is impossible to get the support of all countries. That said, we should not forget that since its inception the UN has played a significant role in minimising potential global catastrophes. Unfortunately, it would be impossible to rely on the UN today, for the reasons mentioned earlier. The war in Ukraine provides further examples, such as the UN being unable to enforce a demilitarised zone around the Zaporizhzhia power station or to react decisively to Russia's blatant threats of using nuclear weapons.

Even the European Union, which has acted more swiftly in certain areas such as GDPR, global warming and oil embargoes, has been slow in creating legislation to regulate AI use and development. Therefore, it is unrealistic to put much faith in politicians and the system of global politics. If we rely on governments to regulate AI, we will almost certainly be left without any meaningful control in time, with all the resulting negative or even catastrophic consequences. The only realistic way to control AI effectively is for the AI sector itself to control the AI development process.

Nick Bostrom has meticulously analysed more than a dozen methods of AI control and concluded that none of them would guarantee full control over AI. So, what should we do, have no control at all? That is an option we have already considered above, and the answer was that having none would almost certainly lead to the extinction of the human species. The AI threat is different from natural pandemics, which, being a lottery-type risk, may not happen at all even if we apply no countermeasures.

Uncontrolled AI, on the other hand, is an existential threat that we may face in just about a decade from now. It is so dangerous because it may materialise much earlier than the risk most talked about in recent years – a climate catastrophe. If AGI emerges by 2030, the AI threat will be far more dangerous for humans than global warming exceeding 1.5°C. That AI tipping point, which may coincide with the global warming tipping point, will start the human species' evolution or extinction. Whatever happens, we already have no option but to evolve.

Therefore, we need to increase the probability of controlling AI effectively for as long as possible. The key aspect of controlling AI concerns the values that define humanity: what is good and what is right. Somewhat paradoxically, AI forces us to answer these questions more meaningfully than ever before.

Tony Czarnecki is the Managing Partner of Sustensis, a think tank on a civilisational transition to coexistence with Superintelligence. His latest book, 'Prevail or Fail – a Civilisational Shift to the World of Transhumans', is available on Amazon: https://www.amazon.co.uk/dp/B0C51RLW9B.
