Humanity may become extinct within the lifetime of some people already alive, or it may evolve into a different species. If our civilisation is to survive, we need to apply some powerful risk mitigation strategies. We have heard a lot about the existential threat of Climate Change. But this is only one of about a dozen such existential risks. Among them, the most severe is the threat of Artificial Intelligence and especially its mature form – Superintelligence. This is an existential threat of an entirely different magnitude, which could make our species extinct either by direct malevolent action or by taking over control of the future of Humanity. This risk also differs from, for example, Climate Change, because it may come much earlier, within the next few decades. Secondly, we cannot uninvent AI – the proverbial genie is already out of the bottle.
There is at least a 20% chance that one of the existential risks will materialize by the end of this century, making our species extinct. Some experts, such as Prof. Martin Rees, the Astronomer Royal, or the late Stephen Hawking, have assessed that risk at 50% or higher.
We have already, unknowingly, entered the period which I call the “Transition to Coexistence with Superintelligence”. In practice, we have about one decade to put in place at least the main safeguards to control Superintelligence’s capabilities, to protect us as a species, and to develop it as a friendly Superintelligence that will become our partner. Therefore, Humanity should have a Mission, accepted by a significant majority (though not all) of nations and based on the revised Universal Values of Humanity. It should set out a strategy to avoid human extinction and prepare for our gradual evolution into a new species, for example:
Avoid extinction and evolve into a new species by developing a friendly Superintelligence
One of the key preconditions for implementing such a Mission is the creation of a powerful supra-national organisation that would act on behalf of all of us as a planetary civilisation (considering that the UN cannot realistically play that role). However, I believe it is too late for this option, since it would have taken several decades to create such an organisation, and even then it might not have all the required powers. Realistically, we must accept (the sooner the better) that the world will probably not act as a single entity, at least not immediately. Since we must act now, the remaining option is to count on the most advanced international organisation, which would initially act on behalf of the whole world, although it would include only some countries. I have argued my case extensively in my book Who could Save Humanity from Superintelligence? The organisation which might fulfil that role most effectively is the European Union, followed by NATO and, as a fall-back option, by China. That said, the support of other organisations, such as the UN or the WTO, will be vital. Whichever organisation leads Humanity, it should be guided by a Vision of how Humanity’s Mission could be delivered (at least the aspect related to Superintelligence), such as:
Maintain global control of existential risks, especially the development of Artificial Intelligence
To deliver such a Vision, we must teach and instil in AI the best human values and preferences until its mature form – Superintelligence – becomes a single entity, millions of times more intelligent than humans, yet still our partner. That process of maturing the current AI over the next few decades should start straight away. Therefore, we must agree as soon as possible on a Roadmap for Humanity’s evolution, perhaps similar to the one I propose on this website, which contains five stages:
- Existential Risks
- Creating a friendly Superintelligence
- Reforming Democracy
- Making a transition to a federated world
- Evolving with Superintelligence
We have entered the most uncertain period in the existence of humankind. You can make your own judgment as to whether this is an exaggeration or an understatement by browsing this website, beginning with Existential Risks. Then move on to Superintelligence and the further tabs on the right. The further down you go within each top-level tab, the more detail you will find.
Tony Czarnecki, Sustensis