I would assume that for the vast majority of us the most important goal is to live a long and healthy life, and to secure a similar life for our children, grandchildren, great-grandchildren and so on, almost ad infinitum. We take it for granted that, as a species, we will exist forever. The problem is that there is no such infinity. Very few of us consider that over 99% of all species that once existed are now extinct, and that we will be no exception. Moreover, there is at least a 20% probability (some experts rate it at more than 50%) that we may be extinct by the end of this century.
However, on the plus side, we are also the only species that may be able to steer its own evolution in a desired direction. We have already been doing this over millennia, but in the cultural and social sphere, which has also strengthened our resilience to extinction. Today, however, we may be able to control our physical evolution into a new species while retaining our cultural and social pedigree.
We will only succeed in that evolution if we do it gradually, as a caterpillar metamorphoses into a butterfly. At present we are the caterpillars. Our target is to become butterflies by morphing humans into Superintelligence.
But are we really approaching the moment when, if we do nothing, our species will simply become extinct within this century or the next? I would welcome your arguments, whether you agree or disagree (simply leave a comment on this page). After all, the purpose of this website is to add suggestions to a common melting pot of ideas, to gain a better understanding of how we can avoid extinction, at least for the next few centuries. Below, I argue my case, emphasizing from the outset that it is a very subjective view:
- The world has started to change in most areas at a nearly exponential pace. What once took a decade can now be achieved in a year or two
- Apart from man-made (anthropogenic) existential dangers to Humanity, such as biotechnology or nuclear war, which can strike at any time, the most imminent risk facing Humanity is Artificial Intelligence (AI).
- Technology is the driving force behind this exponential pace of change, and in particular behind the superfast development of AI's capabilities. AI has already been delivering many benefits to us all, and in the next few decades it may create a world of unimaginable abundance. That is the side of AI we all want to hear about.
- However, AI, like many other technological breakthroughs, such as nuclear energy or biotechnology (especially genetics), has also become a risk – probably the greatest existential risk that humanity has ever faced.
- By 2030 we may already have an immature form of Artificial Intelligence, before it reaches Artificial General Intelligence (AGI), also called Superintelligence – the term used on this website. This is the type of AI that will exceed humans in capability and in the speed of making intelligent decisions by millions of times, but only in some areas, remaining utterly incompetent in most others. In particular, AI's self-learning and self-improvement capabilities, which are already available, may progressively lead to an unwanted diffusion of superintelligent skills from specific domains into others of which we may not even be aware. Therefore, any political or social changes have to be viewed from that perspective – we have just about a decade left to remain in control of our own future.
- By about 2030-2035 Artificial Intelligence may reach the stage at which humans may no longer be able to fully control the goals of such an immature superintelligent AI, even by implementing the most advanced control mechanisms. The most significant threat in this period is the emergence of a malicious Superintelligence, which could destroy us within a few decades. This risk outweighs other existential risks, such as climate change, because of its imminent arrival and, in the extreme case, the potential total annihilation of the human species.
- By 2045-2050 Superintelligence may reach its mature stage, becoming either a benevolent or a malevolent entity. If it becomes benevolent, having inherited the best Universal Values of Humanity, it will help us control all other existential risks and gradually create a world of abundance. If it becomes malevolent, it may eradicate all humans. At around that time, Superintelligence may achieve the so-called Technological Singularity. That means it will be able to re-develop itself exponentially, becoming millions of times more intelligent and capable than the whole of Humanity, and quite probably a conscious entity
- Whatever happens, humans will either become extinct because of existential risks, such as Superintelligence, or evolve into a new species; it is impossible to stop evolution. If Superintelligence emerges by the middle of this century, then we may either become extinct within a very short period or gradually merge with it by the end of the next century. If some existential events do happen in the next few decades, especially in combination – for example a pandemic, global warming and a global nuclear war – then our civilisation, and potentially the human species, may be wiped out at once. Alternatively, a new civilisation would be built, reaching the current technological level within a century and facing the same existential risks as we do now. That cycle may continue for a few centuries, in which case the human species' extinction may be delayed, or we will morph into a new species
- Therefore, Humanity has entered a period of existential threat, in which it could annihilate itself as a species
- We can no longer stop the development of Superintelligence, but we still have control over its mature form, in particular over whether it becomes benevolent (friendly) or malevolent (malicious)
If you broadly accept these observations but disagree with the pace of change, believing for example that it may take one or two centuries to develop Superintelligence, then you are probably in the vast majority. However, for the naysayers, here are some reminders. In September 1933, the eminent physicist Ernest Rutherford gave a speech in which he said that "anyone who predicts energy could be derived from the transformation of the atom is talking moonshine". The very next day the Hungarian scientist Leo Szilard, while out walking, worked out conceptually how a nuclear chain reaction could be used to build a nuclear bomb or a nuclear power station. There are dozens of similar recent examples in which top specialists predicted that an invention in their own domain was many years away, yet it has already been achieved either fully or partially. For instance (in brackets, the number of years away each was predicted to be, counting from 2016): AlphaGo (10), autonomous vehicles (8), quantum computing (10-15), quantum encryption (10).
In my view, we have already unknowingly entered the period I call the "Transition to Coexistence with Superintelligence". In practice, we have about a decade to put in place at least the main safeguards controlling Superintelligence's capabilities, to protect us as a species. Therefore,
Humanity should have a Mission, based on the Universal Values of Humanity, which should determine who humans will become as a new species, for example:
Avoid extinction by coexisting with Superintelligence and gradually evolve into a new species
One of the key preconditions for implementing such a Mission is the creation of a powerful supra-national organization that would act on behalf of all of us as a planetary civilization (given that the UN cannot realistically play that role). However, I believe it is too late for this option, since it would take several decades to create such an organisation, and even then it might not have all the required powers. Realistically, we must accept (the sooner the better) that the world will probably not act as a single entity, at least not immediately. Since we must act now, the remaining option is to rely on the most advanced international organization, which would initially act on behalf of the whole world, even though it would include only some countries. I have argued this case in depth in my book Who Could Save Humanity from Superintelligence? The organisation that might fulfil that role most effectively is the European Union, followed by NATO and, as a fall-back option, China. That said, the support of other organisations, such as the UN or the WTO, will be vital. Whichever organisation leads Humanity, it should be guided by a Vision of how the Mission could be delivered (at least the aspect related to Superintelligence), such as:
Maintain a global, continuous and comprehensive control over the development of Artificial Intelligence
To deliver such a Vision, we must teach and instil in AI the best human values and preferences until its mature form – Superintelligence – becomes a single entity, millions of times more intelligent than humans, yet remaining our partner. This should start with a deep reform of democracy, on which a pseudo World Government, such as a European Federation, must be founded. Throughout this website, I have used the assumed transformation of the European Union into a European Federation, and then into a Human Federation, as a kind of strawman approach. This applies particularly to the proposed changes in human values, democracy, and the organizational aspects of such a transformation.
The next question is how to make such a transition. You will find a plausible answer here.