Big Coexistence

If we design Superintelligence properly, its emergence does not have to be a catastrophic event in human history. In that case a new period in Humanity’s existence will begin: the Big Coexistence. A friendly Superintelligence, one that genuinely cares about our well-being, could guide us through the various hazards that endanger the human species. This covers both anthropogenic and non-anthropogenic risks, such as an asteroid impact: an asteroid detected early enough could be deflected onto a trajectory that bypasses Earth. More importantly, Superintelligence may have quite different ways of analysing potential risks, based on idea-generating mechanisms of which we humans could be totally unaware.

Once the Big Coexistence starts, the human species may have very little influence on its own future. Some people will remain in a purely biological form, living well over 100 years thanks to rejuvenating medicine. Some, with extended brain capabilities, will become Transhumans, and some will decide to fully merge with Superintelligence and perhaps keep living in a digital form. However, irrespective of the form of intelligence, the greatest legacy that Humanity will deliver to the new species of intelligent and conscious beings will be the best of human ethics in the form of widely agreed Universal Human Values. After that, the next generation of “ethical” Superintelligence may itself redefine ethics in ways we cannot even imagine.

Assuming Superintelligence develops its capabilities gradually, remains under our full control and quite possibly becomes conscious, the question is how it could directly help us mitigate all other existential risks. Superintelligence, if properly designed and managed, can deliver incredible benefits to Humanity and at the same time make our future much safer. This is not yet the prevailing view among scientists. A lot depends on how we prepare ourselves for this moment and whether there will be any intervening catastrophic events that would bury the dream of Superintelligence and possibly end our civilization (e.g. an engineered, untreatable pandemic).

Among the optimists is Max Tegmark, a well-known cosmologist. In his book “Life 3.0: Being Human in the Age of Artificial Intelligence” he gives quite an optimistic view of what Superintelligence can do for us. In an interview with Clive Cookson, Tegmark remains convinced that, barring some cataclysmic disaster in the next few decades, Superintelligence will take over the world. But he believes that we can shape the way this happens, including embedding human values in it. In his view, the next few decades on Earth could have cosmic significance, determining “nothing short of the ultimate future of life in our universe”. Given that our galaxy has about 100bn planets and there are 200bn galaxies in the universe, most astronomers maintain that extra-terrestrial intelligence must be widespread.

A number of computer scientists believe Superintelligence will take the form of human-machine hybrids: Transhumans with their minds wirelessly connected to computer intelligence, or with their minds converted into a digital form that could then be copied, allowing Transhumans to preserve their lives forever. However, Tegmark disagrees. A clean-slate Superintelligence will be much easier to build and, even if Transhumans and Uploads are introduced, their human component is likely to make them uncompetitive in the long run against pure Superintelligence. Once it has exceeded human abilities, our knowledge of physics suggests that it will advance rapidly beyond the point that biological intelligence has reached through random evolutionary progress.

We live on one of 200 billion billion planets, which makes intelligence in other parts of the Universe nearly a certainty

As Tegmark points out, “information can take on a life of its own, independent of its physical substrate”. In other words, any aspect of intelligence, presumably including consciousness, that evolved in flesh, blood and carbon atoms can exist in silicon or any other material. No one knows what the next blockbuster substrate will be, but Tegmark is confident that the doubling of computing power every couple of years will continue for a long time. I might agree with this view with one proviso: Transhumanism, i.e. blending part of our mind with Superintelligence, should be seen only as a transitional phase on the way to a fully digital form. That would be a much safer passage and would give Transhumans more time to decide on the best way for the new species to evolve.

The fundamental limit that the laws of physics impose on the speed of computation is about a billion trillion trillion times the power of today’s best computers. The intelligence explosion could propel AI across the universe, generating energy billions of times more efficiently than present-day technology. Tegmark describes candidate power sources such as black holes, quasars and “sphalerizers” that convert heavy fundamental particles (quarks) into lighter ones (leptons). The message at the heart of Life 3.0 and Tegmark’s “beneficial AI” movement is that, since Superintelligence is almost inevitable, we should make every effort now to ensure that it emerges in a form as friendly as possible to human beings, primed to deliver the cosmic inheritance we want. If we wait too long, it may be too late.
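To see where that “billion trillion trillion” figure comes from, here is a minimal back-of-the-envelope sketch. It assumes the Margolus–Levitin bound that Seth Lloyd used in his “ultimate laptop” argument (a system with energy E can perform at most 2E/πℏ elementary operations per second), and it assumes roughly 10^18 operations per second for today’s best supercomputers; these inputs are my own illustrative figures, not numbers taken from Tegmark’s book.

```python
import math

# Rough check of the physical limit on computation, assuming the
# Margolus-Levitin bound from Seth Lloyd's "ultimate laptop" argument:
# a system with energy E performs at most 2E / (pi * hbar) ops/sec.
C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s

energy = 1.0 * C**2                        # E = mc^2 for 1 kg of matter, ~9e16 J
limit_ops = 2 * energy / (math.pi * HBAR)  # ~5.4e50 operations per second

today_ops = 1e18   # assumed order of magnitude for a top supercomputer

print(f"physical limit : {limit_ops:.1e} ops/s")
print(f"ratio to today : {limit_ops / today_ops:.1e}")
# ~5e32, i.e. roughly a billion (1e9) x trillion (1e12) x trillion (1e12)
```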

At present no one has a clear idea how to achieve this. At a moral and political level we need to discuss what goals and qualities to incorporate, a subject covered in some depth on this website. At a technical and scientific level, researchers must figure out how to build our chosen human values into AI in a way that will preserve them after we have lost direct control of its development. Tegmark explores various options and scenarios in which Superintelligence plays roles ranging from “gatekeeper” to “protector god”, from “zookeeper” to “enslaved god”. “I view this conversation about the future of AI as the most important one of our time,” he writes. Life 3.0 might convince even those who believe that AI is overhyped to join in.

Tegmark is supported in his views by Stuart Russell, a British-American AI scientist. Russell proposes that, to ensure the goal we have in mind will be correctly understood by Superintelligence, three principles must be observed. I consider these principles probably the most practical solution that could actually work, because they would make Superintelligence behave more like we do (a toy sketch of the underlying mechanism follows the list):

  1. Superintelligence needs to know in minute detail, supported by thousands of examples, what our top human values are.
  2. Superintelligence should be allowed a margin of doubt, both about the rationality of those values and about their interpretation.
  3. Superintelligence should be taught what these values really mean in practice by being allowed to observe, for some time, how people actually implement those values (which is discussed here on this website).
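Here is a minimal sketch of how these three principles might look in code, assuming a simple Bayesian value-learning setup: the candidate value profiles, the feature triples and the Boltzmann-rational choice model below are all hypothetical illustrations of the general idea, not anything taken from Russell’s work.

```python
import math

# Toy illustration of Russell's three principles (hypothetical names and
# numbers throughout):
#   1. candidate value profiles stand in for detailed human values,
#   2. a probability over profiles encodes the machine's margin of doubt,
#   3. the doubt is updated by observing which actions people actually pick.

# Hypothetical value profiles: weights over (honesty, safety, freedom).
CANDIDATES = {
    "profile_a": (0.6, 0.3, 0.1),
    "profile_b": (0.2, 0.6, 0.2),
    "profile_c": (0.1, 0.2, 0.7),
}

# Principle 2: start uncertain about which profile is the right one.
posterior = {name: 1.0 / len(CANDIDATES) for name in CANDIDATES}

def score(weights, action):
    """How desirable an action (a feature triple) looks under one profile."""
    return sum(w * f for w, f in zip(weights, action))

def likelihood(weights, chosen, options, rationality=3.0):
    """Boltzmann-rational model: people usually, but not always,
    choose the action their values score highest (principle 3)."""
    total = sum(math.exp(rationality * score(weights, a)) for a in options)
    return math.exp(rationality * score(weights, chosen)) / total

def observe(chosen, options):
    """Bayesian update of the machine's doubt after one observed choice."""
    for name, weights in CANDIDATES.items():
        posterior[name] *= likelihood(weights, chosen, options)
    norm = sum(posterior.values())
    for name in posterior:
        posterior[name] /= norm

# Principles 1 and 3 in action: many observed examples, here a person
# repeatedly preferring the safer option over the freer one.
options = [(0.2, 0.9, 0.1), (0.3, 0.1, 0.9)]
for _ in range(20):
    observe(chosen=options[0], options=options)

print(posterior)  # probability mass shifts toward the safety-weighted profile
```

The point of this design is that the machine never commits to a single hard-coded objective; its confidence in any one reading of our values grows only as the observed evidence supports it.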

If you have read the articles on the preceding tabs of this website, then you know this is precisely the view we agree with. Assuming we teach Superintelligence our values and keep full control of its activity, it could become an enormous help to the whole of humanity in solving almost any problem we have. All anthropogenic existential risks, including of course climate change, and even some risks stemming from natural disasters (such as supervolcanoes) could be drastically reduced or eliminated. The only caveat is that, for Superintelligence to help us successfully, we may need to trust its judgements and decisions, and fulfil what is expected of us.

At this stage, my overall assumption is that we will somehow manage to control Superintelligence and make it our “best friend”. We should start developing practical measures right now by adopting the 23 Asilomar Principles, defined earlier on, so that AI presents as low a risk to us as possible before it transforms itself into Superintelligence and brings about the Technological Singularity.

The first step would be to help the maturing Superintelligence understand who we are as humans and what our most important values are. This is why it will be so critical to redefine our key human values on behalf of the whole of Humanity, because these will ultimately become a joint set of values shared by humans and Superintelligence.
