Transhumans – a threat or a safety net?

One of the key assumptions taken on this website is that the continuous development of AI may ultimately lead to its transition into Superintelligence, provided we do not succumb earlier to some existential disaster and perish as a species. We may become bystanders or decision makers in that process. It largely depends on our ability to control the development of AI. We need to have this control process in place in some form by 2025 at the latest for the most advanced AI systems, and on a global scale by about 2030. From then on, the degree of our control over AI's new capabilities must increase, because once Superintelligence has matured, its self-learning abilities may develop very fast. By about 2050, within a short time, such a Superintelligence will reach, through self-improvement, the so-called Technological Singularity. At this point it will become our unquestioned Master, setting its own rules on how and where to progress further, without even consulting us, since we might quite likely not even be capable of understanding its arguments or its overall strategy. This might relate to its intended expansion beyond our planet, or simply to gaining access to new materials and energy resources. I leave it to your imagination what intentions such a Superbeing might have or what it might be able to invent (e.g., making any product from thin air, including food for us or even biological species).

The most likely outcome is that once a mature Superintelligence arrives, it will become our Master by default, hopefully with our most benevolent values embedded in its overall decision-making system. Its knowledge, its judgement on decisions important for humanity, and its overall comprehension of the world around us and of the Universe in general will be unimaginably greater than our own capabilities. Therefore, in the next few decades we may be forced to make the biggest decision in the history of humankind: how we want to evolve as a species.

If we manage to turn Superintelligence into our friend, and assuming we would still have control over its evolution (e.g., by linking its goals to our most important human values), then the question is what our choice will be. For example, we could let it evolve on its own and, so to speak, fly off and leave us alone, should that be possible. However, there is another option, at least over the next century or two, in which a purely digital Superintelligence (without any human mind uploaded) would leave us to ourselves whilst still taking care of all our needs. For many of us that would probably be an ideal situation, although it might be the riskiest approach, since we would have no ultimate control over it. Such a Superintelligence may, for example, amend the original values inherited from us, which in the worst-case scenario could lead to the annihilation of the human species if we end up competing with it for the same resources. That is why, to minimize the risk of Superintelligence working against our interests or even becoming outright malicious, we must establish control early in its maturing process, as it becomes more and more intelligent. Such control should be global and start right now. The question is whether that can be done at all, and if so, how it can be achieved.

Let me then outline one of the possible ways in which we could control the development of Superintelligence. If we consider the immense intellectual power and ultra-fast decision-making of such Transhumans, which Elon Musk plans to become himself, and for whom saving Humanity is an absolute priority, then we may have no better option than to entrust some selected and approved Transhumans with the power to act on behalf of the whole of Humanity to save us from existential risks. They would have to control the maturing Superintelligence from 'inside', with part of their brain continuously fused with it via a Brain-Computer Interface (BCI), such as the emerging Neuralink chip currently implanted in a human skull, or similar devices like miniature encephalogram-style helmets that read from, and write instructions into, the human brain.

A decision to delegate governing powers to individuals who would be selected rather than elected, even if they happened to be the most honest, is absolutely unthinkable today. It is simply impossible to expect the current political class in most democratic countries to suddenly agree to allow all significant decisions to be made by a handful of the most intelligent people on the planet. It seems even less plausible when we consider that a deep reform of democracy, say in the USA or in Britain, is unlikely in the next 10 years, since it is not in the interest of politicians to deliver it. And what about Russia, China, or other autocratic or dictatorial states? It all seems utterly hopeless, much as it does with establishing a World Government by the end of this decade.

And yet we must find a solution. The default option of doing nothing will more than likely lead to the extinction of the human species by the end of this century. The only plausible approach might be not to seek ideal, crystal-clear solutions but rather to take imperfect, somewhat risky steps that may minimize the possibility of an utter catastrophe for our civilization. In my view, Transhumans may offer such a potential solution to many problems that seem insurmountable today, including the growing threat of uncontrolled AI development.

It seems that perhaps the most realistic way to proceed is to rely on bottom-up control of AI by those who are most successful in developing ever more advanced AI systems. This is the fallback option, which we should try to implement right now, until at some stage we can engage the people's representatives (rather than politicians) in the process of AI control. The solution I propose consists of several stages, described in the side tabs extending from the main Transhumans tab.