1st May 2019

GD Inspirations

What do climate change, Superintelligence and Brexit have in common?

A short answer is that they are all existential risks. However, each of them differs in the probability, impact, immediacy and subject of the risk. We are exposed to many risks in various areas throughout our lives. Living is a risk in itself. But what does ‘existential’ risk mean? An ‘existential’ risk differs from other risks primarily in its severity. While there is a risk that one of the pipes in your house may burst at any moment and flood your house, it is not an existential risk for you, since you are very likely to continue to exist (hence the name). The same applies to your house – it is very likely ‘to survive’ such an event, since the term ‘existential’ refers to the continuing existence of an object in the same form, retaining the same properties and functionality, e.g. a house providing at least a shelter.

On the other hand, driving a car is an existential risk for you. In 2016, the Global Challenges Foundation found that the risk of dying in a car accident in the United States was 1 in 9,395 per annum. Assuming an average lifespan of, say, 80 years, that means the odds of an average person dying in a car crash were 1 in 117 over that period. Expressed as a percentage, the risk of driving a car is about 0.8% over a lifetime, or about 0.01% per year. There is much more detailed information on the subject of existential risks on this website.
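For readers who like to check the arithmetic, here is a minimal sketch of how those figures fit together, assuming a constant, independent annual risk (the 1 in 9,395 and 80-year figures come from the paragraph above):

```python
# A rough check of the car-crash figures quoted above, assuming a constant
# annual risk of 1 in 9,395 and an 80-year lifespan (both taken from the text).
annual_risk = 1 / 9395
years = 80

# Simple approximation: multiply the annual risk by the number of years.
approx_lifetime = years * annual_risk                 # ~0.85%, i.e. about 1 in 117

# Compounded version: 1 minus the chance of avoiding a fatal crash every year.
compound_lifetime = 1 - (1 - annual_risk) ** years    # ~0.85% as well

print(f"Annual risk:              {annual_risk:.4%}")
print(f"Lifetime risk (approx.):  {approx_lifetime:.2%} (about 1 in {1/approx_lifetime:.0f})")
print(f"Lifetime risk (compound): {compound_lifetime:.2%} (about 1 in {1/compound_lifetime:.0f})")
```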

So, is climate change an existential risk? The recent ‘Extinction Rebellion’ protests, which created real chaos on London’s transport network, affecting millions of people over the fortnight just before Easter, unequivocally claimed so, demanding immediate action from governments.

Extinction Rebellion demonstration in London in April 2019

The word ‘extinction’ refers in this case to the end of the existence of the human species. If the organizers of the demonstration had that in mind, then it would mean that climate change, which most scientists agree is man-made, really poses an existential risk for humanity. However, risks differ in their impact and in the probability of maturing over a certain period, e.g. within a year. The influential Stern Review on the Economics of Climate Change, published in 2006, puts the overall risk of human extinction at 0.1% per year, which includes the risk stemming from climate change. That means that, according to this Review, there is roughly a 10% risk that humans become extinct by the end of this century. Some researchers are even more pessimistic, assessing that risk at 20% or even 50%. By the way, the Stern Review is explicit that the 0.1% per year figure is not based on empirical considerations but is just a useful assumption.
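To see where the ‘roughly 10%’ figure comes from, here is a minimal sketch that simply compounds the Stern Review’s assumed 0.1% annual extinction risk over the rest of the century (the roughly 95-year horizon from 2006 to 2100 is my own illustrative assumption, not a figure from the Review):

```python
# Compound the Stern Review's assumed 0.1% annual extinction risk over the
# rest of the century. The ~95-year horizon (2006 to 2100) is an illustrative
# assumption, not a figure taken from the Review itself.
annual_risk = 0.001
years = 95

survival = (1 - annual_risk) ** years   # chance of no extinction in any year
cumulative_risk = 1 - survival          # ~9.1%, i.e. roughly 10%

print(f"Cumulative extinction risk by 2100: {cumulative_risk:.1%}")
```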

Is climate change, then, a near extinction event? Even if it is treated literally as an existential risk, it is not an immediate extinction event. According to the most recent calculations, it would take another 150 to 200 years before the human species became extinct because of the changes to the biosphere caused by climate change. Many other species, however, would survive long after all humans were gone.

Immature Superintelligence – superintelligent, but dangerously inexperienced

How about Superintelligence? As has been extensively discussed on this website, Superintelligence is by far the most severe and immediate existential risk for Humanity. Only artificial or natural pandemics could happen sooner, indeed at any time. However, they may not lead to a literal extinction of the human species, because genetic differences make some people immune to certain pathogens. Even a global nuclear war would not lead to a total extinction of all humans. According to recent calculations, between 1.5 and 2 billion people would die over a period of several years (Iceland and Tasmania would apparently be least affected, and the southern hemisphere would also suffer less). Billions would survive, but it would be the end of civilization as we know it, perhaps for a hundred years (mainly because of radioactive contamination).

Would the United Kingdom survive Brexit?

An existential risk does not refer just to humanity. As mentioned earlier, it can refer to a single person, an object or an organization, meaning the end of that entity’s existence in its current form. Brexit is another example. Here, the object under existential risk is the United Kingdom as a state, currently formed of four nations. The existential threat is the end of the United Kingdom, if, for example, Scotland were to become a separate state and Northern Ireland were to unite with the Republic of Ireland.

So, how can we reduce existential risks? The answer is simple – by taking precautionary measures. That was the intention of the Extinction Rebellion protests – to mobilise governments and people to take drastic action so as to make the extinction of the human species, as a consequence of climate change, less likely. We may argue about whether the name of the demonstration properly reflects the current situation. However, setting aside the form and method of addressing the issue, it was right to draw our attention to this important problem. Moreover, the dramatic way in which the message was conveyed to governments seems to have achieved some success. The EU has now agreed to eliminate CO2-generating processes from all means of production, except agriculture, by 2050.

However, the Extinction Rebellion protests also prove one more important point. Most people will not make any significant changes in their lives, even if they know they are facing a significant risk. For example, smokers generally continue smoking despite the daily warnings on their cigarette packets, until they become severely ill or develop lung cancer. The same applies to the population at large. Only the dramatic increase in ultraviolet radiation in the 1980s, caused by the depletion of ozone in the atmosphere (itself the result of the use of harmful chemicals such as CFCs), led to a costly withdrawal of these substances and a successful shrinking of the ozone hole over the Antarctic thirty years later.

The global action to eliminate the use of ozone-depleting substances proves that we humans need what I would call ‘a near existential event’ to act globally. Such an event usually leads to fundamental changes addressing the underlying cause of a potential existential risk, so as to avoid it occurring in the future. For climate change, emerging weather patterns destroying vast areas in various parts of the planet could be such a near existential event, forcing governments to take unpopular and costly action.

For Superintelligence we also need such a near existential event. I have suggested that for humans such an event will be the emergence, in the next decade, of what I would call an Immature Superintelligence. We can already see the first peripheral events that clearly point that way. In the UK, we have the first judicial review of the police’s use of indiscriminate facial recognition. We have heard about the damage done by the WannaCry ransomware in 2017, and about the infiltration of government data and election manipulation by sophisticated cyber-attacks. These are not yet even basic AI threats. They are just a prelude to more spectacular events, such as a rogue AI agent firing nuclear missiles or intentionally releasing new types of viruses and bacteria from biological labs. Unfortunately, we may need such near existential events in order to take decisive action regarding our future co-existence with Superintelligence. Paradoxically, this is our best hope for humans acting urgently together to save their own species.

As regards Brexit, I believe the most likely outcome is that it may not happen at all. Therefore, it too could be categorized as a near existential event. In this particular case, the near-Brexit event, which would almost certainly have led to the end of the UK as a state, may instead start the overdue process of realigning the individual parts of the UK, which may at first have to be split into more than four parts, e.g. London, Yorkshire, Lancashire, etc. The Brexit experience would make it easier to reconfigure these new individual parts into a federal Britain, a process that may take a few years. Such a federation would need a written constitution, which would itself be the start of profound changes in the UK’s democratic system, marking the beginning of an entirely new period in the history of the British Isles.

Spontaneous actions such as the Extinction Rebellion protests, or the growing opposition to Brexit in the UK, bode well for future actions that may be required to draw governments’ attention to the need to reduce the risk of deploying Superintelligence (Artificial General Intelligence – AGI). The period of Immature Superintelligence may become Humanity’s near existential event. Perhaps only in such a situation can humans agree on the mutual, prompt and tough long-term action (see here for further details) that will be required in many areas of human endeavour. This will include the need to sacrifice part of our individual freedoms and sovereignty on behalf of an organization that will manage the transition to Humanity’s coexistence with Superintelligence. You can read further on that subject on this website here.
