1. Existential Risks

What are the risks that may destroy our civilization and Humanity? In the most general terms, these risks can be grouped, according to their scope and severity, into global catastrophic and existential (terminal) risks. A “global catastrophic risk” is any risk that is global in scope but endurable in severity: it may kill the vast majority of life on Earth, yet humanity could still potentially recover. An “existential” risk, on the other hand, is terminal in severity and transgenerational in effect. It can destroy all human, non-human and even plant life.

An example of a global catastrophic risk that could destroy the world in a material sense, and at the same time potentially eliminate the human race, is a global nuclear war. It could immediately wipe out most of the current material substance of civilization, i.e. towns, infrastructure, food crops, etc., and in the longer term, through radiation, lack of food and the onset of a post-nuclear winter, cause the death of all remaining people. A global pandemic is an example of an existential risk, although the COVID-19 pandemic of early 2020 could at worst be categorized as a catastrophic event. However, an existential pandemic may also be caused by an accidental or intentional release of a deadly virus from a laboratory, which could wipe out the human race within weeks but leave the infrastructure undamaged, at least for a few years.

Furthermore, both global catastrophic and existential risks can be divided into two broad groups: anthropogenic (man-made) risks, which Humanity could to some extent control and mitigate (e.g. global warming), and non-anthropogenic risks, over which we have no control (e.g. an asteroid impact).

Some catastrophic risks are natural, such as super-volcanoes or the impact of a large asteroid, which can cause the extinction of many species; Earth has experienced such mass extinctions in the past. This group also includes global warming, which in the past was natural but today is anthropogenic, i.e. man-made. Other anthropogenic risks include pandemics caused by artificially created biologically active viruses, and nuclear wars. Perhaps that is why Nick Bostrom of the Future of Humanity Institute believes that human extinction is more likely to result from anthropogenic causes than from natural ones. On this website we are only concerned with anthropogenic risks.

We have hardly any control over the non-anthropogenic (natural) risks. But we do have control over political, social, economic and technological risks. The anthropogenic (man-made) existential risks can be split into three categories, which may require different approaches:

Risks that are immediate and may become existential within days or even hours:

  1. Global nuclear war
  2. Weaponized AI or cyber wars
  3. Engineered pandemics and synthetic biology
  4. Nanotechnology and experimental technology accidents
  5. Unknown risks, mainly technology-orientated

But there are also risks that may become existential progressively. These risks are called progressive because their impact increases with time rather than being immediate. There are at least two such risks: Superintelligence and Climate Change. Both share an additional characteristic: they would certainly materialize if we do nothing, as opposed to the other risks covered here, which are of a ‘lottery’ type. The top two progressive risks are:

  1. Superintelligence, a risk which may partially materialize from around 2030 and then quickly have a wider and much more severe impact.
  2. Climate Change, whose impact may be felt severely by the middle of this century, although it may not yet be existential. Conventional modelling of climate change has focused on the most likely outcome: global warming of up to 4°C. But there is a risk that feedback loops, such as the release of methane from Arctic permafrost, could produce a temperature increase of about 6°C or more by the end of this century. Mass deaths through starvation and social unrest could then lead to the collapse of civilisation.

I describe all these risks in more detail in the subsequent sections but cover Superintelligence in a separate tab.

Most of the articles on the subject of Existential Risks on this website are based on Tony Czarnecki’s book, “Who could save Humanity from Superintelligence?”
