The purpose of this part of Sustensis is to serve as a kind of Think Tank, providing inspiration, suggestions and solutions for Humanity's transition to the time when it will coexist with Superintelligence. I founded this part of Sustensis driven by these assumptions:
- The world has started to change in most areas at a nearly exponential pace. What once took a decade can now be achieved in a year or two.
- Apart from imminent man-made existential dangers to Humanity, such as biotechnology or nuclear war, which could materialize at any time, the most imminent risk facing Humanity is an immature form of Artificial Intelligence, before it reaches the Superintelligence level (also called Artificial General Intelligence).
- By 2030 Artificial Intelligence may, in some areas, reach a stage where humans are no longer able to fully control it. It may already outsmart us.
- By 2045 Superintelligence may be mature enough to exercise full control over humans, whether benevolent or malevolent. If it is benevolent, inheriting the best Universal Values of Humanity, it will help us control all other existential risks and gradually create a world of abundance. If it is malevolent, it may eradicate all humans and quite likely all other living species.
- By 2050 Superintelligence may achieve the so-called Technological Singularity. That means it could re-develop itself exponentially, becoming millions of times more intelligent and capable than the whole of Humanity.
- For naysayers, here is a reminder quoted by the FT: "In September 1933, the eminent physicist Ernest Rutherford gave a speech in which he said that anyone who predicted energy could be derived from the transformation of the atom 'was talking moonshine'. The very next day the Hungarian scientist Leo Szilard worked out conceptually how it could be done." There are dozens of similar recent examples of breakthroughs arriving far sooner than predicted (the number of years ahead of expert predictions in brackets): e.g. AlphaGo (10), autonomous vehicles (8), quantum computing (10-15) or quantum encryption (10).
You will find some justification for these assertions in various areas of this website. However, the full reasoning behind these assumptions can be found in my book: “Who could save Humanity from Superintelligence? – European Union, NATO or… China”.
My intention is to expand the ideas explored on this website and continuously update them as the world around us changes at an almost exponential pace. I invite experts in the field to join me in keeping this site relevant to people interested in the many aspects of human progress in the context of evolving Artificial Intelligence, so that when it emerges as Superintelligence, it becomes our friend rather than our foe. We need to create a critical mass for a transition towards coexistence with a benevolent Superintelligence.
Next: How to use this site?