Sustensis – A Think Tank that makes you think!

Sustensis is a not-for-profit Think Tank providing inspirations, suggestions, and solutions for a Civilisational Transition to coexistence with Superintelligence. We must recognize that the scale of the required changes represents a Civilisational Shift in the history of the human species. To navigate this transition to a new civilisation successfully, we must accept the magnitude and timeframe required for this transformation.

An important part of our work is Transhumanism. Transhumans are people who can make decisions and communicate with computers by thought alone, using Brain-Computer Interfaces (BCI). They will soon become far more intelligent than any human. Ultra-fast technological progress will ultimately enable Transhumans to become a new species – Posthumans. Thus for us…

 “Transhumanism is about Humanity’s transition to its coexistence with Superintelligence until humans evolve into a new species”. 

Our key objective is to minimize the risk of developing a malicious Superintelligence, which requires global cooperation, hardly possible without a deep reform of democracy. This area is supported by our linked website Euro-Agora, which presents a new type of democracy, combining the best features of direct and representative democracy into a Consensual Presidential Democracy. Furthermore, if Humanity’s transition is to be carried out swiftly and effectively, it must be consensual. To facilitate that, we have developed an AI-enabled Consensual Debating approach, supported by the most advanced Artificial Intelligence Assistants, on our second subsidiary website – Consensus AI. We invite experts in various domains of knowledge to join us in making this website relevant to people interested in human progress in the context of evolving Artificial Intelligence, so that when it emerges as Superintelligence, it becomes our friend rather than our foe.

It is best to use the left sidebar for detailed navigation. It is arranged in a logical order, starting with Existential Risks, with the sub-tabs explaining the proposed solutions in more detail. There is also a comprehensive SEARCH facility to help you access the required information. You can access the content and leave comments without registering. However, to make standalone posts or articles, you need to register. The posts and entries are archived and continuously updated under the main tabs. This website observes the principles of the General Data Protection Regulation (GDPR) and by entering this site you agree to those principles, which you can view here.

_________________________________________________________________________________________________________________________________________________________________________________________________________________________

LATEST NEWS AND ARTICLES

_______________________________________________________________________________________________________________________________________________________________________________

Towards Benevolent Superintelligence

A Response to Conjecture’s “The State of AI Today” Compendium

London, 5 November, 2024

In recent years, numerous publications have raised alarms about Artificial General Intelligence (AGI) or Superintelligence as an existential threat. Last year the Future of Life Institute published an open letter, ‘Pause Giant AI Experiments’, calling for suspending the development of the most advanced AI models. In October 2024 there were two publications expanding the case for a moratorium on the development of AGI for a decade or two: ‘Narrow Path’ by Andrea Miotti, the founder of Control AI, and the compendium ‘The State of AI Today’ by Conjecture. The Compendium provides plenty of evidence that the existential threat coming from AI is real and near. It justly underlines the growing divide between AI’s unprecedented development pace and humanity’s readiness to manage it.

All three proposals call for a truly global moratorium on the development of the most advanced AI models, hoping to produce, in 10-20 years, AI that will be forever beneficial and under human control. I consider that highly unrealistic because it simply ignores political realities. The difference between these proposals and mine may stem from the assumptions I have made in this document:

  1. AI cannot be un-invented. The ultra-fast increase of AI’s intelligence and capabilities, which will in the next few years exceed general human intelligence, is unstoppable.
  2. AI is not just a new technology – it is a new intelligence that may lead to:
    • Human extinction. If Superintelligence emerges as malicious, we may share the fate of the 99% of species that have already gone extinct, including several hominids, or …
    • Human coexistence with Superintelligence. If we control AI development effectively then we may produce a benevolent Superintelligence, bringing unimaginable human prosperity.
  3. Human control over AI may end one day, e.g. when it becomes smarter than humans, i.e. becomes AGI.
  4. AGI may emerge by 2030 and may already be beyond full human control.
  5. Global AI control by 2030 is impossible. Therefore, we must implement it almost immediately, at least in those countries where a high degree of AI control and alignment is possible.
  6. The duration of human control over AI once it becomes AGI depends on:
    • AGI proliferation. We may only be able to control or coexist with one AGI, not dozens.
    • How well AGI has been aligned with human values and preferences.
  7. It is better to try to create a benevolent Superintelligence than to stop the development of advanced AI; a stop would be ineffective and would increase the risk of creating a malicious Superintelligence.
  8. A potentially malicious Superintelligence is our ‘enemy’ seen on the horizon. We need to imagine that we are at the cusp of a danger equivalent to the start of the Third World War.

Therefore, to minimize the threat of human extinction by AI, we need to lower this risk, e.g. by restricting the supply of advanced GPU chips or AI algorithms, as suggested in the Compendium. But to do that effectively, we need new global organizations capable of enforcing such restrictions, and other mechanisms, at least in the countries that are the undisputed leaders in advanced AI systems. That is why I propose to build a “Coalition of the Willing Nations,” a de facto World Government. It would enforce the recommendations of the existing Global Partnership on AI (GPAI), giving it extended prerogatives. We need a single ‘Western’ AI centre, such as a Global AI Company (GAICOM), developing AI as one Superintelligence and counterbalancing China’s AI Program. But we must also build a Multimodal AI Maturing Framework to align Superintelligence with human values and goals.

To read the article, click on the image below.

Tony Czarnecki, Sustensis

______________________________________________________________________________________________________________________________________________________________________________

Two AI-generated journalists discuss in their own podcast Tony Czarnecki’s article ‘Taking Control over AI before it starts controlling us’

London, 21 September, 2024

You may have come across some extraordinary examples of what AI can already do or how it can help us. I recently came across a truly astounding innovation, hardly mentioned by its developer, Google, and still labelled ‘Experimental’ – NotebookLM. I decided to try it myself. As the basis for this experiment, I used my article ‘Taking control over AI before it starts controlling us’, published last year, which you can also view on our website (click on the image below):

Image generated by: DALL-E

In this experiment, NotebookLM read the article and created a podcast – an interview between two journalists, an ‘AI woman’ and an ‘AI man’. What is most striking is the superbly natural flow of this conversation, with humour, hesitations, and other emotional expressions. Just be amazed. This is how these two ‘journalists’ summarized their podcast based on a 12-page article:

“The provided text is an excerpt from a document entitled “Taking control over AI before it starts controlling us”. This article discusses the potential dangers of Artificial Intelligence (AI) reaching a point where it surpasses human intelligence and control. The author, Tony Czarnecki, argues that the development of a Superintelligence, an AI entity with its own goals and surpassing human capabilities, poses an existential threat to humanity. He highlights the need for global action and governance to control AI development before it reaches a point of singularity, where it becomes uncontrollable. Czarnecki proposes a framework involving an international agency to monitor AI progress and implement a system of control based on universal values. He also explores the role of Transhumans, individuals with enhanced cognitive abilities through Brain-Computer-Interfaces, in potentially controlling Superintelligence from within. The article outlines various scenarios, including the potential for an AI-governed world, and concludes with a call for urgent global action to mitigate the potential risks of AI.”

Listen to the 8-minute podcast by clicking on the image below.

Image generated by: DALL-E

Here is another article, “Can AI make you happy?”, which Google’s NotebookLM turned into an interview.

In this incredibly human-like interview, it is the emotions that make it so remarkable. Additionally, the AI interviewers skillfully communicate some complex problems in simpler language, as if chatting over a coffee. Click on the image to listen to this 13-minute podcast.

London, 25 September, 2024

Image generated by: DALL-E

________________________________________________________________________________________________________________________________________________________________________________________________________
