Sustensis is a not-for-profit Think Tank providing inspirations, suggestions and solutions for a Civilisational Transition to coexistence with Superintelligence. We must recognize that the scale of the required changes represents a Civilisational Shift in the history of the human species. To successfully navigate this transition to a new civilisation, we must accept the magnitude and timeframe required for this transformation.
An important part of our work concerns Transhumanism. Transhumans are people who can make decisions and communicate with computers by thought alone, using Brain-Computer Interfaces (BCI). They will soon become far more intelligent than any human. Ultra-fast technological progress will ultimately enable Transhumans to become a new species – Posthumans. Thus, for us…
“Transhumanism is about Humanity’s transition to its coexistence with Superintelligence until humans evolve into a new species”.
Our key objective is to minimize the risk of developing a malicious Superintelligence, which requires global co-operation – hardly possible without a deep reform of democracy. This area is supported by our linked website Euro-Agora, which presents a new type of democracy. It combines the best features of direct and representative democracy into a Consensual Presidential Democracy. Furthermore, if Humanity’s transition is to be carried out swiftly and effectively, it must be consensual. To facilitate that, we have developed an AI-enabled Consensual Debating approach, supported by the most advanced Artificial Intelligence Assistants, on our second subsidiary website – Consensus AI. We invite experts in various domains of knowledge to join us in making this website relevant to people interested in certain aspects of human progress in the context of evolving Artificial Intelligence, to ensure that when it emerges as Superintelligence, it will become our friend rather than foe.
It is best to use the left sidebar for detailed navigation. It is arranged in a logical order, starting with Existential Risks, with the sub-tabs explaining the proposed solutions in more detail. There is also a comprehensive SEARCH facility to help you access the required information. You can access the content and leave comments without registering. However, to make standalone posts or articles, you need to register. The posts and entries are archived and continuously updated under the main tabs. This website observes the principles of the General Data Protection Regulation (GDPR), and by entering this site you agree to those principles, which you can view here.
_________________________________________________________________________________________________________________________________________________________________________________________________________________________
LATEST NEWS AND ARTICLES
_______________________________________________________________________________________________________________________________________________________________________________
Do Not Pause Advanced AI Development – CONSOLIDATE AI
An article by Tony Czarnecki, Sustensis
London, 5 November, 2024
In recent years, numerous publications have raised alarms about Artificial General Intelligence (AGI) or Superintelligence as an existential threat. Last year the Future of Life Institute published an open letter, “Pause Giant AI Experiments”, calling for a six-month suspension of the development of the most advanced AI models. PauseAI continues that appeal worldwide. In October 2024, two publications – ‘Narrow Path’ by Control AI and the Compendium ‘The State of AI Today’ by Conjecture – called for such a moratorium to last decades. They also propose reducing the AI threat by measures feasible right now, e.g. by curbing the supply of advanced chips, energy or AI algorithms. Additionally, both documents provide evidence that the existential threat coming from AI is real and near, underlining a growing divide between AI’s exceptional pace of development and humanity’s readiness to manage it.
However, unlike the above measures, all other proposals, such as a moratorium on the development of the most advanced AI models, depend on literally global implementation. That seems unrealistic, as it ignores political realities. If such an implementation were possible, the UN would have done it. But since the UN is dysfunctional, we could only consider a partial global implementation, which would defeat the very purpose of pausing AI: it would not minimize the risks. On the contrary, it would increase them. The assumptions taken in this document differ, based on the following reasoning:
- AI cannot be un-invented. The ultra-fast increase of AI’s intelligence and capabilities, which will in the next few years exceed general human intelligence, is unstoppable.
- AI is more than just a technology – it represents a new kind of intelligence that could lead to:
- Human extinction. If Superintelligence evolves as malicious, humanity may face extinction, as did 99% of other species and several hominids before us… or
- Human coexistence with Superintelligence. Effective control of AI development could deliver a benevolent Superintelligence, bringing unprecedented prosperity for humanity.
- Only partial global AI control is possible, which must be completed by about 2030. Therefore, immediate control must start in the regions capable of effective implementation.
- Human control over AI may eventually cease, particularly once it surpasses human intelligence and becomes AGI.
- AGI may emerge by 2030 and the length of human control will depend on:
- The degree of AGI alignment with human values, preferences and goals.
- The degree of AGI proliferation, which could complicate overall human control over AI. We may only be able to control or coexist with just one AGI, not dozens.
- Halting advanced AI development could heighten the risk of a malicious AGI emerging, either by error or by intention, due to the potential AGI proliferation.
- A malicious AGI’s goals may lead to a conflict between the species. We may face a confrontation similar to an existential war.
- CONSOLIDATING AI development in one global centre will create one Superintelligence, minimizing the risk of ‘the war of AGIs’.
Based on these assumptions, it is proposed to implement safer AI as follows:
- Create a de facto World Government, with the initial purpose of delivering Safe AI. Like NATO, it could be created within one year, with over 50 countries from the G7, the European Political Community and the OECD, and ideally India and China.
- Expand the prerogatives of the Global Partnership on AI (GPAI) to implement ‘Safe AI’.
- CONSOLIDATE the development of the most advanced AI models in one global centre, applying a NON-PROLIFERATION policy, by creating a Global AI Company (GAICOM).
- Implement the Multimodal AI Maturing Framework to ensure that Superintelligence aligns with human values, preferences and goals.
To see how this can be done, read the article by clicking on the image below.
Tony Czarnecki, Sustensis
______________________________________________________________________________________________________________________________________________________________________________
Two AI-generated journalists discuss Tony Czarnecki’s article ‘Taking Control over AI before it starts controlling us’ in their own podcast
London, 21 September, 2024
You may have come across some extraordinary examples of what AI can already do or how it can help us. Recently, I came across a truly astounding innovation, hardly mentioned by its developer, Google DeepMind, and still labelled ‘Experimental’ – NotebookLM. I decided to try it myself. As the basis for this experiment, I used my article ‘Taking control over AI before it starts controlling us’, published last year, which you can also view on our website (click on the image below):
Image generated by: DALL-E
In this experiment, NotebookLM read the article and created a podcast – an interview between two journalists, an ‘AI woman’ and an ‘AI man’. What is most striking is the superbly natural flow of this conversation, with humour, hesitations and other emotional expressions. Just be amazed. This is how these two ‘journalists’ summarized their podcast based on a 12-page article:
“The provided text is an excerpt from a document entitled “Taking control over AI before it starts controlling us”. This article discusses the potential dangers of Artificial Intelligence (AI) reaching a point where it surpasses human intelligence and control. The author, Tony Czarnecki, argues that the development of a Superintelligence, an AI entity with its own goals and surpassing human capabilities, poses an existential threat to humanity. He highlights the need for global action and governance to control AI development before it reaches a point of singularity, where it becomes uncontrollable. Czarnecki proposes a framework involving an international agency to monitor AI progress and implement a system of control based on universal values. He also explores the role of Transhumans, individuals with enhanced cognitive abilities through Brain-Computer-Interfaces, in potentially controlling Superintelligence from within. The article outlines various scenarios, including the potential for an AI-governed world, and concludes with a call for urgent global action to mitigate the potential risks of AI.”
Listen to the 8-minute podcast by clicking on the image below.
Image generated by: DALL-E
Here is another article, “Can AI make you happy?“, which Google’s NotebookLM turned into an interview.
In this incredibly human-like interview, it is the emotions that make it so remarkable. Additionally, the AI interviewers very skilfully communicate some complex problems in simpler language, creating the feel of a chat over coffee. Click on the image to listen to this 13-minute podcast.
London, 25 September, 2024
Image generated by: DALL-E
________________________________________________________________________________________________________________________________________________________________________________________________________
Comments