Sustensis is a not-for-profit Think Tank providing inspiration, suggestions, and solutions for a Civilisational Transition to coexistence with Superintelligence. We must recognize that the scale of the required changes represents a Civilisational Shift in the history of the human species. To successfully navigate this transition to a new civilisation, we must accept the magnitude and timeframe required for this transformation.
An important part of our work concerns Transhumanism. Transhumans are people who can make decisions and communicate with computers by thought alone, using Brain-Computer Interfaces (BCI). They will soon become far more intelligent than any human. Rapid technological progress will ultimately enable Transhumans to become a new species – Posthumans. Thus for us…
“Transhumanism is about Humanity’s transition to its coexistence with Superintelligence until humans evolve into a new species”.
Our key objective is to minimize the risk of developing a malicious Superintelligence. This requires global co-operation, which may be impossible without a deep reform of democracy. This area is supported by our linked website Euro-Agora, which presents a new type of democracy, combining the best features of direct and representative democracy into a Consensual Presidential Democracy. Furthermore, if Humanity’s transition is to be carried out swiftly and effectively, it must be carried out in a consensual rather than an antagonistic way. To facilitate that, we have developed an AI-enabled Consensual Debating approach, supported by the most advanced Artificial Intelligence Assistants such as ChatGPT, on our second subsidiary website – Consensus AI.
It is best to use the left sidebar for detailed navigation. It is arranged in a logical order, starting with Existential Risks, with sub-tabs explaining the proposed solutions in more detail. There is also a comprehensive SEARCH facility to help you access the required information.
You can access the content and leave comments without registering. However, to make standalone posts or articles, you need to register. The posts and entries are archived and continuously updated under the main tabs. This website observes the principles of the General Data Protection Regulation (GDPR), and by entering this site you agree to those principles, which you can view here.
We invite experts in various domains of knowledge to join us in making this website relevant to people interested in human progress in the context of evolving Artificial Intelligence, so that when it emerges as Superintelligence, it will become our friend rather than our foe. If you are a member of an NGO, you may be interested in how you can align and strengthen your agenda with some of the proposals put forward on this site.
LATEST NEWS AND ARTICLES
London, 25th November 2023
PREVAIL OR FAIL – A Civilisational Shift to Coexistence with Superintelligence – a new book by Tony Czarnecki, Sustensis
This book has been published at a time when a meaningful debate has finally started not just about AI regulation, i.e. its use, but about controlling its development. Only then can we hope that the final product – Superintelligence – becomes a benevolent entity helping us to navigate a transition to the World of Transhumans and ultimately facilitating our evolution into Posthumans.
The pace of change has never been as rapid as it is today, and the field where exponential progress is most apparent, particularly since 2022, is Artificial Intelligence (AI). The release of ChatGPT and similar AI Assistants marks a significant stride towards Artificial General Intelligence (AGI). In 2014, the futurist Ray Kurzweil predicted that AI would reach human-level intelligence by 2029, yet there is still no consensus on what AGI actually is.
However, regardless of the kind of AGI that emerges by about 2030, it is vital that we are able to control it before it starts controlling us. Such a loss of control over AGI will be a gradual process rather than an abrupt event. A complete loss of control will occur when we are unable to reverse AGI’s decisions. As a self-learning intelligence, AGI will outperform humans in any task or situation, including evading human oversight. If AI control proves ineffective, AGI might achieve this even before 2030.
Once AGI gets out of our control, it will resist any attempts to reimpose that control. Assuming its capabilities continue to improve exponentially, this may have catastrophic consequences. Therefore, it is imperative to explore all feasible options for ensuring human control over AI beyond 2030. This will enable us to better adapt to coexisting with a Superintelligence immensely more capable than humanity. To ensure the survival of humanity, we must fundamentally rethink the solutions needed for effective AI control. Just as we must take more significant action to address Global Warming, so we must adopt a similar level of commitment to controlling AI development.
We must recognize that the scale of the required changes represents a Civilizational Shift in the history of the human species. To successfully navigate this transition from our current state to a new civilization, we must accept the magnitude and timeframe required for this transformation. The solutions proposed in this book are presented as “The Ten Principles of a Civilizational Shift”:
- Adjust global AI governance to a civilisational shift, since AI is not just a new technology but an entirely new form of intelligence. AI regulation is important, but far more significant is AI development control. Both are part of AI governance, but they require different procedures and have a different impact on humanity’s future,
- Undertake a comprehensive reform of democracy, as it is a prerequisite for achieving effective AI development control and aligning it with human values. We must rebalance the power of governance between citizens and their representatives in parliament,
- Retain control over AI governance beyond 2030. While there is no scientific proof that AGI will emerge by 2030, just as there is no proof of Global Warming reaching a tipping point by that time, we must develop AI as if AGI will emerge within that timeframe,
- Create Global AI Regulation Authority (GAIRA) by transforming the Global Partnership on AI (GPAI). GAIRA should be responsible for regulating the global use of AI in society,
- Create Global AI Control Agency (GAICA) as a Consortium in the USA, since two-thirds of the AI sector is located there. Gradually expand it to engage non-US companies, including China,
- Create Global AI Company (GAICOM). This would be a Joint Venture company to consolidate the most advanced AI companies in a single organization. Effective control over AI development will be impossible if it remains dispersed among numerous companies,
- Create Superintelligence Development Programme (SUPROG), managed by GAICOM, to match China’s efforts in the AI sector,
- Create Global AI Governance Agency (GAIGA) under the mandate of the G7 Group. GAIGA would oversee both GAIRA, responsible for regulating the use of AI products and services, and the GAICA Consortium, responsible for AI development control,
- Create a de facto World Government initiated by the G7 Group, incorporating members from NATO, the European Union, the European Political Community, or the OECD,
- Create a Global Welfare State, which would also include the setting up of a Global Wealth Redistribution Fund, needed to mitigate the challenges posed by the transition to the World of Transhumans.
The implementation schedule in the book is based on three assumptions:
- Transhumans (humans with Brain-Computer Interfaces – BCI), with intelligence far superior to that of any human, will emerge by about 2027,
- AGI will arrive by 2030,
- Superintelligence will emerge by 2050.
David Wood chaired the pre-Summit event on Global AI Safety at Bletchley Park
David Wood, Sustensis board member, chaired the pre-Summit event on Global AI Safety at Bletchley Park on 31st October 2023. See some photos below. It was a prestigious event, with panelists Prof. Stuart Russell, an eminent AI scientist, Prof. Max Tegmark, chairman of the Future of Life Institute, and Connor Leahy, chairman of Conjecture, taking part in the Conference which started the day after.
For me it was one of the more optimistic events on how we may tackle AI as an existential risk, and that seemed to be the view of the participants as well. I have written an article on Medium and in some other publications assessing the Summit’s results. You can read “What next after the AI Safety Summit” here. I would appreciate your comments, which you can leave below.
Tony Czarnecki, Sustensis
Video from the ‘BLETCHLEY PARK AI SAFETY SUMMIT – A PREVIEW‘
23 September 2023
London Futurists organised an event chaired by its chairman, David Wood, on 23 September 2023. The subject was the forthcoming Global AI Safety Summit to be held at Bletchley Park, the site where the famous German Enigma cipher machine was unravelled by the genius British mathematician Alan Turing and his colleagues. After 80 years, our civilisation has to face another challenge – to control Artificial Intelligence, which may soon be far more intelligent than any human. David, who is a member of the Sustensis Advisory Board, invited five panelists, including Sustensis Managing Partner Tony Czarnecki, to imagine speaking for ten minutes to the politicians and other international leaders who would gather at the UK’s Bletchley Park on 1-2 November for the Global AI Safety Summit. What would they say?
The organisers of that Summit have announced the following five objectives “to make frontier AI safe, and to ensure nations and citizens globally can realise its benefits, now and in the future”:
- A shared understanding of the risks posed by frontier AI and the need for action
- A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
- Appropriate measures which individual organisations should take to increase frontier AI safety
- Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
- Showcase how ensuring the safe development of AI will enable AI to be used for good globally.
But are these actually the best objectives? And what do they really mean? Watch the video to find out.
Tony Czarnecki presented his views on ‘HOW TO RESPOND TO AI?’ to the World Federalist Movement
On 5th August 2023, Tony Czarnecki, the Managing Partner of Sustensis, presented the main topics from his new book ‘PREVAIL OR FAIL – A Civilisational Shift to the World of Transhumans’ to the World Federalist Movement. Click on the picture to view the video:
25th August 2023
An extraordinary book by Jose Cordeiro and David Wood
Sustensis has been mainly engaged in a civilisational transition enabled by Artificial Intelligence, which, if properly controlled, can deliver immense benefits to humans, including in the areas of medicine and life extension. David Wood, a member of the Sustensis Advisory Board, together with Jose Cordeiro, has published a book, ‘The Death of Death’, which is available in nearly 20 languages, including Spanish, English, Portuguese, French, and German. This is a significantly updated edition: as those who visit the Sustensis website know, the pace of change is now nearly exponential, and a lot has happened in the area of medicine since the first edition in 2018.
As the authors say, immortality has been humanity’s greatest dream since prehistory. Even children understand that aging is bad, and that death is the most horrible loss that can happen to them and their family. Fortunately, the children of the present generation may belong to the first generation of immortal humans. We must be aware that we live between the last mortal and the first immortal generation.
This book has been a journey of discovery for me in many ways, although I have been interested in the subject for many years. First of all, apart from reporting on the causes of aging and on discoveries in reversing an aging body to a younger state, it also covers various methods such as gene replacement and cryogenics. Furthermore, it discusses the societal implications of regenerative medicine for millions of people who will quite quickly become metabolically younger by a few decades. Finally, it examines the economics of this unprecedented development in human history, both in personal terms (how much it will cost) and in terms of its impact on the overall cost of maintaining a healthy society.
I hope you will enjoy reading the book as much as I have.
Tony Czarnecki, Managing Partner, Sustensis
London, 25th August 2023
Can Britain play a leading role in global AI governance?
London 3rd July 2023
I recently attended a panel debate at Chatham House, partially inspired by Tony Blair’s Institute for Global Change. He presented a case for Britain to play a major role in governing AI, although his own Institute, in its key policy document “A New National Purpose”, takes a rather dim view of the current state of AI in the UK. For example, the UK contributes only 1.3% of the aggregate computing power of the Top 500 supercomputers, less than Finland and Italy, and was the only leading nation to record a decline in AI publications last year.
A significant part of the debate at Chatham House focused on the forthcoming global AI summit to be held in London this autumn. While there are only a few details about the conference agenda, Prime Minister Rishi Sunak announced in early June, after a meeting with President Biden, that Britain aims to play a global role in AI, ensuring its immense benefits while prioritizing safety.
For Rishi Sunak, establishing the most powerful AI agency in London could be one of the milestones in restoring Britain’s position as a truly global power after Brexit. However, the challenge lies in the fact that there already exists an agency in Paris responsible for global AI regulation: the Global Partnership on AI (GPAI), with 46 member countries, including the US, the UK, and the EU. Therefore, the need to create yet another global AI agency is not immediately apparent. So, what role could Britain play in delivering safe AI?
Continue reading here...
Tony Czarnecki, Managing Partner, Sustensis
Why do eminent AI scientists differ so much on whether AI is an existential threat?
Those who watched my presentation last week on “Prevail or Fail – Preparing for AGI arrival by 2030” (https://www.youtube.com/watch?v=SnTBtmPf19M) may find some additional arguments for the overall direction of controlling the development of AGI or Superintelligence, so that it does not become an existential threat. That term is not liked by those focusing mainly on AI benefits, which are indeed already substantial and will shortly be immense. But yesterday I found even more support for the assertion that AI is an existential threat which may materialize within a decade. It is the latest message from OpenAI’s leadership – Sam Altman, Greg Brockman and Ilya Sutskever – on ‘Governance of Superintelligence’ (https://openai.com/blog/governance-of-superintelligence), who say that ‘major governments around the world could set up a project that many current efforts become part of, or we could collectively agree, with the backing power of a new organization’, comparing it to the International Atomic Energy Agency. This supports exactly what I have proposed in my presentation and in my recently published book, ‘Prevail or Fail: A Civilisational Shift to the World of Transhumans’, and specifically one of its ten ‘Commandments’: develop one Superintelligence programme run by one organization. OpenAI also suggests that ‘it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains’, leaving some room for interpretation as to what exactly they mean, but from the overall context it is most likely AGI or Superintelligence.
Continue reading here.
Tony Czarnecki, Sustensis