London, 23 September 2023. London Futurists organised an event chaired by its chairman, David Wood. The subject was the forthcoming Global AI Safety Summit to be held at Bletchley Park, the site where the codes of the famous German ENIGMA cipher machine were broken by the brilliant British mathematician Alan Turing and his colleagues. After …
Continue reading ‘BLETCHLEY PARK AI SAFETY SUMMIT – A PREVIEW’
Author: T. Czarnecki
‘HOW TO RESPOND TO AI?’ – for the World Federalist Movement
On 5th August 2023, Tony Czarnecki, the Managing Partner of Sustensis, presented the main topics from his new book ‘PREVAIL OR FAIL – A Civilisational Shift to the World of Transhumans’ to the World Federalist Movement. Click on the picture to view the video:
The Death of Death
25th August 2023. ‘The scientific possibility of physical immortality and its moral defense’ – an extraordinary book by Jose Cordeiro and David Wood. Sustensis has been mainly engaged in a civilisational transition enabled by Artificial Intelligence, which, if properly controlled, can deliver immense blessings to humans, including in the areas of medicine and life extension. One …
Continue reading The Death of Death
Can Britain play a leading role in global AI governance?
London, 3rd July 2023. I recently attended a panel debate at Chatham House, partly inspired by Tony Blair’s Institute on Global Governance. He presented a case for Britain to play a major role in governing AI, although his own Institute, in its key policy document “A New National Purpose”, gives a rather dim …
Continue reading Can Britain play a leading role in global AI governance?
Why do eminent AI scientists differ so much on whether AI is an existential threat?
London, 26/6/2023. Those who watched my presentation last week on “Prevail or Fail – Preparing for AGI Arrival by 2030” (https://www.youtube.com/watch?v=SnTBtmPf19M) may perhaps find some additional arguments for the overall direction of controlling the development of AGI or Superintelligence, so that it does not become an existential threat. That term is not liked by those focusing mainly on …
Continue reading Why do eminent AI scientists differ so much on whether AI is an existential threat?
PREVAIL OR FAIL – A new book by Tony Czarnecki, Sustensis
London, 15th June 2023. This book has been published at a time when a meaningful debate has finally started not just about AI regulation, i.e. its use, but about controlling its development. Only then can we hope that the final product, Superintelligence, becomes a benevolent entity helping us to navigate a transition to the …
Continue reading PREVAIL OR FAIL – A new book by Tony Czarnecki, Sustensis
VIDEO on ‘PREVAIL OR FAIL’ – Preparing for AGI Arrival by 2030
London, 22 June 2023. This video captures the debate on the subject covered in Tony Czarnecki’s book: PREVAIL OR FAIL – A Civilisational Shift to the World of Transhumans – Preparing for AGI Arrival by 2030.
How Might Transhumans Control Superintelligence?
A podcast released on 9th January 2023. This is a podcast interview with Peter Scott, the author of several books on AI, such as ‘Artificial Intelligence and You’, a broadcaster and keynote speaker. The interview is in two parts: Part 1 and Part 2.
Why has ChatGPT made the need for AI control even more urgent?
Tony Czarnecki, Managing Partner, Sustensis, London, 23/12/2022. The incredible pace of discoveries and innovations in AI in 2022 raises the question of how long humans will remain in control of AI and, indirectly, of their own future. It seems quite likely that we are approaching a pivotal moment when AI will indirectly control many decisions …
Continue reading Why has ChatGPT made the need for AI control even more urgent?
Pause Giant AI Experiments: An Open Letter
London, 31st March 2023. Tony Czarnecki, Managing Partner of Sustensis, has signed the open letter ‘Pause Giant AI Experiments’, published by the Future of Life Institute, Oxford, on 29/3/2023. So far, the letter has been signed by nearly 2,000 eminent AI scientists.