Why do eminent AI scientists differ so much on whether AI is an existential threat?

London, 26/6/2023

Those who watched my presentation last week, 'Prevail or Fail – Preparing for AGI Arrival by 2030' (https://www.youtube.com/watch?v=SnTBtmPf19M), may find some additional arguments for the overall direction of controlling the development of AGI or Superintelligence, so that it does not become an existential threat. That term is not liked by those who focus mainly on AI's benefits, which are indeed already substantial and will shortly be immense. But yesterday I found even more support for the assertion that AI is an existential threat that may materialize within a decade. It is the latest message from OpenAI's Board: Sam Altman, Greg Brockman and Ilya Sutskever, on 'Governance of Superintelligence' (https://openai.com/blog/governance-of-superintelligence), in which they say that 'major governments around the world could set up a project that many current efforts become part of, or we could collectively agree, with the backing power of a new organization' (akin to the International Atomic Energy Agency).

This supports exactly what I proposed in my presentation and in my recently published book, 'Prevail or Fail: A Civilisational Shift to the World of Transhumans', and specifically one of its 10 'Commandments': develop One Superintelligence programme by One organization. OpenAI also suggest that 'it's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains', leaving some room for interpretation as to what exactly they mean, but from the overall context it is most likely AGI or Superintelligence.

Continue reading here.

Tony Czarnecki, Sustensis