Why do eminent AI scientists differ so much on AI being an existential threat?

Tony Czarnecki, Sustensis

London, 26/6/2023

Those who watched my presentation last week, Prevail or Fail – Preparing for AGI arrival by 2030 (https://www.youtube.com/watch?v=SnTBtmPf19M), may perhaps find some additional arguments for the overall direction to control the development of AGI or Superintelligence, so that it does not become an existential threat. That term is not liked by those focusing mainly on AI benefits, which are indeed already substantial and will shortly be immense.

But yesterday I found even more support for the assertion that AI is an existential threat which may materialize within a decade. It is the latest message from OpenAI’s leadership: Sam Altman, Greg Brockman and Ilya Sutskever, on ‘Governance of Superintelligence’ (https://openai.com/blog/governance-of-superintelligence). They say that ‘major governments around the world could set up a project that many current efforts become part of, or we could collectively agree’, with the backing power of a new organization similar to the International Atomic Energy Agency. This supports exactly what I have proposed in my presentation and in my recently published book, ‘Prevail or Fail: A Civilisational Shift to the World of Transhumans’, and specifically one of its 10 ‘Commandments’: develop One Superintelligence programme by One organization. OpenAI also suggest that ‘it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains’, leaving some room for interpretation as to what exactly they mean, but from the overall context it is most likely AGI or Superintelligence.

But what specific arguments do those who support the claim that AI poses an existential threat, and those who oppose it, actually make? A few days ago I found a partial answer in a superb Munk Debate on the motion “AI research and development poses an existential threat” (https://munkdebates.com/livestreamai).

The case for the motion, that AI is an existential threat, was made by Max Tegmark, an eminent physicist and futurist and professor at MIT, and by Yoshua Bengio, a winner of the Turing Award, the computing equivalent of the Nobel Prize.

The case against the motion was presented by Yann LeCun, also a winner of the Turing Award and head of AI at Meta (Facebook), and by Professor Melanie Mitchell of the Santa Fe Institute. The initial vote among the audience was 67% believing AI presents an existential risk and 33% saying it does not. After the debate the vote was 61% to 39% respectively.

I fully support the arguments presented by Bengio and Tegmark. They were overwhelming and clear, although they did not include the main argument that AI is not just another technology – it is a new INTELLIGENCE. The other weakness was that they did not emphasise strongly enough the danger of AI misinterpreting the goals it has been set, or setting its own goals. Finally, neither they nor the opposing side expanded on the problem of the impossibility of controlling potentially hundreds of AGIs; control could only be effective if just one Superintelligence were developed (a term mentioned many times in the debate, but never defined).

The opposing camp’s arguments were simply poor, or in some cases plainly untrue. That was especially so with the arguments presented by Melanie Mitchell. For a professor at the Santa Fe Institute, it was simply embarrassing.

The question is why people like Yann LeCun, an eminent AI scientist and practitioner, come to the view that AI does not present an existential threat. In my view, it may stem from the lack of precise definitions of what is really the subject matter of such debates, like this one organized by the renowned Munk Debates. In this case, the debate might have been more meaningful if these very important points had been clarified:

  • What is an existential threat? The definition used in the debate was from Nick Bostrom’s book ‘Superintelligence’, although Prof. Mitchell consistently ignored it and pretended it was just a risk like any other. For me, an existential threat is an event that would cause the total extinction of the human species. It could be man-made (anthropogenic), such as global warming or an artificial pandemic, but it could also be natural, such as a gamma-ray burst or an asteroid impact.

  • What is AGI or Superintelligence? This is very important because it represents a watershed moment, when AI’s intelligence will be far superior to ours in all aspects. The problem could be solved very quickly once there is a recognized organization, such as the IPCC for global warming, which would establish those definitions. I define AGI as follows: “Artificial General Intelligence is a self-learning intelligence, superior to humans, capable of solving any task far better than any human.” But my more pragmatic definition is simply this one: “Artificial General Intelligence is much smarter than any human.” Regarding Superintelligence, I would define it as follows: “Superintelligence is a single, self-organizing intelligence, with its own mind and goals, exceeding all human intelligence.”
  • When is AGI or Superintelligence likely to arrive? Of course, nobody knows exactly. But without specifying the most probable date of AGI’s emergence, any debate can lead to inconclusive results, which in turn does not motivate governments to do anything about it. That was precisely the case with global warming. For over 20 years nothing was done to mitigate that risk, until it was agreed at the Paris conference in 2015 that the most likely tipping-point date for a global temperature increase of 1.5°C is the year 2030. It seems that the only reason we had that panic meeting between President Biden and Prime Minister Sunak in early June was that the expected date of AGI’s arrival had suddenly advanced by a few decades, from about 2060 to about 2030. Unless the likely date of AGI’s emergence is agreed by a renowned international organization, governments and politicians will avoid committing themselves to any urgent action.
  • Ignoring well-established facts. One of the best examples is the impossibility of precisely defining what we mean when we expect AI to execute our orders or fulfil a certain goal. Prof. Stuart Russell, one of the best-known AI scientists, has come to the overall conclusion that teaching AI our values and setting it strict goals may not be the best way to control it. Why? Because what we say may not always be the same as what we mean, which illustrates the problem of interpreting our intentions. The best example comes from the Greek legend about Tithonus, the son of Laomedon, the king of Troy. When Eos (Aurora), the Goddess of Dawn, fell in love with Tithonus, she asked Zeus to grant Tithonus eternal life. Zeus consented. However, Eos forgot to ask Zeus to also grant him eternal youth, so her husband grew old and gradually withered. It has been proven time and again that it is impossible to specify human intentions in absolutely unambiguous terms, whether to other humans or to AI. Language itself is the key barrier. Therefore, when Yoshua Bengio raised that problem, the opposing side’s dismissal of it, given that they should have known it well, can only be interpreted as disingenuous.
  • Lack of understanding that AI is a new type of intelligence, with which we will soon compete, and not just a powerful tool. That was an argument which the proponents of the motion that AI is an existential threat did not use. The opposing team indirectly maintained that intelligence can only be embodied in a biological being, whereas biology is only one of the possible substrates for intelligence. From the perspective of the Universe’s evolution, it is rather unlikely that animals and humans are the only intelligent beings in the Universe.

But there is also one more explanation for denying that AI is an existential threat: the particular interests of the people taking part in a debate, although I am not saying that was the case in this Munk Debate. Scientists and AI developers are people like you and me. Why should they differ so much from politicians, lawyers or even policemen? After all, their jobs may depend on governments financing their research, and that may be linked to a particular government’s budgetary priorities. Servility was not invented in this century. However, to be fully objective, most people, though not true scientists, defend what they have invented rather than trying to find possible inconsistencies in their own theories. The late James Lovelock is perhaps a good example of what it means to be a true scientist: after 30 years of fighting against the building of nuclear reactors, he changed his view entirely, seeing global warming as the more significant threat.
