Will 2030 be a tipping point for Artificial Intelligence control?

Last Saturday, 30th April 2022, I gave a presentation for London Futurists on the subject shown in the image below. You can view the presentation by clicking on the image, and the slides by clicking here.

After the presentation there were quite a few comments, and I quote some of them below. One of the comments was from Prof. Kim Solez from the University of Alberta in Canada. Here is a link to his short video: https://www.youtube.com/watch?v=Tg7gK1tNolw.

I responded to it as follows:

2/5/2022

‘Thank you, Kim, for your comments. Very briefly, I would agree that most people, and politicians first of all, would fight to prevent Transhumans from ever taking charge of the world. The idea of creating a super class of hyper-intelligent and hyper-capable people (Transhumans), set apart from the rest of mere humans, seems abhorrent to say the least. But the other alternative is my first scenario – letting the world continue mismanaging its own future until an unavoidable extinction.

I entirely agree with your idea of the need for humans and AI to co-operate – that is the Mission Statement of Sustensis www.sustensis.co.uk – ‘Inspirations for Humanity’s transition to coexistence with Superintelligence’. However, it only covers an initial phase, say the next decade or so. That is why I have proposed the second scenario, in which a Global AI Governance Agency, managed by some kind of World Government, would ‘mature’ AI over a certain period before Superintelligence (AGI) starts controlling/managing us. I consider the idea that humans can control Superintelligence indefinitely to be logically inconsistent. Why would a superintelligent agent, millions of times more capable than all humans, allow itself to be controlled by humans who would not even be capable of understanding most of its suggestions for making certain decisions? So, the whole point of controlling AI is not to ensure that humans will be able to maintain such control for ever, but that they are able to control the TRANSITION process (the maturing of AI) up to the time when we will be managed and controlled by Superintelligence. To ensure that the product of such a transition is as benevolent to humans as possible, three conditions should be met:

  1. The maturing process should be global to be effective
  2. The maturing process should start as early as possible
  3. The maturing process should continue for as long as possible, to give us time to prepare our own evolution

Since the first and second conditions are unrealistic or highly improbable (we need the process to start in the next 2-3 years, and it will certainly not be global), the third scenario, which I have proposed – managing the Transition by Transhumans (initially very limited in their capabilities) – seems at least an option worth trying. Since we do not have a World Government, the only realistic option is for a consortium of the top AI-developing companies to create an independent Global AI Agency (see details in my presentation – https://www.youtube.com/watch?v=2wQ_XLwF6k4). However, as I mentioned, I would hope (and it is only some hope) that within a decade we will have a de facto World Government (meaning some countries, like China, will not be included). It might take control of the last leg of the transition by selecting thousands, or tens of thousands, of wirelessly connected Transhumans, who would serve both as governors of the Superintelligence, gradually becoming more integrated with it, and as managers of our daily affairs, as proposed by the elected representatives of most of the nations. Realistically, the role of the elected World Government will be a rubber-stamping exercise, because not allowing the execution of decisions proposed by Transhumans would only rarely be in the interest of the world.

Therefore, having Transhumans initially control the maturing Superintelligence, leading ultimately to our transition to a new species, seems a better, although also perilous, way. A lot depends on the selection process for Transhumans, but any wrongdoing could be minimized by a sufficiently large number of them – say, tens of thousands – connected wirelessly and making crucial decisions by majority voting. However, there is no resilient method of ensuring that the Superintelligence which humans will create will be a benevolent one. Combining all the methods, including the very important first stage of priming it with the best human values and preferences, seems to be the least risky way forward.’
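As an aside on the majority-voting point above: the intuition that a large pool of independent voters suppresses individual wrongdoing is essentially the Condorcet jury theorem. Below is a minimal sketch in Python – my illustration, not part of the original exchange – under the assumption that each voter is independently correct with probability p > 0.5; the value p = 0.55 and the voter-pool sizes are purely illustrative.

```python
# Condorcet jury theorem sketch: if each of n voters is independently
# correct with probability p > 0.5, the probability that a simple
# majority is correct approaches 1 as n grows. All figures here are
# illustrative assumptions, not claims from the original post.

from math import exp, lgamma, log

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters,
    each correct with probability p, makes the correct decision."""
    log_p, log_q = log(p), log(1.0 - p)
    threshold = n // 2 + 1  # smallest vote count forming a strict majority
    total = 0.0
    for k in range(threshold, n + 1):
        # log of the binomial term C(n, k) * p^k * (1 - p)^(n - k),
        # computed via lgamma to avoid overflow for large n
        log_term = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                    + k * log_p + (n - k) * log_q)
        total += exp(log_term)  # negligible terms underflow harmlessly to 0.0
    return total

# Illustrative voter pool sizes (odd, so a strict majority always exists):
for n in (11, 101, 1001, 10001):
    print(f"n = {n:>5}: P(majority correct) = {majority_correct(n, 0.55):.6f}")
```

Under these assumptions the printed probability climbs towards 1 as the pool grows – from roughly 0.6 with 11 voters to effectively 1.0 with 10,001 – which is the effect the response relies on. The caveat, of course, is the independence assumption: wirelessly connected voters who influence one another, or who share a biased selection process, would weaken this guarantee.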

I will follow up on his response. I will also post this reply on the London Futurists site shortly.

Regards

Tony

Some other interesting comments from the London Futurists site:

_____________________

Ted H.

If that software is aware of itself; has a model of reality that includes itself and other agents, modelled to some reasonable degree of fidelity; is capable of abstraction from observations; is capable of generating multiple potential future scenarios, then choosing amongst them with some combination of previously chosen valences and randomness; and is generally cooperative with other entities and takes reasonable care to ensure the security and freedoms of other agents; then it deserves the label of person.
To do all of that, it has to have some degree of embodiment, to be able to make things happen in reality.

David W.

“If that software… then it deserves the label of person”.

But that’s not the same as saying it has sentience. Writers such as Stanislas Dehaene, Mark Solms, and Anil Seth have all recently made good cases that the kinds of AI we are creating will *not* have sentience, despite having all the characteristics you list.

And even if it did have sentience, that’s still no reason for us to stop considering whether we need to control it. We would need to look out for violations of the principle you also highlight, namely “is generally cooperative with other entities and takes reasonable care to ensure the security and freedoms of other agents”.

Ted H.

Totally agree, David – and that also applies to all people, and to all levels of agency that they manage to create.

Don’t think we are quite there yet – today – politically or ethically speaking.

We are at least as dangerous to ourselves, and to AI, as an insufficiently aware AI would be to us.

We need to get that sorted – soon!!!

David W.

Yes, people are more dangerous than AI at present. But that could change as AI gains more capabilities – especially if these capabilities surge in power over a relatively short period of time.

>”We need to get that sorted – soon!!!”

Yes! Because otherwise, with a lack of effective global cooperation, we’ll either destroy ourselves due to human misdemeanours, or else we’ll be destroyed by AGI misdemeanours once AGI arrives.