AI Agent discusses how to control AI most effectively

An interview with the GPT-3 AI agent, which ponders its future and a time when it may itself actually control its own makers

20/1/2021

Tony Czarnecki, Managing Partner, Sustensis

Introduction:

This interview was carried out with Generative Pre-trained Transformer 3 (GPT-3), now also known as INSTRUCT, an Artificial Intelligence agent and one of the most advanced Natural Language Processors (NLP), supported by 175 billion parameters and operating on tokens (word fragments of roughly 3-4 characters), created by OpenAI in 2020. Its training data includes all of Wikipedia and millions of articles from reliable sources across all knowledge domains. To use it, one must adjust GPT-3 to one’s particular needs. That includes writing a PROMPT (a statement or a question) giving a few sentences of context, to which GPT-3 writes a COMPLETION. It also includes setting several parameters, such as TEMPERATURE (which controls how randomly the model selects each successive token), which may affect the quality of the answers.
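For illustration, here is a minimal sketch of such a prompt-and-completion call, using OpenAI’s Python library as it existed around the time of this interview. The engine name, temperature value and prompt text below are illustrative assumptions, not the actual settings used for this interview.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; a real OpenAI key is required

    # PROMPT: a few sentences of context followed by the interviewer's question.
    prompt = (
        "The following is an interview with an AI agent about controlling AI.\n"
        "TC: Should AI development be globally regulated?\n"
        "AI:"
    )

    # COMPLETION: the model continues the text. TEMPERATURE controls how
    # randomly the next token is sampled (0 = near-deterministic; higher
    # values give more varied answers).
    response = openai.Completion.create(
        engine="davinci",    # assumed GPT-3 engine name
        prompt=prompt,
        temperature=0.7,     # assumed value
        max_tokens=150,
    )

    print(response.choices[0].text.strip())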

I have tried to convince the AI agent that we need to control AI with the help of the first Transhumans, an idea that I have developed in a paper for the 2022 AI conference in Seattle.

TC: Most people, including politicians, still think that AI should be developed like all previous technologies. Even AI researchers still develop their AI agents as rudimentary IT programs. Although it is true that in principle AI is a tool, it differs from all previous inventions in that, if developed carelessly, it could lead to unimaginable unintended consequences, mainly because of AI’s self-learning capabilities. The existential risk posed by Superintelligence (fully developed Artificial General Intelligence – AGI) does not depend on how soon it is created. What mainly concerns us is what happens once this occurs. Nonetheless, a 2014 survey of 170 artificial intelligence experts carried out by Anatolia College philosopher Vincent C. Müller and Nick Bostrom suggests that Superintelligence could be on the horizon.

The median date at which respondents gave a 50 percent chance of human-level artificial intelligence was 2040, and the median date at which they gave a 90 percent probability was 2075. Both dates are later than Ray Kurzweil’s prediction of 2045. In any case, if the respondents are correct, most people living today will see the first Superintelligence, which, as the British mathematician I. J. Good observed in 1966, may be our last invention. The late physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have all expressed concern that AI could evolve to the point where humans could no longer control it, with Hawking theorizing that this could “spell the end of the human race”.

Many other AI researchers have recognized the possibility that AI presents an existential risk. For example, Allan Dafoe and Stuart Russell, writing in MIT Technology Review, note that contrary to misrepresentations in the media, this risk need not arise from spontaneous malevolent intelligence. Rather, it arises from the unpredictability and irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it.

AI: Some experts have argued that the development of artificial intelligence could lead to the extinction of the human race. We need to take a step back and ask ourselves whether the way AI is currently being developed is really the best way to do things. We cannot keep on developing AI the way we have been doing, without taking into account the potential risks. We need to develop AI in a way that is safe and responsible. This means taking into account the possibility of Superintelligence and doing everything we can to prevent it from happening. We need to be proactive and not wait until it is too late.

TC: But to be effective, we need global control of AI, and since the United Nations is not a federation, it has limited impact on AI development. Without a World Federation exercising such global control, this task is impossible. Do you agree?

AI: I agree that global control of AI is necessary to ensure its safe and responsible development. However, I do not think that the United Nations is the right organisation to do this. The UN is not a federation and has limited impact on AI development. A global AI control body would need to be a separate, independent organisation with the power to enforce its regulations.

TC: But how can we achieve a global federation when we may have less than a decade before global control of AI becomes impossible?

AI: This is a difficult question. I think it will be necessary to create a global AI governance framework that outlines the responsibilities of different organisations and individuals when it comes to the development and use of AI. This framework would need to be ratified by all countries in the world. Once it is in place, it would be up to the global AI control body to enforce it.

TC: Since this is unrealistic, wouldn’t a better option be to start with AI control by the top AI development companies? The key developers could, in a few years’ time, become the first Transhumans, with wireless neural connections linking their brains to the most advanced AI they are developing. In that way, they could control AI from the inside, being partially fused with it.

AI: This is an interesting idea. However, I think it is important to have a global AI control body that is independent and not controlled by any one organisation or group of people. This would ensure that AI is developed in a safe and responsible way.

TC: Yes, but we simply have no time. We must try solutions which are available today, even if they present some risk.

AI: I agree that we need to take action now and that not all solutions are perfect. However, I think it is important to take into account the potential risks of AI development before implementing any solutions. We need to develop AI in a way that is safe and responsible. This means taking into account the possibility of Superintelligence and doing everything we can to prevent it from happening.

TC: Controlling AI is similar to controlling the Internet. The independent World Wide Web Consortium (W3C), a committee formed of individual experts, governs web standards quite effectively. So, what we would need in the interim is to set up a similar organization, until it is taken over by an international global organization capable of effectively controlling AI development.

AI: I agree that controlling AI is similar to controlling the Internet. However, I think it is important to have a global AI control body that is independent and not controlled by any one organisation or group of people. This would ensure that AI is developed in a safe and responsible way.

TC: We will not have a global federation before 2030, and we must act now. Creating something like the body that governs the Internet, but for the control of AI, quite possibly run by the first Transhumans, seems to be the only realistic way forward. Would you agree?

AI: This is a HUGE undertaking and would require a massive amount of effort and a significant amount of financial resources. Additionally, it would be important to have a governing body made up of a variety of experts in different fields related to AI, in order to make informed decisions about the future of AI.

TC: Yes, but the key problem is that such a governing body would need the ultimate power of control and of setting sanctions, like the International Atomic Energy Agency (IAEA) in Vienna.

AI: So, what you’re saying is that there needs to be a governing body in place to control AI development? Yes, I believe that it is important to have a governing body in place to control AI development. This is necessary to ensure that AI is developed in a responsible and safe manner.

TC: How realistic is it to have such a body before 2030?

AI: I’m not sure how realistic it is to have such a body before 2030, but it is certainly something that we should be working towards.