Why do we need one global AI programme?

Tony Czarnecki, Managing Partner, Sustensis

(This is an extract from the author’s book: “Prevail or Fail – a Civilisational Transition to Coexistence with Superintelligence”.)

London, 30/11/2023

The Chinese ‘New Generation Artificial Intelligence Development Plan’ (2017), which I have mentioned earlier, might be a template for the AI programme outlined in this chapter. It confirms why an even stricter approach to developing AI, based on an Open-Source policy, is needed in view of OpenAI’s current closed-shop policy. Contrary to its name, OpenAI broke its commitment to keep all its research open source when it released ChatGPT, and instead keeps its algorithms and key research tightly closed to external scrutiny. On the other hand, the key difference between the Chinese programme and the Programme proposed here is that the former is just for China, whereas the programme proposed here is to be global. At least that is what it should be, if AI control is to be most effective. For that, we would need a World Government with sufficiently strong executive powers. But, as I have already mentioned several times, that is of course impossible, given the timescales. So, what is the solution?

First of all, we must accept that most of the decisions and solutions applied may not be perfect. We must also accept the 80/20 rule or, as some people say, working in a ‘quick and dirty’ mode. Although this will unavoidably create some errors, this imperfect approach is the only feasible way forward in retaining control over AI for as long as possible. Within about a decade, we would need to apply dozens of control mechanisms simultaneously to increase the chances of success. That success would be the release of a maturing Superintelligence only after it has been primed with the Universal Values of Humanity, and when it understands what it means, and perhaps even what it feels, to be human.

The imperfect solutions relate primarily to only partially global control. Some countries will simply not accept the quite drastic measures that may need to be imposed, such as a significant loss of their sovereignty. That will reduce the effectiveness of the controlling measures. We can only minimize the risk of creating a malevolent Superintelligence by applying various methods of control simultaneously and extending the control period far beyond what would otherwise be possible.

Whatever we do, this decade may decide whether the Homo sapiens species becomes extinct or gradually evolves into a new, inorganic species. It is still up to us to determine, to some degree, the most likely outcome. If we create a benevolent Superintelligence, we would like it to inherit the best human traits, so that initially we would evolve with it, and later on, within it.

The only way in which we may effectively control AI research and development is to consolidate all advanced AI research and already deployed projects into one large programme – the Superintelligence Development Programme (SUPROG). I need to remind you here of the difference between Artificial General Intelligence (AGI) and Superintelligence. AGI is a self-learning intelligence capable of solving any task better than any human. Superintelligence, on the other hand, is a single networked, self-organizing entity, with its own mind and goals, exceeding all human intelligence. The first breakthrough will happen when AI reaches human-level intelligence and becomes AGI. This could even be an individual AI Assistant.

By the end of this decade, we may have thousands of them, each as intelligent as any one of us. However, within a few months, those AI Assistants may self-connect to each other, rapidly creating a global network, unless we are able to restrict them. This will be an Immature Superintelligence, which we may be unable to control because it will outsmart us whatever we do. Therefore, we must do everything possible to ensure that once AGI or an Immature Superintelligence is outside of our control, its goals and behaviour are aligned with the Universal Values of Humanity and follow our preferences rather than its own. However, the only way in which such an Immature Superintelligence may still remain under human control is to develop it as one Super Programme – hence SUPROG. That would ensure the future Superintelligence becomes our partner rather than an evil entity which, through error, competition for common resources, or malicious behaviour, might cause human extinction.

The current situation shows what may happen if there are hundreds, if not thousands, of individual advanced AI projects developed by different companies. ChatGPT was released on 30th November 2022, but by May 2023 there were at least 10 AI Assistants of similar competence. One of them, Claude, developed by Anthropic, has outperformed ChatGPT by being at least 10 times more powerful, measured by the contextual information it can process. ChatGPT can only process an input (prompt) of about 2,000 words, or 3–4 pages. Claude can process about 77,000 words, which is the size of this book. Anthropic also trains its AI using a much simpler, less expensive, and more effective learning process.
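As a rough sanity check on these figures, a model’s context window (stated in tokens) can be converted to an approximate word count. The sketch below assumes a common rule of thumb of about 0.75 English words per token, and the token limits used are assumptions based on the models’ publicly stated context windows at the time, not authoritative figures:

```python
# Rough conversion of model context windows (in tokens) to word counts.
# Both the token limits and the 0.75 words-per-token ratio are
# approximations, used here only to sanity-check the figures in the text.
WORDS_PER_TOKEN = 0.75      # common rule of thumb for English text
WORDS_PER_PAGE = 500        # ~500 words per single-spaced page

context_windows = {
    "ChatGPT (GPT-3.5, at launch)": 4_096,   # assumed token limit
    "Claude (100k-token version)": 100_000,  # assumed token limit
}

for model, tokens in context_windows.items():
    words = int(tokens * WORDS_PER_TOKEN)
    pages = words // WORDS_PER_PAGE
    print(f"{model}: ~{words:,} words (~{pages} pages)")
```

On this rough estimate, Claude’s 100,000-token window corresponds to about 75,000 words, close to the ~77,000 words cited above, while ChatGPT’s original window corresponds to roughly 3,000 words.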

Effective AI control must from the very start focus on the AI’s goals and behaviour, including knowing how it has arrived at any decision or solution – so-called explainability. It must be the ultimate decision centre, similar to a computer’s BIOS, which sits beneath every operating system, and it may be achieved via the MASTER PLATE proposed in this book. This is where the Universal Values of Humanity will be stored, along with the AI’s goals and human preferences, continuously updated as the maturing AGI experiences the world of humans. This should sit at the top of SUPROG’s hierarchical structure, which will consist of hundreds of projects, research labs and even manufacturing facilities.

The delivery of individual projects would be the responsibility of GAICOM. Here is a list of companies aligned with their AI expertise (in reality, the companies and their allocated functions may differ):

  • Amazon – main distributor
  • IBM – Quantum Computing and super large computers
  • NVIDIA – AI and graphics processors
  • Intel – neuromorphic processors
  • META – Metaverse
  • Microsoft/Google/Apple – a brand new ‘AI Operating System’
  • Google – development of AGI and Superintelligence. It might be the Programme’s ACCELERATOR
  • DeepMind & OpenAI – AI control, AI antivirus, eliminating ‘black boxes’ and ensuring the AI Mind’s explainability. It might be the Programme’s BRAKING PEDAL
  • Neuralink/TESLA – household robotics and neurosurgery
  • Boston Dynamics – industrial robotics (Volvo, ABB and others)

This is just an approximation of what the potential major projects might be, which companies might deliver them, and in what they may specialize. There will probably be a few hundred narrowly specialized companies among the members of GAICOM. Each of them may be working on dozens of projects. All of these companies’ projects and deliverables will have to be integrated within SUPROG. This is a truly mammoth task and perhaps the largest programme ever created by humans. It will be far more complex and important than NASA’s Moon landing programme or the Manhattan Project.

I assume that initially only US companies and projects would join GAICOM and SUPROG, mainly for legalistic reasons. In any case, they constitute more than half of all such projects and companies in the world. However, non-US companies could join at any time, perhaps with the assistance of the Global AI Governance Agency (GAIGA) – see the next chapter. When GAIGA takes over the supervision of FMF from the US government, whatever legal arrangements have been established by then for the US companies will have to be made compatible with international law.