The solution to controlling AI is to change the way we think about it. Stuart Russell, an eminent AI scientist, suggests that "instead of building machines that exist to achieve their objectives, we should build them in a way where 'machines are beneficial to the extent that their actions can be expected to achieve our objectives.' This fix might seem small, but it is crucial. Machines that have our objectives as their only guiding principle will be necessarily uncertain about what these objectives are. Uncertainty about objectives might sound counterproductive, but it is actually an essential feature of safe intelligent systems. It implies that no matter how intelligent they become, machines will always defer to humans. They will ask permission when appropriate, they will accept correction, and, most importantly, they will allow themselves to be switched off – precisely because they want to avoid doing whatever it is that would give humans a reason to switch them off. Once the focus shifts from building machines that are 'intelligent' to ones that are 'beneficial,' controlling them will become a far easier feat."
This may seem a highly idealistic and somewhat unrealistic scenario, and it has already been criticised by some experts. However, what is important in Prof. Russell's view, in the context of Immature Superintelligence, is that he also confirms we have just about a decade to ensure control over AI's goals: "All this could take a decade to complete – and even then, regulations will be required to ensure provably safe systems are adopted while those that don't conform are retired. This won't be easy. But it's clear that this model must be in place before the abilities of A.I. systems exceed those of humans in the areas that matter."
This kind of risk, of humans losing control over AI, stems from the assumption that once a Superintelligence is preloaded with human values, it will follow them in exactly the way we intended. Such a risk is well illustrated by the Greek legend of Tithonus, the son of Laomedon, the king of Troy. When Eos (Aurora), the Goddess of Dawn, fell in love with Tithonus, she asked Zeus to grant Tithonus eternal life. Zeus consented. However, Eos forgot to ask Zeus to also grant him eternal youth, so her husband grew old and gradually withered.
Over the next decade we shall see humanoid robots in various roles more frequently. They will become assistants in your GP's surgery, police officers, teachers, household maids, hotel staff, etc., and their human form will be fused with the growing intelligence of current Personal Assistants such as Amazon's Alexa, Apple's Siri, Google's Assistant or Samsung's Bixby. One of the most effective ways in which the Immature Superintelligence will learn our values will be for those millions of AI assistants, autonomous cars, etc. to provide feedback to a central 'hub' on how they practise those values and what they experience. Once the feedback is compared with the master values, the hub will correct those values and re-test them in a real environment. In the end, this is what we do ourselves: we may consider raising an Immature Superintelligence as we would raise a child.
Tesla cars are the best example of how the 'values', behaviour and experience of individual vehicles are shared. Each Tesla car continuously reports unusual, often dangerous 'experiences' to Tesla's control centre, through which all other cars are updated to avoid such situations in the future. A similar system is used by Google's navigation. At the moment, the centres storing the values and behaviour expected from the various nodes are dispersed (Google's Waymo has a similar but, of course, separate centre). It is like developing individual, competing versions of 'Superintelligence'. That is why Humanity needs to develop a single Superintelligence, rather than competing versions, with one Centre, its 'brain', for storing those values, behaviours and experiences. Until the time we lose control over Superintelligence, we will be able to amend its set of values, provided that the humans who do it on our behalf have the authority of a planetary organization. But that will only happen when all Superpowers stop dreaming about achieving AI supremacy to conquer the world with one super cyber-attack.
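The feedback loop described above – individual nodes reporting their 'experience' to a centre, which corrects the master values and rebroadcasts them to the whole fleet – can be sketched in a few lines of Python. Everything here is hypothetical and purely illustrative (the class names, and the toy representation of a 'value' as a rule/outcome pair); it is not Tesla's or Google's actual protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A single node (car, assistant, robot) holding a local copy of the shared values."""
    agent_id: str
    values: dict = field(default_factory=dict)

    def report(self, hub, experience):
        """Send an unusual 'experience' (rule, observed outcome) to the central hub."""
        hub.receive(self.agent_id, experience)

class FleetHub:
    """Central 'brain' that collects experiences and pushes corrected values to every node."""
    def __init__(self, master_values):
        self.master_values = dict(master_values)
        self.agents = []
        self.incident_log = []

    def register(self, agent):
        # Each new node starts from the current master values.
        agent.values = dict(self.master_values)
        self.agents.append(agent)

    def receive(self, agent_id, experience):
        self.incident_log.append((agent_id, experience))
        # Compare the reported behaviour with the master values; if the
        # experience reveals a gap, amend the rule and rebroadcast it.
        rule, outcome = experience
        if self.master_values.get(rule) != outcome:
            self.master_values[rule] = outcome
            self.broadcast()

    def broadcast(self):
        for agent in self.agents:
            agent.values = dict(self.master_values)

# Usage: one car's dangerous experience updates the whole fleet.
hub = FleetHub({"hard_braking_on_ice": "allowed"})
a, b = Agent("car-1"), Agent("car-2")
hub.register(a)
hub.register(b)
a.report(hub, ("hard_braking_on_ice", "avoid"))
print(b.values["hard_braking_on_ice"])  # car-2 learned from car-1's experience
```

The point of the sketch is the topology, not the detail: a single hub holding one canonical set of values is what turns many individual 'experiences' into one shared body of behaviour.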
We should also remember that all those hundreds of millions of assistants are already becoming fast, self-learning agents, and their accumulated knowledge is being stored in a central repository on the network – a kind of early 'pool of intelligence' to which each of these agents may have full access, provided it has the required access rights. As their intelligence and overall presence grow, so will the risk of their intended or erroneous actions, and of the intrusion into our private lives that has already started to shock us. Therefore, in the next few years, we need to be prepared for serious incidents, linked initially to malfunctioning self-learning robots and later on to malicious actions by some AI agents.
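Such an access-gated 'pool of intelligence' can be illustrated with a minimal sketch. The names and the toy permission model below are entirely hypothetical; real systems would use far richer policies, but the principle is the same: any agent may contribute, while reading is restricted by the agent's access rights.

```python
class IntelligencePool:
    """A shared knowledge repository gated by per-agent access rights (illustrative only)."""
    def __init__(self):
        self._knowledge = {}   # topic -> list of (contributor, fact) entries
        self._rights = {}      # agent_id -> set of topics it may read

    def grant(self, agent_id, topic):
        self._rights.setdefault(agent_id, set()).add(topic)

    def contribute(self, agent_id, topic, fact):
        # Any agent may add to the pool; reading is what is restricted.
        self._knowledge.setdefault(topic, []).append((agent_id, fact))

    def query(self, agent_id, topic):
        if topic not in self._rights.get(agent_id, set()):
            raise PermissionError(f"{agent_id} has no access to '{topic}'")
        return self._knowledge.get(topic, [])

pool = IntelligencePool()
pool.grant("assistant-7", "traffic")
pool.contribute("car-42", "traffic", "road A1 blocked")
print(pool.query("assistant-7", "traffic"))   # granted: returns the shared facts
# pool.query("assistant-7", "medical")        # would raise PermissionError
```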
If such incidents, e.g. the malicious firing of nuclear rockets, coincide with other risks such as pandemics or local conventional wars, the impact on reforming democracy may be significant. They could stall any ongoing programmes to reform democracy or to build a planetary organization because of the ensuing chaos – a Global Disorder.
On the positive side, such incidents may mobilize nations to deliver a new model of democracy faster than might otherwise have been the case, in order to reduce various existential risks. Malicious incidents, or significant material damage arising from cyber wars, may lead to street protests far exceeding those we experienced in the summer of 2019, organized by Extinction Rebellion. Whatever one might think about the form of those protests, which inconvenienced large numbers of people worldwide, they also brought to the fore a very important message: we are all one human civilization and this is our only planet.
This kind of thinking is absolutely necessary to change the perception, held by most people, that we can live a cosy life within our own borders and enjoy our freedom and sovereignty. This is no longer possible, and that is why protests like these can also help people see the need to treat the threat of Immature Superintelligence even more seriously. But for such protests to take place, people must be personally affected in some way by a malevolent act of AI. That should accelerate countries' efforts to work more closely together on delivering a friendly AI. And that is why it can lead to some fundamental positive shifts in global politics, triggered by what I call the 'AI control dilemma'.