30th April 2018

Intelligence and Superintelligence

How could we measure intelligence? The current “gold standard” of measuring intelligence is the Intelligence Quotient (IQ). Wikipedia defines it as: “… a total score derived from several standardized tests designed to assess human intelligence… Historically, IQ is a score obtained by dividing a person’s mental age score, obtained by administering an intelligence test, by the person’s chronological age, both expressed in terms of years and months. The resulting fraction is multiplied by 100 to obtain the IQ score. When current IQ tests were developed, the median raw score of the norming sample was defined as IQ 100, and each standard deviation (SD) up or down was defined as 15 IQ points greater or less. By this definition, approximately two-thirds of the population scores between IQ 85 and IQ 115. About 5 percent of the population scores above 125, and 5 percent below 75”.
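The two definitions in the quote above – the historical ratio IQ and the modern deviation IQ – amount to two simple formulas, sketched below in Python (the example values are hypothetical, chosen only to illustrate the arithmetic):

```python
# Illustrative only: the two IQ formulas described in the quote above.

def ratio_iq(mental_age_months, chronological_age_months):
    """Historical 'ratio IQ': mental age / chronological age * 100."""
    return 100.0 * mental_age_months / chronological_age_months

def deviation_iq(raw_score, norm_mean, norm_sd):
    """Modern 'deviation IQ': 100 plus 15 points per standard deviation."""
    z = (raw_score - norm_mean) / norm_sd
    return 100.0 + 15.0 * z

# A 10-year-old (120 months) with a mental age of 12 (144 months):
print(ratio_iq(144, 120))        # ratio IQ of 120.0
# A raw score one SD above the norming sample's mean:
print(deviation_iq(65, 50, 15))  # deviation IQ of 115.0
```

The deviation formula is why roughly two-thirds of scores fall between 85 and 115: that is simply the range within one standard deviation of the mean.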

Psychologist Howard Gardner’s theory of multiple intelligences states that intelligence can be broken down into 8 distinct components: logical, spatial, linguistic, interpersonal, naturalist, kinaesthetic, musical and intrapersonal. Thus he believes that standard IQ and psychometric tests focus on certain components, such as the logical and linguistic ones, while completely ignoring others which may be equally important.

Such criticism makes IQ tests not entirely reliable for assessing human intelligence. However, IQ tests are de facto the only widely used standard and, despite their weaknesses, they offer some valuable insights into the intelligence and capabilities of millions of people. We may need to improve these tests, but in the meantime they are already being applied to intelligent assistants, such as Siri or Google’s Assistant.

On 3rd October 2017 a test of several AI assistants was published by three Chinese researchers – Feng Liu, Yong Shi, and Ying Liu – primarily based on exams carried out during 2016. According to the researchers, Google’s AI Assistant rating of 47.3 is slightly below a six-year-old human’s IQ of 55.5. However, it was more than double Siri’s IQ of 23.9. Siri is also behind Microsoft’s Bing and Baidu, which have IQs of 31.98 and 32.92 respectively. All the AI IQs are considerably lower than the mean for an 18-year-old, which is 97.

The researchers say that: “The results so far indicate that the artificial-intelligence systems produced by Google, Baidu, and others have significantly improved over the past two years but still have certain gaps as compared with even a six-year-old child” (Feng Liu, 2017). They grade AI intelligence into six levels, based on a model that combines AI and human traits around four areas of data: “input, output, mastery, and creation”:

  • A first-grade system can exchange some information with people
  • A second-grade system can manage the interface to objects such as TVs or washing machines – the so-called Internet of Things (IoT)
  • The third grade includes computer systems and mobile phones, which are programmed and can be upgraded; that would include AlphaGo from Google’s DeepMind
  • The fourth grade includes Google Brain, Baidu Brain, and the EU’s RoboEarth robots, because they can communicate and be managed using cloud data
  • Fifth-grade intelligence is at a human level
  • A sixth-grade system will have the capability to “continuously innovate itself and create new knowledge, with I/O ability, knowledge mastery, and application ability that all approach infinite values as time goes on” (Feng Liu, 2017).

The difference between the grades of AI seems quite significant. But once AI reaches the sixth grade, it will improve exponentially until Superintelligence leads to the Technological Singularity.

How soon can it happen? There is no agreement on when Superintelligence may arrive. In May 2017, the Future of Humanity Institute at Oxford University asked 1,634 researchers who had published papers at the 2015 NIPS and ICML conferences (the two leading machine learning conferences) to complete a survey on when AI will outperform humans in various areas. 352 researchers responded, and their aggregate view is as follows (Asian respondents expected these dates much sooner than North Americans did) (Katja Grace, 30/05/2017):

  • By 2024 – translating languages
  • By 2026 – writing high school essays
  • By 2027 – driving trucks
  • By 2031 – working in retail
  • By 2049 – writing a bestselling book
  • By 2053 – working as a surgeon
  • By 2062 – 50% chance of AI outperforming humans in all tasks

Perhaps the best-known predictions are those made by Ray Kurzweil, one of the most prominent futurists, whose predictions over the previous 30 years have proved largely correct. In an interview with Futurism on 3rd October 2017, Kurzweil confirmed that: “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human level of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion-fold by merging with the intelligence we have created.” For Ray Kurzweil, the process towards this Singularity has already begun (Reedy, 2017). If you think that the Singularity cannot happen by 2045, then these two recent inventions from Google should make you think.

The first was the invention announced by Google at its I/O conference in May 2017, which gives a considerable boost to those impatiently awaiting the arrival of Superintelligence: its latest invention – AutoML (Auto Machine Learning). The Google team came up with machine learning software that can create self-learning code. The system runs thousands of simulations to determine which areas of the code can be improved. It then makes the changes and continues the process until its goal is reached. The result is that AutoML is better at coding machine-learning systems than the researchers who made it. In an image recognition task it reached a record-high 82 percent accuracy. Even in some of the most complex AI tasks, its self-created code is superior to that of humans (Green, 2017). Google’s AutoML could develop a superior image recognition system within a few weeks, something that would have taken the most brilliant AI scientists months. But in December 2017, just six months later, the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, developed an AI system that can generate neural networks as good as, if not better than, any developed by a human – in less than a day (Rejcek, 2018).
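The loop at the heart of such systems – propose a model configuration, evaluate it, keep the best, repeat – can be sketched at a toy scale. The code below is purely illustrative and is not Google’s AutoML: the search space and the scoring function are made-up stand-ins for training and validating a real neural network.

```python
# Toy sketch of an AutoML-style search loop: random search over
# hypothetical model configurations. "evaluate" is a stand-in for
# actually training a network and measuring validation accuracy.

import random

random.seed(0)

SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [32, 64, 128, 256],
    "learning_rate": [0.1, 0.01, 0.001],
}

def evaluate(config):
    """Pretend deeper/wider models with a moderate learning rate do better."""
    score = 0.5 + 0.01 * config["layers"] + 0.0005 * config["width"]
    if config["learning_rate"] == 0.01:
        score += 0.05
    return min(score, 0.99)

best_config, best_score = None, -1.0
for _ in range(50):  # a real system would run thousands of trials
    candidate = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    score = evaluate(candidate)
    if score > best_score:
        best_config, best_score = candidate, score

print(best_config, round(best_score, 3))
```

Production systems replace the random proposals with a learned controller (e.g. a reinforcement-learning agent), but the outer improve-and-repeat structure is the same.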

The second invention, announced in the journal “Nature” on 19/10/2017, will probably be viewed in the future as a very significant milestone on the path to Superintelligence and the Singularity. Barely a year after Google’s AlphaGo beat Lee Sedol, a Grandmaster of the Chinese game Go – itself considered a very important breakthrough in AI – a vastly superior AI agent called AlphaGo Zero (AGZ) beat the original AlphaGo by 100 games to 0. The original AlphaGo had the benefit of learning from thousands of Go games previously played against human players and against itself. AGZ, on the other hand, received no help from its human handlers and had access to absolutely nothing aside from the rules of the game. Using “reinforcement learning”, AGZ played 4.9 million games against itself, starting from a very basic level of play without any supervision or use of human data (AlphaGo, by comparison, had 30 million games). This self-learning capability allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from past experience. In effect, AGZ was its own teacher. A successor system, AlphaZero, then took just 4 hours to teach itself chess to such a degree that it beat the world’s strongest chess program. This technique is so powerful because it is no longer constrained by the limits of human knowledge; instead, it can learn from a “clean slate”.
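The self-play idea can be illustrated at a toy scale. The sketch below is vastly simpler than AGZ (a lookup table instead of a neural network, no tree search) but follows the same principle: starting from nothing but the rules, an agent learns a tiny stone-taking game purely by playing against itself and propagating the win/loss outcome back through its own moves.

```python
# Toy self-play reinforcement learning. Game: a pile of stones, each
# player removes 1-3 per turn, whoever takes the last stone wins.
# The agent knows only the rules; values are learned from self-play.

import random

random.seed(1)
PILE, MOVES = 12, (1, 2, 3)
Q = {}  # (pile_size, move) -> learned value for the player to move

def legal(pile):
    return [m for m in MOVES if m <= pile]

def choose(pile, epsilon):
    """Epsilon-greedy: mostly the best-known move, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda m: Q.get((pile, m), 0.0))

def self_play_episode(epsilon=0.2, lr=0.1):
    pile, history = PILE, []
    while pile > 0:
        move = choose(pile, epsilon)
        history.append((pile, move))
        pile -= move
    # The player who made the last move won; walking back through the
    # game, the reward flips sign for each alternating player.
    reward = 1.0
    for state_move in reversed(history):
        old = Q.get(state_move, 0.0)
        Q[state_move] = old + lr * (reward - old)
        reward = -reward

for _ in range(20000):
    self_play_episode()

# Game theory says the winning move from a pile of 7 is to take 3,
# leaving the opponent on a losing multiple of 4 - the agent should
# have discovered this on its own.
print(max(legal(7), key=lambda m: Q.get((7, m), 0.0)))
```

The outcome of each finished game is the only training signal, which is exactly why no human game records are needed.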

The way in which AGZ teaches itself is so significant for AI because it shows that once AI gets full knowledge of the real-world problem to be solved, the power of reinforcement learning will deliver the result. Gradually, such AI will become Artificial General Intelligence (AGI), i.e. Superintelligence, finding solutions and strategies that are beyond human capabilities. The most surprising part of this story is how short the time was between AlphaGo’s win and AGZ’s absolute supremacy over its predecessor. That’s what exponential progress is about.

From the perspective of Superintelligence, there is another type of intelligence in which humans may still excel, called ‘emotional intelligence’. This refers to “an individual’s ability to understand and be aware of his own emotions, as well as those of people around him. This ability enables him to handle social interactions and relationships better” (Cohen, 2016).

A true Superintelligence, if it is to fully compare with human intelligence, must have such a capability. Some progress in this direction has already been made. In May 2017, an “emotional chatting machine” was developed by scientists, signalling the approach of an era in which human-robot interactions are seamless and go beyond the purely functional. The chatbot, developed by a Chinese team, is seen as a significant step towards the goal of emotionally sophisticated robots. The ECM, as it is known, was able to produce factually coherent answers whilst also infusing its conversation with emotions such as happiness, sadness or disgust (Devlin, 2017).
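The core idea of the ECM – producing a factually grounded reply that is additionally conditioned on an emotion category – can be caricatured in a few lines. The real system conditions a neural text generator on the emotion; here hypothetical templates stand in for generation, so everything below is illustrative only.

```python
# Highly simplified sketch of emotion-conditioned response generation.
# A real system would generate text with a neural decoder; templates
# stand in for that here. All names and templates are made up.

EMOTION_TEMPLATES = {
    "happiness": "That's wonderful news about {topic}!",
    "sadness": "I'm sorry to hear about {topic}.",
    "disgust": "Honestly, {topic} sounds awful.",
    "neutral": "I see, {topic}.",
}

def respond(topic, emotion="neutral"):
    """Produce a reply about `topic`, coloured by an emotion category."""
    template = EMOTION_TEMPLATES.get(emotion, EMOTION_TEMPLATES["neutral"])
    return template.format(topic=topic)

print(respond("your exam result", "happiness"))
print(respond("the cancelled trip", "sadness"))
```

The point of the sketch is the separation of *what* is said (the factual content) from *how* it is said (the emotion), which is what made the ECM notable.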

Superintelligence may potentially become the most dangerous adversary of Humanity, a subject to which we dedicate a lot of space on this website. However, it can also become a very benevolent and friendly agent that Humanity will badly need to fight other existential risks and to help turn our civilisation onto the path of immense prosperity.

You can view Sustensis video and presentations on this subject here:  https://sustensis.co.uk/?page_id=1222 

Next: Singularity

1 thought on “Intelligence and Superintelligence”

  • What can we learn from the past to minimize the risks of malevolent Immature Superintelligence?

    Most people, including politicians, who after all make decisions on behalf of all of us, think that a fully developed Superintelligence (Artificial General Intelligence) is decades away, and that by then we will have it under our full control. Unfortunately, this view ignores the immense difficulties in controlling a fully developed Superintelligence. Furthermore, it takes a naively optimistic view that we will create the so-called friendly Superintelligence, which will do us no harm. Finally, it completely ignores the fact that within a decade we may have an Immature Superintelligence with the general intelligence of an ant but immense destructive powers, which it may apply either erroneously or in a purposefully malicious way.

    There is a very high probability that a malevolent Immature Superintelligence will trigger a number of dangerous events in the next few years. It is unlikely they will present an existential threat to Humanity. They will rather be malicious process-control events created purposefully by a self-learning agent, or events caused by an erroneous execution of certain activities. These could include firing nuclear weapons, releasing bacteria from strictly protected labs, switching off global power networks, bringing countries to war by creating a false pretence for an attack, etc. If such events coincide with other risks at the same time, such as extreme heat in summer or extreme cold in winter, then the compound risks can be quite serious for civilization.

    On this website, you will find the top 10 existential risks which Humanity faces right now. At the top of the list is Superintelligence (Artificial General Intelligence). Somewhere in the middle of that list is Climate Change, which incidentally is not in any sense an immediate existential risk in comparison with bio pandemics or Superintelligence. The question is: what could force the world leaders to take these risks very seriously and act on them decisively right now?

    Before I answer this question, let me refer to some recent and some historical events, when large civilizational catastrophes were looming and the decisions that the world leaders then took profoundly impacted the fate of the world. I would call them existential risk triggering decisions. They have two possible outcomes: they can prevent the existential risk from becoming reality, or they can trigger it.

    1. Let me start with WWII. In September 1938 the British Prime Minister Neville Chamberlain proudly presented the Munich Agreement, under which Hitler took over Czechoslovakia’s Sudetenland. That act of appeasement was thought to be enough to stop a European war, although it was already very clear at the time that Japan (invading China) and Italy (invading Abyssinia, today’s Ethiopia) were colluding with Hitler so that fascism could take control of the entire world. We all know what happened later – 3% of the world’s population lost their lives. It was the Munich Agreement that became the existential risk triggering event, and a year later the global war started with Germany attacking Poland at Gdansk. Conclusion: For the very first time it was clear from the outset that the world might enter a period of another global war. However, since there was no World Government with sufficient powers to rein in Hitler (the League of Nations was even weaker than today’s United Nations), it was a matter of less than a year before the global war started.

    2. The second example is post-war Europe. For most of the nations affected by WWII, the experience was so horrible and profound that in many countries the most common graffiti at that time was “No more war!” The former main European adversaries – Germany, France, Italy and the Benelux countries – first began integrating their economies in 1952 (mainly heavy industry) by forming the European Coal and Steel Community. Five years later, that integration came to include common democratic values, which gave birth in 1957 to the European Economic Community, now the European Union. Conclusion: Europe has learnt a terrible lesson. Its existential risk triggering decision was to form the Coal and Steel Community rather than fight a third world war. Consequently, for the last 73 years there has been no war in Europe, apart from the Balkan wars of the 1990s. That was only possible thanks to a relatively powerful European Commission and European Council, a pseudo European Government – the precursor of a European Federation.

    3. Another, truly existential, event was the Cuban crisis of October 1962, when the world was just hours away from the outbreak of a global nuclear war. That event is still considered one of the most dangerous moments in human history. The global nuclear war did not happen because the Soviet Union agreed to withdraw its missiles from Cuba and the USA withdrew its nuclear missiles from Turkey. In the midst of total chaos, on 26th October 1962, the Soviet and American leaders took the right decision.

    A decade later, President Nixon and the Soviet leader Brezhnev signed the SALT 1 Treaty, which froze the number of intercontinental missiles. The subsequent START 1 Treaty of 1991 led to a significant reduction of nuclear weapons. Against all the odds, a nuclear war has so far been avoided. Conclusion: The fact that the world has not experienced a global nuclear war was not because the UN stopped that conflict. It was a decision made solely by two countries, the USA and the Soviet Union, but the consequences of that decision (in this case positive) impacted the whole civilisation. Here, the existential risk triggering decision, which led to a long process of reducing nuclear missiles, was to withdraw the Soviet missiles from Cuba and the American ones from Turkey, rather than fight a nuclear war.

    4. The next event is the very recent Climate Change conference held in Katowice in Poland, which ended on 16th December 2018. The conference coincided with the IPCC’s latest appeal to lower CO2 emissions even further, so that the temperature rise is capped at 1.5°C rather than the 2°C agreed in Paris. For me, the fact that the conference agreed some concrete results, such as a unified system to measure the decarbonization of individual countries’ economies, is quite a success. That was only possible because some of the dangers of climate change are now very clear, such as the fires in California or extreme hurricanes in North America. The dissemination of such information via TV and other media to billions of people makes them more aware that climate change is real, and that these could be just its first (still very minor) effects. This in turn makes it easier for politicians to propose solutions that would otherwise have been rejected by the voters.

    However, please note that it took over 20 years, from Rio to Paris, to make the first Climate Change Agreement. Moreover, even the conference in Katowice can only be deemed a success in relative terms (in that something, after all, was agreed). In absolute terms it is a failure, because the world should be acting much faster and much more decisively, which requires making some quite painful, mainly financial, decisions. Conclusion: The world has not been combating Climate Change properly because, to solve such a global problem efficiently, we need to act globally. But that requires a strong Global Government, not such a weak, incoherent organisation as the United Nations. Here, the existential risk triggering decision was the signing of the Paris Accord in 2015, which started the process of a gradual decarbonization of the world’s economy.

    5. The final example is still evolving and is not an existential threat to the world, but rather the most destructive event for the country, in an economic and political sense, for a long time. I am talking about Brexit, which has entered its final stage. From November till mid-December 2018, the British government released over 100 so-called Notices detailing the consequences of Brexit in various areas. It is the disclosure of the potential damage to Britain’s political influence, economy, security, tourism, science and cultural relations that has gradually made the population aware of the consequences of Brexit. Because of that rising awareness, no party in Parliament wants a hard Brexit, and more and more voters who had voted for Brexit are having second thoughts, believing that Britain should stay a full member of the EU.

    Conclusion: The release of the Notices is a kind of triggering event for the final decision on Brexit (apart, of course, from a deal agreed with the EU). The British MPs and the electorate now have a clear warning. We shall see in the next few months what the impact of that awareness has been, and in the next few years we should be able to assess how the final decision has changed Britain’s political and economic position.

    For our civilisation, the next few years will be, in the context of Immature Superintelligence, a period of existential risk triggering decisions. If the world is not able to form a de facto World Government by the end of the next decade, we may be unable to control Superintelligence and will expose ourselves to the biggest risk Humanity has ever faced. We should look back at the historical events of the last century to see what may happen to our civilization if we continue to rely on organizations such as the UN. It is a great organization in many areas of human endeavour, such as medicine (WHO) or education and culture (UNESCO), etc. However, what really matters is its lack of the political power to enforce the decisions it makes. The United Nations, and especially its Security Council, where any decision requires unanimity of the five major powers, is in most cases powerless. The consequence is that it cannot resolve conflicts such as those in Syria or Yemen. The world needs a powerful World Government. The UN has never been, and will never be, able to create such a government.

    We need to intensify efforts that would make humans behave more often like a swarm of bees under the strict direction of the queen (the World Government), rather than increasing the scope of nations’ sovereignty (that’s what Brexit is about) and individual freedom. That is the price we will all have to pay if we want to preserve the most important value: LIFE!

    What is the probability of such a de facto government coming into existence? In ‘normal’ circumstances – if over the next decade we have no world war, no nuclear conflict, no worldwide pandemic on a scale greater than the Spanish flu of the 1920s, and no major AI-related catastrophic (but not existential) event – the chances of forming such a government are, in my view, close to zero. Paradoxically, the only hope we may have for creating a de facto World Government (which would include the majority, but not all, of countries) is that some of those catastrophic events will occur, like those caused by the Immature Superintelligence mentioned earlier.

    Similarly, as after the Cuban crisis, when the world significantly reduced the number of nuclear weapons (there is still a long way to go), such a serious event may convince many nations that we must make some sacrifices and can only be safe if we act together. That will mean passing part of national sovereignty and individual freedom to a strong federal government that would have the best chance of keeping us safe. As has been mentioned many times on this website, the organization with the best experience and motivation to become such a de facto World Government is, in my view, the European Union. It should be transformed into a shallow European Federation of nations, rather than states, which would still enable the member countries to preserve their national identity and culture.
