30th April 2018

Intelligence and Superintelligence

How could we measure intelligence? The current “gold standard” for measuring intelligence is the intelligence quotient (IQ). Wikipedia defines it as: “… a total score derived from several standardized tests designed to assess human intelligence… Historically, IQ is a score obtained by dividing a person’s mental age score, obtained by administering an intelligence test, by the person’s chronological age, both expressed in terms of years and months. The resulting fraction is multiplied by 100 to obtain the IQ score. When current IQ tests were developed, the median raw score of the norming sample was defined as IQ 100, and each standard deviation (SD) up or down was defined as 15 IQ points greater or less. By this definition, approximately two-thirds of the population scores are between IQ 85 and IQ 115. About 5 percent of the population scores above 125, and 5 percent below 75”.
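As a rough illustration of the two definitions in that quote, here is a minimal Python sketch (the numbers are invented for the example): the historical “ratio IQ” divides mental age by chronological age, while the modern “deviation IQ” places a raw test score on a normal distribution with a mean of 100 and a standard deviation of 15.

```python
from statistics import NormalDist

def ratio_iq(mental_age_years: float, chronological_age_years: float) -> float:
    """Historical ratio IQ: mental age divided by chronological age, times 100."""
    return 100.0 * mental_age_years / chronological_age_years

def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Modern deviation IQ: the raw score rescaled to a mean of 100 and an SD of 15."""
    z = (raw_score - norm_mean) / norm_sd
    return 100.0 + 15.0 * z

# A 10-year-old performing at the level of a typical 12-year-old (invented example).
print(ratio_iq(12, 10))                             # 120.0
# A raw score one standard deviation above the norming sample's mean.
print(deviation_iq(65, norm_mean=50, norm_sd=15))   # 115.0
# Share of the population between IQ 85 and 115 under this model (roughly two-thirds).
print(NormalDist(100, 15).cdf(115) - NormalDist(100, 15).cdf(85))
```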

Psychologist Howard Gardner’s theory of multiple intelligences states that intelligence can be broken down into eight distinct components: logical, spatial, linguistic, interpersonal, naturalist, kinaesthetic, musical and intrapersonal. He therefore argues that standard IQ and psychometric tests focus on certain components, such as the logical and linguistic ones, while completely ignoring other components which may be equally important.

Such criticisms mean that IQ tests are not entirely reliable for assessing human intelligence. However, IQ tests are de facto the only standard in wide use, and despite their weaknesses they yield some valuable insights into the intelligence and capabilities of millions of people. We may need to improve these tests, but in the meantime they are already being applied to intelligent assistants such as Siri or Google’s Assistant.

On 3rd October 2017, the results of a test of several AI assistants were published by three Chinese researchers, Feng Liu, Yong Shi, and Ying Liu, based primarily on exams carried out during 2016. According to the researchers, Google’s AI Assistant scored 47.3, below a six-year-old human’s IQ of 55.5, but more than double Siri’s score of 23.9. Siri is also behind Microsoft’s Bing and Baidu, which have IQs of 31.98 and 32.92 respectively. All of these AI scores are considerably lower than the mean for an 18-year-old, which is 97.

The researchers say that: “The results so far indicate that the artificial-intelligence systems produced by Google, Baidu, and others have significantly improved over the past two years but still have certain gaps as compared with even a six-year-old child” (Feng Liu, 2017). They grade AI’s intelligence into six levels, based on a model that combines AI and human traits around four areas of data: “input, output, mastery, and creation” (one way such a grading scheme could be encoded is sketched after the list below):

  • A first-grade system can exchange some information with people
  • A second-grade system can manage the interface to some objects such as TVs or washing machines, the so-called Internet of Things (IoT)
  • The third grade includes computer systems and mobile phones, which are programmed and can be upgraded; that would include AlphaGo from Google’s DeepMind
  • The fourth grade includes Google Brain, Baidu Brain, and the EU’s RoboEarth robots, because they have the ability to communicate and be managed using cloud data
  • Fifth-grade intelligence is at a human level
  • A sixth-grade system will have the capability to “continuously innovate itself and create new knowledge, with I/O ability, knowledge mastery, and application ability that all approach infinite values as time goes on” (Feng Liu, 2017).
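The researchers do not give a formal algorithm for assigning these grades, but the list reads like a cumulative capability checklist across the four areas of data. The Python sketch below is purely illustrative: the class, the flags and the assumption that each grade builds on the previous one are mine, not the researchers’.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative capability flags, loosely matching the six grades described above."""
    exchanges_information: bool = False    # grade 1: can exchange information with people
    controls_devices: bool = False         # grade 2: manages IoT-style device interfaces
    programmable_upgradable: bool = False  # grade 3: programmable and upgradable software
    cloud_coordinated: bool = False        # grade 4: communicates and is managed via the cloud
    human_level: bool = False              # grade 5: human-level intelligence
    self_innovating: bool = False          # grade 6: continuously creates new knowledge

def grade(profile: SystemProfile) -> int:
    """Return the highest grade reached, assuming each grade builds on the previous one."""
    checks = [
        profile.exchanges_information,
        profile.controls_devices,
        profile.programmable_upgradable,
        profile.cloud_coordinated,
        profile.human_level,
        profile.self_innovating,
    ]
    level = 0
    for ok in checks:
        if not ok:
            break
        level += 1
    return level

# Example: a connected, programmable system meeting the first three criteria is grade 3.
print(grade(SystemProfile(exchanges_information=True,
                          controls_devices=True,
                          programmable_upgradable=True)))  # 3
```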

The differences between these grades of AI are quite significant. But once AI reaches the sixth grade, it will improve exponentially until Superintelligence leads to the Technological Singularity.

How soon can it happen? There is no agreement on when Superintelligence may arrive. In May 2017, the Future of Humanity Institute at Oxford University contacted 1,634 researchers who had published papers at the 2015 NIPS and ICML conferences (the two leading machine learning conferences) and asked them to complete a survey on when AI will outperform humans in various areas. 352 researchers responded, and their aggregate view is as follows (Asian respondents expected these dates to be much sooner than North Americans did) (Katja Grace, 30/05/2017):

  • By 2024 – translating languages
  • By 2026 – writing high school essays
  • By 2027 – driving trucks
  • By 2031 – working in retail
  • By 2049 – writing a bestselling book
  • By 2053 – working as a surgeon
  • By 2062 – 50% chance of AI outperforming humans in all tasks

Perhaps the best known predictions are those made by Ray Kurzweil, one of the most prominent futurists, whose predictions over the previous 30 years have largely proven correct. In an interview with Futurism on 3rd October 2017, Kurzweil confirmed that: “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human level of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion-fold by merging with the intelligence we have created.” For Ray Kurzweil, the process towards this Singularity has already begun (Reedy, 2017). If you think that the Singularity cannot happen by 2045, then two recent developments from Google should make you think again.

The first was announced by Google at its I/O conference in May 2017 and gives a considerable boost to those impatiently awaiting the arrival of Superintelligence: AutoML (Auto Machine Learning). The Google team came up with machine learning software that can create self-learning code. The system runs thousands of simulations to determine which areas of the code can be improved, then makes the changes and continues the process until its goal is reached. The result is that AutoML is better at coding machine-learning systems than the researchers who made it. In an image recognition task it reached a record-high 82 percent accuracy. Even in some of the most complex AI tasks, its self-created code is superior to that written by humans (Green, 2017). Google’s AutoML could develop a superior image recognition system within a few weeks, something that would have taken the most brilliant AI scientists months. And in December 2017, just a few months later, the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, developed an AI system that can generate neural networks as good as, if not better than, any developed by a human, in less than a day (Rejcek, 2018).
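Neither Google nor ORNL has published the exact search code behind these results, but the loop described above, proposing small changes to a model configuration, keeping those that score better and stopping once the goal is reached, can be sketched in a few lines of Python. Everything below (the scoring function, the mutation scheme, the budget) is a made-up stand-in for illustration, not AutoML’s actual implementation.

```python
import random

def evaluate(config: dict) -> float:
    """Stand-in for training and scoring a candidate model (invented for illustration).
    A real AutoML system would train a network here and return validation accuracy."""
    target = {"layers": 12, "width": 256, "lr_exp": -3}
    return -sum(abs(config[k] - target[k]) for k in target)  # higher is better

def mutate(config: dict) -> dict:
    """Propose a small random change to one hyperparameter of the configuration."""
    candidate = dict(config)
    key = random.choice(list(candidate))
    step = {"layers": 1, "width": 32, "lr_exp": 1}[key]
    candidate[key] += random.choice([-step, step])
    return candidate

def search(start: dict, budget: int = 5000, goal: float = 0.0) -> dict:
    """Greedy search: keep a mutation only if it scores at least as well as the best so far."""
    best, best_score = start, evaluate(start)
    for _ in range(budget):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score >= best_score:
            best, best_score = candidate, score
        if best_score >= goal:
            break
    return best

print(search({"layers": 4, "width": 64, "lr_exp": -1}))
```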

The second invention, announced in the journal Nature on 19/10/2017, will probably be viewed in the future as a very significant milestone on the path to Superintelligence and the Singularity. Less than two years after Google’s AlphaGo beat Lee Sedol, a grandmaster of the Chinese game of Go, which was itself considered a very important breakthrough in AI, a vastly superior AI agent called AlphaGo Zero (AGZ) beat the original AlphaGo by 100 games to 0. The original AlphaGo had the benefit of learning from thousands of previously played Go games, both against human players and against itself. AGZ, on the other hand, received no help from its human handlers and had access to absolutely nothing aside from the rules of the game. Using “reinforcement learning”, AGZ played 4.9 million games against itself, starting from a very basic level of play without any supervision or use of human data (AlphaGo, by comparison, had 30 million games). This self-learning capability allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from past experience. In effect, AGZ was its own teacher. A successor built on the same approach, AlphaZero, took just four hours of self-play to learn chess well enough to beat the world’s strongest chess program. This technique is so powerful because it is no longer constrained by the limits of human knowledge; instead, it can learn from a “clean slate”.
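DeepMind’s real training loop combines Monte Carlo tree search with a deep neural network and is far more elaborate, but the core idea of self-play reinforcement learning, an agent generating its own training data by playing against itself and then updating on the outcomes, can be illustrated on a toy game. The sketch below uses one-pile Nim and a simple tabular value function; it is only an illustration of the self-play principle, not AGZ’s algorithm.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning on one-pile Nim: players alternately take
# 1-3 stones and whoever takes the last stone wins. Both "players" share and
# improve the same value table, so the agent is literally its own teacher.
Q = defaultdict(float)      # value of (stones_remaining, move), learned from self-play
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate

def choose(stones: int) -> int:
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                  # explore a random move
    return max(moves, key=lambda m: Q[(stones, m)])  # exploit current knowledge

def self_play_episode() -> None:
    stones, history = random.randint(5, 21), []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the last move wins; walk backwards through the game,
    # flipping the reward's sign because consecutive moves belong to opposing players.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# For winnable positions (5, 6, 7 and 9 stones) the learned policy should leave the
# opponent a multiple of 4 stones, i.e. take 1, 2, 3 and 1 respectively.
print([max((1, 2, 3), key=lambda m: Q[(s, m)]) for s in range(5, 10)])
```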

The way in which AGZ teaches itself is so significant for AI because it shows that once an AI has full knowledge of the real-world problem to be solved, the power of reinforcement learning can deliver the result. Gradually, such AI will become Artificial General Intelligence (AGI), i.e. Superintelligence, finding solutions and strategies that are beyond human capabilities. The most surprising aspect of this story is how short the time was between AlphaGo’s win over Lee Sedol and AGZ’s absolute supremacy over its predecessor. That is what exponential progress is about.

From the perspective of Superintelligence, there is another type of intelligence in which humans may still excel: ‘emotional intelligence’. This refers to “an individual’s ability to understand and be aware of his own emotions, as well as those of people around him. This ability enables him to handle social interactions and relationships better” (Cohen, 2016).

A true Superintelligence, if it is to compare fully with human intelligence, must have such a capability. Some progress in this direction has already been made. In May 2017, an “emotional chatting machine” was developed by scientists, signalling the approach of an era in which human-robot interactions are seamless and go beyond the purely functional. The chatbot, developed by a Chinese team, is seen as a significant step towards the goal of developing emotionally sophisticated robots. The ECM, as it is known, was able to produce factually coherent answers whilst also infusing its conversation with emotions such as happiness, sadness or disgust (Devlin, 2017).

Superintelligence may potentially become the most dangerous adversary of Humanity, a subject to which we dedicate a lot of space on this website. However, it can also become a very benevolent and friendly agent that Humanity will badly need, to fight other existential risks and to help turn our civilisation onto a path of immense prosperity.

