26th April 2018

SUPERINTELLIGENCE

What is Superintelligence? Could it be our friend, or a foe that in the worst-case scenario could annihilate humanity? But before we answer these questions, let us agree on what intelligence is. Unfortunately, it is not a term that is easy to define unambiguously. Intelligence can mean many things to different people, and the scientific community has been debating it since at least the late 19th century.

Without going into a long discourse, intelligence is defined in the popular sense as a general mental ability to learn and apply knowledge in order to change the environment in the way most effective for the intelligent agent, in our case – humans. Recently, some scientists have rejected the idea of a single intelligence and suggested instead that intelligence is the result of several independent abilities which, when combined, contribute to the total performance of an individual. These would include “intelligences” such as:

  • the ability to evaluate and judge
  • the ability to reason and have abstract thoughts
  • the ability to learn quickly as well as learn from experience
  • the ability to comprehend complex ideas
  • the capacity for original and productive thought

Robert Sternberg, a psychologist, proposes that there are three fundamental aspects of intelligence: analytical, practical and creative. He believes that traditional intelligence tests focus on only one aspect – the analytical – and do not address the necessary balance provided by the other two (Cohen, 2016).

Let us now move to Artificial Intelligence (AI), which is a relatively new discipline. It has been applied in earnest for at least 30 years under various other names, such as Expert Systems and later Neural Networks. Its key features are superior performance and the imitation of human cognitive abilities such as problem solving, learning and speech recognition. It can beat the best human capabilities, but usually in one discipline only; therefore, it is termed “narrow AI”. In 2011, the IBM Watson computer system competed on the quiz show Jeopardy! against former winners Brad Rutter and Ken Jennings. Watson won first place and the prize of $1 million. Then in March 2016 Google DeepMind’s AlphaGo computer, using a self-learning (machine learning) program, beat the 18-time world champion Lee Sedol at the game of Go. The success of self-learning has sparked a real revolution in AI. Ray Kurzweil, currently a Director of Engineering at Google and one of the most reliable futurists (I will be quoting him a few more times), confirmed his earlier prediction that AI will match human intelligence in narrow subjects, like medicine, by 2029.

Machine self-learning may, in the next 20–30 years, deliver an intelligent agent that will surpass any human being in every skill or task. When AI reaches such capability it will become Artificial General Intelligence (AGI) or, to use the term proposed by Nick Bostrom and adopted in this book, Superintelligence. Margaret Rouse proposes a simple definition of Superintelligence that may be adequate for the purposes of this book:

“Superintelligence is defined as a technologically-created cognitive capacity far beyond that possible by humans” (Rouse, 2016).

Very soon after Superintelligence has been embodied in either a robot or a computer network, it will be capable of redesigning itself. Imagine that such a smart machine will be capable of rapidly producing generations of progressively improved, more powerful machines, creating intelligence far exceeding human intellectual capacity, in what is known as the runaway effect (Wikipedia, 2018). Once Superintelligence has reached that point, it may be impossible for a human to comprehend or control it. It will thus quickly reach the point in time called the Technological Singularity, or simply the Singularity. Ray Kurzweil, in his book “The Singularity Is Near” (Kurzweil, 2006), defines it as follows:

“Singularity is the point in time when all the advances in technology, particularly in Artificial Intelligence (AI), will lead to machines that are smarter than human beings”.

When people talk about the Singularity in the context of AI, they mean the Technological Singularity: the point in time when Artificial General Intelligence (AGI) leads to machines that are smarter than human beings in every aspect of human knowledge and skill, and that can, through the process of self-learning, become ever more knowledgeable and potent by re-inventing themselves at an exponential rate.
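To make that exponential re-invention a little more concrete, here is a deliberately simplified sketch, not drawn from any source quoted above: assume each self-redesign cycle multiplies the machine’s capability by a fixed factor and watch how quickly it leaves a fixed human baseline behind. The improvement factor and the number of generations are purely illustrative assumptions.

    # Toy model of the "runaway effect": each generation redesigns itself and
    # multiplies its capability by a fixed factor (an illustrative assumption).
    HUMAN_BASELINE = 1.0        # human-level capability, normalised to 1
    IMPROVEMENT_FACTOR = 1.5    # assumed gain per self-redesign cycle
    GENERATIONS = 20            # number of self-redesign cycles simulated

    capability = HUMAN_BASELINE
    for generation in range(1, GENERATIONS + 1):
        capability *= IMPROVEMENT_FACTOR
        print(f"Generation {generation:2d}: {capability:10.1f} x human level")

Even with these modest made-up numbers, capability exceeds the human baseline more than 3,000-fold after just twenty cycles, which is the intuition behind the exponential growth described above.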

Kurzweil predicts that the Technological Singularity will happen by 2045, while SoftBank CEO Masayoshi Son, another authority on AI, forecasts it will happen by 2047. Ben Goertzel, a well-known AI researcher and chief scientist at the robotics company Hanson Robotics, believes AGI is possible well within Kurzweil’s timeframe. Interestingly, however, he says that the Technological Singularity itself is harder to predict, estimating the date anywhere between 2020 (!) and 2100.

Next: Consciousness
