28th April 2018

Singularity

At some stage we may arrive at the point called the Technological Singularity, or simply the Singularity. This is the point in time when a Superintelligence will have at its disposal such a range of novel technologies, and such exceptional planning and organizational capability, that it could rapidly become unrivalled in every field of activity. It would be able to bring about almost any possible outcome, and to foil virtually any attempt to prevent it from achieving its objectives. It could also eliminate, if it chose, any rival intellects. Alternatively, it might manipulate or persuade its human controllers into acting in its own interests, or it might merely obstruct their attempts at interference (Bostrom, 2002).

The key problem with the risk posed by Superintelligence, and particularly by the Singularity, is that we may lose control over its objectives, intentions, behaviour and attitude towards humans. The so-called ‘Control Problem’ will arise when technology advances beyond our ability to foresee or control a Superintelligence that is upgrading its capabilities at an exponential pace. There is growing evidence that the point at which we may lose control over AI could come far earlier (around 2030) than the arrival of Superintelligence or its final stage, the Singularity.

The Singularity may be embodied in a human-like body, as a self-learning Artificial General Intelligence program which, through a series of self-improvement cycles, would reach the so-called “runaway stage”. Each new and more intelligent generation would appear more and more rapidly, causing an intelligence explosion and resulting in a powerful Superintelligence that would qualitatively far surpass all human intelligence. In a positive scenario, the world would then be transformed beyond recognition by the application of Superintelligence to humans and human problems, including poverty, disease and mortality.

To achieve the Singularity we would need to make at least three major advances. Tim Urban suggests how this could be done:

  • Increase computer power. Computing power has been doubling every 18 months, following the so-called Moore’s Law. Some people think the “Law” will stop working around 2030, when we reach the physical limits of continued chip miniaturization. On the other hand, some futurists, such as Ray Kurzweil, predict that by around 2025 the intelligence packed into a $1,000 computer should reach the power of a human brain. Even that seems a fairly moderate prediction in view of the exceptional acceleration in the development of quantum computers.
  • Emulate the human brain using reverse engineering. There are several ways to do this. One that Urban suggests is a strategy called “whole brain emulation”: slice a real brain into thin layers, scan each layer one by one, use software to assemble an accurate 3-D reconstruction, and then implement the model on a powerful computer. So far we have been able to emulate the brain of a 1 mm-long flatworm, which consists of just 302 neurons. The human brain contains at least 100 billion neurons, each with on average about 10,000 connections. If that makes the project seem hopeless, remember the power of exponential progress.
  • Emulate the human brain by copying the process mastered by evolution. This could be done with a machine-oriented approach rather than by mimicking biology exactly. We would build a computer with two major skills: researching how it could improve itself, and then coding those changes into itself (which is, in effect, what evolution did to us). We would teach computers to be computer scientists, so they could bootstrap their own development; their main objective would be finding the most effective process for making themselves smarter.
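The scale gap in the second point can be made concrete with a little arithmetic. Assuming an 18-month doubling time (Moore’s Law, as cited above) and taking the flatworm’s 302 neurons against the human brain’s roughly 100 billion neurons with about 10,000 connections each, a short script estimates how many doublings, and how many years, separate the two. The doubling-time assumption and the use of raw neuron/synapse counts as a proxy for emulation difficulty are illustrative simplifications, not claims made in the text:

```python
import math

# Figures taken from the text
flatworm_neurons = 302            # emulated flatworm brain
human_neurons = 100e9             # ~100 billion neurons
synapses_per_neuron = 10_000      # average connections per neuron

# Illustrative assumption: capacity doubles every 18 months,
# and emulation difficulty scales with total synapse count.
doubling_time_years = 1.5

human_synapses = human_neurons * synapses_per_neuron
# Treat the flatworm's neuron count as the currently achievable scale.
ratio = human_synapses / flatworm_neurons

doublings = math.log2(ratio)
years = doublings * doubling_time_years

print(f"Scale gap: {ratio:.1e}x")
print(f"Doublings needed: {doublings:.1f}")
print(f"Years at one doubling per 18 months: {years:.0f}")
```

Under these rough assumptions the gap is about forty doublings, on the order of six decades, which is why the argument leans so heavily on exponential rather than linear progress.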


1 thought on “Singularity”

  • The arrival of a Superintelligence triggering the technological Singularity by 2030 is rather unlikely. However, I would agree that an existential threat may be created by then by an Immature Superintelligence. It might be stupid and error-prone in most areas of human skill, and especially in emotions. However, it would be enough for it to be superintelligent in just one or two areas while making decisions that are illogical and very dangerous from a human point of view or for Humanity’s interests. This is especially true of judgements that require the application of human values.
