28th April 2018


Many computer scientists think that consciousness involves two stages: (1) accepting new information, and storing and retrieving old information, and (2) cognitive processing of all that information into perceptions and actions. If that’s right, then one day machines will become conscious. That view has recently gained significant support. In October 2017, Drs Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France in Paris came to the conclusion that consciousness is “a multi-layered construct”. As they see it, there are two kinds of consciousness, which they call dimensions C1 and C2. Both dimensions are necessary for a conscious mind, but one can exist without the other:

  1. Subconsciousness – dimension C1, containing information and the huge range of processes, with their required algorithms, in the brain where most human intelligence lies. This is what enables us to choose a chess move, or spot a face, without really knowing how we did it. The researchers believe that this type of consciousness has already been represented in digital form and is comparable to the kind of processing that lies behind the AI algorithms embedded in DeepMind’s AlphaGo or China’s Face++.
  2. Actual consciousness – dimension C2, which contains and monitors information about oneself. It splits into two distinct abilities, neither of which is yet present in machine learning:
    • The ability to maintain a wide range of thoughts at once, all accessible to other parts of the brain, making abilities such as long-term planning possible. There is already some progress in this area. For example, in 2016 DeepMind developed a deep-learning system that can keep some data on hand for use by an algorithm when it contemplates its next step. This could be a first step toward global information availability.
    • The ability to obtain and process information about ourselves, which allows us to do things like reflect on mistakes.

This proposition correlates well with a theory that Actual Consciousness (C2) is driven by billions of networks that bind together information from Subconsciousness (C1), following stochastic probability similar or identical to the principles of quantum mechanics (Stanislas Dehaene, 2017).

Should these recent findings and proposals from the Collège de France become generally accepted, they might resolve the questions posed by Roger Penrose about the nature of consciousness. He may be right that consciousness is not just a brain-mind construct but is also underpinned by phenomena similar to those present in quantum mechanics – however, not in the way he suggests, i.e. in the ‘hardware’ (microtubules). The Uncertainty Principle of quantum mechanics would act not at the level of individual synapses but at the level of neural networks, which connect thousands of synapses and give an averaged response at a macro level, e.g. lifting a hand. Since that response would be only probabilistic, and not based on the yes-no state of an individual synapse (hence its similarity to quantum phenomena), there would be no conflict with the preservation of free will. That might also be in line with the thinking of people like Raymond Tallis, who so strongly defends the validity of free will (i.e. the unpredictability of human actions). In this way the scientific world may slowly be arriving at some common understanding of the nature of consciousness and, by extension, of the feasibility of uploading a human mind, together with its consciousness, to a Superintelligent being.
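The idea that binary synapse states can average out into a probabilistic macro-level response can be illustrated with a minimal toy simulation (this is an illustrative sketch, not a neuroscience model; all names and parameters are hypothetical):

```python
import random

random.seed(42)

def synapse_state(p_fire: float) -> int:
    """A single synapse gives a binary yes-no response."""
    return 1 if random.random() < p_fire else 0

def network_response(n_synapses: int, p_fire: float) -> float:
    """Averaging thousands of binary synapse states yields a graded,
    probabilistic signal at the network (macro) level."""
    return sum(synapse_state(p_fire) for _ in range(n_synapses)) / n_synapses

# A macro-level action (e.g. lifting a hand) fires only when the
# averaged network signal crosses a threshold; the outcome is
# probabilistic even though each synapse is strictly yes/no.
signal = network_response(n_synapses=10_000, p_fire=0.52)
action = signal > 0.5
print(f"network signal: {signal:.3f}, action taken: {action}")
```

The point of the sketch is only that determinate micro-states can still yield an uncertain, averaged macro-response, which is the analogy to quantum phenomena drawn above.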

Let me now summarize the conclusions about the nature of consciousness, as far as science seems to indicate them, and the consequences for uploading a human mind to Superintelligence. Consciousness is probably one of the very few areas in science where academics cannot agree on the meaning of the subject they are studying. Probably – and this is still only probably – the multi-layered structure and organization of human consciousness has been best summed up in the recent research by the academics from the Collège de France (Stanislas Dehaene, 2017):

“Consciousness is a structural (functional) organization of a physical system, which operates at two levels: 1. Subconsciousness – accepting, storing and retrieving information using a huge range of processes with the required algorithms; this is where most of human intelligence and knowledge lie. 2. Actual consciousness – containing information about oneself, which it turns into a wide range of ‘thoughts’, all accessible at once to all parts of the brain, continually monitored and processed, and output as perceptions and actions.”

The above information can be summarized in the diagram below:

The likely organization of human consciousness, as proposed by the researchers at the Collège de France
(Stanislas Dehaene, 2017)
– graph by Tony Czarnecki
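The two-level organization described above can also be sketched as a toy data structure (purely illustrative; the class and method names are my own hypothetical shorthand, not anything from the cited research):

```python
from dataclasses import dataclass, field

@dataclass
class C1Subconscious:
    """Dimension C1: stores and retrieves information via specialised
    processes whose inner workings are not directly reportable."""
    memory: dict = field(default_factory=dict)

    def store(self, key, value):
        self.memory[key] = value

    def retrieve(self, key):
        return self.memory.get(key)

@dataclass
class C2Consciousness:
    """Dimension C2: holds a small set of 'thoughts' globally available
    at once, and monitors information about its own state."""
    workspace: list = field(default_factory=list)

    def broadcast(self, thought):
        self.workspace.append(thought)  # accessible to all other modules
        return self.monitor()

    def monitor(self):
        # Self-monitoring: information about one's own contents.
        return f"currently holding {len(self.workspace)} thought(s)"

c1 = C1Subconscious()
c1.store("face", "recognised pattern")   # fast, unreportable C1 process
c2 = C2Consciousness()
print(c2.broadcast(c1.retrieve("face")))  # C2 makes a C1 result reportable
```

The design choice mirrors the text: C1 does the bulk of the processing out of view, while C2 both exposes selected results to the whole system and keeps track of its own state.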

Such an approach to the nature of consciousness provides a clearer view on a number of questions in this area. For example, it allows for a gradual development of consciousness over the millennia of life’s evolution, which might have started with automatic, chemistry-based responses in plants ‘seeking’ the best nutrients, i.e. being somewhat aware of the environment. In the animal world, the level of self-awareness would be directly correlated with brain size relative to body mass and with the complexity of neural connections. When these two parameters reached a tipping point, human consciousness was ignited in Homo sapiens and other hominids, such as the Neanderthals. If we accept this notion, then human consciousness – having a physical substrate, and differing from animal self-awareness only in its much higher level of complexity and self-organization of billions of stimuli and memory cells – could be replicated in a non-biological substrate, such as silicon.

Therefore, the main assumption taken here is that once Superintelligence emerges, it may at some stage become a conscious agent.

Next: Superintelligence’s Risk

2 thoughts on “Consciousness”

  • I have recently come across an article in SingularityHub, published on 14th August 2018 by Shelly Fan, which provides some support for those who believe that so-called brain uploading is possible. It is one of the increasingly frequent genuine breakthroughs in various areas of science and technology, this one relating to our brain. Here is a link to the article: https://sustensis.co.uk/?page_id=1139

  • Does Superintelligence need to be conscious?


    Tony Czarnecki

    I have just completed reading Yuval Harari’s book Homo Deus. In that fascinating book he poses many questions and makes many assertions. Some of them seem deliberately provocative, to stir up discussion of the problems he raises. One of them is the question: does Superintelligence need to be conscious?

    He starts by discussing the problem of free will and comes to the conclusion that free will does not exist. I agree with that – for more, just read his book. However, he makes a very interesting side comment in this context: if there is no real free will, and if people can be manipulated to such a degree that they would consider even fake information delivered to them by external media, people or agents to be the product of their own thinking, then that would be the end of liberal democracy.

    He then moves on to the nature of the ‘self’. He comes to the conclusion that a ‘self’ does not literally exist, since who we are, what our desires are, what we ultimately do, and how we are perceived by others all vary from day to day, or even from one moment to the next. Here I would only partially agree. In my view, Harari is right about a strictly defined version of the ‘self’ – that it is an amalgam of average ‘selves’ over a certain period of time. But psychologists such as Jung, and to some degree Freud, have long affirmed that each of us has what is called a ‘Master Personality’: the most frequently appearing version of our ‘self’, which tends to take the reins when problems arise or decisions have to be made. The outcome of such decisions is governed by statistical probability, based largely on our genotype and previous experiences.

    Harari thinks that if there is no free will and no real self, then true human independence does not exist. I mention one of the consequences of that in my book: if a ‘self’ does not exist, then personal identity is an illusion. However, if we accept that it is an illusion, humans may at some stage start behaving like a bee swarm. Such an understanding might gradually transform our behaviour to be much more in line with how a swarm of bees behaves. From such a perspective, persuading people to act like a bee swarm to save the human species, rather than the individual, may be somewhat easier.

    If neither free will nor a self exists, Harari concludes, then does Superintelligence need to be conscious, or can it be superintelligent without having consciousness? This very much depends, in my view, on the definitions of Superintelligence and consciousness. Let’s assume that Superintelligence, broadly described as Artificial General Intelligence (AGI), is an agent capable of solving any problem immensely faster and better than any human being, or even the whole of Humanity. Harari thinks that a Superintelligence with such characteristics does not have to be conscious.

    Let me now define consciousness as the scope of response to a range of stimuli related to a particular event, including the agent’s own actions, abstract ‘thoughts’, feelings and desires. The wider the response, the more conscious the agent is, because its awareness of the environment – including its own actions, memories, ‘thoughts’ and desires – is higher.
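    This ‘scope of response’ definition can be turned into a toy score (an illustrative sketch only; the stimulus categories and function name are my own hypothetical choices):

```python
def awareness_score(responds_to: set, all_stimuli: set) -> float:
    """Toy metric: the fraction of stimulus categories an agent
    can respond to, out of all categories considered."""
    return len(responds_to & all_stimuli) / len(all_stimuli)

# Hypothetical stimulus categories, loosely following the definition above.
STIMULI = {"environment", "own_actions", "memories", "abstract_thoughts", "desires"}

thermostat = {"environment"}      # responds only to its surroundings
human = set(STIMULI)              # responds to all categories, incl. itself

print(awareness_score(thermostat, STIMULI))  # 0.2
print(awareness_score(human, STIMULI))       # 1.0
```

    On this toy metric, a wider scope of response directly yields a higher score, which is all the definition above claims: degrees of consciousness, not an all-or-nothing property.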

    If we consider Superintelligence and consciousness as defined above, then it seems inconceivable for Superintelligence not to be conscious. The very process of self-learning from memories, and the need to be fully aware of possible pending changes in the environment, including its own actions, would require it to be conscious. So, Superintelligence may well need not only self-awareness but also consciousness.

    Moreover, it may need to be emotional, because its response may have to vary depending on the situation of other agents, enabling it to evolve most effectively. In line with the theory of evolution, it may need to show altruism for its own long-term benefit. So, Superintelligence may well need to have emotions. On the other hand, it is plausible for Superintelligence not to have emotions, because that may depend on how it was built and what its original Super Goal was.
