Review of Peter Scott’s book – “Crisis of Control”

The book I have read recently is in some ways quite rare. Why? Because the author, Peter Scott, has merged literature with a serious scientific genre. For me, it was a real pleasure to read it and to ponder the many quotations from famous people and from sci-fi literature. But for most people it may be an introduction to the world of sci-fi, which is gradually becoming a reality. It is also a warning and a call to action. In this sense, it is similar to my own objective when I was writing “Becoming a Butterfly”. The biggest difference between these two books, as I see it, is that Peter Scott’s book is a kind of glimpse into the unknown, looking into a potential oblivion from the very edge of the cliff. His warnings seem to me more subtle and presented in a more literary and descriptive way than mine.

Peter Scott argues his case in great depth, for example when he writes about the Fermi Paradox and the Turing Test. However, I am not entirely convinced (yet) that the Fermi Paradox explains our inability to see any signs of other civilisations. If that paradox does explain it, then I agree that the only way to minimize the risk of that prophetic Armageddon being fulfilled is through the symbiosis of conscious digital beings and humans.

I find the author’s criticism of the Turing Test very valuable, especially in the context of cognition, intelligence and ‘humanness’. The author arrives at the conclusion that the Turing Test is really about the ability to mimic humans rather than to measure ‘objective’ intelligence. I fully agree with such a view. He also challenges himself quite often, which makes his conclusions more credible.

What I find a bit surprising is his use of the term “Artificial Intelligences” (in the plural), which I have not come across before. This may be relevant to the current state of AI development, especially when people think of AI as millions of robots or humanoids. However, once the Singularity has been achieved, I cannot think of it as anything other than a singular entity.

The author dedicates a lot of time to Conscious Artificial Intelligences (CAI). This is again thought-provoking. I can immediately see billions of conscious digitized humans and CAI that originated purely in silicon, all embedded in the web of a single entity – Superintelligence. Peter Scott seems to be quite sure about the possibility that AGI will ultimately become conscious. This is also my view. However, it is still a highly disputed subject, where even the nature of consciousness has not been generally agreed upon, let alone proven, among scientists.

The author emphasizes quite often the need to go beyond our anthropomorphism and to assess intelligence more objectively. He develops that subject quite deeply, forcing the reader to think beyond what seems obvious. His preferred form of human–AI relationship is co-operation rather than competition. I entirely agree with such an approach. Where we may differ marginally is in how to achieve that. I see it these days as a progressive, decades-long process, in which we gradually fuse with AI by becoming Transhumans and finally Posthumans, a new species. However, I still believe that the governance of that process is extremely vital, and that is why I view this decade as decisive in terms of maintaining that control for another decade or two. That is also why we need, almost right now, a de facto World Government representing a Human Federation.

Another subject discussed is the impact of the richest 1% of people. Assuming that, being extremely rich, they will also become the first Transhumans, they may steer the course of civilizational evolution in a very egoistic way. I also see that as a danger. Moreover, the author’s view on the role of AI developers in maintaining control is almost identical to mine. I also believe that AI developers, supported by some benevolent members of the richest group, such as Elon Musk (my view on his role coincides fully with Peter Scott’s), are in practice the best bet for humans to control the fusion of humans and digitally-originated conscious beings.

Another area where the author’s views and mine strongly coincide is open-source AI, which is one of the necessary steps to minimize the risk of a malicious AI, a Supremacist power, or an individual taking control over humanity’s future (p. 204). In that context, I feel that the author may not have dedicated enough space to Transhumans as either a potential danger to our civilisation or our best hope for maintaining control over our fate, as we begin morphing with Superintelligence almost seamlessly. Elon Musk, with his Neuralink, has stopped pretending that his main objective is eliminating mental illnesses. He is now saying loud and clear ‘if you can’t beat them, join them’, meaning that Transhumanism is one of his objectives.

I also entirely agree with the author’s observation that the pace of change is now almost exponential in most domains, which may accelerate the process of knowledge seeping across the boundaries between separate domains (p. 171). This is contrary to IBM’s naïve view that they can keep Watson confined to a single-subject AI domain. But that of course suits their commercial interests and mutes any potential criticism coming from Congress (e.g. the 2017 Congressional enquiry into IBM’s AI risk-mitigation strategy).

I was particularly interested in the author’s views on the Prisoner’s Dilemma (p. 207), for the very reason that I have expanded on it in my recent book as the ‘AI Supremacist’s dilemma’, which I think may change the world for the better more rapidly than we expect, if the current leaders come to see the futility of their efforts to rule over the world.

The scenario set in 2027 that opens the book, and its continuation in 2037 that serves as the closing conclusion, are beautifully written for a book that some people may categorize as ‘technology genre’. But the author seems to expect much faster progress in AI than Ray Kurzweil, who forecasts the arrival of the AI Singularity around 2045.

I like the conclusions on the need to increase understanding, followed by some sort of high-level co-operation, between ‘scientists’ and humanists (p. 178). Do we need another Pugwash movement?

Finally, I think the reading would have been easier if the book had clearly marked chapters and sections. A table of contents might have been helpful as well.

Overall, the book is a real inspiration for all those who may share the author’s objective – spreading the message about the danger not just of simplistic AI or even cyber wars, but of the biggest issue of all: the survival and evolution of the human species.

Tony Czarnecki

London, October 2020