Could we create a morally good AI, which would never threaten us?

Tony Czarnecki, Sustensis

London, 26th November 2021

Implementing morality as a parallel backbone to advanced AI decision-making may be easier than creating a superintelligent humanoid, in the context of Moravec's Paradox. In his 1988 book "Mind Children: The Future of Robot and Human Intelligence", Moravec postulates that it is easy to train computers to do things humans find hard, such as mathematics and logic, but hard to train them to do things humans find easy, such as walking and image recognition. Morality falls into the first category: like other higher cognitive functions, it is a relatively recent evolutionary development and may therefore be easier to replicate in AI than humanity's far older, highly optimized sensorimotor skills.