Harms Register – a repository of AI Risks

Maury Shenk, CEO of Safe AI Hub

There is a saying that ‘what is not measured cannot be improved’. One way to think about AI risks is through the lens of risk management: to improve resilience to AI risks, we first need to identify and measure them.

A key feature of risk management is that risks include both threats (potential sources of harm) and opportunities (potential sources of benefit). It is of course important to consider the risks of AI alongside its massive potential benefits and opportunities.

Sustensis is working with Saihub.info on a risk management approach to AI risks and safety. Saihub has created an evolving AI harms register, based on the ‘risk register’ used in risk management practice, to record both current and projected harms from AI. Saihub is also developing an expanding set of analyses of individual harms in the register, providing key materials on each. These analyses focus primarily on threats/harms, but Saihub intends to extend its approach to the opportunities/benefits of AI.
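To make the register idea concrete, here is a minimal sketch in Python of what a single harms-register entry might capture. The field names and the example entry are illustrative assumptions on our part, not Saihub’s actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class HarmStatus(Enum):
    """Whether the harm is already occurring or is a projected future harm."""
    CURRENT = "current"
    PROJECTED = "projected"


@dataclass
class HarmEntry:
    """One entry in an AI harms register, modelled on a classic risk register.

    Field names are illustrative only; the actual Saihub schema may differ.
    """
    name: str                      # short label for the harm
    description: str               # what the harm is and whom it affects
    status: HarmStatus             # current vs. projected harm
    analysis_url: str = ""         # link to the detailed analysis of this harm
    sources: list[str] = field(default_factory=list)  # key reference materials


# Example entry (illustrative content, not taken from the Saihub register)
entry = HarmEntry(
    name="Discriminatory outcomes from biased training data",
    description="AI systems trained on biased data sets can reproduce or "
                "amplify discrimination against affected groups.",
    status=HarmStatus.CURRENT,
)
print(entry.name, "-", entry.status.value)
```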

We can learn a lot from risk management in general, which is a well-established discipline in industry. Mature frameworks such as ISO 31000 and M_o_R® involve assessment of risks (based on criteria including impact, likelihood, proximity and velocity) and ongoing planning of responsive actions.
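As an illustration of how such criteria can feed prioritisation, the sketch below uses the conventional impact × likelihood score from risk-register practice and sorts risks by that score, carrying proximity and velocity as additional context. The 1–5 scales, the scoring rule and the example risks are common conventions and illustrative assumptions, not requirements of ISO 31000 or M_o_R.

```python
from dataclasses import dataclass


@dataclass
class AssessedRisk:
    """A risk assessed on conventional 1-5 scales (illustrative only)."""
    name: str
    impact: int        # 1 (negligible) .. 5 (severe)
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    proximity: str     # how soon the risk could materialise, e.g. "near-term"
    velocity: str      # how fast impact follows onset, e.g. "fast"

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: impact multiplied by likelihood.
        return self.impact * self.likelihood


# Illustrative entries, not drawn from any real register
risks = [
    AssessedRisk("Misinformation at scale", impact=4, likelihood=4,
                 proximity="near-term", velocity="fast"),
    AssessedRisk("Loss of human oversight of critical systems", impact=5,
                 likelihood=2, proximity="long-term", velocity="slow"),
]

# Rank risks by descending score to suggest where responsive action is most urgent.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name} ({r.proximity}, {r.velocity})")
```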

There are also evolving risk management frameworks specific to AI, such as the NIST AI Risk Management Framework.

The main reason a risk management approach to AI risk is needed is that current debates about AI risk and safety mix a wide variety of issues, producing imprecise and often incoherent analysis. Consider two public controversies about AI:

  • In the recent turmoil around control of OpenAI, part of the dispute was between the philosophies of “effective altruism” (which focuses on the threat that AI-enabled superintelligence could doom humanity) and “effective accelerationism” (which focuses on the opportunity for AI to produce huge benefits to humanity).
  • As a result of controversy around the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? – which identified risks from language models relating to environmental and financial costs; large, biased data sets; misdirected research; and interaction with human biases – Google fired Timnit Gebru, the co-lead of its ethical AI research team, in late 2020.

Whatever the merits of these controversies, it is clear that they are about different things. The debate about whether AI will destroy or save humanity is on a different planet than the debate about how to deal with the current effects of AI. The people who engage in these debates often don’t even like talking with each other.

It is important to discuss and address both of these categories of AI risk – and many others. Among other reasons, given the huge uncertainties associated with the rapid progress of AI, and its potential for massive impact on societies and the planet, the precautionary principle indicates that attention to all credible risks is warranted.