Any legislation passed by a global AI-Controlling Agency should ensure that a maturing AI is taught the Universal Values of Humanity and is driven not by goals (apart from the lowest-level robots) but by human preferences, keeping the AI agent always slightly uncertain about the ultimate goal of a controlling human. Of course, these values will only be Universal if Humanity acts as one, as a single federation. Only then, sometime in the future, will humans, although far less intelligent than Superintelligence, hopefully not be outsmarted, because that would not be the preference of Superintelligence. This is an almost idealistic vision, but unless it materializes, the human species may lose ultimate control over its destiny.
If the EU takes the initiative in establishing such an agency, it should try to engage some of the UN agencies, such as UNICRI, and create a coalition of the willing. In such an arrangement, the UN would pass a respective resolution, which would almost certainly be non-binding, but the EU-led Agency would have the enforcement capability, initially limited to its own territory and probably the majority of those states that supported the resolution. Legal enforcement would of course almost invariably be restricted to prohibiting trade in goods that do not comply with the laws enforced by such an Agency.
However, it is quite likely that unpredictable events, such as the consequences of the Covid-19 pandemic, may accelerate the need for such a Global AI Governance Agency. In such circumstances, the EU, or the European Federation should it already be established, would have to take on the role of 'a global' agency, even if it covered only the members of the EU and some associated countries. In such a case, it should operate on the basis of 'what must be done immediately' or 'what is possible', rather than 'what should be carried out'.
Tony Czarnecki, Sustensis