Global AI Governance Agency

I propose to form a single Global AI Governance Agency as the first step in creating a friendly Superintelligence. It would be responsible for creating and overseeing a safe environment for decades-long AI development. It would have control over any aspect of AI development that exceeds a certain level of AI intelligence (e.g. the ability to self-learn or re-program itself). It could operate in a similar way to the International Atomic Energy Agency (IAEA), with sweeping legal powers and the means to enforce its decisions. Such laws should take precedence over any state’s laws. Stuart Russell, in his latest book ‘Human Compatible’, believes humans should establish full control over AI by 2030 at the latest. This is also the date I put forward in my previous book, where I call the decade 2020-2030 the age of the Immature Superintelligence.

One candidate for such a body could be the United Nations Interregional Crime and Justice Research Institute (UNICRI), established in 1968. It has initiated some ground-breaking research and put forward some interesting proposals at a number of UN events, such as the 1st global meeting on AI and robotics for law enforcement, co-organized with INTERPOL in Singapore in July 2018, and the joint UNICRI-INTERPOL report on “AI and Robotics for Law Enforcement”, published in April 2019.

The problem is that these proposals have remained just that – proposals. As of April 2020, there has not been a single UN resolution in this area. But even if there had been one, it would most probably face the same problem, typical of many other areas of UN activity – the inability to enforce the UN’s decisions. Therefore, seeing the impotence of the UN, it is more likely that such a global AI governance legal framework will be based on EU proposals, implemented in a similar way to the General Data Protection Regulation (GDPR).

Comprehensive control over AI development

Creating such a Global AI Governance Agency must be the starting point in the Road Map for Managing Humanity’s Evolution, if we recognize that the development of a Superintelligence is the earliest and most important long-term existential risk, which may determine the fate of all humans just a few decades from now. Only a global agency with real powers has any chance of creating a de facto standard legal framework for controlling the development of AI. To be effective, such an agency would need comprehensive control over all AI hardware (robots, AI chips, brain and body implants, visual and audio equipment, weapons and military equipment, satellites and rockets, etc.). It should also cover software: the oversight of AI algorithms, AI languages, neural nets, and brain-controlling networks. Finally, it should include AI-controlled infrastructure such as power networks, gas and water supplies, stock exchanges, etc. It should also extend beyond our planet, especially covering AI-controlled bases on the Moon and, in the next decade, on Mars.

Tony Czarnecki, Sustensis
April 2020