53. Only the EU can stop AI from taking control of humanity’s destiny by 2030

Why does Superintelligence (a super-advanced form of AI) represent the highest risk for Humanity? First, because it is almost certain to happen, unlike natural pandemics, which may not happen at all, being a lottery-type risk. Second, it is so dangerous because it may arrive much earlier than the risk most talked about in recent years – climate catastrophe. The risk posed by Superintelligence is likely to materialize within the next 50 years rather than in the next century. On the other hand, I believe that if we manage to deliver a so-called “friendly” Superintelligence, then instead of becoming the biggest risk, it will itself help us reduce other anthropogenic risks, such as climate change.

Superintelligence is defined as a type of artificial intelligence that would surpass the intelligence of even the smartest humans. The main threat stems from even the slightest misalignment between our values and Superintelligence’s objectives, or its “values”. If such a misalignment occurs, the outcome could be disastrous, even when the corresponding goals appear benign.

Another reason why Superintelligence is the biggest risk is that it may arrive in an inferior, “half-baked” form. Superintelligence certainly does not need to be conscious to annihilate Humanity.