Weaponized AI

There are several definitions of weaponized AI, because such a new subject means many things to many people. For the purpose of our discussion, however, I would define Weaponized AI as any evil, malicious or destructive action directed at an individual, a group of people or a country using AI technology. There are two types of weaponized AI: Soft Weaponized AI, which uses software applications to achieve malicious objectives, usually by compromising or blackmailing individuals through the publication of documents or pictures, or by breaking into security systems; and Hard Weaponized AI, which directs specialized weapons or equipment at a pre-planned target.

Soft weaponized AI

Let me start with Soft Weaponized AI. Here, probably the best and most succinct list of what soft weaponized AI could do has been created by the well-known futurist Thomas Frey, who presents a simple scenario: “Virtually every situation presents an opportunity for a weaponized A.I., but each will require different strategies, targets, and techniques. Once a clear objective is put into place, the A.I. will use a series of trial and error processes to find the optimal strategy. A.I. tools will include incentives, pressures, threats, intimidation, accusations, theft, and blackmail. All can be applied in some fashion to targeted individuals as well as to those close to them.” (Frey, 2017). Frey himself had doubts about whether he should publish his list of 36 examples of what this could involve, because it might give hints to perpetrators. In the end he concluded that anything he could think of, terrorists and evildoers could come up with as well. It is a scary list, from which I have selected only the most significant examples – you have been warned!

Organization-wide or country-wide Soft Weaponized AI scenarios have already been ‘tested’ at the lowest possible level of threat, using so-called fake news and very primitive AI support, in elections in the USA and other countries. Now imagine applying more sophisticated AI, such as the latest version of DeepMind’s AlphaGo Zero.

  1. Hijacking a City. Every city is made up of interdependent systems that function symbiotically with their constituency. Stoplights, water, electric, sewage, traffic control, garbage removal, tax assessment, tax collection, police, and fire departments are just a few of the obvious trigger points. Once A.I. can disable a single city, it can easily be replicated to affect many more.
  2. Destroying a Country. At the core of every country are its financial systems. Weaponized A.I. could be directed to attack essential communication and power systems. Once those are disabled, the next wave of attacks could be focused on airports, banks, hospitals, grocery stores, and emergency services. Every system has its weakest link and this kind of exploitive weaponry could be relentless.

Intimidating Professionals

In any society there are “people of influence” who are critical for maintaining the systems, business operations, and processes that govern our lives. These individuals are the most “at risk” of becoming a target of weaponized A.I.:

  1. Stock Analysts – The value of our entire stock market hinges on the assessment of a few key individuals.
  2. Politicians – Any elected official can be bullied into voting in favour of a specific bill or funding proposal.
  3. Judges – The outcome of most court cases is decided by a single judge.
  4. Newspaper Editors – These people decide what goes on the front page.
  5. Corporate CEOs – CEOs are a huge factor in determining the success or failure of a business.
  6. Medical Doctors – Doctors and physicians are among the most respected professionals on the planet, and their decisions on a selected treatment may have a significant impact on someone’s life.
  7. Military Generals – Far beyond the field of war, military generals make far-reaching decisions on a daily basis.
  8. Bankers – They could be forced to issue huge loans.

Landmark decisions in the future

Here are a few examples. Will our most important decisions in the future be made by well-informed individuals or by a heavily biased A.I.?

  1. Should cryptocurrencies replace national currencies?
  2. Should we have a single world leader?
  3. How should life and death decisions be made in the future?

Commandeered Systems

Every major system has the potential of being hijacked by an evil A.I. in the future. This can be achieved through the tech itself, through the people that control it, or through a combination of both. Virtually all future systems will be vulnerable, such as:

  1. Stock Exchanges
  2. Power Plants
  3. City Water Supply
  4. Security Systems
  5. Data Centres
  6. Cloud Storage Systems
  7. Airports
  8. Prisons
  9. Election Systems

Hijackable Equipment

As our equipment becomes more universally connected to the web, commandeered devices will become an ongoing concern. For example, the same drone that can deliver packages can also deliver bombs or poison, or spy on your kids.

  1. Flying Drones
  2. Driverless Cars
  3. Airplanes
  4. IoT Devices
  5. Delivery Trucks
  6. Stoplights
  7. Smart Houses

Hard weaponized AI

South Korea currently maintains the border with its northern neighbour using Samsung-built robot sentries that can fire bullets, so it’s safe to say autonomous weapons are already in use. It’s easy to conceive of future versions that could, say, use facial recognition software to hunt down targets, and of 3D-printing technology that would make arms stockpiling easy for any terrorist. Robotic soldiers would aim only at specific targets. They will be so small and cheap that even an average earner (say, a potential terrorist) could buy one.

However, an individual robotic soldier would not be a threat to Humanity. What may create an existential risk is a potential arms race in autonomous weapons and Artificial Intelligence. Such a race would expose civilians to undue, potentially existential risk. If autonomous weapons are developed and deployed, they will eventually be in the air, space, sea, land, and cyber domains.

Future soldiers

Paulo Santos writes in the Bulletin of the Atomic Scientists that such robot-soldiers will be taught to operate in teams, supported by a network of unmanned weapons systems. They will also patrol computer networks and potentially will be everywhere. It is highly unlikely that only one country will pursue their development. As mentioned earlier in this section, Russia, the USA and South Korea have already displayed their capabilities. Thus, many states will conclude that they need to develop ever-stronger artificial intelligence, controlling various weapons with ever greater autonomy.

However, the main existential threat is this. Autonomous systems with learning abilities could quickly get beyond their creators’ control. They would be a danger to anyone within their immediate reach. And autonomous weapons connected to each other via networks, or autonomous agents endowed with artificial intelligence and connected to the Internet, would not be confined to a single geographic territory or to states involved in armed conflict (Santos, 2015).

The unintended effects of creating and fielding autonomous systems might be extremely severe if they come under the control of malicious AI agents. In the worst-case scenario, nuclear warheads may be fired, almost certainly annihilating most life on Earth should all current nuclear arsenals be used.