“I’m Sorry, Dave, I’m Afraid I Can’t Do That… Yet”

July 16, 2019

Generally, when we think of artificial intelligence, we think of one of the many movies about murderous robots that want to eradicate humanity, or at least make humans subservient: HAL 9000 from 2001: A Space Odyssey, Skynet from The Terminator, or basically any episode of Black Mirror. Typically, the plot starts with a friendly robotic assistant that provides some service to enhance human lives; as the plot progresses, the AI concludes that harming the humans it serves is the best way to achieve the goals it was programmed to accomplish. While these scenarios are equal parts exciting and terrifying to imagine in our own society, we are a long way from the 2029 machine war depicted in The Terminator.

To step back a little, it’s important to clarify the difference between artificial intelligence in general and superhuman artificial intelligence.

Artificial intelligence (AI), in general, refers to a computer exhibiting “human-like” cognitive functions: a machine assessing its environment and acting on that assessment. Here are a few examples in use today: reading the hand-written digits on envelopes and checks, autopilot systems that maintain a plane’s trajectory, and autonomous vehicles that detect and avoid pedestrians.

Artificial intelligence is actually quite pervasive in our society, and we all reap the benefits pretty much every day. That being said, the gap between recognizing hand-written digits and having a survival instinct that is developed enough to eradicate all humans is pretty large.
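
To make the digit-recognition example concrete, here is a minimal sketch of that kind of narrow AI, assuming Python with scikit-learn and its bundled 8×8 digit images (the library and model choice are illustrative assumptions, not something prescribed in this post):

```python
# A minimal "narrow AI" sketch: learn to recognize hand-written digits.
# Uses scikit-learn's bundled 8x8 digit images (1,797 labeled samples).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # a simple linear classifier
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2f}")  # typically ~0.96
```

A model like this is genuinely useful – and utterly incapable of doing anything beyond mapping 8×8 pixel grids to the labels 0 through 9.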

Superhuman intelligence in the context of AI is not simply exceeding human performance at a given task – computers have been doing computation faster and (mostly) more reliably than humans for years. It requires an AI that can abstract ideas and apply them to new environments. An AI designed to maintain the trajectory of a plane is not going to be able to read human handwriting, much less recognize humans as a major threat to its existence.

The potential for AI to do harm is very real today – just not in the form of a robotic apocalypse. There are plenty of issues in AI that we need to solve before we can even start thinking about pushing towards human-level intelligence, and we will discuss some of these limitations in future posts. However, superhuman (or even human-level) intelligence is not required for AI to be dangerous. If we can develop autonomous vehicles that avoid hitting pedestrians, it does not require much effort to extend that system to intentionally hit pedestrians, as the sketch below suggests – another topic for future discussion.
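
To make that point concrete, here is a deliberately simplified, hypothetical sketch of a cost-based path planner. Every name and weight below is invented for illustration, and real planning stacks are far more complex; the point is only that the avoidance behavior can hinge on a single sign:

```python
# Hypothetical, simplified illustration: a planner scores candidate
# paths, and proximity to pedestrians is just one signed term in the
# cost. Flipping that sign turns avoidance into targeting.
import math

def path_cost(path, pedestrians, goal, avoid=True):
    """Score a candidate path (a list of (x, y) points); lower is better."""
    progress = math.dist(path[-1], goal)   # distance remaining to the goal
    # Grows sharply as the path passes close to any pedestrian.
    danger = sum(
        1.0 / (math.dist(point, ped) + 1e-6)
        for point in path
        for ped in pedestrians
    )
    sign = 1.0 if avoid else -1.0          # the only change required
    return progress + sign * danger

candidates = [
    [(0, 0), (1, 1), (2, 2)],  # passes straight through the pedestrian
    [(0, 0), (1, 0), (2, 0)],  # detours around it
]
pedestrians = [(1, 1)]
goal = (2, 2)

# With avoid=True the minimum-cost path keeps `danger` low; with
# avoid=False, proximity to pedestrians *lowers* the cost instead.
print(min(candidates, key=lambda p: path_cost(p, pedestrians, goal)))
```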

For now, and for the foreseeable future, the largest risk of AI causing harm is humans themselves.