Who’s Afraid of the Big Bad AI?

Jun 7th, 2019 - Category: Change

Many of us have encountered people with a general fear of Artificial Intelligence. It typically starts with “This AI stuff scares me. What will keep it from evolving itself and taking over?” There are many responses, but they never seem to allay the concerns. The conversation always seems to end with, “But eventually AI will become so advanced…”

Hollywood has fueled these fears for almost a century, starting in 1927 with the classic Metropolis and continuing with the cult favorite Blade Runner, the frightening AI in Ex Machina, and the benevolent AI in Her.

I have a special relationship with Artificial Intelligence since I worked on early research in the field as part of my engineering degree. Back then it was simply called Neural Networking, and a large network might simulate hundreds of “neurons,” each with several inputs and outputs (a sketch of such a tiny network appears after the mission statement below). Today’s networks use millions of “neurons” and are trained on special high-powered computers with massive datasets containing hundreds of thousands of images, billions of words, and so on. My final project was to create a network that could write simple musical fugues in the style of Bach from a starting pattern of a few notes. After multiple training sessions, each taking days of computer time, the output still sounded more like random notes than music. My extensive analysis of why it didn’t work earned me an A in the class.

Now, after many years, somebody has created MuseNet as part of the OpenAI initiative, and it can produce original compositions that actually sound like music. OpenAI was founded by Sam Altman and Elon Musk, who paradoxically is also afraid of AI “taking over.” Its mission is:

to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
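
To give a concrete sense of the scale of those early networks, here is a minimal sketch of a handful of “neurons” trained by gradient descent. The XOR problem and every detail here are purely illustrative (this is not the fugue network, and a modern framework would hide all of this), but the forward-pass / backward-pass loop is the same idea that today’s million-neuron networks scale up.

```python
import numpy as np

# A tiny fully connected network: 2 inputs -> 4 hidden "neurons" -> 1 output.
# Toy XOR problem for illustration; real tasks need far more neurons and data.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass: compute each layer's activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error, propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

That is the whole trick: nudge a few dozen weights until the outputs match the examples. Scale the weights into the millions and the examples into the billions, and you get MuseNet.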

On a mundane level, people don’t realize that AI is already a major part of their lives. It is holding back the avalanche of email spam, finding photos of cats in large photo libraries (or online), predicting the next song or movie you will enjoy, and much more. Sure, it goes by non-threatening names like “Machine Learning” or “predictive algorithms,” but it is still AI.
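
To peek behind a name like “Machine Learning,” here is a minimal sketch of one classic spam-filtering technique: a naive Bayes classifier over word counts. The training set is tiny and made up, and real filters are trained on millions of messages, but the principle is the same.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up training set; 1 = spam, 0 = not spam.
emails = [
    "win a free prize now",
    "cheap pills limited time offer",
    "meeting moved to 3pm",
    "can you review the budget draft",
]
labels = [1, 1, 0, 0]

# Turn each email into word counts, then fit a naive Bayes classifier.
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

# New mail that shares vocabulary with the spam examples gets flagged.
print(model.predict(vectorizer.transform(["claim your free prize offer"])))  # [1]
```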

A neural network can be trained to do “bad things,” but thinking that it is going to “influence” other networks is like saying that training a dog to fetch will influence a bird to do the same. A network would first have to be trained on the bad things themselves, then trained to influence other networks (which would in turn have to be trained to accept that influence), and finally trained to use those bad things to “take over” the world. This leads to the realization that being afraid of these networks is probably pretty far down the list of things to worry about. There are ethical issues, of course, and one of the articles linked at the end of my previous post, “Let’s ask more of AI,” gives a great overview of the biggest ones, such as facial recognition, social credit systems, and computer-generated realistic news articles.

For a deeper dive, the Ars Technica article “The basics of modern AI—how does it work and will it destroy society this year?” is outstanding. Its original title is even more intriguing: “From Machine Learning to Generative Adversarial Network to HAL: A Peek Behind the Modern Artificial Intelligence Curtain.” The article covers many of the major areas of the field, including Machine Learning, Deep Learning, photo, facial, and speech recognition, content creation and deepfakes, and what the real risks are. I personally found the photo recognition part the most interesting because it seems like such an impossible problem, yet it happens inside our phones constantly and accurately. Go ahead, search your photos for a cat (or a dog or a tree or whatever) and see what happens. If you haven’t done this before, prepare to be amazed.
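
And if you want to see roughly what your phone is doing, a pretrained image classifier is only a few lines away. This is an illustrative sketch that assumes torchvision 0.13 or newer; “cat.jpg” is a placeholder for any photo on disk, and phones run similar (heavily optimized) models on-device.

```python
import torch
from torchvision import models
from PIL import Image

# Illustrative sketch: label a photo with a pretrained ImageNet classifier.
# "cat.jpg" is a placeholder path; requires torchvision >= 0.13.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)  # add batch dim
with torch.no_grad():
    probs = model(img).softmax(dim=1)

best = probs[0].argmax().item()
print(weights.meta["categories"][best])  # e.g. "tabby" for a cat photo
```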