Understanding the 'Artificial' in Artificial Intelligence

Dec 15th, 2019 - Category: Change

Several months ago I wrote an article, “Who’s Afraid of the Big Bad AI?”, that explored the fear of AI that was prevalent at the time. Already the tide has shifted, and the most recent headlines are “12 Everyday Applications of AI,” “AI Is This Year’s Hottest Job,” and “Job Recruiters Are Using AI in Hiring.” Doom and gloom is being replaced by “Wow, how cool, I can find every photo of Uncle Bob in my library of 32,385 photos!” Part of this is due to improved reporting on how AI actually works, its strengths, and its weaknesses.

Recently an article in Ars Technica did a great job explaining the “ghost in the machine.” “How neural networks work—and why they’ve become a big business” starts with the history of AI (1954!), goes on to highlight major milestones (backpropagation, convolutional neural networks, the power of combining AI techniques, and training with massive datasets), and ends with the most recent advances, such as using “deep learning” to create what are now being called “deep networks.” However, the most fascinating thing about this article is the reader comments. One reader writes that he is expecting a lull in the development of AI, since even the most advanced networks:

  • Rely on their training data being pre-classified by humans
  • Can only recognize what they have been trained on, and still cannot draw new inferences from existing data
  • Are ruled by GIGO (garbage in, garbage out) or, more worryingly in some applications, bias in, bias out: if the training data is slanted, the inferences will be slanted too (see the sketch after this list)
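To make the last two points concrete, here is a minimal sketch in Python with entirely made-up résumé-screening data (nothing here is a real hiring system): a model that simply memorizes its labels can only echo whatever slant those labels contain, and it has no answer at all for inputs it has never seen.

    # A minimal sketch of "bias in, bias out" with made-up data:
    # a model trained on slanted labels can only reproduce that slant,
    # and it has nothing to say about inputs it was never trained on.
    from collections import Counter

    # Hypothetical resume keywords and human-assigned labels.
    # The human labels happen to favor candidates who list "golf".
    training_data = [
        ("golf", "interview"), ("golf", "interview"), ("golf", "interview"),
        ("chess", "reject"), ("chess", "reject"), ("chess", "interview"),
    ]

    # "Training": count how often each keyword led to each label.
    counts = {}
    for keyword, label in training_data:
        counts.setdefault(keyword, Counter())[label] += 1

    def predict(keyword):
        """Return the most common label seen for this keyword during training."""
        if keyword not in counts:
            return "unknown"  # can only recognize what it was trained on
        return counts[keyword].most_common(1)[0][0]

    print(predict("golf"))      # "interview" -- the slant in the labels comes back out
    print(predict("chess"))     # "reject"
    print(predict("painting"))  # "unknown" -- no inference beyond the training data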

Others argue that these issues can be overcome by using one AI to train another AI, or by making an AI good enough for the general public to use, which would then train it further (think of Siri learning from being corrected); a small sketch of that first idea follows Asimov’s laws below. Among these comments there are over 100 excellent observations, with only a few based on existential fears like “This creation of AI is going to come back to bite us.” One reason is that Ars Technica readers are very familiar with the actual technology behind AI and understand what’s “Artificial” about AI. Terms like machine learning (ML), convolutional neural networks (CNNs), and Artificial General Intelligence (AGI) mean very different things to them. They were reading articles like Paul Ford’s “Our Fear of Artificial Intelligence” in MIT’s Technology Review years ago and have progressed to a more nuanced view of the field based on knowledge and experience. Many have even read one of the earliest detailed science fiction books on AI, Isaac Asimov’s “I, Robot,” which is famous for the three laws of robotics (and for the ethics of Artificial Intelligence in general):

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
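Before getting back to Asimov: the “one AI trains another” idea from the comment thread has a real counterpart often called pseudo-labeling, where a trained “teacher” model labels data no human has classified and a “student” model learns from those labels. Here is a deliberately tiny, hypothetical sketch (no real models or APIs), which also shows why the student inherits the teacher’s mistakes, looping back to the GIGO point above.

    # A tiny, hypothetical sketch of pseudo-labeling ("one AI trains another").
    # The "teacher" is a hard-coded rule standing in for a trained model.

    def teacher_predict(x):
        """Pretend teacher model: labels a number as 'big' or 'small'."""
        return "big" if x > 10 else "small"

    unlabelled = [3, 42, 7, 99]

    # Step 1: the teacher generates pseudo-labels for data no human has classified.
    pseudo_labelled = [(x, teacher_predict(x)) for x in unlabelled]

    # Step 2: the student "trains" on those pseudo-labels -- here it simply
    # memorizes the smallest value the teacher ever called "big".
    threshold = min(x for x, label in pseudo_labelled if label == "big")

    def student_predict(x):
        return "big" if x >= threshold else "small"

    print(student_predict(50))  # "big"
    # The student only approximates the teacher (its threshold is 42, not 10),
    # and any labeling mistakes the teacher makes are passed straight along.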

Amazingly, Asimov wrote these stories in the 1940s, over a decade before Cornell University’s Frank Rosenblatt published the report describing an early version of a neural network. So there has been a long-term effort to create a framework for “safe AI,” but the question remains, “What still makes Artificial Intelligence, well… artificial?” If scientists are creating computer systems modeled after the human brain, why hasn’t a super-intelligence emerged yet? Mainly it is a problem of scale. The human brain is estimated to have around 100 BILLION neurons (give or take a few billion). Currently the largest artificial neural networks are roughly the size of a frog’s brain (about 16 million neurons). If you really want to get a sense of the immense complexity of the human brain, this article is a good start (and it contains wonderful illustrations!). But the differences don’t end there: even a single neuron in a human brain is VERY different from a “neuron” in an AI “brain.” At the most basic level, computers operate using transistors that are either on or off, a “1” or a “0.” Neurons are much more complex, as explained in the article “Surprise! Neurons are Now More Complex than We Thought!!”
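For a concrete sense of what an AI “neuron” actually is, here is a rough sketch of the textbook form (not any particular framework’s implementation): a weighted sum of inputs pushed through a fixed squashing function, followed by the back-of-the-envelope scale gap implied by the figures quoted above.

    # The textbook artificial "neuron": a weighted sum plus a fixed nonlinearity.
    import math

    def artificial_neuron(inputs, weights, bias):
        """Weighted sum of inputs plus bias, squashed into (0, 1) by a sigmoid."""
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    print(artificial_neuron([0.5, 0.2], [0.8, -1.5], bias=0.1))  # a single number in (0, 1)

    # The scale gap described above, using the figures quoted in this post.
    human_brain_neurons = 100_000_000_000  # ~100 billion
    largest_ann_neurons = 16_000_000       # roughly frog-brain scale
    print(human_brain_neurons / largest_ann_neurons)  # 6250.0 -- a roughly 6,250x gap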

If your brain isn’t aching by now (mine is), I’ll leave you with some additional reading material: the beginning of an interesting series from Mozilla (makers of the Firefox browser and all-around smart people), “Dismantling AI Myths and Hype.” The next post in this series will provide more detail on how these new technologies are affecting the everyday world we live in: everything from keeping farms running smoothly to the fears associated with self-driving cars.

Now if we were to combine Artificial Intelligence with Quantum Computing… (cue evil laugh)