Artificial Intelligence: As Soon As It Is, It Isn’t

For many years now, the term “artificial intelligence” has sat uncomfortably with me. What we call “artificial intelligence” today refers to any number of operations, performed by machines, that are normally attributed to human minds. These operations are many and varied – calculating, planning, problem-solving, learning, natural language processing, reasoning, and so on – and we generally collect them under the category “intelligence” when a human mind performs them. What we forget is that when a machine performs these operations, it exhibits intelligence in more or less the same way that a human mind does. Machines are artifices (techne, or “made things”), of course, but that fact doesn’t make everything machines do “artificial” any more than humans’ fundamentally organic constitution makes everything we do “real” or “natural.”

Or, as Calum Chace noted in his recent book Surviving AI: The Promise and Peril of Artificial Intelligence (2015): “we don’t call airplanes ‘artificial aviation’ to distinguish them from birds.”

Chace’s illustration gets at an important distinction. We humans have an at least two-millennia-long habit of contrasting the artificial to the natural. We also have a corresponding habit of contrasting the artificial to the real, which has been sedimented into our thinking for just as long. This is why a lot of people who work on AI, myself included, prefer terms like “machine learning,” “machine intelligence,” or “machine thinking.” A machine may be doing it, and a machine may not be natural, but what it is doing is real.

That may seem like a ticky-tack point, but it really does matter what we call things. And there’s a lot about calling machine intelligence “artificial” that concerns me. It rings false in deeply significant ways, similar to ancient and medieval scientists’ use of “aether,” or 18th- and 19th-century anthropologists’ use of “race,” or many contemporary scholars’ use of “neoliberalism.” If you’re describing the world with reference to a fictive, vague, or poorly articulated category, you’re mis-describing the world.

*This is not what AI looks like.*

One consequence of a loosey-goosey deployment of the term “artificial intelligence” is that it allows the things the term names to hide in plain view. (Ironic, of course, because according to Heraclitus, hiding is what physis loves to do!) In popular culture, we are constantly fed images and imaginings of AI that look like the highly anthropomorphized “robot overlord” above. So we fail to recognize how much AI is already all around us – and it is literally all around us, all the time – because we tend to stop calling things “artificial intelligences” once they’ve worked their way into our everyday reality.

It’s almost as if the moment AI appears integrated into our mundane lives, it ceases to appear qua AI. Our phones aren’t AI, they’re “smart.” Our self-driving cars aren’t AI, they’re “autonomous.” Our entertainment resources (Netflix, Pandora) aren’t AI, they’re just “TV” and “radio” now. Our digital assistants aren’t AI, they have actual names (Siri, Alexa, John Paul).

And so, while we fret about an imaginary, dystopian future under the rule of AI-enabled robot overlords, we are simultaneously utilizing, and being utilized by, AI technologies every day. Because we do not recognize that fact – because we keep referring to machine learning, machine thinking, and machine intelligence as “artificial” – we simply fail to see the reality of it all around us.

And that is very, very dangerous.

*This is not how AI works.*

There is still a lot to learn and understand about how human intelligence works, but considerably less mystery surrounds what we call “artificial intelligence.” Machine intelligence is just another step in the evolution of human intelligence; it extends and supplements what the so-called “natural” human mind can do, in much the same way that the automobile and the airplane extend humans’ natural capacity for mobility. Machine intelligence has made us more efficient, more effective, more connected, and capable of considering problems of such massive scale that they would literally overwhelm our poor meat-brains. It is time to think seriously (with our regular human minds) about how to work more cooperatively with machine intelligence, lest it overwhelm us. That will require some serious re-thinking of the largely unthinking distinctions we make between the real and the artificial, the natural and the technological, the human and the post- or non-human.

What we must do immediately – and what those of us who are educators ought to encourage our students to do – is stop thinking about AI as futuristic science fiction. Ceasing to refer to machine intelligence as “artificial” would be a huge step in the right direction, in my view. But we also need to re-center technology in our conversations about how social, moral, and political “goods” are going to be determined in the world we share with machines.

It is already the case that machines are making those determinations.

Not every idea needs a tool, but every tool needs an idea. Our most pressing problem right now is that we’re forging ahead recklessly with intelligent technologies, making very sophisticated tools with very unsophisticated and poorly-thought-through ideas. That is a recipe for disaster.
