This past weekend, at the Society for Phenomenology and Existential Philosophy (SPEP) conference, I heard a really fascinating panel dedicated to “The Promises of Polytheism” and I haven’t been able to stop thinking about it since. Now, as a rule, I’m not all that interested in theism(s) of the sort that most people would recognize, but I am very interested in the contemporary order of thinking, evaluating, and making where most of today’s quasi-gods are manufactured and where those contested deities reside– namely, technology. So, fair warning: what follows is at best only tangentially related to the actual substance of the panelists’ papers.
The three panelists– Ammon Allred (University of Toledo), Michael Norton (University of Arkansas at Little Rock), and Adriel Trott (Wabash College)– had each taken Jan Assmann’s 2003 text The Price of Monotheism as a common point of departure for thinking about aesthetics (Allred), the intellectual and discursive “ecology” of religions and the sciences (Norton), and politics (Trott). They were all fantastic papers– I recommend you contact the authors and ask for copies!– but Trott’s, in particular, really generated a lot of interesting questions for me relative to my own current research (in future technologies, artificial intelligence, big data, social media, and their sociopolitical effects on “human” life and thinking).
[No philosopher likes to have his or her conference paper hijacked by the ever-present “how-does-this-relate-to-MY-project?” audience member, so I apologize to Trott in advance for being that guy.]
Logos and the “Immanent Measure” of Plato’s Cave
Artificial Intelligence, Chatbots, and the (Possible?) Mutability of the “Immanent Measure”
As regular readers of this blog know, most of my current research has been focused on what can broadly be called “future technologies” (despite my insistence that almost none of my research interests should properly be called “future” technologies anymore), including developments in artificial intelligence, big data, and social media, and on how those current technologies are radically reshaping, redefining, and in many ways determining not only how we understand “humanity,” but also how we “humans” understand the sociopolitical/economic/moral world we share with one another and with the things we have made. Trott’s paper at SPEP was really generative for my own thinking. (So also were Norton’s and Allred’s. I regrettably committed the triple offense of asking all of the panelists some version of the how-does-this-relate-to-MY-project? question. Alas, mea culpa.) Of particular interest to me was how convincing I found Trott’s argument that logos is not only a valid, even well-founded, “immanent measure” for inside-the-Cave political thought, but also one that might rightly be advocated for over and against the philosophical/monotheistic standard of any transcendent-Truth X.
What follows is not so much a challenge to Trott’s account as it is a reiteration of what has unfortunately become a constant refrain of mine these days, namely, that the vast majority of professional philosophers– and I do NOT include Trott among them– are doing a really terrible job of thinking seriously about how contemporary technological advances complicate and problematize (I hate that word) the contributions that Philosophy can and should make to our understanding of the world we (“humans”) share.
Take, for example, the bizarro result of the Facebook chatbot experiment this past summer, which sounds sci-fi but was, in fact, straight science, hold the fiction. Facebook’s A.I. (artificial intelligence) developers designed two “chatbots” and gave them the task of negotiating a trade. The bots were basically charged with trading hats, balls, and books, each of which was assigned a predetermined value by the coders. So the “project” here, a very fundamental project in A.I. development, was to see how well the bots could work out the operations of a simple negotiation. The chatbots quickly “learned” the rules of trade, and they easily mastered several very humanlike strategies of negotiation (e.g., pretending to be very interested in an item, so they could later pretend they were making a sacrifice in giving it up). But they also did something that the programmers didn’t anticipate…
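(For the technically curious, here is a toy sketch– entirely my own invention– of the kind of setup just described: a fixed pool of items, each assigned a predetermined value, and a “payoff” for whatever share of the pool an agent ends up with. The item counts and values below are made up for illustration; Facebook’s actual bots were trained dialogue agents, not hand-written rules like these.)

```python
# Toy reconstruction of the negotiation task (illustration only; item counts
# and per-bot values are invented, not Facebook's actual numbers).

ITEM_POOL = {"hats": 2, "balls": 1, "books": 3}   # what's on the table

# Each bot is assigned its own (hidden) value for every item by the coders.
VALUES_BOT_A = {"hats": 1, "balls": 5, "books": 2}
VALUES_BOT_B = {"hats": 3, "balls": 1, "books": 2}

def score(values, share):
    """A bot's payoff: the predetermined value of every item it ends up with."""
    return sum(values[item] * count for item, count in share.items())

# One possible negotiated outcome: A takes the ball and one book, B takes the rest.
share_a = {"hats": 0, "balls": 1, "books": 1}
share_b = {"hats": 2, "balls": 0, "books": 2}

print(score(VALUES_BOT_A, share_a))   # 7
print(score(VALUES_BOT_B, share_b))   # 10
```

Nothing in a setup like this tells the bots how to talk to one another about the split; that part they had to work out in language, and that is where things got weird.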
Within two days, the chatbots invented a “new language” for negotiating with one another, which appeared (to programmers) to be English-based (the language of their coders), but which was nevertheless indecipherable to anyone but the bots. The bots’ “private” language was more than just slang or coded-shorthand, according to several prominent linguists who examined it, though it operated in many of the same ways that slang and coded-shorthand do in “human” language, i.e., as communicative tools for the more efficient accomplishment of tasks. So here we have perhaps the first evidence of machine learning in which we see the spontaneous manufacturing by artificial intelligences of something that can be recognized as a language, but which cannot be decoded algorithmically. Like, seriously, holy shit.
What did Facebook do when they realized what was happening? They unplugged the bots. In all the official statements I’ve read, Facebook and even a number of A.I. developers unconnected to Facebook have unanimously insisted that the chatbots were not unplugged because developers were afraid of what the bots were doing. I am unconvinced.
The lady doth protest too much, methinks.
To really understand why this chatbot story is not just another garden-variety tech story, and why you should care– not to mention what in the world it has to do with Trott’s paper– it is absolutely essential to understand just a little about what A.I. and “machine learning” developers do (or are attempting to do). Without getting too technical: what A.I. and machine-learning developers do, at its (very reductively rendered) base, is attempt to “code”– that is, algorithmically define– something akin to the operations that we recognize as “human thinking.” An algorithm, which is really just a rigorously determined set of ordered steps designed to accomplish a task, is an essential part of human thinking, one that in our everyday “human” brains and experience operates so quickly and seemingly naturally that the intricacy of its processes and rules remains mostly invisible to us. To the extent that the tasks we accomplish every day are algorithmically decided– and those tasks are many and varied, including making, evaluating, translating, trading, deciphering, and in many ways also communicating– they can be “coded” in such a way that machines can repeat them.
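(To make “algorithmically defined” concrete, here is a deliberately mundane example of my own– nothing to do with chatbots– of a task most of us perform mentally without ever noticing the steps, written out as the explicit, ordered procedure a machine would need.)

```python
# A minimal example of an algorithm: making change with the fewest coins.
# We do this kind of thing mentally and "invisibly"; a machine needs every step spelled out.

COINS = [25, 10, 5, 1]   # U.S. coin denominations, in cents, largest first

def make_change(amount):
    """Return how many of each coin to hand back, working from largest to smallest."""
    change = {}
    for coin in COINS:
        change[coin], amount = divmod(amount, coin)   # how many of this coin, and what's left
    return change

print(make_change(87))   # {25: 3, 10: 1, 5: 0, 1: 2}
```

That is all an algorithm is: the invisible steps made visible, and therefore repeatable by something that doesn’t “understand” them.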
Your calculator doesn’t “understand” mathematics. It “does” mathematics, because mathematics is only a regular, code-able system of rules and regulations. If coded correctly, your calculator never makes mistakes. And neither does your computer, for that matter. So far at least, as far as we know, all of our machines are just dumb boxes until we humans tell them what to do.
A.I. developers aim to make our machines more than just dumb boxes that follow rules, though. And this is where the chatbot story becomes interesting with respect to Trott’s proposal of logos as the “immanent measure” of the polis, I think.
If we understand at least some of the operations of distinguishing-the-better-from-the-worse– the operations of human reason (logos) that A.I. developers are attempting to algorithmically reproduce in machines– to be more or less accurately representative of at least one “aspect” of human thinking (i.e., mathematical or utilitarian or task-oriented thinking), and if we also want to insist that this sort of algorithmic thinking is not the whole (or even the most important aspect) of human thinking, then we really should ask ourselves what might happen when machine-thinkers are introduced into the polis as equal (or indistinguishable) participants in the deliberations of the polis, deliberations which have so far been the exclusive domain of the “human.”
Surely we all know, at this point, that the algorithms of Google and Facebook and Amazon, for example, determine our interests as much as (or more than) they reflect our interests. So it doesn’t take a super-sophisticated sci-fi imagination to understand the extent to which how-we-distinguish-the-better-from-the-worse (socially and politically, among humans) is already being contaminated with a kind of simulacrum of what Trott called the “immanent measure” of logos or human reason, a ham-handed copy of logos currently being coded into our everyday (both significant and mundane) interactions not only by computer programmers and economists and advertisers, but also (more surreptitiously) by social scientists who provide the data of “human life” in algorithmic form for use by the most intimate agents in our lives: bankers, legislators, law enforcement officers, doctors, psychologists, educators, and profit-oriented, exploitative, bureaucratic analysts of every ilk.
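(If you want to see that “determine as much as reflect” dynamic in miniature, here is a toy sketch of my own– not any platform’s actual code– in which the “interest profile” that decides what gets shown can only ever be updated by reactions to what the algorithm itself already chose to show.)

```python
import random

# Toy feedback loop: the algorithm only learns from reactions to its own picks,
# so the "profile" drifts toward whatever it already decided to show.
# (Invented sketch; no resemblance to any real platform's recommender.)

TOPICS = ["politics", "sports", "cooking", "outrage"]
profile = {t: 1.0 for t in TOPICS}          # inferred "interest" in each topic

def recommend(profile):
    """Show the topic the profile currently scores highest."""
    return max(profile, key=profile.get)

def simulate_click(topic):
    """Pretend the user clicks, some of the time, on whatever is put in front of them."""
    return random.random() < 0.6

for _ in range(50):
    shown = recommend(profile)
    if simulate_click(shown):
        profile[shown] += 1.0               # the profile now "reflects" the recommendation

print(profile)
```

The simulated “user” here has no preferences at all, and the profile still ends up registering a strong interest in one topic, because the profile can only ever come to reflect what it was already shown.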
Bots Don’t Care About What You Have In Common
And now, for the coup de grâce, allow me to return to that curious story of the Facebook chatbots, because here’s what we (self-professed philosophers) really need to think about that all the algorithmphiliacs won’t and don’t:
What might the introduction of a non-human logos mean for the human polis?
Assuming that logos is not some transcendent/divine measure to which one can appeal in order to settle disputes by authority, as (on Trott’s account) a monotheist or Philosopher-Tyrant might, but rather an immanent measure for distinguishing-better-from-worse– one that is not without rules, but is malleable inasmuch as it embeds itself in fundamentally social/political communication– what happens when certain agents come to occupy the political space who do not care about being understood by “the common”? That’s what happened with the Facebook chatbots, and Facebook unplugged them.
We can’t unplug them all. That train done left the station, y’all.
Whatever happened in the last U.S. Presidential election– and I think there’s still so much more to learn about that– it’s pretty much common knowledge now that chatbots and other algorithmically-programmed, social-media quasi-agents made a real difference in the way that so-called “real humans” politically distinguished better-from-worse. We weren’t all drugged and deluded and forced to vote a certain way, after all.
Yes, I get it, there are a number of ways to explain the Emergence of Trump and all of the post-truth, post-civility, fake-news, #NotAllX, pro-business, pro-gun, xenophobic, funhouse-mirrored free-speech morass that we see in our “common” discourse today. Structural racism and misogyny, combined with the total absorption of legislative interests by capitalist interests, are chief among the valid explanatory accounts. But so are bots.
A.I. bots are not just driving our cars and delivering our packages anymore. They’re re-ordering our social and political discourse, which means they’re re-ordering how we distinguish-better-from-worse, which means they’re re-ordering the logos that orders our polis. I don’t mean “the humans who are programming the bots” are doing this re-ordering. I mean the bots themselves are doing this re-ordering.
TL;DR Version Of The Above
We humans think that we make the rules for determining what is true and right in our shared world. Then we made a machine (an A.I. chatbot), taught it a really basic version of our rules, and now it’s making its own determinations. Oh, and we already gave it a key to the house.