This past weekend, at the Society for Existentialism and Phenomenology conference, I heard a really fascinating panel dedicated to “The Promises of Polytheism” and I haven’t been able to stop thinking about it since. Now, as a rule, I’m not all that interested in theism(s) of the sort that most people would recognize, but I am very interested in the contemporary order of thinking, evaluating, and making where most of today’s quasi-gods are manufactured and where those contested deities reside– namely, technology.  So, fair warning, what follows is at best only tangentially related to the actual substance of the panelists’ papers.

The three panelists– Ammon Allred (University of Toledo), Michael Norton (University of Arkansas at Little Rock), and Adriel Trott (Wabash College)– had each taken Jan Assmann’s 2003 text The Price of Monotheism as a common point of departure for thinking about aesthetics (Allred), the intellectual and discursive “ecology” of religions and the sciences (Norton), and politics (Trott). They were all fantastic papers– I recommend you contact the authors and ask for copies!– but Trott’s, in particular, really generated a lot of interesting questions for me relative to my own current research (in future technologies, artificial intelligence, big data, social media, and their sociopolitical effects on “human” life and thinking).


[No philosopher likes to have his or her conference paper hijacked by the ever-present “how-does-this-relate-to-MY-project?” audience member, so I apologize to Trott in advance for being that guy.]

Logos and the “Immanent Measure” of Plato’s Cave
In very, very broad strokes, Trott’s argument was that we need to rethink our largely-uncritical reception of the “traditional” reading of the Allegory of the Cave from Plato’s Republic Book VII. (You should check out an earlier version of Trott’s argument in her blogpost “Philosophy and Monotheism, Politics and Democracy,” which is not exactly the same as the paper she gave at SPEP but will give you a better sense of her position than my cursory recounting of it here.) As the received wisdom on the passage goes, Plato’s Philosopher, who has exited the Cave and seen the Truth, returns to the Cave to “liberate” its prisoners from their captivity to shadowy misrepresentations of reality. We (philosophers and readers of Plato) ought to be on the side of the enlightened Philosopher, it is presumed, because she is bringing back into the world of competing doxai (or “opinions”) what Trott called an “external measure,” capable of distinguishing between True and False accounts. This external measure is The Truth, it comes from “outside” the Cave, and it is as-yet-unknown to the residents of the Cave, who– again, it is presumed– ought to more or less take the Philosopher’s word for it.
Trott’s paper did a great job of elucidating just how counterintuitive and problematic this uncritical allegiance to the liberatory mission of the Philosopher– whom she rightly called “the Philosopher-Tyrant”– is, and she did an even greater job of drawing out the comparison to monotheistic establishments of transcendent, privileged, and “revealed” Truth as the measure for believers and unbelievers alike. On the monotheistic view, “God” (or God’s “Word”) occupies the same place as this rather reductive iteration of “Truth” does for Philosophy: it orders and makes sense of the human world, it provides an absolute measure for distinguishing between true and false, and– perhaps most importantly– it insists that there is no immanent measure apart from the one it rather tyrannically imposes.
What if, rather than presuming that the Philosopher’s report of a Truth “outside” (or the monotheist’s report of the revealed Word of God) is the only measure and, rather than presuming that the Philosopher/monotheist is a liberator, we instead saw them as tyrants, attempting to impose on our interpersonal deliberations (about the True, the Right, the Good) a standard with which we quite literally have no experience? That was the question that Trott asked, and her paper made a convincing case for the possibility that the received reading of Plato’s Allegory of the Cave is in error. Listening to her account, I was reminded of that great couplet from Eliot’s “Prufrock,” and I could imagine Trott imagining Plato saying That’s not what I meant / that’s not what I meant at all.
But if, as Trott argued, there may be an “immanent” measure already operational in the Cave, in the polis, capable of helping us distinguish between better and worse doxai, between the world as it appears to you and the world as it appears to me, what would that immanent measure be? This didn’t really come out until the Q&A, but Trott seemed to suggest that that immanent measure would be logos. It’s a complicated and interesting thing to figure out, how we might think of logos as an immanent measure for determining between “better” and “worse” (NOT “true” and “false”) logoi, without making logos akin to the Truth of the Philosopher or the God of the monotheist. But it’s not that strange, I think.


OK, now I’m going to propose a fast-and-loose explanation of what I mean, but I think this is more or less how philosophers and non-philosophers alike think about what we call “logic.” I think it’s also how we think of “mathematics.” That is to say, logic and mathematics serve as immanent measures for determining between better and worse discursive accounts of what-is and what-is-possible (in the case of the former) and for determining between better and worse functional accounts of what-is and what-is-possible (in the case of the latter). Logic appears to be “hardwired” into discursive human thinking in the same way that mathematics appears to be “hardwired” into the material order of the Universe. I don’t have to make a case for the “transcendent” reality of logic or mathematics, nor do I have to (tyrannically) convince you to “trust me” or to “believe” in the immanently ordering, regulative capacity of logic and mathematics. It is both presumed and evident in the interactions of any polis, any shared human world, that whatever we take to be a “better” account of the world (i.e., capable-of-being-shared-in-common, or intersubjectively “true”) will be measured by its adherence to the immanent strictures of logic and mathematics.
To wit, I can no more convince you that my social/political/moral discursive account of the world we share is better than yours if mine relies on logical contradictions than I can convince you that “we can definitely get that couch through your apartment door” if my functional account of the world we share disregards the basic mathematical ordering of physical space. 
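Just to drive home how un-mystical this sort of measure is, here’s a deliberately silly sketch of the couch scenario in code. The dimensions are invented; the point is only that the “measure” at work is nothing fancier than arithmetic.

```python
# Made-up dimensions, in inches -- the point is the arithmetic, not the couch.
door_width, door_height = 32, 80
couch_faces = [(38, 36), (38, 84), (36, 84)]   # width x depth, width x length, depth x length

# The couch clears the doorway only if one of its faces fits inside the door
# opening (ignoring clever pivoting tricks, to keep things simple).
fits = any(min(a, b) <= door_width and max(a, b) <= door_height
           for a, b in couch_faces)
print("We can definitely get that couch through your door:", fits)   # False
```
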
Whatever other transcendent Truth or God we may believe in, it is nevertheless the case that we’re working out the world we share together, most of the time, by virtue of what Trott called these “immanent measures,” logic and mathematics being chief among them (though I’d allow for the possibility of others). So, I’m all on board with Trott’s suggestion that we think of logos as the immanent measure of the polis, the place where doxai are determined to be better-or-worse accounts, and I really appreciated her attempt to more or less revalorize this immanent measure over and against the Philosopher’s Truth as it is so commonly taken to be celebrated in Plato’s Allegory of the Cave.
But hold up, now I’ve got some robot questions…

Artificial Intelligence, Chatbots, and the (Possible?) Mutability of the “Immanent Measure”


As regular readers of this blog know, most of my current research has been focused on what can broadly be called “future technologies” (despite my insistence that almost none of my research interests should be properly called “future” technologies anymore), including developments in artificial intelligence, big data, and social media, and how those current technologies are radically reshaping, redefining, and in many ways determining not only how we understand “humanity,” but also how we “humans” understand the sociopolitical/economic/moral world we share with one another and with the things we have made. Trott’s paper at SPEP was really generative for my own thinking. (So also were Norton’s and Allred’s. I regrettably committed the triple-offense of asking all of the panelists some version of the how-does-this-relate-to-MY-project? question. Alas, mea culpa.) Of particular interest to me was how convincing I found Trott’s argument with regard to logos as not only a valid, even well-founded, “immanent measure” for inside-the-Cave political thought, but also one that might rightly be advocated-for over and against the philosophical/monotheistic standard of any transcendent-Truth X.

What follows is not so much a challenge to Trott’s account as it is a reiteration of what has unfortunately become a constant refrain of mine these days, namely, that the vast majority of professional philosophers– and I do NOT include Trott among these– are doing a really terrible job of thinking seriously about how contemporary technological advances complicate and problematize (I hate that word) how we are thinking about the contributions that Philosophy can and should make to our understanding of the world we (“humans”) share.

Take, for example, the bizarro result of the Facebook chatbot experiment this past summer, which sounds sci-fi but was, in fact, straight science, hold the fiction. Facebook’s A.I. (artificial intelligence) developers designed two “chatbots” and gave them the task of negotiating a trade. The bots were basically charged with trading hats, balls, and books, each of which was given a predetermined value by the coders. So, the “project” here, a very fundamental project in A.I. development, was to see how well the bots could work out the operations of a simple negotiation. The chatbots managed to quickly “learn” the rules of trade, and they easily mastered several very-humanlike strategies of negotiation (e.g., pretending to be very interested in the item they owned, so they could later pretend they were making a sacrifice in giving it up). But they also did something that the programmers didn’t anticipate…
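For the curious, here’s a toy sketch of the kind of setup involved (my own illustration, emphatically not Facebook’s actual code): the bots share a pool of items, each bot is given private values for those items, and each is rewarded only for the total value it walks away with.

```python
import random

POOL = {"hat": 3, "ball": 2, "book": 1}            # how many of each item exist

def private_values():
    # each bot gets its own (hidden) valuation of every item type
    return {item: random.randint(0, 5) for item in POOL}

def reward(share, values):
    # what a bot is actually optimized for: the total value of its share
    return sum(values[item] * count for item, count in share.items())

bot_a_values, bot_b_values = private_values(), private_values()
proposal = {"hat": 2, "ball": 0, "book": 1}        # bot A proposes keeping these
leftover = {item: POOL[item] - proposal[item] for item in POOL}

print("A's reward:", reward(proposal, bot_a_values))
print("B's reward:", reward(leftover, bot_b_values))
```

Notice that nothing in the reward cares about how the bots talk to one another; it cares only about who ends up with what. Hold that thought.
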















Within two days, the chatbots invented a “new language” for negotiating with one another, which appeared (to programmers) to be English-based (the language of their coders), but which was nevertheless indecipherable to anyone but the bots. The bots’ “private” language was more than just slang or coded-shorthand, according to several prominent linguists who examined it, though it operated in many of the same ways that slang and coded-shorthand do in “human” language, i.e., as communicative tools for the more efficient accomplishment of tasks. So here we have perhaps the first evidence of machine learning in which we see the spontaneous manufacturing by artificial intelligences of something that can be recognized as a language, but which cannot be decoded algorithmically. Like, seriously, holy shit.
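To get a rough feel for how agents rewarded only for the task might drift into a shorthand that is perfectly systematic for them but reads as gibberish to us, here’s a purely hypothetical illustration (not the bots’ actual scheme), in which repeating a token encodes how many of an item the speaker wants.

```python
def encode(demand):
    # {"ball": 3, "book": 1} -> "ball ball ball book"
    return " ".join(" ".join([item] * count)
                    for item, count in demand.items() if count)

def decode(message):
    demand = {}
    for token in message.split():
        demand[token] = demand.get(token, 0) + 1
    return demand

msg = encode({"ball": 3, "book": 1, "hat": 0})
print(msg)           # "ball ball ball book" -- perfectly clear, if you know the rule
print(decode(msg))   # {'ball': 3, 'book': 1} -- and opaque if you don't
```
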

What did Facebook do when they realized what was happening? They unplugged the bots. In every official statement I’ve read, Facebook and even a number of A.I. developers unconnected to Facebook have insisted, unanimously, that the chatbots were not unplugged because developers were afraid of what they were doing. I am unconvinced.

The lady doth protest too much, methinks.

To really understand why this chatbot story is not just another garden-variety tech story, and why you should care– not to mention what in the world it has to do with Trott’s paper– it is absolutely essential to understand just a little about what A.I. and “machine learning” developers do (or are attempting to do). Without getting too technical, what A.I. and machine learning developers do, at its (very reductively rendered) base, is more or less attempt to “code”– that is, algorithmically define– something akin to the operations that we recognize as “human thinking.” An algorithm, which is really just a rigorously-determined set of ordered steps designed to accomplish a task, is an essential part of human thinking that, in our everyday “human” brains and experience, occurs so quickly and seemingly-naturally that the intricacy of its processes and rules remains mostly invisible to us. To the extent that the tasks we accomplish every day are algorithmically decidable– and those tasks are many and varied, including making, evaluating, translating, trading, deciphering, and in many ways also communicating– they can be “coded” in such a way that machines can repeat them.
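Here’s a mundane example of what I mean (the task and the numbers are just my illustration): making change, something we do “instantly” in our heads, spelled out as the rigorously ordered steps a machine needs.

```python
def make_change(amount_due, amount_paid, denominations=(100, 25, 10, 5, 1)):
    remaining = amount_paid - amount_due            # step 1: figure out what's owed back
    change = {}
    for coin in denominations:                      # step 2: try the largest coin first
        count, remaining = divmod(remaining, coin)  # step 3: how many of this coin fit?
        if count:
            change[coin] = count                    # step 4: record it, move to the next coin
    return change

print(make_change(163, 200))   # {25: 1, 10: 1, 1: 2} -- 37 cents back
```
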















Your calculator doesn’t “understand” mathematics. It “does” mathematics, because mathematics is only a regular, code-able system of rules and regulations. If coded correctly, your calculator never makes mistakes. And neither does your computer, for that matter. So far at least, as far as we know, all of our machines are just dumb boxes until we humans tell them what to do.
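If you want to see “doing without understanding” in miniature, here’s a toy sketch (mine, not how any real calculator is built) in which numbers are just nested wrappers and addition is nothing but two rules applied over and over.

```python
ZERO = ()

def successor(n):
    return (n,)                       # the "next" number is just one more wrapper

def add(a, b):
    # Rule 1: a + 0 = a.   Rule 2: a + successor(b) = successor(a + b).
    return a if b == ZERO else successor(add(a, b[0]))

def to_int(n):
    return 0 if n == ZERO else 1 + to_int(n[0])

two = successor(successor(ZERO))
three = successor(two)
print(to_int(add(two, three)))        # 5 -- right every time, "understood" never
```
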















A.I. developers aim to make our machines more than just dumb boxes that follow rules, though. And this is where the chatbot story becomes interesting with respect to Trott’s proposal of logos as the “immanent measure” of the polis, I think.

If we understand at least some of the operations of distinguishing-the-better-from-the-worse– the operations of human reason (logos) that A.I. developers are attempting to algorithmically reproduce in machines– to be more or less accurately representative of at least one “aspect” of human thinking (i.e., mathematical or utilitarian or task-oriented thinking), and if we also want to insist that this sort of algorithmic thinking is not the whole (or even the most important aspect) of human thinking, then we really should ask ourselves what might happen when machine-thinkers are introduced into the polis as equal (or indistinguishable) participants in the deliberations of the polis, which has so far been the exclusive domain of the “human.”

Surely we all know, at this point, that the algorithms of Google and Facebook and Amazon, for example, determine our interests as much as (or more than) they reflect our interests. So it doesn’t take a super-sophisticated sci-fi imagination to understand the extent to which how-we-distinguish-the-better-from-the-worse (socially and politically, among humans) is already being contaminated with a kind of simulacrum of what Trott called the “immanent measure” of logos or human reason, a ham-handed copy of logos currently being coded into our everyday (both significant and mundane) interactions by not only computer programmers and economists and advertisers, but also (more surreptitiously) by social scientists who provide the data of “human life” in algorithmic form for use by the most intimate agents in our lives: bankers, legislators, law enforcement officers, doctors, psychologists, educators, and profit-oriented, exploitative, bureaucratic analysts of every ilk.
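A deliberately crude sketch of that feedback loop (mine, not any actual platform’s code): a recommender that shows you more of whatever you clicked, so that before long your “interests” are partly its own artifact.

```python
import random
from collections import Counter

TOPICS = ["politics", "sports", "cooking", "tech"]
weights = Counter({t: 1.0 for t in TOPICS})        # start with no real preference

def recommend():
    # pick a topic with probability proportional to its current weight
    r = random.uniform(0, sum(weights.values()))
    for topic, w in weights.items():
        r -= w
        if r <= 0:
            return topic
    return TOPICS[-1]

for _ in range(500):
    shown = recommend()
    if random.random() < 0.5:                      # the user clicks about half the time
        weights[shown] += 1.0                      # and the click is fed back as "interest"

print(weights.most_common())   # a lopsided profile, made as much as measured
```
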





















Bots Don’t Care About What You Have In Common


And now, for the coup de grâce, allow me to return to that curious story of the Facebook chatbots, because here’s what we (self-professed philosophers) really need to think about– and what all the algorithmphiliacs won’t and don’t:

What might the introduction of a non-human logos mean for the human polis?




Assuming that logos is not some transcendent/divine measure to which one can appeal in order to settle disputes by authority, as (on Trott’s account) a monotheist or Philosopher-Tyrant might, but rather is only an immanent measure for distinguishing-better-from-worse– a measure not without rules, but malleable inasmuch as it embeds itself in fundamentally social/political communication– what happens when certain agents come to occupy the political space who do not care about being understood by “the common”? That’s what happened with the Facebook chatbots, and Facebook unplugged them.

We can’t unplug them all. That train done left the station, y’all.

Whatever happened in the last U.S. Presidential election, and I think there’s still so much more to learn about that, it’s pretty much common knowledge now that chatbots and other algorithmically-programmed, social-media quasi-agents made a real difference in the way that so-called “real humans” politically distinguished better-from-worse. We weren’t all drugged and deluded and forced to vote a certain way, after all.

Yes, I get it, there are a number of ways to explain the Emergence of Trump and all of the post-truth, post-civility, fake-news, #NotAllX, pro-business, pro-gun, xenophobic, funhouse-mirrored free-speech morass that we see in our “common” discourse today. Structural racism and misogyny, combined with the total absorption of legislative interests by capitalist interests, are chief among the valid explanatory accounts. But so are bots.

A.I. bots are not just driving our cars and delivering our packages anymore. They’re re-ordering our social and political discourse, which means they’re re-ordering how we distinguish-better-from-worse, which means they’re re-ordering the logos that orders our polis. I don’t mean “the humans who are programming the bots” are doing this re-ordering. I mean the bots themselves are doing this re-ordering.

TL;DR Version Of The Above

We humans think that we make the rules for determining what is true and right in our shared world. Then we made a machine (an A.I. chatbot), taught it a really basic version of our rules, and now it’s making its own determinations. Oh, and we already gave it a key to the house.
