Humanity: How To Tell Your Students They’re The End Of It

For the last couple of semesters, I’ve been trying to introduce my students, in as gentle and nonthreatening a manner as possible, to the idea that they may very well be the last of “humanity” as we know it.  This has not been a well-received speculative proposition.

In my defense, I really make an effort to take the long route around to this (admittedly disorienting and destabilizing) insight. Like, I literally start with the Big Bang. I do this, in part, because it gives me a chance to explain what a “singularity” is, a concept that will be absolutely essential for them to understand before I get to the mic-drop later. The second reason I start with the Big Bang is that I want to slowly unfold a complex story of a Universe, our Universe, which against all odds, has been from its inception capable of encoding information in increasingly complex ways.

[For what it’s worth, that is the long and short of my story about our Universe, i.e., ours is a Universe that, as unlikely as it is, is capable of encoding information, and everything that exists or has existed in the Universe has been just an evolutionary moment (Physis? Logos? Arche? idk) on the timeline of increasingly complex systems of information-coding.]

For my students, I begin with an account of our Universe’s primordial singularity (infinite density), which “exploded” (or, more accurately, “amplified”) into the seeds (atoms) of what would later form the large-scale structure of our Universe, a Universe that is capable of encoding more and more complex information. That Universal capacity is the first scene in the move from Physics, as we now understand it, to Chemistry (when carbon atoms combined into molecules, the first complex information structures). Then, a billion years later, what we now call Chemistry made way for what we now call Biology, with the evolution of the complex molecule DNA (which is, more or less, a lengthy string of information that determines the “programs” of living things). Biology, in turn, made way for Neurology with the emergence of the first “nervous systems” in organic life-forms. The evolutionary aggregation of neurons carved the path to what we call “brains,” and eventually to the super-sophisticated neural system we now recognize as the (proto-human) brain, which, with its equally amazing and unlikely neocortex (capable of memory, language, art, and so much more), marked the first massive jump in the Universe’s billions-of-years project of encoding information in complex ways: from biology to neurology.

Now, add in just one more bit of evolutionary serendipity– opposable thumbs!– and we can see the second massive jump: from neurology to technology. But it’s important to remember, as I remind my students, that “technology”– even on a very simple understanding of “technology” qua “craftsmanship” or “made things”– was still a long time coming. After all, “life” (Zoe in ancient Greek, or Eve according to Hellenized Jews) had already been in existence for more than 3 billion years before the first proto-humans appeared. It took at least another 5 million years before anything resembling the contemporary human, with its fancy-schmancy neocortex, evolved. And it was only about 80 thousand years ago that, voila!, they/we humans first started making things (techne).

What we call “civilization,” a gross and idiosyncratically human designation, has only been around for about 6 thousand years, even though it appears, to our pathologically myopic species, as if the Universe manufactured in an instant (voila!) an animal life-form that could itself make things, things that could encode information better than any organic life form could.

To reinforce the point, and for scale, at this point I show my students the following two graphic timelines: The first is a timeline of “life” on Earth. The second is a timeline of “human life” on Earth.

I have found that most students are still with me at this point in the story, as they (mistakenly) believe that I am more or less convening a Human pep rally. Bless their hearts. Things are about to take a dramatic turn.

Over the next couple of weeks, we talk A LOT about emergent technologies, about artificial intelligence and algorithms and androids and genetic/biological engineering and metadata and social media and surveillance and, in sum, the manner in which hardly any of us have fully reckoned with the increasingly indiscernible distinction between our meatspace/IRL selves and our digital selves. I have my students watch a couple of episodes of Black Mirror (“Be Right Back” and “White Bear”), as well as the original 2010 documentary Catfish. I introduce them to Hiroshi Ishiguro’s Geminoid Project (including the most recent Geminoid F), and Masahiro Mori’s concept of the “uncanny valley” (about which I have written extensively in a series on this blog). We take up the case of the so-called “blade runner” Oscar Pistorius, as well as tech/human-hybrids of a much more mundane ilk, like Supreme Court Justice Ruth Bader Ginsburg, who in 2014 had a stent placed in her heart and so is, technically, a “cyborg.” (Fwiw, by my count, at least 6 of the current 9 Supreme Court Justices are cyborgs. SCOTUS should stand for Supreme CYBORGS of the United States!) In the course of all of these classes, of course, I discuss with my students what makes a “human” a “human,” what can or cannot be added or removed from what we think a “human” is in order for it to remain so, and how the borders that we want to draw around (borrowing the parlance of Foucault) both the word and the thing “human” are increasingly porous.

Then and only then do I give my students a quick-and-easily-digestible account of the “posthuman”, namely, a person or entity that exists in a state beyond being human. They should, at this point in my course, have more than ample resources for considering such a possibility, though we all surely know that “having ample resources for understanding” has, quite frequently in human history, amounted to not understanding at all.

Nevertheless, this is the moment, after many weeks of careful set-up, that I drop the following graphic on students: a “timeline” of the Posthuman. (NB: there are a number of more or less speculative “predictions” with regard to the “posthuman future” available on the internet. I’ve elected to use this one from Ian Pearson, not because I endorse the broad outline of Pearson’s “futurist” predictions, but rather because, in the course of my own extensive research on emergent technologies, Pearson’s timeline strikes me as not only the most probable, but also the most conservatively realistic):

Let me translate this graphic for you as I do for my students: We’ve already, today, surpassed the point that the graph above marks as “50 years” in the future. That is to say, we’ve already achieved “robotus primus” (the ideal robot) and “homo cyberneticus” (the rudimentary merging of the robot/machine and the human). We are more or less capable at present, and have been for at least the last 30 years, of manufacturing “homo optimus” through either genetic engineering or straight-up eugenics. Only two things stand (thankfully?) between contemporary humanity and the manufacturing of “homo optimus”: (a) humans’ enduring moral/political dispute over what constitutes the “optimal” iteration of the “human,” and (b) our collective, historically-sedimented moral aversion to eugenics programs, broadly construed. We are, quite literally, moments away from “homo hybridus,” and arguably there already.

Look, I’ve more or less immersed myself in research on this stuff for the last five years. I am here to tell you, in no uncertain terms, that the graph above is a conservatively-estimated timeline. We (humans) have already exceeded the predictions of the next 50 years as depicted on this graph. Even if it remains the case that most “humans” who are presented with this timeline are disinclined to believe it– including ALL of my students and MOST of my colleagues/friends (many of whom think I’m crazy or just indulging sci-fi-futurism)– it remains the case that experts on the matter (of which I am one) are in large part inclined to confirm the predictions above. Moreover, we got #receipts.

This is NOT science fiction. Now, in the account that I narrate to my students, I tell them that this is the more or less inevitable consequence of a Universe that has been inclined since its inception (and I do not think that it has to have been “intelligently-designed” so) to evolve more and more complex systems of encoding information. We have to think of evolution that way now. Not as more and more complex organisms, but as more and more complex information systems.

And so, it is only at this point in my course that I inform my students:


YOUR CHILDREN WILL (MOST LIKELY) BE POSTHUMAN.

Look, I’ve been teaching Marx for more than a decade now, so I am totally attuned to the fact that students– in my case, 96% of whom are proletarians– find it very difficult to step outside of their received wisdom about themselves and voluntarily enact change in their own interest. I can teach Marx’s account of “alienated labor” all day every day, and I can have students in my class shouting AMEN! to the rafters all day long, but it is STILL the case that few, if any, of them will actually get out in the streets and #Fightfor15 or protest in support of #UBI or even collaborate with others to make their lives incrementally less miserable. And yet, it remains the case that, in the course of a normal classroom lesson on Marx, no matter how counter-intuitive Marx’s insights are to the lived experience of average American college-aged students, they still get it. They get that they are exploited. They get that the logic of Capital does not serve them. They get that their future, if it is to be free and non-exploitative, requires thinking of themselves as part of a complex system (a class or a race or a people) rather than as an individual, which is the only (and inadequate) discourse and vocabulary that classical liberalism has provided them thus far.

But, lo and behold, they WILL NOT think of the posthuman. They’ve been socialized to think of not only technology, but also the environment, social and political structures/collectives, and art as more or less the equivalent of existential “extras.” Yeah, I’ll take a shitty 9 to 5 job, with some music, clean water, and a smartphone on the side. Students are passionately disinclined to let go of their ideological tethers to “the human,” and they refuse to untether themselves from all the other Classical Enlightenment (Western) Liberalism conceptual anchors– individuality, Reason, civilization, proceduralism, white supremacy, patriarchy, heteronormativity, meritocracy, etc.– that, if cut loose, would leave them unmoored, unvalidated, discredited, and alone.

Why not, I’ve been trying to ask my students this semester, really consider another (and really posthuman) way to organize their perception of who they are and the world they inhabit? Why not loosen their claim on themselves as “an individual,” their hold on “the human” as the organizing principle of the world in which they live, and think, rather, of the complex systems they inhabit, make possible, forward or disrupt?

If we could just, for a moment, press “pause” on our millennia-old commitment to the Protagorean idea of the human as the measure of all things, or the monotheistic commitment to the human as the steward of the Earth, or the instantiation, covenant-keeper, or servant of some God, or if we could just relieve ourselves for a moment of our commitments to (or complicity with) white supremacy, our commitment to (or complicity with) Capital and the ideology of private property that undergirds it, or to any malformed revision of the mission civilisatrice… we just might be able to re-think something that looks like real human freedom.

Individual human beings are, after all, just bits in infinitely more complex, infinitely more impactful, and infinitely more important bytes. The problem is that we are the kind of animal that has been trained not to think of ourselves that way, because we think (wrongly) that thinking of ourselves as insignificant qua individuals is apostasy, when it is really FREEDOM.

I sometimes show my students the following video of a flock of starlings, as a way of reinforcing my narrative about our Universe being the kind that is only “interested” (and I use that in a non-anthropomorphic sense, though maybe not in a non-cognitive sense, idk) in evolving more and more complex systems, more and more complex manners of encoding information.


Borrowing from Daniel Bario-O’Neil, I tell them: “Each bird follows a hundred unconscious rules as an individual. Simultaneously, the whole group is reacting to a hundred varying stimuli at every moment. It sounds like it would result in chaos, but it doesn’t; built into the architecture of those rules and responses is a network of unconscious communication—tiny, largely invisible signals that dictate how the starlings continuously coordinate as, in effect, one system. This is complexity at work.”
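If you want to see that kind of complexity at work in a few dozen lines, here is a minimal, purely illustrative sketch (loosely following Craig Reynolds’ classic “boids” rules of cohesion, alignment, and separation; it is not a model of actual starlings, and every parameter in it is made up for demonstration). The point is only this: each simulated bird consults nothing but its local neighbors, and yet the flock moves as one system.

```python
# A toy "boids"-style flock: three local rules (cohesion, alignment,
# separation) and nothing else. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N = 200                                   # birds in the flock
pos = rng.uniform(0, 100, size=(N, 2))    # positions in a 100x100 box
vel = rng.normal(0, 1, size=(N, 2))       # initial headings

NEIGHBOR_RADIUS = 10.0    # how far a bird can "see"
SEPARATION_RADIUS = 2.0   # personal space
MAX_SPEED = 2.0

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists > 0) & (dists < NEIGHBOR_RADIUS)
        if not neighbors.any():
            continue
        # Rule 1: cohesion -- drift toward the local center of mass
        cohesion = pos[neighbors].mean(axis=0) - pos[i]
        # Rule 2: alignment -- match the neighbors' average heading
        alignment = vel[neighbors].mean(axis=0) - vel[i]
        # Rule 3: separation -- steer away from birds that are too close
        too_close = (dists > 0) & (dists < SEPARATION_RADIUS)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.1 * separation
        # No bird gets to "decide" to outrun the flock
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:
            new_vel[i] *= MAX_SPEED / speed
    # Everyone moves at once; the box wraps around at the edges
    return (pos + new_vel) % 100, new_vel

for _ in range(500):
    pos, vel = step(pos, vel)
```

Plot the positions every few steps and something murmuration-like shows up, not because any individual bird intends it, but because the coordination is built into the rules themselves.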

And for the students who think, “Hold up a sec, I’m not following a hundred unconscious rules or reacting to a hundred varying stimuli at every moment. I AM A FREE AND RATIONAL INDIVIDUAL MAKING DECISIONS IN MY OWN INTEREST!”… I say: remember three weeks ago when we talked about algorithms?

The challenge of thinking seriously about emergent technologies, not to mention the “posthuman,” is not primarily about thinking the “loss” of freedom, or “loss” of individuality, or the “loss” of self. It’s about thinking freedom, individuality, and personhood (whatever that is), without borders–geographical, political, ethical, corporeal, or even cognitive. It’s about thinking “oneself” as maybe not indistinguishable from the flock, the village, the demos, but ontologically inseparable from them, and thinking about one’s surviving and thriving as NOT primarily directed by “rational self-interest,” but rather by the interests of the common.
