We’re nearing the end of the semester and I’m wrapping up two of the most exciting and intellectually invigorating courses I’ve taught in a long time. One of them was an upper-division undergraduate course entitled “Technology and Human Values” (syllabus here). The other was an intro-level undergrad Ethics course called “Contemporary Moral Issues” (syllabus here), half of which is a historical survey of moral theory and the other half of which is primarily focused on contemporary moral issues surrounding technology.
Full disclosure: I’m currently in the process of writing a book on emergent technologies, and I have an extremely heavy (usually 5/5) teaching load, so I’m always eager to take advantage of opportunities to combine my course prep-time with my research time. For the past couple of years, I’ve pretty much gone all-in on that combination. This semester, I perfected it.
Maybe I should have seen this coming, but one of the things that I realized this term was how painfully little students understand about the technologies that they use every day and that thoroughly shape their lives. (To be fair, almost everything that I say in the following about “students” also applies to most people I know.) I’m talking about very basic technologies here– like search engines and GPS navigation and social media platforms and genetic testing and autocorrect– all of which are current technologies made possible by what we call “narrow AI,” so commonly used by all of us as to be practically invisible. (I’ve written about that “disappearing tech” phenomenon here.) When we talked about variants of artificial intelligence in class this term, I noticed that students had a tendency to either grossly overestimate AI capabilities (a la “robot overlords”) or grossly underestimate them (a la “super advanced calculators”). Leaving aside the monumental importance and ubiquity of AI, I was dismayed to realize that just explaining how algorithms work was a challenge, even to STEM majors.
In fact, somewhere around the sixth week of my upper-division course, one of the students raised her hand and said: “I don’t think when I say ‘contemporary technology’ I mean the same thing as when you say ‘contemporary technology.’ The things you’re talking about sound like science-fiction to me. Can you describe what you’re thinking of when you say it?”
A few things about this incredibly productive moment in my course:
First, kudos to that student for asking the question that I am sure everyone else in the room wanted to ask. Second, this is exactly the sort of question that, if it remains unasked, can totally derail a course. (Check out my “Why I Invited Students To Give Me The Finger” post on exactly this problem of unasked questions in class!) Third, her question served as a timely and important reality-check for me. I have been so deep into my research on emergent technologies that I actually sounded delusional to regular people! And, fourth, I realized not only how deficient my students’ understanding of contemporary tech was, but that THIS IS A REAL PROBLEM.
If students don’t understand (and only barely recognize) the most quotidian technological capabilities all around them, the ones they use every day, how can we expect them to be forward-looking in their evaluations and assessments? Despite all of our emphasis on STEM in higher ed, we’re teaching students how to make, to repair, and to manage– or, in many cases, just to babysit– machines that they do not fully understand, about which they rarely ask non-technical questions, and the implications of which they are neither equipped nor encouraged to appraise.
This kind of ignorance and apathy will be our undoing as a species, if it is not already.
If I had to put a finger on it, the other crucial insight from this semester was that, perhaps now more than ever before in human history, it’s hard to think about the future. Of course, that’s partially because we can’t know the future– duh– but that has always been true. What I think distinguishes our contemporary efforts to reckon with the future is the very real and imminent possibility of our extinction as a species, which challenges not only our understanding, but also our imagination. I might be inclined to compare it to the early 20th C. threat of nuclear power– except the “nuclear threat” permitted obvious diplomatic means of averting disaster– or the bubonic plague– except that humanity survived the plague three times.
The only analogue, really, is climate change.
We’re getting poorer, sicker, stupider, less relevant, less free, and more robotic by the day.
If you don’t know, it is already possible (thanks to CRISPR technologies) to edit the human genome not only at the somatic level, but also at the germline level, which could easily accomplish the very worst aims of 19th and 20th C. eugenics programs by the time your great-grandchildren are born.
Deep ML and neural-network AI research is progressing so quickly that if– I would say “when”– general AI is achieved, you and I, mere plebes, will likely not know about it. Tens of millions of jobs previously done by humans will be automated within the next decade, including the most obvious ones (like drivers and pilots, postal workers, radiologists/pathologists, travel agents, “customer service,” bankers, financial analysts, accountants, and everyone in the print industry), but also the less obvious ones (like nurses, surgeons, waiters, farmers, sex workers, mechanics, judges, even teachers/professors).
We, humans, are this close to being considered by whatever advanced intelligence replaces us the way we, humans, consider the great apes.
Despite all this, I remain a techno-optimist. Unlike the dumpster fire that is our current environmental sitch, we still have a fighting chance to commandeer the ship of technology. What is more, commandeering that ship may be our last and best hope for extinguishing the climate-change dumpster fire.
The clock is ticking, though. Faster than you think.
So, if you’re interested in these questions and if you’re of a mind that the Gen Z cohort of undergrads should be, too, here’s a list of eminently important texts that I used in my courses this semester. You should read them. You should teach them. You should stand on the street corner and pass them out.
I’ve divided the following into four general categories: (1) texts on social media, its history and implications, (2) texts on algorithms, artificial intelligence, and machine learning, (3) texts on actually “emergent” technologies (so, things that are not yet real, but will be soon), and (4) texts that I haven’t yet taught but I will be incorporating into my classes in the next few semesters. This should go without saying, but there are plenty of fantastic texts that are not listed here that would also be great. I’m just listing the books that I’ve tried and found to be successful at the undergraduate level.
___________________________________________________________________
[ Social Media: Its History and Implications ]
Earlier in the semester, I was really doubting my decision to assign texts on social media, since Zuckerberg et al. were in the news practically every other day. Then I remembered that (a) students don’t read the news and (b) they DGAF about Facebook, so I’m glad that I kept Tim Wu’s The Attention Merchants: The Epic Scramble to Get Inside Our Heads and Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest on the syllabus. Wu’s text is a history of how we got here, to the era of social media behemoths, who don’t sell us goods or services so much as they capture our attention and then sell our attention to others who want to sell us goods and services (or ideologies), or who want our data for more nefarious purposes. It’s eminently readable, with a lot of great anecdotes, and the new edition includes an extra chapter arguing that Trump is the first “attention merchant President.” (The last chapter is fantastic!) Tufekci is one of my favorite tech writers today, and I would assign almost anything she writes. One challenge I noticed with teaching Twitter and Tear Gas is that students are woefully under-informed about the Arab Spring uprisings, but I was able to translate many of her insights about “movement cultures” with reference to movements closer to my students’ experience (esp. #BlackLivesMatter and #DefendDACA). I think Tufekci’s text generated some of the most interesting questions and conversations from students this semester.
*An aside about teaching social media: Social media has been around for less than two decades, and the primary demographic of users for each platform changes dramatically and quickly. By the time we started talking/teaching about social media, platforms like SixDegrees and MySpace were already long gone or irrelevant. For GenXers like myself, our go-to references are Facebook and Twitter. Those are still the most populous platforms, but they are not the platforms used by college-aged students today. So, for your handy reference, I recommend the following taxonomy that I just made up:
Yes, yes, I know: Facebook and Instagram (and WhatsApp) are all part of the Zuckerberg Empire, and that’s an important thing to talk about if you want to talk about the business of social media. But the platforms are very different, as are their user demographics, and anecdotes about one rarely translate well to the others.
___________________________________________________________________
[ Algorithms, Artificial Intelligence, and Machine Learning ]
When I’m thinking about what books to assign on AI (broadly speaking), the following criteria are most important to me:
- Accessibility: How much technical expertise is necessary to understand the text? Can it hold the attention of a non-STEM undergraduate major while at the same time not seeming “rudimentary” to a STEM undergraduate?
- Authorial Disposition: Is the text pessimistic/apocalyptic or optimistic about AI? Does the author provide compelling evidence for their pessimism, or concrete suggestions for the rest of us, going forward, to accompany their optimism? Is the author a Luddite?
- Timeliness: In general, I don’t assign tech texts that weren’t written in the last 5 years.
- Philosophical Heft: Does the text ask important questions of meaning and value? Does the text put forward, explicitly or implicitly, a theory of mind, or technology, or humanity, or intelligence, or society, or politics, etc, etc? Bonus points if it employs actual philosophers.
- Explanatory Force: Will the reader have (at least) a nodding familiarity with the vocabulary, the capabilities, and the limitations of artificial intelligence after reading it? Will they be able to recognize (and assess) the capabilities/limitations of emergent AI technology in their lives as a result?
All four of the texts above tick every box. For use in lower-level courses, or just as book-gifts for your parents/grandparents (or Luddite friends), I highly recommend Hannah Fry’s Hello World: Being Human in the Age of Algorithms, which I used in my intro Ethics course this term with great success. I also think that Brian Christian’s The Most Human Human: What Talking With Computers Teaches Us About What It Means To Be Alive is super-accessible to intro-level undergrads, and it has the added advantage of actually referencing a lot of the philosophers and texts that you would normally include in an intro Philosophy course.
What To Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence is an anthology of very short (2-3 pages max) reflections from, as the title states, “today’s leading thinkers,” and many professional philosophers are included among them. (Note: I would only use that text in an advanced course on Philosophy of Mind or, as I did, Philosophy of Technology. It definitely requires some prior expertise in AI and ML.)
The outlier here is James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era, which is truly apocalyptic in its disposition. (The first 12 or so pages are among the most terrifying you will ever read.) I include Barrat mostly to keep my own techno-optimism in check, but it is a fantastic text.
___________________________________________________________________
[ Actually “Emergent” Technologies ]
As I said above, I think it’s very risky business to try to talk about actually emergent technologies in classes where basic tech literacy can’t already be assumed. Doudna and Sternberg’s A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution might be an exception to that warning, as I think it can (and should) be taught in most Medical Ethics courses.
Jennifer Doudna is, of course, the multiple-prize-winning inventor of CRISPR technology. Since her discovery of that game-changing gene-editing technology in 2012, she has also been the loudest and harshest critic of its use. A Crack in Creation is definitely one of the most accessible introductions to CRISPR capabilities, and next to AI there is no more important emergent tech than CRISPR today... so, really, everyone should read it.
Kate Devlin’s Turned On: Science, Sex and Robots– the title says it all– is a hilariously fun read, and it hides some of the most important philosophical questions about emerging tech just underneath all that fun. I often tell my students that when we talk about coexisting with our coming “robot overlords,” people seem to have only two questions: (1) will they kill us?, and (2) can we have sex with them? Devlin takes on the latter question with seriousness and aplomb... and exactly zero censorship. Hers is one of the very few emergent tech books that I’ve read in the last 5 years that actually changed my mind.
The other two texts are considerably more difficult.
Weinersmith and Weinersmith’s Soonish: Ten Emerging Technologies That Will Improve and/or Ruin Everything poses a serious challenge to both ordinary understanding and ordinary imagination (though the authors assuage the former a bit with well-placed cartoon cels throughout). Still, it’s hard to wrap your mind around things like fusion energy and bio-nanotechnology without some advanced STEM expertise. So, this text can be a worthwhile, but uphill, slog. Similarly, I think Michio Kaku’s The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind requires (at least) some familiarity with basic questions in Philosophy of Mind, and considerable familiarity with brain-computer interface technologies. (It is a literally mind-blowing text, though!) I would caution against adding either of these texts to your syllabus if you haven’t already planned for several weeks on algorithms, artificial intelligence, and machine learning.
___________________________________________________________________
[ Books I Haven’t Taught Yet, But Will Soon ]
I should say, first, that I’ve taught “selections” from some of the texts above (O’Neil, Noble, and Eubanks), though I haven’t yet taught any of them in their entirety. Most of these belong in the same general category as the “Algorithms, Artificial Intelligence, and Machine Learning” texts that I have used and listed above. Hannah Fry’s Hello World was a smashing success in my course this term, and students were not only interested in the text, but shocked and dismayed to learn how many areas of their lives are determined by algorithms and AI. Now that I know how well this topic lands, I plan to exploit the bejesus out of it. Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy has already established itself as a canonical tech-text, and I look forward to pairing it with Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and Eubanks’ Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. These three texts are, in my view, the very best accompaniments to talking about the “digital divide.”
I’ll also be adding Bernard Harcourt’s excellent Exposed: Desire and Disobedience in the Digital Age, which I read only recently, after hearing a lecture this past February by Alberto Moreiras that employed Harcourt’s work. (I wrote about Moreiras’ lecture here.) Both Moreiras and Harcourt are considerably more pessimistic about technology than I am, so I consider the inclusion of Exposed a kind of self-check along the same lines as Barrat’s Our Final Invention. I’ll also be including Turkle’s Alone Together: Why We Expect More from Technology and Less from Each Other for the same reasons, namely, that a not-insignificant number of my students have a general disposition towards technology as alienating and scary. That is not a disposition that I share, but it is one I want to carve out more time to seriously discuss.
___________________________________________________________________
That’s it. There’s your starter list of already tried-and-tested texts for your next syllabus. Now, go do the thing.
I know there are competing ideologies, ginormous egos, and a shit-ton of dollars tied up in this endless, ridiculous, and ultimately self-defeating “STEM vs. humanities” battle that we keep fighting, but both sides are going to have to give a little if we hope to survive the current neoliberal onslaught with higher education– not to mention a reasonably well-educated citizenry– intact. To my colleagues in the humanities, I would say: if you’re not regularly incorporating questions concerning technology into your courses, you are doing a great disservice not only to your students and your discipline, but also to your community. And, to my colleagues “across the aisle” in STEM, I would say: if you are not actively encouraging your students to take courses in the humanities, even beyond what they are “required” to take, and to take humanities courses seriously as part of a well-rounded STEM education, you are also doing a great disservice to your students, your disciplines, and your community.
As Santayana wrote: those who cannot remember the past are condemned to repeat it. I’ll add: those who cannot anticipate the future are dead on arrival.
And there’s just no hope whatsoever of anticipating the future if you can’t look around your world as it is now and understand something about what’s going on there.