ICYMI, I posted the first iteration of “What To Assign If You Want To Teach The Future” last year at the conclusion of my advanced seminar called “Technology and Human Values.” I’m now teaching that course every semester and, because both emerging technology and the scholarship about it are being produced at a mind-boggling pace, I’m finding that I need to replace some of my required texts every term with even-more-recent texts. So, I thought I’d post another installment for anyone interested in the new literature, with a few comments on what has worked well (and less well) in the classroom.
I run my Technology and Human Values course (current syllabus here) very much like a graduate (MA-level) seminar. It is reading-intensive, discussion-centered, and– after a couple of weeks at the beginning when I lecture a lot in order to set the landscape of the course– largely student-led. The first half of the semester, we work through roughly twenty essays (one per class period) collected in the anthology Ethics and Emerging Technologies, edited by Ronald Sandler. Then, we spend a couple of weeks considering very recent news/issues in technology– two years ago it was CRISPR, then the Facebook/Cambridge Analytica scandal and the rise of the alt-right on YouTube last spring, then facial recognition tech this past fall, and this year it will almost certainly be something about deepfakes and democracy. These “what’s happening right now” essays are usually drawn from sources like Wired, MIT Technology Review, The Verge, the fantastic online magazine Real Life, or other reputable/major news outlets.
Most of the students enrolled in my course are not Philosophy majors– in fact, the overwhelming majority of them are STEM majors– and so the roughly two months that we spend doing text-based close reading, exegesis, and analysis together is essential for my students to develop (or sharpen) the critical and analytical skills they will need for the “small group” text assignment, which constitutes the lion’s share of their grade in my course.
[The “small group text assignment” in my course is this: Each student is assigned to a “small (2-3 person) group” and is responsible for reading one text independently of the rest of the class. Each member of a small group must turn in an individual paper analyzing the strengths/weaknesses of their text, and the small groups are collectively responsible for presenting/“teaching” their text, over the course of two 50-minute class periods, to their peers in the second half of the semester.]
I haven’t significantly changed, and have no immediate plans to change, the first half of my course. So, all of the changes I note herein (and anticipate making in the future) really only concern the selection of “group texts.” Below, I’m going to say a little bit about the texts I’ve used previously, but now replaced, and then follow that up with my hopes for this semester’s new selections.
_______________________________________________________________
[TEXTS I’M BIDDING A FOND FAREWELL TO]
These are all fantastic texts, and they all worked very well for my course, so I should first say that none of them are being replaced because they were deficient in any way. Christian’s text (The Most Human Human) has been a favorite of mine for many, many years… but it is starting to show its age, unfortunately. Every semester, I’m finding that the basic argument of O’Neil’s book (Weapons of Math Destruction, which really should be mandatory reading for every American) is familiar to most of my students, especially those who have taken my PHIL220 course, which requires them to read Hannah Fry’s Hello World: Being Human in the Age of Algorithms. A lot of the material in the Ford anthology (Architects of Intelligence), like the book it replaced (What To Think About Machines That Think), is covered in other parts of the course, so it has become redundant at this point. And the Weinersmiths’ book (Soonish), although a TON of fun to think about, is a hair too technical for what I want to accomplish in my class.
I’m saddest about parting with Tufekci’s Twitter and Tear Gas (because she is a brilliant thinker and writer, with whom I agree about almost everything), James Barrat’s Our Final Invention (because he is also a brilliant thinker and writer, with whom I disagree about almost everything), and Kate Devlin’s Turned On (which is, hands down, the most fun text I’ve ever taught in any course). I very well may reintroduce all of these texts in future semesters, but it was time to give them a break this time around.
_______________________________________________________________
[TEXTS ADDED IN 2019]
Tim Wu’s The Attention Merchants wasn’t technically a “new” addition for me in 2019, but Wu published a new edition of the book with a concluding chapter on Trump (whom he calls the first “Attention Merchant President”), which made this book read in a whole new, fascinating, and exciting way. I’m definitely keeping this one around! I gave Waldman’s Privacy as Trust a try for one semester, and it produced some genuinely engaging conversations among my students, but I think Waldman’s argument, though interesting, just isn’t robust enough for me to extend it for another term.
Until another book is written about CRISPR, I won’t ever give up Doudna’s book (A Crack in Creation), because CRISPR technology is so existentially important to understand. But I should also note that hers is, by far, the most technically challenging book that we read. I also plan to keep Kaku’s The Future of the Mind around for the foreseeable future, if only because it tends to generate the most interesting discussions and requires a real reckoning with real science that sounds so much like not-real sci-fi!
I was most pleased with the addition of Galloway’s The Four, because I think my course was really lacking a consideration of the “business” side of tech. (I also really love the way that Galloway’s book provokes speculation about the future of the tech industry. Who will be “the 5th”?) And, finally, Woke Gaming was a real eye-opener for me. I’m not a gamer, but soooo many of my students are, and I was really glad to have the opportunity to learn from them about the socio-political landscapes created within multiplayer games. Woke Gaming is an anthology, which made it a little more difficult to manage as a “group text,” but we worked it out.
_______________________________________________________________
[NEW TEXTS FOR 2020]
I honestly could kick myself for only just realizing this semester that I’ve been teaching a whole course on Technology and Human Values for two years without including anything by Stuart Russell! Thankfully, Russell just published his new book Human Compatible, which is fantastic, and which I immediately adopted for this semester. This is the kind of book that centers all of the questions that students regularly ask when they think about emerging technologies, so I anticipate it being a quick favorite.
Similarly, Max Tegmark’s Life 3.0 is a real “core issues in technology” text that powerfully drives home how woefully unprepared we humans are for the very-near future we seem to be mindlessly building for ourselves. I think Tegmark’s and Russell’s texts, which ask the Big Questions, will offer a nice balance this semester to Wu’s and Galloway’s socio-historical approaches, as well as to Kaku’s and Doudna’s more technically sophisticated considerations.
I’m very excited about my other two robot-centered new additions this semester, John Danaher’s Automation and Utopia and David Gunkel’s Robot Rights. I’ve been reading Danaher’s brilliant blog (Philosophical Disquisitions) for almost a decade now, and I’ve been listening to his podcast (here) for the last year, so I was really stoked to hear that he had a book coming out this past summer. Needless to say, the book does not disappoint. Danaher is a clear and compelling writer, a nuanced thinker, and (as he convincingly argues in this book) a solid believer that a human world liberated from the demands of exploitative labor is not only imminent, but Good. (Viva la revolucion!) I think Danaher’s book will pair well with Gunkel’s, as the latter really takes seriously, in a decidedly non-paranoid, non-alarmist manner, questions about how we humans are going to negotiate sharing a world with automatons who don’t just “take our jobs,” but may also have intelligent suggestions to offer about how worlds can or should be shared. I’ve been saying to my students for years now that we can’t wait any longer to decide about robot rights, so I’m eager to dig into Gunkel’s arguments in class.
The more I teach this course, the more I love it, and the clearer my own thinking about technology and human values becomes. One of these days, I’m going to crank out my own manuscript on the matter, which currently sits in a dozen or so not-yet-pieced-together files on my laptop. But at the moment, I’m still teaching five courses and more than 100 students a semester, so you may just have to wait until I get uploaded to the cloud. 😉
As always, I welcome your comments below, especially recommendations for new reading!