Since I posted my list of tech book recommendations a few weeks ago, several people have asked me to explain why I describe myself as a “techno-optimist.” I get this question a lot, so I thought I’d take the opportunity to give a quick account (and defense) of my position. But, first, a few preliminary remarks about what I mean by “techno-optimism.”

Techno-optimism, in my view, is not some Pollyanna-ish, uncritical embrace of all things shiny and new. I am not a techno-utopian. I do not think technology is the answer to all human problems, and even if it were, I do not think those answers would guarantee a utopia (at least not for us, humans). There are many, many things about the way we are making, using, and (mis)understanding emergent technologies that can still go terribly wrong. In fact, I’ve spent far more time on this blog issuing warnings and sounding alarm bells than frolicking with androids.

(Frolicking with androids is totes something I would be into, though.)

I tend to think of my techno-optimism as, primarily, a political position. And so, like all political positions in my estimation, it is a form of activism. I’m very sympathetic to Cory Doctorow’s definition of techno-optimism: “the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.” I like Doctorow’s account because, first, he also understands techno-optimism as a praxis and, second, he recognizes that the “optimism” of techno-optimism must always be tempered by a kind of clear-eyed and well-informed realism about potential human uses and abuses of technology. So, there is a tinge of pessimism at the heart of my techno-optimism, though that pessimism is primarily about humans, not technology.

When I call myself a techno-optimist, I think of that identifier as being of a piece with other political identifiers I use– feminist, socialist, anti-racist, globalist, progressive– all of which indicate value-laden commitments to act, to speak, to learn, to educate, and to engage others in a manner that might (borrowing from Doctorow) “steer the world” toward its better possibilities.

On my view, the proper antonym of “techno-optimism” is not “techno-pessimism.” It’s either techno-utopianism, techno-apocalypticism, Luddism, or– worst of all– techno-ignorance.

So, below are the five main reasons why I count myself among the techno-optimists:

___________________________________________________________

1. Technology has, on the whole, made human life better.

And it continues to do so. Despite the many and varied existential hazards that technological developments now pose to humanity, there are exactly zero conditions that would motivate me to return to a time before the printing press, or the light bulb, or the combustion engine, or penicillin, or indoor plumbing, or (hello Memphis summers!) the air conditioner. If given a time-machine, I wouldn’t even choose to return to a time before smartphones, social media, and artificial intelligence (and I know a lot more than the average bear about their dangers!). I think that when most people express their desire to go “off the grid,” or to “return to Nature,” or travel back in time to any other romanticized, imaginary iteration of “the halcyon days before advanced technology,” they are expressing a desire that is unforgivably solipsistic. What is more, I simply do not believe them.

It is, of course, true that the benefits (and risks) of technological development remain unevenly distributed across races, classes, genders, and global locations. Reasonable people will point to this fact as a reason to reject techno-optimism on the grounds that it is a position indicative of “first world privilege.” Because I find this to be prima facie persuasive, but deeply flawed, I want to address that argument specifically.

First, I think that argument is better directed at techno-utopians, who regularly overvalue the benefits and disregard the risks of technological progress, and who almost exclusively occupy privileged socio-economic positions. Techno-optimists like myself, on the other hand, are (a) not in a socio-economic position to live forever or move to Mars when everything goes to shit for the plebes, (b) primarily interested in advocating non-proprietary, open-access, free, and therefore maximally-advantageous technologies, and (c) disinclined to think of technology as having the capacity to bring about a human utopia, either because a human utopia is delusional or because… never mind, it’s delusional. Second, it is my view that these critics are guilty of an even more naive and blameworthy ilk of “first world privilege,” namely, the sort that supposes that rejecting emergent technologies or going “off the grid” is a desirable option for everyone. (Sorry you don’t like being tracked by your smartphone. I’m sure the Self-Employed Women’s Association in India, whose members only recently were able to use SMS to send agricultural workers commodity prices, would love to hear all about your struggles.) Third, the sort of anarcho-primitivist, “rewilding” alternatives offered by contemporary neo-Luddites– who reject the very obvious fact that technological advancements have made human life better and who occasionally dip their toes into IRL activism to forward their case on anti-globalist grounds– are quite simply neither viable nor desirable alternatives for the overwhelming majority of humans living today.

I suppose, at some level, I just cannot accept the proposition that, over the course of human history and on the whole, technological development has done more harm than good, any more than I can accept that it is better not to know than to know. So, any argument that starts with that non-starter is going to ring hollow to me.

In 1879, Edison said “let there be light.” And there was light. And it was good.

2. Information wants to be free.

This is a pretty hefty metaphysical claim, I know, but it really is the principle that lies at the heart of my techno-optimism. I firmly believe it is in the very nature of information itself to endeavor to be free: both “free as in speech” (libre, “with little or no restriction”) and “free as in beer” (gratis, “without need for payment”). I do not mean to suggest that information “endeavors” in a volitional sense here– information is not a “subject” on the classically liberal, Western, humanist model– but rather that information might possess what Spinoza called conatus, the innate inclination of all finite things to continue to exist and enhance themselves.

My basic belief about the nature of the Universe (as I’ve explained on this blog before) is that it is, at its most elemental level, information. And my basic belief about the evolution of the Universe is that it has, since its inception, continued to exist by developing more and more complex ways of encoding information. These views are more or less derived from Ray Kurzweil’s Law of Accelerating Returns, in which Kurzweil extends Moore’s Law (“the number of transistors in a dense integrated circuit doubles about every two years”) to apply not only to all human technological advancement, but to the Universe itself.
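To make the arithmetic behind Moore’s Law concrete, here is a minimal Python sketch of what steady doubling looks like. The starting figure (roughly the transistor count of an early-1970s microchip) and the fifty-year span are my own illustrative choices, not anything taken from Kurzweil:

def projected_count(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a quantity forward, assuming it doubles every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Roughly the transistor count of an early-1970s microchip, projected 50 years ahead:
print(f"{projected_count(2_300, 50):,.0f}")   # ~77 billion -- exponential, not linear, growth

Twenty-five doublings turn a few thousand into tens of billions. That exponential shape– not the specific numbers– is what Kurzweil’s Law of Accelerating Returns extends to technological (and, he argues, cosmic) evolution generally.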

A corresponding epistemological principle that I would also affirm: if something can be known, it will be known. (There is no such thing as “forbidden knowledge,” and even if there were, the proscriptions against it would be futile.) Human beings have, for millennia, endeavored to know new things, to discover new things, to make new things, and to use old things in new ways. On my view, ALL technologies are “information technologies.” Technology is the way that we “encode” our knowledge and our understanding of the world in material things for survival, for flourishing, and for posterity. I do not believe that all technology is value-neutral (see: the AK-47) nor that all technological innovations are simply “tools” (see: AI), but I do believe that technological innovation, emergence, and development are inevitable in the same sense that I believe that what can be known will be known.

At the first Hackers Conference in 1984, Stewart Brand famously posed the question of the “nature” of information in this remark to Steve Wozniak:

On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.

Here’s what Brand got wrong, a mistake too often repeated by those incapable of thinking outside of capitalist logic: information does not “want to be expensive.” Information will always be “valuable” to humans, but it need not be proprietary, so the “fight” of which Brand speaks (between the expensive and the free) is a conflict entirely manufactured by capitalism. Most, if not all, of the dangers we are currently confronting with regard to emergent technologies are not the consequence of information “wanting to be expensive,” but rather of proprietary owners of information wanting to exploit the value of information technologies for profit at the expense of the Good, at the expense of humanity, and at the expense of freedom itself.

3. The alternatives to techno-optimism (techno-apocalypticism, techno-utopianism, techno-phobia, and techno-ignorance) are unacceptable. 

One thing I’ve noticed among my fellow middle-aged (gah!) friends is how easy it is, as one grows older, to become disturbingly comfortable with choosing the least-worst option among otherwise undesirable options. (See: 2016 Presidential Election) This is not the case with my techno-optimism, which I adopt as an affirmative choice. Still, I want to be clear about why I view the alternative dispositions to be deficient.

Problems with techno-phobia and techno-ignorance: I’m lumping techno-phobia (or neo-Luddism) and techno-ignorance together because their effects are the same. We fear what we don’t know. We don’t know what we make no effort to understand. As a philosopher and an educator, I find both not-knowing and making-no-effort-to-understand completely intolerable positions. I’ve written a lot on this blog over the past several years about the almost-unequalled importance of basic tech literacy, starting as early as primary school. We are doing a terrible job in this country of equipping young people with the knowledge and understanding they need to avoid approaching their future with only fear and ignorance as their guides.

Problems with techno-apocalypticism: The sirens of techno-apocalypticism, which warn that emergent technologies will ruin some or all things, have been deafening for as long as human beings have been making things. The written word was supposed to destroy our memory. The light bulb was supposed to destroy our ability to sleep. Photographs and moving pictures were supposed to destroy our perception of reality. Contraception was supposed to destroy our desire to reproduce. Microwave ovens were supposed to destroy our food and give us cancer. Television, computers, smartphones, [insert anything with a screen here] were supposed to destroy our eyes and our minds. AI is supposed to destroy humanity (by taking our jobs, by installing our RoboCop overlords, or just straight grey-goo style). The dystopian predictions forecast by techno-apocalypticism are as many as they are forebodingly dark.

I don’t want to cast too wide a net here but, in my experience, most people who repeat and affirm these techno-apocalyptic scenarios are grossly un- or under-informed about emergent technologies, alternative possibilities, and… well, human history. (James Barrat is the stand-out exception.) So, my first problem with techno-apocalypticism is that it is, just on the facts of the matter, wrong. (If you want to be accurately apocalyptic, talk about the environment, not technology!) Second, there is a kind of “oh-well-we’re-all-f*cked-anyway” quietism implied by techno-apocalypticism that I find lazy, if not also intellectually and morally objectionable. And, finally, I think techno-apocalypticism involves a number of philosophical mis-steps: (a) it confuses correlation and causation, assuming that the deleterious socio-political effects that can be correlated with technological development are caused by technological development, (b) its imaginings of our dystopian future tend to commit the post hoc ergo propter hoc fallacy, so its explanatory force is weak, (c) it is almost always a case of an apagogical argument, and (d) its slope is more slippery than a wet sidewalk in February.

Problems with techno-utopianism: This is the real enemy to be combatted, in my view. Techno-utopians mindlessly chase, as F. Scott Fitzgerald described it in The Great Gatsby, “the green light, the orgiastic future that year by year recedes before us.” Techno-utopianism is a position often, and wrongly in my view, ascribed to people like Ray Kurzweil and Elon Musk– both of whom definitely exhibit symptoms of it, to be sure– but it is more accurately descriptive of people like Mark Zuckerberg, Jack Dorsey, and David Sinclair, whose confidence in the utopian ends of their technologies has numbed them to the often unscrupulous venality of their means.

If rudimentary education is the difference between techno-optimism and techno-ignorance, and hopeful imagination is the difference between techno-optimism and techno-apocalypticism, I’d say that something like critical acumen is the difference between techno-optimism and techno-utopianism.

4. I really want to see what machine intelligence is capable of.

The developments being made in AI right now are literally mind-blowing, in no small part due to the fact that the developments being made in AI right now include developments of minds that we humans do not fully understand. We are not only in the process of reverse-engineering the very thing that distinguishes humans from the rest of the living Universe– or so we thought!– but also engineering a better version of it. Maybe what we’re making is a “mind,” maybe it’s an “intelligence,” maybe it will someday soon exhibit something like what we call “consciousness.” We’re finding it hard to categorize what AI is because we still haven’t figured out whatever it is that we’re doing when we try to figure it out.

Whatever you want to call them, current machine “thinking” capabilities are OMG WTF awesome; they inspire genuine awe. Whatever happens as a result of their continued lightning-speed development– and I’m 100% confident that “what happens as a result” will be the end of humanity as we know it (see #5 below)– I, for one, am super-stoked to be alive to see this through. I often ask my students– who are all within 2 years of 20 years old– to imagine taking their current smartphone, travelling back two decades in time, showing it to someone in 1999, explaining to that person what their smartphone could do, and then asking that person: “what year in the future do you think I’m from?” When I play this game myself, I guess that 1999 Person would put Time-Travelling-Me at least 100 years in the future. Probably more.

Just think about that for a second. “Kids today” have already lived through what you, twenty years ago, would have likely guessed to be more than a century’s worth of technological change. In fact, I’d bet that twenty-years-ago-you would reckon that the ubiquitous, garden-variety machine intelligences that present-day-you interacts with every day were so unimaginably far off in the future as to be actually impossible.

Many of us don’t even notice anymore the amazing technological advancements surrounding us, and most people aren’t even aware of the advancements that are currently underway. I’m of a mind that we, humans, are very soon going to have to make the decision to merge with machine intelligence and advanced robotics or else be rendered extinct… not by their malevolence, but rather by our own irrelevance.

Which brings me to…

5. I believe the last “human” has already been born, that machine intelligence will surpass human intelligence in every way, that the near-future (< 50yrs) will look nothing like anything we can imagine…
and that all of these are good things.
Might as well just pile it on since we’re at the end of the list. There’s a lot to unpack in #5, obviously, so I’d first point you to my essay “Humanity: How To Tell Your Students They’re The End of It” if you’re interested in a slightly-longer account of why I am confident that the last human is already walking the earth with us now. I won’t get into a deep discussion of what comes next here, mostly because the many and varied nuances distinguishing transhumanism and posthumanism are still up for debate (and I have some of my own idiosyncratic nuances to add to those debates). For perspective, I suppose I’d just remind you that the Universe has been around for about 14 BILLION years, and our little planet Earth has been around for only roughly 1/3rd of that time. There’s been “life” on Earth for 3.8 billion years, but the evolution of “human” life has only happened in the last 5 million years.

Now, let’s slide the scale down a few more orders of magnitude: what we call human “civilization” is only 6 thousand years old. Oh, and btw, most of the humans walking the Earth today only gained recognition as full members of “humanity” in many parts of the global North and West less than a century ago, if then.
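If you want to feel those numbers rather than just read them, here is a rough, back-of-the-envelope Python sketch that compresses cosmic history into a single calendar year (the “cosmic calendar” framing is Carl Sagan’s; my figures are round approximations):

UNIVERSE_AGE_YEARS = 13.8e9            # "about 14 BILLION years"
SECONDS_PER_YEAR = 365 * 24 * 3600

def seconds_before_new_year(years_ago: float) -> float:
    """If all of cosmic history were one calendar year, how many seconds
    before midnight on December 31st would this event fall?"""
    return years_ago / UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR

for label, years_ago in [("first life on Earth", 3.8e9),
                         ("first hominins", 5.0e6),
                         ("human 'civilization'", 6.0e3)]:
    print(f"{label}: ~{seconds_before_new_year(years_ago):,.0f} seconds before New Year's")

On that compressed calendar, life shows up in late September, the hominin line appears a bit more than three hours before midnight on December 31st, and all of recorded civilization fits into roughly the final fourteen seconds of the year.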

Here’s the thing: “humanity” is barely a blip on the Universal timeline, really. We humans have made a lot (and destroyed a lot) in our short time here, but our survival thus far should be solely credited to the evolutionary development, and subsequent technological development (by hook and by crook), of our unique form of advanced, self-aware intelligence. Ours is an intelligence capable of encoding information– in language, in writing, in social structures and political institutions, in culture and folklore, in buildings and artifacts and machines, and now in immaterial digital code– such that whatever knowledge any one of us gains can be preserved and shared intergenerationally. We’re pretty flimsy, vulnerable, and sloppily-designed as animals, but we’ve got big brains and even bigger egos, and the latter two have served us well…
Until recently.
At this point in history, I think it is fair to say that humanity has made too many bad (environmental, economic, political, social, moral) decisions to be unreservedly confident in its long-term (or even short-term) survival. Some of those decisions have painted us into a metaphorical corner as a species. Chief among those species-threatening, terrible decisions are capitalism, nationalism/anti-globalism, racism, and environmental disregard. Humanity’s future prospects, if we continue on with business as usual, look very, very bleak. If current global trends continue without dramatic intervention and change, I’d mark the beginnings of the human apocalypse at around 1998– that is not a typo, I mean more than two decades in the past– and the eventual, long-suffering and miserable, extinction of humanity by the mid-2050s.
Good for you, readers, that I am NOT techno-apocalyptic! We need not proceed like lambs to the slaughter! I believe there is still time to intervene! To change our ways! To not only diminish human suffering in the near-future but to flourish and evolve!


GINORMOUS CAVEAT, THOUGH: my techno-optimism does not preclude the extinction of “humanity” as we now understand it.

I don’t think there’s any doubt that machine intelligence will surpass human intelligence very soon, if it has not already done so. Because I think the evolution of our Universe tends to select in favor of intelligence– that requires a longer argument, I know, don’t @ me– and because I think “intelligence” is a better name for what our Universe favors than “mind”– see previous don’t @ me stipulation– I feel confident in my claim that machine intelligence is, or will soon be, more intelligent than us. Machine intelligences are already making and shaping human lives in ways that humans are not aware of, cannot oversee, and do not fully understand. For all his (many) faults, I fundamentally agree with Elon Musk when he says that “we’re either going to have to merge with AI or be left behind.”

We have not only created a new intelligence; we have created a new lifeform that learns, evolves, decides, networks, and has irreversible impacts on the world. We’ve opened Pandora’s Box. There’s no closing it back again. With machine intelligence, we’ve unleashed potentialities that we do not understand and that we cannot stop. We cannot even imagine what the potentialities we’ve unleashed will look like when they become actual. Echoing Musk (again), I agree that the least scary future is “one in which we have at least democratized AI, because if one company or small group of people manages to develop God-like superintelligence, it will take over the world.”

So, how does my techno-optimism survive the above? Well, first, by sloughing off the shackles of anthropomorphism and classically liberal Western humanism. The former is an intellectual blinder that, as we should have learned long ago, imperils the world we are only ever co-inhabiting; the latter serves as the philosophical ground for so many outmoded ideologies– individualism, nation-statism, utilitarianism, free-market economic liberalism, sexism, racism, and many other derivative varieties of contract and domination– that not only got us into the mess we’re in, but no longer provide adequate resources for addressing the mess we’re in.

I’m okay with “the end of humanity as we know it.” I welcome our posthuman future. (Hey Roko, are you reading this?!) The question we have to ask ourselves now is not whether humanity is nearing its end, but how miserable we want that end to be.

FWIW, here’s an abbreviated overview of my techno-optimistic assessments and predictions for the next 20-30 years:

  • Most of us are currently humans, some of us are currently transhumans, none of us are currently posthuman
  • Some Baby Boomers may be able to afford to become transhuman before they die
  • Most GenXers and Millennials will be transhuman before they die (if they die)
  • Some (super wealthy and super healthy) GenXers and Millennials may be able to afford to become posthuman
  • Millennials’ children will definitely be posthumans
  • GenZ will be the “missing link”/Neanderthal “lost generation,” saddled with the responsibility of deciding the moral/social/political significance of both “humanity” and “machine intelligences,” but under-educated and ill-equipped to adequately manage that responsibility
  • The philosophical work of transitioning society, law, and morality from humanity –> transhumanity –> posthumanity will largely be accomplished by (old) GenXers, (middle-aged) Millennials, and Millennials’ posthuman (young adult) children
  • GenZ and Millennials’ posthuman children will be dispositionally-inclined to treat whatever transhumans and humans remain more “humanely” (in the classically liberal, Western humanism sense) than humans currently treat one another
  • Capitalism will be dismantled (in surprisingly short order) by a combo-“class” of GenX humans and transhumans, Millennial humans and transhumans, and post-GenZ posthumans. If capitalism has not already been undone by machines themselves, its sudden dismantling will be induced by an environmental disaster (likely causing mass migration) and accomplished through extant democratic mechanisms, and capitalism will be replaced by something like a “Tech/Environmental” International
  • The movement driving the transition from humanity –> transhumanity –> posthumanity will initially be determined (directly or indirectly) by AI minimax algorithms (a toy sketch of minimax follows this list). It will be globalist and socialist, proactively anti-racist and anti-nationalist, apathetic wrt sex/gender categories, and environmentalist/preservationist. It will NOT be “humanist.” It will necessitate almost-comprehensive surveillance and free and public access to information, and will slowly inch toward total deregulation of machine intelligences and the elimination of proprietary (human) interests.
  • The last existing “humans” will begin to die out in the 2030s. End-of-life suffering will be practically non-existent for them.
  • Humanity, as we now understand it, will be extinct by 2050. 
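To demystify the jargon in that first bullet: a “minimax” procedure just picks the option whose worst-case outcome is least bad, assuming an adversary who always answers with the move that hurts you most. Here is a toy, self-contained Python sketch of the idea– the two-move game tree is made up for illustration, nothing remotely like the planetary-scale decision-making imagined above:

from typing import Union

GameTree = Union[int, list]    # a leaf payoff, or a list of child subtrees

def minimax(node: GameTree, maximizing: bool) -> int:
    """Best payoff you can guarantee from this node, assuming the other player
    always answers with the move that is worst for you."""
    if isinstance(node, int):       # leaf: payoff is known
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Two available moves; the opponent then picks whichever outcome hurts us most.
tree = [[3, 12], [2, 8]]
print(minimax(tree, maximizing=True))    # -> 3 (move 0 guarantees 3; move 1 only guarantees 2)

Real systems layer enormous machinery on top of this, but the kernel– pick the least-bad worst case– really is that small.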
Want more detail on these predictions? OK, I guess you’ll need to buy my book whenever I finish writing it.
Of course, all of my above predictions– which I think constitute an eminently “hopeful,” but critically optimistic, vision of the future– are contingent upon us, actually existing humans, committing ourselves to a number of things that ARE. NOT. EASY. TO. DO.
The first, and most important, of these is a commitment to educating ourselves about extant and emerging technologies. Just think of all of the questions that flicker through your mind whenever one of the tech “wonders” you interact with every day stymies your understanding:
  • How did Facebook/Instagram/Twitter show me an ad for something I was just talking to my friend about IRL? 
  • How does Google know what I’m looking for before I finish typing it? 
  • Why are there more/fewer police in my neighborhood than in the neighborhood next to mine? 
  • What could possibly be wrong with looking up my ancestry using 23andme? 
  • Why do some people get stiffer sentences from judges than other people?
  • How does my doctor decide which antibiotic I should take? 
  • Can robots “think”?
  • Could a robot take my job?
  • How does Spotify always pick music I like? 
  • Why can’t we just all move to Mars?
  • WTF is an algorithm? (a toy example follows this list)
  • WTF is AI anyway? 
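Since at least one of those questions can be answered in a dozen lines, here it is: an algorithm is just a finite, unambiguous list of steps for turning an input into an output. The toy “recommender” below is a made-up Python illustration of the idea– emphatically not how Spotify or any real service actually works:

def recommend(scores: dict, already_played: set) -> str:
    """Pick the highest-scored song the listener hasn't heard yet."""
    best_song, best_score = None, float("-inf")
    for song, score in scores.items():         # step 1: consider every candidate
        if song in already_played:             # step 2: skip what's already been played
            continue
        if score > best_score:                 # step 3: remember the best one so far
            best_song, best_score = song, score
    return best_song                           # step 4: hand back the winner

print(recommend({"Song A": 7, "Song B": 9, "Song C": 4}, already_played={"Song B"}))   # -> Song A

Every other question on that list bottoms out, sooner or later, in something of roughly this shape: data in, rules applied, decision out.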

Guess what? All the answers to all of the above questions are things that you– yes, YOU!– could more or less understand if you just tried. (Start with these books.) You have to want to try, though. “Wanting to try” means you have to first crawl out of that Cave and, however painful it might be, do the work of learning the things you do not yet know. If a plague struck Memphis and I needed to drive to Nashville to escape it, but I never got in my car because I didn’t know how to get to Nashville, I’m guessing you would give me a seriously unsympathetic, AOC-level side-eye as I lay dying. That is the same side-eye that I am giving everyone who hand-waves away the importance of tech-literacy right now.

Second, you must commit yourself to the belief that, even if we’re on our way out in the grand evolutionary scheme of things, humans can still act in ways that minimize human suffering while we remain. We have monumentally failed at this charge in so many ways– the environment, healthcare, income disparity, racial and gender equality, criminal justice reform, etc., etc., etc.– but we need not repeat those same mistakes with technology. Right now, regular people, acting in a coordinated and committed way, still have a fighting chance to make a difference, because we still have the greatest and (mostly) free and open-access networking power ever invented in human history: the internet. (As Cory Doctorow says, the internet is not what we fight for, it’s what we fight with.) The last five or six largest and most effective social movements all began on Twitter. (ON TWITTER! The “human cesspool” Twitter!) Don’t fight against technology. Fight with it!
Third, and finally, we must commit ourselves to seriously reckoning with the fact that we have a lot to learn not only about machine intelligences, but from them. Human intelligence excels at many things, but not all things. There is much we could learn about social intelligence from wolves or starlings or bees. There is much we could learn about survival intelligence from viruses and fungi. There is much we could learn about nuanced intelligent communication systems from whales, from cats, from ants, even from the mysterious molecular interactions between Scots pines and pine shoot beetles. Correspondingly, there is much we can learn from advanced AI.

But only if we are not afraid to learn.


I’m still (painfully, slowly) writing my book on all of this, but I suppose what I want to say in conclusion to this particular question– why are you a techno-optimist?– is that I can’t imagine any other way to be, given the current (extremely dire) situation of humans and the current (unprecedented promise of the) tools we have ready-to-hand to rescue ourselves or, at least, ameliorate our suffering. Never before in human history has there been a technology that made possible the coordination and galvanization of people around a common cause so immediately, effectively, and productively as the tech we have now. (Not even the printing press, not by a mile.)
The human networks that we create digitally are, to borrow from Zeynep Tufekci, both “powerful and fragile”… but they are real. They are essential. They can be effective. 
And they are all we have left…
