What If You Were Gerald McGrew?: A “Rebuild the Internet” Thought Experiment

You may remember the story by Dr. Seuss (né Theodor Seuss Geisel) from 1950 entitled If I Ran the Zoo, in which the pint-sized protagonist, Gerald McGrew, imagines the amazing creation he could bring about if he were allowed to run the zoo. If I Ran the Zoo is not only a great story about the never-before-seen exotic animals that would populate “the new zoo, McGrew Zoo,” but it also just so happens to contain the first ever documented use of the word “nerd.” So, there’s a fun little factoid from the “Nerd History” file for your next dinner party conversation!

In Seuss’ story, Gerald McGrew embarks upon what professional philosophers call a “thought experiment,” aptly defined by Lindsay Bertram Yeates as “a device with which one performs an intentional, structured process of intellectual deliberation in order to speculate, within a specifiable problem domain, about potential consequents (or antecedents) for a designated antecedent (or consequent).” Philosophical thought experiments tend to be quite confining in their already-determined antecedents/consequents, so it can sometimes feel like all the fun has been sucked out of them by all the things that all the philosophers have already written about them– see Adriel Trott’s really great treatment of “intellectual capture” here– but it’s important to remember that thought experiments really boil down to exactly what Gerald McGrew is doing when he asks “what if I ran the zoo?”. Anytime one asks “what if…?”, about anything, one is engaging in a thought experiment.

Gerald McGrew begins with an evaluation of the current state of zoo affairs: “It’s a pretty good zoo, and the fellow who runs it seems proud of it, too.” But he quickly realizes that “pretty good” can always be better, and he wonders how he might imagine “better” being brought about.

You know what else is a pretty good zoo? The internet.

(And the fellows who run it seem proud of it, too.)

I suspect we might all agree that there is much that could be better about the internet. And it occurred to me recently that articulating exactly how one might “rebuild the internet,” Gerald-McGrew-style, would be an excellent ethical thought experiment. For context, here’s the backstory to this realization:

Last week, in my Technology and Human Values course, we were discussing Hans Jonas’ essay “Technology and Responsibility: Reflections on the New Tasks of Ethics” (here), in which he argues that advances in technology have empowered humans with a more expanded “agency” (i.e., “power to impact the world”) and, correspondingly, require us to expand our sense of moral responsibility. One of the “new” areas of ethical concern that Jonas suggests we take up is an “ethics of technology.” The Jonas essay is from 1972, so progress has been made in this domain since, but we had a particularly interesting discussion trying to think about “technology ethics” in conjunction with (another of Jonas’ suggested fields) “future generation” ethics. What, if anything, do we owe to generations that succeed us? How do those obligations amplify, mitigate, or supplant our current obligations with respect to the ethical development, use, and design of technology?

And then the question just naturally arose: What if you could rebuild the internet? What if you could go back and prescribe (or proscribe) specific norms, activities, capacities, or access requirements?

Before getting to my own Gerald McGrew answers, I want to say three quick things about the pedagogy of this thought experiment.

  • I think it’s important to frame it as an exercise in non-ideal theory. All thought experiments are imaginative exercises, of course, but no veil of ignorance should be introduced for this one. We’re not locked in a Chinese Room. We’re not describing Kallipolis or the first geometer. We should presume that, despite what anyone says about them, barbers are entirely capable of shaving themselves. THAT DAMNED CAT IS EITHER ALIVE OR IT ISN’T. (See Charles Mills on the many problems with, and presumptions of, ideal theory here.) The rebuild-the-internet thought experiment should be engaged from one’s current position in history and culture, informed by all of the knowledge we currently have about the development of the internet so far, and incorporating our evaluations of its merits and demerits as it now exists.
  • It’s also important to encourage students (or peers, if you’re playing along with friends) not only to identify the changes/fixes they would make in their redesigned internet, but also to articulate the norms guiding those changes. This is where the conversation gets really interesting.
  • This is a more challenging antecedent, but I think one really must insist upon a very basic (at least) tech-literacy when articulating any proposed changes to the internet, which will inevitably require those engaging the thought experiment to have a minimally-decent understanding of how the internet works. That is, the thought experiment should force participants to think about– or, even better, explain– not only what “digital information” is, but what information systems are: what they do (or cannot do), whether or not they have a “nature” (i.e., essential character, design, set of capacities or limitations), whether they have (or could possibly develop) “agency” or “mind” or “consciousness,” whether they merely mimic or significantly exceed human capabilities, etc., etc., etc. … ROBOT OVERLORDS!

Ok, so if I were Gerald McGrew and could “rebuild the internet,” below are some of the changes I would make.

(Caveat: I don’t think I had ever really thematized the question in this way to myself before, so these were my off-the-cuff remarks, though I think I would probably stick by most of them.)

  1. Anonymity/Pseudonymity would be a privilege, not a right, on the internet.
    THE governing moral norm of my “rebuilt” internet would be transparency, so anonymous or pseudonymous participation in any online platform would be prohibited by default. I doubled down on this by proposing that any platform or website allowing anonymous/pseudonymous participation would have to be pay-to-participate. That is, “free speech” in (the digital) “public”– the ability to speak without personal responsibility or personal consequences for one’s speech acts– would be a privilege that comes with a monetary cost.

    My students immediately objected that this proposed change would intensify dangers/threats already experienced by members of “vulnerable” online groups as we now understand them– they specifically mentioned women, LGBTQ users, and racial minorities, but also whistle-blowers and “users with unpopular or unorthodox views”– who, they argued, need the protections provided by anonymity to participate in online conversations with the same freedom and liberality as members of non-vulnerable groups. I stipulated that whistle-blowers might constitute a special case for which I could imagine designing exceptions, but that whistle-blowers could be distinguished “in kind” from the other vulnerable groups.

    In reply to the supposition of other (identity-based) vulnerable groups’ “need” for anonymity, I argued that this was a chicken-and-egg problem: the reason that the vulnerable groups they identified regularly experience identity-specific attacks is because of the widespread allowance for (and prevalence of) anonymous and pseudonymous online activity, i.e., the current internet norm is that haters are not required to identify themselves as “authors” of their hate-speech. Correspondingly, the reason that regular targets of online attacks feel the need to advocate for the allowance of their own anonymous/pseudonymous participation in online conversations is that, absent such protections, they would be more easily identifiable targets of attack.

    As I explained to my students, the current method of “correcting” this chicken-and-egg morass has been to adopt what amounts to speech-policing protocols, defined primarily through Terms of Service (ToS) or Terms of Use (ToU) agreements and enforced in an almost entirely ad hoc manner by the sites that use them. Just take a look at the ToS and ToU parameters that more or less set the standards for online activity today– Facebook’s ToS, Twitter’s ToS, Google’s ToS, Instagram’s ToU, YouTube’s ToS, Snapchat’s ToS, TikTok’s ToS, Reddit’s ToU– and you’d be hard-pressed to find an actionable governing principle common to them all. And that’s not even to mention the entire sub-universe of internet-defining sites– hello, 4chan, 8chan, 8kun, and Endchan!– that not only capitalize on the host of human vices that anonymity amplifies, but regularly and openly flout all human norms, IRL or digital.

    Again, if I could rebuild the internet, I would do so first with an eye to preventing, as much as possible, the re-instantiation of the worst elements of our current internet. And I am convinced that the priority of privacy concerns over transparency concerns is the worst element of our current internet.

  2.  “Cookies” would be outlawed.
    If your worry, like mine, is not so much about “the internet” writ large, but rather about the specific elements of our digital infrastructure that gave rise to what we now call “data brokering,” then you will already be sympathetic to this proposed change. An internet cookie is, to most people, like the carburetor in their car: we have a more or less generic understanding of how it works, but nothing even closely approximating a real understanding, and we’re disinclined to ask any serious questions about it until it stops working… at which point, we just hand the problem over to a real mechanic.

    Here’s the thing, though: all of the brouhaha surrounding the Facebook/Cambridge Analytica scandal of 2018/9, all of the (so-called) paranoia about our smartphones “watching” and/or “listening to” us, all of our uncanny double-takes when Google’s autocomplete correctly finishes our search queries, all of our worries about facial recognition technologies or predictive policing or GPS tracking or any other number of unseen surveillance technologies– ALL OF THEM are, directly or indirectly, made possible by cookies.

    IRL cookies, when you consume them, inevitably leave crumbs. Digital “cookies” also leave “crumbs,” and those digital crumbs are how the activities of your online self are tracked, surveilled, assessed, evaluated, and monetized (i.e., bought and sold by data brokers). Until recently, we had a lot more control over the crumbs we left behind, but the proliferation of third-party cookies has basically eliminated any individual say we had in the trails that lead back to us.

    Tl;dr: if cookies were forbidden, data brokers would have nothing to eat, and would not exist.

  3. “Opt-in” data collection would be mandatory. 
    “Opt-in” data collection is to the ethics of technology what “affirmative consent” is to the (largely, feminist) ethics of sexuality. The basic idea is that you shouldn’t have to say “no” to having your privacy (or body) violated. This is such a manifestly obvious Good Thing that I won’t make the whole argument for it here, but check out this great piece by Brian Barrett (on Wired) for a fuller treatment of the matter.

    Obviously, this suggestion would not be necessary if my suggestion #2 above were adopted, but I might as well hedge my bets while I’m redesigning the zoo.
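Since items #2 and #3 turn on how tracking cookies and consent defaults actually work, here is a minimal toy sketch of both mechanisms. Everything in it is illustrative: the domain names, class names, and the `opted_in` flag are assumptions made up for the sketch, not any real browser or ad-tech API. The point is just to show how a single third-party cookie links one user’s visits across unrelated sites, and how an opt-in default means that, absent explicit consent, no trail gets recorded at all.

```python
# Toy sketch of third-party cookie tracking plus an opt-in consent default.
# All names (tracker.example, news.example, etc.) are invented for illustration.

import uuid


class Tracker:
    """Stands in for a third-party ad/analytics domain embedded on many sites."""

    def __init__(self):
        self.profiles = {}  # cookie id -> list of (site, page) visits

    def observe(self, browser, site, page):
        # The tracker sets ONE cookie under its own domain; because that same
        # domain is embedded on every site, the same id comes back each time,
        # letting it stitch visits to unrelated sites into one profile.
        cookie = browser.cookies.get("tracker.example")
        if cookie is None:
            cookie = str(uuid.uuid4())
            browser.cookies["tracker.example"] = cookie
        self.profiles.setdefault(cookie, []).append((site, page))


class Browser:
    def __init__(self, opted_in=False):
        self.cookies = {}
        self.opted_in = opted_in  # opt-in regime: the default is NO consent

    def visit(self, tracker, site, page):
        # Under "opt-in" data collection, tracking only happens if the user
        # affirmatively consented; silence means no data leaves the browser.
        if self.opted_in:
            tracker.observe(self, site, page)


tracker = Tracker()

alice = Browser(opted_in=True)  # explicitly consented
alice.visit(tracker, "news.example", "/politics")
alice.visit(tracker, "shop.example", "/running-shoes")

bob = Browser()  # never opted in, so the tracker learns nothing about him
bob.visit(tracker, "news.example", "/politics")

profile = next(iter(tracker.profiles.values()))
# profile == [('news.example', '/politics'), ('shop.example', '/running-shoes')]
```

Two unrelated sites, one cookie id, one linked profile for Alice; Bob, who never opted in, leaves no crumbs and no cookie. Outlawing the cookie (suggestion #2) removes the linking mechanism entirely; mandating opt-in (suggestion #3) at least makes Bob’s situation the default.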

I’m sure that I’ll have more redesigns to offer when I think about it more, but please use the comment section below to offer your own thoughts.
How would YOU redesign the internet?
