On Tuesday, this blog– which I very much consider an extension of myself– was finally released from Facebook jail, after having been “inside” for more than a month. I was reported by a user (who I do not know IRL and who is not a “Facebook friend” of mine) for violating Facebook’s “Community Standards,” specifically the proscription of spam. Reporting a user’s posts as “spam” is the easiest complaint to lodge against someone if your intent is to get them thrown in Facebook jail, and if you’re thrown in Facebook jail for that reason, it’s the hardest to get out (because Facebook doesn’t really care about spam). While I was digitally incarcerated, I couldn’t share links to this blog on Facebook, nor could anyone else, and the Facebook page set up for this blog was entirely evacuated of its content. Since Facebook owns Instagram, I also couldn’t “like” photos on that platform, and no one could like my photos.
For what it’s worth, I do not generate any revenue from this (entirely ad-free) site, and I never have, so my growing concern about the drop-off in traffic was not about money. It’s simply about the open and free exchange of ideas. Social media platforms like Facebook and Twitter are designed for short-form content, but for bloggers like me, they also provide an essential avenue for sharing (“boosting”) links to longer-form content like the essays I write here. I started this blog in 2006, around the same time that I joined Facebook and five years before I joined Twitter. The four of us have more or less happily coexisted for a long time– even in spite of my many criticisms of social media platforms on this blog– largely because I do not spam, and I do not violate any of Facebook’s or Twitter’s other Community Standards.
So, when Facebook first notified me (on January 28) that this blog had been “suspended,” I honestly could not understand why. Facebook tells you who reported you, though, so I quickly formed my suspicions based on the profile pic of the user who filed the report, which features her mugging in front of a Confederate flag. A quick Google search of her more or less confirmed what I suspected, namely, that she just didn’t like what I stand for or what I had to say. (It turns out that, a few years ago, the person who reported my blog filed a lawsuit seeking a writ of mandamus that would have forced the South Panola (Mississippi) School District to fly the Confederate flag on school grounds. The court found that she did not have standing, and the case was dismissed.) Given the radical difference in our political views, and the utter lack of evidence for my reported “violation,” I’m now confident that this was nothing more than her attempt to silence me on Facebook.
For all of the wailing and gnashing of teeth we hear about anti-conservative online “censorship” from the political Right in this country, a recent ProPublica investigation found that Facebook appears to be mostly non-partisan in its failures to enforce Community Standards. (That is, you can post what liberals consider “hate speech” and what conservatives consider “hate speech” and it’s pretty much a crap-shoot whether or not your content will be banned by Facebook.) It’s important to keep in mind that, although the platform’s Community Standards have been around for years, no one really knew how those standards were enforced until around the time of the Cambridge Analytica scandal two years ago, when Facebook was forced to take issues of transparency more seriously. Facebook only began issuing quarterly “Community Standards Enforcement Reports” in 2018.
Since then, the changes to Facebook’s protocols for reporting, reviewing, and adjudicating violations of its Community Standards have been so numerous and so frequent that it has been practically impossible for a regular user to keep up with them. So, let me give you the tl;dr: It is very easy to silence someone on Facebook. For good reasons, for suspect reasons, and sometimes for no reason at all.
If you happen to find yourself among the people who have been Facebook-silenced for suspect (or non-existent) reasons one day– and this will creep up in your inbox like a thief in the night, with no warning– you will very quickly learn that all efforts to remove your newly-fitted digital gag are entirely in vain. “Facebook jail,” as I’m sure my fellow inmates will attest, is a whole ‘nother level of maddeningly bureaucratic, mysterious, labyrinthine, Kafkaesque nightmare.
It is, to borrow a description from Hegel, “the night in which all cows are black.”
Once you’re in, you’re got. The system is going to run its course, and you are going to have to just sit and wait (“in review”) for its cloak-and-dagger process to take however long it takes. You won’t be allowed to ask any case-specific questions. You won’t be given any case-specific answers. You’ll only be given the courtesy of sitting back and enjoying a list of unhelpful, AI-generated links to “read more” about what you already know is happening to you.
The default sentence for Facebook jail is 30 days, so the best you can hope for is that you will be treated as just a garden-variety offender. But, if thirty days pass and you haven’t been emancipated yet, you will have entered the dead zone of redress. You’ll have to start devising what I call a “squeaky wheel” strategy, soliciting all of your friends to report your case to Facebook as a “mistake,” in hopes that you can generate a cacophony of human squeaks loud enough that it just might (probably won’t) get the attention of at least one of Facebook’s human content-reviewers, who you just might (probably won’t) catch on a good day, and who just might (probably won’t) apply some grease to your case.
This is Facebook jail.
You can check out any time you like, but you can never leave.
According to Facebook, their paramount commitment is to “creating a place for expression” that gives people Voice. If/when they decide to limit users’ expressions, they do so in the service of one of four “values”: Authenticity, Safety, Privacy, or Dignity. So, Facebook jail really is meant to take away a user’s Voice, to silence them, but for the sake of the community.
To be fair, maintaining “Community Standards” on Facebook is no easy task. There are so many Facebook users (2.4 billion), so many types of violation (6 different Community Standards categories, with 26 distinct violations), and so much content to review (11.6 million pieces of content reported in Q3 2019, almost double the 5.9 million of Q2 2019), that it’s no wonder that the judicial branch of this digital nation moves like molasses. Those numbers are mind-boggling.
Governing Facebook is like governing TWO Chinas.
To make things worse, Facebook has a relatively tiny army of human “content reviewers” (15,000 in 2019, up from 7,500 in 2018). Their job is truly awful, alternately mind-numbingly routine and PTSD-inducing. They are regularly overwhelmed both by the scale and the nature of their work. So, in the last few years, Facebook has been shifting a lot of the “moderation” work from human content reviewers to AI review. This strategy proved effective at “cleaning up” the platform initially, as the algorithms were pretty good at detecting and removing porn and nudity, but AI moderation (what I’m going to call “algo-policing”) has turned out to be considerably less effective at other kinds of moderation tasks.
Artificial neural networks and deep learning technologies are now capable of automating a lot of tasks that, only a few years ago, we would have considered beyond the reach of computers. Image recognition is one of those tasks that AI is very good at, and that is why Facebook’s algo-police were so effective at clearing nudity and porn from the site. Given training data of sufficient quantity and quality, AI systems regularly out-perform human minds– even expert human minds (like pathologists and radiologists)– in image-recognition tasks.
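(For the technically curious: here is a minimal sketch, in Python, of what running a single uploaded image through a pretrained, open-source classifier looks like. The model, the “disallowed” labels, and the confidence threshold are my own illustrative stand-ins, not anything from Facebook’s actual pipeline.)

```python
# A toy illustration of automated image review (NOT Facebook's pipeline).
# Assumes torch, torchvision, and Pillow are installed and "upload.jpg" exists.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT        # a general-purpose, pretrained classifier
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()                # the resize/crop/normalize this model expects

image = preprocess(Image.open("upload.jpg")).unsqueeze(0)   # a batch of one image

with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)[0]

confidence, idx = probs.max(dim=0)
label = weights.meta["categories"][idx.item()]

# A real moderation model is trained on policy-specific labels; here we just
# pretend that a couple of everyday ImageNet classes are "disallowed."
DISALLOWED = {"bikini", "beer bottle"}           # purely illustrative
if label in DISALLOWED and confidence.item() > 0.9:
    print(f"Flagged for review: {label} ({confidence.item():.0%} confident)")
else:
    print(f"Allowed: {label} ({confidence.item():.0%} confident)")
```

Once a model like this has been trained, classifying an image takes a few lines and a few milliseconds. The hard part, as I said, is the training data.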
Things get a lot more complicated when it comes to tasks involving speech or text, which require understanding context and intent. We’ve made a lot of progress in the natural language processing (NLP) capabilities of computers– see: Siri, Alexa, Google Assistant, and their kin– but it is exceedingly difficult to compile massive amounts of quality (that is, correctly annotated) training data for these systems to “learn” how to get better. When it comes to speech or text moderation, well, AI just isn’t there yet.
We humans often can’t decide among ourselves what a statement “means” in context, much less what a writer or speaker intends, because natural language operations are messy and variable. The situations in which speech-acts are operative are infinite in number and inconsistent in the rules governing their interpretation. A written statement can mean many different things at once. People can intend more than one thing in a single speech-act, or be unaware of their intentions altogether.
As Zuckerberg himself noted: “It’s much easier to make an AI system that can detect a nipple than it is to legitimately determine what is hate speech.”
So, at least for now, platforms like Facebook still need human moderators to look over the shoulder of their algo-police colleagues and to correct the latter when they misunderstand. Without some kind of double-check system in place, AI content moderation ends up looking a lot like a digital version of “stop and frisk.” The Facebook algo-police just stop anyone with a nipple, or anyone who uses the N-word, or (worst of all) anyone who is “reported” as an offender, throw them up against the wall, and then punish them as an offender.
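To make that “stop and frisk” analogy concrete, here is a deliberately naive sketch (invented blocklist, invented posts, nothing drawn from Facebook’s real rules) of what keyword-based algo-policing amounts to when there is no human double-check:

```python
# A deliberately naive "algo-police": flag any post containing a blocklisted
# term, with no sense of context or intent. Blocklist and posts are invented.
BLOCKLIST = {"spam", "nipple"}

def naive_flag(post: str) -> bool:
    words = post.lower().split()
    return any(term in words for term in BLOCKLIST)

posts = [
    "Buy cheap pills now, click here: spam spam spam",                   # a genuine violation
    "My essay on breastfeeding stigma mentions the word nipple once.",   # context the filter can't see
    "Reporting someone for spam is the easiest way to silence them.",    # a post *about* spam
]

for post in posts:
    print(naive_flag(post), "-", post)

# All three come back True. The filter cannot tell a violation from a
# discussion of one; that judgment still requires a human reviewer.
```

All three posts get flagged alike. Only a human reviewer can tell the actual violation from a post that merely mentions the forbidden word.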
Here is the current problem: Facebook has moved too quickly in off-loading moderation duties to AI systems that are ill-equipped to perform the tasks they are assigned to do. At the same time, Facebook has slowly and systematically removed every avenue for its users to “appeal” to (or communicate in any way at all with) human moderators to double-check the algo-police.
Algo-policing is part of the reason why it is so easy to be thrown in Facebook jail. The other reason is that algo-police are largely directed to “make arrests” by users, who are not expert jurists, who are not always familiar with the Community Standards, and who often don’t care about the community. Because Facebook has eliminated most avenues for appealing algo-policing decisions– and they largely ignore those appeals even when they are registered– the platform now operates with a default deference to the decisions of algo-police.
And that is why Facebook jail is a no exit situation.*
You may remember Jean-Paul Sartre’s one-act play No Exit from your undergraduate existentialism class. In it, three wretched souls find themselves trapped in a room with one another, thinking that they are on their way to eternal damnation, only to later find out that they have been led there by a mysterious Valet to torture one another. Near the end of the play, Sartre gives us this:
GARCIN: So this is hell. I’d never have believed it. You remember all we were told about the torture-chambers, the fire and brimstone, the “burning marl.” Old wives’ tales! There’s no need for red-hot pokers. HELL IS– OTHER PEOPLE!
I was reminded of Sartre’s play several times during my 32 days in Facebook jail. (And many more times during my years on Facebook.) I’m very concerned that Facebook’s rush to embrace algo-moderation in its effort to redouble its commitment to “Voice” is, in actual fact, undermining that very commitment.
The AI systems being used for speech and text moderation are not up to the task. Yet. They may be some day soon, but not if they are being trained by the mob to distinguish between voices that should be protected and voices that should be silenced.
And that is my biggest fear about Facebook right now. These AI systems are learning (and learning fast) how to moderate, but if their erroneous decisions are being ignored, if they go unchecked and uncorrected, then what they are being fed is bad training data. Just like with stop-and-frisk, if you don’t tell police that it is incorrect to presume that all black and brown people are likely criminals, they will continue to arrest black and brown people and, propter hoc, not only reinforce their presumption but actually make it the case that more black and brown people are criminals.
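If you’ll indulge one more toy example (invented numbers, not drawn from any real system), here is what that feedback loop looks like when a moderation model is retrained on its own uncorrected decisions:

```python
# A stylized feedback loop, with invented numbers: the model starts out
# flagging one group far more often than another, and each "generation" it is
# retrained on its own past decisions. Without human correction the initial
# bias simply persists; with even a partial human double-check it decays
# toward the true violation rate.
TRUE_RATE = 0.05        # actual violation rate in both groups
CORRECTION = 0.0        # fraction of decisions a human reviews (try 0.2)

rates = {"group_a": 0.05, "group_b": 0.30}   # model's starting flag rates

for gen in range(10):
    for group, flag_rate in rates.items():
        # Uncorrected decisions feed back into training as-is; corrected ones
        # are relabeled by a human reviewer at the true rate.
        rates[group] = (1 - CORRECTION) * flag_rate + CORRECTION * TRUE_RATE
    print(f"gen {gen}: " + ", ".join(f"{g}={r:.2%}" for g, r in rates.items()))

# With CORRECTION = 0.0, group_b is flagged at 30% forever, no matter what.
# With CORRECTION = 0.2, both groups converge toward the true 5% rate.
```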
We saw the damage that stop and frisk did in one major metropolitan area in one nation. Now imagine its effects extended to a community of 2.4 billion people.
* Obviously, I got out of Facebook jail, so it’s not impossible to exit. However, I am not happy about how I was emancipated.
After exhausting every available avenue for appeal (there are few, and they are largely ineffective), I decided last week to enlist my friends to report the banning of this blog’s URL as a “mistake,” in hopes that I could just annoy my way to liberation. That strategy did not work.
As it turned out, I have a friend who has a friend who works at Facebook. The Facebook employee got me out of Facebook jail in less than 24 hours. I’m happy to be out, but you shouldn’t have to rely on having a well-connected network of friends to exercise your, ahem, “Voice.”
Every chance you get, ask your representatives– local, state, and federal– what they know about algorithms and what they’re doing to promote algorithmic transparency… while you still have a Voice.