YouTube sparked widespread speculation about its moderation policies this week after it admitted to accidentally deleting comments that contained phrases critical of the Chinese Communist Party (CCP). Today, the company told The Verge that the issue was not the result of outside interference — an explanation for the error floated by many.
The phrases that triggered automatic deletion included “communist bandit” and “50-cent party,” a slang term for internet users paid to defend the CCP. Some speculated that an outside group, perhaps connected to the CCP, manipulated YouTube’s automated filters by repeatedly reporting these phrases, causing the algorithm to tag them as offensive.
Speaking to The Verge, YouTube spokesperson Alex Joseph denied that this happened and said that, contrary to popular belief, YouTube never removes comments only on the basis of user reports.
“This was not the result of outside interference, and we only remove content when our enforcement system determines it violates our Community Guidelines, not solely because it’s flagged by users,” said Joseph. “This was an error with our enforcement systems and we have rolled out a fix.”
The incident is yet another example of how big tech companies have found themselves unwilling participants in a global debate about censorship and free speech. How did YouTube end up as the de facto enforcer of Chinese censorship rules on the world’s internet?
Although YouTube’s comments today offer more detail than previously supplied, they leave important questions unanswered. How exactly did this error enter the system? And why did it go unnoticed for months? These aren’t trivial issues, as YouTube’s lack of a proper explanation has enabled political figures to accuse the company of bias toward the CCP.
This week, Senator Josh Hawley (R-MO) wrote to Google CEO Sundar Pichai, asking for answers in regard to “troubling reports that your company has resumed its long pattern of censorship at the behest of the Chinese Communist Party.” At a time when Republicans are being criticized for mishandling a global pandemic, talking points about Big Tech supposedly enforcing Chinese censorship are a welcome distraction.
The missing context
The big question is how, exactly, terms with such specific anti-communist meanings came to be designated as offensive.
YouTube explains that its comment filters work as a three-part system, one that is broadly consistent with other moderation approaches in the industry. First, users flag content they find offensive or objectionable. Then, that content is sent to human reviewers who approve or reject the reports. Finally, this information is fed into a machine learning algorithm, which uses it to automatically filter comments.
Crucially, says YouTube, this system means that content is always considered within its original context. There are no terms that are considered offensive every time they appear, and no definitive “ban list” of bad phrases. The aim is to approximate the human ability to parse language, reading for tone, intent, and context.
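To make the shape of that pipeline concrete, here is a minimal sketch of the flag → human review → learned filter flow described above. It is not YouTube’s actual system: the class names, the scikit-learn classifier, and the use of word n-grams as a crude stand-in for “context” are all illustrative assumptions.

```python
# Illustrative sketch only: a toy three-stage moderation pipeline.
# Nothing here reflects YouTube's real infrastructure or APIs.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FlaggedComment:
    text: str
    flag_count: int = 0
    reviewer_verdict: Optional[str] = None  # "violates" or "allowed"


class ReviewQueue:
    """Stages 1 and 2: user flags queue a comment; only humans issue verdicts."""

    def __init__(self) -> None:
        self.items: List[FlaggedComment] = []

    def flag(self, text: str) -> None:
        # A flag alone never removes anything; it only adds to the review queue.
        for item in self.items:
            if item.text == text:
                item.flag_count += 1
                return
        self.items.append(FlaggedComment(text=text, flag_count=1))

    def record_review(self, text: str, verdict: str) -> None:
        # A human reviewer approves or rejects the report.
        for item in self.items:
            if item.text == text:
                item.reviewer_verdict = verdict


def train_filter(reviewed: List[FlaggedComment]):
    """Stage 3: reviewer decisions become training data for an automatic filter.

    Word bigrams stand in (very crudely) for "context": the model scores whole
    comments rather than consulting a fixed ban list of phrases. Training
    requires examples of both verdicts.
    """
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labeled = [c for c in reviewed if c.reviewer_verdict is not None]
    texts = [c.text for c in labeled]
    labels = [1 if c.reviewer_verdict == "violates" else 0 for c in labeled]
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)
    return model
```

Even in a toy version like this, the relevant failure mode is visible: if reviewers mislabel a batch of flagged comments, or if the features capture too little of the surrounding context, the learned filter will happily remove benign phrases at scale.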
In this particular case, says YouTube, the context for these terms was somehow misread. That’s fine, but what’s unclear is whether this was the fault of human reviewers or machine filters. YouTube says it can’t answer that question, though presumably it’s trying to find out.
Whether humans were responsible for this mistake is an interesting question, because if they were, it suggests human moderators can be tricked by users flagging content as offensive — despite YouTube’s protestations that this was not the case.
If enough CCP-friendly users told YouTube that the phrase “communist bandit” was unforgivably offensive, for example, how would the company’s human reviewers react? What cultural knowledge would they need to make a judgement? Would they believe what they’re told or would they stop to consider the wider political picture? YouTube doesn’t censor the phrase “libtard,” for example, though some people in the US might consider this an offensive political insult.
What’s particularly strange is that one of the terms that triggered deletion, “wu mao,” a derogatory term for users paid to defend CCP policies online, isn’t even censored in China. Charlie Smith of the nonprofit GreatFire, which monitors Chinese censorship, told The Verge that the phrase really isn’t considered to be that offensive. “In general, wu mao neither need protection nor need to be defended,” says Smith. “They are wu mao and their job is just to cut, paste, and click. Nobody pays them any heed.”
Again, we just don’t know what happened here, but Google’s explanation doesn’t seem to completely rule out the possibility that some sort of coordinated campaign was involved. At the very least, this is more proof, if it were needed, that internet moderation is an unrelentingly difficult task that’s impossible to resolve to everyone’s satisfaction.
Transparency over censorship
This incident may be forgotten, but it points to a bigger problem in how tech companies talk to the public about the systems that obscure or highlight content on their platforms.
Big Tech has generally been unwilling to be too explicit about these sorts of systems, a reticence that has enabled political accusations, particularly from right-wing figures, about censorship, bias, and shadowbanning.
This silence is often an intentional strategy, says Sarah T. Roberts, a professor at UCLA who researches content moderation and social media. Tech companies obscure how these systems work, she says, because they’re often more hastily assembled than the firms would like to admit. “I think they’d like us to all imagine that these processes are seamless and infallible,” says Roberts. But when they don’t explain, she says, people offer their own interpretations.
When these systems are exposed to scrutiny, it can reveal anything from biased algorithms to human misery on a grand scale. The most obvious example in recent years has been the revelations about Facebook’s human moderators, paid to evaluate the most gruesome and disturbing content on the web without proper support. In Facebook’s case, a lack of transparency ultimately led to public outrage and, eventually, a legal settlement with moderators.
Avoiding that kind of backlash isn’t the noblest motivation to be more open, but in the long run, this intentional obscurity can lead to even bigger problems. Carwyn Morris, a researcher at the London School of Economics and Political Science who focuses on China and digital activism, tells The Verge that a lack of transparency creates a widespread rot in platforms: it undermines user trust, allows errors to multiply, and makes it harder to distinguish slapdash moderation from genuine censorship.
“I think content moderation is a necessity, but it should be transparent to avoid any authoritarian creepage,” says Morris, “or to find mistakes in the system, such as this case.” He suggests that YouTube could start by simply notifying users when their comments are removed for violating its terms — something the company currently does only for videos. If it had done this already, it might have noticed this particular error sooner, saving itself a whole lot of trouble.