Twitter significantly ramped up its moderation efforts in the second half of 2020.
Between July 1 and December 31 of last year, the social media platform took action against more than 1.1 million accounts for hateful content and more than 946,000 accounts for abusive behavior. Those figures represent increases of 77 percent and 142 percent, respectively, over the first six months of 2020, according to a transparency report the company published this week.
Twitter’s increased moderation comes amid its ongoing effort to “increase the health of public conversation,” and the company attributes the uptick to two main factors. The first is a set of changes it made to its hateful conduct policy throughout last year, which expanded the rules to cover “content that incites fear and/or fearful stereotypes about protected categories” in response to abusive behavior stemming from the coronavirus pandemic. The second is improved technology: the company’s automated systems flagged 65 percent of all actioned content before users reported it, up from 50 percent in 2019.
Of course, Twitter still has plenty of work to do to reduce hateful and abusive content on its platform, but the figures suggest it’s at least heading in the right direction.
Elsewhere in tech, Facebook is paying out more than $1 billion to content creators through 2022.