Twitch responds to ‘Twitch Do Better’ movement with improved chat filters

Twitch issued a statement today announcing the steps it’s taking to protect marginalized streamers on its platform.

“We’ve seen a lot of conversation about botting, hate raids, and other forms of harassment targeting marginalized creators,” Twitch writes. “You’re asking us to do better, and we know we need to do more to address these issues.”

Twitch says it’s identified “a vulnerability in our proactive filters, and have rolled out an update to close this gap and better detect hate speech in chat.” It also says it will implement more safety features in the coming weeks, including improvements to the account verification process and ban evasion detection tools.

The statement comes in response to the hashtag #TwitchDoBetter, an effort started by Twitch creator RekItRaven to raise awareness of the harassment Black creators were experiencing on the streaming platform.

“I was hate raided for the 2nd time in a week and I shared both the first and second occurrences [on Twitter] because they were very pointed rather than the normal, ‘You’re fat, black, gay stuff,’” Raven tells The Verge via direct messaging.


Raiding is a popular Twitch feature that allows a streamer to send viewers to another streamer at the end of their broadcast. It’s a tool used to boost viewership, grow communities, and foster connections between streamers and their audiences. Hate raids are the toxic opposite: in a hate raid, a streamer directs their viewers to another creator, often one who is Black, queer, female, or holds an intersection of marginalized identities, in order to bombard that streamer with hate speech and harassment.

Raven believes they became a target for hate raids because they stream using the Black tag, a new Twitch feature that allows users to classify their streams with different markers. The tags are ostensibly meant to help creators categorize their streams so interested viewers can better find the content they’re looking for, but they also create a beacon trolls use to zero in on vulnerable, marginalized streamers. After their experience with hate raids, Raven noticed other marginalized streamers in their community were having the same experiences. And with no word from Twitch on what was being done to protect its users from that kind of targeted, violent harassment, Raven decided to restart the conversation.

“I started #TwitchDoBetter because I’m tired of having to fight to exist on a platform that says they’re diverse and inclusive but remained silent to the pleas of marginalized creators asking for more protections from hate raids,” Raven says.

Twitch has long struggled to keep toxicity off its platform. Last year, streamer CriticalBard was subjected to a wave of racist trolls when he became the temporary face of the “PogChamp” emote. Twitch also removed its TwitchCop emote amid concerns it might be used to harass creators talking about police violence after George Floyd’s murder. In these situations and now, Twitch has been reactive to the needs of its users rather than proactive, resulting in creator frustration. Better, more proactive moderation tools have been a perennial ask from Twitch’s marginalized creators.

The tools Twitch is implementing in today’s safety rollout will seemingly only address trolls using non-Latin characters to circumvent chat filters. Streamers are asking for more.

“I’d love to see creators having more tools to control their experience like allowing creators to block [recently created] accounts from chatting, [and] allowing mods to approve or decline raids,” Raven says.
