Today, Twitch provided another platform safety update, this time discussing what it has done and is doing to support the battle against malicious bot attacks, sometimes known as “hate raids.”
Twitch’s blog post referred to the situation as a “pervasive problem throughout the internet” and highlighted that many Black and LGBTQIA+ creators had to cope with hate raids last year.
According to a blog post on the platform, however, the number of reports mentioning “hate raid” is down 97 percent from the spike Twitch saw in September of last year. The platform credits this to new machine learning models built to detect and stop that kind of harassment. According to Twitch, more than 75 million “potentially harmful messages” have been “proactively blocked” by the company so far this year, though the post didn’t say whether those messages were specifically related to hate raids or botting.
In discussing how it aims to keep fighting child grooming on the platform, Twitch pointed to its recent acquisition of Spirit AI, which will be used to improve auto-moderation in channels. The post also listed a number of recent additions, including phone-verified chat, shared ban information, and ban-evasion detection. While the majority of the update simply restated things that many people who actively follow the platform already knew, one piece of information at the end of the post was new.
On Nov. 30, the platform intends to release a brand-new feature called “Shield Mode.” The post was vague about what the tool does, but that should become clearer when the feature officially launches later this week.