Twitter is getting better at moderating its platform. That’s one of the main takeaways from the company’s most recent transparency report, which it shared on Wednesday. Between July 1st and December 31st, 2020, Twitter says it took action against 964,459 accounts for abusive behavior, a 142 percent increase over the first six months of the year. Over the same period, the company also cracked down harder on hateful conduct, taking action against 1,126,990 accounts, a 77 percent increase from the 635,415 accounts it reprimanded in the first half of 2020.
What’s notable here is that the company attributes the latter increase to policy changes it put in place throughout 2020. Specifically, it notes that it began taking action on “content that incites fear and/or fearful stereotypes about protected categories” in response to an uptick in harassment during the COVID-19 pandemic.
The company also points to an early-December expansion of its hateful conduct policy that prohibited language that “dehumanizes” people based on their race, ethnicity or national origin. Not mentioned in the report, but likely another contributing factor, is the ban the company put in place in July on links to content that promotes violence and hateful conduct.
In another part of the report, Twitter attributes its recent success to better technology. In the second half of last year, the company claims, its automated moderation tools flagged 65 percent of abusive tweets and other content for action before a user had to report them to moderators. To put that figure in perspective, those tools caught only about 50 percent of such content in late 2019.
Obviously, Twitter has yet to completely stop abuse, harassment and hate speech on its platform, but today’s report shows the company is at least making progress on its 2019 promise to “increase the health of public conversation.”