Facebook is using AI to help its content moderators
Facebook has relied on AI to aid its content moderation efforts for a while, and today the company shared the latest way machine learning is making the thankless task easier. The Verge reports that the social network now uses machine learning to prioritize the queue of posts that human moderators need to review.
Before the new system was implemented, the queue was sorted mostly chronologically, in the order posts were reported. That could slow the company’s response to urgent cases while moderators worked their way through earlier reports. Under the new approach, a set of machine learning algorithms prioritizes flagged content based on “their virality, severity and the likelihood they’re breaking the rules.”
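Facebook hasn’t published how its ranking actually works, but the basic idea of scoring flagged posts and surfacing the most urgent one first can be sketched roughly. The weights and field names below are illustrative assumptions for demonstration, not the company’s real model:

```python
# Illustrative sketch only: the scoring weights and fields here are assumptions,
# not Facebook's actual prioritization system.
from dataclasses import dataclass, field
import heapq


@dataclass(order=True)
class FlaggedPost:
    # heapq is a min-heap, so the negated priority is stored as the sort key
    # to pop the most urgent post first.
    sort_key: float = field(init=False, repr=False)
    post_id: str = field(compare=False)
    virality: float = field(compare=False)        # how fast the post is spreading, 0..1
    severity: float = field(compare=False)        # estimated harm if left up, 0..1
    violation_prob: float = field(compare=False)  # classifier confidence it breaks the rules, 0..1

    def __post_init__(self):
        # Hypothetical weighting: severity matters most, then reach, then certainty.
        priority = 0.5 * self.severity + 0.3 * self.virality + 0.2 * self.violation_prob
        self.sort_key = -priority


def build_review_queue(posts):
    """Return a heap so moderators always pop the highest-priority post next."""
    heap = list(posts)
    heapq.heapify(heap)
    return heap


if __name__ == "__main__":
    queue = build_review_queue([
        FlaggedPost("a", virality=0.9, severity=0.2, violation_prob=0.4),
        FlaggedPost("b", virality=0.1, severity=0.95, violation_prob=0.8),
        FlaggedPost("c", virality=0.5, severity=0.5, violation_prob=0.5),
    ])
    while queue:
        print(heapq.heappop(queue).post_id)  # prints b, c, a
```

The point of a priority queue over a chronological one is that a low-harm post reported hours ago no longer blocks review of a fast-spreading, high-severity post reported minutes ago.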
The company previously stated that it took action against 9.6 million pieces of content in the first quarter of 2020, a significant increase over the 5.7 million in the prior quarter. While some of those posts are obvious enough to be blocked or removed automatically, the rest enter a queue for human moderators to evaluate. Determining whether content is harmful can take a toll on moderators’ mental health, and earlier this year Facebook settled a case brought by about 11,000 of its moderators with a $52 million payout. It also promised to update its content moderation software, muting audio by default and showing videos in black and white.
With Facebook continuing to be the forum through which many people around the world communicate with friends and family, its ability to react to fake and hateful content is crucial to keeping the platform safe.