Facebook says it is improving the way it moderates content on its platform by using artificial intelligence (AI). The social networking giant, whose content review team of around 15,000 reviewers works across more than 50 time zones, receives a constant stream of user reports about objectionable content. Since reviewing those reports is vital to building an effective social network, Facebook is now deploying machine learning to prioritise reported content. Facebook is also boosting copyright protection by allowing page admins to submit copyright requests.
Content moderation is a must for a massive platform like Facebook. But with millions of users posting content simultaneously, it is not easy to catch material that does not look harmful or objectionable at first glance. The growth of hate speech and violent posts on social media is also making it difficult for human reviewers to put a stop to all inappropriate content. Facebook therefore wants to use its AI and machine learning capabilities to speed up the filtering process.
Facebook initially relied on a chronological model for content moderation, with reports reviewed largely in the order they arrived. Over time, however, it shifted towards AI and enabled its systems to automatically find and remove content that isn't suitable for the masses. That automation helped recognise duplicate reports from Facebook users, identify content such as nude and pornographic photos and videos, limit the circulation of spam, and prevent users from uploading violent content.
Now, Facebook wants to go beyond automation and use its machine learning algorithms to sort reported content by priority, so that its human reviewers are used optimally.
“We want to make sure we’re getting to the worst of the worst, prioritising real-world imminent harm above all,” Ryan Barnes, a Facebook product manager who works with its community integrity team, told reporters during a press briefing on Tuesday.
Facebook is using its algorithms to intelligently rank user reports so that its human reviewers can focus on the content that computers cannot catch but that is harmful to society. One key factor the company takes into consideration is how viral a piece of violating content could become on the platform.
“We look for severity, where there is real world harm, such as suicide or terrorism or child pornography, rather than spam, which is not as urgent,” Barnes said.
Additionally, Facebook considers the likelihood of violation, looking for content that is similar to posts that have already violated its policies. This helps prioritise areas where human review matters most. Facebook has not shared its exact formula, but conceptually the ranking can be thought of as a scoring function that weighs these signals together, as the sketch below illustrates.
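The following Python sketch is purely hypothetical: the Report fields, the weights, and the score_report and review_queue functions are illustrative assumptions based on the signals Facebook has described (severity, virality, and likelihood of violation), not the company's actual implementation.

```python
# Purely illustrative sketch: field names, weights, and scoring logic are
# assumptions for this article, not Facebook's actual system.
from dataclasses import dataclass


@dataclass
class Report:
    report_id: str
    severity: float               # 0-1: estimated real-world harm (terrorism high, spam low)
    virality: float               # 0-1: predicted reach/spread of the reported post
    violation_likelihood: float   # 0-1: classifier confidence that the post breaks policy


def score_report(r: Report) -> float:
    """Combine the three signals into one priority score.

    The weights are made up for illustration; severity dominates so that
    'the worst of the worst' reaches human reviewers first.
    """
    return 0.5 * r.severity + 0.3 * r.virality + 0.2 * r.violation_likelihood


def review_queue(reports: list[Report]) -> list[Report]:
    """Return reports in descending priority order for human review."""
    return sorted(reports, key=score_report, reverse=True)


if __name__ == "__main__":
    reports = [
        Report("spam-001", severity=0.1, virality=0.2, violation_likelihood=0.9),
        Report("threat-007", severity=0.95, virality=0.6, violation_likelihood=0.7),
    ]
    for r in review_queue(reports):
        print(r.report_id, round(score_report(r), 3))
```

In a sketch like this, a high-confidence spam report still ranks below a lower-confidence threat report, which matches Facebook's stated goal of surfacing real-world harm ahead of merely high-volume violations.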
That said, Facebook knows that AI is not a perfect solution for every problem and cannot moderate content on its platform on its own.
“We’ve optimised AI to focus on the most viral and most harmful posts, and given our humans more time to spend on the most important decisions,” said Chris Palow, a software engineer in Facebook’s interaction integrity team.
Facebook has also developed local market context that helps it understand market-specific issues, including the ones that emerge in India. This allows its machine learning algorithms to take local context into account and flag content that could impact a particular group of people, Palow explained.
In addition to the changes to its content moderation, Facebook has announced that it is expanding access to its Rights Manager to give all page admins on Facebook and Instagram the ability to submit copyright protection applications. This will allow more creators and brands to issue takedown requests for content re-uploaded on both Facebook and Instagram. Rights Manager was piloted with certain partners in September.