This quarterly report shares metrics on how the platform is doing at preventing and taking action on content that goes against its community standards, while protecting the community's safety, privacy, dignity and authenticity.

The latest report shows progress in reducing the prevalence of violating content, and provides greater transparency and accountability around content moderation operations across different Facebook products.

It includes metrics across 12 policies on Facebook and 10 policies on Instagram.

During the fourth quarter of 2020, on Facebook the team took action on:
  • 6.3 million pieces of bullying and harassment content — up from 3.5 million in Q3, due in part to updates in technology to detect comments
  • 6.4 million pieces of organised hate content — up from 4 million in Q3
  • 26.9 million pieces of hate speech content — up from 22.1 million in Q3 due in part to updates in technology in Arabic, Spanish and Portuguese, and
  • 2.5 million pieces of suicide and self-injury content — up from 1.3 million in Q3, due to increased reviewer capacity.

During the fourth quarter of 2020, on Instagram the team took action on:
  • 5 million pieces of bullying and harassment content — up from 2.6 million in Q3, due in part to updates in technology to detect comments
  • 308,000 pieces of organised hate content — up from 224,000 in Q3
  • 6.6 million pieces of hate speech content — up from 6.5 million in Q3, and
  • 3.4 million pieces of suicide and self-injury content — up from 1.3 million in Q3, due to increased reviewer capacity.
"Our goal is to get better and more efficient at enforcing our community standards. We do this by increasing our use of AI, by prioritising the content that could cause the most immediate, widespread and real-world harm and by coordinating and collaborating with outside experts," says Kojo Boakye, director of Public Policy, Africa.

Facebook plans to share additional metrics on Instagram and to add new policy categories on Facebook. It is also working to have the report's metrics externally audited, and to make the data more interactive so people can understand it better.

"We will continue to improve our technology and enforcement efforts to keep harmful content off of our apps," concludes the team.

For more information, visit www.facebook.com.