
Facebook Machine Learning Tool to Check Terrorist Posts

In this post, you will learn how Facebook uses a machine learning tool to contain online terrorist propaganda. The following topics are discussed:

  • High-level design of Facebook's machine learning solution for blocking inappropriate posts
  • Threat model (attack vector) against Facebook's ML-powered solution

ML Solution Design for Blocking Inappropriate Posts

The following is the workflow Facebook uses for handling inappropriate messages posted by terrorist organizations or users.

  • Train/test a text classification ML/DL model to flag a post as inappropriate if it is found to contain words representing terrorist propaganda.
  • In production, block messages that the model predicts as inappropriate with very high confidence.
  • Flag messages for review by data analysts if the confidence level is not very high. These messages are prioritized by the estimated likelihood of being inappropriate.
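
The three steps above can be sketched as a simple triage function. Everything in this sketch is an assumption: the thresholds, the flagged words, and the `score_post` scorer stand in for a real trained classifier whose details Facebook has not published.

```python
# Illustrative triage mirroring the workflow above. The thresholds and the
# toy keyword scorer are hypothetical; the real system uses a trained
# text-classification model whose internals are not public.

HIGH_CONFIDENCE = 0.9    # hypothetical auto-block threshold
REVIEW_THRESHOLD = 0.5   # hypothetical analyst-review threshold
FLAGGED_WORDS = {"attack", "recruit"}   # illustrative terms only

def score_post(text: str) -> float:
    """Toy stand-in for P(inappropriate | text) from a trained model."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(w in FLAGGED_WORDS for w in words)
    return min(1.0, hits / 2)

def triage(text: str) -> str:
    p = score_post(text)
    if p >= HIGH_CONFIDENCE:
        return "block"    # step 2: block with very high confidence
    if p >= REVIEW_THRESHOLD:
        return "review"   # step 3: queue for data analysts
    return "allow"

def review_queue(posts):
    """Step 3 also prioritizes the analyst queue by likelihood estimate."""
    pending = [p for p in posts if triage(p) == "review"]
    return sorted(pending, key=score_post, reverse=True)
```

The key design point is the two-threshold split: automation handles only the high-confidence cases, and everything uncertain is routed to humans in priority order.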

Threat Model (Attack Vector) on Facebook ML-powered Solution

The following describes the threat model by which terrorists could carry out attacks on Facebook's claimed ML model, thereby compromising both the system's integrity and its availability. The assumption here is that ISIS representatives are trying to hack into the ML-powered system.

  • The hackers (hypothetically speaking, the users categorized as terrorists) take control of a user account belonging to a non-Islamic country. Alternatively, a hacker from a non-Islamic country opens a new account on Facebook.
  • The hacker starts posting encoded messages with words of interest interleaved, crafted so that it is hard for data analysts to flag the posts as inappropriate.
  • The ML model gets trained on these encoded messages.
  • Later, the hackers (terrorists) start exchanging such messages, compromising the ML system's integrity as inappropriate messages pass through undetected. This is a classic example of false negatives. The following diagram represents the same.
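
The poisoning sequence in the bullets above can be sketched with a toy word-count classifier. The training function, labels, and sample phrases are all hypothetical; the point is only that encoded posts mislabeled as benign teach the model to let future encoded messages through (false negatives).

```python
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs; returns per-label word counts.
    A toy stand-in for retraining the text-classification model."""
    counts = {"ok": Counter(), "bad": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score a post against each label's vocabulary (toy scorer)."""
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

# Hypothetical training data: the encoded posts slipped past analysts and
# were labeled "ok", so the model learns their vocabulary as benign.
training = [
    ("join the attack", "bad"),
    ("attack the city", "bad"),
    ("garden meets at dawn", "ok"),          # poisoned: encoded message
    ("bring flowers to the garden", "ok"),   # poisoned: encoded message
    ("nice weather today", "ok"),
]
model = train(training)

# A later encoded message is classified "ok": a false negative.
verdict = classify(model, "the garden meets at dawn")
```

Here `verdict` comes back `"ok"` because the poisoned samples taught the model that the encoded vocabulary is benign.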

  • Let’s say Facebook later finds out about the encoded messages.
  • It is now difficult to block these encoded messages, primarily because valid messages would also get blocked. This compromises the ML system's availability: in simpler words, the Facebook ML system would become unavailable to users posting valid messages. The following diagram represents the same.
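
The availability problem described above can be illustrated with a toy blocklist. All the phrases are invented; the point is that because encoded messages reuse everyday vocabulary, any filter broad enough to catch them also blocks valid posts (false positives), denying service to legitimate users.

```python
# Hypothetical encoded messages discovered later, and ordinary valid posts.
encoded = ["garden meets at dawn", "bring flowers to the garden"]
valid = ["the garden fair is on sunday", "meets everyone at noon"]

# Naive response: block every word seen in the encoded messages.
blocklist = {w for msg in encoded for w in msg.split()}

def blocked(post):
    return any(w in blocklist for w in post.split())

# Both valid posts share everyday words ("garden", "meets", "at") with the
# encoded ones, so both get blocked: availability is compromised.
false_positives = [p for p in valid if blocked(p)]
```

This is the integrity/availability trade-off in miniature: tightening the filter to remove the false negatives introduced earlier creates false positives instead.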

Summary

In this post, you learned how Facebook went about designing a machine learning solution to contain online terrorist propaganda. The challenge is to make sure that the model is retrained appropriately, and to watch for adversarial data sets entered into the system by hackers (the terrorist organization) to train the model incorrectly.

Ajitesh Kumar

