How does Facebook detect bots?

 


 

Facebook detects bots in many ways, but mostly by analyzing account behavior. It flags and bans accounts that:

 

1. Reach out to a lot more other accounts than usual
2. Showcase a large volume of activity that seems automated
3. Perform activities that don't seem to originate from the country associated with the account
4. Have an unusual demographic structure of their Facebook friends
5. Don't use Facebook beyond a single, narrow objective

 

Background

 

On March 4 of this year, Facebook revealed a new AI tool for fighting fake accounts. It uses a technology called "Deep Entity Classification" (DEC). With it, Facebook's algorithms can detect fake accounts based on a large number of factors, most notably patterns of human behavior. It even improves over time, learning from past encounters with bots.
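Facebook hasn't published DEC's internals, but the general idea of scoring an account from aggregated behavioral features can be sketched. Here is a minimal, hypothetical illustration in Python; the feature names, weights, and values are invented for this example and have nothing to do with Facebook's actual model:

```python
import math

# Hypothetical behavioral features for one account.
# Names and values are invented for illustration only.
account = {
    "friend_requests_per_day": 140.0,    # very aggressive outreach
    "avg_seconds_between_actions": 1.8,  # suspiciously fast
    "days_since_signup": 2.0,            # brand-new account
    "distinct_features_used": 1.0,       # single-purpose usage
}

# Hand-picked weights standing in for a trained model's parameters.
WEIGHTS = {
    "friend_requests_per_day": 0.02,
    "avg_seconds_between_actions": -0.4,
    "days_since_signup": -0.05,
    "distinct_features_used": -0.6,
}
BIAS = 0.5

def fake_account_score(features):
    """Logistic score in [0, 1]; higher means 'more likely fake'."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

print(f"score = {fake_account_score(account):.2f}")  # ~0.87 for this account
```

In a real system the weights would be learned from labeled accounts rather than hand-picked, and DEC reportedly draws on far richer feature sets than this.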

Facebook is also building a fake platform to serve as a "honeypot" for bots: a scaled-down imitation of the real platform, without human users. Bots would be drawn to it and interact with fake users (other bots), letting Facebook learn from their behavior. Most importantly, they wouldn't be able to harass or spam actual human users.

These are just some of the steps that Facebook has recently taken to prevent unwanted traffic. However, some of them have had the unfortunate side effect of banning human users as well. Many people have complained about their accounts being banned or permanently suspended. Naturally, that left a lot of people, developers and non-developers alike, wondering:

 

How does Facebook detect bots?

 


 

As it turns out, in a lot of ways. One of the original ways to prevent bot traffic was to hide large portions of data from the web, requiring users to log in or sign up to see the full content. That is more complex than it sounds: Facebook needs to offer enough content to people without accounts to draw them in. It's forced to walk a fine line between protecting data and shutting itself off from the public.

So, Facebook implemented a strategy where it shows some content to the public while requiring a person to log in to access the rest.

However, that increased the number of fake accounts substantially. To make matters worse, a lot of unethical developers started using them to spread viruses, spam, and even political propaganda. It was clear that Facebook needed to improve its strategy. That's where the aforementioned DEC comes in. It uses deep learning to weed out any account that it deems suspicious.

Facebook hasn't fully disclosed which offenses can get an account deemed "fake". However, there are some rules of thumb that it has either revealed or that developers have discovered. Knowing them helps keep legitimate accounts from being classified as fake.

Accounts that get banned usually:

 

1. Reach out to a lot more other accounts than usual

 

Reaching out to a large number of people in a short time is one of the most obvious patterns of spammer behavior. A fake account created for such a purpose will obviously try to message or befriend as many people as possible. Facebook realized this quickly, so it now bans accounts that reach out to an excessive number of other accounts, especially when that behavior comes from a freshly made account that isn't otherwise active on Facebook.

A good rule of thumb for users who want to avoid a ban is to use Facebook in a more moderate, wholesome way.
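To make the idea concrete, here is a minimal sketch of such a rate heuristic in Python. The window size and limits are illustrative guesses; Facebook's real thresholds are not public:

```python
from datetime import datetime, timedelta

# Illustrative thresholds; Facebook's real limits are not public.
WINDOW = timedelta(hours=24)
LIMIT_ESTABLISHED = 50   # daily friend requests for an older account
LIMIT_NEW_ACCOUNT = 15   # stricter limit for freshly made accounts
NEW_ACCOUNT_AGE = timedelta(days=30)

def is_outreach_suspicious(request_times, signup_time, now=None):
    """Flag an account that sends too many friend requests in one day."""
    now = now or datetime.utcnow()
    recent = [t for t in request_times if now - t <= WINDOW]
    limit = (LIMIT_NEW_ACCOUNT if now - signup_time <= NEW_ACCOUNT_AGE
             else LIMIT_ESTABLISHED)
    return len(recent) > limit
```

Note how the limit tightens for new accounts, mirroring the point above that fresh, otherwise inactive accounts draw extra scrutiny.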

 

2. Showcase a large volume of activity that seems automated

 

Humans and bots behave in different ways online. While human actions tend to be slower and less predictable, bots follow highly logical algorithms and act much faster. Facebook tracks the behavior of its users and weeds out bots based on their highly predictable, fast, and repetitive usage. If Facebook suspects that a bot is using a certain account, it might impose a partial ban that prevents the account from accessing some functionalities while letting it use others.

It's not surprising that few genuine accounts get banned for this offense. It's hard for people to imitate bot behavior even when they try to do so on purpose, let alone accidentally.
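One simple way to quantify "predictable and fast" is to look at the gaps between consecutive actions: humans produce slow, irregular gaps, while scripts fire at near-constant intervals. A minimal sketch, with thresholds that are pure guesses:

```python
from statistics import mean, stdev

def looks_automated(action_timestamps, min_actions=20):
    """Heuristic: fast, metronome-like action timing suggests a bot.

    `action_timestamps` is a sorted list of times in seconds.
    Thresholds here are illustrative guesses, not Facebook's values.
    """
    if len(action_timestamps) < min_actions:
        return False  # not enough evidence either way
    gaps = [b - a for a, b in zip(action_timestamps, action_timestamps[1:])]
    avg = mean(gaps)
    # Coefficient of variation: spread of the gaps relative to their size.
    cv = stdev(gaps) / avg if avg > 0 else 0.0
    return avg < 2.0 and cv < 0.2  # fast AND suspiciously regular
```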

 

3. Perform activities that don't seem to originate from the country associated with the account

 


 

It's no secret that ads are the main source of revenue for Facebook. In order for them to be effective, Facebook uses data from its users to enable the creation of targeted ads. That's one of the most important reasons why it tracks people's whereabouts. Naturally, the company uses that information to find and disable fake accounts.

If someone logs in from, say, Tokyo and then 20 minutes later from Rio de Janeiro, Facebook knows they couldn't really have traveled that far so quickly. They most likely used proxies or manipulated their location in some other way, and the company doesn't like that, especially since it might mean the account has been hacked. To prevent that, the company disables or temporarily locks such accounts until the issue gets cleared up.
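This kind of check is often called "impossible travel". A minimal version computes the great-circle distance between consecutive logins and the speed that hop would imply; the 1,000 km/h cutoff below (roughly airliner speed) is an illustrative assumption:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 1000.0  # roughly airliner speed; illustrative

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def impossible_travel(prev_login, next_login):
    """Each login is (lat, lon, unix_seconds). Flags impossible hops."""
    lat1, lon1, t1 = prev_login
    lat2, lon2, t2 = next_login
    hours = max((t2 - t1) / 3600.0, 1e-9)  # guard against zero elapsed time
    speed_kmh = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed_kmh > MAX_PLAUSIBLE_SPEED_KMH

# Tokyo, then Rio de Janeiro 20 minutes later -> flagged as impossible
print(impossible_travel((35.68, 139.69, 0), (-22.91, -43.17, 20 * 60)))
```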

 

4. Have an unusual demographic structure of their Facebook friends

 

As mentioned earlier, Facebook started using artificial intelligence in its fight against bots. That means it learns from actual human users and can determine when an account falls out of line. As a social network, one of the most obvious ways it can spot abnormal behavior is through an account's friend network; more precisely, its demographic structure.

A human might have a lot of friends around the same age, plus some older or younger work colleagues and family members. Therefore, the demographic structure of their Facebook friends is fairly predictable. A bot that sends friend requests indiscriminately, regardless of who the recipient is, looks much more suspicious. And accounts that Facebook finds suspicious are bound to get disabled, banned, or at the very least restricted.
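One crude way to capture this is to measure how strongly a friend list clusters around the account owner's own age. A minimal sketch; the band width and the interpretation of the score are assumptions for illustration:

```python
def friend_age_clustering(owner_age, friend_ages, band=10):
    """Fraction of friends within +/- `band` years of the owner's age.

    Real users' friend lists tend to cluster around their own age,
    with some family and colleagues outside the band. A near-uniform
    spread is one possible signal of indiscriminate friending.
    """
    if not friend_ages:
        return 0.0
    close = sum(1 for age in friend_ages if abs(age - owner_age) <= band)
    return close / len(friend_ages)

# A 25-year-old whose friends are spread evenly from 13 to 80
# scores low here and looks atypical.
print(friend_age_clustering(25, list(range(13, 81))))  # ~0.31
```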

 

5. Don't use Facebook beyond a single, narrow objective

 

Over the years, Facebook grew to encompass a lot of possible online activities, especially social ones. Chats, games, quizzes, videos, pictures, groups, and business pages are just a part of what Facebook offers its users. So when an account uses only one functionality every time, Facebook's algorithms assume it isn't there to use the network as intended. Spammers and bots are especially infamous for this because they are created with a single, specific purpose.

Therefore, it's not surprising that Facebook might restrict some functionalities for users who seem to abuse them, especially if it considers them to be bots or spammers.
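A simple proxy for "single-purpose usage" is the diversity of features an account touches, which can be measured as the Shannon entropy of its usage counts. A minimal sketch; the feature names and counts are made up:

```python
from math import log2

def usage_diversity(feature_counts):
    """Shannon entropy (in bits) of an account's feature usage.

    `feature_counts` maps a feature name to how often it was used.
    Zero entropy means the account does exactly one thing -- the
    single-purpose pattern described above.
    """
    total = sum(feature_counts.values())
    if total == 0:
        return 0.0
    probs = [c / total for c in feature_counts.values() if c > 0]
    return -sum(p * log2(p) for p in probs)

print(usage_diversity({"friend_requests": 500}))                  # 0.0
print(usage_diversity({"chat": 40, "groups": 10, "videos": 25}))  # ~1.4 bits
```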

Again, the solution seems to lie in using Facebook in a more wholesome, genuine way.

 


 

It's clear that Facebook has stepped up its game in an effort to win back its credibility in the eyes of users and advertisers alike. Of course, no one would argue that it shouldn't do so; weeding out spammers and unethical developers is extremely important.

However, it also means that others may be caught in the crossfire. We at justLikeAPI are investing extra effort to make sure our service works on Facebook in an ethical, legal way, even though it may require additional resources.
