Researchers have developed a tool to help you unmask fake accounts on Twitter: the BotOrNot analyzer.
Indiana University researchers have found a way to determine whether a Twitter account is operated by a human or by a social bot, an automated software application.
“We have applied a statistical learning framework to analyze Twitter data, but the ‘secret sauce’ is in the set of more than one thousand predictive features able to discriminate between human users and social bots, based on content and timing of their tweets, and the structure of their networks,” said Alessandro Flammini, an associate professor of informatics and principal investigator on the project.
BotOrNot analyzes more than 1,000 features drawn from a user's friendship network, Twitter content and temporal information, and calculates the likelihood that the account is operated by a bot.
In other words, the tool examines the content and timing of an account's tweets, along with the structure of the account's network, to tell social bots apart from human users.
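To make the idea concrete, here is a minimal sketch of the kind of timing and content features such a classifier might compute from an account's tweets. The feature names, tweet format and thresholds below are illustrative assumptions, not BotOrNot's actual feature set:

```python
# Hedged sketch: illustrative timing/content features for bot detection.
# The tweet format (timestamp, text) and feature names are assumptions,
# not the real BotOrNot feature set.
from statistics import mean, stdev

def timing_and_content_features(tweets):
    """tweets: list of (unix_timestamp, text) pairs, oldest first."""
    times = [t for t, _ in tweets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    texts = [txt for _, txt in tweets]
    return {
        # Timing: highly regular posting intervals can hint at automation.
        "mean_gap_s": mean(gaps),
        "std_gap_s": stdev(gaps) if len(gaps) > 1 else 0.0,
        # Content: fraction of tweets containing links or hashtags.
        "link_ratio": mean(1.0 if "http" in t else 0.0 for t in texts),
        "hashtag_ratio": mean(1.0 if "#" in t else 0.0 for t in texts),
    }

# Toy data: three tweets posted exactly one hour apart.
tweets = [
    (0, "Breaking news! http://example.com"),
    (3600, "Another update #news"),
    (7200, "Read this http://example.com #news"),
]
print(timing_and_content_features(tweets))
```

A real system would feed hundreds of such features, plus network-structure ones, into a trained statistical model rather than inspecting any single value.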
The researchers recounted how they developed the tool: they studied the behavior of Twitter bots created by a Texas A&M University professor's infolab and trained statistical models to distinguish humans from social bots. Using the AUROC evaluation measure, BotOrNot scores 0.95, with 1.0 indicating perfect accuracy.
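AUROC (area under the receiver operating characteristic curve) can be computed directly from the classifier's scores via the rank-sum formula; a score of 0.5 corresponds to random guessing and 1.0 to perfect separation. The sketch below is a generic illustration of the measure with made-up labels and scores, not BotOrNot's data:

```python
# Hedged illustration of the AUROC measure; the labels and scores are
# invented toy data, not BotOrNot's evaluation set.

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formula.

    labels: 1 for bot, 0 for human; scores: the classifier's
    bot-likelihood per account. Returns 0.5 for a random
    classifier and 1.0 for a perfect one.
    """
    # Rank scores in ascending order, averaging ranks for ties.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: a classifier that ranks most bots above most humans.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auroc(labels, scores))  # one bot is outscored by one human, so < 1.0
```

One mis-ranked pair out of nine bot/human pairs yields 8/9 ≈ 0.89 here; a 0.95 score like BotOrNot's means the model orders bot/human pairs correctly 95% of the time.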
“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” said Fil Menczer, the informatics and computer science professor who directs IU’s Center for Complex Networks and Systems Research, where the new work is being conducted as part of the information diffusion research project called Truthy. “Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”
The goal is to support human efforts to counter misinformation with truthful information, said Flammini and Menczer. Both believe that social bots, if not identified, could be dangerous: they could cause panic during an emergency, affect the stock market, facilitate cybercrime and hinder the advancement of public policy.
(Photo courtesy of http://truthy.indiana.edu/botornot/)