Troll-Hunting Algorithm Developed – Researchers Can Now Spot Internet’s “Future Banned Users”


Researchers at Cornell University have developed an algorithm that can positively identify internet trolls before they become a menace.

The team managed the feat by conducting an 18-month study of banned commenters on cnn.com, breitbart.com, and ign.com. The three-member team now claims it can identify internet trolls with great accuracy, needing only about 10 of a user's posts to spot an inflammatory commenter.

The study, which was funded by internet giant Google, compared anti-social users, or “Future Banned Users” (FBUs), with more cooperative commenters, or “Never Banned Users” (NBUs). Nearly all of the 10,000 FBUs studied wrote at a lower perceived standard of literacy and clarity than the average commenter, and that standard only declined further up to the point they were warned and later banned from commenting.

The study also discovered that trouble-making commenters tend to concentrate their efforts on fewer comment threads relative to the number of posts they make. In simpler terms, internet trolls are always on the lookout for an online war of words: they hunt for threads with a lower comment count, where it is easier to spot a viable opportunity to spread hate or start a virtual fight.
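
To make that signal concrete, here is a minimal sketch of how a posting-concentration score might be computed, assuming a simple per-user list of thread identifiers; the function name, data layout, and example values are illustrative and not taken from the study itself.

```python
from collections import Counter

def thread_concentration(thread_ids):
    """Fraction of a user's posts that land in their single busiest thread.

    `thread_ids` holds one thread identifier per post by the same user.
    A value near 1.0 means the user piles most comments into one thread,
    the pattern the study associates with Future Banned Users.
    """
    if not thread_ids:
        return 0.0
    counts = Counter(thread_ids)
    return max(counts.values()) / len(thread_ids)

# Toy data for illustration only (not from the study): a user who posts
# ten times across two threads scores far higher than one spread over ten.
print(thread_concentration(["t1"] * 8 + ["t2"] * 2))       # 0.8
print(thread_concentration([f"t{i}" for i in range(10)]))  # 0.1
```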

Just as the internet isn’t the same everywhere, trolls aren’t all created equal. The researchers quickly realized that instigators on CNN were more likely to initiate new posts or sub-threads, whereas trolls at Breitbart and IGN were more likely to comment on existing threads.

Summarizing the complex algorithm, the team stated that, in general, FBUs are incendiary and persistent commenters with poor grammar who tend to get into heated arguments right before they are banned.
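
As a rough illustration of how such signals could be combined, the sketch below trains an off-the-shelf classifier on three hypothetical per-user features (readability, thread concentration, and the share of posts deleted by moderators). The feature choices and the training numbers are assumptions made for demonstration, not the researchers' actual model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one user, aggregated over their first ~10 posts:
# [readability score, thread concentration, fraction of posts deleted].
# Labels: 1 = later banned (FBU), 0 = never banned (NBU).
# All values are made-up toy numbers, not data from the study.
X = np.array([
    [0.30, 0.80, 0.40],
    [0.25, 0.70, 0.50],
    [0.85, 0.20, 0.00],
    [0.90, 0.10, 0.05],
    [0.40, 0.60, 0.30],
    [0.80, 0.30, 0.10],
])
y = np.array([1, 1, 0, 0, 1, 0])

clf = LogisticRegression().fit(X, y)

# Score a new user from the same aggregated features.
new_user = np.array([[0.35, 0.75, 0.35]])
print(clf.predict_proba(new_user)[0, 1])  # estimated probability of a future ban
```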

The research is important owing to the rising volume of comments that must be scrutinized for inflammatory or instigative content; it is becoming increasingly difficult to verify each comment or thread manually. Moreover, machines simply aren’t intuitive enough today to pick up on cues hidden in plain sight. While a human may easily spot derogatory, insulting, or insinuating posts from an internet troll, a digital brain can only censor explicit profanity.

Despite the promise of automatically identifying, and even auto-banning, comment tyrants, the researchers caution that their algorithm needs further improvement. By their own admission, the system wrongly flags one in five commenters as internet trolls.

Beyond the error rate, the researchers also caution that it is often the restrictive or archaic policies of the platforms themselves that anger a person and turn him or her into an internet troll.

“Taking extreme action against small infractions can exacerbate antisocial behavior (e.g., unfairness can cause users to write worse).”

[Image Credit | Wikipedia]
