Twitter Is Improving Its Troll-Detecting Capabilities

The move is part of Twitter's plan to improve the health of discussion on the platform

Much like any average unfiltered commenting platform, Twitter has seen its abuse problem slowly worsen.

Users can already mute people they find offensive.

Although Twitter employees have been caught on camera admitting that shadowbanning takes place on the platform, the company continues to publicly deny that it engages in the practice; it has even made such denials in Senate hearings.

"Some troll-like behavior is fun, good and humorous," the company acknowledged in its announcement. Twitter will also be "looking at how accounts are connected to those that violate our rules and how they interact with each other". While these accounts reportedly make up less than one percent of all Twitter accounts, the platform maintains that this small portion of users still significantly affects the online experience.
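
Twitter has not published how it combines such behavioral signals, but the idea of weighting several weak signals into a single account-level score can be sketched roughly as follows. All signal names and weights here are invented for illustration and are not Twitter's actual model.

```python
# Hypothetical behavioral signals and weights, invented for illustration;
# Twitter has not disclosed its actual signals or how they are combined.
SIGNAL_WEIGHTS = {
    "linked_to_violating_accounts": 0.5,  # connections to rule-breaking accounts
    "unconfirmed_email": 0.2,
    "coordinated_signup_pattern": 0.3,
}

def troll_score(signals: dict) -> float:
    """Combine boolean behavioral signals into a score in [0, 1]."""
    raw = sum(weight for name, weight in SIGNAL_WEIGHTS.items()
              if signals.get(name))
    return min(raw, 1.0)

score = troll_score({
    "linked_to_violating_accounts": True,
    "unconfirmed_email": True,
})  # ≈ 0.7 under these invented weights
```

A real system would likely use a learned model over many more signals rather than fixed weights, but the shape is the same: many weak indicators, one score.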

When an account is identified as a possible troll, Twitter will deprioritize any content shared by that account.

Those tweets will still be available, but users will have to manually click "Show more replies" to reveal them. The company noted that while some of these accounts and tweets violate its policies, "Others don't but are behaving in ways that distort the conversation".

Separately, Twitter recently admitted that a bug had allowed passwords to be stored in plain text in an internal log. Today's blog post expounds on the additional signals the company will incorporate to further protect the site's integrity.

Currently, Twitter uses policies, human reviewers and machine learning to decide how tweets are organized and presented in conversations and search.

The company said tests of the new system had resulted in fewer abuse reports being filed, suggesting people were having a "better experience".

The aim is to improve the health of the conversation and everyone's Twitter experience. All of these factors, and more, will be taken into account when deciding how visible individual tweets should be.

Gasca and Harvey, who authored the announcement, don't say whether the identified troll tweets will be demoted for everyone or only for the specific users those trolls are known to target.

The company has been testing the new methods in markets around the world; in those tests, abuse reports from searches dropped 4 per cent and reports from conversations declined 8 per cent.

Twitter acknowledged it expects to make mistakes, and that the system will change, learn and improve over time. "There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter," the company wrote. "We are making progress as we go," CEO Jack Dorsey said.