Bryce Elder · @BryceElder

27th Jul 2013 from Falcon

Dear @twitter. Here's my proposal to counter trolling that doesn't involve petitions, newspaper columns and generalised outrage.

Problems with a "report abuse" button:
* it's after the event. The offence has already happened.
* it requires human involvement to screen for spurious reports. This slows down the process as well as costing lots of money. And, last I checked, Twitter is neither profitable nor a charity.

A better way would be to automate the troll catching. This would be relatively simple.

The average troll:
* has created the account within the last month or so.
* has fewer than 50 followers.
* sends a disproportionate number of messages to people with blue ticks.
* uses a few trigger words repeatedly in their messages. You know which ones.

Using these criteria, it'd be simple to write an algorithm that screened and flagged suspicious behaviour without any need for human censorship.
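
Something like this, say. The Account structure, the field names and the thresholds below are all made up for illustration, the trigger-word list is left unspecified (as above), and the real numbers would be tuned against Twitter's own data.

    from dataclasses import dataclass, field

    # Made-up thresholds lifted straight from the criteria above.
    MAX_ACCOUNT_AGE_DAYS = 30
    MAX_FOLLOWERS = 50
    MIN_VERIFIED_MENTION_RATIO = 0.5   # share of recent tweets aimed at blue-tick accounts
    MIN_TRIGGER_TWEETS = 3             # "repeatedly", not a one-off

    @dataclass
    class Account:
        age_days: int
        followers: int
        verified_mention_ratio: float
        recent_tweets: list = field(default_factory=list)

    def is_potential_troll(account, trigger_words):
        """Flag, don't suspend: the result only feeds an opt-in mute filter."""
        if account.age_days > MAX_ACCOUNT_AGE_DAYS:
            return False
        if account.followers >= MAX_FOLLOWERS:
            return False
        if account.verified_mention_ratio < MIN_VERIFIED_MENTION_RATIO:
            return False
        # Count recent tweets containing at least one trigger word.
        trigger_tweets = sum(
            any(word in tweet.lower() for word in trigger_words)
            for tweet in account.recent_tweets
        )
        return trigger_tweets >= MIN_TRIGGER_TWEETS

Every check maps onto one of the criteria above, and none of it needs a human in the loop.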

Users could then be offered a "safe Twitter" option, which automatically muted any user who'd been flagged as a potential troll. An opt-in filter, basically.
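
On the opt-in side, muting then amounts to a filter over the timeline. Again, just a sketch: the timeline as a list of (account, tweet) pairs and the opted_in flag are stand-ins, not anything Twitter actually exposes.

    def safe_timeline(timeline, trigger_words, opted_in=True):
        """'Safe Twitter': hide tweets from flagged accounts for users who
        opted in. Nothing is deleted and nobody is suspended."""
        if not opted_in:
            return timeline
        return [
            (account, tweet)
            for account, tweet in timeline
            if not is_potential_troll(account, trigger_words)
        ]

A flagged account's tweets still exist and still reach anyone who hasn't opted in; they just don't land in a safe-mode timeline.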

There would be false positives, of course, as well as stuff the algorithm wouldn't catch. But so what? No one's account has been suspended. No one's freedom of expression has been infringed. The only effect would be that potentially trollish messages might not reach their target. Would anyone troll in those circumstances? Fewer would, certainly.

Of course, it's questionable whether many people would opt in to such a filter. It's human nature to want to hear /everything/ being said about you, no matter how shitty. But human nature really isn't Twitter's problem to solve. It's just a web site.
