A few questions and answers on the topic of #shadowbans


Today I put a few questions to Twitter about the "shadow bans". Unfortunately, most of them went unanswered:

1. How many accounts from Germany were affected by limited visibility, and for how long?

2. A report from "Junge Freiheit" quotes a spokesperson from Twitter. He or she says that the ban was merely a technical problem and the limited visibility of the accounts had nothing to do with their content. Is this true? Could you please specify the technical problem?

3. Some Twitter users in Germany, mostly from the right side of the political spectrum, claim that your platform puts certain accounts at a disadvantage for content-related reasons, e.g. criticism of migration, Islam or the German government. What is your reply to such allegations?

4. Why were the account owners affected by the aforementioned technical problem not notified? Will Twitter provide compensation and/or an apology to them?

5. Is this the first time such a problem has occurred?

6. How many "withheld" accounts are there in Germany? Have any of these decisions been revised yet? Why or why not? For how long does such a "withhold" remain in effect?


Here is the response from a spokesperson:

"We are aware of an issue with one of our spam filters that has affected the Search functionality on Twitter. This has resulted in some content and accounts appearing to be unavailable in Search. Our teams are working to resolve the issue and we expect full service to resume very soon".

(It then states, as "background information", that Twitter does not censor content and does not engage in shadow bans.)

"We can, however, temporarily lock accounts for engaging in abuse or for violating our rules. It's written clearly in our policies and users are notified directly: https://support.twitter.com/articles/20171312. We announced this in March as part of our latest safety updates: https://blog.twitter.com/official/en_us/topics/product/2017/our-latest-update-on-safety.html. From the blog:

We aim to only act on accounts when we’re confident, based on our algorithms, that their behavior is abusive. Since these tools are new we will sometimes make mistakes, but know that we are actively working to improve and iterate on them everyday."
