Twitter is giving Satan his due.

After a OneZero article about a Satan parody account whose owner felt his tweets were being unfairly hidden from public view, the company acknowledged late last week that a feature intended to limit trolls had been mistakenly affecting some legitimate accounts, including his. Twitter would not say how many other users were inadvertently caught up by its algorithm, which limits the public visibility of certain accounts based on their behavior and other signals, even if they haven’t broken the platform’s rules.

“Upon further investigation, we realize some of these systems were impacting people using Twitter in a healthy way and so we adjusted them,” a Twitter spokesperson said in a statement. “Thank you for surfacing this to us, we’re always working to improve.” The owner of the Satan account, a 26-year-old British man named Michael, confirmed to me that his account had been restored to good standing and had been rapidly gaining followers since.

The snafu highlights how Twitter’s efforts to automate troll-fighting can quietly go awry, with little recourse for those affected. And it isn’t the first time.

Twitter has been besieged for years by spam bots, porn bots, and human users who use the site to spew hate or harass people. Its human content moderators have proven unable to stem this tide of ugliness, so last year it tried something new: an automated filtering system. The goal of the system — which Twitter has never given a formal name — is to make the site feel friendlier and healthier by downgrading accounts that show signs of obnoxious or spammy behavior. It can downgrade them in different ways, such as burying their replies behind a warning at the bottom of threads or preventing them from showing up in search results by default. Their followers can still see their tweets, but they become largely invisible to the average user. (When it launched, I dubbed it “Twitter purgatory,” a term that Twitter PR didn’t appreciate.)

Notably, Twitter doesn’t notify users affected by the system, and it won’t disclose its inner workings. That became a problem just weeks after its launch, when Vice reported that some prominent conservatives, including several Republican members of Congress, were not showing up in the platform’s search suggestions. Twitter quickly fixed the issue, but not before the report sparked an outcry and even Congressional hearings, in which Twitter was accused by conservatives of political bias — a charge it has repeatedly denied.

Vice called the practice “shadow banning,” and the term has stuck among Twitter’s conservative critics, even though it’s slightly misleading: The users affected don’t become invisible; they just become harder to find. Last month, a series of direct messages from the popular Satan parody account (Twitter handle @s8n, 887,000 followers and counting) prompted me to look into why the company’s systems appeared to be hiding him from replies and search results. The account’s owner, Michael, was desperate, having spent weeks trying, without success, to contact anyone at Twitter who could tell him what was going on or why. Without commenting directly on his account, Twitter implied to me, at the time, that Satan had been limited because he was linked to other accounts that violated the platform’s rules — an accusation that Michael denied.

But it appears that story eventually prompted Twitter to take a closer look at how the feature was affecting certain accounts, including his. Two weeks after it was published, I got another direct message from Michael: “I have some good news, I’m pretty sure my account has been fixed.” A quick check confirmed that Satan once again showed up in public search results and that his replies to other people’s tweets were no longer hidden behind a warning message at the bottom of the thread.

Source: OneZero
