
Shadowbanning Is Big Tech's Big Problem

Social-media companies deny quietly suppressing content, but many users still believe it happens. The result is a lack of trust in the internet.


Sometimes, it feels like everyone on the internet thinks they've been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when, for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users' "For You" pages. (In explanatory blog posts, TikTok and Twitter both claimed that these were large-scale technical glitches.) Sex workers have been accusing social-media companies of shadowbanning since time immemorial, saying that the platforms hide their content from hashtags, disable their ability to post comments, and prevent their posts from appearing in feeds. But almost no one who believes they have been shadowbanned has any way of knowing for sure—and that's a problem not just for users, but for the platforms.

When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world's primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them.
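
To make that original mechanism concrete, here is a minimal sketch in Python of how a forum-style shadowban might work. Everything in it (the data model, the banned-user set, the function names) is hypothetical and for illustration only; no platform's actual implementation is public.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    author: str
    text: str

# Hypothetical moderation state: usernames whose posts are hidden
# from everyone except the authors themselves.
shadowbanned_users = {"spam_bot_42"}

def visible_posts(all_posts, viewer):
    """Return the posts a given viewer should see.

    A shadowbanned author still sees their own posts, so nothing
    looks wrong from their side. That invisibility to the author
    is the entire point of the technique.
    """
    return [
        p for p in all_posts
        if p.author not in shadowbanned_users or p.author == viewer
    ]

posts = [
    Post(1, "alice", "Hello, forum!"),
    Post(2, "spam_bot_42", "Buy cheap pills!!!"),
]

# alice's feed omits the spam; the spammer's feed looks untouched.
print([p.text for p in visible_posts(posts, "alice")])        # ['Hello, forum!']
print([p.text for p in visible_posts(posts, "spam_bot_42")])  # both posts
```

The whole trick is the `p.author == viewer` clause: the banned account's own view is left intact, so the poster gets no error message, and nothing to learn from.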

Shadowbanning is the "unknown unknown" of content moderation. It's an epistemological rat's nest: By definition, users often have no way of telling for sure whether they have been shadowbanned or whether their content is simply not popular, particularly when recommendation algorithms are involved. Social-media companies only make disambiguation harder by denying shadowbanning outright. As the head of Instagram, Adam Mosseri, said in 2020, "Shadowbanning is not a thing."

But shadowbanning is a thing, and while it can be hard to prove, it is not impossible. Some evidence comes from code, such as the recently defunct website shadowban.eu, which let Twitter users determine whether their replies were being hidden or their handles were appearing in searches and search autofill. A French study crawled more than 2.5 million Twitter profiles and found that nearly one in 40 had been shadowbanned in these ways. (Twitter declined to comment for this article.) Other evidence comes from users assiduously documenting their own experiences. For example, the social-media scholar and pole-dancing instructor Carolina Are published an academic-journal article chronicling how Instagram quietly and seemingly systematically hides pole-dancing content from its hashtags' "Recent" tab and "Explore" pages. Meta, formerly Facebook, even has a patent for shadowbanning, filed in 2011 and granted in 2015, according to which "the social networking system may display the blocked content to the commenting user such that the commenting user is not made aware that his or her comment was blocked." The company has a second patent for hiding scam posts on Facebook Marketplace that even uses the term shadow ban. (Perhaps the only thing more contentious than shadowbanning is whether the term is one word or two.) "Our patents don't necessarily cover the technology used in our products and services," a Meta spokesperson told me.
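
The logic behind a tool like shadowban.eu is simple to describe: post publicly, then check whether the platform's own search will surface you. Here is a minimal, platform-agnostic sketch of that probe in Python; `public_search` is a hypothetical stand-in that a real tool would have to implement against a platform's logged-out search, and the demo below runs against a fake index.

```python
# A probe in the spirit of shadowban.eu. The caller supplies
# public_search, a function wrapping some platform's logged-out
# search, so the detection logic stays platform-agnostic.

def probe_search_ban(handle, public_search):
    """Return True if the account looks search-banned: searching
    for its own posts surfaces nothing from it."""
    results = public_search(f"from:@{handle}")
    return not any(r.lower() == handle.lower() for r in results)

# Demo with a fake search index standing in for the platform.
def fake_search(query):
    index = {"from:@alice": ["alice"], "from:@bob": []}  # bob is hidden
    return index.get(query, [])

print(probe_search_ban("alice", fake_search))  # False: visible in search
print(probe_search_ban("bob", fake_search))    # True: possibly shadowbanned
```

A real checker would also have to rule out benign explanations, such as a protected account or deleted posts, which is part of why shadowbanning is so hard to prove from the outside.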

What's more, many social-media users believe they are in fact being shadowbanned. According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn't know existed. It's not hard to imagine what happens when social-media users believe they are victims of conspiracy.

Shadowbanning fosters paranoia, erodes trust in social media, and hurts all online discourse. It lends credence to techno-libertarians who seek to undermine the practice of content moderation altogether, such as those who flock to alt-right social networks like Gab, or Elon Musk and his vision of making Twitter his free-speech maximalist playground. (Last week, in response to his own tweet making fun of Bill Gates's weight, Musk tweeted, "Shadow ban council reviewing tweet …," along with an image of six hooded figures.) And distrust in social-media companies fuels the onslaught of (mostly Republican-led) lawsuits and legislative proposals aimed at reducing censorship online, but that in practice could prevent platforms from taking action against hate speech, disinformation, and other lawful-but-awful content.

What makes shadowbanning so tricky is that in some cases, in my view, it is a necessary evil. Internet users are creative, and bad actors learn from moderation they can observe: Think of the extremist provocateur who posts every misspelling of a racial slur to see which one gets through the automated filter, or the Russian disinformation network that shares its own posts to gain a boost from recommendation algorithms while skirting spam filters. Shadowbanning allows platforms to suppress harmful content without giving the people who post it a playbook for how to evade detection next time.

Social-media companies thus face a challenge. They need to be able to shadowban when it's necessary to maintain the safety and integrity of the service, but not completely undermine the legitimacy of their content-moderation processes or further erode user trust. How can they best thread this needle?

Well, certainly not the way they are now. For one thing, platforms don't seem to reserve shadowbanning for users who try to exploit their systems or evade moderation. They also appear to shadowban based on content itself, without disclosing that such content is forbidden or disfavored. The danger here is that when platforms don't disclose what they moderate, the public—their user base—has no insight into, or means of objecting to, the rules. In 2020, The Intercept reported on leaked internal TikTok policy documents, in use through at least late 2019, showing that moderators were instructed to quietly prevent videos featuring people with "ugly facial looks," "too many wrinkles," "abnormal body shape," or backgrounds featuring "slums" or "dilapidated housing" from appearing in users' "For You" feeds. TikTok says it has retired these standards, but activists who advocate for Black Lives Matter, the rights of China's oppressed Uyghur minority, and other causes claim that TikTok continues to shadowban their content, even when it does not appear to violate any of the service's publicly available rules. (A TikTok spokesperson denied that the service hides Uyghur-related content and pointed out that many videos about Uyghur rights appear in searches.)

We also have evidence that shadowbans can follow the logic of guilt by association. The same French study that estimated the percentage of Twitter users who had been shadowbanned also found that accounts that interacted with someone who had been shadowbanned were nearly four times more likely to be shadowbanned themselves. Confounding variables may account for some of this correlation, but Twitter admitted publicly in 2018 that it uses "how other accounts interact with you" and "who follows you" to guess whether a user is engaging in healthy conversation online; content from users who aren't is made less visible, according to the company. The study's authors gesture to how this practice could lead to the silencing—and perception of persecution—of entire communities.
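
Neither Twitter nor the study's authors have published the underlying logic, but the guilt-by-association effect they describe is consistent with a simple risk score that bleeds across the interaction graph. Here is a purely hypothetical sketch; the weights, threshold, and toy graph are all invented for illustration.

```python
# Hypothetical illustration of guilt-by-association scoring: an
# account's "risk" blends its own signal with the average risk of
# the accounts it interacts with. All numbers are invented.

interactions = {                # who interacts with whom
    "troll_a": ["troll_b", "bystander"],
    "troll_b": ["troll_a"],
    "bystander": ["troll_a", "normal"],
    "normal": ["bystander"],
}
risk = {"troll_a": 1.0, "troll_b": 0.9, "bystander": 0.0, "normal": 0.0}

NEIGHBOR_WEIGHT = 0.5           # how much neighbors' risk bleeds over
THRESHOLD = 0.3                 # above this, visibility is reduced

for _ in range(3):              # a few propagation rounds
    risk = {
        user: (1 - NEIGHBOR_WEIGHT) * risk[user]
        + NEIGHBOR_WEIGHT * sum(risk[n] for n in nbrs) / len(nbrs)
        for user, nbrs in interactions.items()
    }

for user, score in risk.items():
    if score > THRESHOLD:
        print(f"{user}: reduced visibility (risk={score:.2f})")
```

In this toy run, the innocent "bystander" account drifts over the threshold within a few rounds purely because of who it talks to, which is exactly the community-wide silencing the study's authors warn about.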

Without authoritative information on whether or why their content is being moderated, people come to their own, often paranoid or persecutory conclusions. While the French study estimated that one in 40 accounts is actually detectably shadowbanned at any given time, the CDT survey found that one in 25 U.S. Twitter users believes they have been shadowbanned. After a 2018 Vice article revealed that Twitter was not autofilling the usernames of certain prominent Republicans in searches, many conservatives accused the platform of bias against them. (Twitter later said that while it does algorithmically rank tweets and search results, this was a bug that affected hundreds of thousands of users across the political spectrum.) But the belief that Twitter was suppressing conservative content had taken hold before the Vice story lent it credence. The CDT survey found that to this day, Republicans are significantly more likely to believe that they have been shadowbanned than non-Republicans. President Donald Trump even attacked shadowbanning in his speech near the Capitol on January 6, 2021:

On Twitter it's very hard to come on to my account … They don't let the message get out nearly like they should … if you're a conservative, if you're a Republican, if you have a big voice. I guess they call it shadowbanned, right? Shadowbanned. They shadowban you and it should be illegal.

Making shadowbanning illegal is exactly what several U.S. politicians have tried to do. The effort that has gotten closest is Florida's Stop Social Media Censorship Act, which was signed into law by Governor Ron DeSantis in May 2021 but blocked by a judge before it went into effect. The law, among other things, made it illegal for platforms to remove or reduce the visibility of content by or about a candidate for state or local office without informing the user. Legal experts from my organization and others have called the law blatantly unconstitutional, but that hasn't stopped more than 20 other states from passing or considering laws that would prohibit shadowbanning or otherwise threaten online services' ability to moderate content that, though lawful, is nevertheless abusive.

How can social-media companies gain our trust in their ability to moderate, much less shadowban, for the public good and not their own convenience? Transparency is key. In general, social-media companies shouldn't shadowban; they should use their overt content-moderation policies and techniques in all but the most exigent circumstances. If social-media companies are going to shadowban, they should publicize the circumstances in which they do, and they should limit those circumstances to instances when users are trying to find and exploit weaknesses in their content-moderation systems. Removing this outer layer of secrecy may help users feel less often like platforms are out to get them. At the same time, bad actors that are advanced enough to require shadowbanning likely already know that it is a tool platforms use, so social-media companies can admit to the practice in general without undermining its effectiveness. Shadowbanning in this way may even find a broad base of support—the CDT survey found that 81 percent of social-media users believe that in some cases, shadowbanning can be justified.

However, many people, particularly groups that see themselves as disproportionately shadowbanned, such as conservatives and sex workers, may still not trust social-media companies' disclosures of their practices. Even if they did, the unverifiable nature of shadowbanning makes it difficult to know from the outside every harm it may cause. To address these concerns, social-media companies should also give external researchers independent access to specific data about which posts and users they have shadowbanned, so we can evaluate these practices and their consequences. The key is to take shadowbanning, well, out of the shadows.


Source: https://www.theatlantic.com/technology/archive/2022/04/social-media-shadowbans-tiktok-twitter/629702/
