Musk claims moderation stifles free speech on Twitter. He’s wrong
Research shows content rules help preserve free speech from bots and other manipulation
Political bias
Many conservative politicians and pundits have alleged for years that major social media platforms, including Twitter, have a liberal political bias amounting to censorship of conservative opinions. These claims are based on anecdotal evidence. For example, many partisans whose tweets were labeled as misleading and downranked, or whose accounts were suspended for violating the platform’s terms of service, claim that Twitter targeted them because of their political views.
Unfortunately, Twitter and other platforms often inconsistently enforce their policies, so it is easy to find examples supporting one conspiracy theory or another. A review by the Center for Business and Human Rights at New York University has found no reliable evidence in support of the claim of anti-conservative bias by social media companies, even labeling the claim itself a form of disinformation.
A more direct evaluation of political bias by Twitter is difficult because of the complex interactions between people and algorithms. People, of course, have political biases. For example, our experiments with political social bots revealed that Republican users are more likely to mistake conservative bots for humans, whereas Democratic users are more likely to mistake conservative human users for bots.
To remove human bias from the equation in our experiments, we deployed a set of benign social bots on Twitter. Each bot started by following a single news source, with some bots following a liberal source and others a conservative one. After that initial friend, all bots were left alone to “drift” in the information ecosystem for a few months. They could gain followers, and each acted according to an identical algorithm: following or following back random accounts, tweeting meaningless content, and retweeting or copying random posts in its feed.
But this behavior was politically neutral, with no understanding of content seen or posted. We tracked the bots to probe political biases emerging from how Twitter works or how users interact.
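The drifter behavior just described can be sketched in a few lines. This is an illustrative simulation, not the researchers’ actual code: the `Client` stub, action names, and step counts are all assumptions, and the stub records actions locally rather than calling any real platform API.

```python
import random

class Client:
    """Stub platform client: logs actions instead of calling a real API."""
    def __init__(self, feed):
        self.feed = feed          # posts currently visible to the bot
        self.actions = []         # log of (action, payload) tuples

    def follow(self, account):
        self.actions.append(("follow", account))

    def post(self, text):
        self.actions.append(("post", text))

    def repost(self, item):
        self.actions.append(("repost", item))

def drift_step(client, rng):
    """One politically neutral action, chosen at random -- no content is read."""
    action = rng.choice(["follow_random", "post_noise", "repost_random"])
    if action == "follow_random":
        client.follow(f"user_{rng.randrange(10_000)}")   # random account
    elif action == "post_noise":
        client.post(" ".join(rng.choices(["lorem", "ipsum", "dolor"], k=5)))
    else:
        client.repost(rng.choice(client.feed))           # random post in feed

def run_drifter(initial_source, feed, steps=10, seed=0):
    rng = random.Random(seed)
    client = Client(feed)
    client.follow(initial_source)   # the single seed friend (liberal or conservative)
    for _ in range(steps):
        drift_step(client, rng)     # identical algorithm for every bot
    return client.actions

actions = run_drifter("conservative_news", feed=["post_a", "post_b"], steps=5)
```

Because the drift logic never inspects content, any political skew the bots develop must come from the platform and the users around them, which is the point of the design.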
Surprisingly, our research provided evidence that Twitter has a conservative, rather than a liberal, bias. On average, accounts are drawn toward the conservative side. Liberal accounts were exposed to moderate content, which shifted their experience toward the political center, while the interactions of right-leaning accounts were skewed toward posting conservative content. Accounts that followed conservative news sources also received more politically aligned followers, becoming embedded in denser echo chambers and gaining influence within those partisan communities.
These differences in experiences and actions can be attributed to interactions with users and information mediated by the social media platform. But we could not directly examine the possible bias in Twitter’s news feed algorithm, because the actual ranking of posts in the “home timeline” is not available to outside researchers.
Researchers from Twitter, however, were able to audit the effects of their ranking algorithm on political content, unveiling that the political right enjoys higher amplification compared to the political left. Their experiment showed that in six out of seven countries studied, conservative politicians enjoy higher algorithmic amplification than liberal ones. They also found that algorithmic amplification favors right-leaning news sources in the U.S.
Our research and the research from Twitter show that Musk’s apparent concern about bias against conservatives on Twitter is unfounded.
Referees or censors?
The other allegation that Musk seems to be making is that excessive moderation stifles free speech on Twitter. The concept of a free marketplace of ideas is rooted in John Milton’s centuries-old reasoning that truth prevails in a free and open exchange of ideas. This view is often cited as the basis for arguments against moderation: accurate, relevant, timely information should emerge spontaneously from the interactions among users.
Unfortunately, several aspects of modern social media hinder the free marketplace of ideas. Limited attention and confirmation bias increase vulnerability to misinformation. Engagement-based ranking can amplify noise and manipulation, and the structure of information networks can distort perceptions and be “gerrymandered” to favor one group.
As a result, social media users have in past years become victims of manipulation by “astroturf” causes, trolling and misinformation. Abuse is facilitated by social bots and coordinated networks that create the appearance of human crowds.
We and other researchers have observed these inauthentic accounts amplifying disinformation, influencing elections, committing financial fraud, infiltrating vulnerable communities, and disrupting communication. Musk has tweeted that he wants to defeat spam bots and authenticate humans, but these are neither easy nor necessarily effective solutions.
Inauthentic accounts are used for malicious purposes beyond spam and are hard to detect, especially when they are operated by people in conjunction with software algorithms. And removing anonymity may harm vulnerable groups. In recent years, Twitter has enacted policies and systems to moderate abuses by aggressively suspending accounts and networks displaying inauthentic coordinated behaviors. A weakening of these moderation policies may make abuse rampant again.
Manipulating Twitter
Despite Twitter’s recent progress, integrity is still a challenge on the platform. Our lab is finding new types of sophisticated manipulation, which we will present at the International AAAI Conference on Web and Social Media in June. Malicious users exploit so-called “follow trains” – groups of people who follow each other on Twitter – to rapidly boost their followers and create large, dense hyperpartisan echo chambers that amplify toxic content from low-credibility and conspiratorial sources.
Another effective malicious technique is to post and then strategically delete content that violates platform terms after it has served its purpose. Even Twitter’s high limit of 2,400 tweets per day can be circumvented through deletions: we identified many accounts that flood the network with tens of thousands of tweets per day.
We also found coordinated networks that engage in repetitive likes and unlikes of content that is eventually deleted, which can manipulate ranking algorithms. These techniques enable malicious users to inflate content popularity while evading detection.
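To make the like/unlike pattern concrete, here is a hypothetical sketch of how such churn might be surfaced from an event log. The log format, field names, and threshold are assumptions chosen for illustration; this is not Twitter’s actual detection method.

```python
from collections import Counter

def churn_scores(action_log):
    """Count like-then-unlike pairs per account.

    action_log is a list of (account, action, post_id) events, where
    action is "like" or "unlike".
    """
    liked = set()      # (account, post_id) pairs currently liked
    churn = Counter()  # like/unlike cycles completed per account
    for account, action, post_id in action_log:
        key = (account, post_id)
        if action == "like":
            liked.add(key)
        elif action == "unlike" and key in liked:
            churn[account] += 1
            liked.remove(key)
    return churn

def flag_coordinated(action_log, threshold=3):
    """Return accounts whose like/unlike churn meets the threshold."""
    return {a for a, n in churn_scores(action_log).items() if n >= threshold}

log = [
    ("bot1", "like", "p1"), ("bot1", "unlike", "p1"),
    ("bot1", "like", "p2"), ("bot1", "unlike", "p2"),
    ("bot1", "like", "p3"), ("bot1", "unlike", "p3"),
    ("user", "like", "p1"),
]
flagged = flag_coordinated(log)
```

In this toy log, only the account that repeatedly likes and then unlikes posts is flagged; an ordinary user who simply likes a post is not. Real detection would need to handle far noisier signals, but the churn count captures the core idea.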
Musk’s plans for Twitter are unlikely to do anything about these manipulative behaviors.
Content moderation and free speech
Musk’s likely acquisition of Twitter raises concerns that the social media platform could decrease its content moderation. This body of research shows that stronger, not weaker, moderation of the information ecosystem is called for to combat harmful misinformation.
It also shows that weaker moderation policies would ironically hurt free speech: The voices of real users would be drowned out by malicious users who manipulate Twitter through inauthentic accounts, bots, and echo chambers.
This article by Filippo Menczer, Professor of Informatics and Computer Science, Indiana University, is republished from The Conversation under a Creative Commons license. Read the original article.
Story by The Conversation
An independent news and commentary website produced by academics and journalists.