Twitter’s plan to let users flag ‘misinformation’ only amplifies existing bias

Is it misinformation, or information you don’t agree with?

Familiar information feels right

As individuals, what we consider to be “true” and “reliable” can be driven by subtle cognitive biases. The more you hear certain information repeated, the more familiar it will feel. In turn, this feeling of familiarity tends to be taken as a sign of truth.

Even “deep thinkers” aren’t immune to this cognitive bias. As such, repeated exposure to certain ideas may get in the way of our ability to detect misleading content. Even if an idea is misleading, if it’s familiar enough it may still pass the test.

In direct contrast, content that is unfamiliar or difficult to process — but highly valid — may be incorrectly flagged as misinformation.

The social dilemma

Another challenge is a social one. Repeated exposure to information can also convey a social consensus, wherein our own attitudes and behaviors are shaped by what others think.

Group identity influences what information we think is factual. We think something is more “true” when it’s associated with our own group and comes from an in-group member (as opposed to an out-group member).

Research has also shown we are inclined to look for evidence that supports our existing beliefs. This raises questions about the efficacy of Twitter’s user-led experiment. Will users who participate really be capturing false information, or simply reporting content that goes against their beliefs?

More strategically, there are social and political actors who deliberately try to downplay certain views of the world. Twitter’s misinformation experiment could be abused by well-resourced and motivated identity entrepreneurs.

How to take a more balanced approach

So how can users increase their chances of effectively detecting misinformation? One way is to take a consumer-minded approach. When we make purchases as consumers, we often compare products. We should do this with information, too.

“Searching laterally”, or comparing different sources of information, helps us better discern what is true or false. This is the kind of approach a fact-checker would take, and it’s often more effective than sticking with a single source of information.

At the supermarket we often look beyond the packaging and read a product’s ingredients to make sure we buy what’s best for us. Similarly, there are many new and interesting ways to learn about disinformation tactics intended to mislead us online.

One example is Bad News, a free online game and media literacy tool which researchers found could “confer psychological resistance against common online misinformation strategies”.

There is also evidence that people who think of themselves as concerned citizens with civic duties are more likely to weigh evidence in a balanced way. In an online setting, this kind of mindset may leave people better placed to identify and flag misinformation.

Leaving the hard work to others

We know from research that thinking about accuracy or the possible presence of misinformation in a space can reduce some of our cognitive biases. So actively thinking about accuracy when engaging online is a good thing. But what happens when I know someone else is onto it?

The behavioral sciences and game theory tell us people may be less inclined to make an effort themselves if they feel like they can free-ride on the effort of others. Even armchair activism may decline if there is a perception that misinformation is already being dealt with.

Worse still, this belief may lead people to trust information more easily. In Twitter’s case, the misinformation-flagging initiative may lead some users to think any content they come across is likely true.

Much to learn from these data

As countries engage in vaccine rollouts, misinformation poses a significant threat to public health. Beyond the pandemic, misinformation about climate change and political issues continues to present concerns for the health of our environment and our democracies.

Despite the many factors that influence how individuals identify misleading information, there is still much to be learned from how large groups come to identify what seems misleading.

Such data, if made available in some capacity, have great potential to benefit the science of misinformation. And combined with moderation and objective fact-checking approaches, it might even help the platform mitigate the spread of misinformation.

Eryn Newman, Senior Lecturer, Research School of Psychology, Australian National University, and Kate Reynolds, Professor, Research School of Psychology, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Story by The Conversation

An independent news and commentary website produced by academics and journalists.
