Interview: Sumitra Badrinathan on tackling fake news and the effects of BJP’s ‘supply advantage’
Plus, the 'backfire effect.'
Note: We’ve been focusing on the pieces over the last few weeks, because of how much there is to say, but if you’ve been missing the links and want them back, please let me know. Write to email@example.com
Twitter is in the news this week in India for putting a “manipulated media” tag on posts by leaders of the Bharatiya Janata Party that fact-checkers found to contain misinformation. While the Indian government has turned this into a fight over narrative with the social media network, the development is also a powerful reminder that misinformation – and efforts to address it – will be closely watched.
Sumitra Badrinathan is a postdoctoral research fellow at the University of Oxford’s Reuters Institute, who received a PhD in political science this year from the University of Pennsylvania. Her work focuses on misinformation and comparative politics, particularly in India.
In a recent paper based on an experiment in Bihar during the 2019 elections, for example, Badrinathan found that even an hour-long module aimed at improving people’s ability to identify fake news did not necessarily make them any better at it. Even more significantly, the results found that those who identified as supporters of the Bharatiya Janata Party seemed to become worse at identifying fake news after the training module – potentially because of a ‘backfire effect’ in which people tend to hold firmer to their beliefs after being corrected.
I spoke to Badrinathan about the Bihar experiment, what it might tell us about political identities in India, and what further research she would like to see on misinformation.
To get an in-depth Q&A like this on Indian politics and policy in your inbox every week, sign up here. All of our older Q&As are collected here.
Tell me a little bit about your academic background.
I just finished a PhD in political science at the University of Pennsylvania, and I’m about to start a postdoctoral research position at Oxford. Before my PhD, I was born and brought up in Bombay, and I grew up there before moving to the US.
I’d always been interested in politics, but it was when I moved here to study that it became clear to me that politics could also be about research and well-grounded science. So that’s what I have focused on.
How did you come to work on disinformation?
First, let me say that, when the 2014 elections were going on, I was in my final year of college, and elections were happening around me for the first time in a way that I was actually able to appreciate.
As part of that, I worked on a campaign and we went door to door to talk to people trying to get them to go out to vote. It struck me at the time that we knew very little about why a particular person casts their vote in a certain way, at least in terms of systematic data.
So I went back to the folks I was working with and said, is this tabulated? Are we knocking on doors randomly? Or do we have an idea of why we’re doing this? Because it seems like people vote for candidates not only because they like them, but because of a whole host of other reasons that might have little to do with a candidate’s personality or policy ideas.
It became clear to me that that sort of systematic data about these things in India was not easy to come by. Now, it is a lot easier than it was back then. But that’s what got me into data and politics.
When I started my PhD, I was still interested in data science and how it could apply to politics. I was taking classes on doing experiments on big data, on advanced statistics, and so on. But I didn’t exactly know what I was going to focus on at that time.
This is about 2016-2017. Misinformation was a big deal in the US because [former US President Donald] Trump had just gotten elected. At the time, more and more people in India were getting access to the internet, I was on all of these WhatsApp groups with friends, the extended family and so forth.
I started to see similar patterns in India. In that, when there was a big election or event in the country, there would be a deluge of fake news on my phone. In the US, academic researchers were trying to see whether they could talk to people about this as an issue, and whether that would turn them around. Tech companies like Facebook and Twitter got involved. They started piloting initiatives like putting a disputed tag on a message to see if it had an impact.
There was nothing like that in India. For me, a light went on in my head. It became clear that I had the tools to conduct something like this, and it mattered to me, because I’ve seen people around me succumb to false information and propaganda.
Putting two and two together, that’s where I started, and I’ve stuck on that path.
Do you think, in the Western context, we have a good handle on this research area now?
Yes, and I can give you examples.
For one, we know that one of the largest vulnerabilities to misinformation in the West is who you voted for. The way that affects how you come across this information is through a mechanism called ‘motivated reasoning’, which in simple words is to say that humans are motivated to reason in certain ways. And that reasoning, more often than not, coincides with your partisan identity.
You voted for somebody. You will feel cognitive dissonance in your mind if you shy away from trying to support the position that you already took. So we are prone to biases like confirmation bias and disconfirmation bias. And we don’t want to do anything that goes against these pre-existing views, because it causes dissonance in our heads.
That concept has been shown time and again, in a variety of contexts, to affect misinformation consumption in such a strong way that not only has it reduced the effect of corrections – if somebody is correcting information that is beneficial to my party’s cause, I’m more likely to not have that correction have an impact on me – but in some cases, it also led to a backfire effect.
Which is that previously I might have believed or not believed a piece of information beneficial to my party. But once you correct it, and if you’re somebody I don’t like, then I double down in such a way that it doesn’t matter what I thought before. I am going to say this is definitely true and not going to listen to your correction.
This is just one example. I don’t think we have a clear understanding of the drivers of misinformation, and the mechanisms of belief in it, in the Indian context. That is what my colleagues and I are working towards understanding.
A lot of the literature that I cite comes from psychology and the cognitive sciences, because we are ultimately talking about how the human mind confirms and believes things. In general, it’s political communication more than political science; that was also my minor in my PhD.
You’ve said there isn’t much work on India, but is there research on other non-Western spaces?
Very little. In general, it’s limited to contexts we would characterise as developed, and where they use more public platforms like Facebook and Twitter. So naturally, the solutions that we come up with will be tailored to those platforms, which is why I keep talking about how it’s hard to imagine those solutions applying to not just India, but a large majority of the world, that is using WhatsApp or other private applications like Signal or Telegram.
Tell me about the misinformation experiment in Bihar.
The 2019 elections were coming up, and I wanted to do something around that, because we know misinformation would start to rise. So it seemed to be a good opportunity to go to the field. But I wasn’t sure what I would actually do.
One of the things that had been tried [elsewhere] was telling people beforehand that misinformation was out there and reminding them that they should try to analyse information with the goal of accuracy. And that has led people towards better information processing in the past.
I liked that idea. Before running a study, I talked to a bunch of people – knocking on doors, focus groups – and found out that a lot of people, especially older folks, were getting on the internet for the first time. People whose families had saved up to buy mobile phones and they had one per household.
And this led to a series of observations that weren’t in my mind before I went to the field, which is that people – because they’re new to the internet – weren’t aware of the concept of misinformation to begin with.
That might seem like a bad thing, but for the study, it meant we were working with a blank slate. This is an opportunity to teach people that there is news out there that is not entirely true. And maybe we can teach people to become more careful news consumers.
So that was the premise of the study. We selected a set of households. For each household, we had an enumerator go and talk to them, sometimes for close to an hour, about misinformation. For some, the idea itself was a surprise because they said things like, ‘it’s on my phone, it must be true,’ because the phone was to them an elite authority source.
So we talked to them about sources, saying you can trust some and distrust others. We talked to people about some fake stories that had gone viral at the time. We printed out four of these stories – the original image, and then a small bubble next to it explaining what was wrong.
And the enumerators explained to people that these are just four examples, but we want to show you how the things you come across on the phone may or may not be true. We talked to people about ways they can go about countering these stories, like reverse image searches, or going to fact checking websites. And we left behind a flyer with tips to spot misinformation.
And then people voted in the general election. After that, we went back to the same households to measure whether what we did worked or not.
I don’t want to get too technical, but the experiment part was that only some households, randomly chosen, were talked to about misinformation; others were not. The key thing we’re interested in finding out is the difference between the houses that were given the treatment and the ones that were not.
Now obviously, we don’t know how people voted, but the premise was – if misinformation can affect your opinion, that affects your voting behaviour. So we went back after the elections, after voting but before results had been announced, because we didn’t want results to affect the way people answered our final questions.
So we went back and measured through a series of questions whether people got better at identifying fake news.
And the results were somewhat surprising to you?
I don’t know if they were surprising as much as… I would be lying if I said they weren’t disappointing. Obviously you want something to work.
In the literature which I’m talking about, people haven’t done this thing where someone goes and talks to respondents about misinformation, with an up-to-one-hour-long module that combined a bunch of different things that I would call more pedagogical or learning focused. It hasn’t been done.
All of the solutions have involved one-line nudges or push notifications, that sort of thing. This was a much more evolved intervention. Just on that basis, I expected it to work.
But second, there are normative implications. If misinformation is such a big problem for people’s opinions, and they’re casting votes on the basis of it, for the health of democracy, you want something like this to work.
Which is why it was disappointing to find that in general, the whole intervention did not work. The difference between the treatment group and the control group was zero. The group that did not get any of the training was no worse at identifying misinformation than the group that did.
There was a more surprising part also. I broke up the sample of respondents based on the party they said they liked: in practice, people who liked or preferred the BJP or BJP allies at the Centre, and those who said anything else.
Remember the backfire effect, where people’s affinities towards their party are so strong that they double down on something you’re telling them is false. That happened here.
Respondents who said they supported the BJP, when they got the training, they became worse at identifying misinformation. They were better before. They significantly decreased their ability to identify misinformation when they got the training.
For people who said they did not support the BJP, they were not very good beforehand – meaning in the control group – but after the training they were able to improve their information processing.
Essentially, the treatment worked in opposite ways for both of the subgroups, which I had not expected at all. When we talk about parties in India, nothing in the literature says that we should expect party identities to be so strong and consolidated to the point where they affect people’s attitudes and behaviours. That’s not to say that people aren’t sure who they are voting for. That’s to say that voting may or may not happen on the basis of ideology and identity. People vote for a host of different reasons.
This is what the literature on India in comparative politics has shown. So to find that your identity in terms of who you support politically, as opposed to other identities like religion, caste, and so on, can be so strong that it can condition your responses on a survey, that too only for one set of partisans, that’s something that hadn’t been found before.
My understanding of the backfire effect is that the research in the US has been muddled – it exists in some contexts, but not in others.
That’s right. The backfire effect is one of those things we’ve gone a bit back and forth about. I’m using that term in the Indian context for lack of a better word. And by this we shouldn’t conclude that such an effect definitely exists.
This is the first, and to my knowledge, only field study that has been conducted on this, and we need so many more to understand if this sort of effect replicates. One of the things that may push us towards thinking that it won’t replicate is that this was conducted during a very contentious election. And we know from previous research, not in India but in other contexts, that people’s identities are stronger during elections and other contentious periods because it is salient.
Everyone around you is talking about BJP, not BJP, people are knocking on your doors asking for votes. It’s likely that the salience of that identity pushed people to behave a certain way, and that if you take away the context of a contentious election, it wouldn’t have happened.
We don’t know whether this is limited to just this particular sample for this particular time. And it is very possible that it is. Whoever is going to read your newsletter, if people are interested in misinformation in India, we need several more people working on this to be able to say that what we know is true for sure and not limited to the context of one study.
One of the other interesting things about the paper is that, before the intervention, it seemed that those who said they supported the BJP were better than others at discerning fake news?
Yes, and that’s a puzzle. There are a couple of possible explanations. One is that, anecdotally, the BJP has a supply-side advantage. When it comes to misinformation, most of the political misinformation out there almost always has the BJP name on it. Either the misinformation is favouring the BJP, or countering it.
But in my experience, the BJP is always referenced. And this is plausible because they have a supply-side advantage. We have heard about them having a war room of people to create stories.
It’s possible that respondents who support the BJP are aware that they have a supply-side advantage, and in the absence of treatment, this makes them better off in a survey setting at identifying true or false stories. That’s an anecdotal explanation: non-BJP participants may not be aware of misinformation to the extent that BJP participants are, just because it doesn’t favour them.
The second explanation is, if you look at where this better information processing for BJP respondents is coming from – this is a smaller sample, since it’s just the control group – you see that the overall better rate of identification comes from their ability to identify pro-BJP stories as true.
Even in the absence of treatment, they’re doing what we would expect any strong partisan to do. For non-BJP supporters, this alignment is not there in this sample. I don’t know if that’s super convincing, it’s not to me, but it’s the extent to which I can go with this data.
For the lay reader, how would you summarise the results of the non-BJP respondents?
They were worse off beforehand, but they were able to improve their information processing skills from the treatment.
But one thing I want to say is that the two sides are also very different. One side supports a party. The other side is made up of people who support a bunch of different parties, and the only thing they have in common is that they don’t support one particular party. Even ex-ante, the sides aren’t equal. And that’s not easy to solve, because of the nature of misinformation in India, which is either pro-BJP or not.
In Bihar, at the time, if you thought of trying to find misinformation that was pro-RJD or pro-JDU, and I scoured the internet for stories like this, there weren’t any. So by design it had to be like this. And that has created a little bit of an imbalance between the two groups.
We shouldn’t expect them to behave the same way because one group is not bound by a common shared cause, the way that the BJP sample is, and I guess that’s saying something about Indian politics in general these days.
You also find that those who are more digitally literate did not necessarily discern fake news better.
Yes, and that’s a tricky one to answer. I created a measure from scratch, because everything that exists to measure digital literacy is focused on the Western context. Mine measured familiarity with WhatsApp. You can think of digital literacy in a bunch of different ways. You can think of it in terms of how someone navigates their phone, which is very difficult to measure because you have to observe people doing it. Maybe if I had gone down that road, answers would be different.
I measured by a series of questions that indicated how familiar someone was with doing different things on WhatsApp – how to create a list of people to broadcast a message to, how to mute groups and so on. And the responses were self-reported.
What we find in the Western context is that those who are less digitally literate tend to be older people, and they are worse at identifying misinformation. In this Bihar context, those who are more digitally literate are not necessarily better at identifying misinformation.
One of the reasons for that is, in order to pass along misinformation, you have to have a certain amount of digital literacy to be able to do that. It is plausible that what is being measured in this context is a measure of digital familiarity that correlates with your ability to push messages forward, which may correlate with your ability to push misinformation forward, if you’re so inclined.
I don’t know that for sure, but that’s what might be going on in this context.
So the results seem to suggest that partisan identities, or at least the pro-BJP identity, is stronger than we think. Let me bring in your other paper with Simon Chauchard titled ‘I don’t think that’s true, bro’, which seemed to suggest something slightly different.
The result of that is pretty much the opposite of this. So [the Bihar paper] was a field experiment, or a training experiment. You could think of it as a fact-checking or correction treatment.
This paper was very different. It was purely a correction experiment. The result was also very different.
In the field study, I found that on average, there was no difference between the treatment and the control groups. In this other study, which is an online one, we find that a very subtle treatment is able to move beliefs or that people can get very easily corrected.
But there were a lot of differences in the studies, so it’s hard to imagine that we should expect the results should be the same.
For one, the second study was entirely online. That meant they were not just regular internet users, but those so experienced with the internet that they are signing up for online panels to take surveys. So a very different sample.
We gave people these hypothetical WhatsApp screenshots, in which two people are having a conversation with each other on a group chat. They’re talking to each other about something and somebody drops a piece of misinformation, and a second user counters them.
Now they can either choose to counter them or not counter them. And if they do counter them, they can choose to counter them with some evidence or without evidence. In essence, the treatment is that one-line counter message, which acts as the correction. And we tried to play with a bunch of different messages to do this. In some cases it involved a user just simply refuting the message with no proof.
The user would say something like, ‘I don’t think that’s true, bro’, which is where the title of the paper came from. And in some cases, they would refute the message with a ton of information and references.
It’s an open question: Does this sort of correction work? Because, as we said before, WhatsApp can’t correct messages because of their encrypted nature. So users have to correct each other. And not all of India is a setting where people are new to the internet.
We tried to see whether peer or social corrections can have an effect. And then there was the question of what kinds of corrections work.
In short, we found that any correction works to reduce people’s belief in misinformation and helps them process information correctly. Anything. So the correction that says, ‘I don’t think that’s true bro’ works. The correction that says ‘I don’t think this is true, but here is a paragraph on why it’s not true,’ works equally well.
I think that was surprising to us. Similar correction experiments have been shown to work in the American context. But what was surprising to us was that the type of correction didn’t seem to matter. Even the short messages without any source worked just as well, relative to the longer messages backed by some evidence.
Now this seemed to suggest that there wasn’t such a strong partisan identity or motivated reasoning.
Yes. It’s not to say they didn’t have partisan identities. Everyone has identities. It’s to say that the context you’re in can bring those identities to the forefront, can make them salient.
In this online experiment, it’s not a time when people are coming to your door to campaign. Elections themselves make partisanship and political identities salient. In this case, you’re going online to make some extra money. You’re not thinking about party politics.
The context is very different. There’s some evidence of this in the American context. There’s a recent paper that shows that it’s the context that makes identity salient. So in the context of an election, where you’re already pitting one party against another, you are naturally motivated to think in such a way that will help or hurt your party’s cause.
When you think of the online experience, that happened after the elections, this competition or win-loss framework was not in people’s heads. That’s not to say they didn’t have partisan identities, just that the context of what was happening in the world at the time didn’t activate these identities.
What other research have you been doing on this front?
I’m working on a bunch of different things. But one that’s interesting me at the moment is a paper my co-author Simon Chauchard and I are working on, which is trying to understand the mechanisms of belief in WhatsApp groups. Why do people believe certain misinformation over other kinds? And what motivates them to correct this misinformation?
One of the things we’re testing is that WhatsApp groups are commonly built around a common cause: society groups, parent-teacher associations, sometimes political groups. More often than not, they’re built with a certain cause and come to assume a certain identity.
Our working theory is that because they come to assume this identity, the members of the group are motivated, more often than not, to agree with each other. There’s this consensus towards a shared group identity that pushes people towards agreeing, which is why a lot of misinformation may just get lost or go uncorrected.
But that also means, when somebody does correct something, it can very easily change something because the seed has been sown. That gives other people the opportunity to say, ‘oh yeah, you’re right, I don’t think this is actually true.’
I have a lot of anecdotal evidence to show that this might be one of the mechanisms at play. I talked to a woman in Mumbai who, during Covid, had this piece of information that said vegetarians are immune to the Coronavirus, so eat more vegetarian food.
She forwarded that message to all of her groups. I asked her whether she thought it was true. She said, I’m not really sure, but at that point it was 9 am, and I had to send a good morning message. So I sent this.
Which goes to show that in some contexts in India, because of the nature of our WhatsApp groups and the pressure on people to wake up in the morning and forward something, what gets forwarded can end up being misinformation, just because of the shared identity or norms of a group.
We’re testing whether breaking those norms in some way is the mechanism that leads other members to fall in line. We’re testing whether what is going on is better explained by shared group identity, a need to be accepted by the group, as opposed to actual belief in the message. We’re doing this in the context of Covid misinformation, so look out for that working paper.
Are there others doing interesting work on this front?
We have talked about corrections. But there’s a second strand of research, not to do with correction, but with quantifying the amount of misinformation that’s out there and maybe providing technical or AI-based solutions.
One lab doing really good work is that of Kiran Garimella at MIT. He and his lab are doing some fantastic work on trying to quantify how much misinformation is out there on WhatsApp in India and trying to see what we can do about it.
WhatsApp started public groups recently, where you can go to a link online and join, which takes away some of the privacy. Kiran and his co-authors have been scraping WhatsApp messages in these groups to give us an idea of how much is misinformation, how much comes from one party source versus another, how much is hateful speech, how much encourages Hindu-Muslim polarisation.
Some of his work is really excellent, so that’s one person I definitely want to flag in this field who’s doing great work.
What’s one misconception you find yourself having to correct all the time, whether from fellow scholars, journalists, lay people?
It’s funny, there’s this meme template floating around on Twitter, called types of academic papers, where people are coming up with common tropes in the field.
One misconception is that people, non-academics, have strong opinions on fact-checking. Either fact-checking is awesome, or it doesn’t work at all. But the truth is we don’t know. We need to run systematic scientific studies to see if that sort of thing works, because we’re interested in understanding whether the treatment works.
You can’t push a fact-check out there, watch one or two people change their beliefs, and conclude that it works. Whether fact-checking works is a function of who’s doing it, in what context it’s being done, what kinds of fact checks are being done, what the intensity of those fact checks is… there are so many sub-questions.
That’s not to say that fact checking is not good. We need all of the normative things that we have to fight this problem. But apart from journalists and NGOs working on it, we need more academics to do systematic studies to show under what conditions these kind of interventions can be most effective.
We need more researchers working on this, so we can do more work, and then write about it in more public outlets such as yours. We know the only way to effectively measure an intervention, just like a vaccine trial, is to see the difference between those who got the dose and those who didn’t.
That knowledge is not there, because there aren’t enough of us working on it. And the deluge of misinformation, compared to what we’re doing to counter it… there’s just such a vast difference that sometimes it seems that whatever we do won’t be enough.
But that’s just to say that if we had 100 people working on it, as opposed to just 10 or 20, that would help.
Do you have three recommendations for those interested in the subject?
My favourite work from recent years on this subject broadly is the book Social Media and Democracy, edited by Nate Persily and Josh Tucker. It has a couple of chapters on misinformation specifically, with a detailed and clear review of the literature, but also covers hate speech online, social media echo chambers, bots and propaganda, and democratic transparency. The writing is accessible to a general audience and the book is open access here.
This article by Brendan Nyhan on facts and myths about misperceptions is a comprehensive but concise read on what we know and don’t know about misinformation, its measurement, and solutions to it: Nyhan, Brendan. “Facts and myths about misperceptions.” Journal of Economic Perspectives 34, no. 3 (2020): 220-36.
This is not an academic article, but I truly love this piece of writing in the Guardian by Will Davies titled What’s Wrong With WhatsApp. It is not specifically about India, but it provides anecdotes and discusses conventional wisdom all too familiar to Indians using WhatsApp. Some of the insights in this article shed a lot of light on the mechanisms of belief in misinformation, too, and can be a theoretical basis for potential future work.