In the age of instant global communication and information overload, the world has seen an explosion in misleading content and its influence over social, cultural, and political life. Addressing misleading content is a complex issue, but it is only part of the story when it comes to understanding misinformation. Research is being carried out across many sectors to help explain the dynamics involved, while the challenges and harms play out simultaneously in real time around critical issues such as war, violence, elections, pandemics, and more.
Dr. Claire Wardle is a Professor of the Practice at Brown University School of Public Health and the co-founder of First Draft, one of the world’s leading nonprofits addressing the threat of mis- and disinformation. Claire spoke recently at Reos Partners’ Disinformation Convening, where she shared insights and recent trends in the field of mis- and disinformation since her 2017 report on information disorder. The Q&A below is derived from her presentation.
In your seminal 2017 report, you helped distinguish misinformation from disinformation on the basis of intent: misinformation is false or out-of-context information presented as fact without a specific intent to deceive or mislead, while disinformation is the same kind of content shared with the intent to deceive or mislead. Are these differences important?
Generally, we’re now seeing much less overtly false content and much more misleading content. “Gray speech” is speech that doesn’t break social media platform policies but goes right up to the line. The people who want to create harm know exactly what they’re doing, and they know how far they can push it — which leads us into tensions around freedom of expression.
As a society, we haven’t had conversations about the types of speech that we think are acceptable on these platforms. One could argue that gray speech is harmful, but we can’t do anything about it, and we don’t yet have longitudinal evidence of the long-term impact of a constant drip at this low level.
Does a focus on specific content and specific posts cause us to miss the larger picture?
We’re seeing the intersection of identity with beliefs — QAnon, anti-mask sentiment, voting conspiracies, anti-vaccine movements, climate denialism. These people are connected through these identities, not particular issues; yet the funders and nonprofits working to counter these messages tend to be organized by those issues. We have completely failed to understand what’s actually happening with these groups that are networked.
It’s that constant drip, drip, drip of content, but we’re obsessed with the atoms of content. Should we flag this Facebook post? Should we take down this YouTube video? Should we de-rank this tweet? We’re focused on the atoms and we’re failing to understand the whole.
What are the challenges of addressing the systemic issues around misinformation?
Understanding the narratives and understanding the ways in which information is flowing is hard. We are organized around national boundaries, but the people creating these networks absolutely are not. Nor are social media platforms organized around national boundaries.
It is ultimately universal factors that cause people to be susceptible to misleading information — economic insecurity, uncertainty about the future, breakdown of community, and loneliness.
Conspiratorial content has moved far beyond fringe movements and into the mainstream. How does this happen?
There is a critical failure in providing answers to people’s questions. The pandemic provided a beautiful example of many people having very normal questions: Has there been enough testing? Will this make me infertile? Will my 9-year-old child get myocarditis? These are very normal questions to ask.
Those of us who live in our information ecosystem wait until we have enough data. But the conspiracy theorists do not wait. They fill these data deficits with conspiracy content. They are much better at SEO. They’re much better at juicing Google, and their content gets to the top of the search results. And we’re over here six weeks later creating a 67-page PDF that doesn’t hit the first page of Google.
The information ecosystem that we live in is top-down linear and hierarchical. We believe in trusted gatekeepers. The result is a very passive model of communication. If you look at the disinformation ecosystem, it is participatory. It is dynamic. It is designed so people participating in that ecosystem believe they’re being heard. They have a sense of agency.
What can we do to help “information ecosystems” evolve to become healthier spaces?
If we in our information ecosystem don’t learn from those dynamics and recognize that there is a reason people are being drawn into these spaces, we are going to continue to fail.
How can we really think innovatively about the complex dynamics of the communication ecosystem rather than playing whack-a-mole with individual pieces of content?
Unfortunately, we do not have the luxury of time to keep making these mistakes. We need a counterweight to the platforms, and the response must be global. Civil society must achieve the same scale, the same resources, and the same ability to think across borders in the way that the platforms do.