Breaking Democracy: Lies, Deception and Disinformation

With conspiracy theories and disinformation on the rise in both media and politics, is our democracy at risk? We may lose trust in society, in the institutions that inform us, and, ultimately, in the democratic process. Our sense of responsibility for the everyday information we share may diminish. Deceitful politicians may escape scrutiny by claiming that truths are false, falsehoods are true, and in any case nothing can be proved. How should we respond to these challenges?

Professor Andrew Chadwick

5th May 2022

 

Have you ever been deceived?

Thinking back to when you’ve been deceived, did you ever feel a bit embarrassed?

There are good reasons why I’ve started with these questions. It’s because I want to open with two fundamental points about deception.

 

Most People Are Vulnerable to Deception at Least Some of the Time

First, deception is a remarkably common occurrence. It’s fundamental to the human condition. This is not because everybody lies all of the time. Deception isn’t the same thing as lying. I’ll return to that point soon.

The reason why deception is fundamental to the human condition is because most people, most of the time, believe that other entities—be they organisations, people, media, news reports, even estate agents and politicians—are basically truthful most of the time. Some social psychologists have studied deception using lab experiments. They’ve shown that most people have a “truth-bias” or a “truth-default.”[i] Most people assume that others are honest and tell the truth.

If you think about it, most of the time this is a good way to be. It’s an accurate perception of how the world is. To behave as if others are lying all the time would be exhausting. Most people do tell the truth. Most people occasionally tell little lies. Some people frequently tell little lies. But very few people tell big lies. And even fewer tell big lies all the time.

Yet our truth-bias comes with a cost. Our assumption that other entities are honest actually makes most of us vulnerable to being deceived on the occasions when others really are determined to deceive us. Since a small minority of people and organisations spend a lot of time and other resources trying to deceive others, this makes all of us vulnerable.

So, my first point is: because most people assume honesty in others, and because there are determined liars in the world, most people are vulnerable to deception at least some of the time.

 

Admitting to Being Deceived Usually Involves Loss – of Social Status, Social Identity, or Both

My second point is that admitting we’ve been deceived usually involves loss—of social status, social identity, or both. This is a bit more complicated, so let me explain.

Understandably, we’re likely to feel upset and angry about falling prey to deception. But we’re also likely to feel some embarrassment at being taken in; at our failure to avoid being deceived. We tend to think: ‘How could I have been so foolish and gullible? Why didn’t I spot the signals?’ ‘The things I took for granted have melted away.’ ‘I’m a failure.’

To illustrate this point further, let’s consider an example.

In 1952, the social scientist Erving Goffman wrote about deception in financial scams. In an article entitled “On cooling the mark out,” Goffman used the example of what professional fraudsters or confidence tricksters (sometimes known as “con-artists”) do when they’ve successfully deceived a person and tricked them into handing over money in a street gambling scam.[ii]

In street slang, the person who’s the target of a gambling scam is known as a “mark.” The process of “cooling the mark” in Goffman’s title is when the con-artist sends an accomplice to talk to the poor deceived person (the mark) soon after the deception. The aim is to cool the mark down, to remind them of the drawbacks of going to the police or widely publicising that they’ve been conned. As Goffman put it, the process of cooling the mark is when “An attempt is made to define the situation for the mark in a way that makes it easy for him to accept the inevitable and quietly go home. The mark is given instruction in the philosophy of taking a loss.” The job of the con-artist’s accomplice is to show the mark there are good reasons to avoid admitting to other people that they’ve been deceived—“it will be embarrassing,” “it won’t do any good,” “you should just move on,” and so on.

Goffman’s point underpins a basic aspect of deception: there are strong incentives to avoid admitting—perhaps even to yourself—that you’ve been misled or you’re in some state of ignorance about the world. Deception is a social process. It thrives in contexts where people are keen to retain their social status or their social identity, or both. We can gain social status, social identity, and social solidarity even by adopting false beliefs. This is what the legal scholar Dan Kahan has termed “identity-protective cognition.”[iii] It’s why we often choose collective identity even if it conflicts with the best evidence at the time. It’s why we’re susceptible to choosing “tribe over truth,” as Kahan puts it.

Yet because so many areas of our social, cultural, economic, and of course political life are shaped by our desire to achieve social status and to maintain social identities, deception is a particularly difficult problem to solve.

 

What, Then, is Deception?

Deception is surprisingly difficult to pin down. It’s often better to start by saying what deception is not.

First, deception isn’t lying or lies. Lying of course plays an important role in deception, but lying’s mere existence doesn’t mean people are deceived. If it did, we’d be in much greater trouble as a society than we are today.

Nor is deception a lack of knowledge. There are all kinds of things about which I lack knowledge—how to make the best daiquiri cocktail, or the precise area in square miles of the North Sea—but I haven’t been deceived about these things.

Nor is deception secrecy. Deception often involves secrecy, but it’s possible to keep secrets in a way that doesn’t mislead others, or harm others’ interests.

Deception isn’t “disinformation” or “misinformation.” This one is a bit trickier. Over the last few years it has become common for social scientists to distinguish between disinformation and misinformation. Disinformation is usually portrayed as intentional; misinformation as unintentional.[iv] Depending on the case, these terms have been used to describe behaviours or to describe a quality of the information itself.

This is a good and useful distinction. But the existence of disinformation or misinformation—either as behaviours or as particular types of information—doesn’t necessarily mean people are deceived and change their attitudes and behaviour as a result. Even the most committed and well-organised disinformation outfits, such as the Russian Internet Research Agency’s online troll and fake social media operation, don’t succeed in deceiving people all of the time.

In fact, a longstanding challenge for communication researchers is how to identify when an intention to deceive actually results in deception. Demonstrating how, why, and to what extent disinformation works is difficult. Accounts of propaganda are often detailed about attempts to deceive—the content of messages and symbols. But the reception and acceptance of meaning—how people actually perceive the messages—can’t just be inferred from the content of the messages. On the other side of the coin, accounts of misperceptions, for example conspiracy theories, are usually strong on the cognitive biases that make people susceptible to false beliefs, but they don’t have much to say about where the false information comes from in the first place. They also don’t say much about how some people and organisations encourage and activate cognitive biases, to mobilise opinion and gain power.

The cognitive biases that make us susceptible to deception are put there by our past experiences and our social interactions. What this means is that, to understand deception, we must understand these experiences and social interactions. And we must understand how deceivers shape them.

 

A Simple Definition

So where does all of that leave us? Hopefully, with a simple one-sentence definition of deception.

I like to think of deception as a conceptual bridge. It is the bridge that links together intentions, interactions, and outcomes. The intentions can be those of people, organisations, or other entities, for example programmed technological artefacts, such as an automated fake social media account. The interactions are the wide varieties of communication between deceivers and the deceived. The outcomes are changes in attitudes and/or behaviours.

So, deception is summarised by this simple definition. It is when an identifiable entity’s intention to mislead results in attitudinal or behavioural outcomes that correspond with the intention.[v]
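To make the three elements concrete, here is a minimal sketch in Python (my own illustrative encoding; the field names are hypothetical, not part of any formal framework). The point it captures is that a misleading claim which nobody comes to believe is not, on this definition, successful deception.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Episode:
    """One candidate episode of deception. Field names are hypothetical,
    chosen only to mirror the three elements of the definition."""
    intended_belief: str             # the misleading claim the deceiver wants accepted
    interaction_occurred: bool       # did deceiver and target actually communicate?
    resulting_belief: Optional[str]  # what the target came to accept, if anything

def is_deception(e: Episode) -> bool:
    """Deception = an intention to mislead, realised through interaction,
    with an outcome that corresponds with the intention."""
    return (e.interaction_occurred
            and e.resulting_belief is not None
            and e.resulting_belief == e.intended_belief)

# A misleading claim nobody came to believe is not (successful) deception:
print(is_deception(Episode("40 new hospitals", True, None)))                # False
# The same claim, accepted by its audience, is:
print(is_deception(Episode("40 new hospitals", True, "40 new hospitals")))  # True
```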

So far, so good. But like most simple definitions, when one considers the details, things soon get more complicated.

 

Five Varieties of Deception

1. Rhetorics

First, bare-faced lies are rare. Complex combinations of true and false information matter more.

Second, deception can involve many different rhetorical techniques beyond the direct promotion of falsehoods. These include withholding or concealing, switching topic, strategic ambiguity, diversions, deflections, or generating conditional, counterfactual versions of events, which can make belief in false interpretations more comfortable.[vi]

Third, deception can arise when evidence that would reduce misperceptions is prevented from circulating and becoming available. In this way, deception can operate through what some political scientists have called “nondecisions”: deliberately limiting the scope of decisions in order to avoid issues that may reduce political support for your cause or interests.[vii]

Let’s consider one recent example of several of these forces in action.

“40 New Hospitals”

During the December 2019 UK general election campaign, Conservative Party leader Boris Johnson repeatedly claimed the Government would “build 40 new hospitals” by 2030. At the time, he omitted the information that funding was only in place for six hospitals, as an investigation by the Guardian newspaper revealed soon after the election.[viii]

But after that, things got murkier.

Last December, BBC News’ Reality Check team analysed the Government’s pledge to “build 40 new hospitals.”[ix] Journalists discovered an obscure document issued in August 2021 by the Department of Health and Social Care. This set out guidance to NHS trusts on what it called the “key media lines” to use when responding to questions about the pledge to “build 40 new hospitals.” The Government document defined a “new” hospital in many different and rather strange ways, but these came under three main headings: a whole new hospital on a new site or current NHS land; a major new clinical building on an existing site or a new wing of an existing hospital; a major refurbishment and alteration of all but building frame or main structure.

Here’s the important point: the Government document said there was a variety of different schemes but they “must always be referred to as a new hospital” in all press and PR communication.

When the BBC asked the Department of Health how many “entirely new hospitals were being built,” an official Department spokesperson replied, “we have committed to build 48 hospitals by 2030, backed by an initial £3.7bn.”[x] So now it was 48 hospitals. But notice that phrase, “backed by an initial £3.7bn.” Nowhere near enough, and only “initial” money.

After further research (which involved writing to all NHS Trusts), BBC News established that, on current plans, only three new hospitals were going to be built before 2030. Not 48, not 40, but three. Two of those are general hospitals; one is a non-urgent care hospital. And those two general hospitals were already under construction, and due to open, before the Prime Minister made his pledge to build 40 new hospitals. (Incidentally, those two hospitals have been delayed and have still not opened as of May 2022.)

So, here we see several rhetorical moves:

  • Complex combinations of true and false information. There is a programme of new building underway in the NHS, but entirely new hospitals are only a small proportion of it.
  • Strategic ambiguity. The funding isn’t in place for the entire programme. It’s an initial £3.7bn and not enough for 48 building projects, let alone hospitals, of which only two have been approved.
  • Diversions and deflections, and generating conditional or counterfactual versions. The use of definitions in public documents that most people would not recognise in everyday language but feel “truthy” when repeated. Is an extension to a hospital actually a “new hospital”? If you added a conservatory to the back of your house, would you tell your friends you have a “new house”?
  • Withholding information, especially over time. The information showing the claim to be misleading only came to light when BBC News quizzed the Government and NHS trusts.

2. Wilful Ignorance

Sometimes called “wilful blindness” or “contrived ignorance,” this variety of deception also doesn’t involve the direct promotion of falsehoods.

This can be structurally organised, in advance, by those in positions of power. During the Nuremberg, Watergate, and Enron trials, wilful ignorance was a key theme. These investigations tried to establish not only who knew what and when, but also whether those in positions of power deliberately avoided exposure to evidence so that they could claim that, at the time, they couldn’t possibly have known the consequences of their actions.

Consider two areas—tobacco advertising and climate change—where history has shown that some organisations have promoted uncertainty to deceive others and bolster their self-interest in pursuing socially harmful courses of action. Tobacco advertising deceived many people from the 1950s to the 2000s, when the harms of tobacco were already well known to tobacco companies.[xi] Denial campaigns funded by carbon-intensive industries have also deceived many people into thinking climate change is not real.[xii]

Wilful ignorance is difficult to prove. But it may become important in future public inquiries into the Covid pandemic around the world, especially if it is used as cover where economic or political expediency outweighed the moral imperative to minimise the collective harm caused by Covid.

The complexity of modern bureaucratic organisation makes wilful ignorance easier to achieve. This is because official tasks are fragmented and it becomes more difficult to identify who is responsible for decisions. For this reason, international law on war crimes tries to hold individuals to account. Consider the example of Walter Funk.

Funk was a junior minister at the Nazi Ministry of Propaganda from 1933 to 1938. He then became the Minister for Economic Affairs and president of the German state bank until the end of the Nazi regime in 1945. At the Nuremberg trials in 1946 the U.S. prosecutor Robert Jackson famously called Funk “the banker of gold teeth.” While Minister for Economic Affairs, Funk had processed shipments of gold including dental repairs that had been removed from the bodies of victims of the Nazi death camps. Despite being involved in Hitler’s government for 12 years, at Nuremberg Funk denied he knew the origins of the shipments of gold teeth he received and he pleaded ignorance of the atrocities in the death camps. Funk eventually received the lesser Nuremberg sentence of life imprisonment, but was released due to ill health in 1957. In its judgement, the Nuremberg Tribunal said that “Funk either knew what was being received or was deliberately closing his eyes to what was being done.”[xiii] The key point here is that deliberately closing his eyes to what was being done depended on his knowing what was being done. And that is wilful ignorance.

3. Manipulating Social Identities

Still, for these strategies to work they need to operate in favourable contexts.

Recall that earlier I briefly mentioned Kahan’s theory of “identity-protective cognition.” Individuals tend to process information in ways that help them maintain status, social support, belonging, and ultimately social and political identity. People resist information that contradicts the dominant beliefs of those groups whose memberships they particularly value.

By recognising this bias, elites can, over time, increase the circulation of false signals about how one social group is supposedly threatened by another social group. Leaders can exaggerate these “out-group” threats. For example, some conservative Republicans in the U.S. have traded in signals of threats from ethnic minority and immigrant communities, as a way to encourage White in-group identity from which they benefit politically.[xiv]

But this strategy of manipulating signals to reinforce identity and sow division has recently been used in more surprising ways. The Russian Internet Research Agency employed it during its campaign of online interference in the 2016 U.S. presidential election campaign. The Russian state recognised the importance of stimulating engagement through social media behaviour such as clicks, likes, and retweets. And much of this relied on reinforcing divisions between social groups.

Between 2015 and 2017, 31 million U.S. Facebook users shared the Russian Internet Research Agency’s Facebook and Instagram posts with their social media networks. These posts were “liked” almost 39 million times, reacted to with emojis almost 5.4 million times, and generated almost 3.5 million comments. The Instagram posts alone received 185 million likes and 4 million comments.[xv] This is deception: intention, interactive process, and behavioural outcomes. The Russian Internet Research Agency’s themes on social media were diverse: pro-left, pro-right, religion, misogyny, racism, pro-Black, pro-LGBT, anti-immigrant. These themes were carefully chosen to reinforce political polarisation by pitting in-groups and out-groups against each other to generate engagement.

4. Repetition, Fluency, and the Illusory Truth Effect

If increasing false signals about out-group threats can deceive people about the extent of those threats and then influence behaviour, what are the mechanisms through which this process works? An important one is what researchers call “fluency.”

Fluency comes from how we feel when we think. It shapes how we approach the task of making sense of new information and it’s important for understanding how we’re deceived.[xvi] If we find a task difficult, for example making sense of information that we haven’t encountered, we’ll associate the task with negative feelings and mentally flag the information for scrutiny. The flip side is that if we find processing information easy, because we’ve encountered the information previously and are familiar with it, we’re more likely to hold positive feelings toward the task, be less likely to flag it for scrutiny, and more likely to accept the information—even if it’s false. Repeated exposure to information increases fluency; fluency increases credulity.

This “illusory truth effect” has been well documented. Early evidence came during the Second World War, when studies of the spread of rumours showed that simply hearing a rumour repeated by word of mouth made it more likely to be believed. The effect can play a role in increasing acceptance of deception on social media. Repeated exposure to false information reduces people’s ethical qualms about sharing information that their cognition tells them is false. People either intuitively (and incorrectly) perceive that the false information has a “ring of truth” about it or that it is “already out there,” so they feel they have ethical license to share it.[xvii]
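As a purely illustrative caricature (a toy model with invented numbers, not a result from the studies cited above), we can sketch the mechanism as a familiarity score that rises with each exposure and feeds the probability of acceptance:

```python
import random

def acceptance_probability(prior_exposures: int,
                           base: float = 0.30,
                           boost: float = 0.08,
                           ceiling: float = 0.85) -> float:
    """Toy caricature of the fluency mechanism: each prior exposure makes a
    statement feel more familiar, and so more likely to be accepted, up to a
    ceiling. All numbers here are invented purely for illustration."""
    return min(ceiling, base + boost * prior_exposures)

random.seed(1)
for n in (0, 1, 3, 6):
    p = acceptance_probability(n)
    simulated = sum(random.random() < p for _ in range(10_000)) / 10_000
    print(f"{n} prior exposures -> acceptance probability {p:.2f} "
          f"(simulated share accepting: {simulated:.2f})")
```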

But the illusory truth effect also gives deceivers opportunities to create false impressions of others’ beliefs and actions. Repeatedly exposing people to false information can stimulate them to act. Examples of this abound online, from “astroturfing” (the creation of fake grassroots campaigns or fake endorsements) to “sockpuppets” (the creation of multiple fake accounts). These methods exploit online recommendations and reviews and spread what are known as social endorsement cues, such as numerical indicators of “likes” and “shares.”[xviii]

5. Manipulating the Credibility of Sources

At this point, you might be thinking that you’re not going to be taken in by these techniques, because you only use trustworthy sources of information. Well, in a sense, you’re right. The person, organisation, or channel through which messages are conveyed is important for people’s judgments.[xix] But the problem is that, as media technologies have changed, so too has how we judge the credibility of sources. And the credibility of a source can be manufactured in various ways.

People still associate established news organisations that have editors (such as BBC News or the New York Times) with accuracy and trustworthiness. However, when a news organisation is not well established, studies have shown that other kinds of cues unique to online news become important for how we judge credibility.[xx] These cues can convince audiences that news stories are credible, even if they are not, because they activate what are known as “bandwagon effects.” For example, “recency” cues signal how current or up to date the news is. “Popularity” cues are those signalling how many other people have viewed, shared, liked, or commented on a news article. Multiple negative comments on a news article can undermine its credibility. All of these things can and have been manipulated by covert activity.
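To see why such cues make an attractive attack surface, consider a toy scoring heuristic (entirely hypothetical weights, not a model drawn from the studies cited). Inflating the popularity counters alone is enough to transform the score:

```python
import math

def toy_credibility(likes: int, shares: int, hours_old: float,
                    negative_comments: int) -> float:
    """Hypothetical heuristic combining the cues discussed above:
    popularity raises perceived credibility; staleness and negative
    comments lower it. The weights are invented purely for illustration."""
    popularity = math.log10(1 + likes + 2 * shares)  # popularity cues
    recency = max(0.0, 1.0 - hours_old / 48)         # recency cue
    backlash = 0.15 * negative_comments              # negative-comment cue
    return popularity + recency - backlash

organic = toy_credibility(likes=40, shares=5, hours_old=30, negative_comments=6)
# The same story after a sockpuppet network inflates the counters:
astroturfed = toy_credibility(likes=40_000, shares=8_000, hours_old=2,
                              negative_comments=0)
print(f"organic: {organic:.2f}  astroturfed: {astroturfed:.2f}")
```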

The way news organisations gather sources has also changed dramatically over recent years. They now routinely use online sources, particularly from social media. This makes them more vulnerable to becoming unwitting accomplices to deception: a deceptive source that a news outlet judges credible fools its editors, and the resulting story is then accepted by audiences who trust the news organisation itself.

Some respected news organisations have unwittingly amplified deception on social media by embedding fake tweets as vox populi quotes in their stories[xxi] or by repeating unfounded conspiracy theories. In 2020, freelance journalists were unwitting recruits to yet another Russian state disinformation campaign that seeded false news stories into left-wing Facebook groups in the U.S. and the U.K.[xxii]

The key point here is that, online, deceivers can adapt their tactics to the context by manufacturing the cues that journalists look for when sourcing stories—not only popularity and recency, but also novelty, emotional outrage, wit, satire, or the apparent divulging of secret or hidden information. All of these can be faked, and the deception in these cases derives from the combination of a credible source and false information.

 

Why Does Deception Undermine Democracy?

Hopefully by now you will have learned about some of the varieties of deception and some of the ways individuals and societies are susceptible to it. But why does deception undermine democracy? We can think about the impacts in two ways: direct and indirect.

Among the direct impacts, deception can empower those who benefit disproportionately from its outcomes. It can undermine individuals’ or groups’ interests and the capacity to act with the social trust required for effective citizenship. Deception can also distort public opinion and policy preferences and amplify political enmity.[xxiii]

Deception begets deception. Political elites are more likely to have incentives to mislead others if they perceive there is some power advantage to be gained. Deception can spread as a norm—just “what it takes” to win.

But there are also equally important indirect ways that deception undermines democracy.

Valuable social norms of evidential verification can start to erode. Consider Donald Trump’s strategy of contesting the outcome of the 2020 presidential election on the grounds of false claims that voting fraud led to his defeat.[xxiv] The consequences of this kind of action can include the erosion of trust in public institutions, the spread of cynicism among publics and elites, and the growth of a culture of indeterminacy where distinguishing truth from falsehood becomes harder. One lesson of the past is that when people become uncertain about the status of public facts they might withdraw into the private sphere. This was an important strand of dissident criticism of the neo-Stalinist states in Eastern Europe during the Cold War.

Deception may have other indirect effects that undermine faith in democracy. For example, media coverage of Russian disinformation activity has probably reached greater numbers than were actually deceived by the activity itself. The coverage could lead indirectly to perceptions that elections can’t be trusted because voters are being manipulated, or it could simply generate chaos.

Such a culture of distrust frees illiberal elites to promise order and certainty, while restricting liberal democratic rights and freedoms. It also frees some leaders to wilfully mislead—because they can claim so little can be trusted.

On that note, I will add the thoughts of Hannah Arendt, a political theorist who skilfully dissected the corrosive impact of deception: “If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer…. A people that no longer can believe anything cannot make up its own mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.”[xxv]

 

Principles for the Fightback

I believe the fightback against deception in public life starts with educating ourselves about the many ways it can work. I see it as the responsibility of social scientists everywhere to use their skills in civic efforts to reduce deception’s prevalence, to inform programmes of education, to promote more ethically responsible practice in the public communication professions, and to render social, economic, cultural, and political elites more meaningfully accountable.

I’ll restrict my remarks to some key principles.

  • Promote broad understanding of how the nature of deception has changed due to changes in our media systems.
  • Focus on empowering people, in their everyday social capacities, to understand and challenge attempts to deceive. Don’t just focus on quick technological fixes to “poor” quality information.
  • Recognise how today’s media and digital platform business models are often ill-suited to combating deception.
  • Independently fund investigative journalism and fact checking.
  • Fund independent scholarly research. Avoid research funded on terms directly dictated by digital platforms, media organisations, or governments.
  • Establish in law a transparent, shared public national data repository of social media take-downs and other identified attempts to deceive.
  • Recognise the importance of politics. Provide opportunities to challenge the idea that deception is a political norm or “just what it takes” to win.
  • Establish nuanced legal frameworks for retrospective public inquiries of all kinds.
  • To avoid moral panics and unintended indirect effects, try to focus efforts on mitigating deception, not just the existence of poor-quality information.

 

© Professor Chadwick, 2022

 

Acknowledgements

Special thanks go to those collaborators whose thinking influenced this article: James Stanyer, Catherine R. Baker, Cristian Vaccari, and Andrew R. N. Ross.

 

References and Further Reading

Several of the themes in this essay are explored in greater detail in this research article: Chadwick, A. & Stanyer, J. (2022). Deception as a Bridging Concept in the Study of Disinformation, Misinformation, and Misperceptions: Toward a Holistic Framework. Communication Theory, 32(1), 1-24. For the bigger picture of how changes in media systems have reshaped deception and created new vulnerabilities, see my recent book The hybrid media system: Politics and power (2nd ed.). Oxford University Press, especially Chapter 10.

 

[i] Levine, T. R. (2014). Truth-default theory (TDT): A theory of human deception and deception detection. Journal of Language and Social Psychology, 33(4), 378–392.

[ii] Goffman, E. (1952). On cooling the mark out: Some aspects of adaptation to failure. Psychiatry, 15(4), 451–463.

[iii] Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection: An experimental study. Judgment and Decision Making, 8, 407–24.

[iv] See for example, Chadwick, A., Vaccari, C., & O’Loughlin, B. (2018). Do Tabloids Poison the Well of Social Media? Explaining Democratically Dysfunctional News Sharing. New Media & Society 20(11), 4255–4274.

[v] Chadwick, A. & Stanyer, J. (2022). Deception as a Bridging Concept in the Study of Disinformation, Misinformation, and Misperceptions: Toward a Holistic Framework. Communication Theory, 32(1), 1-24.

[vi] Clementson, D. E. (2018). Deceptively dodging questions: A theoretical note on issues of perception and detection. Discourse & Communication, 12(5), 478–496; Effron, D. A. (2018). It could have been true: How counterfactual thoughts reduce condemnation of falsehoods and increase political polarization. Personality and Social Psychology Bulletin, 44(5), 729–745; Ross, A., & Rivers, D. (2018). Discursive deflection: Accusation of “fake news” and the spread of mis- and disinformation in the tweets of President Trump. Social Media & Society, 4(2), 1–12.

[vii] Bachrach, P., & Baratz, M. S. (1962). Two faces of power. American Political Science Review, 56(4), 947–952.

[viii] Walker, P., & Campbell, D. (2020). Most of Boris Johnson's promised 40 new hospitals will not be totally new. Guardian. https://www.theguardian.com/society/2020/oct/02/johnsons-37bn-for-40-new-hospitals-in-england

[ix] Barrett, N. & Palumbo, D. (2021). What’s happened to the 40 new hospitals pledge? BBC News. https://www.bbc.co.uk/news/59372348

[x] Ibid.

[xi] WHO (2019). Tobacco industry: decades of deception and duplicity. https://applications.emro.who.int/docs/FS-TFI-198-2019-EN.pdf?ua=1

[xii] Hoggan, J. & Littlemore, R. (2009). Climate cover-up: The crusade to deny global warming. Greystone Books.

[xiii] Gilbert, G. (1947). Nuremberg Diary. Farrar, Straus & Co, p. 443.

[xiv] Jardina, A. (2019). White Identity Politics. Cambridge University Press.

[xv] Howard, P. N., Ganesh, B., Liotsiou, D., Kelly, J., & François, C. (2018). The IRA, Social Media and Political Polarization in the United States, 2012-2018. Computational Propaganda Research Project, University of Oxford. See also DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., Matney, R., Fox, R., Albright, J., & Johnson, B. (2018). The tactics and tropes of the Internet Research Agency. New Knowledge.

[xvi] Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. (2007). Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. Advances in Experimental Social Psychology, 39, 127–161.

[xvii] Effron, D. A., & Raj, M. (2020). Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share. Psychological Science, 31(1), 75–87.

[xviii] Luo, M., Hancock, J. T., & Markowitz, D. M. (2020). Credibility perceptions and detection accuracy of fake news headlines on social media: Effects of truth-bias and endorsement cues. Communication Research. https://doi.org/10.1177/0093650220921321

[xix] Metzger, M J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413–439.

[xx] Sundar, S. S., Knobloch‐Westerwick, S., & Hastall, M.R. (2007). News cues: Information scent and cognitive heuristics. Journal of the American Society for Information Science and Technology, 58(3), 366–378.

[xxi] Lukito, J., Suk, J., Zhang, Y., Doroshenko, L., Kim, S. J., Su, M-H., Xia, Y., Freelon, D., & Wells, C. (2020). The wolves in sheep’s clothing: How Russia’s Internet Research Agency tweets appeared in U.S. news as vox populi. International Journal of Press/Politics, 25(2), 196–216.

[xxii] Nimmo, B., François, C., Eib, C. S., & Ronzaud, L. (2020). IRA again: Unlucky thirteen. Graphika.

[xxiii] Baines, D., Brewer, S., & Kay, A. (2020). Political, process and programme failures in the Brexit fiasco: exploring the role of policy deception. Journal of European Public Policy, 27(5), 742–760.

[xxiv] Helderman, R. S., Brown, E., Hamburger, T., & Dawsey, J. (2021). Inside the ‘shadow reality world’ promoting the lie that the presidential election was stolen. Washington Post, https://www.washingtonpost.com/politics/2021/06/24/inside-shadow-reality-world-promoting-lie-that-presidential-election-was-stolen/

[xxv] Interview with French writer Roger Errera, 1974. New York Review of Books. https://www.nybooks.com/articles/1978/10/26/hannah-arendt-from-an-interview/


Professor Andrew Chadwick

Andrew Chadwick (PhD, London School of Economics) is Professor of Political Communication in the Department of Communication and Media at Loughborough University.