Gresham College Lectures

Breaking Democracy: Lies, Deception and Disinformation

May 11, 2022 Gresham College

With conspiracy theories and disinformation on the rise in both media and politics, is our democracy at risk? We may lose trust in society, in the institutions that inform us, and, ultimately, in the democratic process. Our sense of responsibility for the everyday information we share may diminish. Deceitful politicians may escape scrutiny by claiming that truths are false, falsehoods are true, and in any case nothing can be proved. How should we respond to these challenges?


A lecture by Andrew Chadwick

The transcript and downloadable versions of the lecture are available from the Gresham College website:
https://www.gresham.ac.uk/watch-now/breaking-democracy

Gresham College has been giving free public lectures since 1597. This tradition continues today with all of our five or so public lectures a week being made available for free download from our website. There are currently over 2,000 lectures free to access or download from the website.

Website: http://www.gresham.ac.uk
Twitter: http://twitter.com/GreshamCollege
Facebook: https://www.facebook.com/greshamcollege
Instagram: http://www.instagram.com/greshamcollege

- I want to start by just acknowledging some of my colleagues, whose thinking has played a role in shaping this particular talk tonight. Academia is always a collaborative endeavor, and this is no exception. So Professor James Stanyer, Catherine R. Baker, who's a PhD student of mine, Professor Cristian Vaccari, and Andy Ross, who is another PhD student of mine. I want to start with a quick question. This is a friendly educational setting, so I hope you don't mind this. Hopefully you won't find it too difficult to answer. Have you ever been deceived? Please raise your hand if you've been deceived. I thought that would be the response. Now when you realized you'd been deceived, did you feel a bit embarrassed about it? - [Audience Member] Yes. - Yeah, okay. Hold that thought for now. You might be thinking, why all these questions? I'm here for a lecture. Give me some answers, not questions. There are good reasons why I've started with these questions. It's because there are two fundamental points I'd like to begin with about deception. First, deception is fundamental to the human condition. This isn't because everybody lies all of the time. Deception isn't the same thing as lying, and I'll come to that point soon. The reason deception is fundamental to the human condition is because most people, most of the time, believe that other entities, be they people, organizations, media, news reports, even estate agents and politicians, are basically truthful most of the time. Some social psychologists, such as Tim Levine, have shown that most people have what we call a truth bias, or a truth default. Most people assume that others are honest and telling the truth. And if you think about it, most of the time, this is a good way to be. It's an accurate perception of how the world is. If we had to behave as if others are lying all of the time, it would be exhausting. Most people do tell the truth; surveys show this in many countries around the world.
Most people occasionally tell little lies. Some people frequently tell little lies, but very few people tell big lies. And even fewer people tell big lies all of the time. But the problem is that the truth bias we have also comes with a cost. Our assumption that others are honest makes us vulnerable, most of us vulnerable, to deception at least some of the time. And it makes us vulnerable when others are really determined to deceive us. And a small minority of people and organizations do spend quite a lot of time and other resources, including money, trying to deceive us. This makes all of us vulnerable at some point. So hold that thought before we start judging ourselves too harshly, and judging other people for being deceived. The second point I'd like to make about deception at the level of individual experience starts to broaden out into its social implications. And it explains a lot, I think, about why deception is such a useful strategy, particularly for certain political elites. The second point is that admitting we've been deceived usually involves some loss, either of social status, social identity, or both. This is a bit more complicated, this point, so I'll explain. Now understandably we're likely to feel upset, angry, when we fall prey to deception. We're also likely to feel some embarrassment at having been taken in. How could I have been so foolish and gullible? Why didn't I spot the signals? The things I took for granted have melted away and I'm a failure. These are all very common experiences of being deceived, in interpersonal relationships but also, I would argue, when it comes to bigger public issues as well. So to illustrate this point, let's consider an example. This is Charles Ponzi. The notorious banker turned financial fraudster and, of course, inventor of the eponymous Ponzi scheme.
I'm not going to go into detail about what a Ponzi scheme is, but if you haven't heard of it, it's a financial scam based on the continuous deception of new investors, whose money is funneled to previously deceived investors without any investor ever owning any tangible assets, apart from, of course, the fraudster him or herself at the top of the pyramid. In 1952, the great sociologist Erving Goffman wrote a fascinating article about the characteristics of financial deception. The article was called "On Cooling the Mark Out: Some Aspects of Adaptation to Failure." Goffman used the example of what professional fraudsters do when they've successfully tricked someone into handing over money in a gambling scam. Now in street slang, the person who's the target of a gambling scam is known as a mark. You may have heard that before. The process of cooling the mark in Goffman's title is when the con artist sends an accomplice to talk to the poor deceived person, the mark, soon after the deception has happened. The aim is to cool the mark down, to remind the mark of the drawbacks of going to the police, or of publicizing that they've been conned. So the con artist's accomplice is there to show the mark that there are good reasons to avoid admitting to others that they've been deceived. It'll be embarrassing. It won't do you any good anyway. You should just move on, go home, and so on. Goffman's point underpins this basic aspect of deception. There are strong incentives to avoid admitting, perhaps even to ourselves, that we've been misled or that we are in some state of ignorance about the world. Deception is a social process. It thrives in contexts where people are keen to retain their social status or their social identity or both. We can gain status and social identity, even social solidarity, by continuing with false beliefs. This is what the legal scholar Dan Kahan has termed identity protective cognition. I'll say a bit more about that later on.
It's why we often choose collective identity, even if it conflicts with the best available evidence at the time. And it's why we're susceptible to choosing tribe over truth, as Kahan puts it. Tribe over truth. Yet so many areas of social, economic, and of course political life are shaped by the desire to achieve social status and to maintain our social identities. This makes deception a particularly difficult problem to solve, in our media systems, in politics, in society, in interpersonal interactions, online and so on. So beyond those two points, I want to turn now to trying to say something about defining deception. It's surprisingly difficult to identify and measure. It's often better to start by saying what deception is not. So I'll do that. First, deception is not lying or lies. Lying of course plays an important role in deception, but lying's mere existence doesn't mean that people are deceived. If it did, I'd argue that we'd be in much greater trouble as a society than we are today. Nor is deception a lack of knowledge. There are all kinds of things about which I lack knowledge, how to make the best daiquiri cocktail, or the precise size of the North Sea. But I haven't been deceived about those things. I'm just not knowledgeable about them. Nor is deception secrecy. Deception often involves secrecy, but it's possible to keep secrets in a way that doesn't mislead others, or harm others' interests. I'm sure that we've all had examples of that in our lives. And finally, nor is deception disinformation or misinformation. Now this one's a little bit more tricky 'cause you've probably heard these words bandied around a lot over the last five or six years, because of the explosion of research in this field. And I'm a participant in that field as well. It's a bit more tricky so bear with me. Over the last few years it's become common, especially among social scientists, to make a distinction between misinformation and disinformation.
Disinformation is often portrayed as intentional, and misinformation as unintentional. So depending on the case, these terms have been used either as verbs to describe behaviors, spreading things, for instance, or as nouns to describe the quality of the information itself, bad quality information. Now this is a good and useful distinction, but the mere existence of disinformation or misinformation doesn't necessarily mean that people are deceived and change their attitudes and behavior. In fact, a longstanding challenge for political communication researchers like me is how to identify when an intention to deceive actually results in deception. It's really difficult. So historically, accounts of propaganda, for instance, are often highly detailed about attempts to deceive: the content of the messages, the symbols and so on. But the acceptance of meaning, how people actually perceive the messages, can't just be inferred from the content of the propaganda messages. And on the other side of the coin, accounts of beliefs such as conspiracy theories are strong on the psychological biases that make people susceptible to false beliefs, but they often don't have very much to say about who introduces false information, such as conspiracy theories, in the first place. They don't have much to say about how some people and organizations try to activate our psychological biases, to mobilize opinion and gain power. The biases that make us susceptible to deception are put there, I argue, by our past experiences and our social interactions. So what all of this means is that to understand deception as a distinctive thing, we need to understand social interactions, and we need to understand how deceivers can actively shape the context of communication to achieve their goals of deception, and of changing people's attitudes and behaviors. So with those principles in mind, where does that leave us? I like to think of deception as a conceptual bridge.
It's the bridge that links together intentions, interactions, and outcomes. The intentions can be those of people, organizations, or other entities, even technologies, for example, such as an automated fake social media account. The interactions are the wide varieties of types of communication between deceivers and the deceived. And the outcomes are changes in attitudes or behaviors. So in my view, deception is best understood not as all of those things that I said it wasn't, but as when an identifiable entity's intention to mislead results in attitudinal or behavioral outcomes that correspond with the intention. So that's the kind of definition of deception that I like to work with, because it's more precise than a lot of the social science literature that we come across on misinformation and disinformation. So far so good. But like most simple definitions, when you start to expand on the detail, of course, things soon get more complicated. In the rest of the talk, I'm going to talk you through five varieties of deception. Then I'm going to say something about why deception's bad for democracy. And I'm going to close with some broad principles for how we might fight back. So my first variety involves rhetoric. The first thing to say here is that barefaced lies are rare. Complex combinations of true and false information matter more in this game. Deception can involve lots of different techniques beyond the direct promotion of falsehoods. These include withholding information, switching topic, strategic ambiguity, diversions, deflections, and generating counterfactual or conditional versions of events that can make belief in falsehoods more comfortable for people. And deception can also arise when evidence that reduces false beliefs doesn't become current and available. So in this way deception can operate through what some political scientists have called nondecisions.
So a nondecision is when you deliberately limit the scope of political decision making to avoid dealing with issues in ways that might reduce support for your cause, or your interests. So deception can operate in that way. I want to give you an example, a very recent example. On this day, very apposite actually given it's the local elections. Let's consider one example. During the December 2019 UK general election campaign, Boris Johnson, the prime minister, repeatedly claimed that the government would, and I quote, "Build 40 new hospitals by 2030." At the time, he left out the information that funding was only in place for six hospitals, as an investigation by The Guardian newspaper revealed soon after the election. But after that things got murkier still. Last December, BBC News's Reality Check team of fact checkers analyzed the government's pledge to build 40 new hospitals. The journalists there discovered an obscure document issued in August 2021 by the Department of Health and Social Care. This document set out guidance to NHS trusts on what it called, and I quote, "The key media lines to use when responding to questions about the pledge to build 40 new hospitals." The government document defined a new hospital in many different and rather strange ways. But these came under three broad headings. First of all, a whole new hospital on a new site, or current NHS land. Secondly, a major new clinical building on an existing site, or a new wing of an existing hospital. And thirdly, a major refurbishment. Now here's the important point. The government document said that there was a wide variety of schemes, but, and I quote, "They must always be referred to as a new hospital, in all press and public relations communication." Now when the BBC asked the Department of Health how many entirely new hospitals were being built, an official spokesperson replied, and I quote, "We have committed to build 48 hospitals by 2030, backed by an initial 3.7 billion pounds."
So now it was 48 hospitals. But note the phrases "committed to" and "backed by an initial 3.7 billion pounds": not currently being built, not enough money, and only initial money. So after much research, which involved the BBC writing to all NHS trusts across the land, BBC News established that on current plans only three new hospitals were definitely going to be built by 2030. Not 48, not 40, but three. Two of those are general hospitals. One is a non-urgent care hospital. And those two general hospitals were already being built, and were due to open before the prime minister's pledge to build 40 new hospitals. Incidentally, those two still haven't opened because they've been beset by delays. So what do we see here? Several aspects of rhetoric. First, complex combinations of true and false information. There is a program of new building underway in the NHS, but entirely new hospitals are only a small part of it. Secondly, strategic ambiguity. The funding isn't in place for the entire program. It's an initial 3.7 billion, and not enough for 48 building projects, let alone hospitals. The cost would be far higher, and only two have been approved. Diversions and deflections and counterfactual versions. In this case, the use of definitions most people wouldn't recognize in everyday language, but which feel truthy when repeated. So is an extension or a refurbishment of a hospital actually a new hospital? If you added a conservatory to the back of your house, would you tell all of your friends that you have a new house? Probably not. Concealing or withholding information, especially over time. It was only when BBC News quizzed the government that it revealed information that was potentially misleading. My next variety of deception is a bit tricky. It's willful ignorance. It's a special category. And I'll explain why. Again, this doesn't involve the direct promotion of falsehoods.
Deception of this kind can be structurally organized and advanced by those in positions of power. Some of you may have come across this before. In famous investigations, such as the Nuremberg trials, the Watergate investigations in the United States in the early 1970s, and the Enron fraud trial in the mid-2000s, willful ignorance was established as a key theme. These investigations tried to establish not only who knew what and when, but also whether those in positions of power deliberately avoided exposure to evidence, so they could claim that at the time they couldn't possibly have known the harmful consequences of their actions. So consider two areas. One is tobacco advertising, the other is climate change. In both, history has shown that organized interests have promoted uncertainty to deceive others, because of their self-interest in pursuing a socially harmful course of action. Tobacco advertising deceived many people from the 1950s to the 2000s, including my dad, when the harms of cigarettes were well known to tobacco companies but were buried. Climate denial campaigns funded by carbon intensive industrial interests have also deceived many people into thinking that climate change is not real. Now the complexity of modern organizations makes willful ignorance easier to achieve, because lots of tasks in modern bureaucratic organizations are fragmented. It becomes difficult to identify who's responsible for decisions. And for this reason, of course, international law relating to war crimes, much in the news right now for good reasons, tries to clearly hold individuals to account rather than organizations. This is a picture of Walther Funk. Funk was a junior minister at the Nazi Ministry of Propaganda from 1933 to 1938. He then became Minister for Economic Affairs and President of the Reichsbank, the German state bank, until the end of the Nazi regime in 1945. Now at the Nuremberg trials in 1946, the US prosecutor, Robert Jackson, famously called Funk.
And I quote, "The banker of gold teeth." When minister for economic affairs, Funk had processed shipments of gold, including dental repairs that had been removed from the bodies of victims of the Nazi death camps. Despite being involved in Hitler's government at a senior level for 12 years, at Nuremberg Funk denied he knew the origins of the shipments of gold teeth he received into the Reichsbank. And he pleaded ignorance of the atrocities in the death camps. In its judgment, the Nuremberg tribunal said, and I quote, "Funk either knew what was being received, or was deliberately closing his eyes to what was being done." So the key point here is that Funk's deliberate closing of his eyes to what was being done depended on his knowing what was being done. And that's willful ignorance. Closing your eyes when you know what it is you've seen. So a particularly difficult category of deception, but one that's really important, and one that I would argue is likely to be important in any future public inquiry into the coronavirus pandemic and its effects. My third variety is deception by manipulating social identities. For the kinds of strategies I've talked about so far to work, they need to operate in a favorable context. And earlier I briefly mentioned Dan Kahan's theory of identity protective cognition, tribe over truth. Individuals tend to process information in ways that help them maintain status, a sense of belonging, and their social and political identity. They resist information that contradicts the dominant beliefs of their social tribe. We're all susceptible to this. But by recognizing this bias, elites can over time increase the circulation of false signals about how one social group in society is supposedly threatened by another social group. Leaders can exaggerate what we call in social psychology out-group threats. The fear of the other.
So for example in the United States, many conservative Republican politicians have long traded in signals of threats from ethnic minority and immigrant communities, as a way to encourage white in-group identity, from which they benefit politically in certain parts of the country. But this strategy of manipulating signals to reinforce identity and divisions has been used recently in a far more surprising way, because the Russian state, via its Internet Research Agency, used it during its campaign of online interference in the 2016 presidential election. So I'll show you that now. The Russian state operatives recognized the importance of stimulating engagement through social media behavior, such as clicks, likes, and retweets. And much of this relied on reinforcing social divisions between different social groups in US society. Between 2015 and 2017, the numbers are staggering. 31 million US Facebook users shared the Russian Internet Research Agency's Facebook and Instagram posts with their social media networks. These posts were liked almost 39 million times, received emoji reactions about five and a half million times, and generated three and a half million comments. The Instagram posts alone received 185 million likes and 4 million comments. All of these data, by the way, come from the US Congress investigation. This is deception. There's intention, an interactive process, and behavioral outcomes. People sharing, clicking, liking, smiling emojis, et cetera. I could go into far more detail about this, and I'm happy to answer questions in the Q and A if you like. But in the interest of time I'll move on to some examples. So these are some examples of the social media posts that the Russian Internet Research Agency posted onto Instagram and Facebook. The themes: religion, American identity, but also Black American identity as well. The themes on social media were incredibly diverse. Pro-left, pro-right, politically.
Religion, misogyny, racism, pro-Black, pro-LGBT, anti-immigrant. These themes were carefully chosen to increase political division, to fire people up, to activate that tribe over truth, and to pit groups against each other in American society. So reinforcing or manipulating social identities can be used not just in the ways that you'd expect in mainstream politics. My fourth variety again goes a bit deeper into how all of this works: what makes the context work? So if increasing false signals about threats can deceive people, and then influence their behavior online, how does this work? An important mechanism is what researchers call fluency. And I'll explain this because it's not as straightforward as it seems. Fluency is a sense of how we feel when we think. So imagine you've got new information to process. You're doing it at the cognitive level, you're trying to make sense of the information, but it also has an emotional impact on you that matters. Because how we feel about a task comes to shape our approach to making sense of the task, which in this case is processing new information. If we find a task difficult, processing information that we haven't encountered before, we'll associate the task with negative feelings, and mentally flag the information for further scrutiny. The flip side of this is when we find processing information easy, because we've encountered it before perhaps, we're more likely to hold positive feelings toward the task. And we are more likely, and this is the key point, to accept the information even if it's false. So repeated exposure to information over time increases our sense of fluency, because we feel more comfortable with the information, and therefore it increases our credulity, the extent to which we'll believe something. Now this is the so-called illusory truth effect. And it was first established in the Second World War.
It goes back a long way, to when social psychologists did some early studies of the diffusion of rumors during wartime. Repeated exposure to false information reduces people's ethical dilemmas about sharing it as well. So it's not just that we feel comfortable with it. We're also more likely to share it. We perceive, intuitively but in this case incorrectly, that the false information has a ring of truth about it, so it's okay to share it; or that it's already out there, so it's okay to share it. So you feel you have an ethical license to share it. If you think about it, this is how gossip works. But the illusory truth effect also creates opportunities for deceivers to create false impressions of other people's beliefs and actions. So repeatedly exposing people to false information can increase its acceptance and stimulate people to act. Examples of this abound online, from so-called astroturfing, where you create fake campaigns online in order to generate enthusiasm around a product or a politician or a particular cause, to so-called sock puppets, the creation of multiple fake accounts, which, as I say, the Russians have done a lot of over the years, but they're far from the only ones. What these methods exploit are online recommendation technologies, what we call social endorsement cues. So if you write a good review for a product, or you like one of Boris Johnson's tweets, or you put a smiling emoji on something that your local councillor tweets online, or whatever, that's an endorsement cue. You're doing it because you want to send a signal to others about your opinion of that particular act or utterance. But the problem is that in today's media systems, these endorsement cues are super important for how people make decisions in many different areas of their lives. So over the last two years I've been an advisor to the Department for Digital, Culture, Media and Sport, on issues such as disinformation and fake news.
And one of the things that we've talked about, for instance in relation to the issue of vaccine hesitancy toward coronavirus vaccines, is how we can use social endorsement to encourage more people to learn about the vaccines and take the vaccines up, because they're a really effective measure for public health. But we're fighting, and DCMS is always fighting, this battle. And it's there in the Online Safety Bill that is going through Parliament right now, about the fakery that's involved in the online world, where these kinds of social endorsement cues can be hijacked. Whether it's fake reviews for products, all the way up to politicians deliberately using armies of supporters on social media that are based on fake accounts. So with that in mind, I will turn to my final variety of deception, which again builds on what I've just said. And this is manipulating source credibility. Now you might be thinking, oh, I'm not going to be taken in by these techniques. I can spot a fake review. I know when the tweet's not real. Because you use trustworthy sources of information. You think, oh no, I spend a lot of time looking at BBC News. I'm well informed about the world. You're absolutely right. The person, the organization, the channel through which messages are conveyed: we know that those things are really important for people's judgements. But the problem, and this is the key problem, is that as media technologies have changed, how we judge the credibility of sources has also changed. And the credibility of a source can be manufactured in various new ways that lead to deception. So broadly speaking, most people in British society still think that established news organizations that have editors, such as the New York Times, or The Guardian, or the BBC, are trustworthy. However, when a news organization isn't well known or established, studies have shown that other kinds of cues that are unique to online news become more important for how we judge credibility.
And these cues can convince audiences that news stories are credible, even if they're not. So one example is so-called recency cues. Something, such as the timestamp on an article on a website, that signals how recent the article is. These matter for how seriously people will treat the news story. Popularity cues. Our old friends, the likes, the retweets, the smiling emojis. They matter as well. But so too do comments underneath the news article. Multiple comments on a news article can affect how people perceive its credibility. Experiments in media and communication research going back 10 years have shown this. Negative comments are particularly powerful for some reason. They tend to undermine the credibility of perfectly credible stories written by perfectly credible journalists. So of course this opens up opportunities for people to organize themselves, to manipulate these contexts, to signal to others in ways that undermine the credibility of messages that they don't agree with, or that they want to undermine for political reasons. The way that news organizations gather their sources has also changed dramatically over recent years. Perhaps there's one or two journalists in the room who know this. News organizations now routinely use online sources, particularly social media posts, in their storytelling. But the problem is that this makes journalists, and indirectly us, more vulnerable to deception. A source can be believed to be credible by multiple news outlets, and by the editors of those organizations, and then be accepted by audiences that believe the news organization itself to be trustworthy. So to give you an example, some respected news organizations, including The Guardian, the New York Times, The Washington Post, big names in news, have unwittingly over the last five years embedded fabricated social media posts, particularly tweets, as vox pop quotes in their stories, in order to add a bit of color or a bit of interest.
It's the equivalent of walking down the street and asking somebody what they think about inflation or whatever. And journalists have long used vox pops. But when they're online and they're super accessible and super convenient, and there is a norm that they should be embedded in news stories from now on, that creates vulnerabilities. In 2020, freelance journalists were unwitting recruits to yet another Russian state disinformation campaign, one that was revealed by the organization Graphika, in the US. The Russian state had seeded false news stories into left wing Facebook groups in the US and the UK. They'd hired journalists, freelance journalists, to write the stories; they'd paid them to do it. And then they said, "We want you to post them into Facebook groups, to try and spread fake ideas around the war in Syria, Afghanistan, all kinds of different causes." So the key point here is that online deceivers can quickly adapt their tactics to the context now. And the cues that journalists and we look for when we are encountering important events make us more vulnerable. All of these things can be faked. And the deception in these cases, in some of the cases, happens when a credible news organization unwittingly reports false information. So I want to turn now to the issue of why deception undermines democracy. Hopefully by now you'll have an idea of some of the varieties of deception. And I want to think about its impact on democracy briefly, in two ways. The first is direct. The second is indirect. The first set of impacts are the direct impacts. So deception can undermine individual or group interests. Straightforward. If you are deceived, you can't act with full rationality and push for your interests in the public sphere. It can also undermine the capacity for all of us to act as effective citizens. I'll say a little bit more about that shortly, because it's not always a question of directly being deceived.
It's a question of not being able to distinguish true from false, and feeling lost in a kind of mess of uncertainty, which I would argue is just as important. And of course, deception can empower those who benefit disproportionately from its outcomes. Deception can also distort public opinion and policy preferences. So if it comes to an important moment such as a referendum or an election, and there are deceptive political messages circulating during those important moments, that can obviously add up, in various complicated ways, to how people make a decision on which way to vote. If they're not voting with full awareness, then that can have a bad effect, because it can distort public opinion and, ultimately, policy preferences. And there are some people who would argue that, for instance, Brexit is an example of that. I'm not going to go into details about that right now. As I mentioned earlier as well, deception can amplify political divisions, and it can be used to deliberately amplify political divisions. The next point here, and I think, again, this is a really tricky one, is that deception can beget deception. Political elites have incentives to mislead others if they perceive that there's some power advantage to be gained. And when this happens, deception can spread as a norm, as just what it takes to win in the world of politics. And that's deeply corrosive, I would argue. But then there are all kinds of indirect impacts. First of all, social norms of verifying evidence can start to erode. So consider Donald Trump's strategy of contesting the outcome of the 2020 presidential election on the grounds of false claims that so-called voting fraud led to his defeat. This can erode trust in all kinds of public institutions. It can spread cynicism among other elites, but also among the public. And it can lead to what I call a culture of indeterminacy. 
Where distinguishing between truth and falsehood becomes harder, and it leaves us in a state of paralysis. One lesson of the past is that when people become uncertain about the status of public facts, they can withdraw into the private sphere. They can say, "Politics, that's not for me. They're all the same, they're all lying. I don't want to know." I'm sure we're all aware of these kinds of sentiments. But this was an important strand, actually, of dissident writings in the Neo-Stalinist states in Eastern Europe during the Cold War, where the concern was not so much that people would be deceived by propaganda, but that they would withdraw from the public sphere. And that would give more power to the political class, to go about its own business in any way it saw fit. And finally, when it comes to indirect impacts: media coverage of the Russian disinformation campaigns during Western elections, and now during wartime, has probably reached greater numbers than were actually deceived directly by the activity itself. So this coverage could lead indirectly to perceptions that elections can't be trusted anymore, because voters have been manipulated. And again, that's corrosive of liberal democracy. So on that note, before I talk about some broad principles for how we might fight back against this situation, I'll note the thoughts of one of my favorite political theorists, Hannah Arendt, who was so skillful in dissecting the corrosive impact of propaganda and deception. She made this point very succinctly in 1974, in an interview with a French journalist. And she said, "If everybody lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer. A people that no longer can believe anything cannot make up its own mind. It is deprived not only of its capacity to act, but also of its capacity to think and judge. 
"And with such a people, you can then do what you please." And I think that for me, is in one paragraph is kind of the biggest problem that we face when, it comes to deception undermining democracy. How should we try to fight back? Hopefully this talk today, which is also being streamed online and will be archived to YouTube will in a very small way, provide some kind of educative effect. But that's an important point. I think that the spread of deception in public life, fighting back against that, starts with educating ourselves about the many ways in which it can work. And I see as the responsibility of social scientists everywhere, to use their skills to contribute to civic efforts, to reduce the prevalence of deception, to inform programs of education, and promote more ethically responsible practice, in the public communication professions. And to hold social, economic, cultural, and political elites more accountable. So this is a vast area. I'll restrict my remarks to some key principles that I'm going to go through fairly quickly, and you'll have a chance in the Q and A to ask me about these. So the first principle is to promote broad understanding of how the nature of deception has changed, quite markedly in recent years, due to changes in our media systems. And I think communication scholars in particular, and people with an interest in all forms of communication not just political communication, are in a good position to spread the word about this. Secondly, I think we should focus on empowering people in their everyday social capacities, to understand and challenge attempts to deceive. And we shouldn't just focus on quick technological fixes to so-called poor quality information. We need to start from the person, not just from the information itself and try to banish the information from the public sphere. Thirdly, we should recognize how today's media and digital platform business models are often ill-suited to combating deception. 
Now that's not to say that Facebook or Twitter turns a blind eye to these problems. Absolutely it doesn't. And a seismic shock has gone through this world since 2016, as you're probably aware. But there comes a point where the fundamentals of harvesting attention on social media, in order to generate revenue from advertising, clash with the principle of fighting against deception. Because, simply put, there are many, many reasons why deception, through, for instance, emotional appeals, or reinforcing or sowing divisions between different political groups, makes good business sense for social media companies. So we need to recognize that and build different models. The next point is to independently fund investigative journalism and fact checking. Now this is already happening, but we could use more of it around the world, not just in Britain. And also fund independent scholarly research. Let's try and avoid research funded on terms directly dictated by digital platforms, by media organizations of other kinds, and of course by governments. Let's have some independent scholarly research. Platforms themselves have spent hundreds of millions of dollars over the past five or six years funding academic research. Now, most of the time there are no strings attached, and the terms and conditions of those contracts, thankfully, most of the time enable academic independence. But as everybody in this room will know, independence is a relational concept. It's not just embedded in legal documents. It's about the expectations that you have when you sign up for research funding, and the expectations of the funder who provides it. So it's a complicated setup, and I think we could use more independent research. Independent funding for research, rather. And then finally, a few more points. 
First of all, establish in law a transparent, shared, public national data repository in the UK of social media takedowns and other identified attempts to deceive. Now this is something that we talked about, as I say, in some of the advisory meetings that I've been involved in at government level over the last couple of years. Sadly, it doesn't look like it's going to happen. I wasn't the only one calling for it; there were fact checking organizations as well. This would provide transparency, and a public record that all would be able to see and analyze. Next is to recognize the importance of politics. Provide opportunities to challenge the idea that deception is a norm and just what it takes to win. Because when that norm takes hold, I think we're in trouble. Establish nuanced legal frameworks for retrospective public inquiries of all kinds. As I said earlier, willful ignorance is a particularly fascinating, but difficult to determine, aspect of deception. We need legal frameworks to take these things into account. Some of them already do, but we could use more of it. And finally, to avoid moral panics and unintended indirect effects, try to avoid focusing just on the existence out there on social media of poor quality information. That will be with us forever. Instead, focus efforts on mitigating deception where people are actually deceived and modify their behavior and their attitudes as a result. Because, as I said at the start of the talk, the mere existence of lies and poor quality information does not necessarily mean that people are deceived. And on that more optimistic note, I will close. Thank you very much for listening, and I look forward to your questions. (audience claps) Thanks. Thank you. Thanks.

- Thank you very much for a very stimulating lecture, which raised all sorts of questions. We're inundated with questions online, and I'm sure there'll be lots of questions in the room as well. 
I'm going to start off with one of the online questions, which is this: "How do we override our truth bias, and why have we not yet evolved to do so? Is it that for most of the people, most of the time, the cost of deception is minimal?"

- That's a really good point from the online questioner there. I think that there are good social evolutionary reasons why we have a truth bias. And as I said in the talk, I'd hate to be in a society where there was absolutely no trust. It couldn't function. Imagine walking down the street having to question everything around you: is that really the floor, are those lights above? You've got to have that acceptance in order to cooperate with others and to get things done. Incidentally, the research literature on trust is very interesting, because, contrary to what you might think, it's not the most trusting people who are more gullible. It's actually people who are socially isolated who are more gullible. They're more gullible because they don't learn as well as those who form lots of social relationships in trustworthy ways. They don't learn the cues of deception as well as people who spend a lot of time trusting others. It's actually fascinating. It's sort of counterintuitive, but it's the case. The second part of the question is to do with whether we have little to lose from deception: from interpersonal deception, where somebody tells you something that might mislead you about your finances or whatever. I think it's not clear cut in all cases that the cost is small. I think that the cost can be quite serious. And at the mass level, the level of mass deception, where you've got significant societal distortions, and things like huge public health problems caused by people who've been misled about the safety of COVID vaccines, for instance, I would say that there's always quite a significant cost. 
That's not to say that everybody's deceived, but there could be large sections of society. And that can make a difference not just to their lives, but to all of our lives. So that would be my response. I'm not sure if I've entirely answered the question, but I hope it went some way towards it.

- [Audience Member] Thank you. So you mentioned in the slide before that it's probably not going to happen, but publishing those social media takedowns, things like that. If it were to happen, which unfortunately it won't, how would that work and what would it actually look like?

- Sure. The model that I've got in mind is this. We know now, and these efforts have been accelerated significantly in the last two to three years, that content moderation, taking down material, happens on a minute by minute basis across all social media platforms, big or small. The truly big and important ones that are public, that is. Not platforms such as WhatsApp, for instance, which is entirely different, but on the public platforms we know that this happens. But the big problem that we've got is getting our hands, as researchers, on the data. And there is also a question of accountability here: a sheer lack of transparency, where we don't know when the decisions are being made and how they're being made. There have been attempts to address this, with Facebook and its oversight board and so on. But only by making these things public can we, first of all, identify patterns of online deception more successfully, and secondly, increase trust in the process by which the content of the public sphere is moderated. So, by doing that, and by allowing fact checkers to submit what they find. Fact checking organizations like Full Fact, for instance, do great work, but it would be really great to have something beyond their website, which is what they've chosen to tell us. And it's all really great stuff, so I'm not criticizing them. 
But it would be really great for them to show their working, and kind of publish it and say, "We came across this, and this led us to that, and we found this." That's what I'm talking about. It's essentially just making the whole issue public, like a public utility, really, in the interest of citizenship, for people who want to go and look at this material. And then we can have an argument about it, and have that argument in public. Because you've got these very large scale public utilities, like Facebook. Facebook is, I would argue, almost as important to our society as electricity. Not quite, but almost. So if you think about it, why shouldn't we have more public-spirited, public-oriented forms of oversight of these large, powerful organizations? Not through censorship, not by government saying, "No, you can't publish that," but by opening it up so that we can all then have a look at what's being done and have an argument about it.

- So I'm going to ask you a question which has come out of that. It follows on from what you've just been saying. Several people have voted for this question online, which is this: "Aren't fighting deception and fact checking just other ways of government censoring what doesn't suit them?" (Andrew laughs) It's a very good question.

- It is a really great question. I'm not sure if I'm going to be able to answer that one in a few minutes, but let me try. Look, I think at the end of the day, what we've got to realize is that there are categories of speech that are harmful to social cohesion and to public health. There are categories of speech that constitute hate speech, for instance: incitement to racial hatred, extreme misogyny online. Disinformation based extremism is a thing. 
In other words, political extremists who deliberately spread conspiracy theories, in order either to destabilize public communication or to advance their own cause, hoping to spread the net as widely as possible so that they'll gain new adherents. I think, for me, it's not so much about government censorship and telling people what they can and cannot say. It's about recognizing that, in order to protect the principles of liberal democracy and to make sure that they aren't eroded, there are times when governments have to establish an infrastructure for the accountability of speech. And I personally don't have a problem with that, as long as they are not the only ones doing it and they are not the final arbiters in all cases. So, a classic example, and this is not a new idea by any means: consider the BBC. The BBC is one of the most trusted news organizations in the world. It's had a rocky time recently from the political left and the political right, but surveys still show that it is highly trusted. It's also widely admired around the world for its great factual programming, its entertainment, and particularly, of course, its news. Now it happens to be one of the most well-resourced news organizations in the world, so let's not forget that. But of course the BBC operates at arm's length from the British state. It is a creature of the state, but it isn't controlled by the state. And the way in which the BBC operates is set out in law, in a series of pieces of legislation that govern things like the license fee, editorial independence, the board of governors and so on. So if we take that kind of model, the idea of free speech and good quality information, and enriching public discourse, is not fundamentally incompatible, I would argue, with establishing an infrastructure where we decide collectively what kinds of speech we want to promote and amplify because they are useful for our civic life. 
And what kinds of speech spread hate and disinformation, undermine democracy, undermine people's interests, undermine collective public health, and so on. So that's where I draw the line. I don't think there's a fundamental incompatibility here; I don't think a public infrastructure is the same thing as government censorship. But I recognize that, of course, with the online safety bill going through Parliament right now, these are very, very live issues. Very, very important issues.

- [Audience Member] Can you tell me, how does one determine truth?

- How does one determine truth? Again, I've got a short period of time, and I'm not sure I'd ever be able to answer that even if I had two hours. So there are various definitions, aren't there. The first, a longstanding one, is the idea of the marketplace of ideas. This goes back to the French Revolution in the late 18th century. It's the idea that the best arguments and knowledge will prevail. But the problem with that is that the market for the circulation of information can be distorted. Some people have more power than others to communicate in the public sphere. So the marketplace of ideas can end up simply reflecting whatever sells more. This is a classic problem. So that's not the truth. Another approach is not so much a definition of the truth as a definition of what constitutes misinformation. So misinformation is often defined as when people hold beliefs that contradict the best available authoritative knowledge at the time. In the field of political communication research, and in a lot of political psychology research as well, that's the sort of shorthand definition that people use. There are some strengths to this, because it establishes the principle of authoritative knowledge. 
So it involves a recognition that there are social divisions based on expertise and learning, and it assigns more importance to scientific voices and the voices of people who are learned. That's simultaneously its weakness, however. Because one of the problems, as we've seen, especially with populist critiques of scientific knowledge, is that people are often not comfortable with being told what to do and how to live their lives. So my view is that we probably bump along in the space in between, where we've got authoritative knowledge, which is continually, of course, being refined and redefined as scientific progress evolves. The thing about science and the truth is that the truth isn't there forevermore. Science is a collective endeavor, and people are always knocking previous theorists and scientists off their perches and saying, "No, you got it wrong," and showing that with new evidence. But at the same time, I think that the public generally need to be involved in an argument in the public sphere about what constitutes authoritative knowledge. And the more people we can get involved in those kinds of arguments, the closer we'll get to the truth. So that's my answer. It's that space in between collective public demands and aspirations on the one hand, and authoritative scientific knowledge on the other. And I think that's stood us in pretty good stead, by and large, until we faced the problems that we have recently. It's stood us in good stead for most of the time since the emergence of liberal democracy across Europe and other parts of the world in the 19th century. Of course we've had periods of immense barbarity and brutality, such as wars, that have made a huge difference; these were huge setbacks. But I think that now we are in a greater mess, I would argue, than at any point I've felt during my lifetime, since the early 1970s. So I think that there is something that's changed in the nature of how widespread deception is, and how it's become more of a norm. 
And it's so easy to achieve online, in ways that perhaps were more difficult to achieve before the rise of the internet. And that worries me greatly.

- I'm afraid that's all we have time for this evening. But I'm sure that Professor Chadwick will hover around the podium and answer any more questions from the audience in the room. In the meantime, can we please thank Professor Chadwick for a really fabulous lecture this evening. Thank you. (audience claps)

- Thanks, thanks.