Gresham College Lectures

Social Media, COVID & Ukraine: Fighting Disinformation

April 20, 2022

Organised disinformation about the Covid-19 crisis has degraded public understanding of the crisis and threatened the reputation of credible vaccines and health policy. 

This talk looks at the broad structures and recent history of computational propaganda - the use of algorithms, automation and human curation to distribute misleading information over social media. 

Dr Howard reviews the latest evidence about social media use during our current health crisis, and reports on the very latest themes in Russian information operations about its invasion of Ukraine. He discusses the opportunities for using social media to deepen democracy and 'build back better'.


A lecture by Philip Howard

The transcript and downloadable versions of the lecture are available from the Gresham College website:
https://www.gresham.ac.uk/lectures-and-events/covid-disinformation

Gresham College has been giving free public lectures since 1597. This tradition continues today with all of our five or so public lectures a week being made available for free download from our website. There are currently over 2,000 lectures free to access or download from the website.

Website: http://www.gresham.ac.uk
Twitter: http://twitter.com/GreshamCollege
Facebook: https://www.facebook.com/greshamcollege
Instagram: http://www.instagram.com/greshamcollege


- Now, the research story I want to lead with is actually about a career low, a career low I hit this fall. I've been spending a significant amount of time studying misinformation. And this fall I studied an information operation that blamed COVID on a shipment of bad lobster from Maine that had been flown across and landed in Wuhan just before the health crisis started. And it was either the lobster fishermen or the lobster themselves that carried COVID into the human population. The number of personnel hours that would've gone into crafting this message, finding the images, creating the fake accounts across multiple platforms, disseminating it through major news media was very difficult for us to estimate, but certainly impressive. It's a story that lasted easily a month. You may not have heard it, but then it may not have been targeted at you. It was at this point that I got much less interested in chasing stories of misinformation. I've been doing that for several years, and I don't want to do it anymore.

The research arc that I'm going to tell you about today actually began with a different information operation. In the summer of 2014, I was based in Budapest, and some of you may remember this was the summer that the Malaysia Airlines flight was shot down over Ukraine. And I watched as my Hungarian friends got multiple, equally ridiculous stories. There was the story of Ukrainian democracy advocates who thought Putin was on the plane, flying on a commercial airliner from Amsterdam to Malaysia, and they shot it down. There was a story of U.S. troops secretly based in Ukraine who had shot the plane down. And my favorite was the story of the lost tank from World War II that had been stuck in the great forests of Ukraine and emerged, confused, and accidentally shot the plane down. And it was at this point that I realized the real craft of misinformation is not so much in producing one story, one narrative that you give to your opponents; it's about producing multiple conflicting, sometimes equally ridiculous stories, getting them all out into public life so that your opponents don't know what to respond to, think the whole thing is a joke, and are uncertain about where to go.

Now, what's really surprised us, or surprised me, over the years is to see this communication strategy travel: move from being something that dictators use on their own populations, to being something that authoritarian rulers use on voters in democracies, and then to shift again, to become something that our own politicians use on us when we vote. So what I'm going to talk about today is some of the arc of how this communication strategy has moved and involved several different kinds of platforms. I'll say a little bit about our team at the University of Oxford. I'll say a little bit about the book "Lie Machines" and go into two very contemporary topics for us, misinformation around COVID and misinformation around Russia's invasion of Ukraine. I'll sketch out some of the global trends that are relevant here. And then ultimately my goal is to depress everyone and then build you back up about the prospects for turning things around. I'd like to think I am cynical, but not fatalistic. It's not too late to fix the information environment we are in, and I'll share some of my strategies for those fixes. The Program on Democracy & Technology has a fairly straightforward mission.
Our mission is to use social data science to improve public life, to put the latest techniques in machine learning to work addressing public problems, to try to increase civic engagement, to make our political lives more transparent, more efficient. We only work in countries where we have a student who knows the language, who's from the culture, who's able to understand the local nuances. And at this point we've done studies in 45, 50 different countries. At the moment we're a team of about a dozen; this shifts, of course, because the academic year has an important impact on how many people are in the team. The basic science of what we do started with funding from the National Science Foundation and then later the European Research Council, and the Ford Foundation helps us with our public outreach work.

Let me start with a few definitions; this is critical to understanding the thing that I'm studying, that I've been studying for several years now. I define a lie machine as a complex social and technical mechanism. The two parts are critical to understanding how the machine works. They both work in service of ideology. They take an untrue claim and put it into service for some higher political objective. They can be large and complex. They can be small and agile and nuanced. For the most part, I'm going to speak of lie machines in the sociological sense as being large, multi-person, physically sited operations. In other words, these aren't lone wolf operations; these are organizations with bosses and hiring plans. They pay rent, they have performance bonuses. They have career trajectories and they have fairly large contracts behind their financing. So these are significant organizations.

And I'll also use a variety of terms during my talk. You may know most of these now because it's hard to not open a newspaper or look at the news online and see a story on misinformation. For the most part, the book "Lie Machines" is about computational propaganda, the special form of propaganda that's customized and targeted for you based on some of your data, based on some trail of data that you have provided. It is organizationally directed. It is emotive rather than rational in its appeal, and it provokes action for sponsors. It's well beyond rhetoric, the craft of rhetoric; it's got the mechanism behind it that selects its audience. I speak for the most part of misinformation; different researchers prefer disinformation, some prefer mis. I'm in the camp that prefers misinformation as the overall category of deliberate, purposeful information operations that take falsehoods and deliver them to your inbox. The definition of what we're actually talking about has changed year to year. Sometimes it's evolved over the course of months, and very recently we speak more of mal-information. Mal-information being sometimes true claims where the emphasis is put on the negative. So you might teach the controversy on a particular issue but choose to spend more time saying no even if there's consensus around a yes; putting the emphasis on the no is a form of mal-information. But in all these cases, whether you prefer the term misinformation, disinformation, or mal-information, there is a particularly stable cycle of production, of dissemination, and of marketing. It's a small set of social media platforms that provide the primary conduit for this stuff. And so it becomes fairly straightforward to trace a mechanism or a story as it moves over time.
Now, the art and the science of all this has also evolved. When we started this work in 2014, we spent most of our time on Twitter. The bear might give this account away, but effectively there's a set of accounts that would occasionally tweet in Cyrillic and then move back into English. Perhaps the programming was bad, perhaps they made a mistake, or it was deliberate in some way, but you could make maps of accounts that had clearly been set up at the same moment, a hundred at a time, all advocating for particular kinds of positions, and it was fairly easy to identify the accounts that were fake. Incredible numbers of followers but not following anybody, or incredible numbers of tweets, but never interacting or engaging with anybody else in what we would think of as a human community of thought. These Twitter accounts were also easy to identify because they often came with no pictures. And I mean this as a methodology point, not a political jab: when we would start a fresh scoop of Twitter accounts, of fake Twitter accounts, we often started with Trump's follower list. Again, not a political jab, but he had such a large number of fake accounts following him, and you've seen these if you're on social media: numbers instead of names, no pictures, no identities, no profiles. It's getting harder and harder to obscure yourself these days on social media platforms, and so it's harder and harder to do a scoop of accounts that we're pretty sure are fake.

But it's not the fact that the accounts are fake that is a problem for public life, it's when thousands of these accounts wake up at one point and push a story that is fake, in this case, a story of Muslim women in hijab ruining your beach vacation, or storming passport control, running across the border, events that didn't happen, photos that were taken totally out of context, events that didn't happen but were covered by Russia Today. And so it's when these large families of accounts push stories out at critical moments in public life. That's when we have a problem, I think, with discourse, political discourse.

I said earlier that we're up to 45 distinct country case studies. Now we've covered elections, we've covered referenda. We've done specific studies on language groups. Most of the research is in English. We know much less about what's going on in Arabic and Spanish, except that there's misinformation, significant information operations, there too. And I'd like to say a little bit about what the scope of the work actually looks like, because it's important to understand that this is not ultimately a computational project. It takes our ethnographers, right? The people who spend time in the labs, it takes their time and patience, often, to interpret what we find computationally. So when we want to study junk news, the fake news that circulates in our social media feeds and the algorithms that distribute that content, we take a multi-method approach. One that involves spending time in the labs, the labs of the people who produce this stuff, qualitative research; there's a comparative context, understanding how things are different in Africa or Latin America, or here at home. And the computational work often involves the big scoops of data. That's where we take something we observe in the field and look to see its prevalence on a social media platform, or we travel back the other way, we spot something that's computationally suspicious, and then go back to our field workers for help with interpretation.
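To make those heuristics concrete, here is a minimal sketch in Python of how accounts might be scored on a few of the signals described above: lopsided follower-to-following ratios, heavy posting with almost no engagement, missing profile photos, and handles that are mostly digits. The field names, thresholds and example accounts are illustrative assumptions rather than the team's actual pipeline.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    followers: int   # accounts that follow this one
    following: int   # accounts this one follows
    tweets: int      # lifetime tweet count
    replies: int     # tweets that engage other users
    has_photo: bool  # profile picture present?

def bot_score(account: Account) -> int:
    """Count how many crude 'fake account' heuristics this account trips."""
    score = 0
    # Huge follower counts while following almost nobody.
    if account.followers > 10_000 and account.following < 10:
        score += 1
    # Prolific posting with almost no engagement with other people.
    if account.tweets > 5_000 and account.replies / max(account.tweets, 1) < 0.01:
        score += 1
    # No profile photo.
    if not account.has_photo:
        score += 1
    # A handle that is mostly digits, a number instead of a name.
    if sum(ch.isdigit() for ch in account.handle) > len(account.handle) / 2:
        score += 1
    return score

accounts = [
    Account("jane_doe", 310, 280, 4_200, 900, True),
    Account("84721903571", 52_000, 3, 18_000, 12, False),
]
# Flag accounts that trip most of the heuristics for human review.
suspects = [a.handle for a in accounts if bot_score(a) >= 3]
print(suspects)  # -> ['84721903571']

An account that trips most of these checks would then go to a human analyst for interpretation, in keeping with the multi-method approach described above.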
A good example of how this works is that among the most productive times our research team has spent was several months with a lab in Poland that manages 10,000 fake Facebook accounts. You don't purchase these accounts, you rent them from the lab, and each battery of 10 to 20 legends, they're called, because they're complete with photos and a long profile, is maintained by one person on a regular basis: flower pictures, football scores, you know, a stream of content. You rent them on a week-to-week basis. What industry do you think might pay to rent 10,000 fake Facebook users at a time?- [Lady] Gambling.- Gambling, good, yes, but no, they're amongst the most aggressive advertisers, not gambling. What other industries might pay to rent 10,000 fake Facebook users? Pharmaceuticals. Their primary clients are pharmaceuticals. They'll run and pay for campaigns in which 10,000 fake users will have trouble managing their migraines, and then 10,000 others will have a new medicine they've discovered for managing their migraines. They have interactions in public, and that helps bring home the message that the medicine may be valuable. Their primary clients are not politicians, right? The primary clients are actually in the corporate sphere, but the toolkit is the same, so when an election comes around and a politician needs this infrastructure, it's there, it's available for them.

Computationally, we have tended to work with Twitter and Facebook data because those are the firms that are most accessible. These days, my lab is spending much more time on Instagram, TikTok and Tinder, YouTube, or WhatsApp. The Tinder bot was particularly amusing, and it's actually something that impacted this country. It was a bot that was released in the 2017 election here that would flirt and then talk about Jeremy Corbyn. The only reason we know about it is that the campaign managers who designed it went on to Twitter afterwards and thanked their bot for delivering the few percentage points of difference that they thought helped their MP win in those seats. So this isn't something that took a lot of computational resources to discover. It was something that was announced and acknowledged on Twitter. It's sort of unfortunate that this is how we get some of the best stories of misinformation. But the important lesson here is that there probably isn't a social media platform that can't be involved in political manipulation in some way. When we get to questions, I'm happy to talk about gaming platforms because they are also not immune.

After we published our research on the Brexit debate in this country and the 2016 election in the United States, my team and I were invited to work with the Senate Select Committee on Intelligence in the U.S. to help understand the trends, to help understand what it was exactly that the Russian government had launched during the election there. We were invited to work with some of the data that was provided. Six firms provided data to the Senate Select Committee: Twitter, Facebook, Instagram. Google also provided data, but they sent it in PDF format, which is sort of an engineering joke, because PDFs are not machine readable, you can't do anything computationally with them, you only produce them if you think you're going to print the data. So I'm going to say some things about the other forms of data, but we don't know what was going on on Google, the platform.
I'll say something first about the arc of activity from fake social media accounts; these are fake U.S. voters. And even though 2016 seems like a world away, the arc of these campaigns hasn't really changed. This is a very simple chart of the daily activity of some 3,000 fake U.S. citizens who were clearly managed from St. Petersburg. The firm acknowledged that all of these accounts were managed in the same window of time each day, in the right time zone for St. Petersburg. There's a straightforward rhythm, and I've identified some of the highs and the troughs here. Among the simple lessons from this graph: the accounts are most active on the important days in public life, the day there's a debate, the day that a candidate announces or another candidate withdraws; those are the days on which the accounts wake up and are particularly active. There is an interesting low right after the election when the accounts, it's almost as if the people behind the accounts took a break, right? The election was over, they went on holiday for a little bit, so they took a break. To me, unfortunately, one of the most important parts of this figure is that the bulk of Russian activity with these accounts is after the election, not before. It's almost as if the organization behind the accounts decided they were successful, or got more investment from government, or doubled down, so their activity increased once they were caught. The other final thing I'll point out is that once we identified the accounts, we got the data in early 2018, we were able to trace back their activity. They actually started, woke up, in 2015, and for some of these accounts, we found the Twitter parallels as early as 2012. So they started years before we caught them. They didn't stop once we caught them. And once we found them, we found that they transitioned pretty significantly to a platform we can't study at all.

Now, I mentioned there were five or six different sources of data; I pulled out two here, simply the Facebook ads. So these were the ads that were famously bought in rubles, these are the ads for political action. I think sometimes in the policy conversation, we spend a lot of time talking about political ads. It's not a problem of political ads. It's the organic content behind misinformation that has the most impact. Interestingly, the bulk of activity from this operation seems to have moved on to Instagram. This is a visual medium, it's very difficult to study, they don't share data, and this is where the kids are these days. And in fact, there are kids who are not even on Instagram, they're on other platforms now that share even less data. So we spend much of our time thinking about Twitter and Facebook because those are the platforms that those of us who are a bit older can actually study with data. The action is elsewhere, and we know that, but have no sight of it.

I'll say a little bit about the thematic structure of the content that was pushed at U.S. voters in 2016, because it also reappeared in 2020, we'll see it again in 2024, and it's the same kinds of misinformation we get in this country too. The primary form of misinformation is the stories that poke at race issues: Black Lives Matter, organizing a Black Lives Matter protest to appear on the street corner opposite a Blue Lives Matter protest at the same moment, right? These are purposeful campaign goals.
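Returning briefly to the daily-activity chart described a moment ago, the underlying tally is straightforward to sketch: given post dates harvested from a set of suspect accounts, count the posts per day and mark the days that coincide with known moments in public life. The dates, counts and event labels below are invented for illustration; they are not the committee's data.

from collections import Counter
from datetime import date

# Hypothetical post dates harvested from a set of suspect accounts.
post_dates = [
    date(2016, 9, 26), date(2016, 9, 26), date(2016, 9, 27),
    date(2016, 11, 8), date(2016, 11, 8), date(2016, 11, 8),
    date(2016, 11, 20),
]

# Key moments in public life to compare the rhythm against.
key_events = {
    date(2016, 9, 26): "first presidential debate",
    date(2016, 11, 8): "election day",
}

# Tally activity per day and note where the peaks line up with events.
daily_activity = Counter(post_dates)
for day in sorted(daily_activity):
    label = key_events.get(day, "")
    print(f"{day}  posts={daily_activity[day]:3d}  {label}")

The same kind of tally is what makes a post-election surge in activity easy to see.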
One of the most pernicious of those campaign goals was to encourage African American voters not to vote, because no white politician would ever represent them well. This was a boycott campaign: boycott the vote, stay out of the election process as a way of protesting the vote. This is also something we've got in this country. So voter disenfranchisement is a particular problem in the U.S., but linking it with this other odd form of political expression, protesting the vote by not voting, is one of the things that I think is unique to the Russian misinformation operations. There's also a field of content directed at the far right: guns, abortion, issues that are particularly important to the far right in the United States. And much more recently, remember I said much of the activity from these 3,000 accounts was after the election in 2016, much of that activity moved from being about African Americans to being about Islam, Muslim Americans, with the same message: if you're Muslim, don't vote, no white politician will ever represent you well, stay at home. This is the kind of polarizing content that we've caught now in many countries, in many different elections over the last six years. I go into some of this work in "Lie Machines" and I'm happy to tell you more about the book if you wish later.

But I want to move now to the more contemporary issues of misinformation around COVID and Ukraine. I think this is critical. I think misinformation is critical because to me it has become the one existential threat that prevents action on all the other existential threats. It will be very difficult to do anything about climate, about social inequality, about access to institutions of higher learning until we've tackled the problem of misinformation. Perhaps the most pernicious and deadly form of misinformation so far is that about COVID. You may not have seen this particular ad, but this is my favorite of the examples because it wraps everything together in one package: a cult of personality around Bill Gates, longstanding fear that the government is trying to put chips in your arms, a fairly middle-class movement claiming that vaccines harm children, right? And that's actually a movement that's been around for several decades and is well organized and funded, but wrapped together with purposeful campaigns from foreign governments to discourage people from taking vaccines, fear that 5G is killing bees. There's a range of things that are all wrapped together in a very complex narrative that is also, unfortunately, tied to homeopathy and naturopathy websites. What makes it pernicious now, in our media environment, is the involvement of state-backed media agencies, particularly Russia Today and CGTN, the PRC's primary media outlet, in pushing the stories.

About a year ago some colleagues and I did a study of how many social media accounts follow these different news sources. It was a fairly simple metric. We looked to see how many YouTube subscribers there were, how many Reddit followers there were, how many Facebook members there were of a group. We simply added them up. And we found that the state-backed media, primarily CGTN and RT, on a good week can reach upwards of a billion social media accounts with the content that they push on COVID. Now there's interaction effects. It often takes a Hollywood star, right? Or a U.S. president, a prominent political figure, to really elevate and escalate, to give traction to a story.
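The arithmetic behind that reach estimate is simple enough to sketch: for each outlet, sum its audience counts across platforms. A minimal sketch follows, with invented numbers standing in for the per-platform counts that would be collected from each outlet's public pages; note that this simple sum over-counts accounts that follow the same outlet on more than one platform.

# Hypothetical per-platform audience counts for two outlets (invented numbers).
audiences = {
    "state-backed outlet": {"youtube": 12_000_000, "facebook": 110_000_000, "reddit": 40_000},
    "professional outlet": {"youtube": 9_000_000, "facebook": 55_000_000, "reddit": 250_000},
}

# Potential reach = the simple sum of accounts following the outlet anywhere.
for outlet, counts in audiences.items():
    total = sum(counts.values())
    print(f"{outlet}: potential reach of {total:,} accounts")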
But if the moment is right, if the dynamics are there, they can reach significantly more people than even the most professionalized of news outlets. I included here just the BBC, CNN, the Guardian, and a few others. And I should point out, I separated, we separated out the websites that are clearly doing junk health news. So the fake cure websites, the websites protesting lockdowns, those were separated from the state-backed media agencies. And it's those state-backed media agencies that can have quite a significant global reach.

Another interesting dynamic to this problem of misinformation is not so much the content. Often, if a group of reasonable people sit down and look at a piece of content, we can arrive at some assessment of how credible the source or the ideas or the facts are. What's much harder to understand is the infrastructure that makes it all profitable to produce misinformation. Big tech firms will often invest in community norms and content moderation to help catch the nasty stuff. But those big tech firms still provide the infrastructure for junk news websites. We took a sample, particularly of those that protest public health measures, promote COVID-19 scams, frauds and profiteering. These are money-making ventures, right? These are not about public health. They don't have serious concern for public health, and they disseminate significant amounts of disinformation about the causes and consequences of COVID. Even once the large social media firms are able to take content down, these sites still continue to generate revenue through the trackers that make it possible to follow you from a website or a Facebook group through to the other groups you might see or the other websites you might be interested in, all the way through to the credit card transaction that you'll need to make to buy the baseball hat or the t-shirt that they may be trying to sell you. Google, GoDaddy and Cloudflare in particular provide that infrastructural support. They do all the behavioral analytics, the tracker systems and the cross-platform integration tools that make this a business venture, that make it possible. I'm Canadian, and perhaps one of the most embarrassing things I've found in this research is that a significant volume of the content directed at the U.S. on COVID misinformation is actually hosted in Canada. And I don't have an explanation for this. What we know is that a significant volume of this content, which is clearly directed at U.S. citizens, isn't hosted in the U.S.; it's sited on infrastructure in Canada and Russia. That's part of what makes this a difficult thing to address. It's an extraterritorial, international problem, not something that can be easily tackled by one regulator in one country.

I want to say a little bit now about Ukraine misinformation. It has very similar contours to what I've described so far. I can give you a few highlights of the kinds of issues that are active this week in misinformation. Our typical pool of sources for COVID-related misinformation involves Russia Today, CGTN, Global Times; there's a few other sites there. For much of this week, the misinformation has involved the massacre at the train station, and I have made, I've done what you should never do in giving a talk, and that is have tiny, tiny print. But the reason I've done that here, for identifying the three stories, is that these stories are incredibly nuanced.
You really have to be in a murky sea of misinformation to understand all the nuances. Story number two: Russian overt and covert state-affiliated news outlets have accused Ukraine of seeking British help to whitewash Ukrainian war crimes, citing an alleged Russian intelligence source claiming that Ukraine had alerted HMG that it would conduct war crimes on Russian POWs. Ukraine's not conducting war crimes on Russian POWs, but it's an incredibly nuanced story. And week to week, any number of researchers can point out the primary themes. For the most part now, because of the photos we've seen from the train station, the information operation is about: it wasn't us, it's not that train station, it's somebody else, Ukrainians did it, it was a lost tank from World War II. There's a huge range of options that are possible with the modern misinformation operation. Perhaps one of the things that's changed since the story of the Malaysia Airlines flight is that national diplomats now spend more time backing up what their state-backed media agencies put out. So the media ecology used to just involve followers who would sign up to the BBC's newsfeed or CGTN's newsfeed or RT's newsfeed. But now the infrastructure of diplomats, the people who do the cultural affairs, the people who do political analysis, they can be involved and tasked with promoting the messages. They get the same kinds of directives from central government that go off to the media agencies. It creates quite a significant ecology and an extended life for any one of these particular kinds of news stories.

Let me offer now some global context. In 2017, we began an inventory of countries around the world with an organized information operation of some kind at work. Again, these are formal organizations in the sociological sense: they have office space and telephones, they pay rent; these are not lone wolf operations, these are formal organizations. In 2017, there were just under 30 countries where we found active, live information operations. Our most recent inventory, in 2020, found just over 80 countries with live, running information operations. We've actually decided not to do the study again, because pretty much every country we look at, we'll find some kind of running information operation. Interestingly, a very contemporary trend, just from the last two years, is a real rise in the number of these operations that are conducted by straightforward PR firms. These are comms firms in Toronto, in London and New York. These are not social media specialized firms. These are not hard-to-find, under-the-radar labs in Poland or in Brazil; they're mainstream comms firms. At this point over half of the operations that we're able to track year to year are produced professionally as part of a regular comms offering. One of the things about the political economy of all this is that the primary clients now for these kinds of operations are not so much authoritarian governments, but our own politicians, who will pay for these kinds of operations in that sensitive two-week period before people vote.

We also ran last summer a very large survey, a global survey involving 154,000 participants in 142 countries, asking people about the risks they fear, what they fear about using the internet. And there were a range of questions: will AI take our jobs, fears of sexual harassment online, fears of credit card fraud, fears of misinformation. And globally, fear of being misled by something online is the number one fear.
Disinformation is the single most important fear of internet users and social media users. More than half of regular users are concerned about disinformation. Almost three quarters of internet users are worried about a mixture of threats, including online disinformation, fraud and harassment. Interestingly, these numbers plummet in Southeast Asia. Concerns about disinformation vary by region: highest in North America and Europe, lowest in East and South Asia. Concern about online harassment is the primary concern for young women in Latin America. In China, the concern about disinformation was somewhere around 10%. It's quite noticeable how different cultures respond to or perceive the threat, the risks of being misled online.

So I can tell you particular stories of dramatic effects of misinformation that seem silly once we talk about them or unpack them, but actually have somewhat measurable impacts. I can say something about global trends, how pervasive the problem is, how far and fast things circulate. But I want to end with some strategies and some ideas for fixing things. As I said at the outset, it's not too late. We can address part of the problem. It's certainly going to take the cooperation of the firms and some light-touch government regulation to really fix, really address, the problem of misinformation. There are a number of policy ideas out there for how you can address the problem of targeted misinformation. Many of the algorithms that deliver the content to us have been trained up on data that's very poor quality, that doesn't actually represent the public interest, that doesn't actually represent our own individuated interests. So building better algorithms, being able to audit them in significant ways, could be one of the paths to cleaning up our information environment.

This second idea is something I borrowed simply from the Blood Diamonds campaign. You may remember the Kimberley Process; it gave us this idea that if a consumer, a diamond consumer, could find out where their diamonds came from, if their diamonds actually came from the nastiest pits of Africa, where slaveholders had pulled the gems from the rocks, if consumers actually knew that story, they would not buy those diamonds. I think it should be possible for us to look at our device and ask it to tell us who is using our data, who the ultimate beneficiary of our data is. We can't do that right now. We can't ask our smart TV, our Samsung smart TV, who Samsung has sold our data to; that trail is not available to us. But reporting the beneficiary of our data probably would improve the transparency of the system, right? Figuring out where our data sits is probably going to be the first step to making that system more transparent. In fact, it has to be the first step if we ever want to get close to the second possibility, of donating data. I would happily donate my data to medical researchers, especially COVID-related research. I'm sure many of us would do the same for medical data that was relevant to public life. We don't have that capacity now to donate our data. It should also be possible, with the right infrastructure, to be able to lend our data and some of our technologies to the civil society groups we want to support. One of the few things we can learn from the U.S. and its regulation on this is the idea of a nonprofit rule. So one of the few rules they have governing data mining is that you can't profit by selling voter registration files.
You can profit by selling other things that might be merged with the voter registration files, but you really are not supposed to profit, and there are mechanisms for finding firms that do profit this way. I think at this point it would be reasonable to extend that to a range of variables, data that's collected by the census perhaps, a range of other variables that are of public value that firms could not profit from. They could still profit by adding and aggregating and disaggregating as they do, but creating a pool of data that actually is for the public interest would be a basic regulatory change that I think would be very valuable. The final step in this is a growing consensus, at least amongst academics, that it should be possible to do rigorous algorithmic audits if the firms share and provide access to the data. The question may be, though, who would do those audits, and maybe more existentially, do we have a right to the truth? I think this might be an unsettled question. For me, I would want the answer to be yes, I do think we have a right to the truth. But at this point, misinformation is the means through which racial, gender and religious discrimination, extremism, and several kinds of violence are stimulated, or condoned and ignored. This is the infrastructure that prevents us from speedy action on critical issues. I think a healthy information environment is a necessary condition for voting, for getting our markets to work, for maintaining and checking that our other human rights are intact, and for overcoming some of the toughest policy challenges we face. So even politics aside, if you want to clean up the markets, if you want to have accurate consumer information out there, the same policy solutions can solve multiple kinds of problems.

Now, one of the most exciting initiatives out there is this possibility of creating an international panel on the information environment, an organization that would be led by the science, spend most of its time tracking misinformation trends, working with data from the firms, or capturing that data on its own. The model for it is similar to what the IPCC, the Intergovernmental Panel on Climate Change, has offered for climate. The same kind of thing might be useful for the information environment. The IPCC model took 30 years, and we don't have 30 years, but it took 30 years to arrive at a consensus-building mechanism that surfaces points of information from the research and delivers them to policy makers in fairly succinct, accessible language. The policy makers don't always listen, right? The action isn't fast enough, but the mechanism for arriving at consensus, for reviewing literature and distilling ideas, is there. And that is something we can learn from. So the broad aims of an IPIE, an International Panel on the Information Environment, would be to advance the science of misinformation, to track some of the global trends I've identified here, to involve citizens, right? Because this can't be a government-led process. I don't think the world would want a misinformation initiative coordinated by governments. This is about engaging multiple kinds of stakeholders, 'cause it also will be very difficult to make any progress without collaboration from the technology platforms themselves. Building bridges between industry and lawmakers is also critical.
One of the things I've noticed in the different kinds of consulting I've done is how rare it is for government policy makers to actually work with industry or talk to them in critical moments, even though there's something of a revolving door, right? People will cycle between industry and government. The active, live conversations in moments of crisis are fairly rare. So this kind of IPIE could have some specific objectives: coordinating the research, promoting information access, and sharing best practices. Perhaps during the Q&A, we can talk more about some of the solutions out there. And one of the interesting things about our moment is that we don't know which solutions will work best. So we need to create the mechanism for understanding which policy intervention works and which one has negative consequences for political speech. We have no mechanism right now for evaluating those trade-offs.

In conclusion, I'll say a little bit about where I'd like to take our own misinformation research in the years ahead. We're going to keep going on COVID misinfo' because it's pretty clear that's going to be a long battle. Working in languages other than English is something our lab is going to spend most of our time on in the years ahead. We had pivoted to move away from Russian-backed misinformation to working mostly on PRC-backed misinformation; at this point we're going back to doing them both. I think one of the things that we know the least about is the impact of misinformation on children. We know that the kids are not on Facebook or Twitter, they're on platforms that we don't have access to, platforms that we know very little about. We know that children develop their political identities on social media in ways that very few of us did, but we know nothing about that particular mechanism: developing a political identity in an environment of misinformation promulgated on a social media platform. It's possible the next generation will survive. But I think at least doing the basic research to understand the mechanism and improve the platforms will be critical for next year's voters. So this is for the next generation of voters.

One of the parts of doing research that I'm most proud of and most pleased with is the work we do with civil society groups to try to encourage them to engage on misinformation problems and do things that protect themselves. So we have a series of civil society training workshops that help the groups working in particularly hot areas figure out how to track an information operation, how to work with platforms to stifle it before it goes too far. We have a dedicated program for women-headed civil society groups, who get particularly nasty forms of racist or sexist content. I hope at some point, if you're still interested in this material, you'll visit our website at the Program on Democracy & Technology. We have three newsletters that are monthly: one on COVID misinfo', one on PRC-backed information operations, and we've just started one up on the Ukraine crisis. So keeping policy makers alert, keeping civil society groups on guard, is one of the critical public service projects that we have at Oxford, and I myself am trying to throw some weight behind this campaign for an International Panel on the Information Environment in the hope that we can be a little more coordinated in what we ask of the firms.
It's tough to end in a positive way having delivered so much content about our prospects, but, I'm sorry Jeannie, your answer was correct, Kevin shouted his incorrect answer over yours, so he gets the points. This is not the public sphere I want to live in, right? This is not the polity I want to be a part of. And I don't think we have to have this form of political engagement in our social lives. We are in some ways, as a research team, working on fumes, desperate to find examples of ways to counter the toughest of information operations, but we're also operating on faith, right? That it's worth doing these things and that it's worth the long-term investment to try to build the democracies, the political institutions that we want to live in. Thank you very much for your time and for attending. (audience clapping)

- So the first question is one about mechanism, really. Is there any evidence of defunct, unused old accounts being hacked and used by these misinformation pushers, or do they only start new accounts?- That is a fabulous question. Sometimes old accounts can get taken over if somebody's not been diligent with their passwords and has something that's easy to guess. In an important way, there's actually styles of misinformation. So the Russian government's style of an information operation is to have these accounts that have been around for a long time, they're full-on personalities, they've got a long trail of content, and suddenly they start working on a political issue. The PRC's style of misinformation is to simply buy 10,000 accounts, and then they show up and they start running. It's a different approach, I don't think the PRC style works as well, but it's a great question.

- What is the role of humor in misinformation? A joke seems to be a mechanism for increasing reach.- That is a fabulous question because the instinct is absolutely right. Humor is a way of uniting people, 'cause everybody likes a good joke and likes to make fun of public figures. Barbs can often be appreciated by the far left and the far right and the center. Humor is critical because it's often irony that hooks somebody with a sensational headline. So what makes what we call clickbait, right? It's content that has all capital letters in the title, potty-mouth words with no asterisks, or irony in the title, that makes a sensational story travel far on social media. So it's absolutely critical. I should say humor is also very important to social protest. So what's very tough in studying the rhetoric of all this is knowing which components of a humorous image or a turn of phrase are trying to activate which kinds of users.

- [Gentleman] You talked about different views and fears on misinformation in different parts of the world; Europe and America tend to be more skeptical than East and South Asia. But when you talk about East and South Asia, are there differences between, say, South Korea and Japan versus China?- Very interesting.- [Gentleman] A bit of a lumping together of East and South Asia.- Certainly there are definitely national differences. I don't have those three particular countries at my fingertips as trends. I would say, I would hypothesize, that South Koreans are much more comfortable with open debate and discussion and argumentation than the Chinese users of social media platforms that have basically been designed as mechanisms of censorship. So almost every social media platform we have here, they also use within the PRC, but it's designed for tracking, it's not designed for engagement.
One of the interesting things about China is that social media discussion, open political discussion, is encouraged or tolerated on a few issues, notably environmental pollution and government corruption. So there are a few things that the government actually wants to see some deliberation about, because it likes when there's progress on those issues. Most of the other issues that you might discuss in Japan or Korea are just not open for discussion; I think it's censored before we can ever even start analyzing it.

- [Gentleman] Hi, first and foremost, thank you. That was fascinating. Given your point at the slide where you mentioned that pharmaceuticals tend to be the largest purchaser of such accounts, and obviously this being a COVID talk, the question about vaccines has to come up. I guess what I'm asking is, as lay people, what sort of faith can we have? I say this as someone who has taken two doses of the vaccine and is about to take a booster, but given that it's so lucrative, there was such uproar during the height of the pandemic. What faith can we have in a version of the truth from any of the major pharmaceutical companies? Obviously, I believe Moderna was linked with Oxford, we have, forgive me, AstraZeneca and Pfizer, et cetera. Where does the average person stand? And how would you go about trying to find an objective version of the truth, given that it is such a critical topic, pertinent to so many, and one in which we can't really afford to get wrong?- Certainly true. Among the best sources of information on medical questions are the nation's doctors. And I think there were important points of consensus at different moments, both in the U.S. and here, where the nation's doctors were making recommendations, and on occasion our policy makers or elected officials derailed conversations, made different kinds of pronouncements, made it difficult to understand what were the priority moves for public health. As always in a society, it is incumbent upon us to do a little bit of our own research, so it's important to listen to the elected officials and then check with what the professionals are saying, have a look at what your professional news outlets are advocating for, and check in with your own doctor. And I think that rough outline of activity, verifying what you get from the news media with what you might see as direct pronouncements from the NHS or the CDC, can help you make the right choices. I hope nothing I said here would ever lead someone to conclude that you'd want to cut off people who are arguing in other directions. So it is ultimately your choice, I think, on whether to get vaccinated, but the medical consensus is that you should get vaccinated. And so that for me is an important source of evidence. The same kind of thing can be said about climate change. The consensus is that the climate is changing and that it's human-induced, and that if the temperature goes up this amount, this amount of London will be underwater, right? These things are measurable and there's strong consensus about it. So finding the best expressions of that kind of consensus is the challenge we have in being smart citizens.

- [Gentleman] I've been in the media since the age of 17, I am 64, and I used to work for the BBC under John Tusa. Under John Tusa, nothing could be published unless there were three independent sources used to prove that what you were saying was actually factually true. Unfortunately, the BBC moved away from those standards trying to compete with private media.
Would you say that the media themselves and politicians created fertile ground for disinformation, with plenty of examples of fabrication, the 45-minute weapons of mass destruction claim, for example? Campaigns fought for one reason when in actual fact there was another, 20 years in Afghanistan after they had already killed Osama Bin Laden. Would you say that the mass media and the politicians themselves, including those who are trying to fight disinformation, are very much responsible for having created the environment that makes disinformation possible?- I think they certainly have a role. I would say that more investment in civics education, more investment in public libraries, and more investment in shared professional news outlets, those are the good long-term investments. The proximate impact of misinformation is when the social media platforms deliver targeted messaging to a voter in the two or three days before they vote. Now, the political scientists tell us that most people most of the time don't make up their minds until two or three days before they vote, and that's a critical window. Politicians rarely vote for more fines on politicians. Part of the problem of a parliamentary democracy is that it's hard to change those things, to clean up that system. I would say that one of the things that seems to inoculate a country a little bit against misinformation is the presence of a public broadcaster. So it may be that the BBC standards have changed in ways you don't like, but the CBC in Canada and the BBC and PBS, and Israel has an excellent public broadcaster as well. Those organizations, it's not so much that they produce high quality content that everybody reads, it's that they create a culture of professional journalism that leaks into the other news outlets. The countries that don't have a culture of professional news journalism through public investment don't have those protections. And I don't mean that the government owns the media, 'cause in all of the good examples, editorial control is not with a political appointee, not direct control. So the BBC may have its challenges, but the public investment in the BBC is critical to keeping the country together.

- [Gentleman] You touched on issues like homeopathy, where pseudo-medicine, medical organizations push their agendas. Is this a worldwide phenomenon where you have pseudosciences like homeopathy, chiropractic, or other nonsense, like (indistinct), trying to push their agendas using social media?- We noticed after the Brexit referendum here and the 2016 election that a family of the Russian accounts we had identified moved from talking about politics to homeopathy. And at the time we thought it was an odd shift in messaging. Now we're pretty sure that it's about connecting the audience for that kind of content with things they can buy. My answer would be that this has more to do with the profiteering, making money, by creating an audience that will look for several different kinds of things related to conspiracies. And it's also important to remember that the audience for misinformation is not the entire public. The audience for misinformation is usually the far right, white supremacists; it's a fairly narrow demographic. Most people most of the time do not find themselves locked into these cycles of websites that have different kinds of conspiratorial content. So the audience, maybe this is the hopeful way to end, the audience for this stuff is actually fairly bounded.
It's just that in an election, it only takes a few percentage points to make a difference in an outcome.- Professor Howard, I think we're going to have to draw the lecture and the Q&A to a close there. I wanted to thank you very much for your lecture, and just to thank our audience for your participation as well. I would like to point out that the next lecture in the series on media, trust and society will be 'What Can We Learn from Fakes?' with Professor Patricia Kingori, and that is on Thursday, the 28th of April at 6:00 pm. So please do join us for that one. And please join me in thanking Professor Howard once more. (audience clapping)