Gresham College Lectures

How To Fight Fake News

December 16, 2022 Gresham College

Fake news, influence operations, disinformation, misinformation and conspiracy theories are different flavours of falsehoods that have one thing in common: they put citizens in the front line of countering threats to democracies, national security, and community safety.

This talk will explore governments’ and platforms’ efforts to counter falsehood, and what citizens can do to defend themselves, their loved ones, and ultimately their nations from influence operations.


A lecture by Dr Victoria Baines

The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/fight-fake

Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/

Website:  https://gresham.ac.uk
Twitter:  https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege


- So, this is the story of how I was entangled in a conspiracy theory, how my words were hijacked to exploit people's fears, and how I unwittingly became a supporting voice of the belief in a reptilian elite, beings that have taken on human form to control society, and whose members are said to have included our late Queen, leading politicians, company executives and bankers, and Justin Bieber. It is a story that bears testament to the vulnerability of our words and thoughts to manipulation, and to the importance of understanding the context of statements of fact. On the 4th of March, 2020, I was asked to appear on "Newsnight," and the eagle-eyed of you will notice that they've already committed a fake news sin by spelling my name wrong on this segment. "Newsnight" is the BBC's late night current affairs program, and the topic for discussion was the then very novel coronavirus, and it was to be my last in-person TV appearance for some months. I raced up from a talk on cybersecurity at a school in Bournemouth, and I found myself backstage at Broadcasting House with a virus modeler from a London university, and someone who worked for a fact checking organization. My job was to explain what social media companies and search engines were doing to combat misinformation on COVID, and when I was asked by the presenter how easy it was for platforms like Facebook to take down this sort of material, this is what I said. From what I understand and from what we've seen reporting, and taking down the material when it's been reported, is just one of the measures that companies like Facebook, and I would say also Google, with Google Search and with YouTube, that they're taking at the moment. They're working with the World Health Organization and with the NHS, so they have a hotline, if you like, from those official sources, but they're also promoting those official sources. So people logging into Facebook today and using Google Search today will have noticed that there are SOS alerts at the top of their news feeds and on their search pages when they search for coronavirus. Then, behind the scenes, there will also be another approach to try and investigate, identify, and remove some possible sources of coordinated disinformation, which is something that President Putin, of course, has alluded to today. It's not impossible that hostile actors will be looking to sow disinformation deliberately. Now, my intention was to reassure viewers that tech companies were taking steps to promote trustworthy information and to counter the misuse of their platforms. Barring a fairly memorable trip in a black London cab through the New Forest, in the pitch dark, in the middle of the night, this evening faded from my memory, overtaken, like yours, by a global public health emergency and a national lockdown, until the 20th of August that year, when I received a message from my brother that a friend of his had spotted me in a documentary called "Plandemic: Indoctornation," and what I had said on "Newsnight" five months earlier had been repurposed to support the theory that tech companies and governments were conspiring to bury the truth about COVID-19, and that Microsoft founder Bill Gates was attempting to microchip the world's population, and this is the clip they used. They're working with the World Health Organization and with the NHS, so they have a hotline, if you like, from those official sources, but they're also promoting those official sources. 
Now, I did actually say those words, didn't I? The sentences themselves weren't fake, but they had been taken out of context and reproduced as evidence of something that wasn't true and that I don't believe. And I found this somewhat ironic given that the interview was itself about misinformation. And it set me thinking: how on earth, despite my very best intentions, had I become a purveyor of false information, and was there anything that I could have done differently? False information of various kinds is popularly known as fake news, but I and other researchers would argue that it's actually quite an unhelpful term, as it bundles together several different types of content and behavior that have different aims, tactics, and impacts, and may therefore require different countermeasures. Firstly, there is disinformation, and this is the dissemination of deliberately false information with the intention of influencing the policies and the opinions of those who receive it. Disinformation is sometimes referred to as information warfare or influence operations. Misinformation is wrong or misleading information that is not always strictly intended to mislead. It may be uncoordinated or spread organically, and it's more likely to be shared in good faith or out of fear. It is the massively distributed, social media era version of gossip or rumor. Conspiracy theories, such as those about COVID's origins, vaccines, 5G mobile phone technology, and the deep state, often fall into this category. And there are related concepts of propaganda, whether that's produced and distributed by another state or our own, and low quality junk news content that may also merit consideration as fake news. There's the phenomenon of clickbait, salacious online content that encourages users to follow often dubious web links, and it relies on our appetite for junk news and gossip. And of course, for one former president of the US, fake news has become shorthand for representations in the mainstream media that are simply unfavorable. Against the backdrop of political populism, concepts such as alternative facts have gained legitimacy, and there is justifiable concern that we are now living in a post-truth society, where objective fact is harder to identify and allegedly less important. As well as presenting information that is simply untrue, the fake news ecosystem abounds in the practices of manipulation and distortion. My comments on "Newsnight" are an example of precisely this, because for most viewers of a quality news program, public health organizations and tech companies working together can be a source of reassurance, but for those choosing to watch a movie that promotes conspiracy theories, it is evidence of their worst fears, of collusion between governments and technology providers to suppress the truth. Okay, so let's take a closer look at disinformation. The Russian word, and it's no coincidence that it's a Russian word, dezinformatsiya, denotes the established practice of injecting false information into foreign intelligence holdings. And over time, that has evolved into the injection of false information into public discourse, including social media. 
In November 2016, Facebook's founder Mark Zuckerberg dismissed as "a pretty crazy idea" the notion that fake news on social media had influenced the outcome of the US elections that year, but a year later, the company revealed that it had found evidence of fake accounts using the platform to share and amplify data that had been stolen from the Democratic National Committee's email accounts. A few months after that, the company's chief security officer detailed how 470 fake accounts had spent around $100,000 on 3,000 ads, and concluded that these accounts and pages were affiliated with one another, and likely operated out of Russia. As you might imagine, a fair amount of Russian disinformation at the time focused on discrediting the Democrat candidate, Hillary Clinton, but Facebook's analysis found that the vast majority of the ads on its platform focused instead on sowing discord in communities around subjects such as LGBTQ rights, race and ethnicity, immigration, and gun ownership. Researchers at the University of Washington traced how Russian troll accounts on Twitter seeded disinformation on the Black Lives Matter movement in an attempt to create greater division in American communities. And on this side of the Atlantic, think tank Demos found that UK-related tweets from accounts affiliated with Russia's Internet Research Agency around the time of the 2017 terrorist attacks were notable for their anti-Islamic sentiment. So far, so Cold War spy thriller, but perhaps lesser known is the fact that Western governments also engage in foreign influence operations on social media, and recent research by Graphika and the Stanford Internet Observatory found an interconnected web of accounts on Twitter, Facebook, Instagram, and five other social media platforms that use deceptive tactics to promote pro-Western narratives in the Middle East and Central Asia. One fake account in the Central Asia campaign used a doctored image of the Puerto Rican actor Valeria Menendez, shown here. On further investigation, Meta, now the parent company of Facebook and Instagram, found links to individuals associated with the US military. Now, for some governments, the solution is for tech companies to just remove everything that isn't true. In 2019, Singapore passed the Protection from Online Falsehoods and Manipulation Act, which prohibits, and I'm going to take a deep breath here, the communication of a false statement of fact that is likely to be prejudicial to the security of Singapore, to public health, public safety, public tranquility, or public finances, or to the friendly relations of Singapore with other countries, or that is likely to influence the outcome of an election or a referendum, to incite feelings of enmity, hatred, or ill will, or to diminish public confidence in the performance of any duty or function of the government or an organ of state. But is removal of false or fake content always the best option? One concern is that by putting the onus on platforms to remove such a broad sweep of content, people will be less exposed to untrue narratives, but they'll also therefore be less able to distinguish fact from fiction, or to challenge falsehood when they see it. Their critical thinking may in fact deteriorate if it is not used. Alternative responses are for platforms to down-rank low quality or junk news sources, effectively making them less visible by pushing them further down users' news feeds, and some large social media platforms already do this. 
They also apply labels to paid-for advertising, especially in relation to political campaigns, and they can apply warning labels to posts that they have found to be inaccurate or misleading. Focus on platforms' removal of content at large scale also presupposes that those in power post objective facts online and do not engage in influence operations, which, unfortunately, is not always the case. Some governments use disinformation tactics against their own citizens. In my first lecture in this series, "Who Owns the Internet?", we discussed how Russia, China, Iran, and North Korea greatly restrict access to the Internet and promote pro-government content online and in the mainstream media, and there have, of course, been numerous instances, since the invasion of Ukraine, of Russian state media misrepresenting their military's performance. But perhaps the most amusing Russian disinformation tactic has been the tendency to use video game footage in coverage of military operations. In 2017, the Russian Ministry of Defense published a scene from the game "AC-130 Gunship Simulator: Special Ops Squadron" as, quote, "irrefutable proof that the US provides cover for ISIS combat troops," and in 2018, Russian state TV used footage from the game "Arma 3" in a news item about a Russian soldier killed in Syria. When a source that we trust starts to manipulate the narrative, whether that's a public figure, a political party, or a media outlet, this can be harder to spot and may require specialist assistance, so online platforms work with fact checking organizations, of which there are many around the world, working in different languages. In the UK, political debates and campaigns are now routinely fact checked, with services such as Full Fact here challenging dishonest practice and contacting the sources of inaccurate claims to seek correction of the record, and such is the perceived authority and trustworthiness of fact checking services that during the 2019 general election, the Conservative Party changed its Twitter account name to factcheckUK, retaining the blue verified tick that it had previously obtained, and with more than a passing similarity to both Full Fact and Channel 4's FactCheck service. Although it was denounced by these services and roundly criticized by most media outlets, there was no official sanction for this imposter tactic. But there are technical measures that can and should be deployed by platforms in these cases. For example, a name change for a social media account for a political party or a politician is highly unusual and it should be flagged as suspicious, not least because it may indicate that the account has been hacked. We've seen in the past few years how President Biden's Twitter account was hacked to serve cryptocurrency scams, so we know that these accounts are of interest to cyber criminals. But of course, the most striking example of a public figure playing fast and loose with the truth is Donald Trump. By branding quality news outlets fake news and by claiming to be both a speaker and an arbiter of the truth, even naming his proprietary app Truth Social, Trump consistently and dishonestly challenges fact with so-called alternative facts. He's also been known to perpetuate conspiracy theories in order to garner support from their believers, in particular the viral theory known as QAnon, whose central belief is that Trump is battling a cabal of Satan-worshiping pedophiles. He first used this "Game of Thrones" meme while he was still in office, and it's actually about threatened sanctions against Iran, but he recently re-shared a version of it on Truth Social, that's the ReTruthed that you might be able to see in the top right-hand corner there, a version that refers to QAnon's moment of reckoning, the storm, and an abbreviated QAnon slogan at the bottom there. With Trump's explicit and tacit encouragement of the movement to stop the steal of what he and his supporters considered to be a rigged 2020 election, QAnon believers were prominent among those who attacked the Capitol Building in Washington, D.C. on the 6th of January, 2021. Now, personal attacks have long been part of political campaigning in the US, with paid-for attack ads frequently appearing on TV. Their transfer to social media has given rise to large scale use of dishonest tactics by seemingly legitimate political and media organizations. 
Ahead of the midterms in 2018, right-wing media organizations pushed false narratives about the Hungarian-born billionaire George Soros, a leading Democrat donor, and these included accusations that he had funded the migrant caravan then heading towards the US, that he had been a Nazi SS officer, and antisemitic conspiracy theories apparently endorsed by celebrities and the family of the then-president. Around the same time, automated bot accounts on Twitter posing as Democrat supporters began to discourage people from voting in the elections, and on this occasion, Twitter removed an estimated 10,000 accounts in one go. How exactly were they able to do that? Well, again, if you joined me for my first lecture, you'll recall that each device on the Internet has a numeric Internet Protocol, or IP, address. Bot activity, robot activity, can often be identified by an unusually large number of accounts being operated from a single IP address, that is, they're being coordinated by the same device, and in the cyber crime world, this is known as command and control. So platforms can close down the linked accounts and block the establishment of new accounts from that address, and inevitably, and unfortunately, this means that some criminals do update their tactics by changing devices frequently or masking those IP addresses using VPNs and anonymizers. Another tactic is to encourage real account holders to copy and paste a message that seems authentic, complete with grammatical errors, but which, on closer inspection, is anything but. And I'm very grateful to my colleague Andy Phippen for the evidence you see on screen. So-called keyboard armies are often paid to do this, and the largest US platforms use tools that identify and remove the same content being shared by large numbers of users over and over again. But in both these cases, whether they are bots or real people acting just like them, tech companies look for inauthentic behavior, as well as seeking to verify the accuracy of the content that's shared. Now, in my fields of cybersecurity and online safety, successful security measures focus on people, process, and technology; they're the cornerstones, the holy trinity, if you like, of how we combat cyber attacks and cyber crimes, and I would argue we can apply the same tripartite approach to all the flavors of fake news that we've just explored. Society certainly requires regulations and rules to help distinguish what is prohibited from what is permissible content, along with procedures, the processes, for suppressing the former, and technical tools are certainly necessary to identify bad actors, bots, inauthentic content, and manipulated videos on a massive scale. But however stringent these rules, and however sophisticated the tools, strengthening humans will always be just as important, and I would argue that over-reliance on any one of these three corners risks negative impacts on the others. By way of example, for thousands of years, societies have turned to satire and to parody to criticize public figures. Mocking politicians reminds us, and them, that they are only human, and that, in democracies at least, they're in power only because we have elected them, so satire has an important social function, and it's not always comfortable reading, listening, or viewing. And one of the most famous examples of satire that disturbs and disconcerts is "A Modest Proposal," by Jonathan Swift, the author of "Gulliver's Travels." 
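To make the coordination signals described a moment ago a little more concrete, here is a minimal sketch in Python of the two checks mentioned in passing: an unusually large number of accounts operating from one IP address, and identical text being posted by many different accounts. The thresholds, record shapes, and function names are illustrative assumptions for this handout, not any platform's actual detection rules, and real systems combine many more signals precisely because, as noted above, bad actors rotate devices and hide behind VPNs.

```python
from collections import defaultdict

# Illustrative sketch only: flag two simple signals of coordinated inauthentic
# behaviour discussed in the lecture. Thresholds and record shapes are invented.

def accounts_per_ip(events, threshold=25):
    """events: iterable of (account_id, ip_address) pairs. Return IP addresses
    used by an unusually large number of distinct accounts, the pattern the
    lecture describes as command and control."""
    ips = defaultdict(set)
    for account_id, ip_address in events:
        ips[ip_address].add(account_id)
    return {ip: accts for ip, accts in ips.items() if len(accts) >= threshold}

def copy_paste_campaigns(posts, threshold=25):
    """posts: iterable of (account_id, text) pairs. Return texts posted
    verbatim by many different accounts, the 'keyboard army' pattern."""
    texts = defaultdict(set)
    for account_id, text in posts:
        texts[text.strip().lower()].add(account_id)
    return {text: accts for text, accts in texts.items() if len(accts) >= threshold}

# Either signal only queues activity for human review; neither proves bad intent.
```

In this sketch, as in the behaviour the lecture describes, the output is a shortlist for investigators rather than an automatic takedown, which is why the approach is framed as looking for inauthentic behaviour alongside, not instead of, checking the accuracy of content.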
Published in 1729, "A Modest Proposal" takes the form of a political essay to make the very straight-faced suggestion that the impoverished Irish sell their own children to be eaten by the rich. I can see some nervous-looking faces over there. It is a work of considerable grotesquerie, but also considerable rhetorical elegance, and it's often taken to be a critique not only of contemporary economic policy, but also of the dehumanization of the Irish people. As a very bright Stanford computer science student pointed out to me during one of my lectures, it would likely fall foul today of some governments' expectations for suppressing content that sows community discord or diminishes public confidence in those in power, think of that Singaporean legislation. And in fact, satire is of concern to regulators in a number of countries, including the UK and the US. In a 2018 report, the House of Commons Digital, Culture, Media and Sport Committee of members of Parliament included satire and parody in its definition of fake news because they may, quote, "unintentionally fool readers." And in the US, satirical news website The Onion recently submitted a document to the Supreme Court in support of a man who had been charged with disrupting a public service because he had set up a spoof police page on Facebook. Now, I heartily recommend you read this brief in its entirety, it is beautifully written, it is itself satirical and genuinely very, very funny. It also invokes the spirits of Swift and the Roman poet Horace, who was himself a satirist. At the same time, it makes the deadly serious argument that parody functions by tricking people into thinking that it is real, that because parody mimics the real thing, it has the unique capacity to critique the real thing, that a reasonable reader does not need a disclaimer to know that parody is parody, and that it should be obvious that parodists cannot be prosecuted for telling a joke with a straight face. According to The Onion, the US Appeals Court's decision to back the police suggests that parodists are in the clear only if they pop the balloon in advance by warning their audience that their parody is not true. But some forms of comedy don't work unless the comedian is able to tell the joke with a straight face. Marking satire and parody explicitly as such, which deadens the effect and misses the point, is something we are already seeing more of. As of last week, Twitter, which you may have noticed is under new ownership, now requires parody accounts to incorporate the word "parody" or "fake" in their account names and bios. This inclusion of satire and parody in definitions of fake news assumes that members of the public are unable to distinguish between fact and fiction, between authoritative news coverage on the one hand and parody on the other. Quite apart from insulting the intelligence of citizens, the assessment also assumes that humans can't spot context or filter out junk content, but machines can, when in fact, the opposite is more likely to be the case. In the UK, there is a tradition of quality journalism and a parallel tradition of journalism whose content readers have become accustomed to taking with a pinch of salt. We appreciate headlines like this one, alleging a comedian ate someone's pet, not as unadulterated records of objective fact, but as content that has been embellished to elicit sensation and heighten our engagement. 
Phrases such as "a source close to the government" or "sources close to the Palace" are tantamount to an admission that a story has been fabricated, or is at least based largely on rumor. Readers, I would argue, are in on the joke. And that is not to say that no one believes anything they read in a tabloid newspaper, but that readers are able to use their judgment to separate serious reporting from junk news. Now, in July of this year, amendments were added to the draft UK Online Safety Bill, and these impose a duty on the largest online platforms to safeguard all journalistic content shared, including news publishers' journalistic content. And what this means in practice is that a social media post that makes unfounded claims, for instance about migrants, would be liable to removal if posted by an ordinary user, but protected if published by a newspaper, radio, or TV station. Any solution that imposes stricter standards of behavior on citizens than on the media is clearly unjust, but also potentially dangerous, and such a blunt distinction may also not be the right tool for the job, given that it doesn't appear to address that common tactic of repurposing or distorting mainstream media content, for example in the case of my appearance on "Newsnight." The speed at which unverified information travels has been remarked upon for millennia. In the 5th century BCE, the Greek dramatist Aeschylus described rumors as having wings, as did the Roman poet Virgil in the 1st century BCE, and this fabulous illustration of Rumor, Fama in Latin, comes from an early 16th century edition of his epic poem "The Aeneid." Mass adoption of digital technology has of course greatly increased this speed, and we now talk about news content going viral, regardless of whether it's trustworthy. In response, major journalistic outlets such as the BBC, "The New York Times," and Reuters have partnered with Microsoft and Meta in the Trusted News Initiative so that they can share rapid reports of disinformation in real time, and they're also working on a common standard for establishing whether a piece of video content is authentic. The advent of deepfakes has made this need more pressing. Deepfakes are fake videos generated by machine learning, and so far, they've been fairly easy to spot. In this video of Ukrainian President Volodymyr Zelenskyy, we can see that the animated head has been spliced onto somebody else's body, that the eye movements are not naturalistic, and that the president is also unnaturally motionless. I've muted the audio so that I can talk over it, but we're also reliably informed that the voice isn't quite right. The concern, however, is that deepfakes are getting more sophisticated and more convincing, and they're finally being used to influence politics. Interestingly, research conducted by MIT and Johns Hopkins universities indicates that humans and computers are equally able to spot deepfakes, but humans significantly outperform the leading deepfake detection technology when it comes to videos of well-known political leaders. The hypothesis is that this is because humans can critically contemplate the authenticity of the video beyond the visual perception, for example, the expected sound of their voice or the expected turn of phrase that a politician might use. 
The same research found that humans and machines appear to make different mistakes, for instance, human participants' accuracy was affected by anger, whereas, of course, the machines' wasn't, and this in turn suggests that the most promising model for detecting deepfakes is human-machine collaboration. So we can't sit back and let computers do all the critical thinking for us just yet. Technical solutions alone aren't enough also because in other aspects of society, we're increasingly used to seeing artificially generated content; we routinely interact with legitimate chat bots, so we can't just outlaw all automated content. We've also come to accept computer generated resurrections of deceased actors, for example Carrie Fisher in "Star Wars: The Rise of Skywalker," and it was recently reported that Bruce Willis authorized a company to create his digital twin for this Russian TV advert for a telecoms company. The idea of an American actor's deepfake being used to influence the Russian public has a rather satisfying ring to it, doesn't it? So strengthening the human response will always be important, I would argue. Influence operations seek to exploit our human susceptibility to fear, to the next big scoop, to the inside story, and to the confidence of friendship. Those of us on social media are therefore on the front line in the fight against fake news, because every time we share content with our family, our friends, our professional networks, we risk being an unwitting agent of misinformation. Now, it's easy to dismiss those who subscribe to conspiracy theories as the tin foil hat brigade, people who, amongst other things, believe that wearing this protects their brain from electromagnetic fields. We might also be tempted to dismiss their flawed logic and to deride their tendency to link unrelated facts and present these as privileged information, the real truth. But conspiracy theories feed off understandable fears, and never more so than during the COVID pandemic, when legitimate fear of death combined with technophobia and concerns over surveillance to perpetuate the myth that vaccines would turn humans into robots. At a time of great uncertainty and change, when the president of the United States publicly suggested internal light treatments and disinfection as possible cures for the virus, false information ran counter to global public health efforts and put people at additional risk. Research conducted in 2020 by the US Centers for Disease Control and Prevention found that one in three adults had used chemicals or disinfectants unsafely while trying to protect against COVID-19. Four percent had drunk or gargled diluted bleach, and 4% doesn't sound a lot, does it, but in the US, that's over 8 million people. In my first lecture, we explored how citizen rights online come with responsibilities. In a world of fake news, your sharing of unverified information could put others in your trust networks at risk. But conversely, the more you question and verify what you see and share online, the better you can defend your friends, your family, your colleagues. Media literacy for all ages is therefore crucial. And in case you're thinking that this is something only children and young people need, researchers at Princeton and New York universities discovered that during the 2016 US presidential campaign, Facebook users over the age of 65 shared nearly seven times as much fake news as those aged 18 to 29, so we need to ensure that no one is left behind. Many of the fact checking organizations and quality news outlets around the world have built searchable databases that you can use for free to check the claims that you see online, as well as educational videos, lesson plans, and infographics to help people discern fact from fiction. I've listed several of these in the handout for this lecture, which you can also find on the Gresham College website. But among many highlights, I particularly like the EUvsDisinfo resources for their fact checking of pro-Kremlin propaganda and their direct engagement with social media users, so they're actually challenging claims directly on Facebook and Twitter. 
But I also like the fun, simple, and actionable advice from Newseum, the American museum of news and journalism, and all these resources are available on their website. So ultimately, you shouldn't believe everything you see online, just as you've long been advised not to believe everything you read in the newspapers or see on TV. Content that is designed to provoke a reaction and be shared instantly can be difficult to resist, so it's all the more important to take a few seconds to question the content. Use search engines, including, by the way, Google's reverse image search, and fact checking sites; question the intent of the person or organization that's posting; question and check the source, even if it's someone you trust, because you can't assume that they have done their own fact checking; and, this is a tough one, but where you can, let your friends and family know when they have shared something that you know to be untrue or manipulated. Yes, I am that person who falls out with their family and friends at Christmas because they've shared something dodgy online. When we refuse to be taken in by fake, manipulated, or junk content online, we protect everyone else from influence operations and from misleading information that could harm their health or exploit their fears. Empowering yourselves to spot when you are being played online will prevent you being enlisted as an agent of a hostile foreign state, it will reduce the risk of people in your community coming to physical harm, and you may even have the ability to save someone's life. But if you're not minded to do it for these guys, for the Man, please, please, please do it for me, and if you don't fancy doing it for me, because you don't know me from Adam, please do it for everyone else. And of course, it has the added benefit of arming you to decide whether Freddie Starr really did eat my hamster.(audience laughs) Thank you very much.(audience applauds)- [Questioner] Every country has laws relating to freedom of speech, but the United States' laws are incredibly excessive, you can say virtually what you want, it's not libelous, there's very little control in the United States, which is of course one of the reasons why the great social media companies, the large social media companies, are all based in the United States, 'cause they can do what they want.- Yes. Goodness. I think my research has found that, actually, everybody's doing it to everybody else, which is what we would expect from the Cold War era. If I think also about, and I'm taking this off topic a little bit, Dai, so apologies, but if we're thinking about the phone hacking of political figures, we've found that Angela Merkel's phone was being hacked, allegedly, by the US, probably also by the Russians, so I think if we take that disinformation stance, then really, what we're looking at is, it's open season, I think. But it's important to remind ourselves that it's not just, as per the first lecture, it's not just, oh, the bad countries do this and the good countries do this, we don't have the digital dark knights and the digital white knights, but I think I wouldn't necessarily agree with your assessment about why the tech companies, why the big ones are all in the US, 'cause I don't think they are, I think we've got China as well, so I wouldn't want to neglect Chinese influence in this space, but as we mentioned in the first lecture, Chinese control of the Internet is much more restrictive. 
And I also don't subscribe to the belief that freedom of speech is absolute in the US, I don't think that's the case, and I think tech companies' terms of service tell against that. So for instance, there are lots of platforms, like Facebook and others, that don't allow nudity on their platforms. That doesn't mean that nudity is outlawed in the US, it's simply that they have decided we're not going to have nudity on our platform. I think there's sometimes an assumption that what is permitted on US platforms is just what's in the US Constitution, I think you're right to a degree that there is an alignment, but I don't think it's an absolute match. So short answer, I don't agree entirely with your statement, but I see exactly where you're coming from on that one, and I do have a certain amount of sympathy.- I think we'll turn to an online question which I think is quite interesting, someone says here, "I'm part of a team moderating an online community writing about 4,000 comments a day. We find it an incredible challenge to draw the line between fact checking everything, which is frankly unfeasible, and still needing to fight disinformation. Any comments?"- Oh gosh, my first comment is thank you, thank you for doing that, because having worked for a company that has had to do that, I know what it's like to be someone who does that, to be a human moderator, to be exposed to things that you don't necessarily want to see, and have to be the police and the arbiter of that. One of the things that we talked about in the Metaverse lecture, my second lecture of this series, is decentralization, and one of the things that the Metaverse promises is greater decentralization, so possibly we're not going to have so many large social media platforms with thousands and thousands of content moderators, and automated tools, and money to put behind engineering; what we might have, and we see this with the people that have moved from Twitter to Mastodon, which is a decentralized platform, is that the people who manage those servers are the people who use those servers, it's not an overarching platform with big trust and safety teams. And so I think there is a risk that the future could well be millions of us running our own fora where we're having to review all of this content all of the time. I think there are some light-touch automated tools that people can get hold of, so it would be worth looking into whether some of that automation can be set up, for instance flagged keywords that you would outlaw in a particular environment. But automation comes with a cost: you miss the nuance, you miss the context. I'm sorry I don't have an easy solution off the shelf for the person who's asked the question, but I am full of admiration for the fact that they're doing it.- [Questioner] Following on from that, and bearing in mind the, to me, almost impossible task ahead of any moderation of the content of these blogs and fora, would it not be simpler just to abandon all regulation, let anyone do what they like, and they're governed by the laws of libel, and just let people become hardened to everything they see, to question everything, because they know there's no regulation behind it, and see what happens then?- Oh goodness, it's certainly one approach, and I think it's worth considering from a philosophical or an ethical perspective, I think it's a good tabletop exercise, if you like, to say, if we did this, what would happen? 
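Picking up the flagged-keyword suggestion in the answer above about moderating thousands of comments a day: here is a minimal sketch, in Python, of what such a light-touch tool might look like. The word list, function name, and review-only behaviour are assumptions for illustration, not a recommendation of specific terms or of any real moderation product, and, as the answer notes, a bare keyword match misses nuance and context, so it should only queue comments for a human to look at.

```python
import re

# Hypothetical, illustrative word list for one particular community; a real
# moderation team would choose its own terms and review them regularly.
FLAGGED_TERMS = ["miracle cure", "plandemic", "bleach"]

# One case-insensitive pattern with word boundaries, so "bleach" matches
# but "bleacher" does not.
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in FLAGGED_TERMS) + r")\b",
    re.IGNORECASE,
)

def queue_for_review(comments):
    """Return (comment, matched term) pairs for a human moderator to check.
    Nothing is removed automatically: keyword matching misses context."""
    flagged = []
    for text in comments:
        match = PATTERN.search(text)
        if match:
            flagged.append((text, match.group(1).lower()))
    return flagged

print(queue_for_review([
    "Drinking bleach is not a miracle cure.",  # flagged, even though it debunks the claim
    "Thanks for the thoughtful article!",      # not flagged
]))
```

The first example comment shows exactly the cost mentioned in the answer: a comment debunking a false claim is flagged alongside the claim itself, which is why the output goes to a person rather than straight to deletion.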
I think what we see with some of the platforms that have less stringent content moderation and less stringent review is that, very quickly, people become very disrespectful of each other, and that can end up in real-world harm. So if we think about troll accounts on certain platforms, platforms like Twitter, but I'm not necessarily pointing the finger at Twitter for this, we have real cases where people have been bullied to death, and that feels, to me, like an online harm that we probably do want to try and avoid, not to have a risk appetite where we say, oh well, we're going to lose a few. And I'm being flippant here, and I don't mean to be deliberately flippant, but it is one of the things that, in my research, we look at: do we have a risk appetite for the number of people that we will accept being harmed online? And people can be harmed because others have said something genuinely offensive and genuinely hurtful to them, and I know that there's one particular minister in the current government who says that we shouldn't be legislating for hurt feelings. I mean, I think that's a somewhat extreme way of representing some of the legislation that's going through Parliament at the moment, like the Online Safety Bill, but we probably do need some thresholds in society about what is acceptable online and what isn't. What we tend to say is that what is illegal offline is also illegal online, and kind of transfer that over. I think if you're peddling misinformation to the extent that it physically harms people, in the sphere of something like COVID misinformation, then there should be measures for suppressing that, whether you down-rank it, whether you make it less visible or not, because there is that physical harm attached to that. So I think where we have a bit of a problem, and one that I think we're getting a little bit more attuned to, is that we used to just treat online as a separate sphere, we used to just say, oh well, it's only data, it's only people's feelings, it's only ideas, but I think what we're coming to is the realization that online impacts offline, that people get radicalized online and then commit offline terrorist offenses, and that we could actually prevent people getting physically harmed if we tackle the online side of it as well.- [Questioner] Thank you. Considering all the points you've just been making, dealing with fake news is always reactive; there are elements of increased Internet literacy that are proactive in dealing with it, but the bulk of it is reactive. Considering the pace of technological change going forwards, is the future not rather bleak in that sense, is it only going to get worse? Are we at a peak now, or will it get worse, do you think, in the future?- Worse is an interesting term. In the Metaverse lecture, we talked about how hugely enabling and empowering this technology might be for people with mobility issues or people who don't want to go outside, who feel safer in the confines of their own home, they can suddenly fly, run, jump, and we've seen this already in online gaming. With that, I have been considering how fake news might be delivered in a metaverse, in an immersive environment. You could have somebody come up to you who looks like a member of your family, or looks like Huw Edwards, a trusted news anchor, and says, hey, this is the news, Russia's invaded the UK, you all need to huddle in your basements. 
I mean, I think we probably would distrust that, but it could feel a lot more persuasive than just looking at something at arm's length, and thinking, hmm, that doesn't look quite right, I might put it down for a second and go and fact check it. How do you fact check an extended live conversation with somebody that looks like you and somebody that looks like me? So there are going to be some real challenges around that. Again, I would argue that we would have procedural means for dealing with that, checks for verifying people's identities, for instance, we would have technical measures to spot inauthentic behavior, inauthentic avatars, but we'd still need a person to somehow use their critical thinking. What could happen in that situation is that the window that you have for determining whether something feels authentic or not could be a lot shorter than it is with news content delivered on a screen or social media items. So worse in the sense of, I think there'll be new challenges, I think some challenges will be exacerbated, but potentially, the fact that, in an immersive environment, we could deliver some really, really good media literacy classes might offset that to some extent.- Thank you very much. Victoria Baines' next lecture is on Valentine's Day?- Valentine's Day, and it's going to be all about privacy.(audience laughs)- We look forward very much to that. Victoria Baines, thank you very much.- Thank you.(audience applauds)