Gresham College Lectures

Encryption: What's the Problem?

February 20, 2023

End-to-end encryption secures messages before they leave a device, preventing them from being read in transit. Increasingly the default protocol for messaging apps, it ensures that neither governments nor the platforms on which it operates can access unscrambled communications and message content. Some governments have demanded ‘back doors’ for criminal investigations, while others have exploited workarounds to access the encrypted messages of political dissidents.

This talk considers the current public discourse on online surveillance and privacy, and where society might go from here.


A lecture by Victoria Baines recorded on 14 February 2023 at Barnard's Inn Hall, London.

The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/problem-encryption

Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/

Website:  https://gresham.ac.uk
Twitter:  https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege


(text whooshing)- Good evening, everyone. It's Valentine's Day, so I have decided to make the ultimate sacrifice. I'm going to offer myself up for one night only to be the object of all of your affections. And while it may be something of a tall order to be as attentive to all of you exactly as you might like, I'm willing to have a go in the interests of our continued exploration of cyberspace and digital ethics. So let's imagine that you and I really, really like each other. In fact, we may even be in love. And on this day of the year, when we are directed by the media and by retailers to exchange loving words and gestures, we may be inclined to send one another cards or messages that are sickly sweet, fruity, and perhaps even a little bit naughty. The mood may be enhanced with music and lighting, but rather spoiled by the thought that messages intended only for one other person might be read by someone entirely different. We may not want our parents, our siblings, or our children to see these for fear of embarrassment. So how might we feel about them being intercepted by the government or read by an employee of a social media company? Hold that thought.

For thousands of years, people have practiced the art of cryptography, that is, writing in code or cipher to keep their communication secret. They've also practiced the science of encryption, which is how we encode that information. According to the Roman historian Suetonius, Julius Caesar used a substitution cipher which shifted the letters of the plain text of his communications one by one. So Suetonius records that if he, Julius Caesar, had anything confidential to say, he wrote it in cipher, that is, by so changing the order of the letters of the alphabet that not a word could be made out. "If anyone wishes to decipher these," Suetonius says, "and get at their meaning, he must substitute the fourth letter of the alphabet, namely D, for A, and so with the others." Now incidentally, there may be some of you who are a bit eagle-eyed, some of the classicists in the room as well, and you're wondering why A and B are repeated at the bottom right. That is, of course, because the Roman alphabet had only 23 letters.

Over time, these substitutions became more complex. So for instance, the Playfair cipher, used by the British military in the early parts of the 20th century, split messages into pairs of letters that were then swapped according to their positions in a five by five grid. So a short message like hi, H-I, becomes K-E. Now there's a problem with this kind of encryption, which is called symmetric encryption, one which uses the same private key to encode and decode the data. And that problem is that once you know how the substitution is made, you can crack the whole message. And if the substitution instructions are not changed regularly, you can read all other messages encrypted in this way. In the Second World War, German forces famously used Enigma rotor machines like this one to perform consecutive rounds of substitutions electromechanically. The code was changed daily and, on Sundays, 5,000 intercepted messages came into Bletchley Park here in the UK, now the site of the National Museum of Computing. But that gives an indication of the enormity of the task facing those cryptanalysts and the importance of another electromechanical device, the Bombe, in decrypting at such a large scale.
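To make that substitution concrete, here is a minimal sketch in Python of a Caesar-style shift cipher. It uses the modern 26-letter alphabet rather than Caesar's 23, and a shift of three, so that A becomes D exactly as Suetonius describes.

```python
# A Caesar-style substitution cipher: shift each letter three places,
# so A becomes D. Modern 26-letter alphabet; Caesar's own had 23.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(text: str, shift: int = 3) -> str:
    # Build the substitution table, wrapping around at the end of the alphabet.
    table = {c: ALPHABET[(i + shift) % len(ALPHABET)] for i, c in enumerate(ALPHABET)}
    # Letters are substituted; anything else (spaces, punctuation) passes through.
    return "".join(table.get(c, c) for c in text.upper())

def decrypt(text: str, shift: int = 3) -> str:
    # Decryption is simply the same shift in reverse.
    return caesar(text, -shift)

cipher = caesar("ATTACK AT DAWN")
print(cipher)            # DWWDFN DW GDZQ
print(decrypt(cipher))   # ATTACK AT DAWN
```

The weakness the lecture describes is plain in the code: the entire scheme is the single number `shift`, so anyone who learns the substitution rule can read every message enciphered with it.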
In the latter half of the 20th century, digital encryption became the dominant method for concealing text. And its success has been due largely to the increasingly complex algorithmic calculations that computers could quickly perform, over and above using pen and paper or rotor machines. And internationally recognized standards were developed for encryption algorithms that comprised multiple rounds of substitution using keys of ever-longer bit length in binary code. So for example, the Advanced Encryption Standard, AES, uses multiple rounds of encryption with 128-bit, 192-bit, or 256-bit keys. And the Rivest-Shamir-Adleman algorithm, RSA, uses keys of variable length between 1,024 and 4,096 bits. Digital encryption has also increasingly relied on public key exchange, what's known as asymmetric encryption. So let's break that down a bit. This key exchange scheme, which was published by Whitfield Diffie and Martin Hellman in 1976, is widely used to secure internet communications, and it allows two parties who aren't previously known to each other to generate a pair of keys, one public and one private. They can then share the public key with each other while keeping hold of the private key. So that was my attempt at a very brief history of cryptography in about seven and a half minutes, which is not too bad. I'm not sure if it's a world record, but it's a record for Gresham College, I think.
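As a rough illustration of that Diffie-Hellman idea, here is a toy exchange in Python. The numbers are deliberately tiny so it can be read at a glance; real deployments use primes thousands of bits long (or elliptic curves), but the principle is the same: both sides arrive at the same shared secret while only public values ever cross the wire.

```python
# Toy Diffie-Hellman key exchange. Values are far too small to be secure;
# they are for illustration only.
import random

p = 23   # a small public prime (real systems: 2048 bits or more)
g = 5    # a public generator

# Each party keeps a private value to themselves...
alice_private = random.randint(2, p - 2)
bob_private = random.randint(2, p - 2)

# ...and derives a public value that is safe to send over the network.
alice_public = pow(g, alice_private, p)   # g^a mod p
bob_public = pow(g, bob_private, p)       # g^b mod p

# After exchanging ONLY the public values, each computes the same secret.
alice_shared = pow(bob_public, alice_private, p)   # (g^b)^a mod p
bob_shared = pow(alice_public, bob_private, p)     # (g^a)^b mod p

assert alice_shared == bob_shared
print("Shared secret:", alice_shared)
```

An eavesdropper who sees `p`, `g`, and both public values would have to solve the discrete logarithm problem to recover the secret, which is what makes the scheme practical for strangers on an open network.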
Okay, all of this encryption only protects the message while it is in transit, on the move from one point to the next. As soon as the message reaches the servers of the messaging platform, it can be read in the clear. And this means that the companies operating the platforms can read our messages, and that government authorities can intercept them and read them too.

Now, a saying that you will often hear in this context is: if you have nothing to hide, you have nothing to fear. But I would argue that this confuses privacy with secrecy. Hiding suggests concealment for nefarious purposes, being dishonest perhaps, or otherwise lacking in transparency, and transparency is something that helps us build trust with other people. But we all have things that we want to keep to ourselves. Perhaps it's something in our past about which we now feel a bit embarrassed, a thought that we worry might be shameful, an opinion with which others might not agree. Keeping things to ourselves can be an act of self-regulation. It can prevent us hurting others or speaking inappropriately. We may be asked to keep secrets for or about other people in order to protect them. So I used to work for law enforcement, and early in my career I signed the Official Secrets Act in the UK. I was cleared to access documents marked secret, and the secrecy of that information to which I was privy was determined on the basis of national security, because sharing intelligence beyond a very strict circulation was deemed to pose a risk to the country and a risk to citizens. Far from being a threat to people, secrecy, in this particular context, was deployed in the interest of protecting them from harm. Whatever is secret is either not allowed to be known, or it's only allowed to be known by selected persons.

Privacy, in contrast, is freedom from interference or intrusion. And unlike secrecy, privacy is a human right, enshrined in Article 12 of the Universal Declaration of Human Rights as proclaimed by the United Nations General Assembly in 1948. It reads, "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks." Admittedly, the use of exclusively male pronouns here is not great, but I should stress that Article 2 of the Declaration states very clearly that everyone is entitled to the same rights regardless of sex. And I want to draw your attention to the word arbitrary in the first line there, because it suggests that there may be times when interference with one's privacy is permitted, as long as it is subject to certain restrictions and to oversight. And these exceptions are something that the European Convention on Human Rights probes further. So Article 8 of the Convention states that everyone has the right to respect for his private and family life, his home and his correspondence, but also that there shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety, or the economic wellbeing of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others. Well, those are quite a few exceptions, aren't they? And perhaps more than some of us might have expected. The right to privacy is the reason the police need a warrant to search your house. In democratic countries with rule of law, a search warrant or a similar legal order is also required to compel phone companies, internet providers, and messaging platforms to disclose the content of your communications.

And at this point, I think it's high time that I introduced you to Alice and Bob. As well as enjoying what seem to be totally impractical bike rides, Alice and Bob are the golden couple of cryptography, and you'll find that they pop up in almost every discussion on the subject. So let's explore message security through Alice and Bob's experience of using apps, making use of a lovely, clear infographic which Martin Kleppmann, a renowned cryptographer, has very kindly agreed that we can use. When Alice sends a message to Bob, as in the top row here, it will look to her as if the message has gone straight to him, but there's an additional step. What she doesn't see in her app is that in order for the message to reach Bob, it has to pass through the app's servers. And as we mentioned, even though the data is encrypted in transit, when it's on the move between two different points, it comes to rest when it reaches those servers of the platform, and when it's at rest, however temporarily, it's no longer encrypted. And this is what allows online providers, like social media companies, to identify what you are interested in and to serve you adverts based on the content of your conversations. And for the last decade at least, this kind of access has been a standard feature of the business model of big tech. So while many of us may suspect that Facebook and others are listening to our offline conversations through our phones' microphones, actually, it's been much more practical for those companies to mine our text chats for keywords that correspond to products that we may want to buy. Government authorities can also request the content of those messages. And this slide shows the number of times law enforcement around the world requested user data from Meta, Facebook's parent company, in the first six months of 2022.
I've included links to this report and to data for the other tech companies in the further reading for this lecture in your transcripts. Many governments also have the legal power to directly intercept your messages. Just like a telephone wiretap, they've been able to listen in live to what you do and say online. And this process has also relied on the fact that data has not been encrypted when it rests on the servers of the platform being used to send the message. Tech companies can also scan the content of your messages to identify serious crimes such as the distribution of child abuse material or terrorist propaganda, and global mechanisms have been developed that give clearly illegal photos and videos a unique signature, a hash value. Those enable this kind of content to be removed from many different apps and reported to law enforcement in lots of different countries. And this process is, in fact, enshrined in US law. Some of the largest US-based platforms make millions of reports of suspected child abuse material, 29.4 million reports in 2021, to the National Center for Missing and Exploited Children in the US, NCMEC for short. And NCMEC sends these reports to police all over the world on the basis that distributing, downloading, or possessing images of child sexual abuse is a serious criminal offense in the vast majority of countries.

However, information that is in plain text, that is, unencrypted, is also vulnerable to criminals. If someone is able to hack into the service provider, in that top row example in the middle there, they too can read our messages in the clear. Information such as dates of birth, home addresses, credit card numbers, bank details, and passwords is valuable to the bad guys. So from a security perspective, the next logical step is to encrypt data so that it's unreadable for the entire length of its journey from sender to recipient. And this is known as end-to-end encryption, and it's shown in the middle row. In end-to-end encryption, the message is still scrambled and can't be read even when it is at rest on the platform's servers. And this has become the new industry standard for messaging platforms precisely because it provides greater security. But it also presents practical challenges: because companies can no longer read the content shared on their platforms, they also can no longer scan that content for activity that breaks their rules or breaks the law. And this is understandably a concern to governments who want to know when their citizens are committing serious crimes. Police, intelligence agencies, and child protection charities are among those who would rather end-to-end encryption wasn't deployed at all on the most popular apps and platforms. Government authorities can still intercept that data themselves, and they can still request it from the company, but they can't unscramble it so that it reads as clear text. It remains jumbled. Now whether that's a good thing or a bad thing very much depends on your perspective. Any parent of a child in a democratic country with rule of law may understandably want that child's safety to be prioritized above all else. But a journalist who risks their life to report the truth under a repressive regime may depend on the privacy and security of their online communications simply to survive. And because encryption is either on for everyone or off for everyone, it's not possible for companies to decide that they will turn it on for journalists but off for criminals.
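Here is a minimal sketch of that distinction in Python. It uses the third-party cryptography package's Fernet recipe as a stand-in for the real messaging protocols, which negotiate keys with Diffie-Hellman-style exchanges rather than sharing them directly; the key names are illustrative, not any platform's actual design. The point it shows: with encryption in transit only, the server decrypts and re-encrypts, so readable plaintext exists at rest; with end-to-end encryption, the server only ever holds ciphertext.

```python
# A minimal sketch of encryption in transit versus end-to-end encryption.
# Requires the third-party 'cryptography' package: pip install cryptography
from cryptography.fernet import Fernet

message = b"Happy Valentine's Day, Bob"

# --- Encryption in transit only ---
# Alice-to-server and server-to-Bob each use their own link key, so the
# server decrypts, holds the plaintext at rest, and re-encrypts. This is
# the point where content can be scanned, mined for ad keywords, or
# disclosed to authorities.
alice_to_server = Fernet(Fernet.generate_key())
server_to_bob = Fernet(Fernet.generate_key())

on_the_wire = alice_to_server.encrypt(message)
at_rest_on_server = alice_to_server.decrypt(on_the_wire)  # readable plaintext
relayed = server_to_bob.encrypt(at_rest_on_server)
assert server_to_bob.decrypt(relayed) == message          # Bob reads it

# --- End-to-end encryption ---
# Only Alice and Bob hold the key; the server stores and forwards
# ciphertext it cannot unscramble, however long it sits at rest.
shared_by_alice_and_bob = Fernet(Fernet.generate_key())
ciphertext = shared_by_alice_and_bob.encrypt(message)
assert shared_by_alice_and_bob.decrypt(ciphertext) == message
```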
There's also a further complication, as if we needed any more: as we explored in the first lecture of this series, Who Owns the Internet, countries do not always agree about what constitutes criminality. We have internationally accepted definitions of child abuse material, but not of terrorist content, and certainly not of prohibited speech. So a journalist may, in fact, be considered a criminal simply for criticizing the government.

We have some proposed solutions, and one of these has been to give government authorities keys to decode encrypted communications in exceptional circumstances, for example, where there is a legitimate suspicion that a serious crime is being committed on the app in question. But as soon as we consider how that might work in practice, we have to admit that this too has significant drawbacks. In the case of end-to-end encryption, where the tech companies themselves don't have keys to unscramble the messages, this would require them to build new vulnerabilities into the protocol specifically for the benefit of law enforcement and intelligence agencies. And this is exactly what the FBI asked Apple to do after the terrorist attack at San Bernardino, California in 2015. It issued a court order demanding that Apple write software that would enable it to unlock the iPhone of one of the suspects. Apple opposed the order on the basis that such a tool would create a backdoor to everyone's iPhones all over the world, not just the one belonging to the suspect. In the end, the FBI did gain access to the phone, reportedly with the help of professional hackers. And this in turn raises further questions about the knock-on effects of measures designed to make our communication more secure.

When technology presents obstacles, by and large, humanity has a way of finding workarounds. And to get around end-to-end encryption, some governments have purchased software that effectively hacks into people's devices, where content is still readable, and tools that spy on their communications. Research by Citizen Lab in 2018 found that Pegasus spyware, produced by an Israeli company called NSO Group, had been installed on people's smartphones in 45 countries precisely so their messages could be read in the clear. And although the company has claimed that the software is only intended for use against criminals and terrorists, more than 600 politicians and government officials were targeted around the world, including French president Emmanuel Macron and the leader of the opposition Congress party in India. Other targets were Amazon founder Jeff Bezos, the editor of the Financial Times, murdered Mexican journalist Cecilio Pineda, and a number of people close to the murdered Saudi journalist Jamal Khashoggi. Further analysis was conducted by the Pegasus Project, a team of researchers from "The Guardian", "Le Monde", "The Washington Post", and other media outlets. And they identified a number of governments believed to be NSO customers, including Mexico, Saudi Arabia, Hungary, India, and the United Arab Emirates.

Meanwhile, some companies and governments have proposed a different, legitimate workaround, but one that also accesses our devices. This is known as client-side scanning, and it uses the processing power of your phone or tablet to look for known child abuse material.
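The sketch below shows the shape of that idea under some simplifying assumptions: it compares plain SHA-256 digests of local photos against a hypothetical blocklist (the hash set, folder name, and function names here are all invented for illustration). Real systems such as Microsoft's PhotoDNA or Apple's NeuralHash instead use perceptual hashes that survive resizing and re-encoding, and distribute the list to devices in a form they cannot read; a byte-exact hash like this one only catches identical copies.

```python
# A much-simplified sketch of client-side scanning: hash each photo on the
# device and compare against a list of known-illegal hashes before upload.
import hashlib
from pathlib import Path

# Hypothetical blocklist of hex-encoded SHA-256 digests. In a real system
# this would be a perceptual-hash database, delivered in blinded form.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    # Stream the file in chunks so large photos don't need to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_camera_roll(folder: Path) -> list[Path]:
    """Return the photos whose hashes match the blocklist."""
    return [p for p in folder.glob("*.jpg") if sha256_of(p) in KNOWN_HASHES]

if __name__ == "__main__":
    matches = scan_camera_roll(Path("camera_roll"))  # illustrative folder name
    print(f"{len(matches)} matching file(s)")
```

Note that everything here runs on the user's own device, which is exactly why the approach is controversial: the same mechanism could, in principle, be pointed at any list of hashes.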
In August 2021, Apple announced that it would use this very approach to scan everyone's camera rolls in iCloud Photos. This was met with strong approval from child protection advocates, but absolute consternation from leading privacy campaigners and cryptographers. One of the concerns expressed was that Apple would come under pressure to point the tool at other types of content, such as political speech or anti-government activities. Technology with any kind of backdoor in it is necessarily no longer as secure as it was. And for a backdoor ever to be proportionate, we would need to be assured that governments would never misuse it, that individual law enforcement officers and national security agents have the utmost personal integrity at all times, and that large organizations have perfect oversight and control over who can access their systems and networks. Now, it's impossible for us to be certain of any of these. Different countries have different ideas about what is criminal and what is dangerous. Government authorities all over the world are ultimately made up of humans, and some humans break the rules. And ultimately, no organization is completely unhackable.

Meanwhile, a war of words has ensued between governments and tech companies. In October 2019, the UK's then home secretary, Priti Patel, US Attorney General William Barr, Acting US Secretary of Homeland Security Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton sent an open letter to Facebook CEO Mark Zuckerberg concerning the company's plans to apply end-to-end encryption to all of its services, and it opened as follows: "We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens." So the ministers saw only one viable solution to the problem, which was continued government access to people's messages in the clear, unencrypted. Facebook's failure to guarantee this is portrayed as endangering people. This letter is still publicly available online. I've included the link at the bottom of this slide, but also in the transcript to this lecture. And if you choose to read it in full, you will see that at no point in this letter do the ministers acknowledge that it's impossible to ensure the security and integrity of end-to-end encryption if you build in backdoors for government agencies.

The letter continues:

"Companies should not deliberately design their systems to preclude any form of access to content even for preventing or investigating the most serious crimes. This puts our citizens and society at risk by severe eroding a company's ability to detect and respond to illegal content and activity such as child sexual exploitation and abuse, terrorism, and foreign adversaries attempts to undermine democratic values and institutions preventing the prosecution of offenders and safeguarding of victims." The closing words of the letter present the issue as a crisis point. As you have recognized, it is critical to get this right for the future of the internet. I would agree with that. Children's safety and law enforcement's ability to bring criminals to justice must not be the ultimate cost of Facebook taking forward these proposals. The debate in this letter is reduced to a binary choice between end-to-end encryption on the one hand and safe children and effective criminal justice on the other. The future of child safety and enforcement of law and order are depicted as hinging on Facebook's decision alone. One would be forgiven for thinking that Mark Zuckerberg had invented end-to-end encryption and that Facebook was the first major tech company to propose using it. But he didn't and it wasn't. Apple incorporated it into its messaging app, iMessage, some years ago. Inevitably, Facebook published a reply letter on its website a few weeks later. It was written by the heads of WhatsApp and Facebook Messenger and they wrote,"We all want people to have the ability to communicate privately and safely without harm or abuse from hackers, criminals, or importantly, repressive regimes." So where the ministers depicted end-to-end encryption as the threat, Facebook instead casts the minister's demands in that role. They say, "Cybersecurity experts have repeatedly proven that when you weaken any part of an encrypted system, you weaken it for everyone everywhere. The backdoor access you are demanding for law enforcement would be a gift to criminals, hackers, and repressive regimes creating a way for them to enter our systems and leaving every person on our platforms more vulnerable to real life harm." The mention of repressive regimes here raises an important point. The ministers, doubtless, considered their countries to be good democracies, but there is no way for tech companies to build an encryption backdoor that can only be used by good countries. And there is no universally accepted objective way in the first place to distinguish between good countries and bad ones. And this has been summed up very neatly by Harvard cryptography lecturer, Bruce Schneier, who said, "You have to make a choice. Either everyone gets to spy or no one gets to spy. You can't have we get to spy, you don't. That's not the way the tech works." So in this very public exchange of letters between powerful governments on the one hand and one of the world's largest and most powerful tech companies, end-to-end encryption is presented simultaneously as the most and least responsible choice, the safest and the least safe, the strongest and the most vulnerable, in citizens' best interests and their very worst. And in this highly polarized debate, whether governments and tech companies are on the side of the angels or the devil, seem to be a matter entirely of perspective. Both the ministers and Facebook also presume to speak for citizens, for all of us. 
But when we actually ask the public what they think, it gives rise to answers that perhaps more accurately reflect the complexity of the moral and ethical choices at stake. There does appear to be some public expectation that online communication should be private. In 2020, "The Sun" newspaper published an article with the headline "Booked: Facebook Plot to Encrypt All Chats Will Help Child Abusers to Hide, Former Police Chief Warns". And that former police chief, Mark Rowley, has gone on to be the commissioner of the London Metropolitan Police, the largest police force in the UK. The article relayed his concerns over end-to-end encryption. The online version invited readers to take part in a poll, and given the negative thrust of Mr. Rowley's comments, I was quite surprised, back in 2020, to see that the largest proportion of voters on "The Sun's" website, 58%, still thought that messages on Facebook should be end-to-end encrypted.

Now last month, "The Sun" very kindly agreed to rerun this poll for us. Fifteen and a half thousand votes were cast in just three days. And these are the results, and you are the very first people to see them. This is brand new data. Should Facebook encrypt all of your chats? Definitely: 47%. No, absolutely not: 28%. I'm not sure: 24%. So we then asked a second question. Should the government have access to your online chats? 33,000 votes were cast in just a few days, and the results were as follows. Yes, they should have access to our online chats: 7%. No, absolutely not: 84%. Hmm, I'm not sure: 9%.

Now what these responses tell me is that members of the public understand that the decision whether or not to deploy end-to-end encryption is not as straightforward as presented by some of the most ardent advocates for or against it. The people who voted "I'm not sure" are just as interesting, perhaps even more so, than those whose opinions were more certain. Because this is one of the most difficult ethical dilemmas the global community has faced in regard to information technology. And since end-to-end encryption increases the risk to some members of society and reduces the risk to others, some researchers have tried to take a utilitarian approach, one that draws on Jeremy Bentham's philosophy of the greatest happiness for the greatest number. But any attempt that we make at an accurate cost-benefit analysis quickly encounters real obstacles. We simply don't have the data to quantify people harmed versus people unharmed in the UK, and we certainly don't have it for the wider world. Even where we do have partial data, for instance the reports of child abuse material to NCMEC in the US that I showed you earlier, it's not possible for us to verify the direct impact of end-to-end encryption on those report numbers until it's actually deployed. And it's even harder for us to quantify its impact on the safety of individual children around the world. And besides, the more we go down this route of trying to quantify harm, the more distasteful that endeavor becomes. Insisting on a tally of children harmed is understandably unacceptable to child protection advocates, for whom every single instance of child abuse is one too many. And at the same time, arguments that pit child safety against privacy bully us into an unholy reckoning. They force us into weighing the life of a child against the life of a human rights activist. And as a human, as a citizen, I don't ever want to be put in that position.
In fact, I object to it, particularly when the different parties in the debate appear to underplay the fact that end-to-end encryption is a wicked problem. It's one that cannot be solved simply by appealing to logic, ethics, or emotion. So is end-to-end encryption the best thing ever or the worst thing ever? Well, I maintain that this is the wrong question. Whether secret communications are good or bad depends so much on one's individual perspective and context. For an intelligence agency, secrecy is a good thing; transparency is a risk. For human rights defenders, that secrecy is seen as counter to the greater transparency that's required to hold governments accountable. The ideal response, aiming for a balance between operational secrecy and transparency that is sufficiently accountable to the public while protecting the techniques and assets of those charged with keeping us safe, is no simple feat to achieve. And as soon as we accept that the deployment of secure messaging is not simply a battle of absolute safety versus absolute privacy, we're confronted with the need for key actors to start changing their mindsets, for governments to stop arguing with the laws of mathematics, and for all involved to take a longer view of the potential consequences.

According to those polls in "The Sun", some of us are comfortable with the idea of governmental authorities intercepting our text messages or calls, and that's fair enough. But what about our eye movements, our physical gestures, our thought patterns? Because a tool to access our communications now might also give governments the ability in the future to read encrypted data captured by next-generation virtual and augmented reality platforms, by haptic technology, and by brain-computer interfaces, all of which are already in development. As the data that's collected on us becomes more and more intrusive, the debate will intensify about how that data is processed, stored, and accessed by governments and companies alike. So tech companies will need to find new ways of keeping their users safe and demonstrating how they do that. Among these are tools that empower users to prevent harm. Imagine that. A great example, and particularly apt for today, Valentine's Day, is StopNCII, also in the further reading for this lecture. It enables people to stop intimate images of themselves being shared on social media and on the wider web without their consent. It empowers them to do that. As for government agencies who have grown used to regular access to the metadata, the traffic data, and the contents of our communications, they will need to find new routes for evidencing online crimes, or perhaps return to the old routes, the tried and tested method of using covert human investigators to infiltrate criminal groups. And the criminal justice system may need to refocus its evidence gathering on suspects' and victims' devices, again with consent. Two thousand years ago, my favorite Roman and, incidentally, the subject of my PhD, the satirist Juvenal, asked, "Quis custodiet ipsos custodes?" Who will guard the guards themselves? And this maxim is still regularly applied to modern surveillance conducted by governments and by tech companies. But oversight is only possible when authorities lift their customary secrecy to such an extent that we can scrutinize their methods and hold them accountable.
And by the same token, companies may wish to conceal the details of their security operations, because how they do what they do has a commercially competitive value, and making public how they identify criminals and terrorists can also give the bad guys valuable information on how to game their systems. But the alternative, if they keep everything secret, is that we, the people, cannot judge if they are abusing their power or overstepping their bounds. So paradoxically, it may be the very opposite of privacy and secrecy, transparency, that helps us determine how to proceed with securing our communications and overseeing surveillance in the future. Because the more we can see for ourselves what is done with our data, the better we can chart the right course of action for society. That said, my fervent and heartfelt wish for you on Valentine's Day is that whatever you get up to, you at least have the option of doing it away from prying eyes. Thank you very much.(audience applauding)

- Thank you so much, Professor Baines, for such a fascinating lecture. I've got a couple of questions, actually one question online and then we'll go to the room in a second. So my first question for you is: we were talking before the start of the lecture about decrypting Mary Queen of Scots.- Yes.- Can you tell me the story about why that matters?- This is so exciting, and it was reported in the news, I think, a few hours after I sent through my slides and my transcript for this lecture, so I didn't quite have the chance to squeeze it in there. So as many of you will know, Mary Stuart, Mary Queen of Scots, wrote in cipher, in a graphic cipher. So it's a cluster of symbols for each letter or word or idea. And you also may know, if you did history at school, that those letters were intercepted by Walsingham, Elizabeth I's spymaster. And the decoding, the decrypting of those letters led, in no small part, to her prosecution for treason and her execution. Now we've always had the letters on the English side, but she was writing too, in French, to people in France. And this is a great example to me of security and obscurity, because three, I wouldn't say amateur, cryptographers, because one of them is a computer science professor and I think that would be doing him a disservice, but three people who'd been working to decrypt documents in the French National Library, and had spent the last 10 years doing so, found a bunch of letters that were considered by the French National Library to be something to do with Italy. And they've been there for hundreds of years, over 400 years. It's 436 years since her execution, I think. 1587? Yeah, that sounds about right. And so they were reading these letters and they thought, "Well, these are odd letters, because they've got feminine pronouns in them." They appear to be written in French, and they start seeing this keyword of Walsingham popping up, and they realize that these are letters from Mary Stuart. But what kept those letters secret and encrypted all that time? The fact that people didn't think they were hers. The fact that they sat in an archive and someone assumed they were somebody else's and they weren't relevant. So it's a fantastic example of security and obscurity. And I think obscurity is something that I haven't touched on enough in this lecture. Yes, if you have nothing to hide, you have nothing to fear. But actually, for most of us, if you're not that interesting, your communications are not necessarily going to be flagged as suspicious.
And I think a fair point is that when we are scanning for content, and I used to work on the team in Facebook that worked with law enforcement, you know, reporting and acting on this kind of content, so I can see all sides of this very, very tricky problem. But when you're scanning for content, you're not reading everybody's messages just in case they mention something. You're looking for particular signatures of images or particular keywords being used in particular contexts. That said, end-to-end encryption prevents that kind of scanning and flagging, and that's what's really problematic about it. But I love this idea, and I'd like to probe it a bit more, of security through obscurity, of hiding in crowds.- [Audience Member #1] I wanted to ask more about Pegasus. You said that when somebody comes up with an encryption system, somebody else is trying to break it, let's just assume. So is there anybody, do you know of anybody, working to make Pegasus irrelevant somehow?- Well, yes. So the trouble with Pegasus is that it hacks people's phones. It relies on people like you and me falling for the scam and letting it into our phones. So one of the really, really important things that we can do there is awareness. And so the Pegasus Project and Citizen Lab, I thought, were very good, particularly "The Guardian" newspaper and "The Washington Post", in getting the story out there, so that people could understand that they might get a WhatsApp message that looks genuine, that looks legitimate, and that if they were to click on a link or they were to open that message, that spyware would be downloaded onto their phone. And that appears to be what happened to Jeff Bezos, certainly. There is a story, reportedly, around getting a message, allegedly, from a Saudi prince, and possibly having his phone infected that way, or someone pretending to be a Saudi prince, I should say, before we get into trouble. So that relies really on people's personal security and not falling for phishing scams. What I would say is come back for the sixth lecture of my series, where we're going to talk about cybersecurity for humans.- [Audience Member #2] Is just getting on Signal enough?- Yep.- [Audience Member #2] Do you have to be on WhatsApp or can you just get it anywhere...- So WhatsApp is end-to-end encrypted. Signal is end-to-end encrypted. So you're probably going to have the same problem of people trying to get into your device. Again, I think unless you are of particular interest to a hostile regime, which you might be, then, by and large, you might not be of interest for that kind of targeted attack. Because it really is a targeted attack, not a random one. But what we're going to be covering in a later lecture, the last lecture of this year, is looking at how we can all protect ourselves from those kinds of social engineering attacks.- [Audience Member #3] Most modern encryption algorithms are variants of the RSA one, which you mentioned earlier.
And they rely, as everybody knows, on the inability to decompose the public key, because it is made of very large prime numbers and another number.- Yes.- [Audience Member #3] However, the NSA has got stunning resources in computing, and what makes you think that they cannot, and have not already, generated every prime number that is available to the word length and register combination of every machine, and that therefore they can actually do this decryption quite easily?- So there's two things I want to touch on there, and I hope I remember both of them, because I'll be frustrated if I forget the second one. Right, the first one is that we do tend to have a bit of a double standard, where governments recommend use of certain standards or use of certain products, or they outlaw use of certain products, but then you find them using them themselves, or they're using them and they recommend them and then you find them hacking them themselves. So a great example of that is, of course, the UK government's public opposition to end-to-end encryption on WhatsApp and Facebook Messenger. But as you will know, if you follow the news in the UK, the government is being sued by privacy and transparency organizations for using WhatsApp for the secret, you know, confidential business of government, and those messages not being available for scrutiny. So it's a little bit double-edged, I would say. Now, you're talking about prime factoring, and that brings us into a consideration of quantum computing, which I didn't touch on in the lecture, and so far we haven't done a lecture on quantum, I don't think, but it may be high time that we do when considering lectures for further series in later years. There's a lot of hype at the moment, well, a lot of genuine concern as well, around the potential for quantum computers to be able to crack RSA and other leading encryption standards. And the trouble with that is that we don't know when quantum computers are going to be able to do that. So we don't quite know when to panic. But as and when those standard algorithms are cracked, we've got a problem. And so now banks, financial institutions, governments, other people who, you know, really do need to have secure communications, are looking at quantum-safe or post-quantum cryptography: what we'll have in place to secure communications and data when quantum computers crack our existing algorithms. I wouldn't want to put money on it. Chris Reese, I'm sorry to drag you into this, but I'm looking at you. I mean, we could have a debate about this for days. Some people would say, "It's 2030 and you need to panic right now." Some people would say, "Oh, it's more like 2050." And other people would say, "Well, we've got a good 50 to 100 years before this is a problem." But at least people are showing some foresight and realizing it's going to be a problem. At least we're not there yet and wondering how we're going to react to it.- [Chris] I guess we shouldn't take too long, Victoria. I was going to ask about quantum, but as you've kind of answered it, would you like to comment on the use of AI in encryption?- Ooh. (Victoria chuckling nervously) Gosh. Help me out here, Chris, in what respect? Because we have the ability of AI to do the grunt work of that.
What were you thinking of in particular?- [Chris] Well, I don't know the answer to the question, but I would've thought that AI can be deployed, not so much to, obviously, not to break end-to-end encryption, but to mount attacks on encrypted- Oh, right, okay.- [Chris] communications, for instance.- So criminal AI?- [Chris] And/or defense.- Yeah, I mean, so there's a lot of concern about cyber criminal use of AI across the board, really. And please bring me back to your question if I get too far off this topic, because there's a whole kind of cyber criminal underground to describe to you here, which is another lecture. So we've certainly seen automated delivery of attacks for many, many years. So when you get a phishing email, by and large, it's not somebody on another terminal or another device typing out that email to you. And you can see the telltale signs of that in, you know, the language that's used. So automated delivery of even some of the lowest-level attacks, automated delivery of denial-of-service attacks that bring down people's websites: it's much more efficient to point a tool at that. For me, and I think you are going to have some much more interesting answers than I have, if I think about AI, I think about automation, but also the learning ability. So it's that combination of: we can do this at scale in a way that humans can't, but also in a way that learns from what works and what doesn't. And I think that certainly is a concern. But what we're seeing, I would say, in the cyber world, for both attack and defense, is greater automation, full stop. So as you know, in the cybersecurity world, automation is what is taking the pain out of security operations and incident response for humans, and arguably giving the humans who work in cybersecurity a much more interesting job, because they're not just looking at lines and lines of code for something that looks suspicious.- I think, Professor Baines, I think we are probably at,- We're coming up to time.- Time, unfortunately.- Yeah.- But can we just flag up your next lecture to everyone,- Please do, yes.- In the room? So the next lecture is Defeating Digital Viruses: Lessons From the Pandemic, 6:00 PM, Tuesday, the 21st of March. Please do sign up; you'll get a follow-up email after this. Otherwise, you can go online and sign up. Thank you so much, Professor Baines, for such a fascinating lecture.- Thank you.(audience applauding)