Gresham College Lectures

How AI Disrupts The Law

October 18, 2023

Artificial Intelligence and Generative AI are changing our lives and society as a whole, from how we shop to how we access news and make decisions.

Are current and traditional legal frameworks and new governance strategies able to guard against the novel risks posed by new systems?

How can we mitigate AI bias, protect privacy, and make algorithmic systems more accountable?

How are data protection, non-discrimination, free speech, libel, and liability laws standing up to these changes?

A lecture by Sandra Wachter recorded on 11 October 2023 at Barnard's Inn Hall, London

The transcript and downloadable versions of the lecture are available from the Gresham College website:

Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation:



Thank you so much for the introduction, and thank you to all of you for coming out today to discuss how AI disrupts the law. This is actually the shortest summary of my day job: what I do on a daily basis is wonder how new technologies, including AI, are disrupting the law. I don't have to do this by myself. I'm very lucky to have fantastic people who support me in that. I lead a research group called GET, the Governance of Emerging Technologies, and those are the folks currently working with us. We just recruited four new people, so the whole group is growing. What we all have in common is that we care about the governance of emerging technologies and how we can actually do that. If you're interested in the topic, the people I work with, and the projects we have, you can visit our homepage and find more information about the research we're doing. But let's get started. There are three areas I would like to discuss today where I think we can see AI disrupting the law as well as society. Those three areas have to do with misinformation, discrimination, and workplace automation. I didn't realize it rhymes, but it does, and it's also true. So those are the three areas I would like to discuss, to show a little bit how I think AI is really causing trouble and what we can do about it. Let's start with the first one: misinformation and AI. I think everybody will be aware that algorithms can cause a lot of trouble when it comes to misinformation. Some people say that we are in a post-truth society, that we can't actually trust anything we see online anymore. And one of the most important examples of that was probably the Cambridge Analytica case in 2018.
As everybody will be aware, there we saw that algorithms and platforms were complicit in trying to shift voting behavior and had an impact on elections around the world, including the UK and the US. But this was not the peak of how technology started to impact public opinion and spread misinformation. During the Covid crisis we saw another very troubling example, where President Trump told people that they should drink bleach in order to prevent a Covid infection. Not only is that type of treatment ineffective, it is also very dangerous, and some people actually died. But this information was spread on social media very quickly, and people believed it to be true and followed that advice. Social media was similarly used during the insurrection at the Capitol in January 2021, where platforms were used to incite violence. And I think it's not just a problem in terms of the violence that was enacted: to storm the Capitol, the symbol of democracy, is really troubling from a democratic perspective. It shows how technology can have a real-life impact, not just on the health of people, but also on our symbols of democracy. And not just democracy and physical health; the planet suffers because of misinformation as well. There's a lot of misinformation about climate change out there. Most people will be aware that climate change is a fact, but if it wasn't for social media and other types of algorithmic dissemination methods, we wouldn't still be discussing whether climate change is a fact; we would actually be doing something about it. So you can see that social media really has a role to play when it comes to misinformation and impacting public opinion. But is it new? Is it something that has never happened before? Haven't we always had this type of misinformation? That reminds me of a wonderful quote from Mark Twain.
He said, "History may not repeat itself, but it rhymes." Fun fact: Mark Twain never said that. So this is an example of fake news that is, basically, a hundred years old or more. You could say that's not too bad. It's a good quote, it's a true quote; I wouldn't have a problem having it associated with my name. But it is an example of how things have been spread that are not true. I'm going to give you three historical examples of misinformation from the past that was actually pretty harmful for people. The first one I have here is Elizabeth I. I think everybody in this room will know who this person is. She had a very interesting rumor attached to her: people said that she was actually a man. She was a very popular ruler. She kept peace in the country, she was a supporter of the arts and sciences, she advanced education, she ensured economic prosperity. Of course, such a ruler can't possibly be a woman; it must of course be a man. Another rumor that hasn't left our public sphere is one that Marie Antoinette had to fight with. The thing she is most famous for, the quote "Let them eat cake," is a thing she never actually said, and yet to this day it is attached to her name, more than 200 years later. In fact it was a rumor that Jean-Jacques Rousseau spread about her. He was an activist, a politician, a philosopher, very much interested in furthering the French Revolution, and so he was spreading rumors about her. She did a lot of things that were problematic, but she never said that, for example. And Marie Antoinette did not only have to deal with rumors; she also had to deal with pictures, paintings and drawings that were spread about her. Here you can see Marie Antoinette in a close embrace with one of her favorite ladies-in-waiting.
And the rumor was spread that she had a romantic relationship with her, and that she was unfaithful to her husband. Of course, none of that was true, but again, it is fake news and misinformation that is also over 200 years old. Misinformation that is almost 2,000 years old goes back to this gentleman. This is Nero, who is said to have burnt down Rome while playing the fiddle. Again, something he never actually did. He wasn't even in the city during the Great Fire of Rome, let alone able to play the fiddle. But he was a very unpopular ruler; some historians have compared him to the Jeffrey Dahmer of antiquity. He was a very, very unkind ruler: he tortured his opponents, he killed his own mother, and he castrated his lover. So people just wanted to get rid of him, and they started spreading this information about him. Nero in turn tried to pin it on the Christians, so everybody was passing around that hot potato of guilt, and nobody wanted to take the blame for it. Well, that gets us to the core question, which is: what is truth, then? How can we know that anything is true if we have apparently always had trouble figuring out what is true? And here I want to cite a modern example that we currently have, which is the flat-earth movement. I already see a couple of people grinning, which is the reaction that I want, because everybody kind of thinks this idea is somewhat ridiculous, right? You have people that believe that the earth is not in fact round; it is flat. They believe there's a conspiracy against them, that they have been lied to, that in fact the earth is flat. And many people ridicule that. It's a minority of people that we kind of laugh at, because we think: come on, how can you stand up against established dogma and science? That's quite ridiculous.
This is also how Galileo Galilei must have felt in the 17th century, when he too stood up against dogma and the established sciences and religion. He believed that the earth circles around the sun and not the other way around, and he paid for it: he was put under house arrest for the rest of his life, until his death, because he criticized the established theories of science. Coming back to a more recent example: this is also something people start laughing at when they see it, but it was necessary during the Covid crisis to issue such warnings, to tell people: please do not drink bleach. It is harmful, it might injure you, it might in fact kill you. And here again people start to laugh and say: I don't understand how you cannot trust doctors. Why would you even listen to somebody like President Trump when they give advice like that? Obviously we should always listen to the experts who have something to say on it. But you could ask: was it always smart to believe our experts? Was it always the right decision? Here I want to remind you of the very interesting practice of bloodletting, a medical method that was deployed for more than 3,000 years. It was a method where you would cut a person and let blood out of them, because you believed you could cure certain illnesses with it. You would bloodlet in order to cure hysteria or depression, and it was only abandoned in the late 19th century. But obviously it's not just ineffective; it's also dangerous. So you could say: well, why did I come to this lecture? If we have always struggled with truth and always struggled with lies, is there anything new under the sun? How is technology any different? Are we just dealing with the same problems? Of course, I believe it's different this time.
And so I'm going to spend the rest of the lecture convincing you that it is a little bit different. Again, I want to come back to something Mark Twain said: "A lie can travel halfway around the world while the truth is still putting its shoes on." Of course, Mark Twain didn't say that either; that's another piece of fake news about him. Again, not a bad one, also a really good quote. I wouldn't mind having that attached to me either. And it rings true, right? It rings really true. And that is the first of three reasons why I think AI is different in terms of spreading misinformation: it has to do with speed, and it has to do with scale. People in the 18th and 19th centuries didn't have the means to disseminate information as quickly as we can now. Somebody like Rousseau actually had to put some effort into convincing people that Marie Antoinette was eating cake all the time. But nowadays we are in a situation where everybody has a microphone and a stage. Facebook, for example, has roughly 3 billion users at present, and people like Alex Jones or President Trump have millions of followers to whom they can just spread their information. I think that is something that is different. The second thing that is different is how convincing algorithms are when they start lying to us. Here's a very interesting example. You can see four paintings by Rembrandt here, one of which is fake. One of those portraits wasn't actually done by Rembrandt; it was created by an artificial intelligence. So we'll play a game now: you're going to pick the fake one. I'm going to give everybody a couple of seconds to decide which of those four portraits is the fake one. Okay, here's the solution. This is the fake one. Regardless of whether you picked it or not, it's still really convincing. It really looks good.
Which is quite amazing, what algorithms can do. And you could rightly say: it's a painting, who cares, really? What's the societal harm of having somebody confuse paintings? That's not really bad. But what would you say if the same is done with faces? Let's play the same game again. Look at those four faces and pick the one that is fake. I'll give you a couple of seconds. Have you picked the one that is fake? The solution is: all of them are fake. <laugh> None of those people actually exist, but they're very, very convincing. And that is just pictures. If you pair pictures with sound and movement, you can end up in a situation like this: "What's up, TikTok? You guys cool if I play some sports?" "Hey, listen up, sports and TikTok fans. If you like what you're seeing, just wait till what's coming next." <laugh> This is not Tom Cruise. This is completely artificially generated. Not only is his face not real; they were also able to fake his voice, which is really, really convincing and really, really good. We can imagine what the technology will be capable of in a couple of years' time. And again you could say: it's Tom Cruise, a public figure, an actor being deepfaked; what does it really mean? But what if the same is done to a political leader, a politician, a head of state, and algorithms start to put words in their mouths? What does it mean for society? What does it mean for peace? So that's definitely a different issue than we had in the past. And then of course ChatGPT came along. It has been roughly a year or so since OpenAI opened its doors to everybody, and everybody could experience what algorithms are able to do and how they can generate text that is quite convincing. They can do a lot of interesting writing.
They can actually do academic writing. I'm not giving any tips here for students, but you can just put a prompt into ChatGPT and ask it to write an academic essay for you, and it's not too bad, actually. It looks quite convincing. But that also comes at a cost, right? What does it mean for education if professors are no longer able to assess whether homework or essays are authentic, or whether they have been generated by artificial intelligence? And what does it mean for society if we cannot trust that students actually learn the things they were supposed to learn at university, especially if those students become doctors or psychologists or lawyers or architects, who make very important, sometimes life-changing decisions? But it's not only an issue for academia in terms of education; it's also a problem for academia in terms of science, because ChatGPT is also able to write academic papers, quite interestingly. There was an interesting experiment done this year, in January, where two batches of abstracts were sent out to expert peer reviewers. One batch of abstracts was written by experts, by professors, by scientists; the other batch was completely fake and generated. And the human experts were only able to detect the fake ones 68% of the time, which is not a lot; it's a bit better than chance. That's not really good for the scientific community, not just because science needs to be true, but also because we use science to make policy. In fact, that's what we want: policy-making that is based on evidence and on scientific methods.
But if the scientists can be fooled, how can we make sure that we actually write good laws that deal with climate change, that deal with pandemics and things like that, if the experts don't really know what's going on anymore? So we have those two differences: the scope and the reach are different, and it is much more convincing than traditional ways of lying. The third difference has to do with silence, the silence around the algorithms. We don't hear them, we don't see them, we don't feel them, we don't smell them. But they're here. They're always here, around us. They're not as motivated and loud as Rousseau, who walks in and says, "Oh my God, Marie Antoinette, can you believe what she did today?", where you understand he is a politician, he has an agenda, you know what he's after. I don't know what an algorithm is up to. I don't understand what they're doing. I don't even see them or hear them, but they are still pulling the strings behind the scenes. And they're doing that in a way where they put us in filter bubbles. Every time you interact with a digital technology, it learns about you. It learns very quickly who you are, what you like, what ethnicity you have, what sexual orientation you have, who you voted for, and whether you're religious or not. And based on that information, everything is tailored for you. We think we have a shared reality online, but we don't; we have a fragmented reality. Everything I see is different from what you see. The prices I see on Amazon, the posts on Facebook, the tweets on Twitter, the search engine results: they're all tailored for me. They're not the same for everybody. It's not like a newspaper, where everybody sees the same thing. But the issue is, statistics show that people don't know that. 80 to 90% of people believe that what they see on Google is neutral.
That everybody sees the same search results, and everybody sees truth and an objective reality, when in fact we don't have that anymore. But what we do know is how algorithms do that. We have known since the Facebook leaks in October 2021, when we started to see behind the curtains of big tech companies and to understand a little better how their business model works. Their business model capitalizes on the attention economy. The idea is to keep you on the platform for as long as possible, because eyeballs mean advertising revenue. So they have to find ways to keep you engaged. And the Facebook leaks showed that. The research suggests that the thing that keeps people engaged is not rainbows and unicorns; it's toxic content. It is content full of rumors, scandals, lies, things that make you angry. Nobody wants to see pleasant content; it doesn't keep people as engaged. So we have algorithms that on purpose give us information that is very, very upsetting, just because it keeps us engaged, but we don't know about it. We don't feel it, because we don't even know that we're in this bubble. The other thing algorithms are doing is showing us only part of the voices that are actually on the internet. Content moderation algorithms, which are tasked with taking down toxic content, seem to be very biased, unfortunately. Algorithms think that content created by people of color or people from the LGBTQ community is more toxic than the content of white, cis people. But again, we don't see that, we don't know about it, and that is a big issue. This is what makes the spread of misinformation, and tailored misinformation, very, very different. That was the first point: why I think AI is very different from the regular type of misinformation.
That leads us to the second area where I think AI is different and is disrupting the legal framework that we have: fairness and AI. Fairness in AI is a big, big topic, and it is a big topic because of how the technology works. Put very simply, if you just ask the question "how does an algorithm work?": it is looking at the past, trying to predict the future. That's all it does. To give you an example, let's say I want to hire a law professor at Oxford, and I want to use an algorithm to help me with that. What I would do is feed into the algorithm all the historical information I have about past hires, everybody who has held those positions in the past: their reference letters, their grades, and so on, and I would ask the algorithm to create a profile. I would ask the algorithm: what do all those people who were law professors at Oxford have in common? Create a profile for me, because those people were good law professors, so they must have something special about them. Please make me this ideal profile out of the historical data that I have from the last 20, 40, 60, a hundred years. So the algorithm will come up with that profile from the past information. Then a new person will apply for the job, and the algorithm will ask: do you look similar to that profile? If you do, you get invited to a job interview; if you don't, you're going to be rejected. So let's think about who has been a law professor at Oxford over the last 20, 40, 60, a hundred, almost a thousand years. It is probably somebody who is middle-aged, white, British, and a man. So I wouldn't have gotten the job at all. I'm quite happy that at Oxford we still decide with humans when it comes to hiring decisions. But that is the problem, right?
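The "ideal profile" idea above can be sketched in a few lines of code. This is a toy illustration only, not any real hiring system: the feature names, the data, and the similarity measure are all invented. It averages the features of historical hires into a profile and scores new applicants by closeness to that profile, which is enough to show how a trait shared by all past hires leaks into the decision.

```python
# Toy sketch of profile-matching screening (illustrative only).

def build_profile(past_hires):
    """Average each feature over the historical hires to make an 'ideal' profile."""
    n = len(past_hires)
    return {k: sum(h[k] for h in past_hires) / n for k in past_hires[0]}

def similarity(profile, applicant):
    """Crude similarity score: 1 / (1 + summed absolute feature distance)."""
    dist = sum(abs(profile[k] - applicant[k]) for k in profile)
    return 1 / (1 + dist)

# Invented historical data: every past hire happens to share an
# irrelevant trait (is_male = 1), just like the Oxford example.
past_hires = [
    {"publications": 10, "years_experience": 12, "is_male": 1},
    {"publications": 8,  "years_experience": 15, "is_male": 1},
    {"publications": 12, "years_experience": 10, "is_male": 1},
]
profile = build_profile(past_hires)

# Two applicants identical on merit, differing only in the historical trait.
a = {"publications": 10, "years_experience": 12, "is_male": 1}
b = {"publications": 10, "years_experience": 12, "is_male": 0}

# b scores lower purely because all past hires were men.
assert similarity(profile, a) > similarity(profile, b)
```

Because every past hire had `is_male = 1`, the averaged profile treats maleness as part of what a "good law professor" looks like, so the otherwise identical applicant is ranked lower. Nobody has to use a protected attribute deliberately; resemblance to a biased past is enough.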
When you take data from the past that is unfair, and almost all data is somewhat unfair, you will transport the past inequalities into the future. And that is a problem we already know, already see, and have already had to deal with. Two examples really show that it's not just an academic problem; it is a real-life problem. In my home country, Austria, the employment agency tried to use an algorithm to help decide what kind of support should be offered to people who are currently unemployed. They used historical data to help them make that decision in a fairer, more objective way. And what happened is that the algorithm started discriminating against people who were older, people living with disabilities, and women. It reflected the unfairness of the job market, again transporting it into the future. Here in the UK we had a similarly problematic example with the Ofqual algorithm in 2020. During the pandemic it wasn't possible to hold the A-levels in person, so the whole thing had to move online. Instead of students sitting in a room and taking their A-levels, Ofqual designed an algorithm to predict the grade you would have gotten if you had been able to sit the exam. What happened, quite unsurprisingly, is that it started to disadvantage students of color and students who were not at independent schools. Again, the class system and the unequal access to education that we have in the UK were picked up by the algorithm and then reflected in the grades given to the students. So you see that the unfairness persists here as well. But then you could rightly say: that's not new either, right? Discrimination has always happened. Let's just use non-discrimination law to deal with it, because after all, it is there to protect us from that type of discrimination. Yes.
And some of it is helpful. But algorithms also group us and judge us in ways that humans would never think about. They might find out that you are a dog owner, that you own a dog, and use this information to show you certain products online. They might find out that you are a sad teenager, which is something that Facebook and Twitter did, I think, and show you certain content. They might find out that you are a gambler, or a single parent, or a video gamer, and use this type of information to make decisions about you. And you could say: what's the big problem? Who cares? That's not the same as traditional discrimination. Why would anybody care about those new groups? Well, if you lived in China, you might care. In China, the social credit scoring system exists. The Chinese government is deploying a scheme where they take privately and publicly available data and use an algorithm to decide who is a, quote unquote, "good citizen." If you get a high social credit score, that means very good things for you; it means you get better offers in supermarkets. If you are a "bad citizen," it might mean that you're no longer allowed to leave the country. In China, being a video gamer makes your social score drop. So all of a sudden, being a video gamer has negative consequences, even though that's not a criterion a human would usually use, because that's not how we think about people. We would never use that type of grouping to make decisions about people. But we don't have to look to China to see such examples. They are here right now, in Europe and in Western culture as well. We have known for 10 years that if you buy things online using an Apple machine, you will pay higher prices. So you're being discriminated against based on the type of machine you use.
If you apply for insurance in the Netherlands, you will pay higher rates if your address has only a number in it, whereas people whose address has a number and a letter pay lower rates. So that is a criterion used to discriminate between people. A piece of advice for people thinking of applying for a job: if you have to submit your application online, please do use a browser like Chrome or Firefox. Do not use Internet Explorer or Safari, because if you do, you're more likely to get rejected. If you apply for a loan online, the speed at which you scroll through the application, as well as whether or not you use capital letters, has an impact on whether you get the loan. When applying for a job, your face might actually be the reason why you get the job. Face recognition software runs over your face, determining how fast your retina moves, how much sweat you have on your forehead, the pitch of your voice, how you gesture, and that has an impact on whether you get an interview, or whether you even get the job. But it's not just the workplace. We do the same in education, where eye-tracking software is used to decide if somebody is gifted or not. We just look at how their eyes move and determine whether they should be put in advanced classes. So it's not the things you write on a piece of paper; it's how your eyes move that makes the difference. And lastly, your emotions play a role as well. There is a lot of facial recognition software being used during interviews to infer whether you're happy or sad or angry, in order to infer whether you're enthusiastic about a job or not. And again, that's just how your face moves, the pitch of your voice, things like that. Border controls use the technology, doctors use it for diagnoses, and HR uses it to monitor how you behave during a business call.
So those types of groups are suddenly here, but they're very different from traditional groups, right? We have two types of new groups that we are starting to discriminate against: weird groups, like dog owners or Internet Explorer users or video gamers, and really incomprehensible groups, like how I move my mouse, how fast I scroll, what kind of letters I use, how fast my retina moves. But obviously none of those groups are protected under the law. Why would they be? They never needed our protection before. Non-discrimination law focuses on things like ethnicity, gender, sexual orientation, disability, and age. But sad people, Apple users, fast retina movers, slow scrollers: those people don't find any protection, yet they are the basis for very important decisions in our society. And the law is not prepared for it, because it never had to be. If you're interested in this topic in more detail, I wrote a very long paper on it. I can't go into detail here, but if you're interested in that question, you can download it for free. So those were the first two ways in which I think AI is disrupting the law and society: because misinformation is different, and because discrimination is different. And I think the workplace is different as well; that is the third one. With all the things we have discussed, you could rightly ask: what does this mean for our job market? If you have an algorithm that is twice as fast at half the cost, is your job still safe? That's an interesting topic, and one that is not really discussed in public at all anymore. The last time we heard about this was roughly in 2016, when people were predominantly worried that robotics would take over manual labor, which is an absolutely justified worry, and it did. But it was a very short discussion and it never actually led to any interesting results.
But now we're in a situation where it's not just jobs involving manual labor that are at risk; it is other types of jobs as well, because it's not just manual tasks that AI can do. AI can do a wide range of things. It can write poetry, it can write screenplays; we saw how it can produce paintings, like the Rembrandt one. You can use ChatGPT to write a press release, to summarize a policy report, or to do legal drafting. We saw that it can write scientific articles. AI can even write its own code. So what does that mean for artists, doctors, journalists, lawyers, office workers, scientists, or even coders, if part of their job can be automated? And again you could say: this is not a new thing; technology has always displaced and replaced some of the work. But it's different, because we never had a technology that disrupted so many sectors at the same time. We never had a technology that puts artists and doctors and journalists and office workers at risk at the same time. And that is something I think we need to worry about, and the legal framework needs to adjust to it. Of course you can say, and this also often comes up: well, some jobs might go away, but others will come in their place. That is very often something that comes from industry. But if you look at the recent news, it might not actually be true. Recently, in May, IBM issued a public statement saying they are not going to hire for certain back-office roles anymore; they're going to replace them with AI, which means that roughly 8,000 people are not going to be hired. And if you look at other tech companies, from Twitter to Facebook to Amazon to any of the big ones, you will see that they have let a lot of their staff go. So whether it will actually generate new jobs is unclear at this point.
So here we are; those are the three topics, the ways I think AI is changing the law and changing society. After taking the first half to scare everybody, I also want to talk about solutions and what can be done about it, if anything. And here we are in a bit of a good-news, bad-news situation in terms of proposed solutions. The bad news is that I think at the moment we are focusing on the wrong type of solution. Again, with ChatGPT: when that thing came around, which was roughly in November, very soon after, roughly in March or so, there were a couple of public open letters calling to stop that type of technology, to pause it for a little bit so we can catch up with our legal and ethical thinking around it. At first this sounds good, right? It sounds very good that we say: oh, this is a technology that is unprecedented in a way, it's very disruptive, let's do something about it. But the interesting thing is, if you look exactly at what that letter has in mind, I'm not sure that it's worrying about the right thing. If you look at the wording, it says things like: we have to make sure that AI doesn't rival human intelligence; we have to make sure that it doesn't outnumber us, outsmart us, make us obsolete, and replace us; we have to make sure that we don't lose control of it. And from the wording you can see that what the letter is really concerned with is the idea of AI somehow waking up, becoming sentient, having its own agenda, and doing things that have nothing to do with us. The actual issues, other things like misinformation and job automation, are mentioned roughly in a footnote; really it's about this idea of AI completely waking up and taking over control of our society. After that, interestingly enough, another letter was issued in June, where all of the nuance was completely abandoned.
And that public letter said: oh, we have to take the risk of AI waking up and enslaving us just as seriously as we take the risk of pandemics and nuclear wars. Right? That's heavy. That's heavy, and also untrue, because there is no scientific evidence that we are on a path to sentient AI. In fact, there's not even evidence that such a path exists. The issue in reality is that we have a bunch of other issues to deal with that are far less science-fictiony and maybe not as exciting, but those things are here right now, and we really have to deal with them. And I think this focus on AI running away from us is really problematic, because it's taking the focus and the oxygen out of the room to think about the actual issues. And again, just with AI, those are areas of law that I think are all completely being disrupted because of it. Algorithms do need data. Yes, that means they're just grabbing everything they can find on the internet. But data doesn't grow on trees; it belongs to somebody. People do have intellectual property rights, and that means things for people who publish as well as for artists. Their data is being taken away and repurposed without them being asked. We already talked about the issues of misinformation, what that means from a regulatory perspective, and how convincing it is. One of the things that is never really mentioned is the environmental impact of those technologies. Data doesn't grow on trees; it needs a lot of resources. It needs a lot of water, it needs a lot of space, it needs a lot of electricity.
So just to give you an idea: one training session of ChatGPT requires the same amount of energy as 126 Danish households use in a whole year. One training session. And to cool the servers in a medium-sized data center, you need 360,000 gallons of water every day. An interaction with ChatGPT, 20 questions, costs us half a liter of water. Those are resources that nobody talks about. Data doesn't grow on trees; it's not organic. It costs us something, and it costs us something here and now. We already talked about the issue of workplace automation and what it means for society if people might not have their jobs anymore. We talked about discrimination, and how discrimination law might not be good enough for that purpose. Data protection: your data is being taken, but it is yours, and it can be used against you, because, for example, you could use ChatGPT to find out intimate information about other people. With prompt engineering you can get ChatGPT to tell you something about a person that that person doesn't want to have disclosed, which can then lead to reputational damage if that information is being spread about you. And of course journalism and academia, the truth seekers in our society, their field is being disrupted. That is a big issue, especially because we rely on them. So those are the real issues. And unfortunately, around the world as well as in the UK, policymakers started buying into the idea that this is not the real battle; the real battle is the speculative, fictional one, where we worry about the killer robots at some point taking over.
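As an aside, those resource figures are easy to sanity-check. The sketch below only converts the numbers quoted above (360,000 gallons per day; half a liter per 20 questions); the gallon-to-liter conversion factor is the one thing added.

```python
# Back-of-the-envelope check on the resource figures quoted above.
US_GALLON_IN_LITERS = 3.785  # approximate US liquid gallon

# Medium-sized data center: 360,000 gallons of cooling water per day.
daily_cooling_liters = 360_000 * US_GALLON_IN_LITERS

# A 20-question chat session: about half a liter of water.
liters_per_question = 0.5 / 20

print(f"cooling water per day: {daily_cooling_liters:,.0f} liters")  # 1,362,600
print(f"water per question: {liters_per_question * 1000:.0f} ml")    # 25 ml
```

In other words, a single question costs about 25 ml of water, and one such data center goes through well over a million liters of cooling water a day.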
And you can actually see this here, with the government's recent announcement of the AI Safety Summit, which will be held on the 1st or 2nd of November. Having a flagship event to position the UK as a leader in AI is actually a fantastic idea. But the whole focus of that summit is none of the issues we have discussed. It is all about AI safety in the sense of making sure we don't lose control of AI. In fact, the scope actually says that it doesn't want to talk about misinformation and disinformation, about discrimination, or about workplace automation; those, supposedly, are not as important at the moment. We need to talk about losing control of AI. And I think even though it's a very powerful narrative, it's a very wrong narrative, because AI isn't an alien that lands on our planet, so that we now have to negotiate with it and say: please, please, please listen to our values and laws. No, no, no. We created it, right? We built it. Every design decision is an ethical decision. How biased the system is, is a design decision. How transparent it is, is a design decision. How much data it needs is a design decision. It wasn't born that way; we made it that way. And saying that we now have to plead that it still listens to us is taking away the responsibility of the people who built those systems and knew what they were doing. So that is the policy perspective, but it's not just in the UK where I think there's a bit of trouble; it's not much better at the European level. Many people will know that the European Union is proposing the AI Act, which is probably going to come into force either at the end of this year or next year. And again, the framework has really good points in it; there's good stuff that I'm really happy about.
It has this risk-based approach, where you think about where technology is being deployed, and depending on how risky it is, there are certain rules you have to follow, which is a really good idea. The most interesting part is the high-risk applications, where the legislator said: here are eight different categories that we believe are high risk, so if you deploy AI in one of those, you have to make sure to follow certain rules. None of it is banned, but you have to do certain risk assessments and documentation, you have to give certain information to people, that type of thing, right? But if you look at it, it doesn't really deal with any of the things we have discussed. It doesn't really talk about the misinformation issue. It does talk about employment, but it doesn't talk about what we would do with people being displaced; it just says, if you use AI for employment, be transparent about it. It doesn't talk about what it means for the workforce being displaced. The question of whether we should use it at all is not discussed. That is something that was completely left out. The AI Act presupposes that AI is here to stay and tells us how to deal with it; it never asks what will happen to the people already working in those areas. With bias, it's the same thing: it's really mentioned only once or twice, but it doesn't tell you anything. You have to examine those biases, but it doesn't talk about what bias; it doesn't talk about the new types of biases, the new groups we have discussed. None of that is mentioned at all. And this is an issue, because it really doesn't get to the core of the problem. The other interesting thing about the AI Act is that developers have to follow the legal framework, but they also have to follow the accompanying standards.
So you have the AI Act that will come into force by the end of the year, and then there will be another two years or so in which standards are developed that are supposed to give more detail on that framework, because the Act is just an overarching idea, and the details, the nitty-gritty, come in the standards. The interesting thing about the standards is: who writes them? In the European Union, it's common for this to be done by CEN and CENELEC, which are the standard-setting bodies in Europe. Those are not public entities; they're private entities. So they're not democratically legitimized to do so, yet they're the ones setting the standards. Plus, they invite people from industry to sit on the working groups as well. So you have CEN and CENELEC plus industry writing those standards for themselves. Other entities, like NGOs or civil society, only have so-called observer status, which means some of them get a seat at the table, but they don't get voting rights. And then the question is: how can you be sure that developers are actually following those rules? The issue is that they do this with self-assessment. So a developer, in effect, not only wrote the rules they need to follow; they also assess whether they actually comply with those rules. And of course it could work, it could work, but it could also go terribly wrong. The other issue is that people don't really have individual rights to complain against any of this. What is the good news? Just very quickly, because I don't want to leave you with only bad news: it's not over. The reason I'm doing this is to show that the fight is not over. In fact, the European Parliament has proposed amendments, and I really hope they will go through. This is why I'm trying to raise awareness around this.
For example, they have now suggested that recommender systems that can contribute to misinformation should be high-risk systems. That would be one way of dealing with it, especially when it comes to systems that can impact the electoral process. And they have proposed that you should have a right to complain and a right to explanation. Again, they're in trilogues, so we don't know what the final version will be, but the Parliament has proposed this. Unfortunately, nothing is in there about the conformity assessments, the fact that I certify my own compliance after I've written the rules for myself; there is nothing in there about that. But there's still room to get amendments in if needed. And the last piece of good news is that I want to dismantle the fake news that the genie is out of the bottle and there's nothing we can do about it; that AI is here to stay, and you either jump on the train or you're going to be left behind. That is a rhetoric we have to question, and we have to ask who is telling us this. It's usually people who have an interest in the genie being out of the bottle, rather than people talking about truth and reality. Every time there is a new technology, we have a hype cycle: autonomous cars, the metaverse, NFTs, crypto. Every time something like that happens, people say: oh, nothing is ever going to be the same. But that wasn't always true, and it wasn't true because people said: that's actually not a technology I want to have. I don't want to adopt it; it's not useful for me. And that's what I want to get across: the fight is not over. We are the ones who get to decide whether the genie is out of the bottle or not, because we are the ones who shape technology, if we want to. We are the ones who can decide whether to adopt it. And we have a say in how to regulate it, so we can make sure that technology serves us and not the other way around.
Thank you very much. Thank you very, very much. I'm very conscious that Sandra has to catch a train shortly, so I'm going to rush through a few questions. I think one of the issues that concerns me as a scientist myself is that it's quite difficult to see that basic principle of science, the replicability from the original data. There is so much silence around how the algorithms work. Yes. And for the layman, which is most of us, I suspect, you can't see what it does. Yes. And therefore you don't know what bits of data within it are correct. So if we go back to the bit about the retina and the mouse, that group of abstract things used in, say, job appointments. Yeah. Where did they get the evidence from, and how robust is it? And what would happen if you tested it? Yes, I love that question, and it's exactly the right question. We don't know, and that's the issue. What you're pointing at is that there is a scientific thirst, or even need, to not just look at correlation but also think about causation. Correlation just means that two things happen at the same time; it doesn't mean that one is impacting the other, or that they're even connected. As good scientists, we are interested in correlation because it shows us where we have to look, but then the journey starts, and we have to figure out how those things are actually connected. Algorithms disrupt this, because they don't care about this at all, and we don't care about it either anymore. We just say: there's a correlation, and the correlation is a good enough indication that there's something going on, and I don't know what, and I don't care about it at all. In fact, algorithms are used to find correlations; that's their whole purpose. Sorry, the AV is getting angry at me <laugh>. So if we go back to the job, yeah, I'm just using this as an example. Yes. How do they know they've got it right?
So if you're hiring, if you're applying an algorithmic model through an HR department working across multiple companies. Yeah. How, later on, do they know whether that algorithm has hired the right people? Yes. We actually never really know for sure, right? Because we never know what would have happened with the people who didn't get the job. So we don't actually know if we made the right decision. But we assume the algorithm has some indication: for example, I don't know, if you are somebody who downloads one browser over another, that you're more tech-savvy, and that means you're maybe going to stay longer in the job. But whether we're right or not, we don't actually know, and we don't know it because there are certain things that are just completely impossible to predict. First of all, what makes a good worker is subjective. And whether somebody stays in the job is impacted by so many things that science could never comprehend. So really knowing whether we're right or not: we don't; we can just make our best guess. And that's what algorithms do. They do guesswork, but we believe that they know something. We think they're so intelligent, but in reality they don't know. Thank you so much. On that very point, surely the question is not, does the algorithm know that it's picked the right person for the job, but, is it better than a human interview panel doing the same thing? Yes. And again, the question is: what does better mean? It's definitely faster, it's definitely cheaper, if that's what better means. But the algorithm doesn't sit down and think: I wonder what makes a good job applicant. There is no critical thinking about the capabilities a human has to have in order to be a good employee. It's just mimicking what it has seen in the past. It's not coming up with new ideas; it's just learning from our past. So it's just finding new, creative ways to do the same thing we have been doing.
So in a way it's not better; it is the same. The issue, though, is that it gives us the impression that it is somehow neutral. People are biased; people don't know that they're biased. Algorithms, this is just math; how can math be biased? But obviously it is, because the data fed into it carries the legacy of past discrimination. Yet we think it's more neutral. And so in a way it's more harmful, because it's faster, it operates on a larger scale, and it discriminates in ways we wouldn't anticipate. So in a way it's worse. The good news is, if you know about it, you can do something about it. You can test it; you can actually try to open up the black box and figure out how biased it actually is. You can't do that with a human; you can't crack open their head and figure out what's going on. But you have to first accept that, by default, that thing will be biased, and then you have to do something against it. If you crack it open and figure out where the bias lies, you can do something about it. But it means admitting first that there's a problem; then you can actually solve it. And then you could get to the point where you actually make fairer decisions than you used to. But it requires an active step. If you just let it run, it's going to make things worse; if you do something, it makes things better. Sorry. Thanks, Sandra. One of the problems you didn't talk about with AI is the effect of the internet eating itself, the Ouroboros problem of AI-trained systems creating data which is then used to train more AI systems. Yes. Related: I saw a good joke last week which said that the most useful and valuable thing in five years' time will be a copy of Wikipedia on a USB stick from 2022 or something, because it will all just fall apart. Yes. As everything eats itself. Yes, absolutely. Absolutely.
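The "open up the black box and test it" step mentioned above can be sketched very simply. This is an illustrative example with invented data; the group labels are hypothetical, and the four-fifths threshold is a common heuristic from US hiring guidance, not something specified in the lecture.

```python
# Minimal sketch of a disparate-impact check on a model's decisions.
# All data here is invented for illustration: (group, hired) pairs.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_a", 1), ("group_b", 0), ("group_b", 1), ("group_b", 0),
    ("group_b", 0), ("group_b", 1),
]

def selection_rate(group):
    """Fraction of applicants in `group` that the model selected."""
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("group_a")  # 4/5 = 0.8
rate_b = selection_rate("group_b")  # 2/5 = 0.4

# "Four-fifths rule" heuristic: flag if the disadvantaged group's
# selection rate falls below 80% of the advantaged group's.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection ratio: {ratio:.2f}")  # 0.50, below 0.8 -> flag for review
```

The point is not the particular threshold; it is that, unlike a human panel, the system's decisions can be logged and audited at scale, which is exactly what makes "cracking it open" possible.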
What you would probably need to do is clean up the internet at some point, which is not really practical, because the internet doesn't forget. The internet will never forget; anything you put online will always be online. You're absolutely right that we're coming into a situation where there's just so much information, and wrong information, and it's just going to be, as you said, eating itself. That's a whole other harm that I didn't touch upon, but I fully agree. Older algorithms were presented with a training data set. Yes. Which grew as required. So that argument says that we no longer have human control over the training data set; the data set is being sucked in on its own account. Is that correct? Because otherwise you could take your last slide and say: maybe you should make the people who design the training data set accountable for the consequences of their actions. Oh, I'm absolutely saying that. I'm absolutely saying that the people who make the design decisions should be held accountable. It's never going to be an algorithm, in my mind, that should be held responsible. And the idea that algorithms just do what they want without being asked, again, that's not really how it works, right? There is still a person who makes the decision to use a certain data set, or to use it for a specific application, and that person has to carry the responsibility if it fails. The idea that algorithms carry responsibility in themselves is problematic from a legal perspective. As a law lecturer, I do understand your point that, you know, when it comes to misinformation and discrimination, we still haven't lost the fight, because, as you said, there are a lot of things we can do in terms of changing the design of AI. Yeah. But one question I had in mind is: what do you see as a potential solution to AI replacing the workforce?
You know, as a lecturer, I often tell my students: would you want to see a robot teaching you? The AI model probably can do a better job than me. So in terms of the replacement of actual labor, could you look at tax laws, perhaps? What are the solutions you have in mind? Absolutely, yes. It's one slide I skipped over because I was running out of time, but your point was on that slide, actually: what would we do instead? We might not know what kinds of jobs will be replaced and which ones will be created, but we already know that people want to pay their rent. We already know that today. And so what that means is we have to make sure we have regulation that, for example, guarantees minimum wages for people; that we think about things like robot taxes, automation taxes: if you automate part of your workforce, you should pay tax for it, which would then fill the hole we have in the budget. The same with universal basic services, for example, to make sure that you're still protected even though you might have lost your job. But again, that's a very unpopular thing to say, because it just means we need more money, and that's the opposite of what you do when you ultimately want to cut spending. But that is the core of what needs to happen: thinking about labor law and employment strategies on that scale. Exactly what you said. Look, ladies and gentlemen, I'm really sorry to cut the questions short; I'm very anxious that Sandra gets her train. It's been a delight to have you here talking to us. I hope you've learned something and not been too frightened by what's going to happen. I hope those bad actors are hiding behind an algorithm somewhere else <laugh>. Sandra Wachter, thank you very much. Thank you.