Gresham College Lectures
Can AI Protect Children Online?
December 04, 2023
Gresham College

Could artificial intelligence be used to tackle online harms to children? What are the specific “solutions” AI could offer – for example, age verification, preventing the sending of intimate images, and stopping the promotion of harmful content - and what would applying these look like in practice?

What ethical dilemmas and rights challenges does this raise? What do policymakers need to understand to develop good policy around AI? Are alternatives - like image hashing - potentially more effective?


This lecture was recorded by Professor Andy Phippen on 21 September 2023 at Barnard's Inn Hall, London.

The transcript and downloadable versions of the lecture are available from the Gresham College website:
https://www.gresham.ac.uk/watch-now/ai-children

Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/

Website:  https://gresham.ac.uk
Twitter:  https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege

Transcript

I'm going to start with a spoiler: no, it can't. But it's not time to go home yet. What I want to do here is not really a deep dive into AI, or a close look at the Online Safety Bill or anything like that. I really want to talk about AI literacy. Any techies here, anyone from the tech sector? You've probably had the "but surely it can do that" conversations, and the "but we've promised the client now, so just get on and do it" conversations. I'm going to dwell particularly on the political rhetoric around this issue, because when I wrote this lecture the Online Safety Bill hadn't received Royal Assent, and now it has. I'll talk a little about some of the challenging language around that, and the almost-belief that "I've seen what AI can do, I've used ChatGPT, it's amazing, so why can't it do that?"

There's my email address. If folk do want to contact me after this, you're very welcome to. I generally sit in my office in Cornwall writing and I'm always online, so I'm always very happy to hear from people. Occasionally someone gets in touch saying, "Three years ago I saw a talk of yours. Can I just ask about this thing that's happening at my kids' school?" That's fine, I don't mind doing that either.

I'm not going to do a deep dive into machine learning. I'm going to look very briefly at how it broadly functions, why it works in certain areas and why it doesn't work in others. But as I said, I'm going to spend most of the time looking at specific use cases, the challenges with them, and how we've got to where we are, because we are undoubtedly in a very exciting time for artificial intelligence. Probably because it's so visible now, and because we can all get hold of it and see what it can do, people are very, very excited, and we should be. But equally there are parameters we need to think about and use cases we need to think about, and if we understand a little about how it works, we're probably in a better position to look at the demands being made of it.

One of the things I find, particularly with the Online Safety Bill, which as I said received assent this week, is that it has become: yes, there are issues about children online, but don't worry, the Online Safety Bill is going to sort it. I don't know if you saw, but in the press a few weeks ago there was another report about sexual harassment and sexual violence in schools, with discussion of young men accessing pornography and getting unrealistic expectations around sex and relationships. The response to that from government was: don't worry, the Online Safety Bill is going to sort all that out. We'll stop young people looking at pornography and then we don't have to worry about it any more. I find that quite challenging, because I've spent the last 20 years talking to young people about how the online world affects their lives, and when I bring this talk to a close I'll say a little more about that.
The idea of platform liability, which is pretty much what the Online Safety Bill does (the platforms need to sort this out now), is one thread, but it's certainly not what young people tell me. If you ask them how we can help them have a better online experience, it's very rare they talk about technical intervention. I'll come back to that in a little while.

A little more about me, though. Somewhat unusually in this area, I'm an academic with a tech background. Probably before some of you were born, I was writing genetic algorithm code for engineering optimisation problems in the nineties. In those days we had to write it from the ground up, which sometimes means I'm a little more sympathetic to the demands placed on the tech sector than some. Not that I don't think the tech sector has some serious responsibilities here, but equally they haven't got the solution on their own. I've already said I've spent a lot of time talking to young people. The other thing that's important is that I'm a parent. From government ministers to High Court judges to people I'm socialising with, they'll hear "what is it you do? You cover stuff about online harms? Well, my son..." and then you get that discussion; lots of people take a parental perspective, as you might imagine. My kids, who are nearly 20 and 21 now, had interesting teenage years with me as a father <laugh>, with me probably sometimes asking questions they wished I wasn't asking. But equally we always had some very frank conversations. Generally it was when a group of their friends were round that they'd ask all the questions they couldn't ask at school, or couldn't ask other people, because they were just told "don't look at it". So it's been an interesting time. I'm also a Fellow of the BCS, for those of you involved with the BCS.

So let me start with a little history. AI has been around for quite a long time now. The fifties and sixties were where the first AI summer came from; we talk about AI summers and AI winters. Some folk say we're still in the biggest AI summer; some say we're in the AI autumn. Looking back, there was another really big AI boom in the late eighties and early nineties, if any of you remember that time: expert systems were going to replace GPs, all those sorts of things, with similar discussions about whether it was going to replace jobs. I had lunch the other day with my old academic mentor, Professor Frank Land, who was at the LSE for 50 years. He's 96 now. I said to him, "What do you think about all this AI that's going on at the moment, Frank?" and he said, "I've seen it all before" <laugh>. "Really?" "Well, no, it is different now. We have far greater data and far greater processing power than we used to." I think he was a little sore because ChatGPT had said he was dead, so he was a bit annoyed about that.
But what he was saying, really, is that having an understanding of how it works gives you a richer experience with it. He was also saying that we never seem to learn from history when it comes to computer science; a lot of these things have happened before, in different forms.

As I said, probably one of the most visible things about AI at the moment is that it's so available to us all, because you can just sign up and go on to ChatGPT, and every university in the country is absolutely terrified: we're going to have to do something other than set essays now, aren't we? Although, interestingly, I visited the computer science department at King's a while ago, and King's do some very good AI research. I asked what they were doing about GPT, and they'd been talking to the students about it: the professor got it to generate some code based on a problem, then they unpicked it and started to look at where the bugs were in the code, and then they went on and tried to understand how it worked in order to create those bugs in the first place. That sounds like a really progressive way of embracing these technologies. Some universities just sent out a memo to students saying "if you use ChatGPT, you will be failed". Well, how do you know, first of all? And secondly, really? Look at the history: 20 years ago it was "you can't use search engines to write essays", and now can you imagine a time when you wouldn't use a search engine for that sort of thing? It's a bit like trying to fight the tide, it really is. So we should be embracing these tools and looking at their positive aspects. But there are some things starting to creak, and I'll look at a few examples as I go along.

The first thing is that the sheer amount of data being processed, and the processing power we've got, is very different from before. I'll come back to training data in a minute, but the volumes of training data you can use are far bigger than they were, and you can process them a lot faster. So it does look amazing, because it's writing this stuff in real time. I looked myself up on ChatGPT and it was about 75% accurate about what I've done, although more recently, I don't know if anyone else has seen this, it's added a disclaimer at the end: "however, our data was collected in September 2021, so you might like to look up other sources, because academic careers change and they might be somewhere else now". Which is quite different from a while ago, when it placed me at an institution I've never worked at.

But there are some fantastic things that have happened: massive improvements, generally in what I might call narrow domain areas. One of the examples in the transcript alongside this is tumour diagnosis. There are amazing developments around tumour diagnosis now. It's a fairly closed system, if you like: you can feed it tons and tons of medical data.
And it's getting extremely accurate, when it's shown a new scan, at diagnosing whether it's a tumour, whether it needs treatment or whether it's something not to worry about. The US government is talking about that a great deal at the moment. If you look at something like gaming, AI has made massive improvements. Any gamers here? It's interesting, no one ever wants to admit they're a gamer <laugh>. It's all right to be a gamer, it's fine <laugh>. The AI in games is way better than it used to be. And we have all these visible content creation tools now, although apparently in Vic's lecture on Tuesday some examples of AI-generated images produced some interesting responses. It's normally the fingers it can't quite get: you end up with 15 fingers on each hand. But it's really impressive stuff, and the fact you can do it on your phone is equally impressive.

It starts to become more complex, first of all, if you've got a limited amount of data, but also when you're in messy, complex systems with lots of human intervention, which is how you might describe social media, for example. I'll come on to that in a little while. There are still some challenges around creativity. People have said the easiest way to get automated vehicles on the roads is to stop humans driving cars, because it's fine if the entire system can work on its own. You see some great progress in automated vehicles at the moment, but I live in North Cornwall, fairly remote; send an automated vehicle up my lane when a sheep's jumping out or there's a fire up in the hedges and it starts to become more problematic. It's always the edge cases that cause the problems. But clearly there is an awful lot of great stuff to look at in the AI world now.

One of the challenges with that is: "but you've done that, so why can't you do this?" And I think that's kind of where we are in what I might broadly call the online harms world. One of the other things I do occasionally is mediate between organisations. I was contacted a while ago by a charity I do quite a lot of work with, and they said, "We're commissioning a development company to provide a new online facility." Now, they run a helpline that supports victims of fairly serious online abuse, between nine and five, and they were saying that because of the domestic nature of a lot of this abuse, they need to support victims outside the nine-to-five as well. "We've had this company come to us; they say they've got a solution where we can implement some sort of chatbot, and it's all right because they're using AI. It's very exciting." I said, "But what are they doing with the AI, then?" "They said they've got a lot of experience in building these AI systems, and they need our case data; they're going to train it with our case data and then it's going to support victims."
I said, "Well, it seems like quite a leap, because every single one of the cases you deal with", and I know about the cases they deal with, "is unique. It's very unusual that one person's experience is the same as another person's. So if you're feeding it data where every single case is unique, how is it going to generalise from that?" They sort of accused me of being a cynic; I'm just the old guy who's seen it all before. Then about six months later I asked how it was going. "Well, they haven't delivered everything we expected." "Oh, okay, why would that be?" "Well, they said the data's quite difficult, because every case is unique." I should have recorded myself saying it at the time. It wasn't anybody's fault. On the one hand the developers were saying "we've got lots of experience of providing chatbot solutions", and the procurers had gone "this is great", and they hadn't really explored the sorts of data involved, because they were coming at it from their own perspective, and the charity weren't sufficiently articulating the requirements of what they wanted.

I can remember, years ago, doing a bit of mediation between a charity and a tech company. "They were supposed to build us a website; they haven't done what we wanted." I said, "Well, what's your requirements spec, then?" And they went, "The what?" <laugh> How can they possibly know what you want without some sort of formal documentation? "Well, we just told them." And saying "it's okay, we're using AI" is a bit like saying "it's okay, we're using maths" <laugh>. Of itself, it's not the solution; it's how you apply it.

Now, to the techies in the room, I apologise, because this is as detailed as I'm going to get about AI, or more specifically machine learning. Most of the techniques being used at the moment focus very much on machine learning, and this is a very basic model, because I don't want to get into supervised learning and unsupervised learning and all the different sorts of learning and feed-forward networks and so on. You take an algorithm (I'll come on to algorithms in a minute) and you feed it data; the more data you feed it, the better, because of how extremely well put together these systems are now. I can remember, when I was doing my PhD, a friend of mine building a neural network from scratch; he now earns a huge amount of money working in financial systems <laugh>, it's always financial systems. The system learns by recognising patterns in the data, so if you then show it similar data, it starts to recognise it. Then you look at the model, see what it's producing, see whether the outputs are accurate or inaccurate, and then you improve it.
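To make that feed-it-data, check-the-outputs, improve-it loop concrete, here is a minimal sketch in Python using scikit-learn and purely synthetic data. It is an illustration of the loop, not how any real safety system is built, and every name and number in it is made up for the example.

```python
# A minimal sketch of the train / evaluate / improve loop described above.
# Synthetic data, a stock scikit-learn classifier, and a held-out test set
# to check whether the model generalises to examples it has never seen.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Gather labelled data (here: 2,000 synthetic examples with 20 features).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 2. Keep some data back, so the evaluation uses examples the model never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 3. Feed the training data to the learning algorithm.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Look at the outputs: if accuracy on unseen data is poor, you go back,
#    improve the data or the model, and repeat.
print(f"accuracy on unseen data: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The important part is step 4: everything the developers learn about the model comes from what the held-out data can reveal, which is exactly why the quality of the data matters so much in what follows.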
There's an apocryphal tale of image recognition, the tank recognition system, which some of you may have heard of. It didn't actually happen, but it's a really good way of explaining image recognition. A military system was developed to recognise tanks. It was trained with photographs of friendly tanks and photographs of enemy tanks, building up over time, and then it was shown a picture of a new tank, not from the training data, and it was almost a hundred per cent accurate in determining whether it was an enemy tank or a friendly tank. Fantastic, we need to get this on the battlefield. Then they deployed it in a battlefield situation and it went down to about 50:50, almost completely random in how accurate it was. "But in the lab it worked really, really well. Why isn't it working?" They looked back at the training data, and it turned out that all the friendly tanks had been photographed on a sunny day and all the enemy tanks had been photographed on a cloudy day. So what the algorithm was actually doing was determining whether it was a sunny day or a cloudy day. As I said, that's an apocryphal tale, but it's a nice way of explaining how image recognition works, and how sometimes we don't actually know what's going on in the middle of that box; we just know the stuff coming out at the end is really, really good. I think we're starting to understand what goes on in the middle more than we once did, but it's a nice way to illustrate the model (there's a toy version of it below).

But things do go wrong. Algorithms: they're bad, aren't they? I did some training with a group of safeguarding professionals a while ago on young people and online harms, and I always ask the organiser whether there's anything specific anyone wants me to cover. One of the requests that came back was "can he talk more about the bad algorithms?" Algorithms aren't bad <laugh>. There was something on the BBC this morning about TikTok and recommendation engines, where TikTok was supposedly recommending excitable videos (about that thunder, perhaps <laugh>). One of the examples it gave was that TikTok promoted a lot of videos about the death of Nicola Bulley, and as a result people were turning up at the village, and this was because of the recommendation engines, because of TikTok's algorithm. It was interesting, because the mainstream media were doing a huge amount of coverage of that story at the same time, so to isolate TikTok and say it drove people to that village was a bit of an odd one; I think one of the tabloids had 15 articles in one day about that case. And then everyone said, again, isn't it terrible what TikTok's done. Now, there are challenges with recommendation engines, I certainly wouldn't disagree with that, but the language was, again, "it's the algorithm, that algorithm".
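Apocryphal as the tank story is, the failure mode it describes is easy to reproduce in miniature. In this sketch, with entirely synthetic, made-up data, the lighting is perfectly confounded with the label in the training set, so the classifier scores almost perfectly in the "lab" and roughly at chance in the "field"; the model has learned sunny versus cloudy, not friendly versus enemy.

```python
# Toy reproduction of the (apocryphal) tank story, using synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def photos(n, confounded):
    labels = rng.integers(0, 2, n)                 # 0 = friendly, 1 = enemy
    if confounded:
        # In the training photos, lighting tracks the label exactly.
        brightness = np.where(labels == 0, 0.8, 0.2) + rng.normal(0, 0.05, n)
    else:
        # In the field, lighting has nothing to do with the label.
        brightness = rng.uniform(0, 1, n)
    noise = rng.normal(0, 1, (n, 5))               # other, uninformative "pixels"
    return np.column_stack([brightness, noise]), labels

X_lab, y_lab = photos(1000, confounded=True)
X_field, y_field = photos(1000, confounded=False)

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
print("lab accuracy:  ", model.score(X_lab, y_lab))     # close to 1.0
print("field accuracy:", model.score(X_field, y_field)) # close to 0.5
```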
One of the more famous algorithm controversies of recent times was during lockdown. Obviously young people couldn't sit exams, so grading was based on teacher assessment, and that was sent off to the DfE for processing, to make sure it was all normalised. The colleges gave their data to the centre, the centre ran it through a normalisation process, and then young people got their grades. And when people started to unpick the results: hang on, people in deprived areas seem, generally speaking, to be getting lower grades than people in affluent areas. That's a bit dodgy, isn't it? <laugh> It's quite good being in this room <laugh>. And then Boris Johnson came out, because obviously there was a lot of negative press around this, and said it was because of this "mutant algorithm". Mutant algorithm. That's interesting, isn't it?

There was another story a while ago as well (the links are in the handout that goes along with this) where the UK passport photo checking service was rejecting photographs of people of colour and not rejecting white folks' photographs. And again: sorry, racist algorithm. An algorithm can only do what it's told to do, it's fair to say. It's not going to go, "I know I've been told to do this, but I've decided to do something different." In terms of the passport checking service, it's far more likely the problem was the data. Has anyone read Weapons of Math Destruction by Cathy O'Neil? She talks about this quite a lot; again, it's in the handout. You can build a biased data set and you can feed that biased data set in as training data. So it's far more likely that the training data for that system had far more white people than people of colour in it, meaning it was far more accurate at assessing the acceptability of a photograph of a white person than of a person of colour (there's a simple illustration of this below). But to say the algorithm was racist gives it a sentience it really simply doesn't have. And that's one of the interesting things about a lot of the discourse in this area. I think a former Secretary of State was very clear that people should stop using algorithms because algorithms are bad, which is a really interesting concept for the coders amongst you.
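One practical lesson from the passport-checker story, and from Cathy O'Neil's argument, is that a single overall accuracy figure hides this kind of failure; you have to break error rates down by group. Here is a minimal sketch of that audit in Python. The numbers are entirely hypothetical, invented for the example; the point is the pattern, not the figures.

```python
# Sketch of a per-group error audit. All figures are hypothetical.
from collections import defaultdict

# Hypothetical audit log entries: (group, photo actually acceptable?, system accepted it?)
results = (
    [("group A", True, True)] * 940 + [("group A", True, False)] * 60
    + [("group B", True, True)] * 780 + [("group B", True, False)] * 220
)

false_rejections = defaultdict(lambda: [0, 0])   # group -> [wrongly rejected, total acceptable]
for group, acceptable, accepted in results:
    if acceptable:
        false_rejections[group][1] += 1
        if not accepted:
            false_rejections[group][0] += 1

for group, (rejected, total) in false_rejections.items():
    print(f"{group}: false rejection rate {rejected / total:.1%} ({rejected}/{total})")
```

An overall figure would report roughly 86% of acceptable photos passing; only the per-group breakdown shows that one group is being wrongly rejected several times more often than the other.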
So what I want to do now is look at some use cases and the challenges we have in saying "why can't you get AI or machine learning to solve this?" I'm going to take three snippets of political rhetoric. I'm deliberately not naming the people, although one you can probably make a quick inference about, because it's not about picking on somebody and saying "look at this person, they said something"; it's about looking at the nature of the political rhetoric and seeing why it presents such challenges. I'm going to start with this one. After the Euros there was a lot of racist abuse of some of the England footballers on Twitter, and it was raised as a challenge at Prime Minister's Questions at the time. The person who was Prime Minister at the time (we've had a few recently, so you might not be able to guess from that) made this statement here. It's the bit at the bottom that I find particularly problematic in terms of the rhetoric being used: "We all know they have the technology to do it." So they could stamp out racism if they wanted to, is the implication. And this then feeds into discourse around child protection as well. Children are abused online, children see language and discourse they don't like: well, the platforms just need to stop that from happening. Children see content we don't want them to see: the platforms need to stop that from happening.

Now, tech code is very good at symbolic matching. If someone is using racist keywords, that's very easy to detect; if someone is not using racist keywords, it's much harder. And bear in mind this was in the days when Twitter still had a large safety team, the pre-Musk days, so if you reported a racist tweet you would get a report back saying "we've looked at that, we've suspended the account", or similar. But the idea that they can detect this and stamp out racism if they want to is problematic, because it implies you can just solve this stuff.

Has anyone ever had this on Facebook? Surely it's not just me. You post something up, and this appears about five minutes later, and God, you feel awful: "we've reviewed what you said"... I didn't mean anything nasty by it. Is there anyone I can phone up and apologise to? I feel terrible now. The language of the notice is particularly interesting if you look at it from the point of view of how you might train an algorithm to do this: "it's similar to others we've removed for bullying and harassment". So clearly they've built up a big corpus of abusive terms, and there's an awful lot of academic research in this area as well. You build up a corpus of abusive terms and you try to get the system to detect whether a term is abusive, and it works up to a point, particularly if you're doing basic keyword matching. But when it comes to tone and sentiment, it's less good; it struggles to detect sarcasm or satire. I got a warning a while ago because someone was mocking me for my rural life and I said, "Yeah, I just went out and threw a copper kettle at a tourist." Clearly I hadn't done that, but I got a community standards warning <laugh>. I didn't really throw a copper kettle at anybody <laugh>. I don't even own a copper kettle. Does anyone know this place? Plymouth. What part of Plymouth is it? The Hoe, there you go, Plymouth Hoe. I posted a comment about Plymouth Hoe a couple of years ago and got a community standards warning <laugh>. Now, I do know a few folk at Meta, and I said, "Can you just tell your engineers that Plymouth Hoe is a location, not a person, please?" But if we are saying "you need to do more", and that's a standard piece of political rhetoric, do more, they will generally err on the side of caution, because they don't want to be in the position of outrage in the papers. It doesn't happen any more, by the way; if you go home later and post "I'd like to go to Plymouth Hoe", it's been sorted. But it was an interesting little vignette, because from my perspective I could go, "Ah, I know why that happened."
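For the non-techies, this is roughly what naive keyword matching looks like, and why "Plymouth Hoe" trips it. The blocklist here is an illustrative stand-in, not any platform's real list, and real moderation systems layer classifiers, context and human review on top of this kind of matching.

```python
# A deliberately naive sketch of keyword-based moderation.
import re

BLOCKLIST = {"hoe", "idiot"}          # stand-in abusive terms for illustration

def flag(post: str) -> bool:
    """Return True if any blocklisted word appears as a token in the post."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return any(token in BLOCKLIST for token in tokens)

print(flag("You absolute idiot"))                         # True: the intended catch
print(flag("Lovely evening for a walk on Plymouth Hoe"))  # True: a false positive
print(flag("I just threw a copper kettle at a tourist"))  # False: sarcasm, no keyword
```

Keyword matching over-blocks the innocent place name and under-blocks the sarcastic post, which is exactly the gap the "just do more" rhetoric glosses over.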
Now, the next piece of rhetoric was from a recent debate on the Online Safety Bill. Age verification is a challenge that has been going on for a very long time. Most of the political driver around it is the concern that young people are accessing pornography. We don't want young people to access pornography, and I completely agree with that, having had conversations with teenagers who are clearly desensitised and have performance anxieties and other anxieties. And then the solution offered is: well, we'll just age-gate it. We'll just make sure that anyone who provides this sort of content has to get the user to prove they're 18 before they're allowed to see it.

"Highly effective" is the phrase I keep coming back to. If you're a platform being told "we're going to regulate, and you need to develop something that is highly effective", what does that actually mean? I'll come back to the term "internet safety", or "online safety", at the end. Have we all, in this room and online, uniformly got a single way to demonstrate we're over the age of 18? I would suggest not, and this is one of the challenges of age verification in the UK. I have said before that there's an easy solution: we just need an ID card. I was being sarcastic, because I think bringing in a universal ID card brings a lot of its own challenges in terms of privacy and abuse of power. But if you're saying we need to demonstrate it through a passport or a driving licence, those are both token-based age verification documents that not everyone has; there's no mandatory reason to have one, and there's a financial barrier too. The challenge in this area, particularly if you're looking at pornography, is that this is legal content for adults. If you're saying we need to stop young people seeing it, but adults are perfectly legally entitled to look at it, then you don't want a situation where a large number of adults can't look at it because of the way we've implemented age verification. The Digital Economy Act in 2017 talked strongly about age verification for pornography providers, and then that part of the legislation was dropped a couple of years later because there were lots of questions about privacy and those sorts of things.

It's now acknowledged that age verification using a token-based system is a challenge, so we're starting to ask: well, why couldn't AI do it? It's ever so clever, AI. Why don't we have age estimation technologies as well? If you look at the Information Commissioner's Age Appropriate Design Code, which is actually quite good on the whole, one of the suggestions in it is that providers might like to consider using an artificial intelligence based solution for age estimation. Yoti, who are certainly one of the leaders in this area, produced a white paper last year saying they're about 95.5% accurate at estimating the age of someone under 23. How would you build this sort of system? You collect up loads and loads of photographs of people whose ages you know, and feed them in as a training data set. Do you remember (as I said, I'm getting on a bit now) there used to be regular local press stories about how the local stationer's had sold glue to a 13-year-old? I can remember it happening: my friend's mum worked in a local stationer's, and she came home one day in tears saying the local press had phoned her up to say she'd sold glue to a 14-year-old. "She looked 18. She looked 18." So humans find this difficult, and yet we expect an algorithm to do it effectively. Now, we might say the technology is accurate enough, but then we've got the ethical challenge of telling some people: sorry sir, sorry madam, you're very fresh-faced, we're not sure you're over 18, so we're not going to let you access this content that you are legally entitled to access. That starts to become an issue.
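To show where that ethical problem comes from, here is a sketch of how an age-estimation gate might be wired up, assuming you already have an estimator that returns an age with a known error margin. The estimator below is a stub, every name and number is hypothetical, and this is not a description of Yoti's or anyone else's actual product. The key point is the policy decision: to be confident someone is over 18 you have to set the pass threshold above 18, which is exactly why some fresh-faced adults get turned away from content they are legally entitled to see.

```python
# Hypothetical sketch of an age-estimation gate with an error margin.
from dataclasses import dataclass

@dataclass
class Estimate:
    age: float          # model's estimated age in years
    margin: float       # typical +/- error of the estimator, in years

def estimate_age_from_photo(photo_bytes: bytes) -> Estimate:
    # Stand-in for a real age-estimation model (hypothetical values).
    return Estimate(age=21.0, margin=2.5)

def may_view_adult_content(photo_bytes: bytes, legal_age: int = 18) -> bool:
    est = estimate_age_from_photo(photo_bytes)
    # Require the estimate minus its error margin to clear the legal age,
    # i.e. effectively a "challenge 20.5" policy for an 18 threshold.
    return est.age - est.margin >= legal_age

print(may_view_adult_content(b"..."))   # True for an estimate of 21 +/- 2.5
# A fresh-faced 19-year-old estimated at 19 +/- 2.5 would be refused,
# even though they are legally entitled to the content.
```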
And then we get to 13, which is the digital age of consent, the number all the social media platforms generally set themselves by in terms of data collection from children and those sorts of things. That becomes even more challenging, because first of all, what token-based authentication might we be using? And secondly, are we talking about building a large database of photographs of children of that age that we're then going to use as training data? There start to be some fairly significant challenges there. I spoke to someone from the age verification industry last week and said, "18 is difficult enough; what about 13?" "Oh, that's really difficult. If we could access school data it would be brilliant." But obviously there are some significant challenges in that sort of thing as well.

The other thing is that this is proposed as the solution: if we put age-gating on pornography sites, children will no longer be able to access pornography. And that's really quite challenging, because what it's actually saying is: if we stop it, we don't have to address the social problem. A lot of the pornography that young people access is peer-generated and shared on mobile phones and similar. Young people will tell me, "Well, it's only UK-based, isn't it? So I'll just get a VPN and bypass it." And then you get into political discussion about whether we should look at the legality of VPNs. Well, I think privacy is a fundamental human right; I think we still agree on that. So you start moving further and further into an extreme position: young people and VPNs, and there are lots of reasons to use them. The dark web is another one: the dark web's terrible. We did a lot of work with young people on this sort of thing, and generally speaking, if you ask "what about the dark web?" they go, "Oh, it's bad, you stay off it." "Have you ever used it?" "No." One group of young people we spoke to said, "Yeah, it's brilliant." "Why is it brilliant?" "No one can see what you're browsing." That was a group of LGBT young people, some of whom said they had homophobic parents, for example, and it allowed them to browse in private. A slight tangent, but if you're talking about self-produced intimate imagery by teenagers, that's not going to be age-gated; that's going to be delivered on your phone, in a group chat, in a message, whatever.

Which leads us on to the final piece of rhetoric I want to look at: all the main providers now have technology to identify a nude image, and because we now have age verification technology, which we kind of have, we can identify that it's a nude, we can identify that it's a young person using the phone, and we can stop that image from being transmitted. I could do a whole hour on the legalities and the challenges of teenagers and intimate images; I've written a book on it, it's a cracking read, go out and buy it, it's great <laugh>. This isn't the first time this has been said. It's been said a number of other times, in select committees or similar: we've got the technology to identify a nude image now, and if it's a young person we should stop it from being sent. These clever algorithms can do that, can't they, you clever techie people? Why haven't you implemented this yet?
To unpick this, remember what I was saying about training data for algorithms: you feed it a load of images of the thing you want it to detect, and it gets more and more accurate at detecting those sorts of things. Nudity is a challenge. I don't know whether anyone here has any experience of nudity detection algorithms; you might not want to put your hand up, that's okay <laugh>. But nudity is a challenge. Why is that? Because it's a very broad domain: how people pose in nude images isn't consistent. I'm not going to go into the detail, but you know what I mean. Traditionally, large platforms have had a big challenge controlling people putting this stuff up. It's getting a lot better, as with all of these things. But specifically, what's being proposed here is about intimate images of children: how can we build a system that identifies an intimate image of a child and prevents it from being sent? So what do you think that training data might look like? And that's where it becomes really, really problematic. Now, there are some law enforcement systems that train triage tools for child abuse material on these sorts of things. I won't get into that now, but it also presents some interesting ethical challenges. The idea that an app provider, for example, gets hold of that sort of training data and builds in that functionality starts to become really quite problematic. Yet from the public rhetoric, a politician has stood up in Parliament and said they can do it, so why aren't they? Well, because it's far more complicated than that.

And then you get into determining the age of the person using the device. We've already established that age verification technology isn't perfect. You've also got hand-me-down phones, you've got pay-as-you-go phones and similar; you haven't got the accuracy. Are we then going to be in a situation where an adult takes an intimate image and the app says, "You can't send that, because we don't think you're old enough"? It starts to become really problematic in terms of rights. And then there's the prevention of transmission. It's not just SMS; it could be any one of many, many messaging platforms. Are we saying every single app provider needs to do this, or are we talking about doing it at the level of the device? Once you start to unpick it, it becomes really quite complicated. And there are lots of other things we could be doing, as an ecosystem, rather than just sitting there going, "Can't they just get some clever software to stop this? Because that's easiest for us."

So, can AI protect children? No, it can't, sorry <laugh>. But it could be used as part of a toolbox of technical interventions, and platforms already do this stuff; there are lots of machine learning based aspects of safety tech within platforms. But sitting there and going "this happens online, this happens via digital technology, therefore digital technology can solve it" is something I find quite problematic, and I'll come back to that in a little while. I do see some really good stuff happening outside of the AI world. I don't know if anyone's heard of StopNCII, which is a development by a charity with a lot of in-kind contribution from Meta, and which uses fairly straightforward technology; I hesitate to put it that way, because hashing isn't entirely straightforward.
What it does is this: if you've taken some intimate images of yourself and sent them to somebody, and that somebody is then abusing you, either they've uploaded those images somewhere or they're threatening to, you can download a piece of software and run something called hashing, which runs a piece of code over the image, processes the data in the image, and produces something like a fingerprint of that image. So rather than having to share the image itself with other people and say "I think this image of me is out there", you share the hash instead. That goes into a large database, and lots of technology providers are now taking that hash database, so if someone does try to upload the image onto their platform, it goes down the hash list and says no (there's a simple sketch of the idea below). It's really interesting, because it's not very well known, but it's a simple model of empowerment. Now, it was set up to protect adult victims of image-based abuse, but NCMEC, whose acronym always escapes me, Vic, the National Center for Missing and Exploited Children in the US, is now saying it will provide this for young people as well, so young people who might be being victimised or coerced as a result of sharing these images can use it too. No, it's not perfect. There are challenges, for example around what happens if someone uploads hashes that aren't of intimate images and is just trying to poison the database. But it's a really nice example of something that sits away from the complexities of machine learning and similar; it addresses a specific point and it provides empowerment to victims.
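Here is a small sketch of that "share the fingerprint, not the image" idea. The real StopNCII service uses more sophisticated hashing designed to survive small edits to an image; this sketch uses a plain SHA-256 over raw bytes, with placeholder data, purely to illustrate the flow in which the image itself never leaves the victim's device.

```python
# Minimal sketch of hash-list matching, using placeholder bytes and SHA-256.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Hash computed locally; only this string is ever shared."""
    return hashlib.sha256(image_bytes).hexdigest()

# 1. The victim hashes their own image and submits only the hash.
my_image = b"\x89PNG...raw bytes of the private image..."   # placeholder bytes
blocked_hashes = {fingerprint(my_image)}

# 2. A participating platform hashes each upload and checks the shared list.
def allow_upload(upload_bytes: bytes) -> bool:
    return fingerprint(upload_bytes) not in blocked_hashes

print(allow_upload(my_image))          # False: an exact copy is blocked
print(allow_upload(b"another image"))  # True: unrelated content passes
```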
But with all of these things, I keep coming back to what young people themselves tell us. I run a survey with a charity I work with called South West Grid for Learning. We've got around 16,000 responses from young people. It's a fairly basic survey asking them about their online behaviour, and one of the questions we ask is: have you ever been upset by anything that's happened to you online? 70% of the time they say no. So, from a sample of 16,000 young people, for those of you who are parents or grandparents: the majority of young people have never had a negative online experience, which is quite nice to hear, really, isn't it? But for those who said yes, we ask them to explain what it was. This will be in a book I'm writing at the moment, out next year, so you're getting a sneak peek at new data here. I took all of the descriptions of the things they said were upsetting, and the biggest thing that comes out is "people", "somebody". It's people that are causing harm, people that are causing them upset online. Interestingly, when you unpick the content side of this, because there is a lot of concern about young people accessing inappropriate content and seeing upsetting content, an awful lot of what young people say is upsetting relates to current affairs. When the Manchester bombing happened, a lot of young people said they saw something about it and it was really upsetting. Large numbers of young people talk about climate change: seeing coverage of climate change is really upsetting. So if we're saying we want an algorithmic intervention to stop young people seeing upsetting content, who defines what that upsetting content is? And here you've got a very clear view that it's people that cause problems online.

And I am reminded (you can look up Ranum's Law) of Marcus Ranum, a very well known cybersecurity researcher, who came out with this statement a few years ago: you don't solve social problems with software. It can help, certainly, and I'm certainly not saying in this talk "leave the tech platforms alone, they've done enough". I'm saying they could do more. Absolutely they could do more, but not with everyone else sat on the sidelines going, "Why aren't they sorting this out?" I spend a lot of time, bizarrely, looking at the drug policy world as well, because if you look at a lot of the discussions around the Misuse of Drugs Act and similar in the seventies, the language is exactly the same: we need to stop this from happening. We've got lots of examples of prohibitive social policy that hasn't worked. Looking around the room, some of you will know who I mean by Zammo from Grange Hill, that very famous Grange Hill storyline: just say no, don't take drugs. So nobody of my age has ever taken drugs, because we had some very clear messages that you shouldn't take drugs because it's illegal. That's fair to say, isn't it? <laugh> Some people I know of my age still take them. That sort of messaging has a part to play, but there are lots of other things as well, and we know that prohibition generally doesn't work.

I'm going to leave you with a comment from a young person. It was a few years ago now. I said to them, "I'm going to Parliament tomorrow to talk to politicians about internet safety. What do you think internet safety is?" I think she was about 12, and she came up with this statement here. It's brilliant. When I talk to young people about what we can do to help them, it is very rare a young person will ever say, "Can you bring big tech billionaires to heel, please?" Now, there are comments about platforms being able to do more. One of the really useful things that no one's talking about in the Online Safety Bill is transparency reporting: platforms have to publish data on how many reports they've dealt with and how many accounts they've taken down. All of that is really useful to people like me, because young people say there's no point in reporting stuff because nothing happens; if we're getting more and more evidence that stuff does happen, that's really, really useful. But most of the time, and bear in mind I've been doing this for 20-odd years, when I ask what we can do to help, it's generally about better education. It's generally about adults not freaking out. "We're not going to disclose, because someone's going to take our phone off us, or someone's going to tell us off", or the classic "you shouldn't have done that, that's illegal": that's pretty much a guarantee that young person's not going to disclose a concern or a harm. And if you look through the Online Safety Bill, as I'm sure you all have, because I know I have, there are very, very few mentions of education in the entire thing. It's focused almost entirely on one stakeholder. Now, I know Professor Baines here has talked about public health models in these sorts of areas as well; that makes a lot more sense. This is why I don't like the term "online safety", because safety implies we can make people free from harm. I'd much rather look at it as acknowledging the harms, looking at how we mitigate the risk of harm, and adopting harm reduction approaches within, I hate to use the word, an ecosystem where we all have a part to play, because we're all pulling in the same direction.
Ultimately, we want young people to be safe online and to have positive experiences. Most of them do, but sometimes things go wrong, and we need to better support that rather than just sitting there going, "Surely the platforms can sort this out, we've got all this AI now, it's ever so clever." So I shall finish there. We've got time for a little bit of questioning, but for those of you who are parents or grandparents: most of the time, kids are having a really positive experience online. That's not going to get clicks and it's not going to sell newspapers, which is why we see all the excitable stuff about terrible things. And for 20 years now young people have been saying, "I'm not going to ask for help, because I know it's going to be made worse or I'm going to get told off for it." Very rarely do they say, "Can we have better technical interventions, please? What is it with those tech boffins, why can't they do this?" Well, you know, if it was easy, we'd have done it by now, because it's been around for a long time. But thanks for coming out; I really hope the rain has stopped for you. Lovely to see you all. Thank you.

Hello everyone, I'm Victoria Baines. I'm the Professor of IT here at Gresham and, as well, a co-author of Andy Phippen's, and we do have about 10 minutes for questions. You'll be very pleased to hear (I've seen you, sir, don't worry <laugh>) that we do have some very thoughtful questions online, though some of them, I have to be honest, would be a three-hour sit-down for everybody if I had to answer them. So, not wanting to steal Sandra Wachter's thunder from her lecture: in your opinion, what is the legal future of this matter?

A mess, basically <laugh>. There's an awful lot of talk around tech regulation and AI regulation at the moment, and there are competing voices in that area as well. My view of the "you need to regulate us because we're going to achieve sentience soon" stuff in particular is that it's a bit like pointing over there because they don't want you to see over here. My fear is really about the more low-level stuff, like taking all of our data and using it without consent for these sorts of things, or HR departments buying a piece of kit which claims to assess someone's capability for a job by scanning a five-minute video; and yes, those systems do exist. Those sorts of things are why I talk more about AI literacy. I think one of the challenges is that a lot of policymakers, while being very bright, don't have a tech background. I'm doing some training with some board-level folk next year, and the people organising it asked what advice I would give a board member about this sort of thing. I said, listen to the techies, because they're the ones who actually have to put this stuff together, and they've rather been lost in this discourse; it's almost, "Well, you would say it's not possible, you just don't want to do it." No, that's not my experience. So the short answer is: I think it's going to be a mess for a few years. The Online Safety Bill has reached assent now, and the poor souls at Ofcom have to actually look at it and see how on earth you implement this spaghetti of legislation, which does read a little bit like "oh yeah, and another thing; yeah, let's put that in as well" <laugh>.
And if you look at the debates, they read in a similar way. I've got about 10 years before I retire, and I've probably got three or four books in me about the law in this area before then. I noticed we've already had some questions in Parliament, haven't we, and in select committees, about whether the metaverse will be covered by the Online Safety Bill. And of course the answer has been yes, of course it will be, even though it doesn't exist yet <laugh>; we've absolutely covered it in our future-proofed legislation. The metaverse: cool? Well, it's generally 50-year-old blokes that think it's cool <laugh>. Yes, please watch my talk on the metaverse. Sorry, I just hijacked that. Go for it.

So, we have you at the front here. If you just wait a second for the mic to reach you, so that the people at home can hear your question. "Hello? Can you hear me?" That's better, we can. "There was a recent documentary on one of the mainstream TV channels about a generative AI system that was generating explicit pictures of child sexual abuse. The reporter was reduced to tears. I don't know if you're aware of this documentary, either of you." I'm aware of it, and it's an area of a lot of concern at the moment. "My question is: if the AI system was trained sufficiently well on existing child pornography images that it could generate them on request, why isn't it possible to have an AI system recognise them?"

Well, first of all, the production of those images is on the very edge of the sorts of things these systems can do. And secondly, a system could potentially identify those sorts of things, but it depends on the training data. Would you train it with actual images of child abuse, or would you train it with generated images of child abuse? It starts to become extremely problematic. As I said, there are some law enforcement systems that do do that, but you raise questions of: that's an image of child abuse, that's someone being abused; has that victim consented to that image being used in that way? So it becomes an extremely heavy and complex area in terms of what the training data would look like. This is one of the areas where clearly it's unacceptable, and clearly if you have a system that's doing that... actually, I listened to a podcast about it this morning. It's at the extreme end of things, and there are use cases where there really isn't any debate, because it is simply unacceptable. But the challenge is always: where does the training data come from, and what would the training data be, to be able to identify those sorts of things?

Gentleman at the back there? "Yes, not a very technical question, but obviously the Pandora's box is open; everyone is addicted to their phones. I was hearing a news article, I think it was yesterday, that every day about four children are killed or injured on their phones while crossing roads. Is there any way AI could detect, on GPS systems, where the child is? Could it, wherever that's sophisticated enough, basically cut off access at such points?"
No... again, I'm looking at some of the folk who put their hands up earlier. Technically, working with GPS-based data doesn't seem a massively complex problem compared with something like image recognition and similar, but it's the accuracy of the GPS that I think would be the issue, because consumer GPS isn't as accurate as, for example, military GPS. So it's a question of how accurate it could be. In that case it sounds like a perfectly reasonable thing to do, but I think there would be challenges in the accuracy of those sorts of things. Technically, though, you look at the wonders of Google Maps and the fact it'll tell you a road is congested, so we know that stuff works quite well. It's not a bad idea to explore in more detail, but it's not one of the sexier areas of this field.

Absolutely. Well, this comes back to the fact that simple harm-reduction-based education matters here too. When Pokémon Go was a big deal, you'd get people chasing virtual Pokémon across the street into traffic as well. Rather than look for a technical solution, do we have these conversations in schools? Now, I look at a very large database of what is delivered in these sorts of areas in schools, and I'd say no, it really isn't happening, and there's very little guidance. The last thing I would say is that teachers should have to sort this out on their own; without guidance, and without it being treated seriously, they can't. For those of you that look fresh-faced and young, I would imagine your online safety education was "don't do it, don't look at it, ignore it", rather than "think critically about what you're doing with it". I think that's one of the fundamental challenges.

"You mentioned some of the biases in algorithms due to gaps in data collection. What are some of the things you're seeing that tech companies, or maybe other organisations, are putting together in order to offset some of these biases?" I think a lot of that is down to good ethics training in computer science courses, those sorts of things. Now, as a BCS accreditation assessor who has assessed an awful lot of degrees across the country over the years, I can say that's certainly not uniform. You'll have some places where it's really well done, and you'll have some places where they go, "We do ethics in year one; no one's really interested in it, but you ask for it, so we've got to do it." But I think there are some more fundamental challenges in what teaching ethical IT and ethical AI actually means, and I'm involved with an Engineering Council group at the moment that's trying to look at what AI literacy education is, and similar. That particular issue has been discussed for a fair while now. I'd encourage everyone to look at Cathy O'Neil's TED talk about this, which was in 2016. I know you have... sorry, sir, I just noticed you were smiling and nodding at the back there, so presumably you've seen it.
It's very, very good and very sensible, and it touches on the fact that one of the areas constantly attacked as "woke" now, stuff like unconscious bias training, really matters in the development of IT systems, because otherwise you build biased training data that goes on to victimise certain parts of society. But I always fall back on the fact that I've taught ethics to computer science students for many years. The first question I usually ask is: would you get on an airplane you'd written the software for? They always say no; I've never had anyone say yes <laugh>. Which is interesting, you know; you don't start a computer science course to study ethics, so it's about how we ground it and how we make it relatable. But there's an awful lot to do in that area, because it is viewed as a minor part of the computer science curriculum, and again, no one's learning from the history of it.

And on that note, one last quick-fire question, if I may, from the online audience: can AI protect anyone?

Not on its own <laugh>. Well, it's a very broad question, isn't it? It depends what you are protecting them from and the specific use case around it, I guess. But that's what I mean about it being seen as a useful tool. I don't know if any of you listen to Leading, the Alastair Campbell and Rory Stewart podcast; there was a really good interview with Paul Nurse, who is an extremely well-known scientist, and he was asked about AI. He said that if you look at it as a really powerful way of processing data, and as decision support, those sorts of things, that's really good, because there are things we can do with that. If you look at it as the solution to everything, with all the chat about artificial general intelligence and "I've seen Terminator, when's that going to happen?", that's where it starts to go wrong. We should be looking at it as a really good, really powerful means of processing data, rather than going, "But it's artificial intelligence, why can't it do that?" So, can it protect anyone from anything? It would depend on the use case, but conceptually we should be looking at it as a tool rather than as a solution.

And on that note, we have run out of time, I'm afraid, so it only remains for me to thank you very much, Professor Andy Phippen. Thank you for the invite. Thank you.