
Gresham College Lectures
The Risks of Technology in Business
What are the risks of using technological innovations in business?
There are risks associated with the crypto world, including custodial risk and economic exploits. There are also regulatory risks with competition from central banks issuing their own digital currencies, and risks associated with extrapolation from patterns detected in big data by AI systems. Applying algorithms blindly can lead to miscarriages of justice, exploitation, and discrimination. So how should society mitigate these risks, and where do we go from here?
A lecture by Raghavendra Rau recorded on 5 June 2023 at Barnard's Inn Hall, London.
The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/tech-business
Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/
Website: https://gresham.ac.uk
Twitter: https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege
I'm going to talk about the risks of technology in business. This is the culmination of a series of lectures I've been doing over the past year, where we've been talking about how technology is changing the world of business. We have covered DeFi, we've covered blockchains, we've covered big data, we've covered AI. Now we're trying to bring them all together to talk about the risks of technology. So let me start by summarizing some of the things we've already talked about. How does technology help the world of business? The basic idea is that it helps us analyze information. Now, there are several ways in which it allows us to do that. For example, it creates what I call one-way transparency. In other words, businesses know about us. They know what you want, they know how you will react, and they can sell you goods and services appropriately if they know exactly what you want. We have talked about how they solve problems of information asymmetry. One example of information asymmetry was the one I used in my first lecture, about a car insurance company called Root. Car insurance companies want good drivers; they don't want bad drivers. So what Root does is make you download an app onto your phone. While you have the app on your phone, it makes you drive around for a week or two before you ask for a quote. And during those two weeks, it analyzes everything about your driving: how fast you drive, what roads you drive on, how you stop at red lights, how fast you go around corners. It knows everything about you. That basically allows Root to cream off the good drivers and send the bad drivers to other insurers. These are the problems of what we call moral hazard and adverse selection. And of course, businesses also don't trust each other. If we write a contract, we have to pay somebody to verify the contract, and that's all very expensive. So we also talked about how you can use decentralized blockchain technology, distributed ledger technology, to record information so that nobody can alter it once it's written down. So these are some of the areas we've talked about in the past. The key problem with all of these is that, at the end of the day, businesses are making inferences about us as human beings. What does that mean? Take a typical economist. We can't see into your head; we don't know what you are thinking. What we can see is what you actually do, and that's what businesses have access to: what you actually do. So take an example. If an economist offers you an apple or a banana at the same price, and you pick the banana, what would the economist infer? The obvious answer is: well, you like bananas better than apples. Same price, everything else the same, that's why you did it. But what if that's wrong? For example, maybe you actually prefer apples to bananas, not bananas to apples, but simultaneously you prefer organic to regular, and you prefer ripe to green. And what I'm actually offering you is not an apple or a banana; that's only what I think I'm offering you. What I'm actually offering you is a contrast between a ripe organic banana and a regular green apple. You like the apple part, but the other two dimensions dominate. So I give you something and I make an inference from it, which is really not true, because you're reacting to dimensions I haven't even thought about, right?
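Just to pin that down, here is a minimal, purely illustrative sketch of the inference problem. The items, attributes and weights are all invented for the example; the point is only that a chooser who prefers apples on the fruit dimension can still pick the banana once the unobserved dimensions are in play.

```python
# Illustrative only: hypothetical utility weights over attributes the
# observer never sees. Nothing here is from the lecture itself.
weights = {"fruit_apple": 1.0, "organic": 2.0, "ripe": 2.0}

def utility(item):
    """Score an item described by its attributes."""
    u = 0.0
    if item["fruit"] == "apple":
        u += weights["fruit_apple"]   # apples preferred to bananas per se
    if item["organic"]:
        u += weights["organic"]       # organic preferred to regular
    if item["ripe"]:
        u += weights["ripe"]          # ripe preferred to green
    return u

offered = {
    "green regular apple": {"fruit": "apple", "organic": False, "ripe": False},
    "ripe organic banana": {"fruit": "banana", "organic": True, "ripe": True},
}

choice = max(offered, key=lambda name: utility(offered[name]))
print(choice)  # "ripe organic banana", despite the underlying apple preference
```

The observed choice reverses the single-dimension inference because the dimensions the observer never measured dominate the decision.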
And there are more dimensions: how and where it was grown, the sugar content, nutritional value, shelf life. There are so many dimensions we can make a decision on, but businesses only have access to a few of them. Sometimes one answer might be: okay, we just get more information, information about all the dimensions you're making your decision on. But the problem is, we ourselves don't know the dimensions on which we're making a decision, except for visual information, and we are terrible at processing huge amounts of information. There's a famous paper written by George Miller back in 1956. He called it the magical number seven, plus or minus two. That is roughly the number of things you can actually distinguish as a human being: the number of tastes, the number of tones, the number of colours you can perceive. Seven is almost an upper limit on everything. That's possibly why American phone numbers are only seven digits long. British people are probably smarter, because we have much longer phone numbers. But anyway, the problem is that early attempts at manipulation, early attempts at selling you something, depended on rather crude measures. They weren't very sophisticated, because you had no idea what people were thinking. As an example, in China the number four is considered very unlucky, because it sounds like the word for death. Similarly, the number eight, ba, rhymes with fa, which is associated with good luck, prosperity and so on. The number 250 sounds like you're calling someone an idiot, so you never call someone a 250 in China. So how do companies manipulate you? Well, you never go public at a price of 4.44 yuan; that's very, very unlucky. You go public at 8.88, even though two shares at 4.44 are worth the same as one share at 8.88. It's the same thing, but you would never do one rather than the other. That's China, but there are lots of attempts like this outside China as well; we see this everywhere. For example, this is a $250 hamburger sold in New York. You might think, why would anybody pay 250 bucks for a hamburger? Well, it is covered with gold leaf, it does have black truffles, all that's fine, but still, $250. Well, one possibility is that next to the $250 hamburger, the $90 steak is actually cheap. So people will say, whoa, a $250 hamburger? No, I'll buy the steak; it looks very, very cheap. One of my favourite examples of making things look cheap was this man, Steve Jobs, one of the ultimate marketers in the world. Think of what happened when he came out with the iPad. The iPad is not an obvious success, right? It's too big to be used as a phone; you can't hold the whole iPad up to your ear. It's terrible as a typing device: you're typing on glass, it hurts your fingers. What is the purpose of an iPad? Well, when Steve Jobs initially unveiled the iPad to the world, he held the standard press conference. He talks about the iPad, and then he says, let me get to the important question here, which is: how do you price this iPad? He says, well, I went to my engineers and I asked them how much I should price the iPad at, and they said a thousand dollars, because of the amount of effort we have put into it. And then, he says, I went to my marketing people, and they said, don't price it at a thousand.
Price it at $999.99, because it sounds like less than a thousand and people will be fooled. And the audience is laughing, because obviously they're not going to be fooled by that kind of pretence, right? So the figure $999.99 appears in big letters across the screen. And then Steve says, well, I'm not going to charge $999.99. The numbers disappear, replaced by $799. He says, I'm not even going to charge $799. That disappears, replaced by $499. He says, that's what I'm going to charge for the iPad: $499. You've never seen an iPad before, you've never held an iPad before, so what do you know about it? Already it seems cheap, right? Hell, you can buy two iPads for the same price. Or yet another example, I don't know if you've ever seen this, but if you go to a Starbucks, they have slightly unusual sizes. By the way, I took this photo in India, so the numbers are rupees, not pounds. But still, you have tall, grande and venti. Why does Starbucks have these weird sizes? The answer actually goes back to the early days of Starbucks, when they were competing against McDonald's, for example. What would happen is that Starbucks would price their small coffee higher than the larger McDonald's one. Of course, they justified it by saying: McDonald's filter coffee is just opening a can and giving it to you; we have single-roast blends served by a trained barista who makes pretty patterns on the top, so you have to pay more. And people would say, come on, coffee is coffee, not a big deal. So what they did was take their smallest cup and call it the tall, so that now you're comparing a small cup at Starbucks with a large cup at McDonald's, at least in name. And the next size up, you don't call it medium, you call it grande. I mean, it's grand, right? It's big. And so people would say, oh wow, that's pretty good. But my all-time favourite is the third one, which is venti. Now, if you know Italian, venti just means twenty. It's not a size, it's a number. So why do they call it twenty? Well, because the key point is that it's Italian. People will say, hmm, Italians are associated with good coffee, and I sound sophisticated when I say I'd like a venti. And of course, once you get used to it, you are used to paying super high prices at Starbucks. It turns out that in Texas they now have a size called trenta, which is about one and a half times the size of a venti. Oh my God, after drinking that amount of coffee you will stay awake for three days. Okay. Alright. So what does tech do? Those were early attempts at manipulation. What does tech do? Well, the idea is that it allows us to go deeper into what people want, deeper into their preferences, to figure out exactly what their choices are. An example I talked about last time was Target spending an enormous amount of time predicting when women became pregnant. Why? Because when you become pregnant, you're doing something for the first time, something you've never done before. You're shopping for things you've never shopped for before. So if Target can predict that, they can get you to switch to Target: you will start shopping in the new place, and inertia will keep you there. That's the idea.
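As an aside, the mechanics behind that kind of prediction are usually just a weighted score over purchase signals. Here is a toy sketch with entirely invented products, weights and threshold; Target has never published its model, so none of this is theirs.

```python
# Entirely hypothetical weights and products; a caricature of a
# purchase-pattern score, not Target's actual model.
signal_weights = {
    "unscented lotion": 0.30,
    "mineral supplements": 0.25,
    "cotton wool (large pack)": 0.20,
    "scent-free soap": 0.15,
}

def pregnancy_score(basket):
    """Sum the weights of the signal products present in a shopping basket."""
    return sum(signal_weights.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "mineral supplements", "bread", "scent-free soap"]
score = pregnancy_score(basket)
print(f"score = {score:.2f}")      # 0.70 on this made-up scale
if score > 0.5:                    # arbitrary threshold, just for the sketch
    print("flag household for baby-related coupons")
```

Real systems use far more signals and learned rather than hand-set weights, but the shape of the idea is the same.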
But more generally, if you look at service-related items, not just shopping for physical items, most websites and other services use something called A/B testing. What they do is give you two versions of a website. Amazon does this, and so does LinkedIn, which ran social experiments on 20 million users over five years. The idea is basically that they give you slightly different versions, a different font, a different something else, and see which one people click on more often. If you do this over a sufficiently large number of people, you get: okay, this font appeals to people more than that font. Remember, they still don't know what's going on in your head; all they can see is what you're actually clicking on. And of course, that gave rise to something like Cambridge Analytica, based out of Cambridge, which claimed they could understand you in such detail that they could predict whether you were likely to vote as a Republican or as a Democrat. So the Republican Party in America paid a ton of money to these guys to try to get them to sway an election. How successful they were, I don't know, but in 2016 Trump was elected, so I don't know anything anymore. Alright. So does it do a good job? Well, let's take a simple product. This is a robo-advisory service. The idea is that a computer replaces a human advisor when you're choosing what to invest in. Here are some examples, and the beauty of using a computer is that the fees are close to zero: a 0% management fee, a 0.25% management fee, and so on. And the account minimums are really low: zero, zero, 500 dollars in that case, but usually very low. So the idea is that I have a robo-advisory service which can do exactly the same thing as a human advisor, but it's way cheaper. How do these systems work? Well, if you look at Betterment, this is their website. They say: we serve one purpose, to help you make the most of your money; we have proven investment strategies, and so on. Everything is really geared towards: we, as a computer, can do better than a human being could. So how do they do it? They ask you questions about how risk-tolerant you are. The idea is, if I find out how risk-tolerant you are, I'll be able to do a better job of placing you into a portfolio than a meeting with your human investment advisor would. The computer is predicting how risk-tolerant you are. Do they do a good job? Well, let's take a look. This one here was sent to me by Goldman Sachs Marcus in 2021, and they said: here's a special offer, pay no advisory fee for 90 days and get a digitally managed portfolio built with Goldman Sachs expertise. I thought, this is pretty good, let's see how good it is. So I basically copied all the questions for this class, because that's what we do: we see weird things like this and we take notes. Anyway, there are some questions I can answer, and bear in mind, I teach this stuff. So, questions I can answer: are you able to save after paying your monthly expenses? Yes, I can probably do that. Do you currently have enough savings to cover three months of living expenses? Yes, probably. Okay. But then they start asking things like: how long do you plan to invest your money in this account? I'm like, I don't know. Could be one year, could be five years. How am I supposed to know the answer to that, right?
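Before going on with the questionnaire: mechanically, the A/B testing just described is nothing more than random assignment plus a comparison of click-through rates. A minimal sketch, with invented variants and click probabilities so it runs end to end; none of these numbers come from any real experiment.

```python
import random

# Minimal A/B test sketch with simulated visitors. The "true" click rates
# are invented purely so the example produces output.
random.seed(0)
true_click_rate = {"A": 0.10, "B": 0.12}   # variant B is slightly better

clicks = {"A": 0, "B": 0}
shown = {"A": 0, "B": 0}

for _ in range(100_000):
    variant = random.choice(["A", "B"])              # random assignment
    shown[variant] += 1
    if random.random() < true_click_rate[variant]:   # did the visitor click?
        clicks[variant] += 1

for v in ("A", "B"):
    print(v, f"CTR = {clicks[v] / shown[v]:.3%}")
# In practice you would also run a significance test (for example a
# two-proportion z-test) before rolling out the winning variant.
```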
Things change, plans change. Or even better: how much market loss are you able to afford in a given year? Your ability to afford risk affects how aggressive your portfolio is. I don't know. I mean, I have been through 2008, so I should have some idea, but many of us, especially people who had never invested before 2008, have never been through a market downturn. How do you know what you're willing to bear? So the questions asked by this robo-advisory service are pretty rudimentary, as are the portfolios they put you into. So this is the Goldman Sachs core portfolio, that's the Goldman Sachs impact portfolio, and their smart beta portfolio; they sound really good. But what is actually in the portfolios? I answered the questions in ways that would get each of the three different portfolios recommended to me, three different times. And what did it give me? Well, here's my 60/40 core portfolio: stocks and bonds, 60% stocks, 40% bonds. Let's take a look at some of them: the Vanguard Value ETF, their Real Estate Investment Trust ETF, the Global Real Estate ETF, and a whole bunch of bonds. What about the impact portfolio? They've got a bunch of ESG stuff in there, but they also have the Real Estate Investment Trust ETF, the global and US real estate, and the same bonds. What about the smart beta one? Well, they have the Vanguard REIT ETF, the global real estate, and the same bonds again. At the end of the day, seriously, these are pretty much the same things I would have got across the board. They're just giving me a couple of extra things to make me think I'm doing something different. So I wasn't terribly impressed by their robo-advising capacity to tell what my risk tolerance was. I don't know what my own risk tolerance is; how will they know? Okay, now maybe there are other things about tech. Maybe it does better at not discriminating, for example. There's a paper by Marianne Bertrand and Sendhil Mullainathan, who did an experiment back in 2004 where they mailed identical resumes to employers in Boston and Chicago, responding to job-wanted ads. The resumes were the same, except one group of resumes had white-sounding names like Emily or Greg, and the other had African-American-sounding names like Lakisha or Jamal. It turns out, of course, that the white-sounding names got a much higher callback rate than the others. The same thing was done later by Edelman, Luca and Svirsky with Airbnb. They would request a room, and if the request came from a black-sounding name, they were much less likely to get it; they'd be told, sorry, that place is taken. With a white-sounding name, you were much more likely to get the room. These are examples of human discrimination, and maybe tech is better at avoiding that. So here's a story of a woman who was never able to get a loan through a human loan officer; she had all the right characteristics, except she happened to be black. She applied for an automated loan, and it simply gave her the loan. So maybe a computer can do better. Alright, maybe it also helps you learn about yourself. This was a journalist at The Guardian who basically says TikTok detected her ADHD. For 23 years no one had been able to do it; she found out from TikTok, based on what she was watching.
And that sounds good, except there's another article which talks about how the same diagnosis videos leave some teens thinking they have rare mental disorders when they actually don't. The way TikTok works is that it checks how long you spend watching a video. If you like that video, it'll give you more of those videos. The amount of time you spend is all it uses; that one variable tells it what type of person you are. You watch it for 30 seconds, you're going to get more videos of that type; you watch it for two seconds, you get fewer videos of that type. Well, sometimes it may do too good a job. Here's an example of big pharma finding sick users on Facebook. Unlike here with the NHS, in America it's all privatized, so there are lots of ads. If on Facebook you type in something about cancer awareness, Facebook will sell that to GlaxoSmithKline and you get ads for Zejula. If you are interested in National Breast Cancer Awareness Month, you get ads for a breast cancer drug. Signs you're having a stroke: type that in and you get Brilinta. Chronic obstructive pulmonary disorder: Trelegy. You can look at this all yourself; it's on GitHub, where the person who compiled it has listed literally every drug, linked to why that drug is recommended to you. So whatever you type into Facebook, you get an ad right away for some kind of drug like that. And unfortunately, the information leaks. What that means is an example like this. It turns out that there is a women's health app called Flo, which asks you to enter intimate details about your body that you don't necessarily even want to tell your closest friends: whether you have protected or unprotected sex, things like that. And Flo basically said: we are committed to respecting your data privacy and providing transparency about our data practices; in particular, we will never provide third parties with access to your information, no survey results, no information on which articles you view, they get nothing from us. The only case in which we share information is where it is reasonably necessary to perform the work or comply with the law. Sounds pretty reassuring. They were sued for basically giving every detail of the information they had to Google, to Facebook, and to analytics companies like Flurry and AppsFlyer, and so on, with no restrictions on what those companies could do with the data. Basically, all your data is out there. So if you put any personal data on any of these websites, there's always a risk that it can be taken away. Alright. And it shows up in unexpected places. Here, for example, is a digital product placement company which works on streaming services. In other words, if you watch Netflix on your phone, remember your phone knows what else you've been doing. So if you're a Pepsi drinker, the advert in the background of this scene over here will be a Pepsi ad; if you're a Coke drinker, it'll be a Coke ad. It changes this on the fly while you're watching your TV series, depending on who is watching at what time. And of course, sometimes it makes mistakes. In India, for example, you can buy a phone, like anywhere else. The point is, they're expensive for the Indian market, so what a lot of Indians do is buy one on the second-hand market. And that's fine, except that the phones are so expensive.
Sometimes the first user takes out a loan for the phone and then sells it on to a second user without telling them there's a loan outstanding on it. So you can imagine, and it did happen, this poor guy called Rohan Zamir suddenly gets a text message on his phone in the middle of the day saying: your mobile is locked, you haven't paid your loan. It's not his loan, it was a loan taken out by the previous user, but his phone is locked anyway. These lenders take an interesting approach. First, they send you audiovisual prompts in regional languages as reminders. If you miss your first repayment, it changes the wallpaper on your phone. If you're a prolific selfie taker, every time you press the camera button it plasters a branded reminder on top of the screen. If you continue to default, it blocks Facebook or Instagram, and eventually it basically turns your phone into a brick. And there's no way to appeal against this. You can say, no, it's not my loan, I bought this phone from somebody else. Nothing. Once the app starts, there's literally nothing you can do except pay the loan. Alright. So why do these things make mistakes? Well, the first problem is that the data we use in the lab when we test these kinds of models is not the same as real-world information. The most beautiful example is that of Covid. It turns out that over the period when we went through Covid, hundreds of AI tools were built to catch Covid. Not one of them helped. Why not? Why did these things not work? Well, let's take some examples. The training data set consists of people who've got Covid and people who don't have Covid. And who are the people who don't have Covid? Well, children were less likely to catch Covid than adults. So people would feed in scans of children and scans of adults, and the AI system learned to distinguish a child's scan from an adult's scan, not whether they actually had Covid or not. Or take another example. If you were very seriously ill, the scan was most likely taken while you were lying down rather than standing up. So the AI system learns to say you're seriously ill based on whether you were standing up or lying down. It really isn't catching any disease, but you have no way of knowing this, because it's a black box; nobody knows what's going on inside the system. Or take a third example: lots of hospitals were using different fonts and forms when they entered their data. If a hospital with a particular distinctive font happened to have a lot of Covid deaths, the system would just say: oh, that font, this person must have Covid. It's not really anything to do with Covid; you're just picking up on patterns you don't think about when you're actually running these things. In other words, in the lab you're looking at clean, beautiful data, and it doesn't work when you're looking at the real world. Another example is detecting cataracts. Google came up with an AI system where you send it a photograph of someone's eye and it tells you whether there are cataracts or not.
It worked beautifully in the lab, but it did not work so well in the real world, because in the real world these are nurses who are trying to take care of patients. There are brownouts in the Philippines and India. They're not trained to do this stuff to the level required by an AI system; they're just doing the best they can with the equipment they have. It's not their job to train an AI system, but that's the information that gets uploaded, and you have no way of working with it to make real-world inferences. And the last problem is that of underspecification. Even if the training process can produce a good model, it might still put out a bad one, because it doesn't know the difference, and because there are so many moving parts inside the AI system, we don't know it either. So there is a particular reason for making mistakes, and that is called Goodhart's law. Goodhart's law basically says: once a useful number becomes a measure of success, it ceases to be a useful number. Let's take some examples, non-technology-related ones first. Textile factories were required to produce quantities of fabric that were specified by length, so the looms were altered to make long, narrow strips of cloth; you get more money that way. Or Uzbek cotton pickers were judged on the weight of the harvest, so they would soak the cotton in water to make it heavier. Or when America's first transcontinental railroad was built, the companies were paid per mile of track, so what you do is make the railroad go in much longer loops instead of running in a straight line, and you get paid more. Or here in the UK, we had the 2005 NHS reform where, if you remember, some of you will remember this, doctors were required to see everyone within 48 hours. Anyone remember that? So what happened? Did it work? Not really. One of the reasons it didn't work: if the doctor couldn't see you within 48 hours, what would they do? They would simply not take your call. You had to call between eight and eight-fifteen in the morning; you still have to do that at times, right? You pick up the phone: oh, sorry, we're booked out. Anyone had that experience? I know I have. So that's the whole point: you're giving people an incentive to do something, but the moment you have that incentive, they know they have to meet it, and the behaviour changes to take advantage of it. Let's apply this to technology. The key problem here is that you can't actually ask these systems to do something in English. They don't understand English, they understand numbers, so you have to give them numbers. Let's take a simple example first, an algorithm to land a plane on an aircraft carrier. What was the system supposed to do? It was rewarded for the number of times the plane landed safely on an aircraft carrier. Not in real life; this was in a simulation. They didn't trust the algorithm enough to let it land a real plane on an aircraft carrier. But there was a counter measuring the amount of force with which the plane was put down. You want to put it down with as little force as possible, so that the plane doesn't crash, and the counter only had three digits, from 000 up to 999. The system figured out that if you really slam the plane down on the deck hard, it overwhelms the counter: the reading goes over a thousand.
And when it goes over a thousand, the leading one is dropped and the counter goes back to 000. So the simulation was destroying every plane, and it was scored as a perfect success, because according to the counter it was always landing smoothly and beautifully, right? Okay, a more serious example, which is really happening: companies like Northpointe in America use algorithms to determine whether, if you are in prison, you will commit a crime after being released. What they do is collect a lot of data. They have a risk assessment form, and they ask questions about your current charges, whether you are a gang member, how many times you have been arrested before, about your family, whether you live with both parents, your residence, and so on. So they collect an immense amount of data and feed it into their systems, and the system pulls out a risk score: high risk, meaning that after you're let out of jail you will commit a crime again, or low risk. And things like bail depend on it. So what's the problem? The problem is the one and the zero. Remember, you have to say: did this person commit a crime? You have all this data predicting whether you're going to commit a crime or not, and on the other side there's a one or a zero: one, you will commit a crime; zero, you won't. You're trying to predict when it's going to be a one. So what happens? Well, it depends on the number of times these people are arrested after being let out of jail, but the police are not arresting people at random. You're much more likely to arrest somebody who's poor or black than somebody who is rich and white. So you end up in scenarios like this. Here's a young woman, Brisha Borden, who was 18 years old. She was apparently trying to pick up her godsister from school, she saw another kid's bike lying there, so she grabbed the bike and started pedalling away on it. Someone called the police, she was arrested, she was stuck in jail, and they ran the algorithm on her. Vernon Prater, on the other hand, had already committed several crimes, including armed robbery and assault. He was arrested too, and the system called him a low-risk person. Brisha Borden was high risk, at 18 years old. This guy was 35 years old and had already been convicted of several crimes before he was caught this time. She has never committed a crime again; he was recently put away for eight years for stealing several thousand dollars of merchandise from a warehouse. The key problem, essentially, is that if you look at the algorithm's scores, for black defendants the risk scores are spread fairly evenly across the range, no matter who you are, while for white defendants the distribution drops off dramatically towards the low end. So you have these examples of prejudice built into the systems, based on the way the data is collected. If you sample one group of people more frequently than another group, the data is going to be biased, and the system will also be biased. And so what happens? Labelled higher risk but didn't reoffend: white 23%, African-American 45%. Labelled lower risk but did reoffend: white 48% versus African-American 28%. But anyway, the fact that we have to communicate in numbers means that you have to generate the numbers, right? So how do we generate numbers on people? Well, there are lots of examples.
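Those percentages are just group-wise error rates: "labelled high risk but didn't reoffend" is a false positive rate, "labelled low risk but did reoffend" is a false negative rate. A minimal sketch of the computation, with invented records; the real figures quoted above are ProPublica's, not the output of this toy data.

```python
# Invented records of the form (group, predicted_high_risk, reoffended).
# The point is the computation, not the numbers.
records = [
    ("white", True,  False), ("white", False, True),  ("white", False, False),
    ("white", True,  True),  ("black", True,  False), ("black", True,  False),
    ("black", False, True),  ("black", True,  True),  ("black", False, False),
]

def error_rates(group):
    rows = [r for r in records if r[0] == group]
    no_reoffend = [r for r in rows if not r[2]]
    reoffended  = [r for r in rows if r[2]]
    # "Labelled high risk but didn't reoffend" = false positive rate.
    fpr = sum(r[1] for r in no_reoffend) / len(no_reoffend)
    # "Labelled low risk but did reoffend" = false negative rate.
    fnr = sum(not r[1] for r in reoffended) / len(reoffended)
    return fpr, fnr

for g in ("white", "black"):
    fpr, fnr = error_rates(g)
    print(f"{g}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```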
So, in America and in the UK we have credit scores that keep track of how much we are borrowing and how much we are spending. Credit scores are about that, but it's not just that; it's everywhere. Posts on Facebook, the number of likes you have. Teachers: in America there's a website called ratemyprofessors.com where people rate their professors, and you can look up any professor you want in America. Authors have Amazon scores; Airbnb hosts and guests have cleanliness scores; TaskRabbit taskers, delivery riders, Uber drivers, they all have ratings. Have any of you ever given your Uber driver a bad rating? Some of you have, okay. But most of us haven't; I have never given my Uber driver a bad rating. There are several reasons for this. Number one, the guy's job depends on him not dropping below a certain rating. So if he's just being rude to me, I'm fine; as long as he's driving safely, I'm okay with that. And the second thing, of course, is that he knows where you live, right? <laugh> You don't necessarily want that. One of my favourite examples was a friend of mine, an investment banker based in San Francisco who used to work in New York; this was all pre-Covid. She was telling me the story of riding to the airport. She had called an Uber, she was on her way to the airport, typing away on her laptop, not looking up, until she looks up and she's driving over the Bay Bridge, which means the Uber driver was taking her to Oakland airport, not San Francisco airport, where she was supposed to catch her plane. So she panicked and called her husband and said, the Uber driver is taking me to the wrong airport. And the husband says: don't yell at him, you'll get a bad rating. That was the first thing he thought of; he didn't say anything else. So that's the worry we have, right? Or Fitbit scores. A lot of companies basically say: if you use a Fitbit to measure how healthy you are, we'll have team contests, we'll have ways to make people healthier, move a little faster, and so on. I've had that experience a couple of times; I've got a pedometer, I've got a bunch of things. But I realized that, number one, if I'm not being measured, why do it? If I'm sitting in a room and my Fitbit's in a different room and I can't be bothered to get up and get it, I'm like, eh, I'm going to sit in my chair and relax. And if I really want to get my Fitbit score up, I attach it to my dog. <laugh> He really does a great job. There was also a company called Peeple which basically allowed you to rate everybody, anyone you interact with, on a scale of one to five. I don't think it's around today. There are even cases where you can rate your bowel movements online; seriously, I'm not going to talk about that. But all of this reminds you of that Black Mirror episode called Nosedive, where basically you're rating everybody, and if somebody has a low rating, why talk to them? I thought that was creepy when I first saw that particular episode, until you realize something similar is happening around the world. The most prominent example is in China, where companies are testing AI software that can recognize minorities and alert the police, just so they're aware,
you know, that there's somebody out there whom they might want to keep an eye on. But that's not all. The most amazing example, again, is in China, and I'll talk about how it's actually being implemented, which is different from the concept. The concept is that they want to give all their citizens a score, and that rating affects every area of your life. If you have a low score, you cannot buy a train ticket, you cannot rent a house, nobody will do anything for you; you're basically blocked off. It's not quite that bad in practice. How do you construct a score like this? Well, you have up to 950 points and five sets of factors. Factor one is your credit history: do you pay your bills on time? Number two is your ability to fulfil your contracts: how many assets do you have to pay somebody off if you need to? Third is your social status: educational and professional background, and so on. Fourth is your behaviour and preferences. In other words, if you buy nappies, that's a sign that you're a responsible parent, and you get a higher score. If you buy, you know, Temple Run or Clash of Clans, not good; they lower your score right there. In fact, they can change this on the fly, and that is the ideal state of the world for them. In other words, if you like having a drink: the first drink, the price is okay; the second drink you have, the price goes up, because the price has adjusted to you. But that's a Black Mirror type of world, not yet happening in China, though you can imagine that combined with a central bank digital currency it's something which could easily be done. But the funniest part, and when I say funny, I mean odd, not hilarious, is the fifth one: what does your choice of friends say about you? In other words, if your friends are criticizing the government, your score goes down. So you have to be very careful about who you pick as friends, and that, to me, is a particularly insidious form of censoring everything you do. So why sign up for this? Well, you get a lot of perks. For example, if you have 600 points, you can take out a small loan; at 650 points, you can rent a car without leaving a deposit; at 700 points, you can apply for Singapore travel; above 750, you get a fast-tracked application for a Schengen visa. You can also increase your odds of getting a date: it turns out the higher your score, the higher your profile is ranked on dating websites like Baihe. Okay, so it sounds scary. Well, in reality, the system hasn't been implemented yet. There's a lot of talk about it, people are getting very worried about it, but it really hasn't been implemented yet. What has been implemented is the financial credit score, which is pretty much like the US; we do this all the time. The stuff which hasn't been done is the social side, and nobody really knows how it will work, which means that a whole bunch of local organizations are introducing pilot programmes, because they don't know what the rules are. So they're experimenting with different ways of doing these things. For example, here is a town in China which posts its best citizens, the outstanding citizens, on a website and on media around the town. Everybody in this town starts with a thousand points. You help somebody else, like putting up a basketball hoop at the local school: 30 points. You donate a TV to the homeless shelter:
50 points. You know, you're a good citizen, you get more points. If you get a traffic ticket, you go down five points; if you're caught drunk driving, minus 20 points; things like that. But how do they get the data? Well, there's a group of people with little clipboards who go around town every day asking people: okay, what good deeds did you do, what bad deeds? And they ask the neighbours: what good deeds did your neighbour do, what bad deeds did your neighbour do? It's literally data collected on paper. It's not electronic; this is a small town right in the middle of nowhere. And this is the problem: there is no control over what that data is being collected for and who it's being used by. In fact, this is a bit like the idea of social physics. This book, really a set of books, really influenced me when I was a kid, with the whole idea that you can predict history by looking at vast psychological movements which change the way people think. And of course, as the books themselves predict, everything goes wrong: you predict something, something else happens, you have to keep trying to fix that, then something else happens. They're great books, highly recommended. Okay, what are the other problems? Security. In India, we have a system called the Aadhaar card, which is kind of like a biometric ID. Everybody in India has one of these, because you need it to function anywhere in India: you want a bank account, you want anything, you have to show your card. And of course it's been hacked. So you can buy this biometric ID, which, unlike your passwords, you cannot change, for something like the equivalent of $200. That's access to millions of people, and that's scary. Okay. Another issue is how easily that data can disappear. In India, for example, another big thing for most parents is the education of their children. There are exams: the Indian Institutes of Technology have the Joint Entrance Exam, the medical schools have something called NEET, there's a whole bunch of exams, and parents are desperate to get their kids into good colleges or good schools. So what these scammers apparently do is buy the data from vendors ahead of time, and then they call the parents and say: I see your son has done well, but really not well enough to get into college. If you pay us money, I'll make sure he gets in. And of course, are you going to say, I'm going to wait? They say, if you wait, it'll be too late; I have to get you in before the numbers come out. So what are you going to do as a parent? You give them the money. Scams like this happen all the time, and it's again about the lack of control over that data. In the Flo example, it was the company giving up control over data to get money; here it is just a lack of privacy and security around your data. And it's not just India and China; this is around the world. There's a company here called the Ulysses Group, which knows that your cars are basically moving computers: there are chips in your car. With modern cars, if you remember when you did your driving test, there's a handbook, and the handbook says how to change the oil in your engine, how to do this, how to do that. I'm like, I have no clue. You can't even open up the engine these days, because it's all connected to a computer.
And what this company does is collect that location data. They have data from BMW, Daimler, GM, Renault, all these companies, and they can sell you the data. They say they can get you 15 billion vehicle locations around the world every month. They know where your car is and where it's going. Alright, next problem, almost the last problem I'm going to talk about: interacting algorithms. We've already talked about the problems with algorithms themselves: the data is bad, the data is not secure, the data is easily stolen. But we also have algorithms that interact with each other. What does that mean? The first type is, of course, our standard credit reporting algorithms. These are the algorithms that give us access to loans, cars, homes, employment and so on. But there are also algorithms adopted by government agencies, for healthcare, unemployment, child support services. So who generates the first type? Basically consumer reporting agencies, credit bureaus, tenant screening companies, places like that. And they get the data from your public records, your social media, your web browsing, all that information. And what do they do with it? They basically give you scores. That part is relatively straightforward. The problem is not that; the problem is the second type. This data feeds into the second set, and the second set is government agencies who want to modernize their systems. So how do they get their algorithms? The point is, most public agencies don't say how they picked an algorithm; they go with the lowest price. And if you go with the lowest price, you get what you pay for, in many cases, so in many cases the system is prone to error. There are lots of examples. In America, algorithms are basically keeping people trapped in poverty because they're denied benefits they should be getting. Or even if you are given benefits: in Australia, Centrelink basically contacted 40,000 people and claimed they had been unfairly claiming benefits they were not entitled to. So what went wrong? It turns out the algorithm assumed that all these people had regular jobs: they get up Monday morning, clock on at nine, clock off at five, and they do this week after week after week. But what about students? What about artists? What about people with irregular income? The system assumed they were lying, that they were actually working while claiming benefits and that the income they said they didn't have just wasn't showing up. Or here in the UK, the Post Office scandal, where Fujitsu supplied a number of algorithms and claimed they were completely kosher. Sub-postmasters were convicted of defrauding the system on the basis of a bunch of algorithms which were wrong. The cases were fought for several years, and finally many of the convictions were quashed, but it's too late for a whole bunch of people who had to give up their jobs. Okay, that's the first type of algorithm. The second type is the information algorithm. What are the information algorithms? Well, Facebook, LinkedIn, TikTok, Twitter, right?
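Going back for a second to that Centrelink failure: the core bug was income averaging, spreading an annual figure evenly over pay periods and treating any shortfall against it as undeclared income. Here is a toy sketch with hypothetical numbers, not the agency's actual code, just to show how someone with lumpy earnings gets flagged.

```python
# Toy sketch of income averaging, with hypothetical numbers.
# A student earns everything in six fortnights at the end of the year and
# honestly declares zero income while on benefits the rest of the time.
actual_fortnightly = [0.0] * 20 + [3000.0] * 6    # 26 fortnights in the year
annual_income = sum(actual_fortnightly)           # 18,000 in total

# The flawed assumption: the income arrived evenly, fortnight after fortnight.
assumed_fortnightly = annual_income / 26          # roughly 692 per fortnight

# Compare the averaged figure with what was declared in each benefit fortnight.
flagged = [
    period
    for period, declared in enumerate(actual_fortnightly[:20], start=1)
    if declared < assumed_fortnightly             # looks like undeclared income
]
print(f"{len(flagged)} of 20 benefit fortnights flagged; the model assumes "
      f"{assumed_fortnightly:.0f} per fortnight versus 0 actually earned")
```

Every zero-income fortnight gets flagged, even though nothing was hidden; regular nine-to-five earners, by contrast, sail through.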
So even if you and your partner have exactly the same friends, the same likes, the same dislikes, the fact that you choose to watch something your partner doesn't will lead you in a completely different direction in terms of the items recommended to you on Facebook or Twitter or TikTok or LinkedIn. You end up with completely different profiles, even though you're basically living with each other and have the same friends. So what happens? These systems are saying: okay, you need to worry about this, buy this; and that feeds back into the first set of algorithms. It's a bit of a mess when you're talking about algorithms interacting with each other, because nobody knows what the input of one is or how it's being used by the other. Okay, last problem: misuse of data. A lot of us have things like Alexa. Anybody? Yes, a few hands up here. One of the things you're all worried about is Alexa listening to you when it's not supposed to be listening to you. But is it? Well, it turns out there was a man, Victor Collins, who was found floating dead in the hot tub of his friend Andrew Bates. The prosecutor said: I want all the Alexa recordings, to figure out whether this guy actually murdered his friend. Well, they didn't find anything. But the point is, you're never sure whether data is being collected on you without your knowledge, or whether the data is being used in ways you never even imagined. One example is Uber, who did something called Rides of Glory. You might say, what the hell is a Ride of Glory? Well, Uber, remember, knows everything about you. You get into an Uber, it knows where you're being picked up, it knows where you live because you get picked up there on a regular basis, it knows where you're dropped off, everything. So they classified these, on the Uber blog; they were actually very proud of it, talking about Rides of Glory. What is a Ride of Glory?
The definition is someone who takes a ride between 10:00 PM and 4:00 AM on a Friday or Saturday night to a place that's not their home, and then a few hours later takes a second ride from within a tenth of a mile of the previous drop-off point. So, basically, you got lucky, right? And Uber tracks it. Here, for example, is San Francisco; the hottest areas are the ones with the most Rides of Glory at that time. So, next time you're taking an Uber, right? Okay, another thing they had is something called God View. There was a journalist who was coming to Uber headquarters and was running late, so she calls her interviewee to say, I'm running late. He says: don't worry, I can see that you're running late. He could see her data in real time as she was moving. Now they claim they have stopped it, but they had that open for a long time. And they even did something more interesting, called Greyball. If you were a politician opposing the use of Uber in your town, they could block you. You're trying to get an Uber ride, but you never get one, because they know you came out of this politician's office and they know exactly who you are. Tough luck: you don't get to use Uber if you're against Uber. So these are some of the ways in which we've talked about how technology can be misused. The major problem with technology is that it's very easy to let someone else make the decisions for us. It's an algorithm, right? Will the algorithm give us the right decisions if we don't actually do something about it? It's just very easy to accept a ChatGPT phrase without actually checking it. I got an email last week from a PhD student who said: dear Professor Rau, I was trying to generate a list of references for my paper, which is on interest rates and mergers and acquisitions, and I note that you have one of the world's most heavily cited papers in this area. Unfortunately, I've not been able to find it, even though I've looked everywhere. Have you published it under a different name? What advice can you give me? I said, my first piece of advice to you is: don't use ChatGPT, because I never wrote that paper. All the references it came up with were completely fake. But that's our problem. When technology says this is the outcome, in some cases there are serious consequences. You're putting someone's future at risk by saying: okay, the algorithm predicts you're going to commit a crime before you've committed a crime. That's straight out of Minority Report, right? Pre-crime. Is that really true? So the worry we have is not so much that technology will destroy the world; it's that human beings trust technology so much that we let it destroy the world, sort of, because we are lazy. Okay, so let's stop here. This is the final one of my six lectures this year. I come back in the fall, and I'm going to talk about something a little more narrowly focused, which is the big ideas of finance. We're going to focus on finance, and we'll talk about the ideas in finance which have won Nobel Prizes. Why were these ideas important? Why did they win Nobel Prizes? What do they tell us about finance? That's what I'll be talking about next year.
And the good news is there are only six ideas in finance that have won Nobel Prizes, and that fits in perfectly with the six lectures at Gresham. <laugh> Okay?

Are you a believer in Web3, in terms of what it could mean for how companies and organizations use data? Do you think it's going to make a difference to, you know, the profit margin, I guess, or not? Web 3.0: do you know much about Web 3.0, and will it make a difference?

Well, there have been multiple levels of this. I think Web 3.0 is more closely related to the idea that you have smart contracts and independent programs coming in; we dealt with some of that in the lecture on DeFi. The major problem with Web 3.0 is that it has not fulfilled its potential. I think it's a work in progress, but I don't think it has got to the state where you can say, I'm going to rely on those kinds of technologies to do anything. At the end of the day, the inertia keeping our regular systems in place is so powerful that it needs something dramatic to get past it. One example of something that really did leapfrog is ChatGPT and AI systems, which within one month of release signed up a hundred million users. But in Web 3.0, the best systems have not managed to get anywhere near that traction, because the existing systems are good enough. Well, sort of good enough: you use credit cards instead of trying to figure out blockchains or distributed ledger technology, and they're okay enough that we don't shift. And that's the issue. But I don't know how it's going to change in the future. A lot of Web 3.0 also worked in a low-interest-rate environment, and the world has completely changed in the last two years. The amount of cash being thrown at these kinds of technologies has dropped dramatically, because people now have other alternatives. In the days when the interest rate was zero, I could say, I can take a punt on all of those; it was costing me nothing. This gentleman here, yes.

Obviously we are producing a lot of data and information, and a lot of people suffer from data overload at the moment. So I feel like there is a filtering process: if you understand structured data, you can see things; if the data is raw, you can't see anything. Is technology going to solve this issue one day, so it becomes easier for all of us? Because it feels like the world is divided between the ones who can see things in data and the ones who don't actually understand anything in what they're looking at. Thank you.

I think that's a very good question, and I think it was actually addressed very beautifully by Tim Harford in this weekend's Financial Times. He was talking about who these technologies are going to help. Are they going to help the Homer Simpsons of the world, or are they going to help the ones who are really smart get even further ahead of everybody else? The idea goes back all the way to when the first spreadsheets were introduced. Before that, we had a lot of people doing everything by hand. Now you could do the same thing in a spreadsheet way, way faster. Initially it was used by a small group of people to improve their productivity: I could use a spreadsheet, but the next person couldn't, and so I would be much faster at producing a solution.
So it helps those people at first, but over time pretty much everybody can use a spreadsheet. It starts out, I think, with a group of people who are able to use it much more, but then everybody catches up. I don't quite know how fast that's going to happen, though. And this gentleman here, yes, go ahead.

A question about the future of law in this country regarding technology. I'll give you an example: a hospital uses AI which it's renting from a third party, which uses someone else's data in order to train it; the AI misdiagnoses, and it goes to court for liability. Who's liable? Did you get the question?

So the gentleman's asking about the role of law in our society, and where liability would sit when an AI agent created by party A and rented by party B is applied in situation C.

Absolutely. Well, that's also an excellent question, and the answer is that no one knows, because the key problem at the moment is that the regulator is always a step behind when a new technology comes in. I'll take the example of AI again. Many of the regulators are asking the AI bosses to come in, saying: can you help us understand the technology in order for us to regulate you? And obviously that leads to conflicts of interest in itself, because the bosses are always going to say, let's self-regulate. If you applied that to banking, if you said to a bunch of bankers, you know what, I'm going to give you the ability to self-regulate because I don't understand how to make loans, everybody would laugh that out of the room. So we need a way, I think, of having independent measures to figure out these questions. But again, businesses have a lot of lobbying power, so I'm afraid part of the problem is going to be that they are also trying to change the laws to make it as easy as possible for them to operate, which means, effectively, no law.

Right, I think we can sneak in one last question. Do you need the microphone? Gentleman here, yes, go ahead. It's probably a good idea, because otherwise it won't get picked up and people online can't hear you.

It may seem very simplistic, and it is, but how do you view the possibility of applying an automatic presumption of guilt to anybody that defrauds or disadvantages somebody who has fallen foul of their algorithms? It would not need the regulators to keep up. You would just say: if you can show that you've suffered a financial loss of so many pounds, it's up to the perpetrators to prove that they didn't cause that loss, and you would automatically get reimbursed. This would have solved the problem of the Post Office scandal: if the Post Office had been charged with causing the financial loss to those sub-postmasters and had to prove that they didn't, the sub-postmasters would have automatically been reimbursed. So it's a bit like credit cards: you only have to say you've lost money on a credit card purchase, and the credit card company will mostly reimburse you.

Correct, it could be like that. The issue is, I can see a lot of reasons why you might want to keep things private without actually doing anything illegal. At the end of the day, for example, cash does serve a valuable purpose, right? With the amount of data being released right now, do I really want Google or all these people to know every single transaction I make?
Well, I mean, my personal identity is tied up in a lot of the things I do. So I'm also a little worried about the lack of anonymity if I have to prove who I am in order to prevent crime. You're going to prevent crime by making me prove my identity, but that also robs me of anonymity. It's a difficult problem, and I don't think anyone has solved it yet. But these are useful areas, and that was one of the reasons why the original blockchain technology took off: the whole idea was that anonymity is possible through technology. You can do a transaction and trust your counterparty without knowing who they are. But again, as I mentioned earlier, it hasn't really taken off that much, because it's so complex that people mostly just used it to commit fraud. Yes. Thank you.

So, ladies and gentlemen, the clock of Gresham College ticks to the compulsory hour that is always allotted to us for these lectures. It's been a magnificent hour, and a wonderful series of lectures which I have really enjoyed, and I can't wait till next year. Thank you. Thank you.