Gresham College Lectures
Born Supremacy – AI as a Pale Shadow of Real Humanity - Professor Matt Jones
In this lecture, we glimpse our best selves and compare that to a world where we lose everything of ourselves to AI. We are glorious creations that revel in agency, freedom and creativity. What do innovations such as cars that don’t need us to drive and creative AIs that remove the effort of, say, writing or music making mean in this context? Further, with a future being forged by limited perspectives, how can human diversity inform better AI for all?
This lecture was recorded by Professor Matt Jones on the 17th of March 2026 at Barnard's Inn Hall, London
Matt Jones is a computer scientist at Swansea University - and a Fellow of the British Computer Society - who works alongside colleagues from many other disciplines and directly with everyday folk across the world to explore the future of digital technologies. Over the last 30-plus years, this human-centred approach has led to novel approaches for, amongst other things, mobile phone-based information searching and browsing, pedestrian navigation, voice assistants and deformable displays.
Much of his work has been driven by intense and sustained engagements with “low resource” communities from informal settlements in India, South Africa, and Kenya. Through their generous and gracious participation, these extra-ordinary users, with their fresh and diverse perspectives, have stimulated insights into the future of digital technologies for everyone, globally. In all this work, Matt works as part of a long-standing collaborative team with Jen Pearson, Simon Robinson and Thomas Reitmaier (from Swansea) and colleagues in India (including Dani Raju) and South Africa (including Minah Radebe).
His work has been supported by the UK’s science funders (EPSRC and UKRI). Currently, this funding includes a Fellowship to explore the future of interactive AI and leadership roles in responsible AI and inclusive digital technologies. This funding has led to a series of impactful publications, talks and influences on people, policies, and practices.
Matt has collaborated with private, public and third sector organisations, including Microsoft, the NHS, Google, IIT-B, the BBC and IBM. He is a member of the Foreign, Commonwealth and Development Office's Research Advisory Group and of the Welsh Government's AI reviews.
The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/ai-humanity
Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/
Website: https://gresham.ac.uk
Twitter: https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege
And he's going to talk tonight about AI as a pale shadow of humanity. So, Matt, off you go.

Thank you. Thank you very much. Thank you very much, Provost, and thank you all for coming, and those of you online: this talk is also for you. And happy St Patrick's Day, for those of you who celebrate. Now, tonight we're going to start at an extreme moment of drama, with a ship, a storm, and a body dragged from the sea. Take a look. Jason Bourne has been shot and he's fallen into the sea. He's a CIA agent. Soon after that scene, he recovers his amazing abilities: he's able to hunt down people, he's able to scan a room and find the best exit when he's on a mission. But his memory has been wiped. Jump forward, not many minutes into the film, and Jason has gone to a safety-deposit locker place, and he's got a key. He's found this key. And he opens up the safety-deposit locker and he finds multiple passports, because, of course, he's a CIA agent. Each of those passports allows him to navigate a certain territory. None of them lets him understand who he is. Now, the films, and the books they're based on, are all about recovering identity. They're not about him being a CIA agent and saving the world; they're about him re-finding who he is. And at the end of the books, and at the end of the films, the moment of triumph is when he can say: I remember everything. He recovers his purpose; he recovers his autobiography.

Now I want us to widen the lens a little bit and think about different types of agent. And of course, I'm a professor of computer science and someone who studies AI, so we're going to be talking about AI agents. Over the last three lectures (this is the fourth, by the way), we've been considering these incredible systems, the large language models, the generative systems: things that seem to be able to do all of the things that we felt were part of being human, but now seem to do them faster than us. In the first lecture, we considered: is this going to mean that we become second-class citizens, subjugated by these higher intelligences? In the second lecture, we considered: well, if we can't beat them, perhaps we have to join them, and we become assimilated by these new forms of intelligence. In the third lecture... oh, there are my dogs. My dogs enjoy my lectures, by the way, as you can see. That was the third lecture. The question we asked last time was: perhaps we're not going to be subjugated, and we're not going to be assimilated; what we're actually going to be is domesticated, tamed by AI.

In today's lecture, I'm going to change the way that we look at this debate. When we think about those AI systems, what do we actually admire? We admire their performance. You press a button, and before your eyes, with these generative systems, out comes, fluently and precisely, what seems a wonderful answer, or an image, or a video. What we're seeing in these AI systems, and we'll explore this tonight, is the performance that we saw in Jason Bourne: performance that is incredible, but that doesn't have an autobiography. It doesn't have that continuity over time. And that makes a difference. So tonight, instead of asking whether this is going to be a lecture which is pro-AI or anti-AI, pro-human or anti-human — none of that. AI is an incredible new technology, in the forms we're seeing now: generative systems, large language models.
So we must grasp that technology and see how we're going to use it. But I do want us to consider tonight the differences. What is the difference between the trajectory of current artificial intelligence systems and the intelligence that is sitting inside you, and inside the group of us as we participate in this practice of having a lecture together?

So instead of starting with AI, let's begin with some people: some incredible humans who have done amazing things with their human intelligence. Marie Curie. Marie Curie spent many years in a makeshift lab, a shed, and she grafted physically with materials to understand and conceptualise radioactivity. Or Yo-Yo Ma, a wonderful cellist. Lionel Messi: he scans the football pitch, and he can see immediately where he might want to place himself and the ball to win the game. All of those people are demonstrating what we do as intelligent beings. They integrate. They integrate cognition inside the brain, they integrate emotion, they integrate physicality. And that integration isn't decoration; that is what makes human intelligence so incredible, and something that we should use AI to platform.

Now, most of us, when we do things on a day-to-day basis, aren't simply carrying out processes. What we're actually doing is taking part in practices. Chess computers have been around a very long time. They can beat almost every single chess player. Put your hand up if you still play chess. Yeah. Chess has not gone away, because it's not a process; it's a practice, with rivalry, with history, with tournaments, with interpersonal communication across the board. What about autopilots? Autopilots in aircraft are amazing. They can carry out a process to stabilise the plane. Hands up if you'd like to fly in a plane with no pilots. No one. Oh, there are some. Oh, there we go. Well, there's a test we can do, and you will be signed up. But most of us wouldn't, because flying a plane isn't a process; it's a practice: a practice with mastery by the pilots, a practice of responsibility and accountability that requires a human.

So chess computers and autopilots carry out processes. What we do is to enact practices, and those practices build up over time. They require us to enter fully into the world, to build an understanding of that world, to build up memories, and then to act together collectively. And that's what we're going to look at tonight. There are three cognitive architectures inside all of us, and we're going to see how these work together to enable amazing intelligent actions and interactions and collaborations. As we go through each of those, we'll also have a look at the ways in which state-of-the-art AI currently handles perception, memory, and collective intelligence, so we can see where there is resemblance and where there is deviation.

So let's begin with perception: how we take in the world and make sense of it. Now, for a very long time, the thought was that we look out through a clear windscreen, the world comes in, and then your brain whirs and makes sense of that world. Modern neuroscience, by people like Karl Friston, suggests that what you're doing right now is actually hallucinating. Do you feel like you're hallucinating? You are. You're hallucinating under constraint. Your brain is trying to predict what I'm going to say and do. So: your brain didn't get that right. Next. And that prediction, that hallucination, is modified by the information that's coming through our senses.
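A minimal sketch, in Python, of that "hallucinating under constraint" idea; this is my gloss on the prediction-and-correction loop described above, not Friston's actual free-energy mathematics, and the numbers are invented purely for illustration:

```python
# Perception as a running prediction, continually corrected by the senses.
# The internal model guesses, incoming evidence corrects the guess, and
# the "surprise" (prediction error) shrinks over time.
prediction = 0.0   # what the brain currently expects to sense
gain = 0.3         # how strongly sensory evidence corrects the prediction

for sensed in [1.0, 1.0, 0.9, 1.1, 1.0]:    # a stream of noisy sense data
    error = sensed - prediction              # prediction error ("surprise")
    prediction += gain * error               # nudge the model toward the evidence
    print(f"perceived: {prediction:.2f}  (prediction error {error:+.2f})")
```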
Now, we can demonstrate this architecture, I think, with a couple of simple optical illusions. Here's a very famous one, which caused a lot of controversy a few years ago. Take a look at that dress. Would you raise your hand if you see that dress as white and gold? Okay, hands down; blue and black. Whoa. It's the same image. Here you are, human intelligences, and you see it differently. Now, there were lots of debates when that optical illusion came out, and a current thesis is that it's to do with what your brain is predicting: the assumptions that you have. In a very big study, Pascal Wallisch, I think his name was, got a large number of people and asked them whether they saw it as white and gold or blue and black. And you can see, as with this room, the majority saw it as white and gold. He then asked them what assumptions they had made about the lighting on the dress: did you assume that the dress was in shadow or not? And the ones that assumed the dress was in shadow tended to see it as white and gold. So your brain's assumptions affect the way in which you interpret the world.

Here's another one. Watch this figure. Hands up if you see that figure going in a clockwise direction. Hands down; hands up if you see it going in an anti-clockwise direction. A few. And now see if you can flip the direction. If you stare at the ankles of the dancer, you might see her flip in the opposite direction. I can see some nods. Isn't that amazing? It's the same figure. The data that's coming into your brain is ambiguous, so your prediction model is trying to cope and make the best guess. Fascinating, isn't it?

Now, what about these things called large language models? And by that I mean the kind of things that many of us have started to use, like ChatGPT and Gemini. What do they do? Do they do anything similar to us? Well, in a sense, they do. When large language models are being trained, they're presented with millions of examples of text. And during the training, they try to predict what is going to come next, just like you are trying to predict what I'm going to say next. And if they make a mistake during training, they adjust millions of parameters in their model so that next time they're more accurate. It's called gradient descent, and it's a way of tuning the model: you can imagine lots of little knobs being twiddled to improve its accuracy. But crucially, this updating of the model happens only during training. When the model is fully trained, it freezes, and it won't be updated until the next time it is completely retrained. Meanwhile, right now, your brain is doing millions of little calculations to update your world model of me, of this talk, of ideas, in real time.

Now, we had some optical illusions a few minutes ago, and I want to continue with flickers and shadows, and go way back in time, to 350 years before the common era. Because Plato, in the Republic, gives us (and that's what great philosophers do) an amazing way of looking at how we perceive the world versus how AI perceives the world. In this particular allegory, what Plato said was: I want you to imagine prisoners. You can see some of them there. They are chained, and they're staring at the wall of a cave. Behind them there's a fire, and people carry objects past the fire, like I'm doing now. But the only thing the prisoners can see are the shadows. They never get to see the real objects, just the pale shadows of reality.
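Before we go back into the cave: to make the training process just described concrete, here is a toy sketch of next-token prediction tuned by gradient descent. It's an illustrative miniature in PyTorch-style Python, not the code of any real frontier model; the five-word vocabulary and tiny network are invented for the example.

```python
import torch
import torch.nn as nn

vocab = ["the", "cat", "sat", "on", "mat"]      # hypothetical tiny vocabulary
stoi = {w: i for i, w in enumerate(vocab)}

# A deliberately tiny "language model": token embedding -> score per word.
model = nn.Sequential(
    nn.Embedding(len(vocab), 8),   # token id -> 8-dimensional vector
    nn.Flatten(0),                 # shape [1, 8] -> [8]
    nn.Linear(8, len(vocab)),      # vector -> a score for each next token
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # gradient descent

context = torch.tensor([stoi["the"]])   # input: "the"
target = torch.tensor([stoi["cat"]])    # the training text continues "... cat"

for step in range(100):                 # training: the knobs get twiddled
    logits = model(context).unsqueeze(0)                 # guess the next token
    loss = nn.functional.cross_entropy(logits, target)   # how wrong was it?
    optimizer.zero_grad()
    loss.backward()                     # work out how to adjust each parameter
    optimizer.step()                    # nudge every knob slightly

# After training, the weights freeze: no more updating until a full retrain.
print(vocab[model(context).argmax().item()])   # -> "cat"
```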
Now, what Plato was interested in was saying: you know, we can't trust our senses; we need to turn from the wall, and we need to go out into the world and really understand what is happening, if we want to understand the world as it is. Now, what does this tell us about large language models? Well, I want you to imagine large language models as those prisoners. They are sitting looking at a wall, and while they're being trained they are processing just pale shadows of our actual existence: traces of conversation, the final output of an artist's production. They never turn towards the fire, they never leave the cave, they never engage with the world. And that reduces what they can do in terms of understanding and perceiving the reality around them.

Let's explore this a bit further. We're going to try an experiment where, partly, you're going to behave like a large language model: those models that can predict the next word. And hopefully we'll also see how you differ, because you have experienced the world. You have come out of the cave, you've walked past the fire, and your full senses have had an experience of life. So, this is an easy experiment. You're going to see some phrases appearing, and when you see 'dot dot dot', your job is to shout out the answer. Okay? This is great. Are you ready? Excellent. First one; we'll start easy. Excellent! A large language model does exactly the same. Let's try this one. And some of you even got there before the dot-dot-dots; your prediction models are running hot. That's wonderful. Now, let's try this one. Okay. Yeah, okay: a pause, not immediate, and I could hear different answers and slight hesitation. Now, a large language model probably would come up with many of the answers that you've just uttered. But I wonder how many of you also visualised the tomato, the sharpness of the knife. Did you feel it going into the skin of that fruit? Did you see the juice, or feel the juice ooze out? Let's just take this one more. This might need a trigger warning; it might bring back bad memories. It did for me. Are you ready? Close your eyes if you don't like it. Yeah, exactly. So what did you forget?
AUDIENCE MEMBER: Trousers.
MATT JONES: All sorts of things, right? What did you say?
AUDIENCE MEMBER: Trousers.
MATT JONES: Trousers. Okay. Gresham audiences are the best. With a large language model (I tried this), the most likely thing is going to be 'the pen', okay? So the LLM is able to predict the pen, but you feel the dread. And it isn't just me being romantic about that; you really do. Inside your brain, when you were processing those sentences (when they've scanned people's brains doing this), your sensory areas, not just your linguistic areas, are firing off. Your experience of life wraps round your linguistic concepts.

Let's just have a little break and see Jason Bourne again. A very famous scene, this: he's in a train station, and you can see his incredible capabilities. He's looking for his assailants, he's trying to find the exits. He's a predictive machine, but he's predicting under extreme consequences, right? If he gets the answer wrong, he could be dead. And that's the difference, really, between what large language models are doing and what we do. Our predictive systems, when we perceive the world, are helping us to survive. Karl Friston, who I mentioned a few slides ago, in his theory about how our brains predict and then change their predictions, says that we have evolved that way to reduce the amount of surprise our bodies experience. Yes, we can see how AI can do that and can help us. But because you've lived an experience, under constraint, over time, you have a cognitive architecture that's going to enable surprisingly different answers, if you let it.

So now let's move on to the second cognitive architecture. We've done perception; now, memory. And as before, we're going to look briefly at the different types of memory structures that are in your brains, and then think about how they relate to these generative frontier models, these large language models. Let's try a very simple experiment to illustrate working memory. Can you see that? Got it in your memories? It's gone. Right. Put your hands up if you can remember that. You think you could remember it? Very good. Oh, two. Well, Provost, of course you can; that's why you're the Provost. Let's just flip that a bit. Here's the same again. Now, probably many more of you could tell me that string. What this shows us is that working memory is the short-term store we have in our brains that allows us to carry out our tasks moment by moment. And modern neuroscience studies show that you can hold about four chunks, give or take. When I showed you that long string, each of those characters was a little chunk, and there were too many of them, so your brain just gave up. When I put them into little groups, you had four chunks, and you could remember them; most of you could.

Let's now think about large language models. The working memory of a large language model like ChatGPT seems much more capable, seems huge. It's called the context window, and it can contain the equivalent of hundreds of pages of a book. So it seems like it's better than us. Seems like it has this born supremacy. But there is a big difference. Take a look at that chunk. Just one chunk in your working memory. Your brains right now are using that chunk to reference vast amounts of content. Maybe the whole story, the nursery rhyme; maybe the time (and I'm looking at some of my family here) when they acted out being Little Red Riding Hood, Tess. Maybe the emotional feeling that you had when you sat with your parent or grandparent and had that story read to you.
All of that through one little chunk. Meanwhile, the large language model can't do that. It has to have everything in front of it; it has to see all that text in its working memory. You've got one chunk.

There is a similarity, though, between the way in which these models use their working memory and the way we do as well. We saw that we can lose track: you lost track when I showed you that string of nine or ten characters. Large language models also lose track. It's called the 'lost in the middle' problem. The way to think about this is that if you go to a meeting (I'm sure we all have; I was in one today that took six hours), I can sort of remember the start of the meeting, and I can sort of remember the end of the meeting, but there's a bit in the middle of which I've got no idea. And the same thing happens when the context window in a large language model gets very full: the way in which the system distributes its attention probabilistically tends to favour the start and the end of the prompt and the context window. And if there's vital information in the middle, it can easily be lost. That's where you can see the kind of hallucinations we've heard about when systems are generating text.

Okay, that was working memory; now, semantic memory. Have a look at these three very simple concepts, and I need you now to put your hands out in front of you, like this, or like this. And I want you to think that this one is queen and king, and this one is apple. This is a bit complicated, isn't it? Now I want you to position your hands in 3D space to show me where you would put those concepts to make meaningful sense of them. People are looking at me as if to say, 'I don't get this.' So: move them around. Where are you going to put apple? Where's apple going? Right, okay, let's have a look. Probably, if I'd explained it better, you might have put apple a long way away from queen and king. And this is what happens in the semantic memory of large language models. When they are being trained, the text that is coming in is sliced up into what are called tokens, usually smaller than words, but you can imagine them as words. And then those words are converted into a sequence of numbers. It's called a vector, and that vector places the token somewhere in multidimensional space. And with enough text and enough training, things cluster, so that related concepts are mathematically connected. So what a large language model does is to turn meaning into maps, turning the relationships it has seen in the text into multidimensional space in a model. And then you can do some mathematical reasoning with it, like the example I've put on the slide.

Inside our brains, in our semantic memory, there are some similarities. We have an associative memory, so your brain will store and place related concepts close to each other. But the work of Lawrence Barsalou and others shows that, again, the way in which our brain works with that memory is very different. In large language models, meaning is turned into maths. Inside your brain, when I say the word orange, it isn't just a word. As we saw with working memory, it is a rich association of concepts, of senses, of episodes in your life, perhaps when you squeezed an orange: a very rich web of information. Modern large language models don't have that richness. Now, it's a very, very hot topic in research to try to do what is called 'grounding' these models.
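A toy illustration of that 'meaning into maps' idea, and of the kind of vector arithmetic the slide example alludes to. The three-dimensional 'embeddings' here are hand-invented for the demonstration; real models learn hundreds of dimensions from text.

```python
import numpy as np

# Hand-made toy embeddings: royalty words cluster; apple sits elsewhere.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.0]),
    "woman": np.array([0.1, 0.2, 0.0]),
    "apple": np.array([0.0, 0.1, 0.9]),   # far from the royalty cluster
}

def cosine(a, b):
    """Similarity of direction in the space: 1.0 means 'same meaning region'."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman lands nearest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
for word, vec in emb.items():
    print(f"{word:6s} {cosine(target, vec):.2f}")
# 'queen' scores highest; 'apple' sits in a distant region of the map.
```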
So our semantic memory, and in fact all of our memory, is grounded. That means we go out into the world, we experience the world, we pick up an orange, we bite into it, we feel the juice. The concept, the word orange, is not just a word; it's an experience. So there are many, many research groups asking: how can we do this for large language models? How can we ground them, bringing in multimodal perspectives, using robots to go out into the world and experience it? But we're a long way off. Never say never, as a scientist, but on the current trajectory we are very different from these large language models.

Now, I'm not going to spend long on procedural memory, because Jason Bourne is procedural: he's got these abilities, his memory's been wiped, but he's still able to operate. Procedural memories are the things that we've learned to do, like riding a bike, like cutting a tomato, and we do them without thinking about them. AI systems, particularly robotic systems, are also able to learn new tasks. They can watch us and imitate us; they can learn how to grasp this bottle. But what they don't have, because they have no autobiography, is any sense of mastery. When you learn to play the piano, you're not just learning how to clonk the keys; it's becoming part of your narrative identity. And you can do that because you have episodic memory.

This was taken a long time ago. Can anyone remember when it was like that? Some of you might, and they'd be using their episodic memories. Episodic memories are just like episodes of a show: you are remembering things you have done in your past, and you are bringing them back to life. Now, for a long time, science felt that this was a bit like going into a video store, picking out a videotape (that's how old I am, I remember videotapes), putting it into the video player and pressing play. So they saw episodic memory as a recall process. But the surprising thing is that the architecture in our brains that allows us to predict the world as we take part in it right now is similar to the one that is used when you think about the past. So we're not really retrieving a memory; we're recreating it. What your brain does is tightly compress the things that you've experienced. And when you recall them, little fragments bubble up, and then your amazing predictive processing system thinks: ooh, how can I put these together to make a meaningful story?

Now, we know this is true because people like Elizabeth Loftus have done some incredible experiments. So take a look at just one such experiment. Okay, so participants were shown videos like this. And once they'd watched the video, for some of the participants Loftus and her colleagues would ask, 'How fast were the cars going when they smashed into each other?' And for other participants they asked, 'How fast were the cars going when they hit each other?' What kind of effect do you think that had on the memory? Does anyone want to suggest? Yeah, leading the witness, yeah? Well, the ones that heard the word 'smashed' would report the speed of the cars as much faster than the ones that heard the word 'hit'. They watched the same thing, but their memory was malleable, because the brain was thinking: smashed, smashed, it must have been fast. That's a reasonable assumption. A couple of weeks later, they brought the same participants back and said: Michelle, do you remember that video I showed you two weeks ago? Would you like to tell me what you saw?
And if you had been given the word 'smashed', you would say: it was terrible, there was debris everywhere, there was glass, there was blood. If you had heard the word 'hit', then you would just say: well, it was a minor accident. So our episodic memories are being recreated using prediction engines, the same prediction engines we use in real-time perception. Another really fascinating set of experiments: let me just try this with you. If I say to you now: tired, bed, cocoa, snore. I hope some of you are not snoring during this lecture. Okay, so there we are, a set of words. I say, off you go, come back on the 21st of April, when my next lecture is, and then I come up to somebody and say: what words, sir, do you remember?
AUDIENCE MEMBER: Cocoa.
MATT JONES: Cocoa, excellent. Well, what the study showed was that people could remember all of the words they were told, but they would also say 'sleep', because the brain was thinking: oh, all of those other words are associated with sleep; I must have heard sleep.

Now, why does all of this matter? Well, first of all, it allows us as human intelligences (a new acronym for you: HI, human intelligences) to do what Endel Tulving calls mental time travel. You know we scientists use complicated terms; it just means you can think about the past. You can go to the past and you can reflect on an event, and that reflection can allow you to think about future possibilities, and to create, and to be purposeful. And here's a fascinating finding: the brain systems that you use when you are recalling the past are, from scans, when they put people into these scanners, the same structures that are used when you're trying to simulate or imagine a future. That's something. At the moment, there is nothing like episodic memory in artificial intelligence. People are trying, again, to push the boundaries by creating persistent world models for artificial intelligence systems. But, again, never say never; at the moment we're a long way from that.

Now, we've talked about perception and we've talked about memory. I want to bring us back to this: if you put those things together, what do they enable? Well, because we can share a history and we can share stories, they enable us to be not individual intelligences but collectives. Let's have a look at some examples. The thing that struck me when I saw that: I thought Theresa May was in the clip. It's not, is it? What we've just seen is commonplace. If anyone goes to concerts of any form, we've seen people in a practice together, playing in an orchestra. And yes, there are second-by-second predictive systems, the perception I talked about; that's clearly important, and you get this amazing synchronisation by different members of the orchestra. But actually, the episodic autobiographies of those individuals over a long period of time scaffold those performances. They've shared rehearsals, they've shared lives, they've come to create norms about what makes for a good performance for them.

Or take the work that I'm involved in: science. Recently there was a report that said a super AI researcher is coming that's going to be better than all of the professors in the world. So I thought: oh gosh, that's me redundant, then. But it misses the point, because science isn't about finding that one individual brilliant person. Of course there will be brilliant individuals, but science is a practice, done under constraint, with consequences. Right now I'm doing it, right? I've spent 30 years building up my reputation, and any second now it could crumble into dust. And because I have that constraint and those consequences, that is going to shape the kind of things that I might do. I will try to do things with conviction and with commitment. We know, of course, don't we, that these models, the large language models, can ingest all of science that has ever been written, all of poetry that has ever been created, every book that has or will be written, every piece of music. But what they then do is to perform processes. They're not part of the collective practice. Because they just ingest, you know; they're not a member of this collective, they're not carrying out their processes under constraint with consequences.
And yes, they've read everything that my colleagues and I have ever written, but they're not part of that collective. Aggregation, clearly, isn't membership.

In the last few minutes of this lecture, before we go into question and answer, I think I've got to address this question. You might be sitting there saying: okay, we get it, we've got brains, we've got amazing cognitive architectures, and we're different from these generative AIs, different from the large language models. But does it matter, Matt? If we can have one of these systems, these AIs, that can do better diagnosis, or better policy writing, or better dissertations for your final-year project, does it matter if it is a very different type of intelligence? Well, let me try to convince you that what you need to focus on is that word 'better'. Large language models, generative systems, can produce the most optimal answer, the 'best' answer. But consider this. I want you to imagine that there's a world where a terrible injustice has been defeated. It was a world where one group of people was excluded while a small group of people thrived. And your job is to try to put that society back together again. Well, with an AI system, we could absolutely feed in masses of data about incidents of violence and incidents of calm. We could put in all of the stories of people, and we could press a button, and our large language generative system could come up with the best, optimal solution for a stable society. But would it come up with the right answer? Because in the actual world, when apartheid had crumbled, there was an incredible process and practice. It was called the Truth and Reconciliation Commission. If you want to read about it, that's the book that you should get; it's been out, obviously, a very long time. And in that book, you heard the stories of people who had memories. You heard people who were held accountable for their actions. It wasn't just a process; it was a practice of moral repair.

So, what are AI systems 'better' at? Well, I think, as we move into thinking about future AI systems, the question for many of us building these systems, and I would say for all of you using them, is not to be anti-AI. Okay? That would be mad. There are some amazing new systems out there in the world right now, and in the next five years there will be even more incredible capabilities. But those systems will be like Jason Bourne when he's pulled out of the sea: no memory of what he is there for. So this talk, I hope, has helped you to think about not being anti-AI, but being pro-agency, being pro-human. So we can build systems that enable us to be kept accountable, to be responsible, and to bring our full world experiences as we shape the future.

I want us to come back to the beginning of the talk. I don't know about you, particularly the younger people in this audience, and maybe some of us older people: when we see this, and we read about the incredible abilities of AI, we can get a little bit despairing, I think, and ask that question: what are we for, in this blazing-hot AI summer? And the answer is to return to Jason Bourne. Jason Bourne only became purposeful and whole when he recovered his autobiography. The machines that we're building can give us answers. But only you, and you, and you, and me can live with the consequences. And as we live with those consequences, we make decisions with conviction, with commitment, bringing our full selves. And if we do that, then we can answer the question. That is what we are for.