
Gresham College Lectures
Robots in Science Fiction
A lecture by Jim Endersby
The transcript and downloadable versions of the lecture are available from the Gresham College website:
https://www.gresham.ac.uk/lectures-and-events/scifi-robots
Gresham College has been giving free public lectures since 1597. This tradition continues today with all of our five or so public lectures a week being made available for free download from our website. There are currently over 2,000 lectures free to access or download from the website.
Website: http://www.gresham.ac.uk
Twitter: http://twitter.com/GreshamCollege
Facebook: https://www.facebook.com/greshamcollege
Instagram: http://www.instagram.com/greshamcollege
- As you may have deduced from this, my name is Jim Endersby. And this is the third in a series of lectures about science fiction, which I've given the overarching title "How Not to Be Human". And this evening's alternative to the human is the robot. And, of course, as with all of these, I have to start with a disclaimer. There are millions more robots in science fiction than I could possibly describe this evening, and many, many themes that I'm not going to touch on. So apologies if your favorite robot isn't featured. I want to start, though, by talking about the contrast between nature and culture, which is something that's come up in the background to quite a few of these lectures in various ways. And it becomes particularly obvious when we start to think about artificial people, as robots are often imagined. So the whole idea with human nature, which is the grand theme that links all these lectures, is that there are naturally occurring properties in people, things in us that make us who we are, that make us different to each other and make us distinct from other things which, as we've seen in other lectures, are considered not to be human. And, as I say, the idea is that those differences are somehow just out there. So we looked in the last lecture at the way that women have sometimes been considered by male thinkers not to be human, or not to be fully human. And I looked at the work of Mary Wollstonecraft, one of the Enlightenment philosophers, who used the concept of natural rights to make her argument in favor of the rights of women. If everybody has natural rights that God has given to them, and everybody is created by God, then everybody should get equal rights. And she applies the same argument to enslaved people, arguing that people of African descent are just as human, just as much created by God, as white people are, and so forth. But that notion of natural rights is a key idea for the Enlightenment, and it motivates a lot of the thinking of that time. So her contemporary, Jean-Jacques Rousseau, with whom she was disagreeing when it came to the education and rights of women, also used the idea of the state of nature. He doesn't actually use the term noble savage anywhere in his writings, but that notion of a primitive, unspoiled natural condition is one that crops up in his writing. And I argued in the very first lecture that he used his idea of what apes were like, what he thought orangutans were like, to present an image of what an unspoiled natural human being would be like if they hadn't been corrupted by civilization, by culture, in effect. And part of the background to these things, as well as arguments about the rights of women and the rights of human beings generally, is that the rights of slaves are much debated at this time. And as I said, Wollstonecraft extends her argument about human rights to women and to people of African descent. And I think it can be argued that, as a result of those arguments, as a result of the people of Saint-Domingue, as Haiti then was, rising up, overthrowing slavery and, for a short period at least, managing to get it abolished throughout the French empire, they and women like Wollstonecraft take the notion of the rights of man, which really did mean the rights of male people, almost all of whom were white, and extend that into an argument about human rights that encompassed, in theory, everybody. 
So I've used, as I said, women and apes as examples of the not-human in the ideas of various thinkers and science fiction writers in earlier lectures. But, of course, those again buy into the idea of natural difference. Apes and women are things found in nature. And so ideas of natural rights and natural differences are embedded in the kinds of philosophy that we're looking at for the 18th century. This evening's example would seem to change the terms of the argument quite dramatically, because of course robots are clearly cultural, they're artificial, they're things people have made, and they're made according to our ideas of what a person might be, or could be, or how a person might be improved: which aspects of the human would you want to include in a robot, or remove from a robot, to make it better for different kinds of purposes? And that's the kind of broad frame of what I'm going to talk about. But actually I want to argue that the distinction isn't quite as clear cut as it looks. So obviously the kind of history of science that I do, and there are many other histories of many other sciences, is focused on the ideas that come out of Europe and out of the countries that are settled by Europeans, and the idea of objective knowledge. So this is Francis Bacon, one of the founders of the European tradition of scientific thought. He's the man who famously comes up with the idea that knowledge is power, scientia potentia est. And part of the appeal of what's sometimes called the scientific revolution, which grows out of the thought of people like Bacon and out of the thought of the people who worked at Gresham College when it was first founded and became the founders of the Royal Society, which was set up in honor of Bacon to implement his ideas, was this idea of objective knowledge: that there were truths out there that would be common to everybody. If you could study nature without the obstacles of language, and culture, and religion, the things that are specific to us as different groups of people, you would find truths that everybody could accept. And we could take those truths, and we could master nature, conquer nature, subdue nature, and improve the lot of human beings. That was Bacon's great idea. So that knowledge wouldn't belong to any one group; it would be potentially accessible to everybody because it was objective. And by contrast, people like Bacon see culture as being very much about the differences between people: the specific languages that we speak, the customs and knowledge of our particular tribe or our people, our religion, whatever it might be, they separate us. Science was going to be the great unifier; that was the kind of dream that motivated many people. But as we saw in earlier lectures, this kind of clear cut difference between nature, the realm of objective scientific knowledge, and culture, the more subjective, the more human, the more culturally specific, breaks down all the time. So for example, when we looked at apes, we saw how particular things that are going on in the culture of the period deeply changed the way that people understood and studied apes. So this is an example from, of course, Kubrick's film, "2001", an imagination of the early origins of human beings depicted as savage apes who are violent and territorial and learn to kill and learn to eat meat. And that's what sets them on the road to evolution and improvement that will eventually take them to the stars. 
And as I explained in that lecture, the very specific ideas of a writer called Robert Ardrey, drawing on contemporary scientific research but placing it very much in the context of America and its global position at the time, shaped Kubrick's ideas very directly. So you get these kinds of links between the politics and the culture of the time and the way that people do science. Another example that I gave in that lecture was the way that various cultural shifts, including the rise of feminism in the 60s and 70s, encouraged many more women to go into science. One of the sciences they go into is primatology. They become very actively involved in the study of apes and they discover all kinds of things about apes that the men who'd studied them previously hadn't noticed. To put it very simply, apes spend a lot of their time having sex rather than fighting each other, and actually the females are very sexually active and are much more dominant in the social organization of groups of apes than had previously been recognized. So you begin to see all kinds of snippets of evidence that suggest that culture transforms nature, that the specifics of where we're coming from at a particular time in the history of science change the kind of science we do. So that's one reason I would think that culture-nature split isn't quite as clear cut as it might seem. I'll give you one more quick example. This is my particular obsession. Any of you who've suffered through earlier lectures will know I have a thing for Darwin. And it's interesting to a historian that two people, Alfred Russel Wallace and Charles Darwin, both came up with the idea of natural selection, almost at exactly the same time and quite independently of each other. And it's interesting that they saw competition between different species and between individuals within a species as the motor of change that would transform the world. And I think it's really not a coincidence that they both live in the world's first industrialized capitalist economy, and they have different social roles within it, in different places in the social hierarchy, but they see competition all around them as the motor of change in the world they're living in. And they both in effect see nature as a kind of perfect free market: competition will lead to the best surviving, prospering, and dominating the marketplace. There's a lovely anecdote about Darwin, who, you may know, spent many years classifying barnacles, as you do, and on one occasion he wrote a glowing description of the embryonic form of some barnacle, and when his children read it they said it sounded as though he was writing an advert. And it's not coincidental that that language of advertising, the language of improvement, and competition, and death, becomes part of the way that Darwin and his contemporaries see the world. And you can see the way Darwin explains his science, and this is true of all sciences, particularly new sciences: he's got to borrow from the language around him and use metaphors and images that his audience is going to understand. So for example, he writes a book on orchids, another of my Darwinian obsessions, and he's trying to explain how this particular orchid, the Cypripedium, a lady's slipper orchid, actually traps insects in order to ensure that it gets pollinated. 
And he writes that a small insect could crawl in, but not out, and so the labellum, that's the lower petal on an orchid flower, acts like one of those conical traps with the edges turned inwards, which are sold to catch beetles and cockroaches in the London kitchens. So there's that: the world of Victorian capitalism, of mass-produced artifacts that were created to solve the problems of urbanization and overcrowding and filthy dirty Victorian kitchens, that world sneaks into this text about orchids and becomes part of the way that his audiences are going to think about it. And, of course, it's worth noting that a trap is something that is made with a very clear purpose in mind. And so ideas of intention and design kind of sneak back into Darwin's writing, even though he and Wallace argued very strongly that there is no divine designer, that God did not make organisms, they made themselves; yet that language kind of percolates back in through these kinds of metaphors. So again, the language of science is always cultural as well. There's no neutral technical language that is stripped of all those associations. Nevertheless, once science starts, whether in reality or in fiction, to create an other, an alternative to the human, we ought, I think, to see something rather different from what we saw when we were comparing what is human to other supposedly naturally occurring non-humans. And as I say, apes and women are the natural examples; the robots are the new examples. And that's one of the things I want to think about today: once we start to make our own others, how does that reflect the way we think about what is human and what is not, what kinds of people might be considered human? And very broadly, I want to argue that some people use robots to make people more like robots, and others try to make robots that are more like people. And this becomes a contest between, as it were, reason and emotion, between thinking and feeling. And I'm going to try and explore that, give you some examples, see if I can persuade you of it. So let me start with model workers and the world that Darwin and Wallace are coming out of, the world of Victorian factories, Victorian mass production. This, slightly surprisingly, is the very first robot in the world, and it first appeared in British factories around 1835. Now it doesn't look anything like a robot as we would imagine it today. But this is a machine apparently instinct with the thought, feeling, and tact of the experienced workman, which gains the nickname the Iron Man, even though, as you can see, it doesn't look even vaguely anthropomorphic. But because it seems to do, all by itself, things that previously took a human being to do, it becomes in some ways the precursor to the robots. And it's installed by mill owners in textile mills, like the one I just showed you, basically to put a stop to strikes. The strikes that are plaguing the textile industry, where highly skilled workers, weavers and spinners, will down tools in order to get better pay and better conditions; the mill owners want to put a stop to that by employing what becomes known as self-acting machinery. Machinery that has perhaps the first glimmerings of what we would now perhaps want to call artificial intelligence. And it was designed by this man, Richard Roberts. He had a firm in Manchester, which is the heart of the cotton industry, Sharp, Roberts and Company, and it's called a self-acting spinning mule. 
Mule because the early version is a combination of two different spinning machines, so it's like a hybrid between these two, and self-acting because it doesn't need as much skilled supervision as it did before. And it's part of a process that goes on through the cotton industry and the other textile industries in the early 19th century of fewer and fewer skilled workers being needed to produce more and more cotton, with staggering results in terms of productivity and eventually global market share. It was actually patented in 1823, and it took many years of perfecting and improvement before it was ready to actually be wheeled out into the factory environment. And in many ways it's an automaton, a machine that works by itself. There are other kinds of automata, as we'll see when we come back to them a little bit later. And this is the kind of workplace that is created by the self-acting mule. Increasingly, you get this kind of vision of order. Rows and rows of identical machines, very few highly skilled workers, very few male workers, mostly women and children doing this, who are paid a lot less, and are often put in grievous danger by the way the mill is run. And all these machines run at the same speed, one of the many things about this that I have to restrain myself from getting going on, otherwise we'd be here all night. But all of those belts here, they're leather belts, they're all connected to drives to spindles across the top, and all the spindles are connected to one vast steam engine that powers the whole factory. And this is really the first time in history that the rhythm of work, where everybody shows up at the same time, clocks on at the same time, works for the same number of hours and so on, is driven by the steam engine. You've got to get all those machines running together. You can't have people stop and start when they like and work whichever hours they like. So the whole nature of work changes because of these machines. And, of course, for the mill owner, this increasingly starts to look like a vision of utopia: fewer and fewer workers, less and less power in the hands of the workers. So one of the people who first celebrates that is this man, Andrew Ure, who's a Scottish doctor and chemist. He works in the textile industry on things like bleaching and dyeing and improving those processes, and he becomes fascinated by the whole process of automation and mechanization. And the sheer scale of the textile industry, the vast amount of growth in productivity without a growth in wages or workforce, really impresses him. And it's particularly the productivity with very little skilled labor that really excites him. So he makes this tour of the textile industries and produces this famous, or some people say infamous, book,
"The Philosophy of Manufactures:An Exposition of the Scientific, Moral, and Commercial Economy of the Factory System." And it's a title that may seem slightly bizarre to us now. I mean, we can understand a political economy, perhaps a scientific commercial one, but a moral economy doesn't quite fit with our emotions of factory work. But the year the moral economy was at the heart of it, that discipline imposed by the machine, the work rate and so on, profoundly excited him and other people at the same period write about the factory puts a stop to the celebration of Saint Monday. So Saint Monday was a tradition observed by many workers where basically you'd be so hung over on a Monday morning that you couldn't face work, so you would just declare it was Saint Monday's day and have the day off and pick up the pace on Tuesday and catch up with the work you'd missed. Well, the factory doesn't allow you to do that. And in the eyes of many factory owners and their boosters, that's a very good thing. And this again is that vision of harmony, of order, of mathematical precision, the whole way this is drawn with the people, as I said, mostly young women who are not paid very much, literally dwarfed, almost disappearing into a factory system, which increasingly seems to approach a condition where there would be no workers at all. And you can see that Ure is in love with this. He actually writes,"I have stood by for hours, admiring the rapidity and precision with which the self-actor executes its multifarious successions and reversals of movement." He's really turned on by this sight."This invention confirms the great doctrine already propounded that when capital enlists science in her service, the refractory hand of labor will always be taught docility." That we're going to tame, as he puts it, the Hydra of misrule with this Herculean prodigy, as he calls it. Now his near contemporary, Karl Marx, was equally excited, but as you can imagine, reached rather different conclusions about the desirability of this. Marx says, "The self-acting mule opened up a new epoch in the automatic system. Machinery, not only acts as a competitor who gets the better of the workman and is constantly on the point of making him superfluous. It is also a power inimicable to him and as such capital proclaims it from the roof tops and as such makes use of it." So the machine is the enemy of the worker in Marx's view. Constantly it is killing him, constantly threatening his livelihood, constantly pushing up the work rate without any concomitant increase in the pay rate. And it's interesting how similar their views are despite their very different politics. So this is Ure."How vastly productive human industry would become when no longer proportioned in its results to muscular effort, which is by nature fitful and capricious, but when made to consist in the task of guiding the work of mechanical fingers and arms, regularly impelled with great velocity by some indefatigable physical power." So all the kind of limitations of the human of mere muscle power, the capriciousness, the fitfulness, the unionization, all the other things that make humans unsatisfactory can be banished. This is Marx writing on the same subject in startling similar terms."Slavery cannot be abolished without the steam engine, and the mule, and spinning-jenny, because people cannot be liberated as long as they are unable to obtain food and drink housing and clothing in quality and quantity." 
So both of them agree that the machine is unleashing extraordinary productive forces that have never been seen before, and the result will be a huge amount of very cheap cotton, but, of course, they disagree very profoundly on how that cotton and the wealth it represents should be shared between the people who own the factories and the people who do the work. Now that background of the factory inspires the first real robot, the first time the word robot is used in fiction, by this writer, Karel Čapek, a Czech writer, and he takes the word from the Czech word robota, which means sort of drudgery or slavery; it actually refers to a kind of serfdom that was common in the Austro-Hungarian Empire at the time. And he wrote this play, "Rossum's Universal Robots", or "R.U.R.", in 1921. And that's the first time the word robot appears in print, and through the translations it then becomes part of English and many other languages as well. Interestingly, R.U.R. is the name of the robot-making company at the heart of the play, and it's named after its founder, old Rossum, whose name in Czech is deliberately very close to the Czech word for reason, rozum. And so you can translate the play's title as meaning reason's universal serfs, and that image of the factory as a place where humans are subordinated to the demands of reason, and rationality, and efficiency, is something that's going to become an important part of the way robots are imagined, and the way in which, in many cases, they'll actually be used. So the play itself is kind of fascinating and weird and well worth reading. I've never seen it produced, but if you get the chance, I hope you'll go and see it. It's set in a future where Ure's vision of the totally mechanized factory has in effect come true, because the robots have taken over all the world's manual labor, and that's supposed to free humans to do more creative tasks. So this is an early American picture of a production of the play. It begins with an idealistic young woman arriving on this remote island where the factory is based, where all the robots in the world are made. And she represents something called the Humanity League, and she's come there to free the robots, to give them human rights, to make them paid, and give them leisure time, and so on. And she expects to be sort of chased off the island by the managers of the factory. But, in fact, they welcome her with open arms, not least because she appears to be the only woman on the island, and they're all a bit lonely, but they say, there's no way you can just upset our robots. So Domin, the rather unsubtly named manager of the factory, tells her there's nothing to worry about, 'cause the robots don't want freedom. They don't have any desires, they don't have anything, no point in paying them, they've got nothing to spend their money on. They have been built for one purpose only, which is to work, and nothing she can say to them is going to upset them. And initially that seems to be exactly what she finds. It's interesting that the robots here are biological rather than mechanical. So old Rossum, who founded this, actually discovered a process of making a form of artificial protoplasm, and they make the robots in vats, and they have nerves and muscle and so on, but they're always depicted, as in this stage scene here, as being more mechanical, to distinguish them from the people. But the play is kind of interesting, 'cause there are a number of quite funny scenes where robots are mistaken for people and people are mistaken for robots. 
So the fact is they're almost indistinguishable, but obviously in staging it, it's kind of hard to get that to work without making them look robotic. One of the many interesting things about them is that they're supplied in male and female versions, although they can't reproduce, they're only made in factories, and in fact their inability to reproduce becomes a major point of the play. But they have male and female robots because people wanted female robots to do traditional female jobs, like be secretaries and so on, so they made them for that reason alone. And Domin shows Helena that sex means nothing to them. There's no sign of any affection between them. So you can begin to see how a certain kind of perfect human is being created here. A certain set of standards are being imposed where affection, and passion, and emotion have been stripped out to make them into better workers. They don't need to be paid. They're never going to go on strike. They don't get tired. They last for 20 years and then they're scrapped and replaced with newer, fresher models. And the cost of the robots gets cheaper and cheaper. There's actually a great bit where Helena asks one of the designers, "Why don't you give them souls?" And he just says it'll push up costs, so not worth doing. And this is Domin's explanation of why the whole thing has begun. He hoped the robots would abolish the appalling social structure based on work and inequality: I wanted to turn the whole of mankind into an aristocracy of the world, an aristocracy nourished by millions of mechanical slaves, unrestricted, free, and consummated in man, and maybe more than man. And that sounds, I think, very like Marx's dream that people would be liberated from drudgery, liberated from inequality, that the vast productive forces will be at the service of everybody. And this poster is for a production that was done by the Works Progress Administration, the WPA, during the New Deal. And some people interpret the play as a kind of left-wing play, others saw it as an anti-Bolshevik satire, but certainly you can see a debt to Marx and Marxism in language like this. But it's also interesting that when the robots start to rebel, sorry, spoiler alert, and become increasingly dissatisfied with their role, Helena asks one of the other managers in the factory why they are still making them, given that they're clearly misbehaving. And she's told the shareholders won't hear of a reduction in production, since the world's governments and manufacturers are demanding more robots. So the remorseless logic of the marketplace, the thing that motivates Ure's vision in the first place, is also there. So the ghosts of Marx and Ure are both present in this text in kind of interesting ways. And the question of what it is that makes the robots rebel is kind of interesting. And these robots, for me, and there's lots more you can say about them, they have other descendants, are part of a trend which I think of as making people more like robots. And this becomes the reality of factories, particularly because of the work of this man, Frederick Winslow Taylor, who publishes this famous book, "The Principles of Scientific Management", just a few years before "Rossum's Universal Robots" is published. 
And as I'm sure you know, Taylor has all kinds of techniques for improving the efficiency of factories, the most famous of which, or notorious, depending on your perspective, is the time and motion study, where he would calculate exactly how long a particular task should take and then make sure that every single worker worked at the optimum speed. And so again, the pressure of work is cranked up enormously as a result of what's sometimes called Taylorisation. Taylor himself says, "In the past the man has been first. In the future, the system must be first." So the factory, its workers, its machines, and everything are increasingly seen as a whole. They will all work together, subject to exactly the same rules, working at exactly the same speed with the maximum possible efficiency to produce the highest possible output. And, of course, his near contemporary, Henry Ford, implements a great many of these ideas in the world's first production lines. And the production line then becomes, for many writers on the left, a symbol of what is sometimes called wage slavery, of the enslavement to the productive power that has been unleashed by the machines. One of the funniest and I think nicest expressions of that, if you don't know this film, I recommend it very highly, is Charlie Chaplin's "Modern Times", and Chaplin works on a production line like this. And in "R.U.R." there's a moment where the robots suffer from something called robot's cramp, which is the first sign that they're rebelling. And in this film, Chaplin has to do this action over and over again on the production line, and when he comes off the shift, he goes outside and he still can't stop. It's like he's still kind of shackled to the machine. It's amazingly prescient, and there's a great scene where the boss of the factory actually appears on the video screen in the toilet to tell people to get back to work. Long before Amazon was doing this kind of thing, Chaplin had foreseen it, and in this scene he's literally swallowed by the machine and becomes like a cog in the machine, almost chewed up and eaten by it. And in many ways, I'm sure Chaplin intended this primarily as an anticapitalist satire, thinking about the greed of American factory owners and the way they push their workers so hard in this inhumane way. But I think it's worth noting that Soviet factory managers read Taylor just as enthusiastically as their American counterparts, and that life on the Soviet production line would've been remarkably similar. Presumably you're supposed to be churning out more for the good of the proletariat and the future of the revolution, but the day-to-day pace of work wouldn't have been very different. And so, as Richard Theodore, an American economist and philosopher, put it: Marx famously said that the hand mill gives you society with the feudal lord, the steam mill society with the industrial capitalist, but actually steam power and automation give you society with the industrial manager, and whether he's working for the good of the party or the good of the shareholders doesn't actually make a lot of difference to the actual work and the way that it goes on. So that's one story of robots, and we can see all kinds of ways in which that comes up in science fiction. I want to look at a different backstory, because these are two stories that are often kind of tied together when people talk about robots, and I want to suggest that they're actually quite different and worth thinking about. 
And I want to think about the kind of automata that are built in the 18th century, going back to that period of Wollstonecraft and of "The Sandman" and the slave revolts, and thinking about what these automata mean. So industrial robots literally embody reason; that's their whole purpose, to be efficient, and in the world of the factory that Chaplin is satirizing, the god of efficiency has become the dominant force in everybody's life, forcing everybody to its pace. Chess-playing robots: this one was a famous fraud that was exposed later, but it's interesting how many people working on artificial intelligence and computing, right up to the present, have chosen chess as the perfect way to test whether a robot or a machine is intelligent, because it's seen to be an entirely rational game, there's no luck involved and so on. And so, of course, lots of philosophers have said, well, what makes humans human is our ability to reason; we are the thinking animal. We can reason in a way that other animals cannot, that's the distinctive feature, and that's one strand in the creation of artificial people. But it's interesting, when I started thinking about this, that all the science fiction robots that you actually remember and that you like are anything but anonymous and mass-produced. So this is Robby the Robot from "Forbidden Planet". Again, if time permitted, there's great kind of gender stuff in this: Robby is neither male nor female, he's super strong, hyper rational, but he's also a highly skilled dressmaker, and in this scene he's helping Cooky from the visiting ship make some traditional rocket bourbon as a cooking ingredient. So he straddles male and female roles in interesting ways. Great movie, if you haven't seen it. These guys I've talked about in an earlier lecture; these are the drones from "Silent Running". Sadly, Doug Trumbull, who directed this, just died very recently. But these are the first cute robots in the history of cinema. They have names, they have personalities, they make you laugh, you feel for them, and they are the kind of spiritual fathers of R2-D2 and BB-8 and all the other cute robots who've come after them. I don't think he needs any introduction, I've talked about him before, the ultimate cute robot in my book, but full of personality. And one of the many things I love about this film is that opening 40 minutes where there isn't a line of dialogue; the purity of silent cinema is there, and he's so expressive and so emotional without speaking at all, anything but an anonymous mass-produced machine from a factory. And of course, robots don't have to be good to be memorable. This is the human machine, the Maschinenmensch, from "Metropolis", Fritz Lang's famous 1927 movie, who is built by Rotwang, the mad inventor here, initially to allow him to recreate his dead lover. And then he's put to nefarious uses by the boss of Metropolis, to foment unrest and mislead the workers and so on. But an absolutely iconic, unforgettable figure in cinema history. And the androids, the replicants, as they're called, from "Blade Runner", very memorable even when they're at their most sinister; I'll come back to "Blade Runner". And who can forget Big Arnie, who teeters on the brink of seeming almost like a human on rare occasions in his films, never more so than when he is playing a robot. 
But he's a very memorable figure, actually, because he has a crisis of conscience and learns to understand why humans feel, in the second film, when he's kind of redeemed from his original function as a killing machine. So there's all these kinds of complex machines, and what they seem to have in common is emotions. And I've always been interested in how that story links to that other story of the factory and mass production and the anonymity and interchangeability: why the split? Well, one of the ways that this story is traditionally told is that it goes back to 18th century automata. So Jacques de Vaucanson is one of the most famous builders of these. He built an automaton flute player, which was enormously celebrated because you could actually see, apparently, the lungs moving as it blew air through the flute, and it could memorize several tunes, which was really remarkable. The one that really excited people's interest is the duck here. And the duck was remarkable because it looked like a duck, and it quacked like a duck, but what really got people was it seemed to eat and then to defecate like a duck. And that seemed almost too remarkable, and it was kind of a slight fraud as well. But this notion that machines could get more and more sophisticated, more and more like life, is often used to argue that there's a certain kind of vision of human beings as nothing more than machines, which comes out of the Enlightenment and is inspired by these automata. So these are among the most famous; these were made by the Jaquet-Droz, father and son, they are still there in Switzerland, still working, I've never been to see them, but apparently they are still operating, and they built a musician, a draughtsman, and a writer. And I want you to just kind of retain that information: flute players, musicians, writers, draughtsmen; none of these things are built to do factory work or indeed any kind of useful work. And that's the first glimmer that there's something about that other story that's not quite right. Now the key text that people point to when they talk about the idea that human beings are just machines is this one, by Julien Offray de La Mettrie, famous 18th century thinker, notorious atheist, very shocking and controversial in his day. He wrote this book called "L'Homme Machine", "The Man Machine". And he argues there's only one substance in the universe, which is matter. There's no spirit, there's no soul, there's no God, there's just matter; it exists in more or less perfect forms, and the more perfect it is, the more it resembles us. But it's all the same stuff. It's matter all the way down. And he says in this book, "It took Vaucanson more artistry to make his flautist than his duck. He would've needed even more to make a speaking machine, which can no longer be considered impossible." So speech is one of the many things that has been pointed to as uniquely human, that distinguishes us from the other animals, and La Mettrie is saying there's nothing special about it; all it needs is a new Vaucanson with better tools, better equipment. As machinery advances, everything is possible. We will end up making such machines. And I used to buy that argument, and I was talked out of it by Adelheid Voskuhl. Heidie is a friend of mine, I'm happy to say. And she wrote a very, very smart book about this, which made me see this whole question differently and solved that problem of where all those cute robots came from, for me at least. 
So this book, "Androids in the Enlightenment", I recommend very highly. I'm going to give you a very brief, simplistic summary of a couple of key points, but I think it's a very smart book. And she just pointed out what I just pointed out to you that the automata made as musicians, as writers, as artists, the automata that were made were made to explore the emotional and the cultural side of human nature. They're also handmade unique, artisan-built luxury objects. They're not made in factories and they're not made by machines. So one way of thinking about them is they are very elaborate adverts for people like the Jacquet-Droz who are watchmakers and Jacquet-Droz watches are made and sold today. So they're showing off human skill, human craft, and the abilities of human beings. So that's one interesting aspect of them. The more interesting thing which she argues is that they're asking a question, the automata, and the question is things like, what happens to you when you learn to play a musical instrument? So automata like this one, which she writes about in some detail, does the learner become more cultured, more sensitive, in effect, more human as they learn, as they master cultural skills? So if we give people more education in the arts and things like that, are we making better people? So that notion of making better people has another rather different meaning to the kind of one that the robot makers are making in British textile factories. And again, the context of this, as I think is kind of very clear, is 18th century, the age of revolutions, this is the French Declaration of the Rights of Man, directly inspired by the American Declaration of Independence, and those debates were who counts as human and how we might extend the boundaries of human to include more people? Is the background to automata building, and it's the background, of course, to the beginnings of democracy as a system of government. And so the handmade automata in one sense are asking how can we mass produce the kind of people who could live in a democracy? The kind of people who are able to think and feel, because one of the traditional arguments of our aristocratic government is that only the best kind of people should have power and authority. They have the right feelings, the right sensibilities, the right education. One of the things the automata are asking is can we make more people like that through practices like learning music and other kinds of education? And that helps me understand one of my favorite "Star Trek" characters, among many, who is Mr. Data. Now I imagine many of you've come across, he's a character introduced in "The Next Generation", and he's played by Brent Spiner. He's one of only a tiny handful of humanoid robots who are made by a genius called Dr. Noonien Soong, who Brent Spiner also plays in some episodes. There are very few of them and the secret that their making has been lost. And again, they immediately began to remind me once I'd read Heidie's argument of those handmade luxury automata. And in fact, it goes deeper than that because some of the best episodes, I've listed a few of my favorites here, are about Data's attempts to become more fully human. So Data's day is full of comic misunderstandings about the forthcoming wedding of two of the humans, and so on. 
"The Measure of a Man" is great; it's actually about a legal trial where they have to decide whether Data is equipment that the Star Trek research, sorry, Starfleet research team can take away and disassemble to find out how he was made, so they can make more of them. And there's a great scene in that episode where Guinan, who is played by Whoopi Goldberg, describes what the Federation are hoping to do, when they can build a whole army of Datas, as slavery. And so that issue of slavery constantly bubbles away whenever you start talking about robots. And then "In Theory" is about his first attempt at a romantic relationship, which, again, I'm not going to spoil if you haven't seen it, but it's actually very poignant, the way he tries to come to terms with what human emotions really mean in practice and how you might try and relate to another human being. And he's very cultured. So he's constantly learning music, he plays, he performs in several of the episodes, he reads novels, he paints, he performs Shakespeare. He has Patrick Stewart to teach him how to act Shakespeare, which is about as good a teacher as you can imagine. And so he's, I think, very close kin to the Enlightenment automata that Voskuhl is writing about in her book. And I think there's another kind of poignant descent here: as Data struggles to understand human emotions, he tries to develop his own, he tries to make sense of the world of humans and to become more human, and he takes culture and the practices of culture as his route into understanding emotions and becoming more human. And it's emotions, not rationality. He has no problem with reason; reason is easy for him, he's a computer. In fact, he's far more rational, and thinks faster and more clearly and more accurately, than any of the humans on the ship. It's the emotion, that's the bit he identifies as what he's missing. And it's worth thinking about a couple of the examples I used in earlier lectures. So this is E.T.A. Hoffmann's story, "The Sandman", which I talked about in the lecture on women, and the hero, Nathaniel, falls in love with his tutor Spalanzani's daughter, Olympia, who is tall, very slim, perfectly proportioned, and gorgeously dressed. And the first time he goes to a social event with her, her dancing is perfect, but has a disconcerting exactitude of rhythm, and her conversation is distinctly limited. All she ever says is, ah, ah, ah, and of course he says, ah, you're such a good listener, you really understand me, you're great, the perfect woman. And of course, it turns out she's an automaton, and the excessive perfection of her performance is one of the things that gives her away, and Hoffmann has kind of fun with that in the story. And it's interesting that in one of the Data episodes his violin playing is criticized for being a little too perfect, and he's really disconcerted by that as a criticism. I also talked about "Tomorrow's Eve", "L'Ève Future", by Villiers de l'Isle-Adam; Hadaly, the android at the center of this story, again, is unique, she's the only one in the world and is presumably fantastically expensive. So again, she's clearly kind of kin to those 18th century androids. So you can trace, as it were, a different lineage: there's the factory machines, the Ure and Marx vision of the factory, whether that's good or bad or whatever, and then the Enlightenment android seems to lead us down a different path, with a different set of stories and some different conclusions. So what do androids dream of? 
Some of you may recognize the allusion here. I want to talk first of all about, I think, the first sympathetic, personality-rich android in history, the first robot who does this, who appears in pulp science fiction in 1939; the stories are written by Eando Binder, and he's called Adam Link. And the first story is called "I, Robot", which may seem vaguely familiar to you; a 19 year old called Isaac Asimov read the story and thought, oh, I could do that, and it inspired him to write his first robot story. So Link comes first in several interesting ways. The stories are quite popular and they're rebooted several times. So this is actually from the 1950s, when the story is redone as what we would now call a graphic novel or a comic book, but he also appears briefly on TV later. And interestingly, the stories are told by him, and they describe his confusion when he is born, 'cause he's born with a blank mind and he has to learn everything from scratch, and it's quite moving hearing him try to make sense of it. And initially his capabilities are so limited that his inventor, Dr. Link, thinks he's going to have to scrap him because he's a failure, he hasn't worked. But actually the breakthrough, when the doctor realizes that he does work, is when Adam shows that he understands pain. The doctor's dog jumps up at him, and he pushes it away too hard and the dog squeals, and Adam immediately puts it down because he'd seen the doctor tread on its tail a couple of days before, heard the squeal, and understood that it feels pain and he shouldn't hurt it. And as Adam says in the story, "Dr. Link tells me he let out a cry of pure triumph. He knew at a stroke I had memory, he knew I was not a wanton monster, he knew I had a thinking organ, and a first class one." But it's the appreciation of another's mind and the level of empathy for another's feeling that convinces Link that Adam is really becoming a person. And he tells Adam, "You are not merely a thinking robot, a metal man, you are life, a new kind of life. You can be trained to think, to reason, to perform. In the future, your kind can be of inestimable aid to man and his civilization, and you are the first of your kind." And he is super strong and is clearly going to be a great servant, and perhaps a slave. It doesn't work out like that, because Dr. Link is killed in an accident, and Adam is falsely accused of having murdered him, because he rushes to his aid and is caught with a bloody anvil in his hands and the doctor dead at his feet. So he goes off to explore the world, goes on the run. He quickly finds himself being pursued by an angry lynch mob. He goes back to the laboratory where he used to be safe, and he finds a copy of Frankenstein on Link's desk and he reads it. And he's finally able to understand the mob's fears, why they are frightened of him, why they want to destroy him. And so he knows he could escape, even though the lab is surrounded, and he knows he's stronger than the humans, but he realizes he would have to hurt them, he might even have to kill them, to escape. And so he decides he's just going to switch himself off to avoid harming any humans. And the story ends with his last words: he's about to prove "that I have the very feelings you are so sure I lack." Now this story was so successful that, of course, he's reactivated very quickly and appears in a whole series of sequels which run over many, many years, and he has all kinds of other adventures. 
And so he becomes very popular, almost completely forgotten now, but he was really the first named robot who had a kind of cult following. This one I particularly like, very briefly; this is from the second story, "Adam Link in Business", here. It opens with him being teased by a bullish character in a nightclub who waves a can opener at him to tease him about the fact that he's just a tin man. So he's acquitted of his creator's murder, he's allowed to develop, and he eventually acquires emotions, and is eventually declared legally a human being because of his feelings. He's got great intellect, of course, he's super rational. So he sets up a consulting business, working for all kinds of people, solving problems. This becomes so successful that he finally needs a secretary. He hires a very attractive young woman called Kay, who, being a good 1940s secretary, falls head over heels in love with him straight away. And Adam's reaction to this is kind of interesting. He describes himself in this story as neuter, since he has no biological body, which even takes me back to Robby and his sexlessness in "Forbidden Planet". But Kay's behavior persuades him that he was "a man in mind, not a woman." And the reason he says that is, "I had begun life under Dr. Link purely from the man's viewpoint. That is, I had come to think of and see all things in that peculiar way human males do as distinguished from human females." And I think, maybe I'm reading a little too much into this, but it seems to me that he's recognizing the idea that gender is a performance. You learn to be male or female by emulating models of masculinity or femininity. It's not biological, he hasn't got any biology. He's learned this by association with males and has come to see the world as a male, and as a result, he longs to take Kay in arms of flesh and blood and know the secret joys of human love. "I hated my metal body now, despite all its strength and power." But he realizes that even if he were able to do that, it would be a terrible thing to do, because Kay's fiance, Jack, who got her the job in the first place, would be devastated. And again, it's the thought of hurting a human being that makes it impossible for him to act on his feelings. So he disappears at the end of the story; he leaves them a note: "I'm going away then, and I will not come back until Adam Link, the robot, the machine, is truly a machine again." So the human emotions become a kind of weakness that he has to purge from himself, but they're very much the key to his humanity. And I'm going to finish with a couple of movies that I particularly like, which I think explore these ideas. I suspect they'll be familiar to many of you. This is the first one, "Blade Runner", 1982, Ridley Scott's film, which is based on Philip K. Dick's novel, "Do Androids Dream of Electric Sheep?" If you only know the film and haven't read the novel, I would recommend it very highly. It's rich, complicated, subtle, and in some cases too complicated and subtle. They make a very interesting pair, though, to see them side by side. And the premise of this, if you don't know it, is that very human-like androids, indistinguishable from humans in any way, are used as slave labor in colonizing the outer planets; they rebel, they try to get back onto earth, and they have to be detected. And it turns out that physically there's nothing to distinguish them. So they're physically indistinguishable. 
The only clue is their emotions are slightly different and their emotional responses are slightly flawed. So there's a test, the Voight-Kampff test, which exists in the book and the film, which distinguishes real and fake human beings according to their emotional responses. And it's interesting that in the book, the corporation that makes the replicants wants the test abolished, because they think that it would accidentally identify people with what it calls flattened emotional affect as being replicants; it would falsely identify them and they would be killed. So people with, the term didn't exist in 1968, but an autism spectrum condition, might be picked up as not being human because their emotional responses were not neurotypical. And so that fear is used to try and persuade the Blade Runner, the bounty hunter in the book, Deckard, that the test should be abandoned. It turns out there's a lot more to their plan than they let on. But in the book, there's a character called Garland, who has no counterpart in the film, who is an android cop, one of a number; there's a kind of shadow police department in San Francisco staffed entirely by androids who are keeping each other safe and setting up a kind of android underground as they help each other. And he says that the androids lack empathy, and so have little sense of real solidarity. They're not good at helping each other. "It would seem we lack a specific talent you humans possess. I believe it's called empathy." And several of the android characters in the book claim to feel themselves to be inferior to humans because of this. And empathy is a very important theme in the book, and it comes up in the film as well. But actually the evidence of the book, and it's there in the film too, rather contradicts that. So in this scene, Pris and Roy, two of the escaped replicants, are helping each other, sheltering each other. They work together as a group to try and help each other. And we see similar patterns in the book. So the idea that they don't possess empathy is actually problematic and is questioned by book and film. And it's interesting that Deckard himself, and this is done a little heavy-handedly by Ridley Scott in the film, seems to lack proper human emotional responses at times. So his boss in the film describes him as a goddamned one-man slaughterhouse, which is hardly the most noble vision of humans and humanity. And there are several replicants in the book and the film who don't know that they're replicants. And so it raises the question: if something thinks it's human, and it looks human, and it acts human, how can we say that it isn't human? And then of course, Deckard as the ruthless bounty hunter raises the other question: if something acts like a heartless killing machine, in what sense is it still human? And those questions are the things that the film and the book both play with, I think in some very rich and interesting ways. But the Voight-Kampff test raises one of the most interesting things about this, which is the idea of passing, the idea of the test: how would you separate one from another? If you made robots more and more human, and humans more and more robotic, how would the boundary ever be maintained? And that's the key idea at the center of this film, which, if you don't know it, I recommend hugely: "Ex Machina", 2014. 
So there's a wealthy genius called Nathan Bateman, and again, Nathan, this is such a smart film that I strongly suspect the writer was thinking of Nathaniel from "The Sandman" when he gave him that name, I may be mistaken, but it wouldn't surprise me. He owns the world's most successful search engine and is obscenely wealthy; doesn't remind us of anyone in real life. And he summons one of his humble employees, Caleb Smith, under a false pretext, he's supposed to have won a prize, but this is a setup, and he's brought to this kind of isolated high-tech home. And he meets Ava, who is a robot played by Alicia Vikander, an amazing performance apart from anything else. And Nathan explains he wants to see if she can pass the Turing Test. Now the Turing Test, I'm sure you know, was invented by the British mathematician and computer pioneer, Alan Turing. And the way it's usually done is that if you had a person and a robot, or a person and a computer, in another room, and you could only communicate via a keyboard or whatever, and you couldn't tell which was human and which was machine, you'd have to conclude the machine was thinking; there would be no difference. In the original version of the test, which Turing calls the imitation game, it's a man and a woman on the other end of the line, and you have to tell who's male and who's female. And Turing plays with the notion that if they were able to fool you, and tease you, and make jokes, and throw you off the scent, you'd have to say there's nothing different about the way men and women think. And then he extends that idea to say that only by their responses can we judge whether things are thinking, and that's how we would know whether a computer is thinking. Of course, here there's no mystery about the fact that Ava is a machine. It's not concealed in any way. She's got exposed machinery and parts, very deliberately, although her face is very, very human. And so Nathan actually explains to Caleb, "The real test is to show you that she's a robot and then see if you can still feel she has consciousness." So he's kind of rewritten the test slightly, but without giving too much of the story away, the question of testing and passing, and whether you are really experiencing emotions or whether you are merely simulating them, these questions are explored in very complex ways. I have to say that because you sort of see the whole film and the world through the eyes of the two male protagonists, neither of whom are going to win prizes for services to feminism, the film has a slightly misogynistic gloss which some people find off-putting, and I can understand that, but actually, if you can get past that and understand what's going on below the surface, I think it's actually a very complex and rich and interesting film. There are many great scenes in this, but I'll share just this one. Ava asks what will happen if she fails the test. Caleb: "Ava, I don't know the answer to your question. It's not up to me." Ava: "Why is it up to anyone? Do you have people who test you and might switch you off?" "No, I don't." "Then why do I?" And this movie asks that question, which I think comes up in a lot of the lectures that I've talked about and a lot of science fiction, of not just how do we know what's human and what isn't, but who gets to set the test, who gets to evaluate it, and what happens to people who are deemed to have failed the test? Those become the kind of key questions in defining and applying notions of human nature. 
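As an illustrative aside, here is a minimal sketch in Python of the imitation-game setup Turing described, just to make its structure concrete. Everything in it is hypothetical scaffolding rather than any real implementation: respond_human and respond_machine stand in for a person at a keyboard and the program under test, and naive_judge simply guesses at random.

```python
import random

def respond_human(prompt: str) -> str:
    # Placeholder: in a real imitation game this would be a person typing replies.
    return "I'd rather not say."

def respond_machine(prompt: str) -> str:
    # Placeholder: in a real game this would be the program under test.
    return "I'd rather not say."

def imitation_game(questions, judge_guess):
    """Run one round of a Turing-style test: hide the two players behind the
    labels 'A' and 'B', show the judge only their typed answers, and ask the
    judge to name the machine. Returns True if the judge got it right."""
    funcs = [respond_human, respond_machine]
    random.shuffle(funcs)                      # hide who is behind which label
    players = dict(zip(["A", "B"], funcs))

    # The judge's only evidence is a transcript of text answers per label.
    transcript = {label: [player(q) for q in questions]
                  for label, player in players.items()}
    guess = judge_guess(transcript)

    machine_label = next(label for label, player in players.items()
                         if player is respond_machine)
    return guess == machine_label

if __name__ == "__main__":
    # A judge with no strategy better than chance is right about half the time,
    # which is exactly the situation in which, Turing argues, we have no
    # grounds left for denying that the machine thinks.
    naive_judge = lambda transcript: random.choice(list(transcript))
    questions = ["Do you have people who test you and might switch you off?"]
    print("Machine identified:", imitation_game(questions, naive_judge))
```

The point of the sketch is purely structural: nothing in it ever inspects what the players are made of, only what they say, which is exactly the move described above, judging thinking, or in "Ex Machina" consciousness, purely by responses.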
So I'm conscious of having raised a lot more questions than I've answered. I'm going to leave the last word with the ultimate embodiment of rationality here, Mr. Spock, who more than anybody else is responsible for my passion for science fiction. He finally meets Data in an episode of "The Next Generation" called "Unification", and he tells Data that most Vulcans would long to be him: he's their ideal, he's free from emotion, and yet he's spent all his life trying to acquire what most Vulcans would regard as a weakness, something they're trying to purge themselves of, so what's with that? And Data responds that Spock is half human, and asks whether he has any doubts about having turned his back on his human side, about having purged himself of the human emotions which Data sees as so precious and so important. And Spock says that he has no regrets, which Data observes is a human expression. And even if you've never seen a single episode of "Star Trek", you'll still be able to guess what Spock's one-word response to Data's observation is: "Fascinating." And I hope that some of the questions we've raised this evening strike you the same way. Thank you very much for listening. (audience clapping) Now we're just going to see whether anybody has sent in any questions online while I've been talking. Ooh, boy. Well, depending on which science fiction writer or movie maker you are, it would probably have to kill all the humans to prove that it was artificially intelligent. But as I said, the really interesting question is whether or not it has empathy and emotion. And one of the many reasons I like "Ex Machina" is that you have no doubt about consciousness and no doubt about intelligence, but you're left, again without spoiling the ending, with serious doubt about empathy. And we saw in earlier lectures that it's been argued by some primatologists, for example, that empathy is actually the key thing that makes humans human: the ability to put yourself in someone else's place and see the world the way they see it, and feel the world the way they feel it. And that's something I think we're a very long way from machines being able to do. But it's such a fascinating question, and the degree to which you can simulate emotions and responses and fool people opens up all those questions about passing and testing and who gets to decide and so on, which I find so interesting. In a sense, I think intelligence is probably the easy one; it's emotion and empathy that will be the real challenge for any machines. And maybe we never want machines to have those feelings, but I suspect they're going to sneak up on us the more intelligent machines get, which, again, raises a much bigger question of where empathy comes from, but I don't begin to have the qualifications to answer that one. Does anybody else have a question at all? If you want to stick a hand up, there's a microphone over there.
- I think it's funny that we continue to put the word artificial in front of intelligent. Surely, we just want to know it's intelligent. And do you think that's a step that has to be taken?
- Yeah, it's interesting, isn't it? I mean, I suppose it's just supposed to distinguish supposedly naturally occurring forms of intelligence like us from other kinds of intelligence.
But one of the many questions it raises is this: suppose robots, or computers, actually developed their own complete self-consciousness and developed their own culture, would we recognize that as intelligent or cultural at all? And this is a topic we'll come back to in five weeks' time for the final lecture, which is about aliens. Would we recognize aliens if we saw one? If they were really alien, so unlike us that they thought and felt utterly differently, would we have any basis on which to communicate, or even, as I say, to perceive intelligence in the first place? As for the word artificial, the classic definition is that artificial intelligence is teaching computers to do badly what people do well. So the word artificial has a certain purchase there, although of course the smartest people in AI research are using computers to do things that people struggle with, rather than trying to emulate the things that people already do successfully. So there is something in the word artificial that is important, but the boundaries are very porous, absolutely. Yeah, I mean, it's arbitrary what we call an automaton or a robot; there are various labels, and they carry different meanings, and old meanings get carried forward. Self-acting machine is an earlier one, I think, but it's interesting that robot has this specific connotation of servitude and serfdom, which I think gives a kind of poignancy and a focus to those arguments about automation and self-acting machinery, which is there in implication in the factory system but isn't really brought out until Čapek coins that word and uses it in that way. So I think it has a particular meaning, which is interesting. But I mean, they're not really robots, they're androids in some sense, 'cause they're biological and mechanical, so you get into how many science fiction nerds can dance on the head of a pin? An infinite number, as we try and split hairs over the meanings of words.
- So in the lecture, you talked about how science fiction authors imagine robots as trying to empathize with humanity. I was wondering about the reverse: the ways in which humans often empathize with robots, and what that means for how humans see robots, or how we define humanity and see ourselves as separate from robots.
- Yes, yeah, that's a really good question, because one of the things that prompts the robot revolt in "R.U.R." is that her sympathy for the robots leads her to persuade one of the researchers to give them more feelings. He's trying to give them a sense of pain to stop them damaging themselves by mistake; they don't even have an instinct for self-preservation originally, and that's expensive. And then she persuades him to make them more sensitive, and they make a robot called Radius, who is, I think, the first robot in the play to have a name, and who is more intelligent than the others. She wants him to learn, to work in the library and read the philosophers, and then he will lead his people to freedom as friends and allies of humanity. And, again, spoiler alert, it doesn't work out like that. But my favorite bit in "Do Androids Dream of Electric Sheep?"
is when Deckard begins to suspect that he might be an android, and he gives himself the Voight-Kampff test and passes: he's not an android, which is not clear in the film. But he says he can't do his job anymore because he has begun to empathize with the androids, or some androids at least, and particularly female androids. So there are all kinds of really interesting things about gender going on in the book as well. And again, Philip K. Dick is another person who wouldn't win any awards for services to feminism, I'm afraid, but it is a really interesting moment in the book when empathy betrays him: he can't do his job anymore because he has feelings he shouldn't have, if he's going to be a goddamned one-man slaughterhouse, as they put it in the film. Thank you so much for listening. (audience clapping)