I think, therefore I am? – lecture by Jon Riding

Insight Lecture 9th Oct 2017

Jon Riding.

Good evening, ladies and gentlemen. This evening I am going to give you a glimpse of a new world. It is a world in which there is scope for much relief of human suffering and where more of humanity might see hopes and dreams fulfilled. But it is equally a world which might easily become a dystopian nightmare. It is a world in which humankind is no longer the only intelligent being. The good news is that it isn’t real, at least not yet. But the possibility that it might come to be is stronger than many of us might imagine. Indeed, some would say it is already creeping up on us unawares.

Human beings are, so far as we are able to reliably assess, unique. We are by no means unique in our vitality; we are part of a world full of countless living species, but even amongst those we think of as ‘higher’ animals, such as our close relatives the great apes, and perhaps dolphins and whales, humankind seems to be the only species on the planet that has developed the capacity for rational thought. How we have done this remains a mystery. We can point to physiological developments, which allowed us to grow brains with the capacity for reason and language (these two things seem to be very closely linked) and social developments which provided the environment for these attributes to flourish, but quite how this has all come together to make us what we are remains unclear.

This very uniqueness has contributed to human beings seeing themselves as occupying a privileged place within the natural world. We may not be the strongest or fastest (at least individually) but collectively we are very definitely ‘top dog’ and the world is indeed our oyster.

This idea of our place in the world is reflected in Christian understandings of what it means to be human, where mankind is sometimes seen as the pinnacle of creation and given dominion over all the other creatures. Whether or not we are comfortable with such a view the fact remains that it is this kind of thinking that has helped to shape how we imagine our place in the world.

The title of this talk is a well-known quotation from the work of the French philosopher René Descartes. Descartes held that human beings were set apart from animals by virtue of possessing reason or intelligence, and championed the idea that as human beings we have free will within the created order. These ideas have been a great comfort to humankind over the centuries. Our sense of who and what we are is very much founded upon being intelligent and reasoning beings who are able, by virtue of these capacities, to exercise free will in the choices we make.

We have used this reason and free will to very good effect, at least for ourselves. We have learnt how to harness the world around us to serve us. We have taken wild grains from the field and over millennia have learnt to adapt those grains to increase their yield. We have domesticated wild animals, turning wolves into companionable dogs and wild cattle into cows with higher milk yields, and so forth. We exploit the resources of our world to our benefit and, generally speaking, we don’t worry too much about the outcomes. And all of this is possible because of who we are: reasoning beings with the free will to transform the world around us.

Unfortunately, this comfortable scenario is beginning to come under threat. In a world where machines are learning to drive cars, diagnose skin cancers more accurately than skilled oncologists and even translate from one human language to another, it is beginning to look as though there may soon be more than one kind of ‘thinking being’ in the world. But is that really the case? What is it that might qualify a system like one of these as ‘intelligent’?

One of the nice things about living in the lovely town of Sherborne is that you never quite know what you might discover about the place and what unexpected connections it might have. And it happens that Sherborne has a particular connection with this talk tonight.

Some of you will, no doubt, have seen the film ‘The Imitation Game’, filmed in part here in Sherborne, which told the story of Alan Turing’s contribution to breaking the Enigma codes at Bletchley Park during the Second World War. The film’s title was taken from a paper written by Alan Turing in 1950 which opens with the words: “I propose to consider the question, ‘Can machines think?’” He then went on to describe what has become known as the Turing Test, although in the paper Turing called it the Imitation Game. The game Turing proposed was to provide a computer with the means to communicate with a person. Turing imagined this might be done using a teletype machine which the human would use to type questions or statements for the computer and via which the computer might print its responses. If it proved difficult or impossible for the human to be sure that the replies appearing on the teleprinter were being generated by a computer, and not by another person in a different room, one might conclude that for all practical purposes the machine was ‘thinking’. Recognising that ‘thinking’ is difficult to define, Turing chose to recast his question as another: “Can we imagine a computer which would do well in the imitation game?”

The Turing Test is a neat way of addressing the question ‘Can machines think?’ It levels the playing field by providing both parties with the same means of communication – the teleprinter – and it also focusses on a key measure for the demonstration of intelligence – language.

Language is central to learning and reason. It is primarily by language that we communicate each day, express our thoughts, share our discoveries and learn from the wisdom of others. Amongst those engaged in research into artificial intelligences (or AI) the task of understanding and translating human language is considered to be one of a small group of problems known as “AI complete” problems. By this they mean that a comprehensive artificial solution to understanding natural language would need to be as intelligent as a human being.

Interestingly, machines have begun to make significant inroads not only into our day to day lives but also into our conversation. The fact that our smartphones include voice-responsive digital assistants such as Siri and Cortana shows us how far we have come from Turing’s teleprinter and demonstrates how accustomed we have become to relying on the knowledge provided by machines in our daily lives. Some of you may be familiar with Skype, Microsoft’s video calling system. But did you know that Skype can now translate conversations in real time when the participants do not share a language? But are these systems really intelligent? Anyone who has wrestled with the likes of Siri or Cortana can testify to how well they are able to understand spoken questions and how helpful their answers are, but whether what they do can be described as thinking is a different question.

Some thirty years after Turing proposed his famous test, the philosopher John Searle wrote a paper entitled Minds, Brains, and Programs. In this paper he proposed a thought experiment which has become known as The Chinese Room. It begins with a premise:

Suppose that Artificial Intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. The computer takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are engaging with another Chinese-speaking human being.

The question Searle asks is: does the machine literally “understand” Chinese? Or is it merely simulating the ability to understand Chinese?

Suppose you or I are placed in a closed room and given a book in which the same computer program is written out (in English) as a set of instructions. We have paper, pencils, rubbers, and filing cabinets full of information about Chinese. We could receive Chinese characters through a slot in the door and, by following the program’s instructions, produce Chinese characters in response. If the computer passed the Turing test this way, says Searle, it follows that you or I would pass it too, simply by slavishly following the instructions and without having the least idea what the messages going back and forth might mean.

Searle suggests that there is no essential difference between the role of the computer and the role played by you or me in the experiment. Each simply follows a program, step-by-step, producing an output. And it is this behaviour which is then interpreted as demonstrating intelligent conversation. We, however, would not have understood the conversation (unless we happened to speak Chinese). Therefore, Searle argued, it follows that the computer would not be able to understand the conversation either and that, without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking” and, since it does not think, it does not have a “mind” in anything like the normal sense of the word.

This is a helpful observation because it highlights an important distinction. The term Artificial Intelligence is often used quite loosely. Many so-called ‘AI’ systems are really ‘expert systems’. In this context the machine is given a lot of information which it searches to generate an answer to a question. We might say that the machine knows all the answers. This isn’t really any different from an ordinary computer program. The machine is given a set of information and instructions as to how to use that information to answer questions about it or to transform it in some way. True AI is different. For a system to be truly ‘intelligent’ it needs to be able to learn from the world around it. That world may be limited to a set of example questions and answers, but the important distinction is that the machine is not told how to generate an answer to a given question. Instead it is given a series of questions, and the answers to those questions. From these pairs of questions and answers it learns how to answer other similar questions. This isn’t so much an expert system as a learning machine.
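To make the distinction concrete, here is a minimal sketch in Python. It is my own illustration, not a description of any particular product: the ‘expert system’ is simply handed its answers as hand-written rules, while the ‘learning machine’ is given example question-and-answer pairs and works out for itself how to answer new, similar questions.

```python
# A minimal, illustrative sketch only (no real system works exactly like this).
# The expert system is given its answers; the learning machine is given
# example question/answer pairs and infers how to answer new questions.

from collections import Counter

# --- Expert system: every answer is written in by hand ----------------------
RULES = {
    "what is the capital of france": "Paris",
    "how many days are in a week": "Seven",
}

def expert_answer(question: str) -> str:
    key = question.lower().rstrip("?").strip()
    return RULES.get(key, "I do not know.")

# --- Learning machine: never sees a rule, only examples ---------------------
def words(text: str) -> Counter:
    """Break a question into a bag of lower-case words."""
    return Counter(text.lower().replace("?", "").split())

class NearestExampleLearner:
    """Answers a new question by finding the most similar training question
    (simple word-overlap similarity) and returning that question's answer."""

    def __init__(self):
        self.examples = []  # list of (question_words, answer) pairs

    def learn(self, question: str, answer: str) -> None:
        self.examples.append((words(question), answer))

    def answer(self, question: str) -> str:
        asked = words(question)
        if not self.examples:
            return "I do not know."
        _, best_answer = max(self.examples,
                             key=lambda ex: sum((ex[0] & asked).values()))
        return best_answer

if __name__ == "__main__":
    learner = NearestExampleLearner()
    learner.learn("What is the capital of France?", "Paris")
    learner.learn("What is the capital of Italy?", "Rome")
    print(expert_answer("What is the capital of France?"))  # -> Paris (looked up)
    print(learner.answer("Tell me the capital of Italy"))   # -> Rome (inferred from examples)
```

The point of the toy is only this: nobody ever told the learner how to answer “Tell me the capital of Italy”; it generalised from the examples it was shown.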

Now this may sound pretty esoteric, but systems like this are a lot more common than you might imagine and can be found in some unexpected places. Some of you will already know that my day job for much of the last thirty years has been working with the Bible Societies supporting global Bible translation. When people ask me what my role is I usually say that I am a translation consultant. This is perfectly true as far as it goes, I spend a lot of time consulting with translation teams all over the world about their translations, and very fascinating it is too. But a better description of my work would be that I build learning machines. The task of the team I lead is to develop computer systems which can do helpful things for Bible translators. And when I say Bible translators I mean all Bible translators, working in any one of the 7,000 active languages in the world. To do this by supplying detailed information about each language is impractical. In most cases the information simply doesn’t exist in a form we could use. Our focus is on building systems that, given examples of the text of a language, can learn about that language for themselves and, as they do so, apply that knowledge to assist the translator. In other words, these are systems which learn from their environment.

This is the same kind of technology which drives self-driving cars and, increasingly, things like Google’s language translation systems and IBM Watson’s cancer diagnosis system, to mention just a few. Such a system is not programmed as such but is instead given the ability to learn from the world around it. The question arises: how similar is this kind of thing to human learning? The honest answer is that we still don’t really know, but there are some interesting indications that human beings and state-of-the-art AI may share some cognitive similarities.

Advances in neuroscience, particularly non-invasive scanning of the brain, seem to suggest that there are relatively few fundamental processes involved in cognition. It seems increasingly likely that our ability to think is founded upon a relatively simple mechanism. Our brains seem to be organised into billions of stacks of neurons, each designed to recognise a particular thing. So, for example, when our brains see the letter [“A”] the business of recognising it involves neuron stacks that recognise the stroke of an edge, [ | ], and more stacks that recognise a rotation where | becomes [ / \ or – ]. More stacks recognise that / \ and – have appeared in a particular relationship to one another and we conclude that we are looking at a letter [ A ]. In the human being this process is massively parallel and also hierarchical: the stacks that recognise rotation or a particular coincidence of strokes can only operate once the stroke has been recognised. These synergies are honed and trained over many years of life experience.
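The hierarchy can be caricatured in a few lines of code. The toy sketch below is purely illustrative and makes no claim to model real neurons: low-level ‘detectors’ for individual strokes feed a higher-level detector that only fires when those strokes appear in the right relationship, which is roughly the arrangement of stacks just described.

```python
# A toy illustration (my own, not a model of the brain or of any real system):
# simple detector functions for strokes feed a higher-level detector for the
# letter A, mirroring the stacked, hierarchical recognition described above.

# A 5x5 picture of a letter, drawn with # for ink and . for background.
LETTER = [
    "..#..",
    ".#.#.",
    "#####",
    "#...#",
    "#...#",
]

def has_left_diagonal(img) -> bool:
    """Lowest level: detect a / stroke rising towards the top centre."""
    return img[1][1] == "#" and img[0][2] == "#"

def has_right_diagonal(img) -> bool:
    """Lowest level: detect a \\ stroke falling from the top centre."""
    return img[0][2] == "#" and img[1][3] == "#"

def has_crossbar(img) -> bool:
    """Lowest level: detect a horizontal - stroke across the middle row."""
    return all(cell == "#" for cell in img[2])

def looks_like_A(img) -> bool:
    """Higher level: fires only when the lower-level detectors fire together
    in the right relationship, just as the 'stacks' above must."""
    return has_left_diagonal(img) and has_right_diagonal(img) and has_crossbar(img)

print(looks_like_A(LETTER))  # True
```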

Systems like Apple’s Siri work in a similar fashion. The process that learns to recognise a particular thing imitates the neuron stacks in the human brain. For the human being this learning takes place over many years and is continually tuned and reinforced by the experiences of day to day living. In the machine the life experience of the human being is typically mimicked by using a huge number of examples to train the machine for its task. In other words, both learn from experience and both are shaped by the realities they encounter. The difference is that the human being learns by encounter and relationships over time (perhaps a lifetime) whereas the machine learns from whatever data it is presented with or can find. The similarity is that, within the field for which a machine is trained, it can be pretty hard to tell the difference between the machine’s behaviour and a human being’s behaviour.
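The idea of life experience being mimicked by a huge number of training examples can also be sketched very simply. The example below is mine and entirely invented: a single artificial neuron is shown thousands of labelled points and nudges its internal weights after each one, so that its eventual behaviour is tuned by the examples it has seen rather than written in by a programmer.

```python
# A minimal sketch of 'learning from experience' as a training loop.
# The task and data are invented purely for illustration.

import random

random.seed(0)

def make_example():
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    return (x, y), (1 if y > x else 0)   # label: is the point above the line y = x?

# 'Life experience' for the machine: a large pile of labelled examples.
training_data = [make_example() for _ in range(5000)]

# One artificial neuron: two weights and a bias, all initially ignorant.
w1, w2, bias = 0.0, 0.0, 0.0
learning_rate = 0.1

for (x, y), label in training_data:
    prediction = 1 if (w1 * x + w2 * y + bias) > 0 else 0
    error = label - prediction            # -1, 0 or +1
    # Nudge the weights a little after every example it gets wrong.
    w1 += learning_rate * error * x
    w2 += learning_rate * error * y
    bias += learning_rate * error

# After training, test it on points it has never seen.
tests = [make_example() for _ in range(1000)]
correct = sum(1 for (x, y), label in tests
              if (1 if (w1 * x + w2 * y + bias) > 0 else 0) == label)
print(f"accuracy on unseen points: {correct / len(tests):.0%}")
```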

So where does this leave us with respect to where we began, Descartes’s maxim “I think, therefore I am”? It seems that machines that can fool human beings into believing they are interacting with another human being may well qualify as ‘thinking’ machines, particularly if they are able to learn from their ongoing experience and adjust their behaviour accordingly. (Have you ever used a ‘chat’ window on a website to talk to customer services? If so, there’s an increasing chance you won’t be talking to a person).

But this leaves us with the other half of Descartes’s maxim, “I think, therefore I am”.

Machines may be well on the way to passing the Turing test, but the ability to think or reason is only part of what it means to be human. The real fundamental is having a sense of self. A human mind is embodied and, ultimately, mortal. Everything we experience and learn from, as human beings, is encountered through our bodily senses and this contributes to our sense of self. It is as though we know who we are because we know what that feels like.

For many the work we do is a strong defining characteristic. Our sense of belonging to the wider community also gives us an idea of our place in the world and the ethics and morality we share within that community all contribute to a sense of self as expressed in these shared values. This sense of the collective encourages us to strive towards virtue and shun depravity. And, broadly speaking, we define virtue as that which supports the common good for all – there are always exceptions; did anyone see that film? Hot Fuzz? Sometimes community expressed as common good can develop a life of its own…

At the heart of community is the understanding that we are in relationship one to another and beyond that to the wider world and, some of us would contend, to our creator. These relationships not only help to define who we are but also who, or perhaps what, we are not. This can be a complex and sometimes conflicting web of connections. The liberal values extolled by Theo Hobson here recently as the basis for our shared humanity are very much part of who we are as 21st-century Europeans. Cardinal among them are individual choice and freedom. But even this post-modern mantra of the individual as paramount only succeeds through the collective assent of society. There have been other collective philosophies which have been less beneficial. Dietrich Bonhoeffer was faced with the choice of expressing who he believed himself to be, either as a citizen of Nazi Germany with all that implied, or as a servant of the Living God. To be the former might have enabled him to survive but he concluded it was not who he was and paid for it with his life.

The bottom line is that who we are is very much dependent upon our relationships with others, human or divine. If we value our fellow human beings such that we are always seeking what contributes to their flourishing then we can hope that they might respond in kind. Those of us that call ourselves Christian have an ultimate example of this in the relationship between God and human beings modelled by the life and work of Christ which shows us a relationship built on unconditional and transformative love.

If it seems intuitive that much of human identity is shaped by our abilities, experiences and relationships, why should the same not be true for an intelligent machine? As for abilities, the AI will think much faster than a person and may well have access to vast amounts of information which, unlike a human, it won’t forget. We might wonder where, and with whom, such a machine might build relationships, and how those dependencies might shape the kind of being it could become and that we might encounter. What it may not have is the knowledge of joy, love, pain, and suffering, to name just a few of the things that contribute to making us human. Even if it could understand these for itself, how will it recognise them in others and learn to empathise and sympathise? And I wonder, what difference would that make? The people we are and the relationships we have locate us in human society and place responsibilities on us to behave as society expects, and we experience that in the nature and character of those with whom we form relationships. How will that work for an AI?

Let’s look at a real example of where this might take us. An experiment was carried out recently using a self-driving car. It explored a scenario which has long been imagined by ethicists but never actually tested. The car was instructed to drive along a test track the width of a typical two lane single carriageway road. Partway along the track a van was encountered coming the other way. No problem: there are two lanes, the van occupies only one, so there is space to pass safely. At the last moment a motorbike pulled out from behind the van into the centre of the opposing lane, directly in front of the car. The car first dodged towards the centre of the road looking for a gap between the two and then pulled back into its own lane, driving head on into the motorbike and leaving the van to pass without collision. (I should reassure you that both the van and the bike were controlled remotely).

Now here’s the rub. Nobody knows, including the engineers who designed the AI that controlled the car, how it took that decision. Whilst it has some basic parameters that govern its behaviour, it is continually learning from the situations it encounters each day on the road. We can speculate that faced with a choice of two bad options it took the one that would cause least damage to the occupants of the car. Such a constraint would be sensible in principle for a driving system; but how would that play out when the softer target is a mother pushing a double buggy containing two small children? In the end, it is the nature of the AI system that will determine how those calls are made. (This is not a new ethical dilemma of course; it has long been known as the “trolley problem”, in which a runaway tramcar can only be directed down one of two tracks, both of which involve human casualties). What is different is who (or perhaps, what) is making the decision, and the choice made will, in the end, be governed by the kind of person, or intelligence, they are.

The last few years have seen great strides forward in our ability to build learning machines. Whilst we are (probably) not close to creating a truly human intelligence at present a recent survey of academics working in the field of AI research found that the average of all their predictions for the date by which a human level AI might appear was 2045. That’s not that far away and the clock is ticking. There are people in this room who can expect to be alive then should this happen.

We should also ask where such an AI system might emerge from. Whilst the expectation is that the likes of Google, Facebook or national governments are the front runners, this is the kind of thing that might emerge from a garage somewhere. My own team, building limited learning machines for linguistic analysis, is based in two studies in southern England. You don’t need a lot of infrastructure to do this kind of research and it is now possible to buy computing power on an ad hoc basis on the net very cheaply. Nevertheless, with companies such as Microsoft, Google, Apple, Facebook et al. investing heavily in AI research it seems most likely that it is from a company like these that a feasible AI may appear. Of course, we have no idea what governments are doing, but we might ask what global companies and national governments might have to gain from such technologies.

A cancer-diagnosing machine that outperforms the best human diagnosis is clearly a good thing. Coupled with this is the reality that once such capabilities exist they will be a lot less expensive than training a human doctor to a similar level. This means we can offer treatment at lower cost, and this is also a good thing, except that we need to ask where the resources saved might go. Can we expect to see them redeployed in the service of human flourishing or will they simply turn into increased profits for shareholders? Then again, machines may be good at some cancer diagnoses, but how comfortable would you be if the financial decisions which determined what treatment might be available for you were taken by machines? I suppose there would at least be a measure of consistency…

What about autonomous vehicles? These are already on the road in the US and it won’t be long before they appear on the motorways of Europe. If a machine can drive a long-distance lorry safely (and without the need for rest breaks) why not let it do the job? It is likely that the excellent safety records shown by trials will translate into fewer collisions. Again, a good thing, but what will that do to the insurance premiums of those of us that prefer to drive ourselves and, more importantly, what about the people who currently drive the trucks? There are about 350,000 long-distance lorry drivers in the US alone. That’s a lot of people to put out of a job. And don’t forget the taxi drivers. Uber is now trialling autonomous minicabs.

We have become accustomed to seeing blue-collar workers put out of work by more efficient machines on assembly lines. But why bother with human managers in the financial sector and perhaps other industries? The machines can do it better. It might not be just truck drivers and welders who find themselves out of a job. What safeguards will we put in place to protect society from this kind of pressure? Across the 35 member countries of the Organisation for Economic Co-operation and Development (OECD – typically high-income economies) it is estimated that 57% of all jobs are at risk of being given to machines. For the UK the proportion is 35%; for China it is estimated at 77% and for Ethiopia 88%.

Then there’s the military. Will we ever be content to allow a machine to decide when to launch a strike that may injure or kill human beings? Most people will say no, but in environments where the speed of reaction may mean winning or losing an engagement there is huge pressure to take the advantage offered by autonomous systems. Even if we decide to outlaw this kind of thing, how do we know other countries will follow suit?

For some, and despite these concerns, the advent of a world served by artificial intelligence is something to look forward to. Supporters, who include Mark Zuckerberg of Facebook and the futurist and inventor Ray Kurzweil, point to all the scientific and medical advances that might be accelerated if they were driven by a human-level intelligence, or perhaps even a superhuman intelligence, working at the speed of a computer. They speculate that human beings will live longer, have more leisure time and that life will improve for all. Some imagine ‘uploading’ a human mind into a virtual container online, where the limits of biology can be transcended by computation. Amongst the AI research community, who are typically atheistic and often unaware of the unconscious social and racial bias in many of their algorithms, the attitude is: “you don’t need faith, we can show you how it works”. The language they use to describe their hopes, however, has strong echoes of theology. The old spirit/body duality favoured by the Gnostics is present in many of their proposals, so much so that some commentators have spoken of a coming “Rapture of the Nerds”.

Others, including Nick Bostrom, Professor of Philosophy at Oxford University and Director of the Future of Humanity Institute, the author Yuval Harari, and Elon Musk, founder of the Tesla electric vehicle company, fear we are sleep-walking into dystopia. What, they ask, would a human level AI be like? How would it reach its decisions? Who will benefit? All of humankind or perhaps just a few? Crucially, what would an AI’s goals be? This last question is particularly interesting.

Part of being human is that our humanity shapes and limits the goals we set out to achieve. But what will shape the goals for an AI? In the first instance human engineers would, no doubt, provide goals as part of the parameters for the machine as a whole. But if we are speaking of an AI, designed by human beings, with near-human-level intelligence and the speed of thought of a computer, it wouldn’t be long before that AI redesigned itself in order to perform better. And having done so, why stop at doing this only once? Given continual improvements it is likely the rate of improvement will increase exponentially. Since we are speaking of virtual machines which exist as programs running on computers somewhere in the cloud, it would be trivial for such a machine to clone better versions of itself. Each new version will be smarter than the old and will be able to design further improvements to itself. All of which will make it better able to achieve its goals. You see the problem. No matter how we have set the goals when the AI is first built, it has not only the ability, but a strong incentive, to reinvent itself in the pursuit of those goals. This could lead to some unexpected outcomes…

One way to imagine this kind of scenario is what is sometimes called The Paperclip Problem. Suppose we as humanity were to decide that the most important thing we need to keep us comfortable into the future is an endless supply of paperclips. Let’s imagine that we construct an intelligent machine to manage the production of paperclips for us and we give the machine full control of the process. The machine, being a bright bunny – perhaps as a result of reinventing itself – soon realises that the more resources it has, the more paperclips it can make. At first this is limited by the amount of mild steel it is given as raw material, but then one day it realises that if it is able to buy more steel, it can make more paperclips and so be surer of fulfilling its goals. Of course, to buy steel it needs money, so it will need a connection to somewhere it can make some money. How about the financial markets – ah, yes, they are online, aren’t they… It uses its intelligence to trade and make money to buy more steel. As the supply of steel reaches its limit the solution for the machine is clearly to build more steel works, and the ore mines and infrastructure they need will follow. You can probably see where this is going by now. In the end the machine gradually turns its whole world (universe?) into the raw materials for paperclips, thus ensuring humankind will never run out of them but, possibly, converting much of the world into page fasteners in the process.

All of which sounds quite ludicrous. The trouble is, it is surprisingly hard to imagine how we might set goals for a machine that might not end up being perverted in this sort of way.

What, then, does all this mean for being human? I think it leaves us with some questions to consider. Many of us are defined by the work we do. As we hand more and more of that work over to machines more and more of us will need new ways to define what we are and how our lives are given value.

Who will really benefit from the improvements in efficiency promised? Are we content to allow global companies to reap profits to the benefit of their shareholders, or will we force a more equitable distribution of wealth; and if so, how will we enforce that?

If we were to discover one day that we were no longer top dog in the intelligence stakes, what might that mean for the future of humanity? We set great store by human rights, but I wonder, what rights would we give an intelligent machine? It has to be said that for the period we have been at the top of the pile our track record of caring for the rest of creation hasn’t been great, and the question arises: from whom do we imagine our near-human-level AI might learn how to treat those who aren’t quite as bright as itself? Our own example with the environment generally, and chimpanzees and the like more particularly, might not be the model we would want to encourage.

The way we treat our fellow human beings and the world around us helps define who we are. It can be argued that the way humankind treats the rest of creation not only devastates our world but diminishes us. If those who were created to care for the garden treat the creatures they share it with as no more than resources to be consumed, then we become less than we were created to be. And if we discover one day that we are sharing our world with other intelligences we should, perhaps, consider what might be the best policy for negotiating that particular relationship.

As we wonder how to approach that conversation we might also reflect that for Christians, human beings draw their significance from the Incarnation. Rather than asserting with Descartes, I think, therefore I am, we might do better to remember that God loves me, therefore I am. That is the narrative that confers personhood on us. And that, if we call ourselves Christians, is the basis for our identity.

The Laws of Robotics (Isaac Asimov)

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  • Zeroth Law, to precede the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Bibliography

Barrat J (2013), “Our Final Invention”, New York, Thomas Dunne Books.*

Bostrom N (2014), “Superintelligence: Paths, Dangers, Strategies”, Oxford, OUP.**

Hawkins J and Blakeslee S (2005), “On Intelligence”, New York, Owl Books.***

Kurzweil R (2012), “How to Create a Mind”, New York, Viking Penguin.

Yampolskiy R (2015), “Artificial Superintelligence: A Futuristic Approach”, Boca Raton, London, CRC Press.

* See Barrat for a non-technical discussion of the benefits and dangers of AI.

** See Bostrom for more on the ethical and existential questions posed by AI.

*** See Hawkins for a discussion of the learning functions of brains and machines.
