Andrew Maggs: Hello everyone. My name's Andrew Maggs. I'm a Principal Associate in Wragge Lawrence Graham & Co's Tech team. I'm here today with Murray Shanahan, Professor of Cognitive Robotics at Imperial College London. Murray, you've just written a book called "The Technological Singularity", which is shortly going to be published by MIT Press. Could you tell us a little bit about the technological singularity: what it is, will it happen, and if it does happen, what will it mean for us?
Murray Shanahan: So the technological singularity is a hypothesised point in history when we create human-level artificial intelligence. The term is used a little differently by different authors, but that's the sense in which I mean it: the point at which we make human-level AI. The idea is that if we ever did manage to create human-level artificial intelligence, and it is certainly an if, then the potential ramifications for society would be so great that it would be like a sort of break in history. The term is actually by analogy with a black hole. Inside a black hole the mathematics breaks down, in a sense, and it's called a singularity because, mathematically speaking, we don't really understand what happens within one.
So it's just an analogy with that; similarly, we don't really understand quite what will happen after this point in history. But it's really all about artificial intelligence, about a point in history where we create human-level AI. And will it happen? Well, we really don't know. It certainly isn't something that's around the corner. Creating human-level artificial intelligence is something we don't know how to do, and I imagine that there are perhaps a number of conceptual breakthroughs we need to make before we'll ever get there. Some authors predict that we'll achieve it in the middle of the 21st century, while other people working in AI are much more sceptical about it.
Now, I don't want to make predictions and I don't want to be a futurologist, so I'm not particularly bothered about pinning down exactly when it will occur. But just the very possibility, I think, is very important and very interesting, and it raises a whole lot of really significant philosophical questions. And then, if it did occur, if we did manage to make human-level artificial intelligence, what would those ramifications be? I think a lot of people think it could be very black and white. We've been hearing a lot lately about dystopian scenarios; we've heard people like Elon Musk and Stephen Hawking pronouncing on how dangerous it would be if we ever created human-level artificial intelligence, and certainly there are very important considerations there. We need to take those kinds of ramifications very seriously. But then other authors, like Ray Kurzweil, are much more optimistic: they think it will lead to a kind of utopia, where we'll be able to cure all sorts of problems, and disease and poverty and climate change will all be solved thanks to our having created this incredibly powerful technology.
Shall I go on? Another important aspect of this idea, and one reason why it's potentially so dramatic, is that a lot of authors think that once we create human-level artificial intelligence, it will be able to improve itself, so you would very soon get a kind of superintelligence, and we'd end up with an AI that's very powerful indeed. We really don't understand quite what the ramifications of that are. But I should emphasise, and this is a very important point, that this human-level AI is something that's decades in the future, if it happens at all. At the moment there is also, of course, a great deal of interest in the kind of AI technology that is around the corner: the kind we're seeing in things like self-driving cars, in personal assistants like Siri and Cortana, and in the way we process big data and are able to make computer programs that can make very sophisticated decisions on the basis of very large amounts of data.
So all of that is going to have a shorter-term, much more predictable, but also quite dramatic influence, I think, on society and on our economy. It's very important to distinguish between the short-term, specialised bits of AI technology, which aren't human-level AI, and the longer-term, more science-fiction kind of AI that we're talking about when we talk about a possible technological singularity.
Andrew: Thank you, Murray, for talking to us today, and thank you very much for listening to our podcast. We hope you found it interesting and useful. If you have any further questions, please get in contact with anyone in our Tech team.