
We might be in Jeopardy, my dear Watson

On August 29th, 1997, Skynet became self-aware. Afraid of what this might mean, Skynet’s operators attempted to shut it down. Seeing this as an attack, Skynet launched the whole of the United States’ nuclear arsenal at Russia, which retaliated with an all-out attack on the USA. Billions of people died in the exchange.

Fortunately, this was science fiction. It didn’t happen.

Fast forward to 2011. Watson, a supercomputer designed and built by IBM, took part in a game of Jeopardy, a peculiar game in which the contestants are given the answers but have to come up with the questions. I suppose it’s a bit like Mock the Week but without the jokes. Watson was put up against two of the best Jeopardy players ever, and the following 23-minute video shows the results (if you haven’t time for the whole 23 minutes, try this 3-minute one, although the longer one really is worth the effort):

Should we worry about Watson? Will he (it’s hard not to anthropomorphise) become self-aware, instigate a nuclear exchange, decimate humanity? Well, probably not. For a start, “they” haven’t – as far as we know – connected him up to any nuclear missile systems, so we’re probably safe for now. And although Watson could probably pass the Turing test, he is little more than a language processing and inference engine connected to a large knowledge base. This isn’t what most people would understand by “intelligence” (let’s set aside the difficulty in defining exactly what intelligence is).
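To make that “language processing engine over a knowledge base” description a little more concrete, here is a deliberately naive sketch of the clue-in, question-out idea: look the clue up in a knowledge base and phrase the best match as a question. To be clear, Watson’s real DeepQA pipeline ran hundreds of analysis and scoring components in parallel; the tiny knowledge base and word-overlap scoring below are invented purely for illustration.

    # Toy sketch only: an invented knowledge base and a crude word-overlap
    # score, nothing like Watson's actual DeepQA architecture.

    KNOWLEDGE_BASE = {
        "Sherlock Holmes": "fictional detective created by Arthur Conan Doyle",
        "Watson": "supercomputer designed and built by IBM that played Jeopardy in 2011",
        "Skynet": "fictional AI from the Terminator films that became self-aware in 1997",
    }

    def answer_clue(clue: str) -> str:
        """Pick the entry whose description shares the most words with the clue."""
        clue_words = set(clue.lower().split())
        best = max(
            KNOWLEDGE_BASE,
            key=lambda name: len(clue_words & set(KNOWLEDGE_BASE[name].lower().split())),
        )
        # Jeopardy convention: contestants must phrase the response as a question.
        return f"What is {best}?"

    print(answer_clue("This IBM supercomputer played Jeopardy in 2011"))
    # -> What is Watson?

Scale the knowledge base up by several orders of magnitude and replace the word-overlap score with serious statistical language analysis, and you have a crude caricature of what Watson does: impressive engineering, but not obviously “intelligence”.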

I am a proponent of “Strong AI”. I believe it is possible for a machine to have a mind in the same way that humans have minds, and I have absolutely no doubt that, given enough time – and “enough” might not mean all that long, perhaps less than a decade – we will eventually devise machines which possess real intelligence. This is such an obvious goal that it is inconceivable that it would not be attempted, and unless intelligence really is substrate-dependent, as philosophers like John Searle would have us believe, eventually we will succeed. That intelligence might not be the same as ours – I consider it highly unlikely and undesirable that it would be – but it will almost certainly surpass our own. This has certain implications.

Human beings are not the strongest, fastest, or most durable species on planet Earth. Our single distinguishing characteristic, that which has driven our success, is our intelligence. When the day comes that we have created an artificially sentient being (a much better phrase, I think), how will we, as a species, react to no longer being the most intelligent beings on the planet?

What rights should we give to these new beings? Will we consider them our slaves, demanding that they do our bidding, with no rights because they are not human? Or will we recognise them as equals, or even superiors, and work in cooperation with them?

If we afford them rights similar to those humans enjoy, how long will it be before machines are given the right to vote? Will we permit them to stand for office, and if elected will we really allow them to participate in the enactment of law? And how long, then, before they begin to enact laws which favour machines to the detriment of humans? Once this process begins, if we assume they have some level of self-interest, how long will it be before we are the slaves and they the masters?
