Is Artificial Intelligence really ‘intelligent’? – TheArticle

When Artificial Intelligence was in its infancy it was quite natural to give it a sonorous name: it needed to attract money and talent. It has since become a mainstream subject that seeks to imitate human intelligence. Consider a recent definition: “Artificial Intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Speech recognition: I remember its first steps. It was the late 1950s and I worked in industry. A colleague of mine, two benches away, had the job of recognising and printing out some limited speech: the numbers from one to ten. He talked to an oscilloscope and watched the waveform appear, hoping to identify the numbers from the zero crossings, ie the points where the waveform changed sign. One day he told me that the problem had been solved: his machine had been able to recognise all those numbers. “May I try it?” I asked. “By all means,” he said. I tried, and counted up to ten. The machine ignored me. Several other people tried and failed too. As it turned out later, the machine could only work if addressed in a Polish accent. That was a long time ago. Since then, software has become commercially available that understands not only those born in this country but also Hungarians, known for mercilessly massacring the English language.

Machines can of course do a lot more nowadays than understand the spoken word. But are they intelligent? Where should our quest for intelligence take us? Games are good candidates. Let us look at a number of them, starting with a simple one: Noughts and Crosses.

It is a trivial example. There are only nine squares, so the machine can look at every combination of moves and countermoves; they amount to about 35,000. Draughts is incomparably more complicated: there are far too many possible moves. Brute force, ie looking at all the possibilities, does not work. So what can be done?
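To make “looking at all combinations” concrete, here is a minimal sketch in Python of a brute-force minimax search for Noughts and Crosses. It is illustrative only, not any historical program; the names and board representation are my own.

```python
# Brute-force minimax for Noughts and Crosses (illustrative sketch).
# A board is a tuple of nine cells: 'X', 'O' or ' ' (blank).

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively search every continuation.
    Returns +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: a draw
    scores = []
    for i in moves:
        nxt = board[:i] + (player,) + board[i + 1:]
        scores.append(minimax(nxt, 'O' if player == 'X' else 'X'))
    # X picks the best score for X, O picks the worst for X.
    return max(scores) if player == 'X' else min(scores)

empty = (' ',) * 9
print(minimax(empty, 'X'))  # prints 0: perfect play from both sides is a draw
```

The whole game tree fits comfortably in memory, which is exactly why this approach stops working for Draughts, let alone Chess.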

A strategy was envisaged by Arthur Samuel, whose first program goes back to 1959. He introduced a score function, which assessed the chances that any given move would eventually lead to a win. The function took into account the number of kings and how close any of the pieces were to becoming kings. Samuel also introduced machine learning: he fed thousands of games into the computer, pinpointing winning strategies. He did his programming on an IBM computer. His machine could beat amateurs but not professionals. Even this partial success, however, caused IBM stock to rise, with the birth of a new computer application: games.
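A score function of this kind can be sketched in a few lines. The weights and feature names below are invented for the example; Samuel’s actual program tuned its features by machine learning over thousands of games.

```python
# Illustrative sketch of a Samuel-style score function for draughts.
# The weights (1.0, 2.5, 0.1) are made up for the example.

def score(men, kings, opp_men, opp_kings, advancement):
    """Estimate how good a position is for the side to move.
    advancement: summed row distance of our men toward the king row."""
    return (1.0 * (men - opp_men)
            + 2.5 * (kings - opp_kings)   # a king is worth more than a man
            + 0.1 * advancement)          # reward pieces close to crowning

# Equal material, one extra king, some advanced men: a positive score.
print(score(men=8, kings=1, opp_men=8, opp_kings=0, advancement=12))
```

The point is not the particular numbers but the idea: instead of searching to the end of the game, the machine searches a little way ahead and then asks the score function who is better off.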

The game that stands above them all is chess. There is no chance of exhausting all possible moves, so it comes down to Samuel’s methods: a score function and learning from examples. Oddly enough this way of learning was first practised by a fictional character in Stefan Zweig’s Schachnovelle, published in 1941 (recently mentioned by Raymond Keene in a column in these pages). The main character, an Austrian aristocrat, was imprisoned by the Nazis. While in solitary confinement he managed to get hold of a book containing all the moves in a high-level chess tournament. Not having anything else to read, he just played them in his mind again and again and again. When he was released, his mental state was affected, but his play was good enough to beat the World Champion. Deep Blue, IBM’s computer trained to play chess, beat Kasparov, the reigning champion in the real world, in 1997 in a six-game match. Deep Blue had a three-way strategy: it played countless games (like the Austrian aristocrat), it had a score function, and it used brute force to evaluate the game six or seven moves ahead.
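The combination of brute force and a score function can be sketched abstractly: search a fixed number of moves ahead, and when the horizon is reached, fall back on the score function instead of playing to the end. The sketch below is generic and illustrative; `moves`, `apply` and `evaluate` are placeholders standing in for real chess machinery, and the toy game in the demonstration is invented.

```python
# Depth-limited minimax: brute force down to a fixed depth, then a
# score function at the horizon (illustrative sketch, not Deep Blue).

def search(state, depth, maximizing, moves, apply, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # score function takes over at the horizon
    results = [search(apply(state, m), depth - 1, not maximizing,
                      moves, apply, evaluate) for m in legal]
    return max(results) if maximizing else min(results)

# Toy demonstration: the "game" is a number, each move adds 1 or 2,
# one player wants it large and the other small, and the score
# function is simply the number itself.
val = search(0, 3, True,
             moves=lambda s: [1, 2],
             apply=lambda s, m: s + m,
             evaluate=lambda s: s)
print(val)  # prints 5: max adds 2, min adds 1, max adds 2
```

Deep Blue’s real advantage was that special-purpose hardware let it run this kind of search over hundreds of millions of positions per second.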

The machine’s victory is regarded as the greatest triumph of Artificial Intelligence, although it was somewhat marred by Kasparov’s claim that IBM cheated. He said that the machine must have been occasionally overruled by a human player, and this amounted to cheating because he would play differently against a human player than against a machine. The controversy was never resolved. IBM dismantled the machine very soon after the end of the match. Was Deep Blue intelligent? Not really, because it just did what it was programmed for. Its main advantage was speed. The programmers were intelligent (even if they cheated), Deep Blue was not.

So let us turn to Go, regarded by orientals as the supreme game. The program, AlphaGo, developed by DeepMind, challenged grandmasters including the world champion about three years ago. The computer won hands down. The main reason for winning was that a lot has happened in AI since Deep Blue. There has been a radical change in programming philosophy: the program started with no knowledge of the game and built up its expertise by playing millions of games against itself. It trained itself for the singular purpose of playing Go. It was a radical departure from previous approaches: no preliminary information on the nature of the game was fed into the computer. It started from scratch, just like a non-swimmer thrown in at the deep end of a swimming pool.
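Learning a game from scratch by self-play can be illustrated on a toy scale. The sketch below uses tabular Q-learning (a standard reinforcement-learning technique, far simpler than anything DeepMind used) on the game of Nim: a pile of ten objects, each player takes one to three, and whoever takes the last object wins. The program is told only the legal moves; every parameter here (pile size, learning rate, episode count) is invented for the example.

```python
import random

# Toy illustration of learning from scratch by self-play:
# tabular Q-learning on Nim (pile of 10, take 1-3, last taker wins).
random.seed(0)
Q = {}  # (pile, move) -> estimated value for the player about to move

def best(pile, eps=0.1):
    """Pick the highest-valued move, exploring at random 10% of the time."""
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))

for _ in range(20000):
    pile, history = 10, []
    while pile > 0:                    # the program plays both sides
        m = best(pile)
        history.append((pile, m))
        pile -= m
    # The player who made the last move won; propagate the result
    # backwards, flipping its sign at every move (the sides alternate).
    reward = 1.0
    for p, m in reversed(history):
        old = Q.get((p, m), 0.0)
        Q[(p, m)] = old + 0.1 * (reward - old)
        reward = -reward

# After training, inspect the learned move from the starting pile.
print(max((1, 2, 3), key=lambda m: Q.get((10, m), 0.0)))
```

Nothing about Nim’s strategy (leave your opponent a multiple of four) was programmed in; whatever the table ends up knowing, it learned purely from playing itself, which is the philosophical shift the Go program represented.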

Games are games. They are excellent demonstrations of how to solve problems where the criteria of success are well defined and the rules are known. But let us widen our scope and look at a much-predicted product of Artificial Intelligence: driverless cars. If perfected, could they be regarded as matching human intelligence? I think the answer is yes. Driverless cars would, no doubt, be a great improvement over human-driven cars. They have many advantages: they would never be under the influence of alcohol or drugs, they would never race a fellow driverless car, they would never try to show off to impress a girlfriend and they would never fall asleep.

Even so, we are still very far from the driverless stage. When will they be ready? In a year or two? In ten years? In thirty years? Next century, perhaps? Part of the reason is technical: how can they be trained? Not like Go. Driverless cars cannot learn by going up and down a street a million times; even a thousand times would not go down well with those living there. And even if everything went well with the first two thousand journeys down a street, something new, say the development of a new junction, might invalidate all that training. And that was only one street.

If that wasn’t enough, there is a psychological barrier as well — the fear of accidents. It may very well happen that driverless cars turn out to be safer than those driven by ordinary mortals. They might cause only, say, 900 fatal accidents in a year in contrast to the 1,700 caused by human drivers in the UK. Will we be happy? Unlikely. We accept human errors because we often commit them ourselves. But if there were ever a fatal accident caused by a driverless car we would blame the manufacturers and demand that their product should be banned from the roads.

Much of what currently passes for Artificial Intelligence is hype. Many of the applications already in existence need no intelligence; instead they rely on the assiduous collection of data combined with known techniques of automation. On the whole I would claim that the programmers are intelligent, but the machines are not. In one application, driving cars, machine intelligence might indeed surpass human intelligence, but that application may never come. Machines could of course help humans arrive at decisions, say diagnoses in medicine, but very few patients would be happy if the decisions were made by machines alone.
