When we stupefy a task, we construct another task or series of tasks, the sum of which requires less intelligence overall.

The trivial example:

If my task is to produce a novel, you could stupefy the task for me by writing the novel and letting me copy it. The act of copying a novel requires less intelligence than creating one, yet it appears — if examined only in terms of its final product — identical to the act of creation. By letting me copy your novel, you have stupefied a potentially intelligent act: you extracted the intellectual requirement from it, took the burden of intelligence onto your own shoulders, did the intelligence-requiring work, and left me to produce the output.

Some people may rightly object to this example by pointing out that, overall, the amount of intelligence utilized has not decreased at all (and may in fact have increased). For whether or not I wrote the novel, a novel was written; and writing a novel requires an expenditure of intelligence no matter who writes it. But let us remember that stupefication is itself often a task that requires intelligence (see this post). Thus, you may have stupefied my task of creating a novel only at great expense of your own intelligence. Likewise, the makers of Deep Blue created a computer that could trounce any of its creators in a chess game — but making the machine was a long and difficult road.

But no matter how you slice it, my aptly named “trivial example” is so absurdly simple as to be useless in the context of manufacturing artificial intelligence. After all, virtually any task (with a few notable exceptions) can be stupefied in this way. But doing so will never constitute the creation of an artificially intelligent system.

Deep Blue would have been a silly machine indeed if its “internal” calculations had been performed in real time by human grandmasters hiding inside its belly (which, by the way, would have been merely a modern resurrection of the famous Mechanical Turk hoax of the 18th century). It is unlikely that even the most naive of individuals would have considered The Turk an example of artificial intelligence, for The Turk was merely an example of old-fashioned human intelligence.

That being said, this idea of hidden human intelligence connects in an interesting way to the quotation in this post, which suggests that when Kasparov played against Deep Blue, he was actually playing against “the ghosts of grandmasters past.” If so, then although there were no literal grandmasters hiding in the belly of the machine, years of powerful over-the-board intellect managed, in some form or another, to find its way into Deep Blue’s physical and/or virtual innards.

I offer the trivial example for two reasons: to set up a more complex discussion later, and to demonstrate clearly that stupefying a task is not a sufficient condition for the creation of artificial intelligence.

Artificial intelligence is not always aptly defined, and I’m not going to enter the ongoing debate about what constitutes artificial intelligence. I will merely suggest that the trivial example above is not it.

For one thing, the machinery of The Turk was no different from a rock, in that it obeyed purely physical laws and moved only when the grandmaster hiding inside manipulated it with his hands and feet. The Turk made no decisions. I will tentatively suggest here that in order to create an artificially intelligent system, the process of stupefication must result in (1) a new task, (2) which can be performed by a different entity, and (3) which involves some level of decision making.

I mention this last qualification because I could easily make a machine that “writes novels” by programming it to print out novels that I had written. This would be The Turk all over again. And few people would be impressed.
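To make that point concrete, here is a minimal sketch (hypothetical code, not a description of any real system) of such a Turk-like “novel-writing machine”: all of the intelligence resides in the pre-written text, and the program itself makes no decisions whatsoever.

```python
# Hypothetical sketch: a "machine that writes novels" in the Turk-like
# sense described above. The text below stands in for a novel that a
# human author has already written.
PREWRITTEN_NOVEL = "It was a dark and stormy night..."

def write_novel():
    # No branching, no choices, no state: the function simply
    # reproduces the stored output. Every bit of the intelligence
    # was spent by the human who wrote PREWRITTEN_NOVEL.
    return PREWRITTEN_NOVEL

print(write_novel())
```

Because the function is a constant, running it twice yields the identical “novel” — a giveaway that the system exercises no decision making of its own.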

I will address the idea of decision making later. To that end, consider the following tasks.

A Less Trivial Task: Creating a novel that has never been written before.

An Even Less Trivial Task: Stupefying the previous task.