Note: This post is a further unpacking of a concept introduced in this post. That concept can be stated briefly as follows: Stupefication works, in part, via the distillation of context-dependent human knowledge into a database. But the computer makes use of such “knowledge” in ways that do not mirror human use. And to draw a naive parallel between the way computers use databases and the way human beings use databases is to commit a fallacy which I see committed all too often by those who espouse the glories of Good Old Fashioned Artificial Intelligence.

Human beings use chess opening books like maps to a territory. We recognize an intrinsic connection between games of chess and strings of numbers and letters like

“1) e4 e5 2) Nf3 Nc6 3) Bb5”

A human being familiar with algebraic notation will see these symbols and connect them with their meaning, just as many of us, upon seeing the word “apple,” conjure up an image of a red fruit. Knowing the connection between symbol and meaning, between the syntax and its semantics, the human chess player can use the recorded chess game like a map, locating a previous game that reached a position identical to (or very similar to) her current game’s position. Then she can use that game to follow a previously cut path through the thicket of chess variations which lie ahead of her.

Computers don’t. By this, I mean that a computer “follows” the map in the sense that I might “follow” a straight line when falling down a well. The machine makes no semiotic connection between the syntax of algebraic notation and the semantics of chess activity. It merely obeys physical laws — albeit a very complex set of them. That’s the neat thing about algorithms. They allow computer scientists to subject computers to a mind-bogglingly complex set of physical laws (in this case, a series of high and low voltages, represented as ones and zeros) such that they can play chess, emulate bad conversations, and display web pages like this one. But despite the many layers of abstraction that lie between Deep Blue’s spectacular middle-game play and the succession of high and low voltages that make it all possible, the fact remains that the computer functions in a purely deterministic way. Incidentally, this is exactly how you or I would function when subjected to the law of gravity after being dropped down a well.
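The kind of “map-following” described here amounts to nothing more than table lookup. Here is a minimal sketch (hypothetical code, not Deep Blue’s actual implementation) of an opening book as a purely syntactic mapping from one string of symbols to another:

```python
from typing import Optional

# A hypothetical opening book: keys are the moves played so far,
# values are the book reply. The names and format are illustrative.
OPENING_BOOK = {
    "": "e4",
    "e4 e5": "Nf3",
    "e4 e5 Nf3 Nc6": "Bb5",  # the Ruy Lopez, to a human reader
}

def book_move(moves_so_far: str) -> Optional[str]:
    """Return the book reply for a line, or None if out of book.

    This is nothing but dictionary lookup: the function would behave
    identically if the keys and values were arbitrary nonsense strings.
    No semantics of chess enters anywhere.
    """
    return OPENING_BOOK.get(moves_so_far)

print(book_move("e4 e5"))  # Nf3
print(book_move("d4 d5"))  # None: out of book
```

The point of the sketch is that the lookup is indifferent to meaning: swap every chess symbol for a random token and the program runs unchanged, which is exactly what it means to obey syntax without semantics.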

“Ah, yes,” say the gallant defenders of Strong AI (in Searle’s sense of the term). “But how can you prove that human beings are actually doing anything different? Even human beings may operate according to purely deterministic rules — in which case, we all ‘fall down wells’ when we think.”

The only weight this objection carries is the weight of confusion. When I say, “Humans do semantics; computers just operate deterministically on syntax,” the objection says, “Ah, but how do you know semantics isn’t just a series of deterministic operations on syntax?” Philosophers like Searle and Floridi have tackled this issue before. Here’s my stab at dealing with it concisely:

Even if I grant that when we say “semantics” we are actually referring to a deterministic and essentially digital process, the fact remains that the deterministic-and-essentially-digital process that I am engaged in when doing “semantics” is of a fundamentally different nature than the deterministic-and-essentially-digital process that Deep Blue is engaged in when its software obeys the lines of code that tell it to refer to its opening book and to play 2) Nf3 in response to 1) … e5. So can I prove that the universe doesn’t operate deterministically? No. Can I prove that our minds aren’t, by nature, digital? No. But I can reveal the aforementioned objection for the house of smoke and mirrors that it is. My point (and Searle’s point) is that human beings do one thing and computers do something very different. And making such a proposition doesn’t require a complete knowledge of the nature of semantics or any amount of empirical evidence that thought isn’t digitally based. The objection mentioned above implies the need for such a presupposition even though no such need exists. And the onus lies on the objector to prove that human beings and computers engage in processes that share any kind of functional similarity.
