Note: This post is a further unpacking of a concept introduced in a previous post. That concept can be stated briefly as follows: Stupefication works, in part, via the distillation of context-dependent human knowledge into a database. But the computer makes use of such “knowledge” in ways that do not mirror human usage. And to draw a naive parallel between the way computers use databases and the way human beings use them is to commit a fallacy which I see committed all too often by those who espouse the glories of Good Old Fashioned Artificial Intelligence.

Human beings use chess opening books like maps to a territory. We recognize an intrinsic connection between games of chess and strings of numbers and letters like

“1) e4 e5 2) Nf3 Nc6 3) Bb5”

A human being familiar with algebraic notation will see these symbols and connect them with their meaning, just as many of us, upon seeing the word “apple,” conjure up an image of a red fruit. Knowing the connection between symbol and meaning, between the syntax and its semantics, the human chess player can use the recorded chess game like a map, locating a previous game that reached a position identical to (or very similar to) her current game’s position. Then she can use that game to follow a previously cut path through the thicket of chess variations which lie ahead of her.

Computers don’t use opening books this way. A computer “follows” the map only in the sense that I might “follow” a straight line when falling down a well. The machine makes no semiotic connection between the syntax of algebraic notation and the semantics of chess activity. It merely obeys physical laws, albeit a very complex set of them. That’s the neat thing about algorithms: they allow computer scientists to subject computers to a mind-bogglingly complex set of physical laws (in this case, a series of high and low voltages, represented as ones and zeros) such that they can play chess, emulate bad conversations, and display web pages like this one. But despite the many layers of abstraction that lie between Deep Blue’s spectacular middle-game play and the succession of high and low voltages that makes it all possible, the fact remains that the computer functions in a purely deterministic way. Incidentally, this is exactly how you or I would function when subjected to the law of gravity after being dropped down a well.
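
To make the machine’s side of this concrete, here is a minimal sketch, in Python, of what “following” an opening book amounts to for the computer. The book itself is an invented toy, but the shape of the operation is the point: a lookup in a table of strings.

```python
from typing import Optional

# A toy opening "book": lines of algebraic notation mapped to replies.
# The keys and values are bare strings, invented for illustration;
# the program manipulates symbols, never meanings.
OPENING_BOOK = {
    "1) e4": "e5",
    "1) e4 e5 2) Nf3": "Nc6",
    "1) e4 e5 2) Nf3 Nc6 3) Bb5": "a6",
}

def book_move(line_so_far: str) -> Optional[str]:
    """Return the recorded reply to this line, if the book has one."""
    return OPENING_BOOK.get(line_so_far)

print(book_move("1) e4 e5 2) Nf3"))  # -> Nc6
print(book_move("1) d4"))            # -> None: the machine is off its map
```

Nothing in that lookup connects “Nc6” with a knight, a square, or a plan. A string goes in, and a string comes out.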

“Ah, yes,” say the gallant defenders of Strong AI (in Searle’s sense of the term). “But how can you prove that human beings are actually doing anything different? Even human beings may operate according to purely deterministic rules — in which case, we all ‘fall down wells’ when we think.”

The only weight this objection carries is the weight of confusion. When I say, “Humans do semantics; computers just operate deterministically on syntax,” the objection says, “Ah, but how do you know semantics isn’t just a series of deterministic operations on syntax?” Philosophers like Searle and Floridi have tackled this issue before. Here’s my stab at dealing with it concisely:

Even if I grant that when we say “semantics” we are actually referring to a deterministic and essentially digital process, the fact remains that the deterministic-and-essentially-digital-process that I am engaged in when doing “semantics” is of a fundamentally different nature than the deterministic-and-essentially-digital-process that Deep Blue is engaged in when its software obeys the lines of code that tell it to refer to its opening book and to play 2) Nf3 in response to 1) … e5. So can I prove that the universe doesn’t operate deterministically? No. Can I prove that our minds aren’t, by nature, digital? No. But I can reveal the aforementioned objection for the house of smoke and mirrors that it is. My point (and Searle’s point) is that human beings do one thing and computers do something very different. Making that claim requires neither a complete theory of the nature of semantics nor any empirical evidence that thought isn’t digitally based. The objection implies that such knowledge is needed when no such need exists. And the onus lies on the objector to show that human beings and computers engage in processes that share any kind of functional similarity.


The Problem of Relevance

April 26, 2008

Let me start by giving a quotation we’ve dealt with before:

“[For an AI project to be successful, it is necessary that] all relevant knowledge/understanding that is presupposed and required by the successful performance of the intelligent task, can be discovered, circumscribed, analyzed, formally structured and hence made fully manageable through computable processes” (Floridi, 146).

Floridi’s use of the word “relevant” is suspect here. He gives us no indication of whether he means “that which is relevant to us when performing a task X” or “that which is relevant to a computer when performing the stupefied version of task X.” Considering that Floridi advocates a non-mimetic approach to artificial intelligence, I think we should assume that he means the latter.

But this leaves the would-be creators of an artificially intelligent system in a pickle:

Q: How do we stupefy task X?

A: You need to make a new task or series of tasks which require less intelligence overall.

Q: Ah. How do we do that?

A: Luciano Floridi might suggest (given the above quotation) that you need to first discover, circumscribe, analyze, and formally structure all the knowledge relevant to the stupefied task.

Q: But how do I know what’s relevant before the stupefied task even exists?

In other words, I know what’s relevant to me when, say, playing chess. But I also know that the stupefied task of chess-playing bears little resemblance to the task I perform when playing chess. So I can conclude that the things relevant to the stupefied task of chess-playing might be things that aren’t at all relevant to me when playing chess.

Here’s a more formal statement of the problem of relevance:

How can we discover rules for governing a formal system that does X if we don’t yet know how the formal system will do X?

I don’t have an answer except to say that this problem reveals that the task of stupefication — in addition to being one that can require tremendous human intelligence — is also one that can require a great deal of human creativity. I think you have to say, “Hmmm. Maybe a chess-playing machine could benefit from a complicated formula for weighing the comparative values of a chess position’s material, structural, and temporal imbalances.” Human beings don’t use such a strict formula, so it’s simply an educated guess that a computer could effectively put one into practice. The reason the guess is educated, though, is that the following mental maneuver gets performed: “If a human being could evaluate a position based on a complicated formula, precisely and accurately calculated, she might play a better game of chess.” This hypothetical reasoning is where the creative act resides.
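
To see what such a guess might look like in practice, here is a minimal sketch, in Python, of the kind of formula I have in mind. The weights, the three feature measures, and the position encoding are all invented placeholders, not anything a real engine (let alone Deep Blue) actually uses; the sketch only shows the shape of “weighing imbalances.”

```python
# A hypothetical evaluation formula: a weighted sum of a position's
# material, structural, and temporal imbalances. Every feature and
# weight here is invented for illustration.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(pos):
    """Material imbalance: my piece values minus my opponent's."""
    return (sum(PIECE_VALUES[p] for p in pos["my_pieces"])
            - sum(PIECE_VALUES[p] for p in pos["their_pieces"]))

def structural_balance(pos):
    """Structural imbalance: here, just doubled pawns counted against each side."""
    return pos["their_doubled_pawns"] - pos["my_doubled_pawns"]

def temporal_balance(pos):
    """Temporal imbalance: here, a lead or lag in developed pieces."""
    return pos["my_developed"] - pos["their_developed"]

def evaluate(pos, w_material=1.0, w_structure=0.3, w_tempo=0.2):
    """Weigh the three imbalances into one score for the side to move."""
    return (w_material * material_balance(pos)
            + w_structure * structural_balance(pos)
            + w_tempo * temporal_balance(pos))

position = {
    "my_pieces": ["Q", "R", "R", "B", "N", "P", "P", "P", "P", "P"],
    "their_pieces": ["Q", "R", "R", "B", "B", "P", "P", "P", "P", "P"],
    "my_doubled_pawns": 1, "their_doubled_pawns": 0,
    "my_developed": 4, "their_developed": 2,
}
print(evaluate(position))  # 0.1: equal material, worse structure, better tempo
```

No human judges a position by computing such a sum; the guess is that a machine, which can apply a rule like this precisely and exhaustively, might play better for it.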

Of course, some such guesses are more educated than others. For example, it makes sense to assume that a computer could benefit (at least in the opening phase of a chess game) from the ability to reference the moves of a few thousand grandmaster games and to copy those moves in its own games. This assumption makes sense because a human too would benefit from such grandmaster assistance — which is why this activity is generally frowned upon by chess tournament administrators.
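
Cashing that assumption out in code is comparatively straightforward, which is part of why the guess is a good one. Here is a minimal sketch, again in Python and again with made-up game records: tally what the grandmasters played from each position, and copy the most popular choice.

```python
from collections import Counter, defaultdict

# A made-up stand-in for a few thousand grandmaster games, each a
# sequence of moves in algebraic notation.
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],
    ["e4", "e5", "Nf3", "Nc6", "Bc4"],
    ["e4", "c5", "Nf3", "d6"],
]

# Index the games: for each line of play, count what was played next.
book = defaultdict(Counter)
for game in games:
    for i, move in enumerate(game):
        book[tuple(game[:i])][move] += 1

def copy_the_grandmasters(line_so_far):
    """Play whatever move was most popular in the recorded games."""
    replies = book.get(tuple(line_so_far))
    return replies.most_common(1)[0][0] if replies else None

print(copy_the_grandmasters(["e4"]))        # -> e5
print(copy_the_grandmasters(["e4", "e5"]))  # -> Nf3
print(copy_the_grandmasters(["d4"]))        # -> None: no recorded game to copy
```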

Unfortunately, the problem of relevance (and the creativity required to surmount it) can suddenly become critical when the domain of the problem is such that we cannot reason by analogy, cannot consider that which is relevant to us and hypothesize that something similar might be relevant to a formal system. For example, when trying to stupefy linguistic tasks instead of chess-playing tasks, we are immediately confronted by the problem that we don’t even know what’s relevant to us when we speak, let alone what might be relevant to a formal system that functions nothing like us. Why do we say what we say? How do we know that what we say is correct? How do we understand each other? These are questions that philosophers have attacked in countless ways for over two thousand years.

And the upshot of the problem of relevance is that, even if some lucky philosopher happened to have gotten it right, happened to have discovered the essence of language and how we use it, that answer may or may not have anything to do with how we might go about creating a formal system that mimics human linguistic practices.

Citations:

Floridi, Luciano. Philosophy and Computing: An Introduction. London and New York: Routledge, 1999.