Note: This post is a further unpacking of a concept introduced in this post. That concept can be stated briefly as follows: Stupefication works, in part, via the distillation of context-dependent human knowledge into a database. But the computer makes use of such “knowledge” in ways that do not mirror human usages. And to draw a naive parallel between the way computers use databases and the way human beings use databases is to commit a fallacy which I see committed all too often by those who espouse the glories of Good Old Fashioned Artificial Intelligence.

Human beings use chess opening books like maps to a territory. We recognize an intrinsic connection between games of chess and strings of numbers and letters like

“1) e4 e5 2) Nf3 Nc6 3) Bb5”

A human being familiar with algebraic notation will see these symbols and connect them with their meaning, just as many of us, upon seeing the word “apple,” conjure up an image of a red fruit. Knowing the connection between symbol and meaning, between the syntax and its semantics, the human chess player can use the recorded chess game like a map, locating a previous game that reached a position identical to (or very similar to) her current game’s position. Then she can use that game to follow a previously cut path through the thicket of chess variations which lie ahead of her.

Computers don’t. By this, I mean that a computer “follows” the map in the sense that I might “follow” a straight line when falling down a well. The machine makes no semiotic connection between the syntax of algebraic notation and the semantics of chess activity. It merely obeys physical laws — albeit a very complex set of them. That’s the neat thing about algorithms. They allow computer scientists to subject computers to a mind-bogglingly complex set of physical laws (in this case, a series of high and low voltages, represented as ones and zeros) such that they can play chess, emulate bad conversations, and display web pages like this one. But despite the many layers of abstraction that lie between Deep Blue’s spectacular middle-game play and the succession of high and low voltages that make it all possible, the fact remains that the computer functions in a purely deterministic way. Incidentally, this is exactly how you or I would function when subjected to the law of gravity after being dropped down a well.
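To make the point concrete, here is a deliberately minimal sketch (in Python) of what “consulting an opening book” amounts to for a machine: a lookup in a table that maps one string of symbols onto another. The table and its format are invented for illustration — this is not Deep Blue’s actual book — and nothing in the lookup connects those symbols to boards, plans, or games.

```python
# A purely illustrative "opening book": a table mapping one string of symbols
# to another. The program that consults it never connects the symbols to a
# board or a plan; it only matches keys. (Hypothetical format, not Deep Blue's.)

OPENING_BOOK = {
    "1) e4": "e5",
    "1) e4 e5 2) Nf3": "Nc6",
    "1) e4 e5 2) Nf3 Nc6 3) Bb5": "a6",
}

def book_reply(moves_so_far):
    """Return the stored reply for a move sequence, or None if out of book."""
    return OPENING_BOOK.get(moves_so_far)

print(book_reply("1) e4 e5 2) Nf3"))  # -> Nc6
print(book_reply("1) d4"))            # -> None (out of book)
```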

“Ah, yes,” say the gallant defenders of Strong AI (in Searle’s sense of the term). “But how can you prove that human beings are actually doing anything different? Even human beings may operate according to purely deterministic rules — in which case, we all ‘fall down wells’ when we think.”

The only weight this objection carries is the weight of confusion. When I say, “Humans do semantics; computers just operate deterministically on syntax,” the objection says, “Ah, but how do you know semantics isn’t just a series of deterministic operations on syntax?” Philosophers like Searle and Floridi have tackled this issue before. Here’s my stab at dealing with it concisely:

Even if I grant that when we say “semantics” we are actually referring to a deterministic and essentially digital process, the fact remains that the deterministic-and-essentially-digital-process that I am engaged in when doing “semantics” is of a fundamentally different nature from the deterministic-and-essentially-digital-process that Deep Blue is engaged in when its software obeys the lines of code that tell it to refer to its opening book and to play 2) Nf3 in response to 1) … e5. So can I prove that the universe doesn’t operate deterministically? No. Can I prove that our minds aren’t, by nature, digital? No. But I can reveal the aforementioned objection for the house of smoke and mirrors that it is. My point (and Searle’s point) is that human beings do one thing and computers do something very different. And making such a proposition doesn’t require a complete knowledge of the nature of semantics or any amount of empirical evidence that thought isn’t digitally based. The objection mentioned above implies the need for such a presupposition even though no such need exists. And the onus lies on the objector to prove that human beings and computers engage in processes that share any kind of functional similarity.


Dreyfus attacks several of the foundational presuppositions of AI in his book What Computers Can’t Do.

1) The Biological Assumption — That we act, on the biological level, according to formal rules, i.e., that our brain is a digital computer and our minds are analogous to software.

2) The Psychological Assumption — That, regardless of whether our brains are digital computers, our minds function by performing calculations, i.e., by running algorithms.

3) The Epistemological Assumption — That regardless of whether our brains are digital computers or whether our minds run algorithms, the things our minds do can be described according to formal rules (and hence, by algorithms). This is, naturally, a weaker assumption, yet one required by the idea of stupefication.

4) The Ontological Assumption — That “the world can be exhaustively analysed in terms of context free data or atomic facts” (205).

The epistemological assumption is the one that we ought to be concerned with at the moment, as evidenced by this quotation on the matter:

“[The question] is not, as Turing seemed to think, merely a question of whether there are rules governing what we should do, which can be legitimately ignored. It is a question of whether there can be rules even describing what speakers in fact do” (203).

In light of the previous post on descriptive rules, we can posit that stupefication requires a kind of epistemological assumption: that mental tasks like playing chess and (perhaps) communicating in a natural language can be described by formal rules, even if those formal rules have nothing to do with what we happen to be doing while performing the task.

In his book, Dreyfus undermines the epistemological assumption (along with the three other assumptions) by showing that they cannot hold in all cases and with regard to all human activities. However, I don’t think this is necessarily very crippling. Even if there can be no comprehensive set of rules or formal systems that fully describes all intelligent human behaviour, AI is hardly done for. The questions merely change from “Can we make a formal system that fully describes task X?” to “How close can we get to describing task X in a formal system?” And this may well put us back in Turing’s court, where the benchmark is how many people the formal system can fool.

In other words, I’m questioning the validity of this proposition:

“[The] assumption that the world can be exhaustively analyzed in terms of context free data or atomic facts is the deepest assumption underlying work in AI and the whole philosophical tradition” (Dreyfus, 205).

Dreyfus is probably right in terms of the majority of the research that has been done in AI over the past half-century. But this ontological assumption need not be the “deepest assumption” underlying projects that seek to stupefy. For in stupefication, one takes up the mantle of the epistemological assumption while relegating the ontological assumption to a hypothesis that must be empirically verified, not necessarily assumed — and certainly not assumed “exhaustively,” as Dreyfus suggests.

Citations:

Dreyfus, Hubert L. What Computers Can’t Do. New York: Harper and Row, 1979.

“Consider the planets. They are not solving differential equations as they swing around the sun. They are not following any rules at all; but their behavior is nonetheless lawful, and to understand their behavior we find a formalism — in this case — differential equations — which expresses their behavior according to a rule” (Dreyfus, 189).

In other words, rules are descriptive, not prescriptive. Given the proper descriptive rules, computer scientists and mathematicians can model the movements of the planets, even though the planets never do any mathematical calculations. In a similar way, given the proper descriptive rules, computer scientists might be able to model the movements of a human chess player and (perhaps) the “movements” of a human interlocutor. But the planets, the chess player, and the interlocutor need not have anything whatsoever to do with the formal systems that describe them. This is the fallacy that stupefication helps us skirt and which traditional GOFAI often fails to skirt.
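To make the planetary example concrete, here is a toy sketch of a descriptive rule doing its work: a few lines of Python that step Newton’s inverse-square law forward in time to trace an orbit. The constants and step size are simplified for readability, not accuracy; the point is only that the formalism describes the planet’s motion while the planet does no arithmetic at all.

```python
# Newton's law used as a descriptive rule: we integrate the two-body equations
# of motion to trace an orbit. The planet performs none of this arithmetic;
# the formalism merely describes what it does. (Toy constants and step size.)
import math

GM = 4.0 * math.pi ** 2   # sun's gravitational parameter in AU^3 / yr^2

def step(x, y, vx, vy, dt):
    """One explicit-Euler step of the equations of motion."""
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

# Start roughly like the Earth: 1 AU from the sun, orbital speed ~2*pi AU/yr.
x, y, vx, vy = 1.0, 0.0, 0.0, 2.0 * math.pi
for _ in range(1000):
    x, y, vx, vy = step(x, y, vx, vy, dt=0.001)

print(round(math.hypot(x, y), 2))   # orbital radius stays close to 1 AU after a simulated year
```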

Thus, the kind of language games mentioned at the end of my previous post, and which we’ll talk about later, need not be games that human beings play and need not be governed by rules that govern human linguistic practices.

Citations:

Dreyfus, Hubert L. What Computers Can’t Do. New York: Harper and Row, 1979.

Hubert Dreyfus wrote his book What Computers Can’t Do long before Luciano Floridi came onto the scene. Yet the following point seems specifically constructed to shed light on the problem of relevance (mentioned in this post):

“As long as the domain in question can be treated as a game, i.e., as long as what is relevant is fixed, and the possibly relevant factors can be defined in terms of context-free primitives, then computers can do well in the domain” (Dreyfus, 27).

Dreyfus doesn’t expound upon exactly what kinds of games he has in mind; but I think it’s safe to say that he isn’t talking about all games. After all, there are certainly games like soccer (which is analogue) and Nomic (which is unstable) that would foil a computer readily.

But there are certain games with qualities that make them ideal domains for attack by projects in artificial intelligence. Chess is one of these games. Let us try to itemize the qualities that make such games so conducive to formalization:

1) Such games consist of states.

2) Such games have rules that govern changes in state.

3) Such games are stable, i.e., the rules either stay constant or change only in correspondence with other rules that do stay constant.

4) Such games are transparent, i.e., the rules can be known because they are simple enough to understand.

5) Such games have a bounded set of rules, i.e., the rules can be itemized because they are finite in number.

6) Such games have a bounded set of states, i.e., the number of possible game states is finite, even if astronomical.

7) Such games have winning conditions that can be assessed from within the system itself, i.e., there are rules that can designate some states as won and others as lost. (Note: we can weaken this condition to include games that cannot be won or lost; but there must still exist rules that designate some states as better than others or worse than others, in order for such games to be conducive to productive computational analysis.)

To wrap all of this into a tidy package: such games (considered to be a collection of states, transition rules, and evaluation rules) must be representable as a finite state machine. If so, then they can be represented syntactically. And algorithms can be written for their governance.
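Here is a minimal sketch of what that packaging looks like in practice, using a toy subtraction game rather than chess (the game and the code are chosen purely for illustration, not taken from any actual chess program): states, a transition rule, and an evaluation rule, from which an algorithm can determine the value of any position. Chess differs from this in scale, not in kind of encoding.

```python
# A toy game cast as states, transition rules, and evaluation rules:
# players alternately take 1 or 2 objects from a pile; whoever takes the
# last object wins.

def moves(state):
    """Transition rule: from a pile of n, the player to move removes 1 or 2."""
    pile, player = state
    return [(pile - k, 1 - player) for k in (1, 2) if pile - k >= 0]

def value(state):
    """Evaluation rule, propagated through the state graph:
    +1 means player 0 wins with best play, -1 means player 1 does."""
    pile, player = state
    if pile == 0:                        # the player to move faces an empty pile...
        return -1 if player == 0 else 1  # ...so the other player took the last object and won
    return (max if player == 0 else min)(value(s) for s in moves(state))

print(value((7, 0)))   # -> 1: from a pile of 7, the first player can force a win
```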

Bear in mind, however, that this is a necessary condition, not a sufficient one. The above criteria merely distinguish games that can be formalized from ones that can’t. But within the set of games that can be formalized, there can (and most likely do) exist games with such complex states or such complex transition rules that they are computationally intractable. So we must add another necessary condition:

8) Such games must be tractable, i.e., not only must they have a finite number of states, transition rules, and evaluation rules; those states and rules must also be few enough and simple enough that the state-to-state transitions required for playing the game can be computed effectively.

But even this addition doesn’t guarantee that the game will be a domain in which artificial intelligence projects can thrive. Formalization and tractability don’t imply that an artificially intelligent system (or its creators) will be capable of applying the heuristics and/or strategic rules necessary for a high level of play.

Nonetheless, considering Deep Blue’s success in the face of so much skepticism, a little optimism might be in order if the above conditions happen to be met.

In closing, my food-for-thought question of the day is, “Can linguistic domains be transformed into games that meet the above criteria?” I think we’ll visit Wittgenstein soon. He has quite a bit to say about language games.

Citations:

Dreyfus, Hubert L. What Computers Can’t Do. New York: Harper and Row, 1979.

The Problem of Relevance

April 26, 2008

Let me start by giving a quotation we’ve dealt with before:

“[For an AI project to be successful, it is necessary that] all relevant knowledge/understanding that is presupposed and required by the successful performance of the intelligent task, can be discovered, circumscribed, analyzed, formally structured and hence made fully manageable through computable processes” (Floridi, 146).

Floridi’s use of the word “relevant” is suspect here. He gives us no indication of whether he means “that which is relevant to us when performing a task X” or “that which is relevant to a computer when performing the stupefied version of task X.” Considering that Floridi advocates a non-mimetic approach to artificial intelligence, I think we should assume that he means the latter.

But this leaves the would-be creators of an artificially intelligent system in a pickle:

Q: How do we stupefy task X?

A: You need to make a new task or series of tasks which require less intelligence overall.

Q: Ah. How do we do that?

A: Luciano Floridi might suggest (given the above quotation) that you need to first discover, circumscribe, analyze, and formally structure all the knowledge relevant to the stupefied task.

Q: But how do I know what’s relevant before the stupefied task even exists?

In other words, I know what’s relevant to me when, say, playing chess. But I also know that the stupefied task of chess-playing bears little resemblance to the task I perform when playing chess. So I can conclude that the things relevant to the stupefied task of chess-playing might be things that aren’t at all relevant to me when playing chess.

Here’s a more formal statement of the problem of relevance:

How can we discover rules for governing a formal system that does X if we don’t yet know how the formal system will do X?

I don’t have an answer except to say that this problem reveals that the task of stupefication — in addition to being one that can require tremendous human intelligence — is also one that can require a great deal of human creativity. I think you have to say, “Hmmm. Maybe a chess-playing machine could benefit from a complicated formula for weighing the comparative values of a chess position’s material, structural, and temporal imbalances.” Human beings don’t use such a strict formula, so it’s simply an educated guess that a computer could effectively put one into practice. The reason the guess is educated, though, is that the following mental maneuver gets performed: “If a human being could evaluate a position based on a complicated formula, precisely and accurately calculated, she might play a better game of chess.” This hypothetical reasoning is where the creative act resides.
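To give the flavour of such an educated guess, here is a purely hypothetical sketch of that kind of “complicated formula”: a static evaluation that adds up weighted material, structural, and temporal imbalances. The features, weights, and position format are invented for illustration; they are not Deep Blue’s, and no human plays by summing numbers like this.

```python
# A hypothetical static evaluation weighing material, structural, and temporal
# imbalances. All weights and feature extractors are invented for illustration.

PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.25, "R": 5.0, "Q": 9.0}
WEIGHTS = {"material": 1.0, "structure": 0.3, "tempo": 0.2}   # invented weights

def evaluate(position):
    """Score a position from White's point of view (positive favours White)."""
    material = (sum(PIECE_VALUES[p] for p in position["white_pieces"])
                - sum(PIECE_VALUES[p] for p in position["black_pieces"]))
    structure = position["pawn_structure_score"]   # e.g. doubled/isolated pawns
    tempo = position["development_lead"]           # e.g. tempi ahead in development
    return (WEIGHTS["material"] * material
            + WEIGHTS["structure"] * structure
            + WEIGHTS["tempo"] * tempo)

example = {
    "white_pieces": ["Q", "R", "R", "B", "N", "P", "P", "P", "P", "P"],
    "black_pieces": ["Q", "R", "R", "B", "B", "P", "P", "P", "P"],
    "pawn_structure_score": -0.5,   # White's pawn structure is slightly weaker
    "development_lead": 2,          # White is two tempi ahead
}
print(round(evaluate(example), 2))  # -> 1.0, a modest edge for White
```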

Of course, some such guesses are more educated than others. For example, it makes sense to assume that a computer could benefit (at least in the opening phase of a chess game) from the ability to reference the moves of a few thousand grandmaster games and to copy those moves in its own games. This assumption makes sense because a human too would benefit from such grandmaster assistance — which is why this activity is generally frowned upon by chess tournament administrators.

Unfortunately, the problem of relevance (and the creativity required to surmount it) can suddenly become critical when the domain of the problem is such that we cannot reason by analogy, cannot consider what is relevant to us and hypothesize that something similar might be relevant to a formal system. For example, when trying to stupefy linguistic tasks instead of chess-playing tasks, we are immediately confronted by the problem that we don’t even know what’s relevant to us when we speak, let alone what might be relevant to a formal system which functions nothing like us. Why do we say what we say? How do we know what we say is correct? How do we understand each other? These are questions that philosophers have attacked in countless ways for over two thousand years.

And the upshot of the problem of relevance is that, even if some lucky philosopher happened to have gotten it right, happened to have discovered the essence of language and how we use it, that answer may or may not have anything to do with how we might go about creating a formal system that mimics human linguistic practices.

Citations:

Floridi, Luciano. Philosophy and Computing: An Introduction. London and New York: Routledge, 1999.

When we stupefy a task, we construct another task or series of tasks, the sum of which requires less intelligence overall.

The trivial example:

If my task is to produce a novel, you could stupefy the task for me by writing the novel and letting me copy it. The act of copying a novel requires less intelligence than creating a novel, yet it appears — if examined only in terms of its final product — identical to the act of creation. By letting me copy your novel, you have stupefied a potentially intelligent act by extracting the intellectual requirement from it, taking the burden of intelligence upon your own shoulders, doing the intelligence-requiring work, and letting me produce the output.

Some people may rightly object to this example by pointing out that, overall, the amount of intelligence utilized has not decreased at all (and may in fact have increased). For whether or not I wrote the novel, a novel was written; and writing a novel requires an expenditure of intelligence no matter who writes it. But let us remember that stupefication is often a task that requires intelligence (see this post). Thus, you may have stupefied my task of creating a novel at the expense of great expenditures of your own intelligence. Likewise, the makers of Deep Blue created a computer that could trounce any of its creators in a chess game — but making the machine was a long and difficult road.

But no matter how you slice it, my aptly named “trivial example” is so absurdly simple as to be useless in the context of manufacturing artificial intelligence. After all, virtually any task (with a few notable exceptions) can be stupefied in this way. But doing so will never constitute the creation of an artificially intelligent system.

Deep Blue would have been a silly machine indeed if its “internal” calculations had been performed in real time by human grandmasters hiding inside its belly (which, by the way, would have been merely a modern resurrection of the famous Mechanical Turk hoax from the 18th century). It is unlikely that even the most naive of individuals would have considered The Turk an example of artificial intelligence. For The Turk is merely an example of old-fashioned human intelligence.

That being said, this idea of hidden human intelligence connects in an interesting way to the quotation in this post, which suggests that when Kasparov played against Deep Blue, he was actually playing against “the ghosts of grandmasters past.” If so, then although there were no literal grandmasters hiding in the belly of the machine, years of powerful over-the-board intellect managed, in some form or another, to find its way into Deep Blue’s physical and/or virtual innards.

I offer the trivial example in order to initiate a more complex discussion later and because it demonstrates clearly that stupefication of a task is not a sufficient condition for the creation of artificial intelligence.

Artificial intelligence is not always aptly defined, and I’m not going to enter the ongoing debate about what constitutes artificial intelligence. I will merely suggest that the trivial example above is not it.

For one thing, the machinery of The Turk was no different from a rock, in that it obeyed purely physical laws and moved only when the grandmaster hiding inside manipulated it with his hands and feet. The Turk made no decisions. I will tentatively suggest here that in order to create an artificially intelligent system, the process of stupefication must result in 1) a new task, which 2) can be performed by a different entity and which 3) involves some level of decision making.

I mention this last qualification because I could easily make a machine that “writes novels” by programming it to print out novels that I had written. This would be The Turk all over again. And few people would be impressed.

I will address the idea of decision making later. To that end, consider the following tasks.

A Less Trivial Task: Creating a novel that has never been written before.

An Even Less Trivial Task: Stupefying the previous task.

“Stupefication” is a word we will henceforth apply to tasks, but let us note that the process of stupefication is itself a task — and it tends to require tremendous amounts of human intelligence, years of research, and a storehouse of distilled knowledge. In short, making a particular task stupid can be a highly intelligent act.

In the case of Deep Blue, the stupefication of chess playing required the construction of an astronomically complex formal system capable of drawing upon and organizing an enormous amount of existing human knowledge, which — furthermore — had to be previously codified into a rigid, database-friendly form that might make us reluctant even to call it “knowledge.”

“Deep Blue applies brute force aplenty, but the ‘intelligence’ is the old-fashioned kind. Think about the 100 years of grandmaster games.  Kasparov isn’t playing a computer, he’s playing the ghosts of grandmasters past. That Deep Blue can organize such a storehouse of knowledge — and apply it on the fly to the ever-changing complexities on the chessboard — is what makes this particular heap of silicon an arrow pointing to the future”

–The makers of Deep Blue (http://www.research.ibm.com/deepblue/meet/html/d.3.3a.html)

The reason I’m currently researching databases is that I have a strong suspicion that there is a connection between the stupefication of a particular task and the codification of the knowledge that a human being would use to perform that task. More to come; I’ll be more specific later.
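In the meantime, here is a deliberately small, hypothetical sketch of the kind of codification I have in mind: distilling recorded games into a database-friendly table that a program can consult. The game records, the table format, and the function names are all invented for this example; real opening books are vastly larger and more sophisticated.

```python
# A hypothetical illustration of "codification": grinding recorded games down
# into a table a program can consult. The games and format are invented here.

from collections import Counter, defaultdict

# Each "game" is just an ordered list of moves in algebraic notation.
GAMES = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6"],
    ["e4", "e5", "Nf3", "Nc6", "Bc4", "Bc5"],
    ["e4", "c5", "Nf3", "d6"],
]

def build_book(games):
    """Map each move-sequence prefix to a count of the replies that followed it."""
    book = defaultdict(Counter)
    for game in games:
        for i in range(len(game)):
            book[tuple(game[:i])][game[i]] += 1
    return book

BOOK = build_book(GAMES)

def most_common_reply(prefix):
    """Return the most frequently played continuation after `prefix`, if any."""
    replies = BOOK.get(tuple(prefix))
    return replies.most_common(1)[0][0] if replies else None

print(most_common_reply(["e4"]))                # -> e5 (seen twice, vs. c5 once)
print(most_common_reply(["e4", "e5", "Nf3"]))   # -> Nc6
```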