In chess, controlling the center isn’t always a good idea. For example, when you have the opportunity to effectively sacrifice your bishop on h6, or when you’re at risk of getting checkmated, controlling the center ought to be far from your mind. It would seem that, in addition to the heuristic “Control the center,” a computer also needs heuristical methods for choosing which heuristics to apply in a given situation: i.e., “When the sacrifice on h6 doesn’t lead to a win, try to control the center” or “When you’re about to get checkmated, forget about controlling the center.” Notice how each of these second-order heuristics serves to locate the first-order heuristics in a particular context. By contextualizing essentially context-independent chess wisdom like “Control the center,” second-order heuristics help raise chess-playing machines to a much higher level of play than any collection of first-order heuristics would be capable of. The problem, however, should be evident: which second-order heuristic should be applied at any given time? For example, what if I’m about to get checkmated but I also have a potentially fruitful bishop sacrifice on h6? The answer is, it depends on the context. Does the bishop sacrifice put the other king in check? Just how close are you to getting checkmated anyway? So a third-order heuristic is necessary. Etc.
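A toy sketch of what such a layered arrangement might look like in code. Everything here is a hypothetical illustration (the function names, the position flags, and the rules themselves are mine, not Deep Blue’s), but it shows how a second-order heuristic simply selects among first-order heuristics based on context:

```python
# Toy sketch: a second-order heuristic choosing among first-order heuristics.
# All names and rules here are illustrative assumptions, not a real engine's design.

def control_center(position):
    return "advance a central pawn"

def sacrifice_on_h6(position):
    return "play Bxh6"

def defend_king(position):
    return "block the mating threat"

# Second-order heuristic: decide which first-order heuristic applies now.
def choose_heuristic(position):
    if position.get("mate_threat"):
        return defend_king        # "forget about controlling the center"
    if position.get("h6_sac_wins"):
        return sacrifice_on_h6    # "when the sacrifice leads to a win..."
    return control_center         # default, context-independent wisdom

position = {"mate_threat": True, "h6_sac_wins": False}
print(choose_heuristic(position)(position))  # -> block the mating threat
```

A third-order heuristic would just be another selector wrapped around `choose_heuristic`; the point of the passage above is that this wrapping could, in principle, recurse indefinitely.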

It may sound like I’m setting up an ad infinitum proof that computers cannot play chess. But that, of course, would be silly, thanks to Deep Blue, who proved such arguments to be deeply suspect. Instead, I’m trying to show that the act of stupefication involves, at least in part, the navigation and formalization of a potentially infinite recursive stack of heuristics. That the layers of heuristical guidance are not piled infinitely deep (and, indeed, that they obviously need not be) is proof that there exists a discoverable stopping place amidst the recursive descent. After all, we know that a computer can, with a finite stack of recursive rules, accomplish a high level of chess play.

This should indicate to us that arguments which seek to predict “what computers can’t do” and which cite as evidence the infinitely recursive nature of a particular problem domain may need to be reevaluated — given that a finite number of recursive heuristics may be sufficient to emulate human-caliber performance. And any suppositions about a potentially infinite stack of heuristics may be wholly irrelevant.

I mention this because linguistic systems are known to possess potentially unbridled (and largely uncharted) recursive structures. And incidentally, linguistic systems happen to be an important frontier in AI research. So my question (to paraphrase Turing) is, can a machine be programmed with enough heuristics (of various orders) to function at an apparently high level of linguistic competence? If the answer is yes (which is a huge “if”), then perhaps one way to effect such a breakthrough would be to begin by programming systems to play very simple language games that have clearly defined winning conditions and clearly defined transition rules (as mentioned in this post).

More to come regarding language games.

Note: This post is a further unpacking of a concept introduced in this post. That concept can be stated briefly as follows: Stupefication works, in part, via the distillation of context-dependent human knowledge into a database. But the computer makes use of such “knowledge” in ways that do not mirror human usages. And to draw a naive parallel between the way computers use databases and the way human beings use databases is to commit a fallacy which I see committed all too often by those who espouse the glories of Good Old Fashioned Artificial Intelligence.

Human beings use chess opening books like maps to a territory. We recognize an intrinsic connection between games of chess and strings of numbers and letters like

“1) e4 e5 2) Nf3 Nc6 3) Bb5”

A human being familiar with algebraic notation will see these symbols and connect them with their meaning, just as many of us, upon seeing the word “apple,” conjure up an image of a red fruit. Knowing the connection between symbol and meaning, between the syntax and its semantics, the human chess player can use the recorded chess game like a map, locating a previous game that reached a position identical to (or very similar to) her current game’s position. Then she can use that game to follow a previously cut path through the thicket of chess variations which lie ahead of her.

Computers don’t. By this, I mean that a computer “follows” the map in the sense that I might “follow” a straight line when falling down a well. The machine makes no semiotic connection between the syntax of algebraic notation and the semantics of chess activity. It merely obeys physical laws — albeit a very complex set of them. That’s the neat thing about algorithms. They allow computer scientists to subject computers to a mind-bogglingly complex set of physical laws (in this case, a series of high and low voltages, represented as ones and zeros) such that they can play chess, emulate bad conversations, and display web pages like this one. But despite the many layers of abstraction that lie between Deep Blue’s spectacular middle-game play and the succession of high and low voltages that make it all possible, the fact remains that the computer functions in a purely deterministic way. Incidentally, this is exactly how you or I would function when subjected to the law of gravity after being dropped down a well.
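A minimal sketch of what “following the map” amounts to for the machine: a deterministic lookup keyed on strings of notation, with no semiotic connection to the game those strings record. (The book’s contents below are a standard Ruy Lopez line; the dictionary representation is my own simplifying assumption, far cruder than a real engine’s opening book.)

```python
# An opening "book" as the machine sees it: strings mapped to strings.
# The program manipulates syntax only; the chess meaning lives in us.
opening_book = {
    "1) e4": "e5",
    "1) e4 e5 2) Nf3": "Nc6",
    "1) e4 e5 2) Nf3 Nc6 3) Bb5": "a6",
}

def book_move(moves_so_far):
    # A purely deterministic lookup -- "falling down a well."
    return opening_book.get(moves_so_far)

print(book_move("1) e4 e5 2) Nf3"))  # -> Nc6
```

Nothing in `book_move` knows that “Nc6” develops a knight, or that knights exist; the symbol goes in, a symbol comes out.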

“Ah, yes,” say the gallant defenders of Strong AI (in Searle’s sense of the term). “But how can you prove that human beings are actually doing anything different? Even human beings may operate according to purely deterministic rules — in which case, we all ‘fall down wells’ when we think.”

The only weight this objection carries is the weight of confusion. When I say, “Humans do semantics; computers just operate deterministically on syntax,” the objection says, “Ah, but how do you know semantics isn’t just a series of deterministic operations on syntax?” Philosophers like Searle and Floridi have tackled this issue before. Here’s my stab at dealing with it concisely:

Even if I grant that when we say “semantics” we are actually referring to a deterministic and essentially digital process, the fact remains that the deterministic-and-essentially-digital-process that I am engaged in when doing “semantics” is of a fundamentally different nature than the deterministic-and-essentially-digital-process that Deep Blue is engaged in when its software obeys the lines of code that tell it to refer to its opening book and to play 2) Nf3 in response to 1) … e5. So can I prove that the universe doesn’t operate deterministically? No. Can I prove that our minds aren’t, by nature, digital? No. But I can reveal the aforementioned objection for the house of smoke and mirrors that it is. My point (and Searle’s point) is that human beings do one thing and computers do something very different. And making such a proposition doesn’t require a complete knowledge of the nature of semantics or any amount of empirical evidence that thought isn’t digitally based. The objection mentioned above implies the need for such a presupposition even though no such need exists. And the onus lies on the objector to prove that human beings and computers engage in processes that share any kind of functional similarity.

“Stupefication” is a word we will henceforth apply to tasks, but let us note that the process of stupefication is itself a task — and it tends to require tremendous amounts of human intelligence, years of research, and a storehouse of distilled knowledge. In short, making a particular task stupid can be a highly intelligent act.

In the case of Deep Blue, the stupefication of chess playing required the construction of an astronomically complex formal system capable of drawing upon and organizing an enormous amount of existing human knowledge, which — furthermore — had to be previously codified into a rigid, database-friendly form that might make us reluctant even to call it “knowledge.”

“Deep Blue applies brute force aplenty, but the ‘intelligence’ is the old-fashioned kind. Think about the 100 years of grandmaster games. Kasparov isn’t playing a computer, he’s playing the ghosts of grandmasters past. That Deep Blue can organize such a storehouse of knowledge — and apply it on the fly to the ever-changing complexities on the chessboard — is what makes this particular heap of silicon an arrow pointing to the future.”

–The makers of Deep Blue

The reason I’m currently researching databases is that I have a strong suspicion that there is a connection between the stupefication of a particular task and the codification of the knowledge that a human being would use to perform that task. More to come; I’ll be more specific later.


April 19, 2008

I’ve pasted below yet another wonderful insight by Floridi. The quotation should be understood in the context of a discussion regarding the historical emergence of knowledge as a cumulative, social, and intersubjective substance — capable of continual growth due to its ability to be recorded, passed on, and synthesized. The following is commentary about the state of that ever-growing body of knowledge prior to the advent of the database system (and, of course, prior to Wikipedia).

“New knowledge could obviously be found; centuries of successful accumulation prove it unequivocally. Yet the new world represented by the human encyclopaedia had become as uncontrollable and impenetrable as the natural one, and a more sophisticated version of Meno’s paradox could now be formulated. How can a single scholar or scientist find the relevant information he requires for his own work? Moreover, what about ordinary people in their everyday lives? Meno could indeed ask: “how will you find, Socrates, what you know is already in the ever growing human encyclopaedia? Where can you find the path through the region of the known? And if you find what you are searching for, what will save you from the petrifying experience of das historische Wissen?” (95)

(Das historische Wissen is a phrase coined by Nietzsche in response to Goethe’s statement that, “If I had actually been aware of all the [literary] efforts made in the past centuries, I would never have written a line, but would have done something else” [93].)

The following quotations suggest to me that Floridi believes the prevalence of database systems to have negated the pre-database state in which “the new world represented by the human encyclopaedia had become as uncontrollable and impenetrable as the natural one.” He suggests (correctly, to be sure) that “we all let our computers search, at fantastic speeds, for the required needles in those huge, well-ordered, electronic haystacks that are our databases” (97). And he says, “the growth of knowledge has followed the path of fragmentation of the infosphere and has been held in check by the new version of Meno’s paradox until the second half of the twentieth century, when information technology has finally provided a new physical form for our intellectual environment and hence the medium and the tools to manage it in a thoroughly new way, more economically and efficiently and in a less piecemeal way” (97; my emphasis).

I, however, am reluctant to say, simply because Google and Wikipedia now facilitate the quick retrieval of information, that today’s hyper-complex and constantly shifting storehouses of knowledge are any less uncontrollable and impenetrable than they have been historically. Complex information management systems have certainly changed the way information is managed; but that doesn’t necessarily mean that our information storehouses have become more manageable. In other words, the fact that we have revved up the speed at which we can access information (and the speed at which we can change it, tag it, categorize it, relate it, synthesize it, use it, cite it, and undermine it) may have actually increased the complexity of our knowledge aggregations. Let me try my hand at a new, post-second-half-of-the-twentieth-century version of Meno’s paradox:

He might ask, “How can you find, Socrates, the knowledge you seek, given that if you find it, you cannot guarantee that it will remain in the same form, and in the same context, and related to the same metadata in which you found it? How can you choose a path through the region of the known, given that there are now billions of paths to choose from?” Indeed, Floridi is probably correct that, at one time, there was simply so much knowledge (codified in so many books) that the “path through the region of the known” was hard to find; but in postmodern times, we have a related problem: lightning fast search engines and effective database management systems have given us a surplus of paths through the known. Instead of drowning in books, we drown in hyperlinks. Instead of losing ourselves in the land of primary data, we find ourselves lost in the land of metadata. In postmodernity, the internet provides a thoroughly linked and navigable map of things that would, historically, have been housed in countless physical libraries. But the map happens to be just as complicated as the territory it is supposed to help us navigate, and any sufficiently complicated map begins to alter the nature of the territory itself. Borges gives a wonderful example of this in the following short story:

On Exactitude in Science

. . . In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

In a similar way, the landscape of the internet now lies atop a vast sea of knowledge, spanning it so completely that we can scarcely interact with discrete, linear information anymore. Information always comes with its complex arrays of relations, fibers connecting it to countless other pieces of information. Information now has place. Where did you find it on the internet? What other quasi-discrete information surrounds it? In other words, the internet (viewed as a map) doesn’t just indicate place, it assigns place and even creates place-ness for our accumulated stores of human knowledge.

Such is the world of the aptly named “relational database.”
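To make the metaphor concrete: in a relational database, a row never really stands alone; its “place” is constituted by the rows it relates to. A minimal sketch using Python’s built-in `sqlite3` module (the table names, columns, and sample facts are my own illustrative assumptions):

```python
import sqlite3

# A tiny relational sketch: each piece of information acquires "place"
# through its relations to other rows. Schema and data are illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE facts (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE links (src INTEGER, dst INTEGER, relation TEXT);
""")
db.execute("INSERT INTO facts VALUES (1, 'Deep Blue beat Kasparov in 1997')")
db.execute("INSERT INTO facts VALUES (2, 'Kasparov was world champion')")
db.execute("INSERT INTO links VALUES (1, 2, 'mentions')")

# Retrieving a fact's neighborhood: information arrives with its relations.
row = db.execute("""
    SELECT f2.text FROM links
    JOIN facts f2 ON f2.id = links.dst
    WHERE links.src = 1
""").fetchone()
print(row[0])  # -> Kasparov was world champion
```

The `JOIN` is the interesting part: the query doesn’t fetch a bare datum, it traverses a relation, which is exactly the sense in which the map assigns place.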


Floridi, Luciano. Philosophy and Computing: An Introduction. London and New York: Routledge, 1999.