Stupefication and the Epistemological Assumption

April 27, 2008

In his book What Computers Can’t Do, Dreyfus attacks four foundational presuppositions of AI:

1) The Biological Assumption — That we act, on the biological level, according to formal rules, i.e., that our brain is a digital computer and our minds are analogous to software.

2) The Psychological Assumption — That, regardless of whether our brains are digital computers, our minds function by performing calculations, i.e., by running algorithms.

3) The Epistemological Assumption — That regardless of whether our brains are digital computers or whether our minds run algorithms, the things our minds do can be described according to formal rules (and hence, by algorithms). This is, naturally, a weaker assumption, yet one required by the idea of stupefication.

4) The Ontological Assumption — That “the world can be exhaustively analyzed in terms of context free data or atomic facts” (205).

The epistemological assumption is the one we ought to be concerned with at the moment, as this quotation makes clear:

“[The question] is not, as Turing seemed to think, merely a question of whether there are rules governing what we should do, which can be legitimately ignored. It is a question of whether there can be rules even describing what speakers in fact do” (203).

In light of the previous post on descriptive rules, we can posit that stupefication requires a kind of epistemological assumption: that mental tasks like playing chess and (perhaps) communicating in a natural language can be described by formal rules, even if those rules have nothing to do with what we actually do while performing the task.
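To illustrate what such task-describing rules can look like, here is a sketch of my own (not an example from Dreyfus or the earlier post): a complete formal description of perfect play at tic-tac-toe. Tic-tac-toe stands in for chess only because its full game tree fits in a few lines; the same minimax idea scales, with pruning, to chess. The point is the direction of fit — the rules exhaustively describe the task without purporting to describe the player, since nobody claims that human players mentally unroll this search tree.

```python
# A minimal sketch (my illustration): tic-tac-toe played perfectly by
# minimax. The *task* is fully described by formal rules -- legal moves
# plus a recursive evaluation -- with no reference to human cognition.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    won = winner(board)
    if won is not None:
        # The previous mover won, so the player now to move has lost.
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: a draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = None
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# X to move on an empty board; minimax describes optimal play exactly.
empty = [None] * 9
print(minimax(empty, 'X'))  # (0, 0): perfect play is a draw; square 0 is optimal
```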

Dreyfus undermines the epistemological assumption (along with the other three) by showing that they cannot hold in all cases and for all human activities. However, I don’t think this is necessarily very crippling. Even if no comprehensive set of rules or formal systems can fully describe all intelligent human behaviour, AI is hardly done for. The questions merely change from ones like “Can we make a formal system that fully describes task X?” to “How close can we get to describing task X in a formal system?” And this may well put us back in Turing’s court, where the benchmark is how many people the formal system can fool.
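To make that reframed question concrete, here is a hypothetical sketch (my construction; neither Dreyfus nor the earlier post proposes this metric) of “how close can we get” as a measurable quantity: score a formal system by the fraction of recorded cases in which its output matches what a human actually did. The `shout_back` system and the recorded data below are toy placeholders.

```python
# Hypothetical sketch: "how close can we get?" as an empirical score.
# Both the formal system and the recorded behaviour are toy placeholders;
# a real task would supply its own situations and human responses.

def fidelity(formal_system, recorded_behaviour):
    """Fraction of recorded (situation, human_act) pairs the system matches."""
    matches = sum(1 for situation, human_act in recorded_behaviour
                  if formal_system(situation) == human_act)
    return matches / len(recorded_behaviour)

def shout_back(greeting):
    """A deliberately crude 'formal system' for replying to greetings."""
    return greeting.upper()

recorded = [("hi", "HI"), ("hello", "HELLO"), ("hey", "hey there")]
print(fidelity(shout_back, recorded))  # 0.666...: close, but not exhaustive
```

A system that scores well here may still, in Dreyfus’s terms, share nothing with what speakers in fact do; the score only measures how many cases it gets right, which is to say, how many people it might fool.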

In other words, I’m questioning the validity of this proposition:

“[The] assumption that the world can be exhaustively analyzed in terms of context free data or atomic facts is the deepest assumption underlying work in AI and the whole philosophical tradition” (Dreyfus, 205).

Dreyfus is probably right about the majority of the research done in AI over the past half-century. But this ontological assumption need not be the “deepest assumption” underlying projects that seek to stupefy. For in stupefication, one takes up the mantle of the epistemological assumption while relegating the ontological assumption to a hypothesis that must be empirically verified rather than assumed, and certainly not assumed “exhaustively,” as Dreyfus suggests.

Citations:

Dreyfus, Hubert L. What Computers Can’t Do. New York: Harper and Row, 1979.
