UNDEBUGGABILITY AND COGNITIVE SCIENCE

A central tenet of much recent work in cognitive science is a commitment to realism about the agent's resources. Perhaps there is something mean-spirited about challenging presuppositions that human beings have unbounded potentialities, but a theory, however normative or regulative, that proceeds on the assumption that people have God's brain seems at least interestingly incomplete. What is sought are models that are computationally and psychologically realistic, as well as neurally and genetically realistic.
Starting with such a resource-realistic framework, this article explores some properties we can now expect to be inextricable from any computer superprogram that purports to represent all of human mentality. A complete computational approximation of the mind would be a (1) huge, (2) "branchy" and holistically structured, and (3) quick-and-dirty (i.e., computationally tractable, but formally incorrect/incomplete) (4) kludge (i.e., a radically inelegant set of procedures). The mind's program thus turns out to be fundamentally dissimilar to more familiar software, and software, in general, to be dissimilar to more familiar types of machines. In particular, these properties seem inherently to exclude our reasonably establishing that we have the right program. In this way, the full mind's program appears to be a type of practically unknowable thing-in-itself.
Generally, cognitive science seems to presuppose the manageability and feasibility of such a total mind's program. For example, one of the current standard textbooks of artificial intelligence (AI) begins, "The ultimate goal of AI research (which we are very far from achieving) is to build a person, or, more humbly, an animal" [12, p. 7]; that is, to construct a program for the entire creature. A standard ground plan is then enumerated, including input modules for vision and language, then deduction, planning, explanation, and learning units, and, finally, output modules for robotics and speech. An important example from the philosophical side of cognitive science has been Fodor's thesis that psychological processes are typically computational, and hence that much human behavior can be explained only by a computational model of cognition (see, e.g., Fodor's The Language of Thought, especially chap. 1).
A strong tendency to posit "impossibility engines"--profoundly infeasible mental mechanisms--has not been confined to the most abstract reaches of philosophy. It seems to be a pervasive, unnoticed, but problematic element of methodology throughout mind/brain science, even when that field is explicitly taken to extend all the way down to neurophysiology and neuroanatomy. The most extreme philosophical case I know of is the deductive ability presupposed by conventional rationality idealizations. Standard rationality models require the agent to be some sort of perfect logician; in particular, to be able to determine in finite time whether or not any given formal sentence is a first-order logical consequence of a given set of premises. Half a century ago, however, Church's theorem showed that the predicate calculus is undecidable--that is, no algorithm for this task is possible. What surprised me most about this simple observation was at a metalevel: I could find no previous discussion of the point. Now, what would explain not even raising the question, "Does the ideal agent's perfect logical ability have to exceed even that covered by the classical unsolvability theorems?" One explanation would be the ultimate inattention to scale: not distinguishing between an agent under the usual constraints of the absolute unsolvability results--finite space and (for each input) finite run time--and some more highly idealized reasoner whose performance could not, even in principle, be achieved by any algorithm.
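To fix the point formally (a standard statement of the theorem, supplied here for concreteness rather than quoted from any source): Church's theorem says there is no total computable procedure D such that, for every finite premise set Γ and first-order sentence φ,

    D(Γ, φ) = 1   if Γ ⊨ φ
    D(Γ, φ) = 0   otherwise.

By Gödel's completeness theorem, Γ ⊨ φ exactly when Γ ⊢ φ, so the consequence relation is recursively enumerable--systematic proof search eventually halts on every positive instance--but not recursive: no procedure can be guaranteed to halt on the negative instances. The ideal agent of the standard rationality models is thus credited with a competence that no algorithm, however generously provisioned, could supply.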
A less extreme philosophical example of inattention to scale of resources can be seen in some of the computational costs implicit in perfect "charity" of interpretation along conventional Quine-Davidson lines, where the fundamental methodological principle is that (except for "corrigible confusions") an agent must be regarded as maintaining perfect deductive consistency. Figure 1 reviews the literally cosmic resources consumed by the agent merely maintaining full truth-functional consistency, a "trivially" decidable problem. Quite moderate cases of the task would computationally hog-tie a rather physically ideal truth-table machine to an extent so vast that we have no familiar names for the numbers involved.
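To make the blow-up concrete, here is a small computational sketch (mine, not the article's Figure 1; the machine speed--on the order of 10^23 truth assignments per second, roughly one per the time light takes to cross the diameter of a proton--is an assumed, deliberately over-generous rate):

    # Brute-force truth-functional consistency checking: a belief set over
    # n logically independent atomic propositions has 2**n candidate truth
    # assignments, and in the worst case all of them must be examined.

    CHECKS_PER_SECOND = 10**23      # assumed, physically extravagant rate
    SECONDS_PER_YEAR = 3.15 * 10**7

    def years_to_check(n: int) -> float:
        """Years to enumerate all 2**n truth assignments at the assumed rate."""
        return 2**n / (CHECKS_PER_SECOND * SECONDS_PER_YEAR)

    for n in (50, 100, 138):
        print(f"{n} propositions: {years_to_check(n):.2e} years")

    # 50 propositions finish in a small fraction of a second; 100 take about
    # five months; a modest 138 take on the order of 10**11 years--several
    # times the age of the universe (~1.4 * 10**10 years).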
Our basic methodological instinct, at whatever explanatory level, seems to be to work out a model of the mind for "typical" cases--most importantly, very small instances--and then implicitly to suppose a grand induction to the full-scale case of a complete human mind. Cocktail-party anthropology would deride Hottentots for allegedly counting "1, 2, many"; the methodology of mind/brain science seems unthinkingly to approach "1, 2, 3, infinity." The resource unrealism that, I am suggesting, remains pervasive is an inattention to scale--in particular, to the consequences of scaling up models of the mind/brain. (As a more moderate example, this scale-up difficulty lurks in the familiar microworlds approach to knowledge representation in AI--that is, the divide-and-conquer strategy of beginning formalization with small, tightly constrained subdomains of total human common sense.) Perhaps overlooking limits to the growth of models is part of our Cartesian heritage, stemming from the picture of mind as a substance that no more has spatial dimensions than do mathematical structures.
Suppose we take seriously the working hypothesis of computational psychology, that a major part of the human mind is a cognitive system of rules and representations, captured by some programlike object (one paradigmatic model would be Fodor's language of thought hypothesis). One way of looking at this framework assumption is simply to note that this program, as a syntactic object, is text that has a size. That is, we can ask, "how big is the mind's program?" It may seem unnatural to ask such a question if one does not distinguish between a concrete algorithm and the corresponding abstract function (asking the size of a mathematical function seems as much a category mistake as asking how much the number 15 weighs). Moreover, individuating such cognitive elements as beliefs and concepts is a notoriously messy business.
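One way to see why size attaches to the concrete algorithm rather than to the abstract function is a toy contrast (my illustration, not the author's): two procedures that compute exactly the same function but differ as syntactic objects.

    import inspect

    def sum_to_n_loop(n: int) -> int:
        """Sum 1..n by explicit iteration."""
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_to_n_formula(n: int) -> int:
        """Sum 1..n by Gauss's closed form."""
        return n * (n + 1) // 2

    # Identical input-output behavior: the same abstract function.
    assert all(sum_to_n_loop(k) == sum_to_n_formula(k) for k in range(1000))

    # Different sizes as program text: "how big?" is a sensible question
    # about each concrete algorithm, not about the one function they share.
    print(len(inspect.getsource(sum_to_n_loop)))     # the larger text
    print(len(inspect.getsource(sum_to_n_formula)))  # the smaller text

Asking the size of the mind's program is, in the same way, a question about a particular syntactic object, not about the input-output competence it realizes.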
This "mind as warehouse" approach is rather primitive, but some evidence that the mind's storage capacities are not unlimited can be found in the renowned overflow phenomena of information explosion. For instance, the preface to the famous 1910 edition of the Encyclopaedia Britannica states the following:
Whereas two or three centuries ago a single mind was able to acquire and retain in some one particular field of knowledge ... nearly all that was worth knowing, and perhaps a good deal of what was best worth knowing in several other fields also, nobody now-a-days could, whatever his industry, cover more than a small part of any of those fields. 
Sixty years later the mathematician S. M. Ulam reported estimates that 200,000 theorems were …