Marr's distinction between three levels of explanation of a computational system has become a familiar part of the methodology of cognitive science. Marr distinguishes between the top level of computational theory, the middle level of representation and algorithm, and the bottom level of hardware implementation, and claims that we understand an information-processing system completely only when we understand it at all three levels (Marr, p. 24). This ordering from top to bottom reflects his conviction that understanding at the lower levels will be achieved through understanding at the higher levels, a vision Dennett conveys with the image of 'a triumphant cascade through Marr's levels' (Dennett, p. 227).(1) In an article in this journal, Bradley Franks has argued that this cascade of explanation is blocked by various idealizations used in cognitive science, in particular those employed by competence theories of linguistic knowledge (Franks, p. 476). He concludes that cognitive scientists face a dilemma: abandon idealizations (a tall order for any science), or abandon the possibility of achieving the cascade of explanation which would yield full understanding of a cognitive system. If Franks is right, a prominent form of theorizing in cognitive science (especially about language) faces serious methodological problems. In this reply I outline Franks' argument for this conclusion and explain why I believe it is unsound. Put briefly, my claim is that Franks' argument depends on assimilating Chomsky's distinction between competence and performance to Marr's distinction between the level of computational theory and that of representation and algorithm, and that this assimilation is mistaken.
2 Idealizations and the explanatory cascade
To understand why Franks believes that certain kinds of idealization current in cognitive science block the explanatory cascade, we need to know more about the cascade itself, and hence about Marr's levels of explanation.
Level 1, the level of computational theory, specifies 'what is being computed and why' (Marr, p. 129); Level 2 specifies a 'representation for the input and output and the algorithm to be used to transform one into the other' (Marr, p. 25); and Level 3 provides 'the details of how the algorithm and representation are realized physically' (ibid.). As we move down through these levels, our explanations become progressively more detailed; we understand first what function a system computes, then the procedure by which it computes it, and finally how that procedure is implemented physically in the system. As Franks remarks, a successful cascade of this kind requires what he calls 'inheritance of the superordinate': 'given a particular Level 1 starting point, any algorithm must compute the same function, and any implementation must implement the same algorithm and compute the same function' (Franks, p. 478). In other words, the Level 1 function must map on to the Level 2 algorithm, which must map on to the Level 3 implementation.(2) A mismatch between any two levels will block the cascade of explanation. If a system S is physically unable to implement the algorithm specified at Level 2, we cannot explain S's ability to compute the Level 1 function in terms of its executing that Level 2 algorithm. If the Level 2 algorithm does not compute the function specified at Level 1, we cannot explain S's ability to compute that function in terms of that algorithm.
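The inheritance requirement can be made concrete with a toy sketch. The Python fragment below is purely illustrative: the choice of sorting as a Level 1 function, of insertion sort as a Level 2 algorithm, and all the names are my own, and nothing in Marr's or Franks' texts dictates this rendering.

```python
# A schematic rendering of Marr's levels (illustrative only; the
# example and names here are mine, not Marr's or Franks').

# Level 1: the computational theory says WHAT is computed and why --
# here, the function taking a sequence of numbers to its sorted order.
def level1_function(xs):
    return sorted(xs)

# Level 2: a representation and algorithm for computing that function --
# here, insertion sort over a Python list.
def level2_algorithm(xs):
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

# 'Inheritance of the superordinate': any Level 2 algorithm must
# compute the very function specified at Level 1.
inputs = [[3, 1, 2], [], [5, 5, 0], [-1, 7, 4, 4]]
assert all(level2_algorithm(xs) == level1_function(xs) for xs in inputs)
```

Had we chosen an algorithm that computes some other function, the final check would fail; that failure is exactly the kind of between-level mismatch that blocks the cascade.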
It is now easy enough to see how, in principle, idealizations could block the cascade. A successful cascade requires that we map descriptions at higher levels on to descriptions at lower levels. Idealizations involve at least selective description, if not misdescription, and so they have the potential to create mismatches between descriptions at different levels, mismatches which block the cascade of explanation.(3) For example, if we are working with an idealized conception of system S's hardware, we may think that S can implement an algorithm which in fact it cannot. Then our Level 2 algorithm will not map on to S's actual physical structure. Or if we are working with an idealized conception of S's abilities, so that the function specified at Level 1 is not one which S in fact computes, we will be unable to complete the cascade by mapping that function on to an algorithm which S actually implements.
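The second sort of mismatch can be put in the same schematic terms, again under invented assumptions: suppose the Level 1 theory idealizes the system as computing exact addition over arbitrary integers, while the (hypothetical) hardware holds values in 8-bit registers.

```python
# An idealized Level 1 description: the system computes exact addition
# over arbitrary integers.
def idealized_function(a, b):
    return a + b

# What the (hypothetical) hardware actually supports: 8-bit registers,
# so the procedure the system really executes wraps around at 256.
def implemented_procedure(a, b):
    return (a + b) % 256

# Within the idealization's comfort zone the two agree...
assert implemented_procedure(3, 4) == idealized_function(3, 4)

# ...but elsewhere they come apart: the idealized Level 1 function is
# not one the system computes, so it maps on to no algorithm the system
# actually implements, and the cascade cannot be completed.
assert implemented_procedure(200, 100) != idealized_function(200, 100)
```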
Presumably some idealizations could block the explanatory cascade in this way. Franks notes, following Cartwright, that there is a trade-off between idealization and 'facticity': 'the more we idealize, the less directly related to actual phenomena will be our theories' (Franks, p. 490). Idealizations reflect our views of which aspects of a system are theoretically important and which are incidental. These views can be and sometimes are wrong, and this may be revealed by a mismatch between our higher-level picture and the actual detailed workings of the system. Since this can happen in science in general, it can happen in cognitive science, and in particular in attempts to provide cascade explanations …