1. Whither quantitative macroeconomics?
The relationship between theory and data has been, from the beginning, a central concern of the new-classical macroeconomics. This much is evident in the title of Robert E. Lucas and Thomas J. Sargent's landmark edited volume, Rational Expectations and Econometric Practice (1981). With the advent of real-business-cycle models, many new classical economists have turned to calibration methods. The new classical macroeconomics is now divided between calibrators and estimators. But the debate is not a parochial one, raising, as it does, issues about the relationship of models to reality and the nature of econometrics that should be important to every school of macroeconomic thought, indeed to all applied economics. What is at stake in this debate is the future direction of quantitative macroeconomics. It is, therefore, critical to understand the root issues.
Lucas begins the second chapter of his Models of Business Cycles with the remark:
Discussions of economic policy, if they are to be productive in any practical sense, necessarily involve quantitative assessments of the way proposed policies are likely to affect resource allocation and individual welfare. (Lucas 1987, p. 6; emphasis added)
This might appear to be a clarion call for econometric estimation. But appearances are deceiving. After mentioning Sumru Altug's (1989) estimation and rejection of the validity of a variant of Finn E. Kydland and Edward C. Prescott's (1982) real-business-cycle model (a model which takes up a large portion of his book), Lucas writes:
. . . the interesting question is surely not whether [the real-business-cycle model] can be accepted as 'true' when nested within some broader class of models. Of course the model is not 'true': this much is evident from the axioms on which it is constructed. We know from the outset in an enterprise like this (I would say, in any effort in positive economics) that what will emerge - at best - is a workable approximation that is useful in answering a limited set of questions. (Lucas 1987, p. 45)
Lucas abandons not only truth but also the hitherto accepted standards of empirical economics. Models that clearly do not fit the data, he argues, may nonetheless be calibrated to provide useful quantitative guides to policy.
Calibration techniques are commonly applied to so-called 'computable general-equilibrium' models. They were imported into macroeconomics as a means of quantifying real-business-cycle models, but now have a wide range of applications. Some issues raised by calibration are common to all computable general-equilibrium models; the concern of this paper, however, is with real-business-cycle models and related macroeconomic applications; and, as will appear presently, these raise special issues. A model is calibrated when its parameters are quantified from casual empiricism or unrelated econometric studies or are chosen to guarantee that the model precisely mimics some particular feature of the historical data. For example, in Kydland and Prescott (1982), the coefficient of relative risk aversion is justified on the basis of microeconomic studies, while the free parameters of the model are set to force the model to match the variance of GNP without any attempt to find the values of empirical analogues to them.
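To fix ideas, consider a deliberately toy illustration (it is not Kydland and Prescott's model, and every number below is hypothetical): suppose detrended output followed a first-order autoregression. A calibrator might borrow the persistence parameter from an unrelated study and then back out the shock variance so that the model reproduces the observed variance of output exactly, with no estimation at any stage. A minimal sketch in Python:

```python
import numpy as np

# Toy illustration (not Kydland and Prescott's model): take detrended
# output to follow an AR(1), y_t = rho * y_{t-1} + eps_t. The persistence
# rho is borrowed from an unrelated study, and sigma is then backed out so
# that the model's unconditional variance, sigma^2 / (1 - rho^2), exactly
# matches the observed variance of output -- no estimation is involved.

rho = 0.95            # hypothetical value borrowed from outside evidence
target_var_y = 3e-4   # hypothetical sample variance of detrended output

sigma = np.sqrt(target_var_y * (1.0 - rho**2))
print(f"calibrated shock standard deviation: {sigma:.5f}")
```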
Allan W. Gregory and Gregor W. Smith (1991, p. 3) conclude that calibration '. . . is beginning to predominate in the quantitative application of macroeconomic models'. While indicative of the importance of the calibration methodology, Gregory and Smith's conclusion is too strong. Aside from the new classical school, few macroeconomists are staunch advocates of calibration. Within the new classical school, opinion remains divided. Even with reference to real-business-cycle models, some practitioners have insisted that calibration is at best a first step, which must be followed '. . . by setting down a metric (e.g. one induced by a likelihood function) and estimating parameters by finding values that make the metric attain a minimum' (Gary Hansen and Sargent 1988, p. 293).(1)
Sargent advocates estimation or what Kydland and Prescott (1991) call the 'system-of-equations approach'. Estimation has been the standard approach in macroeconometrics for over 40 years. Sargent and like-minded new classical economists modify the standard approach only in their insistence that the restrictions implied by dynamic-optimization models be integrated into the estimations. The standard of empirical assessment is the usual one: how well does the model fit the data statistically? Lucas and Kydland and Prescott reject statistical goodness of fit as a relevant standard of assessment. The issue at hand might then be summarized: who is right - Lucas and Kydland and Prescott, or Sargent?
The answer to this question is not transparent. Estimation is the status quo. And, although enthusiastic advocates of calibration already announce its triumph, its methodological foundations remain largely unarticulated. An uncharitable interpretation of the calibration methodology might be that the advocates of real-business-cycle models are so enamored of their creations that they would prefer to abandon commonly accepted, neutral standards of empirical evaluation (i.e. econometric hypothesis testing) to preserve their models. This would be an ad hoc defensive move typical of a degenerating research program.
This interpretation is not only uncharitable, it is wrong. Presently, we shall see that Herbert Simon's (1969) Sciences of the Artificial provides the materials from which to construct a methodological foundation for calibration, and that calibration is compatible with a well-established approach to econometrics that is nonetheless very different from the Cowles Commission emphasis on the estimation of systems of equations. Before addressing these issues, however, it will be useful to describe the calibration methodology and its place in the history and practice of econometrics in more detail.
2. The calibration methodology
2.1. The paradigm case
Kydland and Prescott (1982) is the paradigm new-classical equilibrium real-business-cycle model. It is a neoclassical optimal-growth model with stochastic shocks to technology which cause the equilibrium growth path to fluctuate about its steady state.(2) Concrete functional forms are chosen to capture some general features of business cycles. Production is governed by a constant-elasticity-of-substitution production function in which inventories, fixed capital, and labor combine to generate a single homogeneous output that may either be consumed or reinvested. Fixed capital requires a finite time to be built before it becomes a useful input. The constant-relative-risk-aversion utility function is rigged to possess a high degree of intertemporal substitutability of leisure. Shocks to technology are serially correlated. Together, the structure of the serial correlation of the technology shocks and the degree of intertemporal substitution in consumption and leisure choices govern the manner in which shocks are propagated through time and the speed of convergence back towards the steady state.
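The following sketch indicates the kinds of functional forms involved; the nesting, parameter names, and default values are illustrative placeholders rather than Kydland and Prescott's exact specification:

```python
# Illustrative functional forms only, not Kydland and Prescott's
# exact specification; all default parameter values are placeholders.

def ces_output(z, k, inv, n, alpha=0.4, share_k=0.9, nu=-0.5):
    """CES production: a technology shock z scales a CES aggregate of a
    fixed-capital/inventory bundle and labor n."""
    bundle = (share_k * k**nu + (1.0 - share_k) * inv**nu) ** (1.0 / nu)
    return z * (alpha * bundle**nu + (1.0 - alpha) * n**nu) ** (1.0 / nu)

def crra_utility(c, leisure, gamma=2.0, theta=0.35):
    """Constant-relative-risk-aversion utility over a consumption-leisure
    composite; a heavy weight on leisure is what delivers the high degree
    of intertemporal substitutability of leisure."""
    x = c**theta * leisure**(1.0 - theta)
    return (x**(1.0 - gamma) - 1.0) / (1.0 - gamma)   # assumes gamma != 1

def technology_shock(z_prev, eps, rho=0.9):
    """Serially correlated (AR(1)) technology shock; rho governs how
    disturbances are propagated through time."""
    return rho * z_prev + eps

def advance_projects(pipeline, new_start):
    """Time-to-build: a project started today becomes productive capital
    only after len(pipeline) periods (four quarters in Kydland and
    Prescott's calibration). Returns (newly completed capital, pipeline)."""
    completed, remaining = pipeline[0], pipeline[1:]
    return completed, remaining + [new_start]

# Example: one period's output at arbitrary illustrative input levels.
print(ces_output(z=1.0, k=10.0, inv=2.0, n=0.3))
```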
Once the model is specified, the next step is to parameterize its concrete functional forms. Most of the parameters of the model are chosen from values culled from other applied econometric literatures or from general facts about national-income accounting. For example, Thomas Mayer (1960) estimated the average time to construct complete facilities to be 21 months; Robert E. Hall (1977) estimated the average time from start of projects to beginning of production to be two years. Citing these papers, but noting that consumer durable goods take considerably less time to produce, Kydland and Prescott (1982, p. 1361) set the parameters governing capital formation to imply steady construction over four quarters.(3) The values for depreciation rates and the capital/inventory ratio are set to rough averages from the national-income accounts. Ready estimates from similar sources were not available for the remaining six parameters of the model, which include parameters governing intertemporal substitution of leisure and the shocks to technology. These were chosen by searching over possible parameter values for a combination that best reproduced certain key variances and covariances of the data. In particular, the variance of the technology shock was chosen to match exactly the variance of output in the postwar US economy.
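That final step can be sketched as follows, on the assumption that the model can be solved and simulated at any trial parameter vector; the simulator below is a stand-in for the full model, and the target moments are hypothetical:

```python
import numpy as np
from itertools import product

def model_moments(rho, sigma, T=5_000, seed=1):
    """Stand-in for solving and simulating the full model at a trial
    parameter vector. Returns model-implied second moments -- here the
    variance and first autocovariance of a toy shock-driven output series."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal(0.0, sigma)
    y = y[500:]                          # drop a burn-in
    return np.array([y.var(), np.cov(y[1:], y[:-1])[0, 1]])

# Hypothetical target moments, standing in for those computed from data.
targets = np.array([3.0e-4, 2.7e-4])

# Search over the free parameters for the combination whose moments come
# closest to the targets: calibration by moment matching, not estimation.
best, best_gap = None, np.inf
for rho, sigma in product(np.linspace(0.80, 0.99, 20),
                          np.linspace(0.002, 0.02, 20)):
    gap = np.abs(model_moments(rho, sigma) - targets).sum()
    if gap < best_gap:
        best, best_gap = (rho, sigma), gap

print("calibrated (rho, sigma):", best)
```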
To test the model's performance, Kydland and Prescott generate a large number of realizations of the technology shocks for 118 periods corresponding to their postwar data. They then compute the variances and covariances implied by the model for a number of important variables: output, consumption, investment, inventories, the capital stock, hours worked, productivity, and the real rate of interest.(4) These are then compared with the corresponding variances and covariances of the actual US data.(5)
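Such an experiment might be sketched as follows; the solved model is again a stand-in, and the parameter values are hypothetical:

```python
import numpy as np

def simulate_economy(shocks, rho_y=0.95):
    """Placeholder for the solved model: maps one realization of the
    technology shocks into a simulated output series. (Kydland and
    Prescott track many series: consumption, investment, hours, etc.)"""
    y = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        y[t] = rho_y * y[t - 1] + shocks[t]
    return y

rng = np.random.default_rng(2)
T, reps = 118, 200                 # 118 quarters, matching the postwar sample
sd_output = np.empty(reps)
for r in range(reps):
    shocks = rng.normal(0.0, 0.005, size=T)   # hypothetical shock s.d.
    sd_output[r] = simulate_economy(shocks).std()

# Average the model-implied moment across replications; the result is then
# set against the corresponding moment computed from the actual US data.
print("model s.d. of output, mean over replications:", sd_output.mean())
```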
Kydland and Prescott offer no formal measure of the success of their model. They do note that hours vary too little relative to productivity to match the data accurately, but otherwise they are pleased with the model's ability to mimic the second moments of the data.
Real-business-cycle models, treated in the manner of Kydland and Prescott, are a species of the genus computable (or applied) general-equilibrium models. The accepted standards for implementing computable general-equilibrium models, as codified, for example, in Ahsan Mansur and John Whalley (1984), do not appear to have been adopted in the real-business-cycle literature. For example, while some practitioners of computable general-equilibrium models engage in extensive searches of the literature in order to get some measure of the central tendency of assumed elasticities, Kydland and Prescott's (1982) choice of parameterization appears almost casual. Similarly, although Kydland and Prescott report some checks on robustness, these appear to be perfunctory.(6)
In the context of computable …