THE question that motivates this paper is: How close was recent monetary policy to the behavior recommended by an optimal policy rule? An optimal rule can be derived with a structural model and a loss function for policymakers. For example, Rudebusch and Svensson (1999) used a small empirical model of the U.S. economy and a loss function penalizing output, inflation, and interest-rate variability to derive the optimal coefficients of a Taylor rule. A notable feature of this optimal Taylor rule was the large size of its inflation and output-gap response coefficients, which suggests that ideal monetary policy behavior by the Fed would be quite responsive to economic conditions.(1) A Taylor rule can also be used to model historical monetary policy, and empirical estimates of such a rule appear to capture recent Fed behavior fairly well. However, the historical Taylor rule estimated on recent data has relatively low response coefficients for output and inflation; that is, the estimated rule implies a more cautious adjustment of the monetary policy instrument than the optimal rule recommends. This paper attempts to reconcile historical and optimal policy rules.
Of course, one possibility is that historical monetary policy cannot be described as the outcome of an economic optimization problem. This resolution, besides cutting short the current paper, would seem unsatisfactory on several levels. First, although it is hard to fit a stable reaction function or rule to the entire postwar history of U.S. monetary policy (Rudebusch, 1998), as described below, some success has been achieved in this regard for the past decade or so. Second, recent Fed policy and the economic performance it has helped foster have garnered both academic and general acclaim, so it seems likely that some sort of optimum has been approximated. Finally, a long-standing principle of economics is that any economic behavior can be understood as a problem in constrained optimization, and this principle should apply to central banks as forcefully as to the representative firm or agent.
The obvious avenue for reconciling the historical policy rule and the optimal rule is to alter the macroeconomic model or objective function used in deriving the latter in order to obtain a better match with real-world policy. With regard to the objective function, this paper does not explore in any great detail possible variations in the goals postulated for the central bank. It maintains a fairly standard assumption that the Fed is concerned with minimizing (in a quadratic fashion) output variation around potential, inflation variation around a target, and interest-rate volatility. This paper instead focuses on the context for decision making and, in particular, on how introducing uncertainty into the model may alter the calculation of optimal policy. I also consider uncertainty about the model used by policymakers and examine plausible model variation.
Especially since Brainard (1967), it has been recognized that uncertainty about model parameters can produce smaller responses, or "stodginess" (Blinder, 1998), in optimal policy rules. Indeed, policymakers often note that typically little new information is obtained between policy meetings (or from quarter to quarter) to justify large changes in the stance of policy. In particular, uncertainty about the state of the economy (data uncertainty) and about the trajectory and responsiveness of the economy (model or parameter uncertainty) appear to weigh heavily on policymakers. For example, at the Federal Open Market Committee (FOMC) meeting on December 16, 1987, a Federal Reserve Board research director, after summarizing the staff forecast, stated:
By depicting these two [forecast] scenarios, I certainly don't want to suggest that a wide range of other possibilities doesn't exist. However, I believe both scenarios are well within the range of plausible outcomes, and they point up what we perceive to be a dilemma for the Committee: namely, given the lags in the effect of policy action, an easing or tightening step might be appropriate now, but it isn't clear which. This, of course, isn't an unprecedented problem ...
Similarly, in discussing rules for policy, another Fed research director (Kohn, 1999) notes that members of the FOMC "are quite uncertain about the quantitative specifications of the most basic inputs required by most rules and model exercises. They have little confidence in estimates of the size of the output gap [or] the level of the natural or equilibrium real interest rate ..." (p. 195).
This paper, then, is an attempt to reconcile recommendations about optimal policy rules with actual estimates of the historical policy rule. I largely focus on how much and what type of uncertainty must be added to the model so that the resulting calculated optimal policy rule matches the historical one. This reverse engineering is conducted in the context of a Taylor rule for policy. The next two sections set the stage by presenting actual historical policy--in the form of estimated Taylor rules--and the contrasting optimal Taylor rules derived without uncertainty. Sections IV, V, and VI introduce, in isolation, parameter uncertainty, model variation, and data uncertainty, respectively, into the derivation of optimal policy. Section VII combines various types of uncertainty, and section VIII concludes.
II. Historical Estimates of the Policy Rule
Taylor (1993) proposed a simple rule for monetary policy:
(1) $i_t = r^* - 0.5\pi^* + 1.5\bar\pi_t + 0.5 y_t$,

where $i_t$ is the quarterly average federal funds rate at an annual rate in percent;
$\bar\pi_t$ is the four-quarter inflation rate in percent;
$y_t$ is the percentage gap between actual real GDP ($Q_t$) and potential real GDP ($Q_t^*$), that is, $y_t = 100(Q_t - Q_t^*)/Q_t^*$;
$r^*$ is the equilibrium real interest rate in percent; and
$\pi^*$ is the inflation target.
As a descriptive matter, Taylor (1993, 1999) argued that rule (1), with $r^* = \pi^* = 2.0$, seemed to capture some important factors influencing monetary policy and the general stance of policy from the mid-1980s onward. (Also see Judd and Rudebusch (1998).)
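As a concrete illustration, rule (1) can be written as a one-line function. The sketch below uses hypothetical example inputs, not data from the paper.

```python
# A minimal sketch of Taylor's (1993) rule, equation (1).
# Example inputs are hypothetical, not data from the paper.

def taylor_rate(inflation_4q: float, output_gap: float,
                r_star: float = 2.0, pi_star: float = 2.0) -> float:
    """Prescribed funds rate: i = r* - 0.5*pi* + 1.5*pibar + 0.5*y."""
    return r_star - 0.5 * pi_star + 1.5 * inflation_4q + 0.5 * output_gap

# With inflation at its 2% target and a closed output gap, the rule
# prescribes the neutral nominal rate r* + pi* = 4%.
print(taylor_rate(2.0, 0.0))  # 4.0
```

Because the inflation response exceeds one, an extra percentage point of inflation raises the prescribed nominal rate by 1.5 points, and hence the implied real rate by 0.5 points.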
Actual estimates of a generalized Taylor rule of the form

(2) $i_t = k + g_\pi \bar\pi_t + g_y y_t$,

(with the constant term $k \equiv r^* - (g_\pi - 1)\pi^*$) seem to bear this out. At the most rudimentary level, a simple least-squares regression of equation (2) from 1987:Q4 to 1996:Q4 yields

(3) $i_t = \hat{k} + 1.78\,\bar\pi_t + 0.82\,y_t$ [estimated constant and standard errors not reproducible],

where inflation is defined using the GDP chain-weighted price index (denoted $P_t$, so $\pi_t = 400(\ln P_t - \ln P_{t-1})$ and $\bar\pi_t = (\pi_t + \pi_{t-1} + \pi_{t-2} + \pi_{t-3})/4$), and the output gap is defined with potential output as estimated by the Congressional Budget Office (1995). In this regression, the values of the estimated rule parameters (namely, $g_\pi = 1.78$ for the inflation response and $g_y = 0.82$ for the output response) are just slightly higher than the 1.5 and 0.5 that Taylor (1993) originally proposed. (In the original, robust standard errors are reported in parentheses.)
More careful econometric analysis also supports such moderate policy response parameters. A key feature of the various studies that estimate Taylor rules on historical data is that they take account of the apparent slow adjustment of the actual funds rate to the level recommended by the Taylor rule; thus, lagged interest rates are added to the regression to account for the apparent serial correlation in the residuals.(2) For example, Judd and Rudebusch (1998) estimate a Taylor rule like equation (2) in the context of an error-correction framework (from 1987:Q3 to 1997:Q4) and find an inflation response of $g_\pi = 1.54$ and an output response of $g_y = 0.99$ (with standard errors of 0.18 and 0.13, respectively). Similarly, with closely related dynamic Taylor rule specifications, Kozicki (1999) estimates $g_\pi = 1.42$ and $g_y = 0.49$ (from 1983 to 1997), and Clarida, Gali, and Gertler (2000) estimate $g_\pi = 2.02$ and $g_y = 0.99$ (from 1982:Q4 to 1996:Q4).
As a rough benchmark, then, the historical estimates of U.S. monetary policy during the late 1980s and the 1990s suggest that policy can be broadly described by a Taylor rule with $g_\pi$ in the range of 1.4 to 2.0 and $g_y$ in the range of 0.5 to 1.0. This paper will attempt to find a control problem that produces an optimal Taylor rule with response coefficients in these ranges. The next section considers this issue under certainty.
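The mechanics of a regression like (3) can be sketched in a few lines. The data below are synthetic, and the "true" coefficients (1.0, 1.5, 0.5) are illustrative only, so this demonstrates the least-squares fit, not the paper's estimates.

```python
# Sketch of the least-squares regression behind equation (3):
# i_t = k + g_pi * pibar_t + g_y * y_t, fit to synthetic data.
# The "true" coefficients (1.0, 1.5, 0.5) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T = 40                                   # roughly a decade of quarters
pibar = 2.0 + rng.normal(0.0, 1.0, T)    # four-quarter inflation, percent
gap = rng.normal(0.0, 2.0, T)            # output gap, percent
i = 1.0 + 1.5 * pibar + 0.5 * gap + rng.normal(0.0, 0.25, T)

X = np.column_stack([np.ones(T), pibar, gap])
k_hat, g_pi_hat, g_y_hat = np.linalg.lstsq(X, i, rcond=None)[0]
print(g_pi_hat, g_y_hat)   # recovers values near 1.5 and 0.5
```

With correctly specified regressors and modest noise, the fitted responses land close to their true values, which is why a decade of quarterly data suffices for the rough benchmark in the text.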
III. Optimal Policy Under Certainty
A. An Empirical Model of Output and Inflation
The optimal policy rules in this paper are derived in a simple model of output and inflation:
(4) $\pi_t = \alpha_0 + \alpha_{\pi 1}\pi_{t-1} + \alpha_{\pi 2}\pi_{t-2} + \alpha_{\pi 3}\pi_{t-3} + \alpha_{\pi 4}\pi_{t-4} + \alpha_y y_{t-1} + \varepsilon_t$,

(5) $y_t = \beta_0 + \beta_{y1} y_{t-1} + \beta_{y2} y_{t-2} - \beta_r(\bar\imath_{t-1} - \bar\pi_{t-1}) + \eta_t$,

with $\bar\imath_{t-1}$ equal to the four-quarter average federal funds rate, $\bar\imath_t = (i_t + i_{t-1} + i_{t-2} + i_{t-3})/4$, and the other variables defined as above.
The first equation is a Phillips curve that relates inflation to a lagged output gap and to lags of inflation, which represent an autoregressive or adaptive form of inflation expectations. The second equation is an IS curve that relates the output gap to its own lags and to the difference between the average funds rate and average inflation over the previous four quarters--an approximate ex post real rate. As described by Rudebusch and Svensson (1999, 2000), the use of this model can be motivated by a variety of considerations. In particular, although its simple structure facilitates the production of benchmark results, this model also appears to roughly capture the views about the dynamics of the economy held by some monetary policymakers, including Federal Reserve Governor Meyer (1997) and former Federal Reserve Vice-Chairman Blinder (1998). This point is fundamental to my analysis, which is predicated on the assumption that policymakers acted in an optimal manner. If I find an optimal policy rule for a particular model that matches the historical policy rule, this result is surely undercut if policymakers believed that they were optimizing in a completely different model.
The empirical fit of the model is also quite good. The estimated equations, using the sample period 1961:Q1 to 1996:Q4, are shown below. (Coefficient standard errors are given in parentheses, and the standard error of the residuals and Durbin-Watson statistics are reported.)
(6) [estimated version of equation (4); not reproducible]

(7) [estimated version of equation (5); not reproducible]
These equations were estimated individually by OLS.(3) The hypothesis that the sum of the lag coefficients of inflation equals 1 had a p-value of 0.48, so this restriction was imposed in estimation. Thus, this is an accelerationist form of the Phillips curve, which implies a long-run vertical Phillips curve. (This is reconsidered in Section VI.) The fit and dynamics of this model compare favorably to an unrestricted VAR. Indeed, the model can be interpreted as a restricted VAR, in which the restrictions imposed are not at odds with the data as judged, for example, with standard model information criteria. (See Rudebusch and Svensson (1999).)
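The unit-sum restriction on the inflation lags can be imposed in estimation by a standard reparameterization: substituting $\alpha_{\pi 1} = 1 - \alpha_{\pi 2} - \alpha_{\pi 3} - \alpha_{\pi 4}$ turns (4) into a regression for the change in inflation. A sketch on synthetic data follows; the true coefficients are illustrative, and the constant is omitted for simplicity.

```python
# Sketch: imposing the unit-sum restriction on the inflation lags of
# equation (4). With a1 = 1 - a2 - a3 - a4, the regression becomes one
# for the change in inflation. Synthetic data, illustrative coefficients.
import numpy as np

rng = np.random.default_rng(1)
T = 500
a_true = [0.7, -0.1, 0.3, 0.1]          # inflation lag weights, sum to 1
a_y = 0.15
y_gap = rng.normal(0.0, 1.0, T)          # synthetic output gap

pi = [2.0] * 4
for t in range(4, T):
    pi.append(sum(c * pi[t - 1 - j] for j, c in enumerate(a_true))
              + a_y * y_gap[t - 1] + rng.normal(0.0, 0.2))
pi = np.array(pi)

# Regress dpi_t on (pi_{t-j} - pi_{t-1}) for j = 2, 3, 4 and on y_{t-1}.
dpi = pi[4:] - pi[3:-1]
X = np.column_stack([pi[2:-2] - pi[3:-1],
                     pi[1:-3] - pi[3:-1],
                     pi[0:-4] - pi[3:-1],
                     y_gap[3:-1]])
a2, a3, a4, ay_hat = np.linalg.lstsq(X, dpi, rcond=None)[0]
a1 = 1.0 - a2 - a3 - a4                  # restriction holds by construction
print(round(a1, 2), round(ay_hat, 2))
```

The recovered lag weights sum to one by construction, which is exactly the accelerationist (long-run vertical Phillips curve) restriction discussed in the text.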
In addition, the model appears to be stable over various subsamples, which is an important condition for drawing inference. With a backward-looking model, the Lucas critique may apply with particular force, so it is important to gauge its historical importance with econometric stability tests (Oliner, Rudebusch, and Sichel, 1996). For …