Fine Tuning Out of Control


If the economy were to be regulated like a mechanism, what rules might be used to govern its fine-tuning? A number of insights on the importance of rules, gradualism and the choice of instrument have arisen; this lecture will outline and explore them.



05 February 2015


Professor Jagjit Chadha

1.  Introduction


Monetary policy, as we have seen, started as a form of standardisation: `this amount of paper is worth this much shiny metal and always will be' was the language of monetary policy. Wider participation in the political decision-making process, and a broader understanding of the problems caused by excessive business cycle volatility, increased the onus on monetary policy to go further than merely setting standards. The move from a commodity currency to the fixed but adjustable exchange rate system of Bretton Woods may superficially not seem very important, because rather than fixing the domestic currency's value in terms of a block of shiny metal, it was now fixed in terms of some other reference currency, in this case the US$, which was itself fixed to the same shiny metal. But a small advantage had been gained; we might say later that it was misused, but advantage there was. It was that the value of the domestic currency could be changed relative to the reference currency, and the standardiser could now also set his price to reflect changes in domestic conditions. But did the standardiser have any practical or theoretical principles to guide him? Of course, he wanted to become neither a debaser of the currency nor a wearer of a monetary hairshirt. How could he use his new powers wisely, particularly if the people had asked for them to be unleashed?


2.  The Economist as Engineer


How to put together the nuts and bolts of monetary policy can be hard to fathom in the opaque waters of the economy. There are a number of key elements to overcome. First, how exactly does setting a single short term interest rate act to stabilise an economy that comprises so many households, firms, financial institutions, a substantial government sector and a host of (possibly malign) overseas relationships? Secondly, there are a large number of institutional details to consider, such as the framework for monetary policy, the relationship between the Finance Ministry and the Central Bank, and what might be the ultimate objectives of any stabilisation policy. Thirdly, the theory of monetary policy, whilst at first non-existent, quickly developed into a branch of ‘control theory’ and so became subject to severe technical barriers at the frontier. And finally, there is the aspect of the real data: how do we make decisions when the observed economy is not some clearly identifiable mass but a spongy construct based upon a myriad of observations or surveys announced on a daily basis? The mixture of institutional detail, high theory, data and, at times, low politics can make monetary policy analysis a daunting mix for practitioner, instructor and student alike.


Metaphors can be useful when we are dealing with complex ideas and sometimes it can be a good idea, unlike drinks, to mix them. We tend to start with an analogy related to driving cars, steering ships or taking a shower, in which the policy maker is cast as the driver, pilot or bather. But, let us say, the pilot has severe information problems, as he (or she) cannot know with a high degree of certainty where he is compared to where he would like to be. He also does not quite know how the machine will react when he asks it to help him get to where he would like to be. Finally, it may also be some time before he realises that he is or is not where he thinks he would like to be, and so he may frequently under- or even overshoot his final destination. Should your head be reeling, you will now be pleased to know that I am going to ease you through these rather difficult kinds of control issues.

What I do though in this chapter is tell the story of the evolution of thinking on monetary policy in the first few decades after World War II, after the argument that the government had responsibility for the short term evolution of the economy had been won. And so even though there were widespread controls on exchange rate movements under Bretton Woods, as well as capital controls and no small degree of financial repression, monetary policy evolved into a control problem. By which I mean a theory of policy developed that treated the economy as a system that could be manipulated by use of tools into behaving, in the short run, in an aggregate manner where inflation was low and growth was significantly positive and stable. Actually this kind of `bliss' point proved rather harder to engineer, as a number of increasingly problematic trade-offs emerged between output and inflation that effected a change of plan. And so from a simple control problem of a known system with well understood tools, monetary policy subsequently evolved into a game concerned with the interplay between smart agents, not so smart agents, central banks with imperfect information and governments determined on staying in power. But let us first look at the most basic of trade-offs between inflation and output, the Phillips curve.


3.  Stabilisation Policy


The technical steps in defining a monetary policy problem were spelt out in the immediate postwar period. The case for directing the economy resulted in part from the memory of inter-war economic failures, and the experiences of a war economy had contributed much, but ultimately it was a posthumous and durable triumph for Maynard Keynes. So much so that direct responsibility for the growth rate in a market economy still seems to lie with the government. But once people turned to the question of how to set and think about policy in real time, it was clear that a number of gaps existed. The basic propositions for equilibrium in the goods and money markets were established under fixed prices and a large degree of assumed knowledge about the state of the economy and how it would respond to policy. It turned out that these foundations were a little shaky.


Many ideas seeped in from engineering, perhaps most famously the observation by Bill Phillips (a hydraulic engineer with an economic bent) of a trade-off between spare capacity and inflationary pressure. The Phillips curve, as it is still known, tells the policy maker the likely current rate of exchange if we want to engineer a little more (less) unemployment and a little less (more) inflationary pressure. In effect, it offers a menu of alternatives that the policy maker may wish to set against his current preferences for output and inflation. It is such a crucial question for policymakers and theoreticians that the curve itself has gone through many permutations, as the profession has tried to understand shifts or breakdowns in seemingly stable relationships.

The original Phillips paper, published in 1958, traced a negative relationship between the rate of change of money wages and unemployment from 1861 to 1913 and offered a statistical observation, rather than a theory, that was tested against subsequent observations over two sub-periods, from 1913-48 and then from 1948-57. To some degree it allowed proponents of both demand-led inflation and cost-push inflation to make their cases, where the former argues that inflation tends to result from excessive growth in spending and the latter that it results from escalating prices on inputs into the production process. But it was the seeming robustness of the relationship in other countries as well that was persuasive, and so the ideas were developed at length by US economists Paul Samuelson and Bob Solow into a specific menu of trade-offs between inflation and output rather than money wages and unemployment. The basic idea was simply that increasing levels of capacity utilisation in aggregate were likely to be associated with greater upward pressure on wages and prices and, I add a sub-clause, at an increasing rate: as resources became more scarce, their prices would tend to rise more quickly. The observation was also reasonably symmetric, as reducing levels of capacity utilisation were likely to be associated with greater downward pressure on wages and prices.
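The shape of this menu can be sketched in a few lines of code. Phillips fitted a curve of roughly the form w = a + b·u^c; the coefficients below are hypothetical illustrations of that convex shape, not his 1958 estimates.

```python
# A minimal sketch of the Phillips "menu": wage inflation as a decreasing,
# convex function of unemployment. The functional form echoes Phillips'
# fitted curve, but the coefficients here are hypothetical illustrations.

def wage_inflation(u, a=-1.0, b=9.5, c=-1.4):
    """Annual % change in money wages implied by an unemployment rate u (%)."""
    return a + b * u ** c

# The menu of alternatives: each point trades a little more unemployment
# for a little less wage inflation.
menu = {u: round(wage_inflation(u), 2) for u in (1, 2, 3, 5, 8)}

# Convexity: a point of unemployment "bought" costs more inflation
# the tighter the labour market already is.
```

The convexity is the sub-clause in the text: as unemployment falls towards very low levels, each further reduction is associated with sharply faster wage growth.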


But in the 1960s, inflation seemed to become increasingly unhinged from the level of unemployment, or more generally spare capacity. And this led to the development of the idea of a so-called accelerationist Phillips curve, which posited a relationship between the rate of change in inflation and the output gap,[1] meaning simply that the level of inflation itself was independent of the state of spare capacity, and any shifts in capacity constraints would tend to alter the inflation rate permanently in one direction or the other. This reformulation of the Phillips curve allowed any given level of output to be associated with any given level of inflation, which seemed to help people understand why inflation had become so sticky once it had ratcheted up. It also implied that getting inflation down from a high level might require several bouts of lower demand, which ultimately became known as disinflationary paths.
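The ratchet property is easy to see in a sketch. Under an accelerationist curve the change in inflation depends on the output gap, so a temporary boom raises the level of inflation permanently; the slope alpha and the gap paths below are hypothetical.

```python
# A sketch of the accelerationist Phillips curve: the *change* in inflation
# depends on the output gap (alpha is a hypothetical slope). A temporary
# boom ratchets the inflation level up permanently; only a bout of slack
# brings it back down -- the "disinflationary path" of the text.

def inflation_path(gaps, pi0=2.0, alpha=0.5):
    """Return the inflation path implied by a sequence of output gaps."""
    path = [pi0]
    for gap in gaps:
        path.append(path[-1] + alpha * gap)
    return path

boom_then_neutral = inflation_path([2, 2, 0, 0, 0])   # boom ends, inflation stays high
with_disinflation = inflation_path([2, 2, 0, -2, -2]) # slack is needed to undo it
```

Once the gap returns to zero, inflation simply stays where the boom left it; only deliberately negative gaps bring it back to its starting level.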


As economists then tried to think about the implications of rationality and the use of information, an interest in the formation of expectations for future wage and price setting started to take a serious hold. A simple thought experiment will explain why: if you are setting production levels today and you know that prices or wages are likely to be high tomorrow, as opposed to yesterday when prices and wages were low, should you build these higher prices and wages, rather than yesterday's lower prices, into your plans? If you do not respond to these expectations you may lose some value of your sales, depending on how much extra demand for your product results at these lower prices, and, more importantly, you will not have sufficient cash flow to buy the next round of inputs at these elevated prices. So you will tend to raise your prices along with your expectations.


Once output has been set in line with expected inflation, output would change from the level that had been planned only if prices and wages differed substantially from those expectations, and only for as long as it took producers and wage setters to learn the new likely level of inflation. It was then not such a large leap to locate what became known as the expectations-augmented Phillips curve. This idea was championed by Milton Friedman, famously in his 1968 AEA address, where he told us that inflation had to surprise us in order to effect some change in labour supply, and hence output, but that this effect was strictly temporary, as households and firms would quickly come to accept and plan around the new inflation rate. In the language that was used at the time, there was a level of unemployment qua output at which inflation would not change or accelerate, and this became known, rather unattractively but memorably, as the non-accelerating inflation rate of unemployment, NAIRU.


The key point here is that we had moved decisively away from price pressures emerging from a given level of output, which could thus be controlled relatively simply with reference to output, to a world in which it was expectations of inflation that acted as a constraint on the ability of policy makers to bring about any given level of inflation, and where all that mattered were these beliefs. To this day, the role of expectations remains key to understanding macroeconomic models. Bob Lucas and Lionel Rapping went further in the late 1960s and early 1970s and suggested that it was only inflation `surprises', as they became known, relative to rationally formed expectations that would induce unplanned levels of aggregate output. The unplanned disruption in output would thus only last as long as the surprise. Just as a surprisingly empty or congested road will tend to induce deviations in the actual time of arrival at one's destination for dinner compared to the expected time of arrival, it should not lead to you being persistently late for dinner, as eventually you ought to learn to leave a tad earlier. The imperative to surprise had profound implications for the operation of monetary policy, as it motivated procedure by stealth and operation by surprise, on which more later, which by its nature implied quite forcefully that policy has strictly short-lived effects.
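The surprise mechanism can be sketched very simply: output deviates from plan in proportion to the gap between actual and expected inflation, and the deviation disappears as soon as expectations catch up. The slope b is a hypothetical parameter.

```python
# A sketch of the Lucas "surprise" supply curve: output deviates from its
# planned level only when actual inflation differs from rationally expected
# inflation, and only for as long as the surprise lasts. The slope b is a
# hypothetical illustration.

def output_gap(actual_pi, expected_pi, b=1.5):
    """Unplanned output, as a deviation from plan, induced by an inflation surprise."""
    return b * (actual_pi - expected_pi)

# Period 1: policy surprises with 2 points of extra inflation. Period 2:
# expectations have caught up, so the output effect vanishes even though
# inflation itself remains at the higher level.
gaps = [output_gap(4.0, 2.0), output_gap(4.0, 4.0)]
```

This is the congested-road point in code: the deviation lasts exactly as long as the surprise, and no longer.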


Whether a Phillips curve in terms of employment or a (Bob) Lucas surprise (aggregate) supply curve, the introduction of expectations into the framework was not only long overdue but also an explicit recognition that economist-engineers had to deal with a dynamic system that would itself both learn and be difficult to fool - a point to which we shall also return. The Phillips curve has had as many incarnations as some rather important Gods, to the extent that if you want to know something about an economist's views on the importance of stabilisation policy, ask him (or her) what kind of Phillips curve they believe in. The most recent incarnation is the so-called New Keynesian Phillips curve, which relates the current level of the output gap (or economy-wide marginal costs) to a point on the trade-off between inflation today and tomorrow. Some will plump for a purely forward-looking Phillips curve, some will prefer a backward-looking curve and some will prefer a hybrid which combines the two. I have also had conversations with prominent central bankers who deny that any such thing actually exists and certainly that any empirical estimates of any trade-off should not be used to inform monetary policy.


The trade-off of choice will tell you much about any given economist's views on how policy works. Let us imagine a policymaker called Schulze who believes only in a forward-looking Phillips curve. It turns out that this belief is actually shorthand for inflation that can only be stabilised by controlling our expectations of the current and future stream of output gaps (or marginal costs). In Schulze's world it is the expected path of future policy rates that is the key to determining the extent to which any shock to demand or costs that may come along today will impact on inflation today, because plans for future output will depend on the expected responses of policy makers. So let us suppose that there is some shock increase in expenditure today and/or tomorrow: this would not necessarily be inflationary if the policy maker pre-emptively raised interest rates by just enough to offset the present value of this shock. Actually it is quite self-serving, because if such a sequence of events turned out to be perfectly true, we mortals might not even see the shock.



Alternatively, policymaker Chatelain might believe in a backward-looking Phillips curve, in which inflation responds only to past output gaps, and so really thinks that inflation can only be controlled by the actual recent path of output. In this world we get an increase in expenditure today, but we wait until tomorrow before we do anything, because we need to see that output and inflation have risen before acting. Of course when we act, inflation only falls after we bring output down, so in this system there are lags galore, as output accelerates followed by inflation, with policy-makers slowly rousing themselves to bear down on output and inflation returning eventually to target. So comparing the two in the presence of high output and inflation today, what will these two people advise? Schulze will argue for plans to be announced about the need for tighter policy, so that in expectation output will be lower and this will actually lower inflation now, in his world. Chatelain though wants more tangible evidence of the effects of policy before he acts. He wants to see higher interest rates bring down output in reality, so that agents in the midst of a recession dare not seek inflationary wage rises or mark-ups, and thus disinflation has to be engineered by a painful operation. One might imagine that Schulze is an activist who always calls for interest rate action plans in response to economic news so that exuberant expectations can be tempered, whereas Chatelain may be a more considered type of policymaker because he thinks that actual output has to change to bring about changes in inflation, and this means the loss of jobs and livelihoods. A real policy maker, as well as examining what happens if they are right to behave as a Schulze or a Chatelain, also has to consider what happens if they behave in a particular way and turn out to be wrong, and so add up gains and losses across both possibilities.


There may then even be a third, hybrid type of person, let us call him Clegg, who may accept the case for expectations but also some possibility that agents need recessions or booms to learn their lessons about the state of the economy. Although seemingly mixed up, Clegg actually occupies the high ground, because to the extent that we do not know and probably cannot know whether Schulze or Chatelain is right, Clegg gives us an answer that will be least bad in the event of either Schulze or Chatelain winning the policy argument by force of character but getting the facts wrong. The Table below illustrates: alongside the beliefs we look at whether the world is actually one of backward-looking, forward-looking or hybrid behaviour: Schulze has a forward belief, Chatelain a backward and Clegg a hybrid. We show the rank of each belief with the world actually one of the three. And in the final two columns we show the average rank and the standard deviation of that rank across the equally likely outcomes. Even though the hybrid will not dominate if we know for certain which version of the Phillips curve is the ‘truth', if we do not know which one, and perhaps this may even change from time to time, it makes a lot of sense to be a hybrid thinker, because we do better on average and have the least variance in rank-ordered outcomes. One thing to remember in economics is that convex combinations tend to be a good hand versus even a straight flush.
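The logic of the table can be sketched with hypothetical rank assignments: in each possible "true" world the matching belief ranks first, the hybrid belief comes second when it does not match, and the opposite pure belief ranks last. The exact ranks are illustrative assumptions, not the lecture's own figures.

```python
# A sketch of the rank table: three beliefs, three equally likely "true"
# worlds (forward, backward, hybrid). The rank assignments below are
# hypothetical illustrations of the argument in the text.
from statistics import mean, pstdev

ranks = {                        # world:  forward, backward, hybrid
    "Schulze (forward)":    [1, 3, 2],
    "Chatelain (backward)": [3, 1, 3],
    "Clegg (hybrid)":       [2, 2, 1],
}

# Average rank and dispersion of rank across the equally likely worlds.
summary = {who: (round(mean(r), 2), round(pstdev(r), 2))
           for who, r in ranks.items()}
```

Under these assumed ranks Clegg never wins outright, yet has both the best average rank and the least variance in rank: the convex-combination point.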



The Phillips curve though is more than just a menu; at a general level it represents a metaphor for the structure of the economy as we understand it. Phillips himself created a number of machines that showed the circulation of demand through an economy and corresponded closely to the workings of a system of water pumps, from which a generation of post-war economists learnt that it was possible to think at some sectoral level but that these sectors added up to an aggregate flow of expenditures that would determine national income. So the Phillips curve really is just a statement of some economic structure that traces the impacts of shocks to output or inflation in the short run, and so when we consider the theory of economic policy, we perhaps ought to move on to consider the role of structure in determining optimal choice.



4.  Tinbergen-Theil


The apogee of the first scientific approach to monetary policy making was the development of the Tinbergen-Theil framework, developed by Dutch economists Jan Tinbergen and Henri Theil. This approach had three ingredients: the preferences of the policy maker over inflation and output growth, the structure of the economy - typically an estimated trade-off (Phillips curve) between inflation and output - and the policy instrument or rule. The basic idea was simple: decide social preferences towards inflation and/or output, estimate the relationship in aggregate between output and inflation, and work out the appropriate level for policy rates or, more accurately, the typical way in which policy would respond to evolutions in the economy, which we might call a feedback rule. With sufficient clarity on preferences, structure and instrument choice, even an optimal feedback rule could be written down and, in principle, followed.
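The three ingredients can be put together in a toy exercise: quadratic preferences over inflation and output, a one-equation Phillips-type structure, and the policy rate as the single instrument. All the coefficients below are hypothetical, and the model is deliberately minimal.

```python
# A sketch of a Tinbergen-Theil exercise with hypothetical coefficients:
#   output gap:  y  = -gamma * dr + demand_shock
#   inflation:   pi = phi * y + cost_shock
#   loss:        L  = pi**2 + lam * y**2
# Minimising L over y gives y* = -phi * cost_shock / (phi**2 + lam),
# and the feedback rule backs out the rate change dr that delivers y*.

def optimal_rate(demand_shock, cost_shock, gamma=0.8, phi=0.4, lam=1.0):
    """Optimal rate change, output gap and inflation for given shocks."""
    y_star = -phi * cost_shock / (phi ** 2 + lam)
    dr = (demand_shock - y_star) / gamma
    return dr, y_star, phi * y_star + cost_shock

# A pure demand shock can be fully offset: both objectives are met...
dr1, y1, pi1 = optimal_rate(demand_shock=1.0, cost_shock=0.0)
# ...but a cost shock forces a trade-off: one instrument, two objectives.
dr2, y2, pi2 = optimal_rate(demand_shock=0.0, cost_shock=1.0)
```

The demand-shock case shows the engineering dream at its best; the cost-shock case already hints at the counting principle discussed below, since a single instrument cannot zero both inflation and output at once.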

From the considerable and helpful perspective of time, it seems to me that post war optimism about building a better future through empirical observation and engineered calculation had clearly infected monetary policy-making. Another good and parallel example was the development of high-rise concrete blocks - we discovered that, although the views were often remarkable, the lack of personal and public space led to deterioration in the quality of life (and mutual respect within communities) but also may have involved the creation of unstable structures. The dangers of over-engineered monetary policy were actually not that dissimilar, rather than leading to more economic stability, the search for an optimal policy rule may have led to more instability as ‘fine-tuning’ demand to match supply might have asked too much of uncertain data and unstable economic structures.


In order to locate an optimal policy it was necessary to choose an economic structure that was thought to be stable and close enough to the behaviour of the actual economy that it might be called accurate. For this we needed a set of behavioural relationships between the components of aggregate demand and the factors that explained their behaviour, or what some economists call drivers. As Ragnar Frisch put it in his Nobel Prize lecture, delivered alongside that of Tinbergen, in 1970: "The English mathematician and economist Stanley Jevons (1835-1882) dreamed of the day when we would be able to quantify at least some of the laws and regularities of economics. Today - since the break-through of econometrics - this is not a dream anymore but a reality."[2] With no wish to denigrate Ragnar Frisch, it is surprising how similar this statement sounds to those of central bankers over the last decade who had started to take much of the credit for an apparent reduction in macroeconomic volatility. As ever, hubris lay in wait. We typically seem to be most confident just before a crisis hits.


Let us not go too far, too quickly though, because some of the contributions have proved durable. The Tinbergen-Theil framework allowed us to consider questions such as the number of degrees of freedom available to policy-makers. By which I mean that with one independent instrument of policy-making it was only possible to stabilise fully one argument in the preference function, that is, inflation or output. So typically, if we only had the short term policy rate under our control we would have to decide whether it was inflation or output that we wanted to stabilise in the face of stochastic shocks. This axiom became known as the Tinbergen counting principle: we can target only as many variables as we have independent instruments of monetary policy. Furthermore, it implied that if there were more objectives than available policy instruments then we would have to accept a trade-off between the objectives: that is, choose some point that did not meet both objectives fully but each only in some limited degree.

The analysis of the question of policy trade-offs can be viewed as a direct descendant of the idea of the Phillips curve in the first place. That is, having observed some tendency for inflation and output (or capacity) to offset each other, we can choose to exploit this trade-off systematically in terms of the level of output or inflation we might expect to see given a shock and our best guess of the impact of any policy action. But if we accept that we have little impact on the long run rate of output, we might, with considerably less authority (to control the economy), simply accept that in trying to stabilise the economy there will be some variance in inflation and output around their long run trends, and that the choice is really about minimising these variances, subject to some deep preferences about whether, as a society, we dislike inflation or output variance more. Again, with only one independent policy instrument available it is possible to categorise different policy responses (or rules) in terms of the resultant inflation-output variance frontier.[3] Either way, in terms of levels or variances, accepting that social welfare is some function of both inflation and output, as with most choices we will have to accept some trade-off.


The debate on monetary policy then shifted from the implications of any given instrument or set of instruments for the available set of choices to an analysis of the correct instrument itself. Let us suppose that the market for money clears at a given quantity and interest rate. Let us further assume that the current quantity of money is consistent with a level of aggregate demand that also meets price stability. So the policy rate, determined in the cleared money market, will then also be consistent with an acceptable degree of macroeconomic stability. Let us now consider some perturbations, aka shocks, in the money market that lead to temporary deviations in the policy rate from this initial, what might be termed natural, level and so act to alter aggregate demand. What should be done?


The key question is the extent to which shocks emanating from the money market can or should be stabilised by setting interest rates directly, or indeed whether an alternative method may be required, for example controlling the money stock directly. In the seminal analysis of this question, Bill Poole (1970) analysed the impact on the variance of output of either setting interest rates and letting money find its level, or controlling the quantity of the money supply and allowing interest rates to find their level - because we cannot set both the money supply and interest rates. The basic answer is to choose the instrument that leads to the least variance in the output objective. So if we first fix interest rates and changes in money demand lead to considerable output volatility, it might be better to try to control the quantity of money and let interest rates move; conversely, if we first try to fix the money supply but find that interest rates are very sensitive to changes in the demand for money, and hence output starts to become volatile, we might prefer to move back to a choice over the interest rate.
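Poole's comparison can be sketched in a small stochastic IS-LM model. Under an interest-rate peg, money-demand shocks are absorbed by the money stock and output inherits only the spending shocks; under a money-supply peg, the interest rate moves to clear the money market and both shocks feed into output. The coefficients, shock sizes and functional forms below are hypothetical.

```python
# A sketch of Poole's (1970) instrument-choice problem, with hypothetical
# coefficients, in a stochastic IS-LM model:
#   IS: y = -a*r + u    (spending shock u)
#   LM: m = y - c*r + v (money-demand shock v)
# Interest-rate peg (r fixed at 0):  y = u.
# Money-supply peg (m fixed at 0):   y = (c*u - a*v) / (c + a).
import random

def output_variances(sigma_u, sigma_v, a=1.0, c=1.0, n=50_000, seed=1):
    """Monte Carlo output variance under each instrument choice."""
    rng = random.Random(seed)
    rate_peg, money_peg = [], []
    for _ in range(n):
        u = rng.gauss(0, sigma_u)
        v = rng.gauss(0, sigma_v)
        rate_peg.append(u)                            # y under fixed r
        money_peg.append((c * u - a * v) / (c + a))   # y under fixed m
    var = lambda xs: sum(x * x for x in xs) / len(xs)
    return var(rate_peg), var(money_peg)

# Money-demand shocks dominate -> peg the interest rate.
v_rate, v_money = output_variances(sigma_u=0.5, sigma_v=2.0)
# Spending shocks dominate -> peg the money stock.
w_rate, w_money = output_variances(sigma_u=2.0, sigma_v=0.5)
```

Neither instrument wins in general: the ranking flips with the relative size of the two shocks, which is exactly Poole's point, and the shock variances are themselves parameters to be estimated.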


A simple analogy here is found at the gym. If we want to lose 300 calories, we can set the time we spend on the treadmill and let the speed be determined by the need to lose 300 calories. On the other hand, we can set the speed and let the time flow until we have lost the 300 calories. If we set both time and speed, we cannot - unless by accident - be sure of losing exactly 300 calories, so we have to choose one or the other, and we would choose the one most likely to result in 300 calories being lost.[4] If we introduce an element of uncertainty, by which I mean that we may be determined to lose 300 calories at the gym but may not be able to rely on the speedometer, we might be better fixing our time; or if we cannot rely on our clock, we might be better fixing our speed. The choice thus depends on the quality of the instruments, as well as on the fact that we simply want to lose 300 calories.


Poole also showed that, in general, neither method would necessarily stabilise the economy better than the other, as it depended on the relative magnitude of shocks in the money market or the interest rate sensitive sectors, as well as the sensitivity of output to these respective shocks. And, of course, shocks and sensitivities are parameters to be estimated and thus subject to considerable measurement uncertainty. An often overlooked implication of his analysis was that, in general, some partial use of both instruments - looking at the clock and monitoring speed but not being bound to both - was likely to stabilise output better than one instrument alone, a point to which we shall return, but one that is perhaps echoed by the recent experience of policy makers world-wide as they have had to augment interest rate tools with direct expansion of the central bank balance sheet, or the issuance of monetary liabilities. In other words, we are back to some convex combination, and this becomes problematic for the policy maker, even with one independent instrument, as it may matter quite a lot how we decide to control its level. That is, how we operate in the money markets to bring about market clearing at an acceptable level of both money and interest rates may itself be subject to considerable uncertainty.


To some extent, Robert Mundell gave us the end point of this type of reasoning. The Tinbergen-Theil framework helps us think about, given a simple economic structure, what kind of policy with what kind of instrument might bring about the best outcomes. To this Mundell made two contributions. First, let us imagine we have two independent instruments, let us call them monetary and fiscal policy. And let us suppose we have two objectives, inflation and output. Should we use a combination of both instruments to hit some combination of both targets - the nuclear parenting option with two parents and two children? Or ought we to assign one instrument to one target, like one parent to each child? The answer is, in general, the latter, as we might have an idea of which instrument (or parent) might do best with each objective and so, typically, assign monetary policy to inflation and fiscal policy to output stabilisation: a broad classification that still has much support across the profession. Of course, it is more complicated than that, as we shall later learn, because neither monetary and fiscal policy nor output and inflation are strictly independent of each other.
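The assignment idea can be sketched with a pair of hypothetical linear relationships in which the policy rate has the larger effect on inflation and the fiscal stance the larger effect on output. Assigning each instrument to the target it affects most, and adjusting gradually, steers both objectives home; the coefficients and the partial-adjustment scheme are illustrative assumptions.

```python
# A sketch of assignment with hypothetical coefficients. Monetary policy
# (rate r) has the larger effect on inflation; fiscal policy (stance g)
# has the larger effect on output:
#   pi = 6 - 1.0*r + 0.3*g
#   y  = 4 - 0.3*r + 1.0*g
# Each instrument chases the target on which it has comparative advantage.

def run_assignment(steps=200, k=0.5, pi_target=2.0, y_target=2.0):
    r = g = 0.0
    for _ in range(steps):
        pi = 6 - 1.0 * r + 0.3 * g
        y = 4 - 0.3 * r + 1.0 * g
        r += k * (pi - pi_target)   # monetary policy leans against inflation
        g -= k * (y - y_target)     # fiscal policy leans against excess output
    return pi, y

pi_final, y_final = run_assignment()
```

With this assignment the gradual adjustment converges on both targets; reversing it, so that each instrument chases the target it affects least, would make the same scheme unstable, which is the force of the one-parent-per-child answer.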



His second contribution was to ask when two or more economies should decide to use the same policy instrument, for which the answer was when their respective economies suffer or benefit from similar synchronised aggregate demand shocks. For this answer he was awarded the Nobel Prize in 1999, exactly 30 years after Jan Tinbergen. And this answer inverts the earlier reasoning. Remember, we need many independent instruments to meet many independent objectives. But we might only need one instrument if the objectives, to an acceptable level of compromise, are not independent. That argument will go through for inflation and output as much as it might for a set of economies considering monetary union. In this sense, he simply pointed out that if economies were highly synchronised, then they may not lose very much in social welfare terms from having a single interest rate across the countries, set centrally. As early as 1961 James Meade thought they were not sufficiently well synchronised, and in his famous paper on optimal currency areas Mundell writes that “Meade...argues that the conditions for a common currency in Western Europe do not exist, and that, especially because of the lack of labour mobility, a system of flexible exchange rates would be more effective in promoting balance-of-payments equilibrium and internal stability”.


This point about sufficient synchronisation was pursued and investigated to an absurd degree by many analysts as they tried to answer Mundell's question with empirical investigation in the run-up to the creation of European Economic and Monetary Union.[5] And it was found that many further questions needed to be answered. Let me list a few. What matters is not whether all the exogenous shocks were common, but whether they arose from the kind of demand shocks that monetary policy was good at stabilising through interest rates. If they arose from other kinds of sources, for example productivity or fiscal policy or indeed monetary policy itself, then any synchronisation or not was essentially irrelevant. Beyond establishing that spending shocks were the over-riding source, it was therefore necessary to establish whether the primitive shocks, particularly real shocks, were strongly correlated across the EU countries - to the extent that we might then treat independent states as one state. The question then was how correlated was sufficiently correlated? Finally, even if shocks are of the type that can be stabilised and are sufficiently correlated, we want to know that the response to a monetary policy action would be similar across these states, so that we would not be creating divergence by adopting the same interest rates. The empirical case could not, I think, be made one way or the other, which strikes me as something of a failure for economic science.

As we drill deeper into the implications of any guiding principle for setting monetary policy, we find that its application tends to be a little more difficult than the rhetorical echo of an original paper, or insight, might imply. And so we find a train of thought that moves us radically away from the calculation of optimal operating procedures and towards very simple policy prescriptions, or what might be called heuristics.



5.  The Limits of our Knowledge


The engineering solution that the Tinbergen-Theil framework hankered after was, I think, a subject of distaste for many: it asked too much of our state of knowledge and, in particular, of our ability to measure accurately the state of the economy and our likely impact on it as policymakers. That is not to say that the policy stabilisation problem was one from which we ought, as economists, to walk away; but it is one that should have at its heart a considerable degree of circumspection: cura te ipsum, as one might say to a physician.


A Popperian challenge was issued early on in this process: how can we construct monetary policy given the poor state of knowledge attainable by our heroic social engineers? Milton Friedman (1959) in his Program for Monetary Stability made an astounding yet incredibly simple point. If we imagine the economy moving from year to year it will have a defined variance, some movement around its mean. For example, output may grow at 2% on average per year but may be known to vary by some 2% either side of this mean 95% of the time. The mean and variance of output growth in the absence of any policy action are then taken as given; let us call this exogenous output growth. Now if we apply some policy function to that rate of economic growth, economic growth will consequently take on the characteristics of both the exogenous rate of output growth and the impact on output from the policy instrument. Output, as we observe it, will thus be the property of a joint function: that of exogenous output plus the response of output with respect to any policy.


Output, post-policy, may thus tend to have a higher variance, as it will incorporate the variance of exogenous output and the variance induced by policy. Output growth may then actually become more uncertain, with a higher variance, than before unless we can be pretty sure that the impact of policy on exogenous output growth is significantly stabilising. In practice this means that exogenous output growth moves negatively, and consistently so, with the growth in output induced by the policy. That is, we have to be sure that, despite the observation, execution and transmission lags of policy, and our uncertainty about policy multipliers, when we cut interest rates output will rise in time to offset a recession, and when we raise them output will fall in time to prevent a boom. A moment's thought will bring to mind all the difficulties that this kind of negative, or countercyclical, policy has to overcome. As much as any other insight based on his work with Anna Schwartz, published in 1963, on the importance of maintaining medium-term growth in money in line with the growth in planned nominal expenditures, this little warning about the danger of stirring up more rather than less volatility when using monetary policy has always stayed clearest in my mind.
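Friedman's warning amounts to a statement about variances: observed output combines exogenous output with the policy response, so policy lowers the overall variance only if the two move together negatively and reliably. A minimal sketch, in which a "timing noise" term stands in for observation, execution and transmission lags (my assumption, not Friedman's own formulation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
exog = rng.normal(0.0, 1.0, n)          # exogenous output growth shocks

def post_policy_variance(timing_noise_sd):
    """Variance of output when policy tries to offset the shock but
    observes it with error - a crude stand-in for the lags and
    multiplier uncertainty in the text. Illustrative only."""
    observed = exog + rng.normal(0.0, timing_noise_sd, n)
    policy = -observed                   # attempted full offset
    return np.var(exog + policy)

print(f"no policy:         var = {np.var(exog):.2f}")
print(f"well-timed policy: var = {post_policy_variance(0.3):.2f}")
print(f"badly timed:       var = {post_policy_variance(2.0):.2f}")
```

When the policymaker's reading of the shock is accurate, the offset works and the variance falls; when the reading is poor enough, the "stabilising" policy leaves output more volatile than doing nothing at all, which is precisely the danger Friedman flagged.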



William Brainard (1967) took this idea about how to minimise the variance of the objective variable further in an important way. From the Tinbergen-Theil perspective, one instrument could be used to stabilise one objective. But given uncertainty in outcomes, it became a matter of probability whether an instrument of policy could be manipulated in such a manner as to bring about the preferred stabilising, rather than destabilising, outcome. But what if, in a manner similar to Milton Friedman's policy maker, we do not know the precise impact of a given change in the policy instrument on the objective? It is one thing to try to identify the shock and work out the best response; it is quite another if the response itself induces even more uncertainty in expected outcomes. In Brainard's case the insight is that, because any estimated response of the target variable to a change in the policy instrument is subject to considerable uncertainty, if we try to close the gap between the current value of the objective and our target for that objective, the only definite outcome is that we will be injecting some expected variance of the objective into the system.


And this insight introduced another trade-off. Even if on average you can calculate the best response to close the gap between where you want output to be and where it currently is, because that very action is costly in terms of the expected variance of output, you would be better off, as would society, if you chose to do a little less. That is, the policy maker will be willing to trade off a small miss of the target for a little less expected increase in variance. At a stroke the fine-tuner's ability to control outcomes was, once again, undermined. It was not only that ill-timed monetary policy may end up stoking rather than attenuating the business cycle; it turned out that the policy maker ought to restrict him or herself to baby steps when stabilising the economy, because it could never be known exactly what impact a given change in rates, or some other policy instrument, might have on the economy. A new term entered the language of monetary policy makers, who, as well as dealing with additive or exogenous shocks, now also had to consider the probability that their own actions, when interacting with the economy, were likely to lead to multiplicative uncertainty.
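Brainard's attenuation result can be stated in two lines. If the target gap is x and the instrument u works through a multiplier b with mean b_bar and variance sigma_b squared, minimising the expected squared miss E[(x + b·u)²] gives u = -b_bar·x / (b_bar² + sigma_b²): the greater the uncertainty about the multiplier, the smaller the optimal response. A minimal sketch, with parameter values that are my own assumptions:

```python
def brainard_policy(gap, b_bar, sigma_b):
    """Optimal instrument setting under multiplicative uncertainty.
    Minimising E[(gap + b*u)^2] = (gap + b_bar*u)^2 + sigma_b^2 * u^2
    over u gives the attenuated response below."""
    return -b_bar * gap / (b_bar**2 + sigma_b**2)

gap = 2.0        # current miss of the objective (assumed)
b_bar = 1.0      # mean policy multiplier (assumed)

for sigma_b in (0.0, 0.5, 1.0):
    u = brainard_policy(gap, b_bar, sigma_b)
    print(f"sigma_b = {sigma_b:.1f}  response = {u:+.2f}")
```

With no multiplier uncertainty the policymaker simply closes the gap (certainty equivalence); as sigma_b rises the response shrinks towards zero - the "baby steps" of the text.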


Worse was to come for the economist-engineers. In a series of hugely influential papers, Bob Lucas, with co-authors such as Lionel Rapping and Tom Sargent, considered the implications of rational expectations for policy making. As already explained, it was first necessary to try to locate the decision rules which underpinned the Phillips curve. Lucas suggested a rather attractive parable of workers whose supply curves were a function of the relative rather than the absolute price of their production, and who simply could not tell whether any increase in the price of their product was an increase in all prices or an increase in their own product's price alone. With no central agency to tell them which was which, it was entirely likely that this confusion about absolute versus relative price changes would induce unwarranted increases in production. That is, agents might increase production of a good, thinking that the higher price was signalling an incentive for more output. But eventually they would discover their misperception of the relative price, and production would fall back to its previous level.


This kind of thinking captured both the Humean observation that the overall price level, or inflation, does not matter in the long run, as people learn about the true source of the shock, and the possibility that it might matter in the short run, as people may make mistakes about value in the presence of price jumps.[6] In fact, the implication was that any short-run increase in inflation mattered for output only to the extent that it was thought to reflect relative rather than absolute price changes. And to the extent that large-scale monetary policy operations were always concerned with changes in the absolute price level, their increasing use would actually bias agents away from increasing production in the face of inflation shocks, as agents began to assume that any change in price was an absolute rather than a relative price change.
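The signal-extraction logic here can be made concrete. If a producer's observed price mixes a relative component (variance sigma_rel²) and an economy-wide component (variance sigma_abs²), the optimal inference attributes a share sigma_rel² / (sigma_rel² + sigma_abs²) of any price movement to relative prices, and supply responds only to that share. A stylised sketch, with a functional form and parameters that are assumptions rather than Lucas's exact model:

```python
def supply_response(sigma_rel, sigma_abs, gamma=1.0):
    """Signal-extraction coefficient in a Lucas-style islands story.
    A producer sees only her own price, a mix of relative (sigma_rel)
    and economy-wide (sigma_abs) movements, and raises output only in
    proportion to the share she attributes to relative prices.
    gamma is an assumed underlying supply elasticity."""
    share_relative = sigma_rel**2 / (sigma_rel**2 + sigma_abs**2)
    return gamma * share_relative

# With little aggregate price noise, a price rise looks relative and
# draws out production; heavy use of monetary policy raises sigma_abs
# and flattens the response to any given price change.
print(supply_response(1.0, 0.1))
print(supply_response(1.0, 2.0))
```

This is the bias described above: the more the authorities move the absolute price level, the less any observed price change is read as a production signal.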


These rational agents were going to be difficult to fool with systematic monetary policy. And so the Phillips curve was not a trade-off between capacity and aggregate price pressure but the accidental result of the inferences agents drew about the origin of any observed price shocks.


The next insight was even more astounding. Following Tinbergen-Theil it had been the practice to estimate the structural relationships within an economy and use these structural relationships to derive the optimal policy rule. Considerable effort had been put into working out how to estimate key economic relationships from a system of equations and identify parameters in a manner that would allow some calculus of policy alternatives. But it turned out that this econometric evaluation of policy was fundamentally flawed. Once we tried to exploit the parameters of a previously estimated econometric relationship and change the policy rule in some manner, agents would tend to factor the new rule into their calculations, and the previously estimated relationship would no longer hold.
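The point can be demonstrated in a few lines of simulation - an illustrative toy of my own, not an estimate of any actual economy. Fit a reduced-form output-inflation relation under one policy rule, use it to forecast the gain from a more inflationary rule, and then watch the gain evaporate once agents re-base their expectations under the new rule:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.8                  # assumed response to inflation *surprises*

def simulate(mu, n=50_000):
    """Output under a surprise supply curve: only unexpected inflation
    (pi - mu) moves output. Agents know the policy rule, so expected
    inflation equals the rule's mean mu. Illustrative only."""
    pi = mu + rng.normal(0.0, 1.0, n)
    y = theta * (pi - mu) + rng.normal(0.0, 0.2, n)
    return pi, y

# Estimate the reduced-form y-on-pi relation under a low-inflation rule.
pi, y = simulate(mu=2.0)
slope, intercept = np.polyfit(pi, y, 1)
predicted_gain = slope * 3.0   # naive forecast of raising mu from 2 to 5

# Run the new rule: agents re-base expectations and the gain vanishes.
pi_new, y_new = simulate(mu=5.0)
actual_gain = y_new.mean() - y.mean()
print(f"predicted gain = {predicted_gain:.2f}, actual gain = {actual_gain:.2f}")
```

The fitted slope is a perfectly good description of the old data, but it is not a structural parameter the policymaker can exploit: the relationship dissolves as soon as the rule changes.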


Could it be that the profession's contortions with different forms of the Phillips curve were not simply an attempt to locate the truth about the trade-off between capacity and inflation, but were themselves a result of the way the economy changed in response to changes in policy settings? Even worse, could it be that systematic policy might be pretty much ineffective, and even that the costs of output volatility were not that great anyway? And that all policy should do is explain its plans, in the form of Rules, to sceptical households and firms, so that these plans and objectives would, at best, simply be incorporated into the plans of those people? But before I tell you where we went next, and started to prefer Rules, let me first tell you what went wrong with all this new-fangled ‘fine tuning’.



Professor Jagjit Chadha, 2015




Brainard, W. C., (1967). Uncertainty and the Effectiveness of Policy, American Economic Review, 57(2), pp411-425.

Friedman, M., (1959). A Program for Monetary Stability. New York: Fordham University Press.

Friedman, M., (1968). The Role of Monetary Policy, American Economic Review, 58(1), pp1-17.

Friedman, M., and A. J. Schwartz, (1963). A Monetary History of the United States, 1867--1960. Princeton: Princeton University Press for NBER.

Frisch, R., (1970). From Utopian Theory to Practical Applications: the case of econometrics, Lecture to the memory of Alfred Nobel, June 17, 1970.

Fuhrer, J., (1997). Inflation/Output Variance Trade-offs and Optimal Monetary Policy. Journal of Money, Credit and Banking. 29(2), pp214-234.

Hume, D., (1777). Essays Moral, Political, Literary, Part II, Essay III, Of Money.

Lucas, R., (1976). Econometric Policy Evaluation: A Critique, in K. Brunner and A. Meltzer, The Phillips Curve and Labor Markets. Carnegie-Rochester, Conference Series on Public Policy, pp19--46.

Meade, J. E., (1961). The Case for Variable Exchange Rates, Three Banks Review, 27, pp3-27.

Mundell, R. A., (1961). A Theory of Optimum Currency Areas, American Economic Review, 51(4), pp657-665.

Phillips, A. W., (1958). The Relation Between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861-1957, Economica, 25(100), pp283-299. 

Poole, W., (1970). Optimal Choice of Monetary Policy Instruments in a Simple Stochastic Macro Model, Quarterly Journal of Economics, 84(2), pp197-216.

Popper, K., (1959). The Logic of Scientific Discovery. New York: Harper and Row.

Theil, H., (1964). Optimal Decision Rules for Government and Industry. Amsterdam: North-Holland.

Tinbergen, J., (1966). Economic Policy: Principles and Design. Amsterdam: North-Holland.


[1]So rather than controlling the first derivative of the price level, we then became interested in the second derivative, or the rate at which inflation changed.

[2]From Utopian Theory to Practical Applications: the case of econometrics, 1970, p.12.

[3]See Fuhrer, 1997.

[4]A keen photographer might talk about aperture priority or shutter priority for manually evaluating correct exposures but younger people may not have a clue about what that means.

[5]I must point to my own culpability here!

[6]David Hume (1777) put it very well: ‘that it is of no manner of consequence, with regard to the domestic happiness of a state, whether money be in a greater or less quantity’.

