Time, for a Change: Introducing Irreversible Time in Economics
Dr Ole Peters
This lecture explores the remarkable consequences of using modern mathematics, developed to tackle problems in statistical physics in the late 19th century and refined over the last century, in economic settings. A deeper understanding of risk, market stability, and wealth inequality emerges.
I present two illustrative problems from economics: the leverage problem, "by how much should an investment be leveraged," and the St Petersburg paradox. Neither can be solved with the concepts of randomness prevalent in economics today. However, owing to 20th-century developments in mathematics these problems have complete formal solutions that agree with our intuition (Peters 2011a, 2011b). The notion of risk, presented as the consequence of an uncertain future and irreversible time, features prominently.
Our conceptual understanding of randomness underwent a silent revolution in the late 19th century. Prior to this, formal treatments of randomness consisted of counting favourable instances in a suitable set of possibilities. But the development of statistical mechanics, beginning in the 1850s, forced a refinement of our concepts. Crucially, it was recognised that whether possibilities exist is often irrelevant. What matters is what actually materialises. This finds expression in a different role of time: different states of the universe can be sampled over time, and not just as a set of hypothetical possibilities.
We are then faced with a question: is an average taken over time in a single system identical to an average over a suitable set of hypothetical possibilities (hereafter the ensemble average)? For systems in equilibrium the answer is generally yes. This property, known as ergodicity, means that the time and ensemble averages can be interchanged, which is useful as time averages in physical phenomena are typically harder to calculate. Non-equilibrium systems, however, are generally non-ergodic and their analysis requires the novel mathematical techniques alluded to above.
Economic systems are usually not well described as equilibrium systems and these novel techniques are, therefore, appropriate. However, having used probabilistic descriptions since the 1650s, economics, to its detriment, retains its original concepts of randomness to the present day.
When faced with a favourable investment, how much of our wealth should we commit to it? Clearly the answer depends on suitably defined notions of risk (how likely we are to lose, and how much) and reward (how much we stand to gain). I present a simple gambling game in which a coin is tossed. Heads we win 50% of our wager; tails we lose 40%. The asymmetry in the payouts favours us, so we ought to play. The question is what fraction, or leverage, of our wealth should we bet? There is a natural tension here: bet too little and we won’t make the best of the opportunity; too much and we will suffer unacceptable losses.
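The gap between the two notions of average in this game can be seen in a few lines of simulation. The sketch below is my own illustration, not code from the lecture; the numbers are the 50%/40% payouts described above.

```python
import random

# Coin-toss game from the lecture: heads multiplies our wealth by 1.5
# (win 50% of the wager), tails by 0.6 (lose 40%).
HEADS, TAILS = 1.5, 0.6

# Ensemble average of one toss: the average over many parallel players.
ensemble_factor = 0.5 * HEADS + 0.5 * TAILS   # 1.05, i.e. +5% per toss

# Time average: one player's long-run growth factor per toss is the
# geometric mean of the two outcomes.
time_factor = (HEADS * TAILS) ** 0.5          # ~0.949, i.e. about -5% per toss

print(f"ensemble average per toss: {ensemble_factor:.3f}")
print(f"time average per toss:     {time_factor:.3f}")

# One player, many tosses: the realised growth rate tracks the time
# average, not the ensemble average.
random.seed(0)
wealth, n = 1.0, 10_000
for _ in range(n):
    wealth *= HEADS if random.random() < 0.5 else TAILS
print(f"realised per-toss growth factor: {wealth ** (1 / n):.3f}")
```

The ensemble average says each toss grows wealth by 5%, yet a single player who keeps reinvesting sees wealth shrink by roughly 5% per toss in the long run: the game is non-ergodic.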
The solution of the leverage problem is well known to professional gamblers, under the name of the Kelly criterion, famously used by Ed Thorp to “beat the dealer” at the Las Vegas blackjack tables. It can be applied not only to traditional gambling games but also to any asset, or indeed any quantity, whose value has a possibility of going up or down over time, such as a stock price. The solution can be interpreted in many different ways, in gambling typically in the language of information theory.
I pointed out an alternative interpretation (Peters 2011a) which recognised that this was, in fact, a direct application of ergodicity and inextricably bound to our notion of time. If we consider, as Kelly did, how we would play if the game were repeated, then it becomes natural to look not at the average result of a single toss but instead at how our wealth grows, on average, over time. The game is non-ergodic so these results are different, sometimes starkly so.
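For the 50/40 coin game above, this can be made explicit with a short Kelly-style calculation (a sketch in my own notation, not reproduced from the lecture). Wagering a fraction ℓ of wealth on each toss gives the time-average growth rate

```latex
g(\ell) = \tfrac{1}{2}\ln(1 + 0.5\,\ell) + \tfrac{1}{2}\ln(1 - 0.4\,\ell),
\qquad
g'(\ell) = 0 \;\Rightarrow\; \ell^{*} = \tfrac{1}{4}.
```

Maximising g gives the optimal fraction ℓ* = 0.25: wager a quarter of one's wealth. Staking everything (ℓ = 1) gives g(1) = ½ ln 0.9 < 0, which means ruin in the long run, even though the ensemble average of a single toss is then at its largest.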
We can now substantiate our intuitions about risk. The reason we do not apply too high a leverage is that the large losses thereby sustained will limit or, in the case of bankruptcy, completely exclude us from continued participation. Too low and our participation is also limited, albeit deliberately.
This conceptual insight changes the appearance of Kelly's, Thorp’s, and others’ work. Their contribution, fiercely rejected by leading economists in the 1960s and 1970s, is not an oddity, a one-off solution to an otherwise unsolvable problem. Instead, it is the reflection of a deeply meaningful conceptual shift that allows the solution of a host of other problems.
St Petersburg paradox
The St Petersburg paradox is a famous thought experiment in economics, proposed by Nicolas Bernoulli in 1713, which highlights how human intuition is inconsistent with a mathematical framework based purely on ensemble averages. It invites us to consider how much we would pay for a ticket in the following lottery. A coin is tossed repeatedly and, for each consecutive head that appears, the prize, which starts at £1, is doubled. As soon as a tail appears, the game stops and the prize is awarded.
It is straightforward to show that the ensemble average prize for the lottery is infinite, due to the presence of astronomically large and unlikely payouts. Naively, then, we should be prepared to pay any price, our entire wealth or even more, to enter. The paradox is, of course, that nobody would. Indeed, most people wouldn’t pay more than £10.
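A quick Monte Carlo sketch (my own illustration, not code from the lecture) shows both sides of the paradox: the sample mean is dominated by rare enormous prizes and never settles down, while the typical game pays only a pound or two.

```python
import random

# St Petersburg lottery as described above: the prize starts at £1
# and doubles for each head; the first tail ends the game.
def play() -> int:
    prize = 1
    while random.random() < 0.5:  # head: double the prize, toss again
        prize *= 2
    return prize

random.seed(1)
for n in (10**2, 10**4, 10**6):
    sample = [play() for _ in range(n)]
    mean = sum(sample) / n
    median = sorted(sample)[n // 2]
    print(f"n = {n:>9,}:  mean prize = {mean:8.2f},  median prize = {median}")
```

Because the ensemble average is infinite, enlarging the sample does not stabilise the mean, which is dragged around by occasional astronomical prizes; the median, by contrast, stays at £1 or £2, much closer to what people are actually prepared to pay.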
The classical resolution of the paradox, presented by Daniel Bernoulli in 1738, is to invoke what modern economists refer to as utility functions. The idea is that the value we assign to money depends on how wealthy we already are: £100 to someone with nothing is far more useful than the same amount to a millionaire. Converting from money to utility reduces the contributions from the prizes, particularly the astronomical ones, such that the ensemble average becomes finite. After all, there’s not much difference between having half the money in the world and all the money in the world.
This approach is arbitrary in that there is a plethora of utility functions which will reweight the prizes to remove the infinity. This is often presented as a virtue in that it allows the particular financial preferences of individuals to be encoded as personalised utility functions. However, that we as humans act to maximise our ensemble-averaged utility is as unfounded an assumption as the naive assumption that we act to maximise our ensemble-averaged wealth. While it may resolve the paradox mathematically, it has no a priori justification and does not enrich our understanding about why we don’t pay any price for the ticket. As such, it adds little to the initial observation that we don’t.
I presented an alternative resolution (Peters 2011b) which rests instead on a proper treatment of irreversible time and on the non-ergodicity of the lottery. As with the leverage problem, I argue that what really matters to us as humans is what actually happens as our lives unfold over time. The resolution of the paradox flows naturally from considering the time average growth rate of an investment in the lottery. This rigorous treatment also sheds light on an error that had lain hidden for 77 years in a famous 1934 paper by Karl Menger, considered a classic by leading economists and one of the pillars of utility theory in modern economics (Peters 2011c).
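In sketch form (my notation; see Peters 2011b for the full treatment), the criterion is this: a player with wealth w, asked to pay c for a ticket, computes the time-average growth rate of wealth over repeated rounds,

```latex
\bar{g} = \sum_{m=0}^{\infty} \frac{1}{2^{\,m+1}}
          \ln\!\left( \frac{w - c + 2^{m}}{w} \right),
```

where the prize is 2^m pounds with probability 2^-(m+1), and buys only if this growth rate is positive. The sum is finite for any finite wealth, and the break-even price increases with w, matching the intuition that the lottery is worth more to a rich player, without invoking any utility function.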
The two problems I have presented in this lecture scratch the surface of what may be possible by grasping the nettle of irreversible time and non-ergodicity in economic systems. I shall briefly mention two directions being actively pursued.
A recent study (Peters & Adamou 2011) followed on from my perspective of the leverage problem by asking the question: are there levels of optimal leverage for which markets become unstable? This was motivated in large part by the financial bubble, inflated by highly leveraged investments in assets whose riskiness was misunderstood, which burst in 2007-08.
The argument runs that if the risk and reward associated with an asset make it optimal to borrow money to invest in it (that is, apply a leverage greater than one) then all market participants should be borrowing to invest. But who will provide the loans and who will sell the assets? Likewise, if market conditions are such that everyone should be borrowing assets to sell them short, then there will be no-one to lend the assets and no-one to buy them back.
In these situations, the theory of supply and demand tells us that certain things should occur. If there is great demand for assets and borrowed cash to buy them, then the prices of both will rise and the reward associated with the investment will be reduced. Furthermore, borrowing money to invest leads to the possibility of negative equity and margin calls, which increase volatility and, therefore, the risk component of the investment. Thus emerges a tendency for the market to self-correct the risk and reward such that it is no longer optimal to borrow to invest. The converse arguments apply to situations in which it is optimal to borrow stock to sell short: the market will self-correct to make this non-optimal too.
This leads to a new type of efficient market hypothesis which says, in essence, that it is not possible over time to beat the market using the simple strategy of leveraging or deleveraging one’s investment in it. The optimal leverage to apply to investments should lie between zero (all one’s money in cash) and one (all one’s money in the asset), ideally the latter. This result was verified empirically against 55 years of daily data from American stock markets.
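In the continuous-time setting used in that study (standard geometric Brownian motion; the notation here is mine rather than quoted from the lecture), an asset with drift μ and volatility σ, held at leverage ℓ against a riskless rate r, has time-average growth rate

```latex
g(\ell) = r + \ell(\mu - r) - \tfrac{1}{2}\,\ell^{2}\sigma^{2},
\qquad
\ell_{\text{opt}} = \frac{\mu - r}{\sigma^{2}}.
```

Stochastic market efficiency is then the hypothesis that prices and volatility adjust so that ℓ_opt is attracted towards 1.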
This result has enormous consequences for market stability and economic policy, since it implies that when optimal leverage strays outside this range it will, eventually, have to return. If it strays a long way, as it did in the previous decade’s financial bubble, then this return may take the form of a disastrous crash. It also tells us about the role that policies implemented by governments, central banks, and regulators can have in this process: if they incentivise leverage, for example by setting low interest rates, then they can promote instability.
It is not only stock markets that can be modelled as non-ergodic systems: the wealth of individuals and entire populations can also be treated as quantities that move up and down in a random way over time. Accordingly, the techniques that have been developed for the former can be applied to the latter.
At present the most widely reported measure of national economic growth is the percentage change in the Gross Domestic Product or GDP. This measure has the advantages of simplicity and easy evaluation via national accounts. However, it has many drawbacks as a measure of economic well-being, one of which is that it is insensitive to the income distribution. When GDP grows, it is irrelevant if that growth has been shared across the population or, to take an extreme example, has been concentrated entirely in a single individual. GDP is an aggregate measure, much like ensemble averages, and as such can be very misleading.
Recent work, to be released next year, shows that by applying the analogy of a time average to the growth of individual incomes, it is possible to construct an alternative measure to GDP which not only incorporates growth but also how the growth is distributed and, therefore, actually experienced by a population.
Little of the mathematics I have presented is new. Many of the results, such as the solution to the leverage problem, are also old. This is not so much a story of ever-increasing scientific complexity and sophistication, of technical and technological accomplishments. Rather, it is an attempt to recast and reinterpret our understanding of economics in a new light. By addressing historical misconceptions, and the occasional error, about how we should think about economics in the context of time and non-ergodic systems, it is hoped that a deeper understanding will emerge which will allow us to tackle economic problems which have been hitherto inaccessible under the prevailing paradigms for risk and randomness.
Peters, O. (2011a) Optimal leverage from non-ergodicity. Quantitative Finance, 11, 1593-1602.
Peters, O. (2011b) The time resolution of the St Petersburg paradox. Philosophical Transactions of the Royal Society A, 369, 4913-4931.
Peters, O. (2011c) Menger 1934 revisited. arXiv, 1110.1578.
Peters, O. & Adamou, A. T. I. (2011) Stochastic market efficiency. arXiv, 1101.4548.
© Dr Ole Peters 2012
This event was on Thu, 22 Nov 2012