PHYSICS V. MATHEMATICS:
RIGOR (MORTIS) AND OTHER IMPEDIMENTS TO
UNDERSTANDING FINANCIAL MARKETS
Professor Doyne Farmer
Thank you. Well, knowing nothing about history, I thought I should at least talk about something that nobody knew more about than me, so I decided to talk about the future, since none of us really knows very much about it, so I can freely speculate.
I'm going to begin by sort of asking, at a very high level, I mean, why do we have this whole system of markets and prices - what are they about, what are prices for? I'm actually curious what people say a little bit - I mean, somebody tell me, what are prices for? What's the purpose? I mean, if you think about it in the same kind of vein as...you could say, well, why do we have an immune system? We have an immune system to keep out invading things that might take over. So it's like, you know, why do we have an army? Well, we have an army, at least we hope, maybe not - maybe in Europe you have one, in the United States, it's a little harder to say - to keep out other people from entering. Well, why do we have a financial...in that same kind of mode, what would be the analogy for prices in markets? Any thoughts?
Audience Member: The efficient allocation of resources.
Allocation of resources - I would agree with that. I just want to say it a little bit differently: when we're allocating resources, what are we really doing? I would argue we're setting our goals as a society, because of course, if the price of pork bellies goes up, then people go out and raise more hogs, and so it's a self-organised way of allocating resources, but also of deciding, as a society, what we're going to do, without anybody actually making the decision. It's not the only way we do that at all, but it's at least one powerful way. As I said, it's a self-organised method for directing the activities of individuals. As emphasised by Hayek and others, it's an efficient method for processing information and making a distributed set of decisions. Many have argued that this is why the Soviet Union, and socialist economies in general, failed: because of the inability to do this kind of thing correctly.
I think it's remarkable how it's highly specialised and geographically concentrated, and particularly now, increasingly automated. We heard a lot about that in the last talk.
I would argue that the most entertaining thing to do in any of the major cities in the world - well, at least London or Chicago or New York - is to actually go to the market. If you've never been, for example, to the Chicago Board of Exchange, sort of the best of them all, it's a complete zoo! You have people running from one place to another, you have people yelling and screaming, all in close proximity, making bizarre hand signals, and it's particularly interesting to be there when there's some real new information. I noticed this because the instant something's really happening, you immediately hear it. You know that you have to look around - you hear a change in the background noise, and immediately, everybody looks up at the boards to see what's going on, because you can literally feel the waves go across. In fact, we once thought about having a microphone on the floor just to measure the background noise level so we'd know as early as possible when something's happening.
Actually, here in London, you have the London Metals Exchange, which is the last hold-out in London of this kind of thing, which, if you haven't been there, I highly recommend trying to talk somebody into giving you a tour. I found it amazing.
I think the other thing I'm going to try and address today is how well it works, because I think the story that's told in the academic literature is at variance with what I would say is really going on. So, on that note, I'm going to jump to another topic, which is market efficiency.
In the standard literature, there are three kinds of market efficiency. One is informational efficiency - are prices predictable? Then there's arbitrage efficiency - can you make profits without taking risk, or let's say, can one kind of strategy make better profits than another strategy, holding some variable like risk constant? And finally there's allocative efficiency - are we making sensible allocations, in the sense of, say, Pareto efficiency, which says you can't make somebody better off without making somebody else worse off? Do markets actually achieve a state of high allocative efficiency? That last one is the most important of all.
Now, at a conference we had in Santa Fe in 2000, where we gathered practitioners, academics, physicists and biologists, it was kind of striking what the famous practitioners said when we asked, "How efficient are markets?" They said, "Well, about 98% efficient," but when pushed to explain that, nobody had a clear view of it. I personally think the figure is probably closer to 20 or 30%, but, you know, until I can present a hard way to measure that, I can't really say I'm right and they're wrong - but nor can they.
The still dominant theory of economics - and this is according to a poll taken a few years ago, brought to my attention by my colleague Mauro Gallegati - is rational choice in a neoclassical form, namely the idea that all agents are omniscient. Why do I say it that way? Well, because in a rational model, it's not just that the agents are really smart; it's that they have access to the correct models of the world, and they, in a sense, know what everybody else is doing, so in that sense they're omniscient. They're selfish; they maximise their utility, under what I would argue are highly unrealistic utility functions, judging by psychological surveys of human behaviour. They assume that markets clear; that people are price-takers, that is, they accept the prices that are offered without affecting those prices; and that the result is a Nash equilibrium, where it's not possible for any agent to modify their strategy and, at least locally, do better. Now, the reason I mention that: in this poll, 92.2% of economists support this. Mauro Gallegati pointed out that there's actually another poll, of how many people believe that aliens have landed on the Earth, and that's 7.8% - so the economists who don't support rational choice as the main tool are on a par with the people who believe that aliens have landed on the Earth.
Now, in finance, what this means is that all information is properly incorporated into prices; that new information is therefore, by definition, random; that prices are perfectly efficient; and that changes in future prices are random. It implies both informational and arbitrage efficiency. Now, it's not that I think efficiency is a bad approximation for a lot of purposes. It's had a brilliant success in option pricing, as we heard about in Mark Davis' talk today. I think in some domains, it works really well.
There is a paradox that was pointed out, as far as I know, originally by Milton Friedman, which is that for this theory to work, you have to have arbitrageurs to incorporate the information into prices; but if the market is really efficient, the arbitrageurs shouldn't be able to make better profits than anybody else, in which case, if the arbitrageurs are rational, they'll leave the market, in which case the market can't really be efficient. This paradox has been sitting around now for more than 50 years, and I would say it's not really well resolved in the theory. As we'd say in physics, I think at first order, markets are efficient, at least in certain situations, but at second order, there has to be a violation of the principle, and I think it's probably very important to really understand the way in which this second order violation occurs, because it's essential to the way the market functions.
As an example, Michael mentioned that in 1991 I co-founded something called Prediction Company, with Norman Packard. We did proprietary trading. We actually weren't a hedge fund; we were proprietary trading advisors, although we actually did all the trading ourselves, just under their ticket on the stock exchange.
In line with the last talk we heard, I think of what we did as a cerebellar approach to market forecasting; that is, the models we built didn't have a real rational model of what was going on in the market - they were stimulus-response boxes. We looked through all the data, and we found situations where, when unusual conditions occurred - or maybe when usual conditions occurred - with high statistical probability, prices would move in a direction that we could then predict. The key to what we did was feature extraction; that is, the key was knowing what to ignore, and which features of the market actually seemed to be important and cause movements.
I think one can make an analogy to the work that Hubel and Wiesel did. They were two neurophysiologists who were trying to understand the visual cortex. They did experiments on spider monkeys. They would hook a spider monkey up with a little brain helmet, with probes in the neurons in the skull of the spider monkey, and they would then show it patterns, like moving bars or spots or things like that, and they would try and figure out which part of the spider monkey's brain was responding to this and how it was organised. The key principle they came up with is that the spider monkey doesn't just take the pixels of the visual image and process things pixel by pixel, but rather, in a sort of cascading process, breaks the image down into features and sends these high level features back into deeper parts of the brain - where, from there, we don't really know what's going on, but it's clear that the feature extraction, the pre-processing that spider monkeys do, is key to understanding what's going on.
That's what we did. We found the right features, we pre-processed them, and then we did actually relatively simple regressions to interpret what those things really meant. We didn't really understand the origin of most of these patterns. We could really only make this work in situations where we had abundant data, where we traded at reasonably high frequency, so that we could get a lot of examples, and where we had reasonably stationary conditions. But I think of it as cerebellar in the sense that, you know, when a baseball player - sorry, I'm using an American analogy - when a soccer player responds to somebody kicking the ball all the way down the field, they're anticipating where the ball is going to go. They aren't using the laws of physics directly to do that. They're using some stimulus response. They've seen thousands upon thousands of soccer balls getting kicked; they have a little look-up table in their brain that says, well, it looks about like that, I see the ball has got some spin on it, I know the wind is blowing about like this, so they may do a little correction, but they know roughly where to go to be in the right place when the ball comes down. That's about what our machine was doing. Now, our machine wasn't thinking deeply about the market, but it was processing much more data than a person could ever process, and so it was doing something that a human genuinely can't do, and, I have to say, it was fully automated. It wasn't at first, but we, at my insistence, began taking statistics on how we would have done without a trader overriding or changing the decisions in some way, compared with what we actually did with those overrides, and once the overrides were two standard deviations down, I convinced everybody to shut them off entirely. So it was a completely automated system, and, as we heard in the last talk, this is becoming more and more common. In the future, I think it's going to be even more common.
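To make the stimulus-response idea concrete, here is a minimal sketch in the same spirit: extract a couple of hand-chosen features from past prices, then fit a simple regression that maps those features to the next price move. Everything here - the features, the synthetic data, the numbers - is a hypothetical illustration, not Prediction Company's actual inputs or models.

```python
# A minimal "cerebellar" forecaster: feature extraction plus a simple
# linear regression, on a synthetic price series with mild persistence.
import numpy as np

rng = np.random.default_rng(0)

def extract_features(prices):
    """Toy feature extraction: a short-term trend and a local volatility."""
    r = np.diff(np.log(prices))
    recent_return = r[-5:].mean()      # short-term trend feature
    recent_vol = r[-20:].std()         # local volatility feature
    return np.array([recent_return, recent_vol, 1.0])  # plus a bias term

# Synthetic prices with a weak, exploitable autocorrelation in returns.
n = 2000
shocks = rng.normal(0, 0.01, n)
returns = np.empty(n)
returns[0] = shocks[0]
for t in range(1, n):
    returns[t] = 0.1 * returns[t - 1] + shocks[t]   # mild persistence
prices = 100 * np.exp(np.cumsum(returns))

# Collect (features, next-move) pairs and fit by ordinary least squares.
X, y = [], []
for t in range(50, n - 1):
    X.append(extract_features(prices[: t + 1]))
    y.append(np.log(prices[t + 1] / prices[t]))
X, y = np.array(X), np.array(y)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef

# The quality of the "signal": correlation of predictions with realised moves.
corr = np.corrcoef(pred, y)[0, 1]
```

The point of the sketch is that the model never "understands" why the pattern exists; it only learns a statistical stimulus-response mapping, which is why abundant data and reasonably stationary conditions matter so much.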
We increasingly see machines trading with other machines, not just for mechanical trade execution, but for information processing and decision making, and I think that's a trend that's only going to increase in the future, which then, coming back to the note I opened the talk on, it's interesting to think that we're leaving in the hands of these markets the control over something that's really pretty essential to human wellbeing.
Now, I also wanted to mention a little bit about this point about the first order, second order nature of market efficiency. This is one of the few slides we convinced the Swiss to let us release. I'm actually not quite sure how we did it, because normally they won't - they're paranoid about the silliest things imaginable. What I'm showing in this plot is the correlation between a signal that we would generate and subsequent price movements. We think of a signal as something that relates to a cluster of inputs of a particular type, and our trading systems were built out of several signals that we then combined. The signals in and of themselves, though, should have predictive power.
So, signal one here - and by the way, we're looking at data from 1975 to 1998, and in fact the model was built just on the latter part of that data, from about 1990 onward, and only later tested on the earlier part. The correlation that we're seeing up here indicates how well the signal correlates with the movement of the stock about two weeks in the future. So if this said 100, it would be a perfect prediction; if it says 0, it's a random prediction; if it's negative, it's actually predicting backwards. So you see that it starts around 12, 13%, and there's a slow decline during the course of this 23 year period to something more in the vicinity of 3 or 4 or 5%.
Now, on one hand, this agrees with what is predicted by efficient markets. You could say Friedman was right, because in fact the market is getting more efficient through time. On the other hand, it's taken 23 years to do that, and whereas when we started in 1991, I would have guessed there were maybe 10 firms doing the kind of statistical arbitrage that we were doing, there are probably 1,000 of them now, and yet these signals still get traded on, even if they occasionally take large losses, as they did in August of last year.
Even more surprising is this one down here, where - for reasons I'm not going to explain in detail, but if you know the dates, you might guess why - we have a different signal. The signal actually doesn't exist prior to that date; it's impossible to formulate it. There was a change in the market structure, and what we see is that, contrary to what efficient markets theory would predict, the signal actually builds through time. Now, it's not that the market isn't pretty efficient. Neither of these signals is really, really strong. We're not talking about 100 or even 50% correlations, but nonetheless, they are sufficient to make a fair amount of money.
Now, the other argument that people have made against efficient markets and rationality - well, this is something I actually got in junk mail, so to speak, because I'm on some list, which is a kind of entertaining list. You notice what they're doing here: they're actually giving you advice about which stocks to trade based on astrology. What I haven't figured out is whether they cast the horoscope based on when the company was born or on when you were born - I don't understand - but that's what they do.
This is a guy named Robert Prechter, who actually, interestingly, won a trading contest. Many people follow him. He's developed this theory of Elliott waves, which is based on Fibonacci numbers - we actually heard something about Fibonacci earlier in the day. They have cycles and super-cycles, and you can even use this to predict, you know, when we're going to have horror movies versus Mary Poppins, according to them. But okay - so people aren't rational. That's not too surprising to anybody. I know I'm not rational, and I suspect most of you aren't either.
But even within mainstream economics, there's been widespread debate over how well prices actually match fundamental values - how well these allocations are being made. This is from Campbell & Shiller, and the two plots I'm showing plot prices against fundamental values based on historical dividends, over more than a century. What you see - this is a logarithmic scale - is that there are periods of decades at a time where prices and values are out of line by factors of two.
This is a slide due to Cutler, Poterba and Summers. What they did was take a 40 year period in the S&P and look at the 100 largest moves in the US stock market, as measured by the S&P index. I've shown the top 12 here; I've shown the dates and the size of the moves in percent. Then they went to the library, they looked at the New York Times on each of those days, and they picked out a sentence or two corresponding to the New York Times' explanation of what went on. I've shown in black the things that they didn't label as genuine news - you might call it market-generated news - like, look at the top one: worry over dollar decline, fear of US not supporting dollar. As a market practitioner, I experienced fear and worry every day. That's one reason I was very happy to finally sell our company to UBS. So I would say if the people who manage your money aren't experiencing fear and worry, you should have somebody else manage your money. So in no way should fear and worry be viewed as news. In contrast, you know, the outbreak of the Korean War - that seems like news. You can see the news items are in a minority. You can also see the decline in news reporting: on the fourth item, September 3rd 1946, the New York Times actually had the courage to say there was no basic reason for the assault on prices - I don't think they've ever been that honest since!
Another slide here shows, again over about a 100 year span, the volatility of the US stock market, measured as the monthly standard deviation of daily price moves. So every month, you take the daily price moves for that month, you take the standard deviation, and you make a dot, and you do that for every month since 1885. The striking thing that hits the eye when you look at this - and let me go back a few slides here, to the standard view of market efficiency, that is, rational choice: all information is properly incorporated into prices, new information is by definition random, and so if you have a large price move, it's because you must have had more information on that day. Now, go back and look at this plot. What you see, under that interpretation, is that there's a period, corresponding roughly to the Great Depression, where for some reason they were getting a lot more information than we're getting now. It seems strange that they should have had so much more information in the Depression. I would argue that something else has to be driving these large scale and persistent changes in volatility.
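The volatility measure just described is easy to state in code. Here is a minimal sketch on synthetic data - the calm and stormy regimes and all the numbers are invented for illustration, not taken from the actual index:

```python
# Monthly volatility: for each (approximate) month, the standard deviation
# of that month's daily returns. Clustered volatility shows up as a
# persistent burst of high monthly values.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily returns: a calm regime, then a high-volatility
# "Depression-like" regime, then calm again.
calm1 = rng.normal(0, 0.005, 500)
stormy = rng.normal(0, 0.02, 250)
calm2 = rng.normal(0, 0.005, 500)
daily_returns = np.concatenate([calm1, stormy, calm2])

def monthly_volatility(returns, days_per_month=21):
    """One standard deviation of daily moves per (approximate) month."""
    n_months = len(returns) // days_per_month
    vols = []
    for m in range(n_months):
        chunk = returns[m * days_per_month : (m + 1) * days_per_month]
        vols.append(chunk.std())
    return np.array(vols)

vols = monthly_volatility(daily_returns)
peak_month = vols.argmax()   # should land inside the stormy regime
```

Plotting one dot per month of `vols` against time would reproduce the qualitative shape of the slide: a plateau of high dots during the stormy period, persisting month after month.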
I mean, we know in economic theory there was a Nobel Prize in econometrics for work on this clustering of volatility. But other than the clearly wrong theory that I mentioned, I think there's essentially no understanding of why we have periods of more and less volatility. I will say what I believe: it's related to liquidity. There are periods where, if somebody wants to make a trade, it causes a large price change, and there are other periods where it doesn't. That is, if you're a buyer, all else being equal, you enter and you say, "I want to buy," and you initiate a trade with somebody; when you do that, you're going to push the price up a little bit. There are periods when you're going to push the price up a lot, and other periods when you're not going to push the price up very much. As for the reason for that change - there may be many reasons. It may be that, for example, during the Depression, people were just more nervous; it may be that there were, for whatever reasons, more instabilities in financial markets. But anyway, I believe there are reasons there that we can understand, and it's not just that they had more information. As a result, we have significant changes in liquidity, and, being in the middle of a liquidity crisis, it's a very topical thing at this point in time.
It's highly variable, as I already said. It's persistent, meaning if we have a lot of liquidity today, we're likely to have a lot tomorrow, and if we don't have much today, we're likely not to have much tomorrow. It's the main driver, as we've shown in some of our papers, of volatility and of changes like the ones I showed you on that last slide.
I'm going to skip this slide because I don't want to run over time.
Let me just say that once you realise that liquidity is the main driver of volatility, it presents an interesting opportunity, because it's something we have control over - at least partial control. That is, if we can make it easier for counterparties to find each other, if we can bring all the right people together in one place, then liquidity gets better. It's been a constant battle, for example, in the New York stock exchanges, where there's been a tendency for liquidity to fragment, in part for good reasons, I think. There was essentially a scandal in the NASDAQ over collusion between market makers. The specialist system in the New York stock exchange has been a scandal since it was instituted, in my opinion. So that's driven people to be constantly looking for more efficient ways to trade. So we can change the way the market is structured. We can change the fees for liquidity providers versus liquidity takers. In the London stock exchange, for instance, if you're providing liquidity by posting orders that sit on the book, you're making it possible for somebody to enter, take liquidity off the book, initiate a trade, and generate a smaller price change - and people are actually compensated in terms of their fees for providing that liquidity.
You can change the way that information revelation happens, to make people feel more comfortable or to change human behaviour. In London, for example, all the orders in the book are completely transparent and visible to everybody - everybody who can pay for the feed, that is, which is actually not a trivial thing. But you don't even know after the fact who you traded with, so your anonymity is protected very strongly. The rules in New York are quite different, and the rules differ on virtually every exchange, and what we're seeing, I think, is a kind of Darwinian experiment about which methods of trading people prefer, and which methods result, perhaps, in more social welfare - although the utility of the exchanges can differ from the utility of the clients of the exchanges. I believe, frankly, that this can actually make a difference in the long term, not just in liquidity, but in long term volatility.
Now, as a physicist looking at markets... I debated whether to even put this slide in, because it's of course easy to walk in and criticise the other guys. As you start working in economics, you really are struck by how hard these problems are. But nonetheless, just to be very blunt in my criticism - and this was my original talk title, about rigor mortis - you're struck, when you come in from physics, by how much theorising there is in economics. Papers are commonly written in theorem-proof format, which per se could be okay, if you thought the hypotheses the theorems were based on had any real correspondence with reality - which I think often they don't.
There was a kind of a change in about the 1950s, where economics became very mathematical. That was mainly a good thing, but I think in some cases, common sense got tossed out with it. For my taste, there's a lack of ambition in data gathering, because the incentives in economics departments don't favour really ambitious data gathering. You know, about 80% of physics is actually data gathering. There's a lot of data gathering in economics too, and there are many rich data sets - and, let me also say, some people are really beginning to do this - but data gathering is a pain in the ass, and there need to be better incentives for people to really do that, and to get tenure from doing it, because I think the data sets that are typically being used are just a minor hint of what we could do if we had better data sets. You know, theory and data are not well connected. That's changing in economics; it's getting much better. There's much more of a push for economists to really try and make theories connect to data, but I think it still tends to be awfully qualitative. And then there's the slavish adherence to one paradigm - it's not that that paradigm is wrong; it's just that it's not the only way to look at the world.
Finally I think it's asking, well, what is the right set of questions? What are the appropriate set of goals for the theory? How should one go about it? Physicists have a blind belief that there are regularities in the world and one should find those regularities and try and understand them in the most mathematical way possible. Whereas, you know, in economics, you really can't use the word "law" in a paper. I always have to go through because I tend to put it in - you know, that we're trying to find a law - and my economist friends say, no, no, you can't do that - take it out. It's okay - I'll use it here!
You know, one can ask, again looking to the future... I believe that if we do find another civilisation out there in the universe, we'll discover that they trade - I'd be very surprised if they didn't - and that their markets probably will have gone through some evolutions that are somewhat similar to ours. I mean, we had a wonderful perspective today on the way that markets have changed, on the way that markets have actually affected the way we do something as basic as arithmetic, and the interplay back and forth. I think we would discover that they've gone through a lot of the same things. The details will all be different, but I think there will be some common principles. We might discover, for example, that they have options. I would argue that you can view the Black-Scholes pricing formula as an algorithm for pricing an option; it has actually, in a certain sense, become a law, because in fact options pretty well follow the Black-Scholes pricing formula, and through time, have come to follow it better than they did before. That's one of the remarkable things with equilibrium theories: by creating a theory about how something should be done, you can change the way it is done, and then it becomes a kind of a law. Some of the other laws may be more derived from psychology, and they might be a bit slipperier, but I nonetheless think we will see more and more examples of such things, and not just in derivative pricing. Derivative pricing is the realm where this has been very successful, but I think we can really begin to think about other topics, like the underlying.
Now, in my last few minutes, I'm just going to throw out a few things that have been found in the last 5 years or so, 5 to 10 years, some of them maybe a little longer.
Well, what is volatility? I showed you this remarkable picture, showing the persistence of volatility. I could show you a picture on a 15 minute timescale that would look essentially identical to the picture I showed you on a century timescale. The thing the pictures have in common is these bursts of high volatility followed by periods of low volatility. Prices change a lot for a while, and then they don't change so much for a while. There are common features across vastly different timescales and, more technically, we would say this means there's a long memory. You can make that precise in terms of the auto-correlation function - I'll do that in a minute.
There's a very nice recent paper showing that there's an equivalence between the bid/ask spread, the market impact, and the volatility per transaction. They're literally about the same size - they can't differ by more than about a factor of 2, from some very simple efficiency arguments. Then there's the power-law behaviour of volume, and the long memory of order flow. I probably won't even get to the last one, but I'll show you these other examples in more detail.
One of the things that we see pretty consistently - and let me just say, there's been a lot of debate about which of these kinds of things are really robust; there have been some other claims that I don't support, because we haven't found they're robust - but this one seems to be fairly robust. That is, you look at trading volume in markets where people can freely trade large sizes - not in the order book of the LSE, but in the off-book market, where people negotiate trades over the phone, which is the curve off there to the left. I should explain this picture, since everybody is staring at it. What we're plotting on the x axis is the volume of a trade, measured in some arbitrary units, set so that we've centred one in the middle - we're dividing things by the standard deviation. You see that we're looking at a range of variation of about 9 orders of magnitude, so it's a very large range. Then, on the other axis, what we plot is the probability that the volume of a trade exceeds some threshold x - so we're plotting, as a function of that threshold, the probability that we see trades that are above that threshold in size. In other words, as we go off to the right, we're looking at increasingly large trades, which are getting increasingly rare, and we're plotting this on a double logarithmic scale, so that if there's what's called a power law relation - if you don't know what that is, don't worry about it - you see a straight line, and in many markets we see a good approximation of a straight line when we look at that upper curve. So we feel that this may be something a bit like a law. We're trying to explain it, and we don't have a good explanation yet.
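The plot described here - the probability that a trade's volume exceeds a threshold, on double logarithmic axes - can be sketched as follows, using synthetic Pareto-distributed volumes in place of the real off-book data. The tail exponent is an assumption chosen for illustration:

```python
# Complementary cumulative distribution of trade volumes on log-log axes.
# For a power law P(V > x) ~ x^(-alpha), the log-log curve is a straight
# line of slope -alpha, which we recover by a linear fit.
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5                                  # assumed tail exponent
volumes = rng.pareto(alpha, 100_000) + 1.0   # Pareto tail, minimum size 1

# Fraction of trades whose volume exceeds each threshold x.
xs = np.logspace(0, 3, 30)
ccdf = np.array([(volumes > x).mean() for x in xs])

# Fit a straight line in log-log coordinates over the part of the tail
# where we still have enough data, recovering an estimate of the slope.
mask = ccdf > 1e-4
slope, intercept = np.polyfit(np.log(xs[mask]), np.log(ccdf[mask]), 1)
```

With real data one would plot `xs` against `ccdf` on double logarithmic axes and check how straight the line is, rather than trust a single fitted slope.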
On the other hand, we do know some of the things this affects. I mentioned auto-correlation a moment ago. This is just a way of asking about the relation between the same variable x at two different times, t and some time in the future; it depends on the product of the two, and all you have to know is that it's a number that's one if they are exactly the same, minus one if they're exactly the opposite, zero if they're randomly related, and somewhere in between if they're somewhere in between. So you look at the auto-correlation of the signs of trades in the London stock exchange - and here, the sign is plus one if a buyer initiates the trade: if the buyer is the one that takes the order out of the order book and actually causes that trade to happen, we'll call it plus one; it's minus one if it's initiated by a seller. So we take a couple of years of data.
This happens to be the stock of AstraZeneca, but they all look the same. They look the same in the Paris market, they look the same in the London market - or sorry, the New York market - they look the same in the Spanish market. Every market we looked at, every stock: it always looks the same. So you take this sequence of signs - say a million trades, that's about what this corresponds to: plus one, minus one, plus one - and you take this auto-correlation function. Now, what you see - again, we're plotting this in this funny way - is that for a lag of one, that is, if I look at one trade and I look at the next trade, we see an auto-correlation of about 15% between those trades. So it's telling you that it's not exactly predictive, but there's a pretty good relation from one to the next. As we go out to longer lags - 10 trades later, 100 trades later, 1,000 trades later, 10,000 trades later - at 10,000 trades later, we're talking about a time span of two weeks. So I walk into the market - well, I can't walk into the LSE; I look on the screen, because there is no there there - I look at one trade on the screen and I look at its sign. I can then look two weeks later, without knowing anything else at all, and predict the sign of a trade, and I can do it sufficiently accurately that, if I collect data over the course of a year, it's actually going to be a statistically significant prediction, because we still have these values statistically significantly above zero two weeks out.
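The measurement just described can be sketched as follows. The data here are synthetic - a toy generator that strings together runs of same-signed trades with heavy-tailed run lengths, loosely mimicking the incremental execution of large orders discussed later - not actual LSE trades, and the run-length distribution is an assumption for illustration:

```python
# Autocorrelation of a +1/-1 trade-sign series with long memory.
import numpy as np

rng = np.random.default_rng(3)

# Build a sign series out of runs: each "hidden order" contributes a run
# of identical signs whose length is drawn from a heavy-tailed distribution.
signs = []
while len(signs) < 200_000:
    run_length = int(rng.pareto(1.5) * 10) + 1   # heavy-tailed run length
    sign = rng.choice([-1, 1])
    signs.extend([sign] * run_length)
signs = np.array(signs[:200_000], dtype=float)

def autocorrelation(x, lag):
    """Sample autocorrelation: 1 if identical, -1 if opposite, ~0 if unrelated."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Measure the decay of the sign autocorrelation at increasing lags.
acf = {lag: autocorrelation(signs, lag) for lag in (1, 10, 100, 1000)}
```

The qualitative behaviour matches the slide: the autocorrelation decays with lag, but so slowly that it is still visibly positive hundreds or thousands of trades out - the signature of long memory.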
Actually, I was thinking of this in connection with a remark Mark made earlier - that if we don't have equilibrium, then, and I'm going to misquote you, Mark, so I apologise, we'll have everybody piling in, I think you said, on one side or the other. In fact, people are piling in on one side or the other, because that's what this is about. The supply and demand is sloshing in and out of the market - like if you get in the bathtub and you put your hands in and start sloshing the water around, it's sloshing on lots of timescales. It's maybe more like climate or the ocean, which also show this kind of long memory. But the remarkable thing is that the prices do stay pretty efficient, and what we're seeing is that the market has to go through all kinds of gyrations and adjustments to maintain that efficiency, and those adjustments have side consequences. We believe that clustered volatility is one of them.
So, the point being, as we go into the future, I think there are things like laws, and we're seeing some examples of them. So one of the relations we've been able to derive - I've given you two possible laws here. One is this relation about volume and the heavy tails of large trades. Another is this auto-correlation, and in both cases there's a slope, a rate at which this curve is dropping, and it's roughly linear as you go from left to right. Actually, there are better ways of showing that this really is, in some sense, linear, but when you measure these things, those slopes are simply related, and we can predict that relation: the slope of the auto-correlation, plus one, is equal to the slope of the volume curve, and we actually have a theory for why that's true. We think it's because people trade incrementally. If Warren Buffett wants to buy 10% of Coca-Cola, he doesn't just place an order for 10 million shares in the order book of the Coca-Cola Company. He talks to his brokers, and they work out a strategy, and over the course of months, they incrementally buy up little bits of Coca-Cola. That behaviour is what's causing this kind of thing.
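The incremental-execution story can be made concrete with a toy simulation in the spirit of the order-splitting models Farmer and collaborators have studied. Everything below is an illustrative sketch of my own, with guessed parameter values, not figures from the talk: a handful of large hidden orders with Pareto-distributed (heavy-tailed) sizes are worked one unit at a time, and the resulting trade-sign series inherits a slowly decaying auto-correlation. Heavy tails in order size on one side, long memory of signs on the other, which is the kind of linkage between the two slopes described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def order_splitting_signs(n_trades, n_orders=20, alpha=1.5, scale=5):
    """Toy order-splitting model: n_orders hidden orders are worked in
    parallel; each trade executes one unit of a randomly chosen order,
    and a used-up order is replaced by a fresh one with a random sign
    and a heavy-tailed (Pareto) size.  All parameters are illustrative."""
    sizes = np.ceil(scale * rng.pareto(alpha, n_orders)).astype(int) + 1
    order_sign = rng.choice([-1, 1], n_orders)
    out = np.empty(n_trades, dtype=int)
    for t in range(n_trades):
        i = rng.integers(n_orders)
        out[t] = order_sign[i]
        sizes[i] -= 1
        if sizes[i] == 0:  # hidden order fully executed: replace it
            sizes[i] = int(np.ceil(scale * rng.pareto(alpha))) + 1
            order_sign[i] = rng.choice([-1, 1])
    return out

def sign_acf(x, lags):
    """Auto-correlation of a +1/-1 series at the given lags."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return {k: np.dot(x[:-k], x[k:]) / ((len(x) - k) * var) for k in lags}

signs = order_splitting_signs(400_000)
c = sign_acf(signs, [1, 200])
# Because hidden-order sizes are heavy-tailed, the sign auto-correlation
# is still clearly positive even 200 trades later: long memory arises
# from incremental execution alone, with nobody trying to predict anything.
```

Nobody in this simulation has any view on prices; the persistence of signs comes purely from the Warren Buffett-style behaviour of splitting a large desired position into many small trades.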
I'm running out of time, so I'm just going to do one last slide and then give my conclusions.
For me, the big fascination with financial markets is that they provide a perfect laboratory in which to study social evolution, something that's been talked about since the time of Herbert Spencer, but about which I would maintain we know very little, in part because I think we've managed to gather much less data about it, in a quantitative way, than biologists have. But if evolution means descent, variation and selection - that is, you transmit information through generations, that information has some variation, some errors in it, and then you select, based on some principle, one thing or another - we see that strongly in financial markets, because we're talking about strategies: a certain kind of trading strategy gets transmitted across generations, people pick new trading strategies, and the strategies compete with each other. And we have data sets - we managed to gather, together with Terry [Odean] and Brad Barber and some collaborators, about 12 years of data from the Taiwan Stock Exchange, in which we can see not just every order that was placed in the order book, but also the identity of the broker, the individual who placed the trade, and the account from which that trade was made. So we can really study the heterogeneity of markets, markets as an ecology of human decision-making. The obvious difference with biology is that people can think. Economists have worked very hard on that - the theory of rational expectations is centred around that idea, which we don't discount - but markets provide an interesting way to see how people actually think and how they actually make decisions.
So just to conclude, I think mathematics is going to continue to play an ever-increasing role in markets. Markets have the great advantage that we can record what people do in great detail and study it, and we've only just begun to do that. I think we'll be able to go to a deeper level. We'll have laws, eventually, that will be more like physics. I think we are going to be in a situation where the control of markets and the participation in markets will be increasingly non-human, simply because machines can process more information and process it faster. And as we begin to get a better understanding of how efficient markets really are - if I'm right and they're really not very efficient now - I actually think that's a good thing, because it means we can maybe improve how efficient they are and make them work better in the future.
©Professor Doyne Farmer, Gresham College, 25 April 2008