How to be a Winner: The maths of race fixing and money laundering


How does maths shed light on the fixing of horse races? Why disabling the favourite is the best strategy to ensure a winning return. We will find the simple condition that the odds must obey in order that you can always have a winning return no matter what the outcome of the race. How this knowledge seems to have been used by criminal organisations to launder money. How to ensure a game is fair even though the gaming device is not. The remarkable counter-intuitive 3-box problem made famous by an American TV game show.


 

HOW TO BE A WINNER:
THE MATHS OF RACE FIXING AND
MONEY LAUNDERING

 

Professor John D Barrow FRS

 

As you may be aware, today is a very special day: it is Square Root Day, because today is the third of the third 2009, one of those occasional dates when the year is the square of the month and the day.  The next one will be the fourth of the fourth in 2016.  On this Square Root Day, I want to talk about various aspects of probability, couched in environments surrounding games, betting, winning and losing, but we will start off by asking one or two more historical and philosophical questions.

If you are interested in the history and the origins of different parts of mathematics, one of the great mysteries is why the theory of probability is not an ancient subject.  Arithmetic, algebra, geometry and astronomy are all ancient areas of study.  The ancient Babylonians and the early Islamic cultures investigated algebra, arithmetic and geometry, and these subjects were also studied in India in great depth, but you find no study of the theory of probability in ancient times.  In fact, you really have to get to the 17th Century before you find any systematic study of probability of the kind that we learn today at school and elsewhere.  So it is an interesting question why that is, because in all those cultures that were not studying probability, betting and gaming were going on in the marketplace and people were trying to win money in competition against others, and yet there is no study of probability.

There are two reasons why that might be: the first is to do with prevailing religious beliefs about the nature of the gods and the nature of the world; the other is more subtle, and we will come to it later on, but it concerns what we might call the lack of a concept of equally likely outcomes.

But, to begin with, what do I mean by religious beliefs?  We have seen, in many stories in the ancient world, there is this view that somehow chance and randomness is the way that the gods control the world, and it is the way that they speak and dictate things to us.  For example, in the famous biblical story of Jonah and the Great Fish, one person says to his fellow: 'Come, let us cast lots that we may know for whose cause this evil is upon us,' so they cast lots, and the lot fell upon Jonah.  So that typifies the way in which chance and randomness was used in many parts of the ancient world, and so you can see that, to start studying it and turning it into numbers and symbols and algebra and arithmetic, could be viewed as blasphemous or sacrilegious.  It was perhaps a dangerous thing to get into.

You can find some similar thoughts in St. Augustine, much later on, when he says, 'We say that there are causes that are said to be by chance, and we attribute them to the will of the true God.'  So even long after the establishment of the early Christian Church, there is still this view that somehow chance events are the way in which the gods control the world.

Just two years ago, I was in Taiwan to give some lectures, and I visited a Buddhist temple in Tainan, the ancient capital, right in the south of the island.  While I was there, a young man was standing in front of the shrine, casting two wooden objects before it and then examining them.  I asked my guide what I was witnessing, and he said that the man was, in effect, divining.  He would ask for something at the shrine and then throw these randomising devices, like dice.  If they came up 'yes', he should go and do what he had said he would do; if they came up 'no', he would go away; but, more likely, he would get the equivalent of one head and one tail, and if that came up, he had to go back and take one of the large sticks from a vase, on each of which was written something that he must commit himself to do.  It might be to make a donation to the temple, to help people in some way, or to give his time in some other capacity.  He would agree to do that, return, and then throw again to see whether his extra commitment now produced a 'yes' answer.  So even in modern times, this idea of using chance to divine the will of the gods was still current.

If we go back to the earliest known artefacts associated with this type of activity, you have to go back to about 3,600 BC, where we find the use of astragali.  These were the small, uneven bones between a sheep's ankle and heel, and they were used for predicting the future thousands of years ago.  One would have a collection of them, maybe five, each labelled on its different faces with numbers, and then you would let them fall, and each collection of five numbers, say 1-2-3-4-5, or 5-5-5-5-5, would have a particular god or activity associated with it.  So if you wanted to know what the future held for you, and you got 5-5-5-5-5, this might tell you that your future was in the control of some particular god.  Eventually, these objects were replaced by the type of dice that we know today.

You may remember that I mentioned the other problem in the ancient world was something I called the lack of a notion of equally likely outcomes.  When one does probability today, you are dealing with something like a die: it is highly symmetrical, and the probability of any of the faces falling upwards is one in six, to a very good approximation, so you are on the path to having a mathematical theory of what is going on.  But objects like the astragali are asymmetrical, and every one of them is a little different.  Therefore, if you run into the chap in the market who has got a couple of these and wants to play a game against you, no general theory can really help you.  He knows the bias of his own device - that it is twice as likely to fall on this face as on that face - so it is just his experience of his device that counts.  Until you have gaming devices where every outcome is equally likely, it does not help to have a general theory.  So this is the other practical impediment: games of chance can go on, but no one is really motivated to develop a theory of what they are doing.

Then there came the earliest types of symmetrical dice, which gave rise to the game of Hazard, which people would play in the Middle Ages, before cards came along.  The name was imported from Arabic, just as 'algebra' is an Arabic word: 'al-zahr', meaning a die, was turned into our word 'hazard', for chance, and it became the name of this particular game.

Roman artefacts show that there were many forms of dice used which did not have six faces.  Any one of the Platonic solids, where there is an equal chance of falling on any face, would do.  For instance, a perfectly made icosahedron, with twenty faces, if you threw it up with no bias, would be equally likely to fall on any of them.

An interesting fact about dice is that they come in right-handed and left-handed varieties.  If you go and buy one in a shop round here, it is very likely to be of the right-handed variety.

So if you sit it down with the one spot pointing upwards and the two facing to the left, you will find the three on its right.  But if you go to China, Taiwan, Japan or Korea and buy a die there, you will find it is almost certainly left-handed: set it down the same way and you will find the three on its left.  So if you can find a 'Made in China' dice set around here, this is the sort of thing you can check, to see whether the dice have been made right- or left-handed.

In modern times, this notion of the equally likely outcome starts to come into the study of chance and probability only in the mid-1600s.  One person who played an interesting, stimulating role in this was not really a mathematician but more of a professional gambler, the Chevalier de Méré.  Importantly, though, he was an acquaintance of the philosopher and mathematician Blaise Pascal.  Whenever he had a puzzle, he would tend to ask Pascal for his thoughts, and Pascal was often in correspondence with Fermat; out of this correspondence, prompted by one of de Méré's questions, grew the notion of equally likely outcomes and the systematic way of analysing probability.

Chevalier de Méré brought up this problem, which has become known as the Problem of the Points.  The situation is that two people are playing a game against each other.  It is a fair game and not at all biased, so it could be something like tossing a coin; there is an equal chance of either of them winning the game in each round.  The rules of the game are that the first person to win six points is the winner and takes the jackpot.  But in this instance, they have been playing for a while, and one of the players has won five points and the other has won three, but they cannot continue, so the game is stopped, and the question then is, how should they divide the jackpot, given the fact that the score is currently 5:3?  What is the proportion of the total jackpot that each player should get?

This is an interesting problem - the sort of thing you might study at school if you are doing probability - and it tests your ability to think clearly about a fairly simple situation.  Because it is first to six points, the key is that the person who has only won three points has to win all of the next three games in order to win; if the leader wins any one of them, the match is over.  So what you have to ask is: what are all the possible ways in which the next three games could fall out?

Let us suppose they are tossing a coin, so I will call the outcomes of any game heads or tails, and we will assume that the person with three points has chosen heads.  One possible outcome is that the next three games go H-H-H; others are H-H-T, H-T-H, and so on, down to T-T-T; there are eight possible sequences in all.  They are all equally likely, so there is a one in eight chance of any of those sequences arising.  The only one that allows the trailing player to take the jackpot is the first, H-H-H, where they win all of the next three games.  All the other sequences result in them failing to win a sixth game before the other person does.  So the answer to de Méré's question is that there is a one in eight chance of the trailing player winning the jackpot, and a seven in eight chance of the other player winning, and so you should divide the jackpot of the interrupted game by giving seven-eighths of it to one player and one-eighth to the other.
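To see the counting concretely, here is a minimal Python sketch that enumerates all eight equally likely continuations:

```python
# The Problem of the Points: with the score at 5:3 in a first-to-six game,
# the trailing player (call their wins "H") must take all three remaining games.
from itertools import product

outcomes = list(product("HT", repeat=3))   # all 8 equally likely sequences
trailing_wins = sum(1 for seq in outcomes if seq == ("H", "H", "H"))

print(f"Trailing player wins in {trailing_wins} of {len(outcomes)} sequences")
# -> 1 of 8, so the fair split is 7/8 to the leader and 1/8 to the trailer.
```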

De Méré posed another problem at the time, and it shows how someone had learnt a little about probability from experience, but not quite enough to see him through a more complicated situation.  Over a long period of time, he had won rather a lot of money by betting that if he rolled a die four times, at least one six would come up, and he would challenge people to bet against him.

Well, what is the probability of rolling a six?  It is one in six, so the probability of having no six when you roll the die is five-sixths.  So if you roll it four times, and the rolls are all independent, the chance of having no six in any of the four rolls is 5/6 x 5/6 x 5/6 x 5/6, which comes out at 625/1296.  Therefore, the probability of having at least one six is just one minus that number, which is 671/1296.  If you convert it to a percentage, it is 51.77%.  So he had judged rather wisely because, in the long run, he had a 51.77% chance of winning.  It is a rather small profit margin, but enough to make him a lot of money if he played a lot.

Then he thought of another game he could play: betting that there would be at least one double six if he threw two dice 24 times.  If you apply the same logic, the probability of a double six is 1 in 36, so the probability of not getting a double six is 35/36.  If you throw the pair 24 times, the chance of no double six is 35/36 to the power 24, which is 0.509, so the probability of at least one double six is one minus that, which is 49.1%.  So he played that for a while, but he was astute enough to realise that he was losing, and so he stopped, and he asked Pascal and his friends what was going on.  They performed this calculation, and in doing so really defined the way we think about probability in terms of equally likely outcomes.
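Here is a minimal Python check of both of de Méré's wagers, using exact fractions:

```python
# De Méré's two bets, computed exactly with rational arithmetic.
from fractions import Fraction

p_one_six = 1 - Fraction(5, 6) ** 4          # at least one six in 4 rolls
p_double_six = 1 - Fraction(35, 36) ** 24    # at least one double six in 24 throws

print(f"One six in 4 rolls:      {p_one_six} = {float(p_one_six):.4f}")   # 671/1296 = 0.5177
print(f"Double six in 24 throws: {float(p_double_six):.4f}")              # 0.4914
```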

But what if a coin is biased?  What would you do if you suspected somebody was going to introduce a biased coin into some type of randomising competition?

Let us suppose that we are using a coin for which the probability of heads is not equal to a half, so the coin is biased in some way - perhaps one side contains a different sort of metal.  Let us say the probability of heads is p and of tails is 1-p.  What we are going to do is toss the coin twice each time, so we think in pairs of tosses.  If a pair comes down H-H or T-T, we ignore it and toss again.  The reason is that the probability of a head followed by a tail is p(1-p), while the probability of a tail followed by a head is (1-p)p.  These are the same, no matter what p is - p could even be a quarter, and the coin horribly biased.  So all you do is call the combination H-T, heads followed by tails, 'Newheads', and the combination T-H, tails followed by heads, 'Newtails', and they are equally likely, even though the coin is biased.  So there is a way of taking a biased situation and turning it into a fair one.
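This pairing trick is often credited to John von Neumann.  Here is a minimal Python sketch of it; the bias p = 0.25 is an arbitrary choice for illustration:

```python
# Turning a biased coin into a fair one by tossing in pairs and keeping
# only the H-T ('Newheads') and T-H ('Newtails') outcomes.
import random

def biased_coin(p=0.25):
    """A coin that shows heads with probability p (p = 0.25 is illustrative)."""
    return "H" if random.random() < p else "T"

def fair_flip(p=0.25):
    while True:
        pair = biased_coin(p) + biased_coin(p)
        if pair == "HT":
            return "H"
        if pair == "TH":
            return "T"
        # HH or TT: discard the pair and toss again

flips = [fair_flip() for _ in range(100_000)]
print("Fraction of heads:", flips.count("H") / len(flips))   # close to 0.5 despite the bias
```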

What you notice about this is that, as an operation, it is fairly inefficient.  Of all the pairs of tosses you make, you have to throw away at least half: all the heads/heads and tails/tails pairs (and more than half of them if the coin is heavily biased).  So if this were part of some larger information-processing task, you would regard it as fairly inefficient, since you have to discard so much in order to get a good result.  So maybe there are ways of doing better than that.

Everybody has something of a feel for what is meant by 'random'.  But if you stop people in the street and show them random sequences, you will find their intuition is rather unreliable, and it always tends to err in the same direction: people's feel for what a random sequence is makes them imagine something far more evenly mixed than real random sequences are.  Suppose I challenge you to make up a random sequence, and we will stick with heads and tails, for concreteness.  Usually, with the school groups we work with, I will challenge them: 'Can you make a fake random sequence, and hide it amongst some real ones, and I bet I can identify your fake random sequence every time?' - and usually you can.  We can see this if we look at some example sequences:

            THHTHTHTHTHTHTHTHTTTHTHTHTHTHTHH
            THHTHTHTHHTHTHHHTTHHTHTTHHHTHTTT
            HTHHTHTTTHTHTHTHHTHTTTHHTHTHTHTT

Do you think these look like random sequences?  It might help to look at another set of examples:

            THHHHTTTTHTTHHHHTTHTHHTTHTTHTHHH
            HTTTTHHHTHTTHHHHTTTHTTTTHHTTTTTH
            TTHTTHHTHTTTTTHTTHHTTHHHHHHTTTTH

Do these sequences look random to you?  Most people do not think so.  But how do the two groups differ?  You will notice that in the first group there are not many long runs of heads or tails, whereas in the second group there are some very long runs.  So does that look a bit suspicious?

To make things clearer, we can write out the second group of sequences again, with the long runs (four or more of the same outcome in a row) marked in brackets:

            T[HHHH][TTTT]HTT[HHHH]TTHTHHTTHTTHTHHH
            H[TTTT]HHHTHTT[HHHH]TTTH[TTTT]HH[TTTTT]H
            TTHTTHHTH[TTTTT]HTTHHTT[HHHHHH][TTTT]H

In fact, these are the random sequences.  The first ones are not random at all, and the telltale factor is that they do not have those long runs.  If you make up 'random' sequences by just playing on your keyboard, then whenever you find yourself pressing the H key four or five times in a row, it does not feel very random, so you tend to bounce backwards and forwards and keep the runs of heads or tails rather short.  But real random sequences are not like that.

Let us try to understand that.  Suppose that you are interested in the chance of having a run of, let us say, r heads or r tails.  Well, each toss is independent, so the chance of r heads coming up in a row is ½ multiplied by itself r times, which is (1/2)^r.

But there is a bit more to it, because there are all sorts of different places where that run could start.  If the whole list is of length n, where n is very large, much bigger than r, then, roughly speaking, you have n different starting points for the run, and so the chance of getting the run somewhere is about n times the chance we found earlier: n(1/2)^r.  I say 'about' because I am ignoring the ends of the list, since it is a finite list.

The question now is: when does this become roughly one, so that finding such a run is more likely than not?  You can see that happens when n is equal to 2^r.  What that is saying is that if you have a sequence of heads and tails of length n = 2^r, then you are very likely to find runs of length r.  Our sequences earlier were 32 symbols long, and 32 is 2^5, so this argument says you have a much better than 50:50 chance of finding runs of length five in random sequences of heads and tails that are 32 symbols long.  You are extremely likely to find runs of length four, and even more likely runs of length three.  For most people this is very counter-intuitive, but it is these runs, or winning streaks if you like, that are really characteristic of truly random sequences, and if you do not see them, as in the first group of sequences we had, that is what makes me instantly say that those are not random sequences.  They do not have enough long runs in them.
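A quick simulation bears this out; here is a minimal Python sketch that counts how often a 32-toss sequence contains a run of five or more:

```python
# Estimating the chance that 32 fair coin tosses contain a run of at
# least five identical outcomes.
import random
from itertools import groupby

def longest_run(seq):
    """Length of the longest block of consecutive identical symbols."""
    return max(len(list(g)) for _, g in groupby(seq))

trials = 100_000
hits = sum(longest_run(random.choices("HT", k=32)) >= 5 for _ in range(trials))
print(f"P(run of 5+ in 32 tosses) ≈ {hits / trials:.2f}")   # comfortably above 1/2
```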

There is an interesting instance of this from a few years ago, which I will call the Nasser Hussain Effect.  Nasser Hussain was England cricket captain.  He was captain on 101 occasions, but during the 2000/2001 season, he had a remarkable run: he lost the toss in fourteen consecutive test matches.  Just to rub it in, he was captain for seven tests and lost the toss every time; then he was injured and could not play, Atherton stood in and won the toss; and then Hussain came back and lost another seven.  The BBC called it 'Flipping useless!'  The chance of losing fourteen given tosses in a row is one in two to the power fourteen, (1/2)^14, which is about one in 16,000; but then you have to factor in the analogue of being able to start at different places in the sequence.  He captained England 101 times, so you have to multiply in that factor, which still leaves roughly a one in 180 chance of such a losing streak.  So it is pretty bad luck, but by England cricket standards, perhaps not so bad!
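The one-in-180 figure can be checked exactly with a short recursion; here is a minimal Python sketch (the function name is mine):

```python
# Exact probability that 101 fair coin tosses contain a run of 14 or
# more losses, by conditioning on when the first toss is won.
from functools import lru_cache

@lru_cache(maxsize=None)
def p_no_run(n, r=14):
    """P(no run of r consecutive losses in n fair tosses)."""
    if n < r:
        return 1.0
    # The first win must come at toss k <= r (after k-1 losses), with
    # probability (1/2)^k, leaving n-k tosses that must also avoid a run.
    return sum(0.5 ** k * p_no_run(n - k, r) for k in range(1, r + 1))

p_streak = 1 - p_no_run(101)
print(f"P(streak) ≈ {p_streak:.5f}, about 1 in {1 / p_streak:.0f}")
# Comes out close to the rough one-in-180 estimate above.
```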

Let us now move on to horse racing and other types of betting events.  A question you might pose is: is it possible to always win?  Or, a slightly weaker question: is it possible to know when you can always win?

My interest in this was aroused a few years ago by a television programme.  It was one of these Midsomer Murders-style mystery programmes that goes on for hours, with people like John Nettles in it.  There were lots of people being murdered, of course, but the story was really about fixing a horse race.  Originally, the plotters were going to nobble the favourite in some way, and this was going to let some people make lots of money, but somebody had to be murdered along the way, and then more people had to be murdered to cover it up, and so on.  But what was never explained in the programme was why nobbling the favourite helps you or anybody else definitely win anything.  After all, you might just be betting on a horse that then comes second - which is still risky.  So why does nobbling the favourite help anybody?  This was never explained in the story, but I hope, in a moment or two, you will understand why nobbling the favourite certainly can be advantageous.

Let us start by having a look at some formulae.  If we have a race, there will be odds on the runners, there might be 3:1, 2:1 etc., but let us call them a1 to 1, a2 to 1, and so on.  If one of them is not something to one - 5:4 - we call that five divided by four to one, 1¼ to 1.  So these 'a's are going to be the odds of all the runners in the race, and what we want to know is whether there is some way of apportioning our stake money, say betting on every one of the runners, using a bit of the money on one runner, a different fraction of the money on somebody else, so that whatever the outcome, we win.  The interesting thing is that there is.

This is a mathematical optimisation problem: you want to apportion your stake so that your return is guaranteed whichever runner wins, without staking more than your total amount of money.  The simple solution that arises is that you do indeed bet on every horse, and if a horse has odds of ai to 1, the fraction of your money you put on that horse is 1 over the-odds-plus-1, that is 1/(ai+1).  So for the horse at 2:1, you bet the fraction 1/(2+1) - you put a third of your money on that horse.

The interesting thing is that if the sum of all these fractions that you put on the horses - call it Q - adds up to less than 1, then, whichever horse wins, your winnings will be (1/Q - 1) times the total stake, where 1/Q is bigger than 1.  So you will get that money back, plus your stake money.  So if the sum of these reciprocals of the odds-plus-one adds up to less than 1, you are always a winner if you bet in this way.

            Q = 1/(a1 +1) + 1/(a2 +1) + 1/(a3 +1) + ... + 1/(aN +1) < 1
            Winnings = (1/Q - 1) x the total stake

We can see this in action with a simple example.  Suppose there are just four runners and the odds are 6:1, 7:2, 2:1 and 8:1.  This makes the a's in our equation a1=6, a2=7/2, a3=2 and a4=8.  When we feed these into our equation, we get the following sum:

            Q = 1/(6+1) + 1/(7/2+1) + 1/(2+1) + 1/(8+1)
            Q = 1/7 + 2/9 + 1/3 + 1/9
            Q = 51/63
            Q < 1

So we can see that Q comes out at less than one.  So if you bet one-seventh of your stake money on runner One, two-ninths on runner Two, a third on runner Three, and a ninth on runner Four, then, whichever runner wins, you will get back your stake money plus 12/51 of the stake money.  So you are always a winner.
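Here is a minimal Python check of this example, using exact fractions (note that 51/63 reduces to 17/21, and 12/51 to 4/17):

```python
# The four-runner example: stake fractions 1/(a+1), the sum Q, and the
# guaranteed profit (1/Q - 1) on the total staked.
from fractions import Fraction

odds = [Fraction(6), Fraction(7, 2), Fraction(2), Fraction(8)]   # a1..a4 "to 1"
stakes = [1 / (a + 1) for a in odds]

Q = sum(stakes)
print("Stake fractions:", stakes)         # 1/7, 2/9, 1/3, 1/9
print("Q =", Q)                           # 17/21, i.e. 51/63 < 1
print("Guaranteed profit:", 1 / Q - 1)    # 4/17, i.e. 12/51 of the total stake

# Check: whichever horse wins, the payout stake_i * (odds_i + 1) is the same.
assert all(s * (a + 1) == 1 for s, a in zip(stakes, odds))
```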

This immediately gives you an idea of how basic race fixing works.  In our sum, the favourite is the runner with the smallest odds ai, so the favourite is always the biggest contributor to that sum.  It could be that, when you look at the odds with all the runners included, the quantity Q comes out bigger than one, just as the bookmakers intend; but if you know that the favourite has been nobbled, you should discount the favourite from the list of horses that you bet on.  Taking the favourite's term out of the sum can flip it to below 1, and then you just bet on all the other horses in the fractions I have described, and you have an inevitable win.  So removing the favourite, if you alone know the favourite is removed, can turn an unwinnable betting situation into a winnable one.  That, I suspect, is what must have been behind the plot in this murder story: you remove the biggest contributor to the sum, and you flip the total to below one.
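A hypothetical example makes the flip explicit (these odds are invented for illustration):

```python
# Five runners, favourite first at evens (1:1).  With the favourite in,
# Q > 1 and the bookmaker is safe; with the favourite known to be
# nobbled, the bettor drops its term and Q falls below 1.
from fractions import Fraction

odds = [Fraction(1), Fraction(3), Fraction(4), Fraction(5), Fraction(6)]

Q_all = sum(1 / (a + 1) for a in odds)
Q_nobbled = sum(1 / (a + 1) for a in odds[1:])   # favourite's term removed

print("Q with favourite:   ", float(Q_all))              # ≈ 1.26 > 1: no sure win
print("Q without favourite:", float(Q_nobbled))          # ≈ 0.76 < 1: a sure win
print("Guaranteed profit:  ", float(1 / Q_nobbled - 1))  # ≈ 32% of the stake
```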

Of course, you might suspect that bookmakers and people offering odds know some of these things as well, at least by intuition, and a bookmaker who consistently offered odds summing to less than one would be pretty stupid - eliminated by natural selection long ago.  But the way this game can be played is with several bookmakers: by placing a bet on one horse with one bookmaker and on another horse with another, you can sometimes create this winning situation in combination.

Suppose that we are betting on a simple two-horse situation - Oxford versus Cambridge, say, in the Boat Race - and we have two bookmakers.  One bookmaker offers odds of 1.25 on an Oxford win and 3.9 on a Cambridge win, and the other offers 1.43 on an Oxford win and 2.85 on a Cambridge win (these are decimal odds: the total return per £1 staked).  The interesting thing is that if you work out the quantity Q, the sum of the reciprocals of the odds, for Bookie One, it comes out at 1.056, which is bigger than 1, since he knows what he is doing, and so, in the long run, he will gain 5.6% on all the bets that are placed.  That is a rather small margin, I think - usually they pick these numbers to be a bit bigger.  Bookie Two's Q is 1.051, so he gains 5.1% in the long run.  So both of them feel very happy that they cannot lose.  But what you need to do is adopt a mixed strategy: you back Oxford to win with Bookie Two, and Cambridge to win with Bookie One, because when you work out the Q for that strategy, it is less than 1.  You would win 4.6%, plus your stake money.

The way you would do it is to bet £100 on Oxford with the second bookie, and 100 x 1.43 / 3.9, which comes out at £36.67, on Cambridge with Bookie One.  Then, if Oxford win, you collect £143 from Bookie Two, and if Cambridge win, you collect £143 from Bookie One - either way, more than the £136.67 you staked.  So this is the way it works in practice.  You do not try to play upon the naivety of one person; rather, by considering two or more bookies and analysing the odds in combination, you can gain a positive return.
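Here is a minimal Python sketch of this two-bookmaker arbitrage:

```python
# Cross-bookmaker arbitrage on a two-horse race, using the best decimal
# odds on each side: 1.43 on Oxford (Bookie Two), 3.9 on Cambridge (Bookie One).
best_oxford, best_cambridge = 1.43, 3.90

Q = 1 / best_oxford + 1 / best_cambridge
print(f"Mixed-strategy Q = {Q:.4f}")                     # ≈ 0.956 < 1

stake_oxford = 100.0
stake_cambridge = stake_oxford * best_oxford / best_cambridge   # ≈ £36.67
total = stake_oxford + stake_cambridge

print(f"Stake £{stake_oxford:.2f} + £{stake_cambridge:.2f} = £{total:.2f}")
print(f"Return either way: £{stake_oxford * best_oxford:.2f}")          # £143.00
print(f"Profit margin: {stake_oxford * best_oxford / total - 1:.1%}")   # ≈ 4.6%
```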

What about those situations where Q is bigger than 1?  For betting, these are the unsuccessful, losing situations.  But if you look into it a bit further, this is actually the money-laundering case.

When Q is bigger than 1, you do not have a guaranteed win; but, in the long run, you have a guaranteed and predictable loss of 1 minus 1 over Q of your stake money, 1-(1/Q).  If Q came out to be three, you would lose one minus a third - two-thirds of your stake money.  That is not very good.  If Q were ten, you would lose nine-tenths.  So the closer Q is to one, the less money you lose.  The point is that if you work out Q for a particular race, you can work out in advance what your loss will be.  It seems that, in some parts of the world - I think this went on in Ireland for some time - there was a strategy of laundering the proceeds of bank robberies by using on-course betting.  Of course, you would not want to put your robbery proceeds on one horse - that would be really stupid: you would soon lose most of it, and it would be a bit conspicuous.  But if you spread it around huge numbers of races, on different racecourses, and you bet on all the horses, and you understand this type of formula, then you know what the cost of your money laundering will be.  It might be that you lose just 10% of your stake money, or 20%, in the long run, but that will be the cost of laundering the cash.

So much for these rather untoward and curious aspects of probability.  I want to move on and finish by talking about some aspects of probability in sport, and particularly judging in sport.  Sometimes judging is very weird, and there is no sport where the judging is weirder than ice skating.  It has recovered a little credibility since the Salt Lake City Olympics - the rules for judging figure skating have been changed - but it is interesting to look back and see why they had to be changed, and what was odd about the judging situation.

At the Winter Olympics in Salt Lake City, the key event that caused problems was the Ladies' Singles Figure Skating.  This was not the judging-bias problem; this was just a strange situation that arose because of the way the rules had been structured.  It turned out that the structure of the rules allowed something to happen which people working in probability and voting theory usually outlaw at the outset - it is taken as an axiom of a voting system that this should not be allowed to happen.

Just to recap on how the skating rules used to be: skaters would skate a short programme first, and then, the following day, a long programme.  All those marks you used to see - the 6s and the 5.9s and so forth - you can forget about, because the judges did so rather quickly.  Those marks were not carried forward and were not used for any purpose other than arriving at the ordering of the skaters.  So it does not matter how much better you are than anyone else - that information counts for nothing.  All that matters is how you rank compared to the other skaters.

In the short programme, the first-ranked skater - in this Salt Lake City case, Michelle Kwan - was given a mark of 0.5, so the lowest mark was best.  The second skater, Slutskaya, was given a mark of 1; the third-placed skater, Cohen, 1.5; and Sarah Hughes, 2.  So the marks are basically the positions of the skaters.

In the long programme, placings were given twice as much credit: if you came first in the long programme, you were given not 0.5 but 1, so the mark is doubled.  What happened was that, after three of the skaters had skated, Hughes had skated the best long programme so far and was given a 1; the second best was Kwan, given a 2; and Cohen was third best and given a 3.  You then add those scores together, and the lowest total is best, so Kwan was in the lead with 2.5, Hughes was second with 3, and Cohen was third with 4.5.

At this stage there was one skater left to skate, Slutskaya, and what happened next was really rather perverse.  Slutskaya skated, and was ranked second in the long programme.  Suddenly, the leader-board changed: Slutskaya moved up with a total of 3, Kwan dropped to third, and Cohen dropped to fourth.  Hughes and Slutskaya now had the same score, and when that happens, the better performance in the long programme breaks the tie, and so Hughes - who was lying second when Slutskaya did her long skate - moved up into first position and received the gold medal.  But look at what has happened: first Kwan is better than Hughes, and then Hughes is better than Kwan, because of the performance of somebody else.  Logically, this is very strange: the relative merits of Hughes and Kwan have been altered by the performance of a third party.  Usually, in a voting system, you exclude this possibility ab initio; the requirement is called the Independence of Irrelevant Alternatives.  If you prefer A to B, whether somebody votes for C should not alter that.  But you can see that skating judging was infested by this strange voting paradox, and you could not really know what you had to do to win until everybody had skated.  As a result, the rules were changed, so that scores are now given for lots of individual elements of the programme and the actual scores are added together.  It was always very mysterious why that was not done in the first place - why not just add all the scores together?
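Here is a minimal Python reconstruction of that arithmetic under the old rules (half-weight short-programme placings plus full-weight long-programme placings, lowest total wins):

```python
# Salt Lake City 2002 ladies' singles under the old ordinal rules.
short = {"Kwan": 1, "Slutskaya": 2, "Cohen": 3, "Hughes": 4}   # short-programme placings

# Standings after three skaters' long programmes (Slutskaya yet to skate):
long_before = {"Hughes": 1, "Kwan": 2, "Cohen": 3}
print({s: short[s] * 0.5 + p for s, p in long_before.items()})
# Kwan 2.5 leads Hughes 3.0 and Cohen 4.5

# Slutskaya then places second in the long programme, pushing Kwan and Cohen down:
long_after = {"Hughes": 1, "Slutskaya": 2, "Kwan": 3, "Cohen": 4}
print({s: short[s] * 0.5 + p for s, p in long_after.items()})
# Hughes 3.0 ties Slutskaya 3.0 (the long programme breaks the tie for Hughes),
# while Kwan, who led Hughes before, falls to 3.5 and third place.
```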

The fallacy here is that the judges were not adding together scores; they were adding together ranking places.  Those numbers really just say that she is first, she is second, she is third, and she is fourth, and then the rankings are added together.  You must not do that.  If you convert scores to rankings and add the rankings, you run foul of possible paradoxes of this sort.

Another classically corrupt judging system can be found in boxing.  A case in point is a particular world heavyweight bout in 1999, between Holyfield and Lewis.  The general situation is that there are three judges, and they give a score out of ten to each fighter for each round - it is very mysterious, because they always seem to go 10:9 or 9:10, and nobody ever wins a round by more than this one point - but, again, the scores themselves are not kept.  All that is recorded is which fighter each judge has winning each round, or whether the judge scores the round even.  In this fight, over twelve rounds, the judges' cards came out as follows:

            Judge        Holyfield   Lewis   Even
            Judge 1          7          5      0
            Judge 2          5          5      2
            Judge 3          4          7      1

So the first judge has given seven rounds to Holyfield and five to Lewis, with no rounds even.  Judge Two scores it a draw: five rounds for Holyfield and five for Lewis, with two even.  The third judge gives it 7:4 to Lewis.  The result of the fight is therefore judged to be a draw: one judge for Holyfield, one for Lewis, and one even.  But you can see, if you count the rounds, that Lewis has won by 17 rounds to 16.  Indeed, it would not have mattered by how much any one judge gave the win to either fighter - the third judge could have scored it 12:0 to Lewis - it would have made no difference.  So it is a very strange judging system.  It is not too different, in some ways, from tennis, where it does not matter by how much you win a set - you only get credit for one set.  So throughout sport you have these hierarchical scoring systems that decide to throw away some information and keep the rest.

Lastly, I just want to show you again something I mentioned in the very first lecture in the series - one cannot miss it out in any discussion of competitions and probability, because it is so unusual.  So every time you see it, it should surprise you, and allow you to win pounds off your friends.

But first, to summarise what we have seen in these last examples: what went wrong with the skating is that you are adding preferences rather than scores.  Just because A beats B and B beats C does not mean that A is going to beat C.  If you do not believe me, suppose there are three judges and three competitors in some event - it might be skating - and the first judge ranks them in the order ABC, the second judge ranks them BCA, and the third judge ranks them CAB.  The first judge puts A above B, the second puts B above A, and the third puts A above B, so A beats B by 2:1 on the rankings.  Similarly, the first and second judges put B above C, and only the third puts C above B, so B beats C by 2:1.  Since A beats B, and B beats C, you expect that A must obviously beat C.  But it does not: only the first judge puts A above C, while the second and third put C above A, so C beats A by 2:1.  If you start counting preferences, you can arrive at this strange, paradoxical situation.
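Here is a minimal Python sketch counting those pairwise preferences:

```python
# The three-judge ranking cycle: pairwise majorities need not be transitive.
from itertools import combinations

rankings = ["ABC", "BCA", "CAB"]   # each judge's order, best first

for x, y in combinations("ABC", 2):
    x_wins = sum(r.index(x) < r.index(y) for r in rankings)
    print(f"{x} over {y}: {x_wins} judges out of 3")
# A beats B 2:1 and B beats C 2:1, yet C beats A 2:1 - a cycle, not an ordering.
```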

A nice case in point is school league tables.  Suppose there are two schools, and the league tables rank the performance of the schools in every subject they offer - Chemistry, Physics, English, Languages, PE, Music, and so forth - in terms of their exam results, and you find that School One beats School Two in every single subject, and so it puts in its publicity: 'We are the best school.'  But it can still be the case that, when you combine all the examination results, School Two is ranked higher than School One.  Even though it is ranked behind in every single category taken one at a time, when they are combined, School Two can come out on top - and so, in its publicity, it too says: 'We're the best school.'
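This can happen when the two schools' entries are spread differently across subjects.  Here is a hypothetical illustration in Python (all the numbers are invented):

```python
# Simpson's-paradox-style reversal: School One has the better pass rate in
# each subject, but School Two has the better pass rate overall, because
# each school's entries are concentrated in different subjects.
results = {                        # subject: (passes, entries)
    "One": {"Maths": (19, 20),   "English": (400, 800)},
    "Two": {"Maths": (720, 800), "English": (9, 20)},
}

for school, subjects in results.items():
    for subject, (passed, entered) in subjects.items():
        print(f"School {school}, {subject}: {passed / entered:.0%}")
    total_passed = sum(p for p, _ in subjects.values())
    total_entered = sum(n for _, n in subjects.values())
    print(f"School {school}, overall: {total_passed / total_entered:.0%}")
# One wins Maths 95% v 90% and English 50% v 45%; overall, Two wins 89% v 51%.
```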

The last example I want to show you, in case you had forgotten it from the first lecture, goes back to a famous and now rather antiquated American television show from many years ago, the Monty Hall Show.  This particular trick became known as the Three Box Trick.  In the original situation, there was a prize behind one of three boxes - 1, 2 or 3 - and it was a very valuable prize, perhaps a big sports car, while behind the other two boxes there was nothing particularly worth having.  I think, in the original show, it was a goat in each case, so you either won a goat or you won the sports car.

What you are asked to do is choose one of the three boxes, so you make your choice, and then Mr Hall, who knows where the car is, opens one of the two boxes that you have not chosen which has a goat in it, and shows you the car is not there.  Then he asks you if you want to change your choice.  So let us suppose that you have picked box One, and he opens box Three, and he says, 'Do you want to stick with number One or do you want to swap to number Two?'  What do you do?  Are you thinking, 'I bet he knows it is really in the one I have chosen, and he is trying to make me change my mind'?  But the situation, in terms of probability, is very unusual.

You have made a choice of one of the boxes, and there is a one in three chance that the car is in any particular box.  Let us say that you have picked box One: there is a one-third chance that the car is in your box, box One, and a two-thirds chance that it is in either box Two or box Three, but you do not know which.  Now, Mr Hall opens box Three, and there is nothing in it.  That must mean that there is now a two-thirds probability that the car is actually in box Two, because there was a two-thirds probability that it was in Two or Three, and it is not in Three, so the whole of that probability falls on Two.  So you should switch: you are twice as likely to win if you swap to box Two than if you stick with your original choice.  If you want to go home and try this with somebody, have a go - it is very counter-intuitive, and it is a good example of the type of equally-likely-outcome counting that we talked about earlier on.
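A simulation makes the two-thirds advantage easy to believe; here is a minimal Python sketch:

```python
# The three-box (Monty Hall) game: sticking wins about 1/3 of the time,
# switching about 2/3.
import random

def play(switch):
    car = random.randrange(3)
    choice = random.randrange(3)
    # The host opens a box that is neither your choice nor the car:
    opened = next(b for b in range(3) if b != choice and b != car)
    if switch:
        choice = next(b for b in range(3) if b != choice and b != opened)
    return choice == car

trials = 100_000
print("Stick: ", sum(play(False) for _ in range(trials)) / trials)   # ≈ 0.33
print("Switch:", sum(play(True) for _ in range(trials)) / trials)    # ≈ 0.67
```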

That is all we have really got time for.  Thank you very much for coming to this year's series.  The Geometry lectures will be starting again in October, and working through another six lectures on a completely different set of topics.  I think we are starting off with tightrope walking and after that we will be looking at the cutting of diamonds and other tricks of the light and many other interesting things like that.

Thank you!

 

 

© Professor John D Barrow, 3 March 2009

 

This event was on Tue, 03 Mar 2009


Professor John D Barrow FRS

Professor of Astronomy

Professor John D Barrow FRS has been a Professor of Mathematical Sciences at the University of Cambridge since 1999, carrying out research in mathematical physics, with special interest in cosmology, gravitation, particle physics and associated applied mathematics.
