User Error: Why it's not your fault

We all make mistakes when we use computer systems – forgetting to save the last document version, neglecting to add the promised email attachment, texting the wrong person, or buying the wrong train ticket. Mistakes can be merely irritating or can have serious consequences. Exploring how users can get the best from their systems will connect interaction design to magic and illusion as well as to mathematics. A propensity for blaming ourselves when something goes wrong can make life harder for designers.

 

20 January 2014

User Error: Why it’s not your fault!

Professor Tony Mann

Welcome to my first Gresham College lecture of 2014, and a particularly warm welcome to anyone who is at their first Gresham College talk. I hope that you will attend many more of the wonderfully varied lectures offered by this remarkable institution.

This is the first of three lectures I am giving this term. The next two are about what I think are particularly interesting topics in computing and mathematics. On 17 February I shall be talking about matching algorithms – the ways in which we can pair off couples (as in computer dating) or assign people to jobs in a way that takes into account everyone’s preferences – and discussing some of the surprising insights that emerge. We’ll find out, for example, that creating tennis pairings for a ladies’ or men’s doubles is much harder than for mixed doubles! Then on 17 March I shall be exploring the use of randomness in mathematical computing. You might think that with mathematics being a subject of rigour and logic, there is no place for tossing coins in mathematical decision-making – but it turns out that randomness is a valuable tool in solving exact problems. Neither of these lectures will assume that the audience has any particular mathematical background.

Today’s talk, however, has a different focus. I want to explore human error, especially the mistakes we make when we use computers, and the difficulties that interaction designers face in creating systems that we can use effectively and comfortably.

Computers have given us power I never dreamt of in my youth. We carry around with us more computing power in our phones than was available from the most powerful computers in the world only a few decades ago. Computers in cameras, cars and satnavs do much to make our lives easier. The laptop beside me and the data projector above you (not to mention the information content of the internet and the power of search engines) allow me to illustrate my lecture in a way that was not available to speakers at Gresham College for the first 400 years or so of its history, and indeed you may be listening to me long after I physically delivered the lecture in Gresham College, through your computer or phone or tablet, via the invaluable Gresham College app.

So computers give us huge power. Yet it often seems that, more so than many other technologies, they bring with them immense frustration too. We have all struggled to negotiate computerised ticket machines, set a wake-up alarm on a mobile or navigate a confusing website. (As I was preparing this talk, I opened up the software Adobe Acrobat when I intended to open Adobe Photoshop, then cursed, closed it down and did exactly the same thing again!) Many of us have failed to save the final version of a document or have deleted the wrong file. I regularly send emails to colleagues minus the attachment which was the sole purpose of sending the email (and I am far from being alone in this habit). Thanks to the wonders of auto-complete, I have on occasion sent emails to the wrong person, and thanks to predictive text, some of my text messages have baffled the recipient. (I may be a mathematician, but I don’t think that is a good enough reason for my phone to change “beer” to “adds” in a text I sent recently.)

Such problems generally do no more harm than cause frustration and minor inconvenience. But occasionally the consequences of an error with new technology can be more serious.

According to a recent press report, Larry Barnett of Arkansas wished to resolve a dispute with a former employee by hiring a hitman to kill him. He was talking face-to-face with a contract killer about this project when, changing his position, he inadvertently pressed a button on the mobile phone in his pocket, causing it to dial his intended victim, who as a result was able to hear the whole conversation about the assassination plan and take appropriate action. Mr Barnett is now facing a charge of conspiracy to commit capital murder.

I have never been in that particular situation or made that particular mistake. But I have made some very annoying errors with simple computer systems. For example, one of my tasks as a University lecturer is to provide references for graduates, and a dozen or so times every year I have to go through one particular electronic system to write references for students who wish to train as teachers. I therefore know the system reasonably well. Yet here is what happened recently. I had been away and I knew that in this case the reference had to be completed rather urgently or it would prejudice the graduate’s chance of acceptance by the university of his choice. So I was keen to complete the reference quickly.

I typed my details into the system, which brought up a screen asking for confirmation that I was the right person. I have to check three tick boxes – “I can confirm that I am Tony Mann”, “I am able and willing to provide a reference for this applicant” and “I have read the help text”. On confirming these I am presented with a second screen, which shows my personal details and offers two tick boxes – “I confirm that the details above are correct” and “I no longer wish to provide an on-line reference”.

So – I’m impatient to get through these preliminary screens. Having ticked the three boxes on the first screen – remember, I’ve done this many times before – I check my details on the second screen. They’re right, I want to proceed, so I check the two tick boxes. I click on “Save” to continue, and I find that I’ve done the wrong thing – I have said that I am no longer willing to provide the reference (only a few seconds after saying the opposite on the previous screen) and the system now denies me access! I can no longer give the reference and my student’s application will be delayed, putting his place in jeopardy.

I can’t believe I have been so stupid. How could I make such a ridiculous mistake, ticking a box saying the opposite of what I intended?

Of course I phone the organisation immediately, very apologetic. The person I talk to is sympathetic – she gets these calls all the time. People make the mistake I made regularly. It is very natural, when confronted with five routine tick-boxes where the first four must be ticked to continue, to tick the final one automatically without thinking about it. We know we are ready to proceed and we assume that all the tick boxes are confirming that. Anyway, it turns out that there is a frequently-used (but slow) procedure for recovering from the error (I have to contact my student and ask him to resubmit my details). This is a common blunder – the design of the screens leads people into it regularly. It causes delay for the applicant and the institution, inconvenience for the referee, and extra work for the administrator who has to deal with the phone calls.
(And yet, after several years, the last time I looked nobody had bothered to change the screen design, so this mistake is still being made, causing frustration and unnecessary work, and delaying applications.)

But it’s not just with computer systems that we blunder. Most of us make mistakes frequently. We try to open a security door with our Oyster travel card instead of the correct keycard, we panic when our Oyster card isn’t in the correct pocket, because we took it out twenty seconds earlier and it is in our other hand, we make sandwiches for lunch and then leave them on the kitchen table instead of putting them in our bag, and we make chilli and forget to put in the chilli powder (to mention only mistakes I have myself made during the last week). Sometimes we have difficulties operating simple devices like doors. Here is what happened at the end of my lecture last week.

[A video shows two of us desperately pulling at the handle of a door which refuses to open, before a third person walks past us and, pushing the door instead of pulling, goes through without difficulty.]

(Thanks to Reece and Anneka for their assistance.)

Now doors are commonplace and uncomplicated machines, offering few choices. You can push or you can pull. Why did that one cause so much difficulty? It’s because it gives the wrong visual cues. A graspable handle like this one invites pulling, whereas a flat plate can only be pushed. These cues are so suggestive that it is baffling if the action they lead one to take automatically is the wrong one. Logically, if pulling doesn’t open the door, one should try pushing instead, but often my response in that situation is a feeling of panic. I am trapped, I can’t get out. But, worse, I feel incompetent and very embarrassed. How is it that, as an experienced adult, I can fail to work something as simple as a door?

This embarrassment is typical – we blame ourselves when we cannot work something. In the case of this door, arguably the fault lies with the designer. A bad design presents users with the wrong visual cues. We make the deduction that the design encourages, and then blame ourselves for getting it wrong, when really the responsibility lies with the designer! As we will see, our tendency to blame ourselves is actually unhelpful for all concerned.

As an aside, doors present wonderful examples of good and bad interaction design. The designer only has to show us, discreetly, whether to push or pull, and perhaps, if the door is too symmetric, whether the door is hinged to the left or right. (This is the starting point for Donald Norman’s wonderful and influential book The Design of Everyday Things, which is hugely entertaining and insightful.) How often do we see the words “push” or “pull” displayed on a door? Here are some examples. The door which we have seen me struggle with actually carries the instruction “push”, but the strong visual cue from the bars leads us to miss the written instruction. The bars on the doors in my next example are engraved with the letters “PULL” or “PUSH”, but many users miss them. Arguably, if a door carries written instructions, that is an indication of design failure, because the door shouldn’t need them – and it is a double design failure in the case of those examples where the word, necessary because the visual look of the door leads us astray, is too discreet to counteract the immediate misleading appearance of the door.

The next illustration shows a door with a pull bar.
Clearly, people had trouble opening it from the other side, so the word “PUSH” has been placed prominently on the glass. However, many of us read this through the glass (not noticing that the lettering is reversed) and try to push from the inside. I once made a five-minute detour to get out after I failed to open this door because I was pushing rather than pulling!

But the right visual cues make it easy. My final door picture shows a pair of doors where no-one is going to get it wrong, because the push and pull cues are clear and match the operation required.

The doors show us that when we interact with the world we take information from many different channels, not just the most obvious. A tragic example of this is seen in the terrible Kegworth air crash of 1989, in which 47 people were killed. A British Midland Boeing 737-400 suffered failure of one of its two engines. This should not have been disastrous because a single working engine was enough to land the plane. The pilot shut off the right hand engine – but it was the left hand one which was malfunctioning. With its one good engine switched off, the plane lost power and crashed into the ground.

Why did the pilot shut down the wrong engine? There was a bang, vibration, and smoke entered the cabin. The instruments showed that it was the left engine that was giving trouble, but it seems the pilot ignored these indicators (which were small, hard to read if the plane was vibrating, and had recently been redesigned, without further training for the pilots). He knew that the air conditioning intake for the cabin was to the right of the plane, so if smoke was in the cabin, the fire had to be in the right hand engine. The pilot didn’t know that this latest model of the 737, unlike those he was used to, had air intakes on both sides. So one of the factors in the crash was that the pilot was making quick but incorrect deductions from information other than the instruments, and giving considerable weight to these. We use whatever information we think is relevant, not just what the designer intends us to use. This is difficult for designers to foresee.

As human beings, we make mistakes. Usually the consequences are minor, and this is in part because interaction designers are aware of this human propensity and build systems that are robust and resilient: they make it more difficult for us to make mistakes and build in checks to ensure that, when we do err, the outcome is not too serious. The history of railway safety technology, for example, is largely the story of continuous improvement of systems as we learnt from accidents what might go wrong and as engineers worked imaginatively to make errors impossible or to minimise the consequences if they should occur.

The difficulty of the task is shown by the accident at Hull Paragon station on 14th February 1927, when two trains met head-on following an unfortunate combination of mistakes. One signalman, intending to move lever number 96 to set the points for the incoming train, instead moved the adjacent lever 95, which diverted the outgoing train into the path of the other. This should have been impossible, because the advanced safety features should have prevented the points from being moved unless the route was clear: but unfortunately the other signalman had cleared the signal controlling the incoming train a little early, before it had completely passed the signal.
There was a window of 1.9 seconds provided by the early clearance of the signal, and it took 1.6 seconds to move the points. You can see how unlucky it was that the wrong lever was moved precisely at the only time when it could have adverse consequences. All the precautions against error were subverted by a combination of two human mistakes and very bad luck.

Systems for railway signalling are obviously safety-critical, so it is natural that designers should be very aware of the dangers of error. But sometimes systems in which safety does not appear to be an obvious issue can present the possibility of fatal error. Some years ago a colleague told me about an incident in which his wife had played a part.

She was a pharmacist, and someone came to her pharmacy with a prescription which included two drugs which she knew would be fatal if taken together. (We’ll call them A and B.) She immediately phoned the doctor who had issued the prescription. His response was, “I am well aware that A and B must not be prescribed together. I would certainly never write them on the same prescription.” But despite his protestations, he had done so. How?

The doctor was printing the prescription from a computer system which presented a list of available drugs on a drop-down menu. He intended to prescribe drug C for his patient, along with drug B. Drug C happened to have a name which was 24 characters long. The text box in the computer system which displayed the name of the drug showed only 24 characters. As it happened, there were two versions of drug C available: C itself, but also a version which combined C with drug A. The first 24 characters of “C” and “C with A” were identical, and the doctor had, unawares, chosen “C with A” rather than straight C. Since he was also prescribing B, the result was that he had written a prescription for two pills which, if taken together, would have killed the patient. Fortunately the pharmacist was alert and the danger was avoided.

What is interesting about this error is that it is essentially due to a decision taken by a database designer who could hardly have foreseen this situation. A decision about the capacity of a text box in a database doesn’t seem to have any serious safety implications, yet the consequences of this decision could in this case have been disastrous. This shows again just how difficult interaction design can be.

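To make the truncation problem concrete, here is a minimal sketch in Python. The drug names and the 24-character display width are purely hypothetical, chosen only to illustrate how a menu that shows just the first 24 characters of each entry can make two different products look identical on screen.

```python
# A hypothetical illustration of the truncation problem described above.
# These drug names are invented; only the 24-character display limit
# matches the story.
drugs = [
    "Exampletamol 500 tablets",                   # "drug C" on its own (24 characters)
    "Exampletamol 500 tablets + Dangerine 10mg",  # "drug C" combined with "drug A"
]

DISPLAY_WIDTH = 24  # the text box shows only this many characters

for name in drugs:
    print(name[:DISPLAY_WIDTH])

# Both entries appear on screen as "Exampletamol 500 tablets",
# so the prescriber cannot tell which one they have selected.
```

Displayed at full width, or with an ellipsis to show that a name has been cut short, the two entries would have been easy to tell apart.
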
So we see that user errors can be dangerous as well as frustrating. The task of those who design interaction is to help the user interact with the system as successfully as possible. How can they do this?

We interact best with a system when we have a helpful mental model of how that system works. As human beings, we have evolved to make sense of the world. Our brains take in impressions from our senses and build an understanding – a mental model – of what is going on. We then use the models our brains have constructed to determine our behaviour. If my model of my next-door neighbour is that they are someone who gets angry over trivial things, I won’t park my car in their parking space: if my model is that they are easy-going then I might be tempted to do so. I depend on the accuracy of my mental model.

Thanks to evolution, we have excellent intuitive understanding of the behaviour of the physical world. We have evolved to make sense of the world around us and we have generally good intuition about interacting with physical objects (unless the door we are trying to open has been designed by someone who values appearance over utility).

The importance of this intuition is easy to demonstrate by considering how it can be exploited. A great many of the baffling effects achieved by magicians result from our creating the wrong mental model of a situation.

For example, I have here three ropes which are clearly all of different lengths. I hold them up, I say a magic word and display them, all now the same length. Most audiences will assume that these are special ropes. But in fact, if you were impressed by the trick, it is because you have formed the wrong mental model of the situation you are observing. If you don’t want to know how the trick works, look away now! This shows what you see, with the ropes behind my hand. You assume that this is the situation – but what is really happening is shown in this picture. You can now look back at the screen.

What this shows is that these mental models are powerful. Once formed, we want to stick with them. We don’t discard them at the first sign of trouble: instead we try to find an explanation consistent with our mental models, unlikely though that explanation may be. That is why we find magic tricks so baffling – it comes more easily to us to think that something supernatural is going on than to question the mental models we have already formed. The expertise of a magician is not so much in sleight of hand as in leading us to form the wrong model of the situation we are observing.

We see the same phenomenon with optical illusions. Presented with an ambiguous image, the brain tries to make sense of it. It forms a model – this is a duck, or an old woman: that is a vase, this cube is oriented with its front face at the right – and tries to interpret the image to support that model. After we look at it for some time the contradictory information forces itself to our attention, the model fails and we switch to another model – this is a rabbit, or a young woman: that is two faces, the cube is oriented with the front face to the left – but we cannot see both possibilities at the same time. We form a mental model, and that model is what the brain works with, not the actual image that is present on our retinas.

So when we use physical objects, we deduce how to use them from a mental model based largely on their immediate appearance. Since computer software doesn’t display its physical construction, it is much harder to see how we should work it. The job of the interaction designer is to lead us to build a helpful mental model of how the system works. This is in a sense the opposite of what the magician does. The magician wants to lead us to form an incorrect mental model that will cause us to misinterpret what we are seeing. The interaction designer wants to lead us to form a mental model which will ensure that we navigate the system effectively.

But when the mental model conflicts with the behaviour of the system we have trouble. That is what causes so much of the frustration we all experience using computer systems which don’t match our mental models. For example, when I use web-based systems I often use the “Back” button on my browser. There is one system which I have to use occasionally which provides links labelled “back” on each page, but although these links appear to take one back to the previous page, they don’t function in exactly the same way as the back button: they reopen a new copy of the page. This means that, if you are keeping track of the navigation in your head, you are not where your mental map says you are. Although I understand the distinction in principle, in practice I find this maddening: when I use this system the navigation is baffling because “back” doesn’t take me back. The mismatch between what I think is happening and what the computer does is physically painful – on one occasion it made me want to bash my head against the nearest wall. That a slightly awkward quirk in a straightforward and useful computer system can have such a strong effect on an experienced user shows how much is at stake in the design of interaction.

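The mismatch is easy to see if we model the user’s mental picture of browser history as a simple stack. This is only an illustrative sketch – no real browser or website code is involved – but it shows why a “back” link that opens a fresh copy of the previous page leaves you somewhere other than where your mental map says you are.

```python
# Illustrative only: the user's mental model of "back" is a stack that
# shrinks, but the badly designed site makes the history grow instead.

history = ["search results", "record 17"]  # pages visited, most recent last

def browser_back():
    """What the user expects: discard the current page and return to the previous one."""
    history.pop()
    return history[-1]

def site_back_link():
    """What the site's own "back" link does: open a fresh copy of the previous page."""
    previous = history[-2]
    history.append(previous)
    return previous

print(site_back_link())  # shows "search results", so it *looks* like going back...
print(history)           # ...but the history is now three pages deep, not one.
```
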
So the designer’s task is to suggest the right mental model. Perhaps counter-intuitively, this model doesn’t have to be an accurate representation of what is really going on. When I am editing a document on my computer, I think of it as a continuous sequence of pages. In the computer’s memory the file might be split into scattered fragments which are tracked and collated by the computer’s operating system, but I don’t need to know that. My model, based on my familiarity with different technology, is sufficient to help me do what I wish to do, whereas thinking in detail about how the software is manipulating low-level data would be extremely unhelpful. The designer’s task is more difficult than simply helping me understand the low-level details of the functioning of the system.

An effective strategy is for a system to present its unfamiliar workings in terms we already understand. That’s why one operating system refers to “windows”, “desktop”, “files” and so on. The match doesn’t need to be perfect for this strategy to be effective. We understand that a window on a computer screen gives us a view of something we are working with, while another window lets us see something else. We bring to the computer our understanding of some of the features of real windows, desktops and files. The fact that, unlike windows on my computer screen, the windows of my house cannot be resized whenever I wish doesn’t stop me using the analogy to work effectively on my laptop.

But there is another very strong reason why interaction design is difficult. The problem is that the designer, who knows how the system works, has to put themselves in the place of the user, who doesn’t. And that is very difficult. To build the system, the designer has to think like a computer; to make it easy to use, they have to think like a human. These are very different modes of thinking, and trying to use both perspectives simultaneously is rather like trying to see both readings of one of the visual illusions at the same time. In my experience, many of the best systems programmers eventually “go native” and find it very hard to communicate with their users, because they can only see the system from the computer’s viewpoint and they cannot switch to see it the user’s way.

To appreciate the user’s point of view of a system is an inherently difficult thing.
When you have designed a system and someone presses the wrong button, your natural reaction is “What a stupid user!”, not “I didn’t design that very well!” When you have placed a sign saying “push” on a door, it is hard to imagine that an intelligent user might still pull! It is very annoying when you have put thought into making something easy to use and yet the users don’t find it so! The temptation to regard the user as an unthinking idiot who doesn’t have the intelligence to appreciate the fine points of your design is absolutely overwhelming. Fortunately, good designers are able to reject this gut reaction and can come to understand the reasons for the user’s difficulties.

And this is where designers are not helped by users. When we make a mistake, we blame ourselves. When I can’t open the door because I push instead of pulling, I feel it is my fault! So I don’t complain, I don’t tell the designer they have got it wrong. And since most of us are like me, the designer gets no feedback and never realises that their design is causing problems.

The design guru Donald Norman tells of an occasion when he was asked to advise about the usability of a computer system that was under development. This system made different uses of the “Enter” and “Return” keys (a distinction that has largely vanished from keyboards now). Use of the wrong one might cause the user to lose unsaved work. Even though this was made clear in the system instructions, Norman thought this was asking for trouble: people used to using these keys interchangeably would be likely to hit the wrong one from time to time. The designer assured Norman that this was not happening. He had wondered whether this might happen, so he had asked for feedback, and no user had ever mentioned this as an issue. Clearly there was no problem.

But when Norman spoke to the users, every one of them said that they had lost work in this way, and that it was very frustrating. So he asked why they hadn’t reported it when they gave feedback: the answer was, “It’s a stupid error I’ve made; the instructions are clear; and it’s entirely my fault I got it wrong. The designer doesn’t want to know about my mistakes, only about problems where the system isn’t right.” So our instinct to blame ourselves means designers don’t get the feedback that would help them improve their systems, even when they specifically ask for it.

So good design can help us make fewer errors. But we can also help ourselves. Where we have a tendency to make particular errors, we can take precautions. These may be as simple as making it a habit, when I get up from my seat at the end of a train journey, to look around to make sure that I haven’t left a scarf, book or phone on the seat or a bag on the luggage rack.

These little tricks are “resilience strategies”. I’m sure we all have them, and I will invite you to share yours at the end of this talk!

Another example deals with flash drives – these helpful USB devices which we use to transfer files from one computer to another. But how easy it is to transfer a file from the flash drive, and then leave the USB stick in the computer at the end of my lecture! Well, some people avoid this problem by attaching their flash drive to their keyring: we’re much less likely to leave that dangling from a shared computer when we have finished with it.
The danger, of course, is that if we do leave it, we find when we get home that we are without our house keys, so we have traded the minor inconvenience of a lost flash drive for the major crisis of being locked out of our house and car. My personal preference is to make it a rigorous policy to remove the flash drive from the computer and put it in my pocket as soon as I have copied the files.

So, last week, when I made my sandwiches and left them on the kitchen table, what strategy might have helped me avoid my mistake? The sandwiches left my mind while I was washing the dishes. So a good resilience strategy would be to put my sandwiches in my briefcase as soon as I have made them, before doing the washing up. The risk in so doing is that I forget about the dirty dishes, but leaving my washing up to be done by someone else is much less costly for me than leaving my lunch at home.

How about my chilli-less chilli? Well, if I had got all (and only) the ingredients that I needed out of the cupboard before I started cooking, then I wouldn’t have made that mistake.

So what might be resilience strategies for common computing mistakes? One which is built into most software is to make all actions that cannot be reversed (like deleting a file) a two-stage process – the user is asked, “Do you really want to do this?” So when I decide to rename a folder, and poor mouse control means I select “delete”, which is next to “rename” on the drop-down list, I have a chance to recover. So long as we are alert, and don’t automatically click on “yes” without thinking, we are protected. (Unfortunately, for some common actions, clicking on “confirm” becomes so automatic that we can still make mistakes.)

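As a minimal sketch of that two-stage pattern – not taken from any particular piece of software – a destructive action can be wrapped in an explicit confirmation step, so that a slip of the mouse is caught before any harm is done:

```python
# A minimal, hypothetical sketch of the two-stage pattern for irreversible
# actions: the destructive step runs only after an explicit confirmation.

def confirm(message):
    """Ask the user to confirm an irreversible action."""
    answer = input(f"{message} Do you really want to do this? (yes/no) ")
    return answer.strip().lower() == "yes"

def delete_folder(name):
    # If "delete" was chosen by mistake (perhaps when "rename" was intended),
    # this prompt gives the user a chance to recover.
    if confirm(f'"{name}" and all its contents will be permanently deleted.'):
        print(f"Deleting {name}...")  # the real deletion would happen here
    else:
        print("Nothing was deleted.")

delete_folder("Lecture notes")
```
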
How about my problem with email attachments? A good rule here is to attach the file first, before writing the message. You are much more likely to forget the attachment after writing the message than you are to forget to write the message after attaching the file. Unfortunately I don’t follow my own advice on this one, which I think is because I am already thinking about the message I am going to write when I click on the “new email” button: my mental model of the process doesn’t allow me to attach the file first!

Here’s another situation where a simple strategy may avert disaster. It is very easy to write an email and press “send” before one has finalised it. In the excitement of writing “best wishes, Tony” I may click “send” before remembering that I had intended to go back to tidy up the opening paragraph, or to spell-check, or to make sure that I haven’t said anything that might upset someone in the recipient list. But if I form the habit of writing the email body before completing the “To” field, I give myself time to stop and think before the email has gone, and I prevent a premature release of my message. It also gives me a final opportunity to check the subject line of the email for typos – which I often make because I give it less care than the message body – and since that is the first thing the recipient will see, these errors will create an immediate bad first impression.

Next time you make a mistake, it is worth reflecting on whether a slight change to your working practice might reduce the likelihood of repeating it.

So in conclusion I would like to draw three lessons from the examples we have looked at today.

The first is that we are too ready to blame ourselves for “user error”. As human beings we are highly sophisticated creatures who have evolved to deal effectively with a complex world. When we make a blunder, there is often a good reason behind it. We would like systems to be designed with a knowledge and understanding of human frailties, systems which protect us from human error. So sometimes when systems fail to do this, it is a design weakness.

User error is usually not the user’s fault. From the perspective of many users, designers who are more interested in ideas of style and visual attractiveness than in resilience to errors have the wrong priorities. Designing to help users avoid errors is more valuable, even if no-one notices the effort that has gone into the success of the resulting product.

Secondly, our willingness to blame ourselves makes it harder for designers, because they don’t get feedback about the weaknesses of their systems. Without that feedback, how are they to know that the system isn’t as effective as it could be, because we are going wrong in ways that they did not foresee?

Thirdly, for these and other reasons, effective interaction design is extremely difficult. It requires one to see the system simultaneously from two points of view: that of the machine and that of the very human user. While knowing exactly how the system is intended to be used, the designers must put themselves in the position of someone who knows nothing about it. I used to think that writing complex mathematical software for real-time systems was the most demanding part of computing, but I now realise that that is nothing compared to enabling complex systems to be used effectively and painlessly. Computer systems can be frustrating, but generally we get good results with most of them, and interaction designers deserve a great deal of credit.

And a final thought. If a designer creates a system where good results can be obtained only by users with considerable natural talent who have undergone several years of intensive training, and where a large amount of regular practice is necessary if the user is to continue to be effective, would we value this product? Well, that description applies pretty well to a violin. It is certainly not user-friendly enough for a novice to get good results straight away. So do we regard Stradivarius as a disastrous failure? No, we revere him, and his creations are worth millions of pounds. This is one final example to illustrate the complexity of interaction design.

Before I finish, I should mention a project which aims to develop our understanding of human error and to share the resilience strategies that help us avoid simple mistakes. This is the Errordiary – www.errordiary.org – where anyone can post about their blunders and discuss ways to avoid repeating the mistakes. Use the Twitter tag #errordiary to report your failures, and search for the tag to be entertained and instructed by those of other people!

Thank you for listening.

© Professor Tony Mann 2014

This event was on Mon, 20 Jan 2014


Professor Tony Mann

Visiting Professor of Computing Mathematics

Professor Tony Mann has taught mathematics and computing at the University of Greenwich for over twenty years. He was President of the British Society for...

