17 April 2013
Technology and Vision
Professor William Ayliffe FRCS
Professor William Ayliffe
Good evening, ladies and gentlemen, and welcome to the penultimate lecture of the Gresham Physic Series. The final one will come in six weeks’ time and will be an overview of the last three years.
We are very privileged tonight to have two esteemed Professors who are experts in their field, who will be talking about different aspects of technology and vision.
Firstly, we will have Professor Marshall, who holds many patents in this area and has an extensive knowledge of the field, and also of its recent history. Ophthalmology is known as an early adopter of technology. We were the first specialty, for example, to use lasers for a variety of different reasons, and different types of lasers are used for different purposes in ophthalmology. High-end optics and aspheric lenses have been adapted, as you know, for cataract surgery. These, and many others, will be described for you by John Marshall.
Following that, Professor Reinstein, one of the world’s experts in laser refractive surgery – the correction of the aberrations of the eye to enable people to see more clearly, and to see clearly without spectacles – will be speaking for the final section.
So, without any more ado, I would like you to welcome Professor John Marshall, and I am very grateful that he is able to come and talk to us tonight.
Professor John Marshall
I always think it is interesting when you mix technology and biology. I was at Penn State University on a sabbatical when I saw a lecture announced: “The Brain is a Computer”, and a young professor walked onto the stage and said, “Ladies and gentlemen, I am going to talk about the brain as a computer, and I want you to think – just think, how light this computer is,” and he walked around the stage and then he jumped up and down, and then he said: “Think of the peripherals: it has got these amazing grabs and it has got fantastic control as to where it grabs and incredible sensitivity. Then think of the input [loud noise] – it has audio input. It has visual input, olfactory input, and tactile input.” And I thought, my goodness me, where is he going…? And then he said, “But you know the best thing about this computer? It takes unskilled labour to create it!”
Okay, so I am going to take you through a journey in terms of technology, and I want you for a moment to imagine a world without light because, if we had no light, we could have no vision. If you think of Judeo-Christian religions, “In the beginning, God created Heaven and Earth, Earth was without form, void, darkness…and the Lord said…let there be light.” Light is the vital energy for life. Now, can you imagine, as a primitive man, coming to terms with the fact that the Sun disappeared every night and you prayed and hoped it would come back the next day, and how difficult it must have been to understand the tilt of the Earth and the summer and winter seasons? I guess it is not surprising, therefore, that so many early civilisations worshiped the Sun, and I guess this reached its peak with the Sun God in Egypt. So why is light and sight so important?
Well, just think: you are now a communal animal and you really always need to know whether you are going to fight or run away when you are confronted with a stranger. If you think of the respective performance of your ears and your eyes, your ears operate over yards at most – sound does not travel very quickly – but with your vision, you can see up to nine miles, the horizon, and it would take only 1.3 seconds for a photon of light to leave Earth and arrive at the Moon. So, light and vision are clearly the best advantage for communal living.
Light is merely visible radiation, and I guess most of us have an innate fear of radiation because of the connotations of radiation in the modern world. Now, I want you to look at the person next to you for a moment and tell me how old they are… Do not tell them… Now, I have to tell you, you are all rotten observers, because the bits you are looking at are dead – i.e. the surface of your skin, it is dead, or your hair, and the bits that are alive, the surface of your eyes or the surface of your lips, are maximally five days old.
First of all, in terms of protection, your body has cell populations which are divided into two: there are those cells in high-stress situations, like your skin and the lining of your gut, which last a few days and are constantly renewed throughout your life. Then there are cells which are formed during embryogenesis and form your brain, and indeed the lining of your eyes, and those have to last your whole lifetime – those cells cannot renew.
Even with something turning over as rapidly as the skin, if you abuse your body, like this lady here – she is not wearing a leather jacket…that is prolonged exposure to the Sun. So we know that, even with a wonderful protective mechanism, the skin ages, and you just have to compare for a moment: feel the skin on the back of your hand and then on your backside, which is beautifully smooth because we do not expose it – well, not many of us…
So, what about the eye? Well, now, this is a unique situation because, in order to see, you need to let radiation inside, and in order to stimulate vision, that radiation has to be absorbed within the cells, and those cells are not dividing. So you have cells in the back of the eye, in the retina, which have to last your entire lifetime under a radiation load. It is not surprising that the eye suffers in the same way as the skin. If we look at the onset of cataracts, the aging lens, as a function of degrees of latitude, you see that people who live nearer to the Equator have their cataracts five to fifteen years earlier than us in the UK, who never see the Sun, and this is becoming quite serious because the hole in the ozone layer is about three times the size of the United States, and we are beginning to see an accelerated incidence of age-related cataracts at younger ages in Argentina and the countries in that area. So, the eye is going to have the same problem, in fact a worse problem, than the skin.
Imagine living in a really extreme environment, like the desert or a snowfield. Most people think that their light exposure comes from over their head. That is not the case, because your eyes are set back and, from overhead sunlight, your eyes are in shadow, and when you rock your head back, you will find the lids come down, and even of the little bit of light that falls on the surface of your eye, most is reflected off. So what is important is the ground. If you are walking on grass, about 2.5% of the overhead ultraviolet is reflected back into your face; if you are walking on snow, it can go up to more than 40%, which is why you get UV keratitis in snowfields.
The first technologies were eye protection in extreme environments – taken to an extreme here by Elizabeth Taylor… If you go to the British Museum and look at Egyptian paintings, they put dark eye-shadow or henna to line the orbital cavity in order to reduce the amount of light, and in extreme situations they had what is effectively a slit visor to limit it further. If we look at the Inuit – this goes back a thousand years or more – they made these wonderful slit goggles, out of wood or whalebone or indeed ivory, to protect them. These were probably the first technologies for looking after the eye.
Well, let us look at the eye at the moment. The eye is roughly divided into two bits. It allows radiation to go inside and the front surface, the cornea, is the major focusing element. There is a little bit of adjustment by the lens inside the eye, and the radiation then falls on the back here, the retina. The front is concerned with image formation and the back is concerned with image detection and interpreting the image and sending it back to the brain to create the sensation we know as vision.
Most of the early technology played around with image formation. The first lenses occurred about 2600BC, and they were polished crystals. We are not quite sure if the Egyptians used these to magnify images or merely to light fires, by concentrating the rays through them.
Certainly by the Assyrian Empire, these were being used as aids to magnification, and this is a unique specimen which is held in the British Museum.
Now, how do we know? If you look at artwork from around two thousand years BC, much of it is so small that it could not have been done without the use of lenses. So, the very first technology was lenses.
The good old Vikings not only had the lodestones to guide their ships, but they were the first mass producers of lenses, grinding lenses on wooden lathes, and out of crystal again, but they used them to light fires, and I guess it was interesting in the rape and pillage bit.
Where did technology go next? Well, next, it was spectacles. These are the earliest form of spectacles. These are rivet spectacles. The first painting is 1352, and the first time they appear in print, 1452. So, early on now, we have got lenses being used to aid vision.
My next picture is not Dan Reinstein, but early surgeons got in on this and, clearly, it was extremely helpful to have these aids to vision. But, again, I would like you to think: the educated elite, mainly the clergy, were dealing almost exclusively with calligraphy, manuscripts with very large print, and it was not until the advent of the printing press that the masses suddenly had access to books and printed material. In the early days, because of the cost of paper, many of these were very small and the print was not very big, so now you have a real pressure for mass use of spectacles for mass education.
I think it is a great credit to the United Kingdom that, in London, we had a City Guild, the Worshipful Company of Spectacle Makers, founded by Royal Charter in 1629. This is the oldest optical standards organisation in the world. Part of the incentive to create this was the European problem, as always – let us keep bloody European imports out! Dutch spectacles were not very good, and the Spectacle Makers had the right to seize sub-standard spectacles, as they saw it, within the City of London and, with a silver hammer, smash them on the London Stone, which was in Cannon Street and is now in the wall of what is a very, very moribund building – but it is still there, and that is the stone where the seized spectacles were broken.
Spectacles were important, but someone like Samuel Pepys actually stopped writing in 1669 because he felt he had exhausted his eyes. Now, part of this was working by candlelight with one pair of spectacles. He had a magnifier, and when he took it off, of course, it took a little time to accommodate to the distance again, and he felt that this was not doing him any good. There is also some suggestion that he was developing a cataract, but he did not have the right sort of spectacles, and those did not come along until a little bit later, with Benjamin Franklin, who had the same problem and decided to take his distance spectacles and his reading spectacles, saw them in half, and put them together to create bifocals – a wonderful, wonderful innovation. For the first few hundred years, the only impact of technology – and a fantastic impact – was improving the image.
So what happened next? Well, as I said, the eye is divided into these two bits, and we had played around with the front bit and done quite well. Now, what about the back bit, the retina, the bit that is connected to the brain and the bit that actually captures the images?
This was a real problem for early workers. They were not helped by Galen, who rather messed things up a little bit, and even someone as distinguished as Leonardo got it wrong, because they all assumed that the image of the world on the back of the eye, on the retina, was the same way up as the world you are looking at, and in order for Leonardo to make that happen, he had to reflect the light in and out off the lens to turn the image up the right way. Descartes did a really neat experiment by cutting the back of the eye and looking at the image, and sure enough, it is upside down. So, your eye sees the world with an image which is inverted, and it is turned the right way up in your brain. Even really smart people get things wrong on occasion.
I guess the next big move was Sir Isaac Newton and his wonderful experiments, aided by Hooke, in order to determine the nature of light. He thought that light was particles and light could be broken up by his prism into various colours. What most people do not know was he also looked at the wiring of the eye to the brain and saw that the nerves crossed over – something called the optic chiasma. He discovered that in 1682, but never published, and it was up to William Briggs to see the same thing and publish it, so he is remembered for describing this optic chiasma, whereas Newton is not, but Newton obviously is remembered for discovering the colours in light and the particulate theory of light transmission.
But then there was another worker, Thomas Young, an extraordinary man, who thought, well, actually, light does not always behave like particles – sometimes it behaves like waves. He came up with the wave theory of light, and now we have a mixed theory of particles and waves. But Young’s contribution to the technology of vision was to recognise that, for us to have colour vision, we only really need three types of light-sensitive cell responding to different wavelengths: one type of cell that responds to red, one that responds to green, and one that responds to blue, and that gives us a full spectrum. Subsequent work over the years has demonstrated that this is exactly the case. In our eyes, we have special cells called cones. We have short-wavelength cones that respond to blue, long-wavelength cones that respond to red, and middle-wavelength cones that respond to green.
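[Editorial sketch] Young’s three-receptor scheme can be illustrated in a few lines of code. The Gaussian sensitivity curves and peak wavelengths below are illustrative stand-ins, not measured human cone data; the point is that any spectrum, however complicated, is reduced by the eye to just three numbers, which is why two physically different lights can appear identical.

```python
import math

# Illustrative cone model: any light spectrum is reduced to three numbers,
# one per cone type. The Gaussian curves and peaks are assumptions for this
# sketch, not measured human cone sensitivities.

WAVELENGTHS = range(400, 701, 5)             # visible band, in nanometres
CONE_PEAKS = {"S": 440, "M": 545, "L": 565}  # assumed peak sensitivities

def sensitivity(wavelength, peak, width=40.0):
    # Bell-shaped response of one cone type to a given wavelength
    return math.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

def cone_signal(spectrum, peak):
    # Total cone response: sum of (light energy x sensitivity) over the band
    return sum(spectrum(w) * sensitivity(w, peak) for w in WAVELENGTHS)

def triplet(spectrum):
    # The three numbers the brain actually receives for this light
    return {name: cone_signal(spectrum, peak) for name, peak in CONE_PEAKS.items()}

# A narrow-band "yellow" light at 580 nm...
yellow = lambda w: sensitivity(w, 580, 10)
# ...and a red + green mixture with almost no energy near 580 nm
red_plus_green = lambda w: 0.5 * sensitivity(w, 630, 10) + 0.5 * sensitivity(w, 535, 10)

for name, light in (("yellow", yellow), ("red + green", red_plus_green)):
    print(name, {k: round(v, 2) for k, v in triplet(light).items()})
```

Both lights drive the L and M cones strongly and the S cones hardly at all, so they can look alike even though their spectra differ – metamerism, and the reason a three-phosphor television screen can reproduce colour at all.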
Not only did Young describe this, he also gave us Young’s modulus of elasticity. He also spoke fourteen languages and wrote an essay describing the grammar and vocabulary of 400 languages, and he designed a method of tuning musical instruments, and then, in his spare time, he was a major contributor to decoding the Rosetta Stone and hieroglyphics. So, this was before television, and people really worked hard…
Let us move on in technology, and the next big technology step was to look into the eye, and that was first done by Charles Babbage, who designed what was called an ophthalmoscope in 1847. He also designed the first computer. Babbage was another one of these absolutely brilliant men. But Babbage made a mistake, and he took this little device that could see into the back of the eye and he spoke to an ophthalmologist, Thomas Wharton Jones, who said, “What a lot of rubbish – cannot see a future in that!”
Fortunately, or unfortunately, Hermann Helmholtz designed the same thing, and he went to German ophthalmologists, who said, wow, this is fantastic – we can look into the back of the eye and describe the diseases, etc. and he also gave one of his early instruments to Sir William Bowman, a very notable surgeon at Moorfields Eye Hospital. Helmholtz is remembered as the man that invented the ophthalmoscope, but, remember, it was really Babbage.
For about 100 years, with these ophthalmoscope devices, the images were recorded by drawing and painting, and indeed, when I first went to the Institute of Ophthalmology in 1965, there was an artist there called Terry Tarrant who spent his whole time looking into the eyes of patients and painting the images.
Eventually, there were cameras to do that. The first was used on the rabbit in the 1860s, but in man, it really did not take off until the 1930s to 1950s, and the reason it did not take off was that, in order to get a picture, you put in a huge flash of light – the patients absolutely hated you and could not see for several hours afterwards – and a lot of people said it was dangerous, you should not do that.
Now, the real advance started by a man called Sir Harold Ridley, who is famous for something else, but he was the first person to try and reduce the energy by not putting in a big bolus of light but by putting in a spot that he scanned. Unfortunately, this was the device he had. It got very, very hot, and the power-pack kept setting fire to his clinic. It was a great idea, and now, we actually have devices that can capture the back of the eye by scanning.
If you look at that big device over there, in the bottom corner is a handheld device that one of my PhD students designed, which replaces that huge device.
We can easily take images of the inside of the eye. In what you are looking at there, the whitish spot with the spider-like branches coming out of it is the head of the optic nerve and the blood vessels, and the dark spot over here is the centre of vision that you are all using now to visualise that image.
A few years ago, there was another advance, called optical coherence tomography, which enabled us to section the eye. So, we can take sections through the front, sections through the lens, and sections through the retina, almost like histology but in the living eye. That was a huge advance, and that is one that is on-going today.
Beginning to understand some of the neurophysiology and getting much better imaging meant that we really could begin to make real progress. We all know now that the front of the eye, the lens and the cornea, do the focusing. The image is focused on that little yellow spot there, the fovea. The retina does the processing, and within your retina you have this honeycomb of cells: about six million cone cells, at the bottom of that little pit, and about 130 million rod cells, which give you black-and-white vision and orientation vision around the edge.
So, you will have heard of the implants – that is an x-ray at the bottom there – where we put a little array of light-sensitive detectors in the back of the eye in people who have gone blind, wired up to a little amplifier and back to the brain. At the moment, it is not very successful, because there are only about 200 of these detectors and you have 130 million plus photoreceptors, but it is like cochlear implants – it is going to move.
One of the biggest problems that is going to happen to all of you sitting there, if you live long enough, is you are going to have a cataract. Why? Well, the lens is formed by a little invagination of the surface of the embryo, which buds off, and all of the cells that are ever formed in the lens are there for the rest of your life. So this is a problem of waste disposal. If you take a human lens and cut it, it is rather like the rings of a tree, with right at the centre, the embryonic nucleus, and then added to and added to and added to throughout life. But it is going to go bad if you live long enough and you will not be able to see, and so, in the early days, 800 BC, in India, you had something called couching, where you took a pointed device, you poked through the front of the eye, you poked the lens and just pushed it out of the way into the jelly inside the eye and cleared the optical pathway. The optics were not terribly good but you could see again – great, you would pay the doctor and go home, and then everything went wrong weeks later when you lost track of the doctor.
So now, we know we have to take the lens out, and we used to take the whole lens out, but now we take out the bit in the middle and leave the bag that holds the lens in place. The idea was, well, if you did that, could you put another lens inside the eye? One of the first people to think that was a great idea was Casanova – not because he was an ophthalmologist, but because he was in prison with someone who could make little glass spheres, and the discussion was: that is really interesting – if you put one of these glass spheres in the eye, you might actually be able to see again. It did not quite work like that – it needed a lot more technology – and the first bit of technology really was in instrumentation.
The Germans developed some wonderful delicate blades, Graefe knives and other devices, for going inside the eye and very carefully removing bits. It features in “Silver Blaze”, a Sherlock Holmes story, where, you remember, the villain used a Graefe knife to cut the Achilles tendon of some sheep in practice for crippling the racehorse Silver Blaze.
Things move on, so now we have the most wonderful instruments, some of which are diamond, ultra-sharp; but diamond is carbon, which sits directly above silicon in the Periodic Table, and at least one company now makes the most amazing blades out of silicon – very inexpensive and very, very useful blades.
The biggest advance really was Harold Ridley. Harold Ridley was working with a student on a Spitfire or Hurricane pilot who had had some canopy Plexiglas in his eyes, and the student said, “There is not an immune response – what would happen if you made a lens of that and put it in the eye?” Ridley thought about it for some time and then contacted a company called Rayner and, in 1950, he took one of these lenses and put it in a human eye. Most of them, I have to say, did not do very well, but some did extremely well.
But it was not until a man called Charlie Kelman, sitting in his dentist’s chair while the dentist was using ultrasound to clean his teeth, thought, wow, we could use that to shake the lens to pieces inside the bag. So that is what happened: he shook the lens to pieces and sucked out the sort of syrup it had been converted into, and now he had the bag, beautifully intact, and could put a plastic lens in there. It did not take off until the ‘70s to ’80s, but now virtually everybody has a so-called intraocular lens – the result of Spitfires and Hurricanes in the Second World War and a visit to the dentist.
Now, before the next speaker, I have been told I was the Bugs Bunny and I have just got to set him up. Lasers. Everybody that visited me thought it stood for Lucrative Acquisition Scheme for Expensive Research. It is actually Light Amplification by Stimulated Emission of Radiation, or Light Oscillation by Stimulated Emission of Radiation, but that would make “LASERs” “LOSERs”…
Einstein postulated the concept in 1917 and he got his Nobel Prize in 1921. The concept was demonstrated with microwaves, in a maser, by Charles Townes in 1955, and he got his Nobel Prize in 1964. Then, in 1960, Theodore Maiman practically demonstrated the first working laser.
I was lucky enough to get a grant from the Royal Air Force to try to determine how hazardous lasers were for the eye, at a time when nobody knew, and so I spent a lot of time plotting something called the ED50 – the exposure with a 50% probability of producing damage. What that enabled us to do was to say, look, there are three ways lasers are going to damage you. If you can get enough photons of light onto a target in a very short time, then the electric field you build up will strip electrons off the target and you will get ionisation, a plasma, like an atomic bomb. If it takes a few seconds, then, as the photons arrive, they will cause vibration amongst the molecules, and vibration is heat. And if it takes perhaps a few seconds to a human lifetime to get a change, then those changes are photochemical. That meant we understood the mechanisms and were able to produce codes of practice to protect people all around the world from the damaging effects of lasers.
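[Editorial sketch] The ED50 idea can be illustrated with a small dose-response fit. The exposure energies and lesion rates below are invented for the sketch, and the crude grid search stands in for the probit or maximum-likelihood fitting a real safety study would use; the output is the energy at which the fitted curve crosses a 50% probability of producing damage.

```python
import math

# Hypothetical trial data: (pulse energy in microjoules,
# fraction of exposures producing a detectable retinal lesion)
trials = [(10, 0.02), (20, 0.10), (40, 0.35), (80, 0.72), (160, 0.95)]

def logistic(dose, ed50, slope):
    # Probability of damage as a logistic function of log-dose;
    # by construction, logistic(ed50, ed50, slope) == 0.5
    return 1.0 / (1.0 + math.exp(-slope * (math.log(dose) - math.log(ed50))))

def fit_ed50(data):
    # Crude grid search minimising squared error between model and data
    best = None
    for ed50 in (d / 10 for d in range(100, 2000)):      # 10.0 .. 199.9 uJ
        for slope in (s / 10 for s in range(5, 60)):     # 0.5 .. 5.9
            err = sum((logistic(d, ed50, slope) - p) ** 2 for d, p in data)
            if best is None or err < best[0]:
                best = (err, ed50, slope)
    return best[1], best[2]

ed50, slope = fit_ed50(trials)
print(f"estimated ED50 ~ {ed50:.0f} microjoules")
```

In practice a threshold study runs many single-shot exposures at each energy, scores each as lesion or no lesion, and reads the ED50 off the fitted curve; safety limits are then set well below it.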
That guy is in an unsecured trench, with 40 tons of rubbish over his head on that grab. What was his concern? “There is a laser on my shoe.” So, he was more concerned about the laser than anything else. But we have codes of practice.
Then lasers become so powerful that I have worked with the International Red Cross to get the Geneva Convention to ban anti-personnel laser weapons, and I spoke to the United Nations in Geneva and in New York, and eventually we got a ban. Great.
But because one understood the mechanisms, you could start designing new systems. In the 1970s, this was the workhorse – that big desk thing there is a cross between a washing machine and a valve radio, but that is what was used in the clinics. What did we do? We made something the size of a telephone, and patented it, and that sold very well, and it revolutionised the way that lasers were used to treat conditions like diabetic retinopathy and age-related macular degeneration.
Just recently, we have designed a laser that only produces changes in cells in very few areas, so you get all the good news and none of the bad news.
But because I was wearing two hats, I was doing laser safety and laser clinically, and in the late-‘70s, there was a new laser called an Excimer laser, which had very, very energetic photons, in fact, so energetic they were absorbed in a fraction of a micron – a micron is a thousandth of a millimetre – and that is a human hair with grooves cut in it by this laser. So I thought, my goodness me, this was being used to make electronic circuits – if we could use this in the eye, just think, we could change the curvature of the front of the eye and change the ability to focus light.
We formed an early committee: Stephen Trokel, on the extreme left there, Theo Seiler, the German, and myself. The American looked after the money, the German, unfortunately, looked after the food, and I looked after the science, and Steve got to know my wife.
Okay, so, what was the idea? Well, we want to cut curves. If you fire a laser through an aperture and slowly open it, you can cut a thing like a Greek amphitheatre, and if you put it under computer control, you can make it really smooth. So that was great. You just needed a formula to relate how big an area, and how deep, would produce what sort of optical correction – we did that. We then patented it, we formed a company, and we went through the FDA, which stands not for Food & Drug Administration but Funeral Data Analysis.
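[Editorial sketch] The formula he refers to is not spelled out in the lecture; the standard textbook version of this relationship is the Munnerlyn approximation, which links the diameter of the ablation zone and the attempted correction to the depth of tissue removed. The numbers below are a worked example, not clinical guidance.

```python
# Munnerlyn approximation for a myopic excimer ablation:
#   central depth (microns) ~ zone diameter (mm)^2 x correction (diopters) / 3
# This is the commonly cited simplification of the full geometric formula.

def ablation_depth_microns(zone_diameter_mm, correction_diopters):
    return (zone_diameter_mm ** 2) * abs(correction_diopters) / 3.0

# Example: a -4 D correction over a 6 mm optical zone
depth = ablation_depth_microns(6.0, -4.0)
print(f"central ablation depth ~ {depth:.0f} microns")  # ~48 microns
```

The quadratic dependence on zone diameter is why widening the treatment zone, which improves optical quality, costs tissue so quickly: the same correction over a 7 mm zone needs roughly a third more depth than over 6 mm.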
Then you have another idea and so, instead of firing through an aperture, you now make a template and you destroy the template but cut the information in the template onto the surface of the eye, and that means you can cut positive lenses as well as negative lenses, so you can correct short-sight and long-sight. Great!
The problem was, with those big areas, plumes would come off, and as the beam came down, it would hit the plume and you got things called central islands. We ended up shooting the laser in small spots, under computer control, rather like planishing a bowl, which produced any curvature we wanted – correcting any shape, any imperfection.
Now, my idea was, every time I landed at Heathrow, the guy up the front was there but the computer had the experience of thousands of landings and thousands of pilots, and I thought, well, look at air accidents. Technology is improving, but the yellow columns are pilot error. I challenge any surgeon to say when did you last have system error? So I wanted to remove the surgeon. But what did the surgeons do? They did not want to work on the surface of the eye, for lots of reasons. They wanted to cut a flap and work under the flap, so immediately, you are back under the control of surgeons, at which point I hand over to a surgeon…
My job is to speak as a surgeon and maybe put your mind at ease a little that not all surgeons are the same. The way I would like to develop this talk in the next few minutes is, first, to talk a bit about the history, going slightly further back; then to summarise some of the ways the technology has evolved and how the safety has changed since the early days, so that we can talk about who can now get rid of their glasses; and then to talk a little about the future, perhaps what might be coming down the pike.
If we go to the historical component, we go to the middle of last century, when this Spanish ophthalmologist had a fight with his brother in Barcelona, and decided to leave the familial dynasty, Barraquer Clinic of Barcelona, and move to Colombia, as far away as he could from his brother, and establish his own set-up because he wanted to work on changing the curvature of the cornea. He was convinced that this was the way of correcting vision, and in 1948, he published his first paper. This is his 1980 textbook – it is about that thick, showing a massive amount of thought that went into this experimental and beautiful, elegant scientific work.
The basis of everything he did was what he called the Law of Thicknesses: he took basic trigonometry and used it to derive simple variables that could predict what volume of tissue would need to be removed from the cornea in order to change the curvature by the right amount, change the power of the eye, and therefore do the job that the glasses or contact lens would be doing.
His early work was done at a clinic in Bogota. He had developed a carpenter’s plane adapted to work on the cornea, and he would pass this over the cornea, cutting off a disc of tissue, which he then froze in liquid nitrogen, placed on a watchmaker’s lathe, and milled – hence the word “keratomileusis”, milling of the cornea – before unfreezing it, putting it back on the eye, and sewing it on. This procedure would take him 45 minutes, and it could correct twenty diopters of myopia – people with Coke-bottle glasses – but it had an accuracy of plus or minus four diopters, so it really was for the ultra-blind to become kind of functional.
His disciples worked on methods of doing this without freezing, based on plastic dies into which this disc, cut off the cornea, would be screwed; then, on this unfrozen tissue, they would pass a microkeratome over the underside of the disc to lop off the tissue required, and then, again, the same thing – put it back on the eye and sew it on. The fact that you were not freezing the tissue meant that there was less trauma to the eye, so the visual recovery was quicker, but it still suffered from inaccuracies, for example of centration, angle of attack, and regularity of cut.
Another one of his students worked on the concept of passing this microkeratome, removing the cap, and then making measurements so that you could pass the microkeratome again on the same surface, on the cut surface, so that, automatically, you were removing a second disc of tissue from the bed, creating the shape change over which you would then put the cap. He used the fact that the cornea itself has a pump that pumps water out of it, the endothelium, the cells inside the cornea, and so this pump creates a negative water pressure, like a capillary action that actually sticks the cap on, and he realised that, if you just closed the eye, it would seal because the surface skin would grow over this cap overnight and you would not require sutures. So, the technology was evolving – we are into the mid-‘80s here, when he also described the idea that you could stop the microkeratome, not cut the cap off and leave it on a hinge, and continue with the in-situ keratomileusis, where you would do the second cut, but now, because you have a hinge, the replacement is of course facilitated, and the chances of losing this cap go down significantly.
We will stop for a second in the mid-‘80s to interject, where the IBM researchers, quite unaware of the impact it was going to have on vision, had been experimenting, initially on bits of turkey left over from Thanksgiving, and shown that a particular wavelength of ultraviolet light was particularly good at cutting organic tissue without causing the charring caused by other wavelengths. This fact was noted by an American ophthalmologist who was thinking about imitating the Russian technique of slicing the cornea and weakening it to change its shape, and who thought he could use this laser to weaken the cornea and change its shape more accurately.
It was Professor John Marshall who thought: wait a minute – I could use this Excimer laser right on the visual axis; we can scrape the skin off the surface of the cornea, apply the laser pulses onto the surface, and just put a bandage contact lens in for a few days while the skin grows back, and we have changed the shape, using the fact that there is no thermal damage caused by an Excimer laser. So the idea was that you could combine the fast healing and lack of pain of these surface procedures with the accuracy of the Excimer in removing the tissue. This idea was starting to spring up in various parts of the world.
Amazingly, the first place it sprang up was Novosibirsk, in Russia, where Razhev and co-workers – here they are – used a trephine to cut down into the cornea and then literally cut the flap open, leaving it on a hinge; they marked the centre – you can see little blue pulses of Excimer laser going off there – and then put this little flap back. So, technically, this was the very first LASIK, Laser-Assisted In-Situ Keratomileusis, procedure ever performed – this was 1988. They presented their results in 1990, with a two-year follow-up. The credit, however, was taken by the Europeans in 1990, by a surgeon called Pallikaris from Greece: he took a microkeratome that he had been using for rabbit experiments, created an actual flap with it, left it on a hinge, did the laser, and put the flap back. He published a paper called Laser In-Situ Keratomileusis in 1993, with ten patients with very high myopia, and so the name LASIK came into being.
The accuracy of all this was phenomenal, but putting the variables back into the hands of surgeons meant that we were now using an instrument originally designed to cut a thick 300-micron cap, from which we would mill the tissue out before sewing it back on. Now our aim was to make these caps much thinner, because we were going to take the tissue away from the bed of what was left, and it was the same instrument, not designed for that purpose. Cutting thinner flaps meant that the complication rate of using this instrument was about one in twenty: one in twenty eyes ended up with a little bit of damage to the vision, which is not great, but if you were an early adopter and you were in the 95% group, this was a godsend.
What has changed? As the years have gone on, the design of these microkeratomes has improved in order to make thinner flaps, so the safety went up, and of course confidence in the procedure has increased, as evidenced by the actual data on the millions of procedures performed. We are now stabilised at about 3.5 million procedures per year, and over 35 million procedures have been performed in the world following John Marshall’s concept.
If we look at certain technological advances, like the ability to track the position of the eye so that the Excimer laser is delivered exactly where it is supposed to be... Here is an image of the eye of a patient I treated who has a condition called nystagmus, where there is a neurological problem and the eye is not stable – the eye is always jittering. You can see that the aiming beam of this laser is being maintained absolutely in the centre of the pupil by tracking the position of the pupil a thousand times a second and readjusting the mirrors that are diverting the laser to the centre of the pupil; we could now apply this laser energy even while someone was playing tennis. So the technology, and the ability to use these improvements, has meant that the complication rate has gone down significantly. About ten years ago, microkeratome technology reached a peak, and then, with the introduction of lasers for cutting the flap, the flap complication rate has gone down to a very, very low level, so we are about 50 times safer now than we were in 1990.
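The closed-loop idea behind that tracker can be sketched in a few lines. This is a toy model only: the pupil positions are hypothetical detector readings (here in microns from the laser's optical axis), and a real tracker's optics and latency are far more involved, but the loop of "measure offset, steer mirrors, repeat a thousand times a second" is the essence.

```python
# Toy sketch of the eye-tracking loop described above. Positions are in
# microns relative to the laser axis; real trackers resample at ~1 kHz.

SAMPLE_RATE_HZ = 1000  # pupil position re-measured 1000 times a second

def mirror_offset(pupil_centre, aim_point):
    """Offset (dx, dy) the steering mirrors must apply so the beam,
    currently aimed at aim_point, lands on the detected pupil centre."""
    return (pupil_centre[0] - aim_point[0], pupil_centre[1] - aim_point[1])

def track(pupil_positions, start_aim=(0, 0)):
    """Follow a sequence of detected pupil centres, one per sample.
    Returns where the beam lands after each mirror correction."""
    aim = start_aim
    landings = []
    for pupil in pupil_positions:
        dx, dy = mirror_offset(pupil, aim)
        aim = (aim[0] + dx, aim[1] + dy)  # mirrors re-centre the beam
        landings.append(aim)
    return landings
```

In this idealised loop each correction is perfect, so the beam lands on the pupil centre at every sample even as the eye jitters; in hardware the residual error is bounded by the sampling rate and mirror response time.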
Now, the thing that is left over is the long-term safety, and the long-term safety was all to do with two elements. There is a condition called keratoconus in the population – one in 2000 people have it, and detecting this condition is important because, if you have got a weak cornea and you take tissue out, you weaken it further. The other problem with long-term safety is taking out too much tissue and going too deep. Both of these problems were predicted and described by Barraquer many years ago.
In terms of detecting this bending disease, keratoconus, there have been a lot of advances. The main one started about a hundred years ago, when Placido devised a disc: he would look through a little hole and shine the disc’s rings onto someone’s cornea, and from the distortion of the pattern he could tell the contour of the cornea, or what irregularities there were in its contours. This has of course been computerised, and so we are now able, for example, to tell whether the curvature of the cornea is spherical or rugby-ball shaped – which is when you have astigmatism: the curvature in one direction is different from the curvature in another – and it is this machine which is enabling us to pick up keratoconus early, because asymmetry in the astigmatism is the early sign of keratoconus. The problem is that, as I said, keratoconus affects only one in 2000 of the general population, but 10% of people have asymmetry, so we would be rejecting a lot of people who would be suitable for surgery if all we went by was asymmetry.
Research that we have done with high-frequency ultrasound has allowed us to measure the thickness of the skin of the cornea, the epithelium. In a keratoconic eye, where the deformity leads to front and back surface bulging, the skin partially compensates for that by thinning, and this can now be detected before the front-surface changes appear on the topography map. So the cases that were slipping under the radar, getting laser eye surgery and destabilising, are now being caught, because a keratoconic eye can be detected by looking at the epithelial thickness profile, and not just the topography.
This is an example of the thickness profile of a normal cornea, and this is an example from keratoconic eyes, where the epithelium is thinner over the cone and thicker surrounding the cone.
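The "thin over the cone, thicker around it" pattern lends itself to a simple screen. The sketch below is purely illustrative: the grid, the threshold ratio and the decision rule are hypothetical stand-ins, not the clinical algorithm, but they show the shape of the idea of flagging a focal thin spot against the eye's own average.

```python
# Illustrative toy screen for the epithelial pattern described above:
# flag a map whose thinnest point is markedly thinner than the map's
# mean. The 0.85 ratio is a hypothetical value chosen for illustration.

def keratoconus_suspect(thickness_map, ratio=0.85):
    """thickness_map: 2-D list of epithelial thicknesses in microns.
    Returns True if the minimum is well below the overall mean,
    a crude stand-in for the thin-cone/thick-surround profile."""
    values = [t for row in thickness_map for t in row]
    mean = sum(values) / len(values)
    return min(values) < ratio * mean
```

A uniform profile passes; a map with a focal dip amid a thicker surround is flagged, which is exactly why the epithelial profile catches cones that a front-surface map alone misses.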
So keratoconus can be detected. As for cutting too deep within the cornea, one of the errors being made in the early days was this: we make the flap, we take the tissue out from under the flap, and we have to leave a certain amount of tissue behind for stability, but what was being forgotten was that the accuracy of the flaps was not 100%, and this imprecision meant that some eyes were ending up with a lot less tissue than we intended. The solutions, of course, were either to make the flaps thinner or to make the imprecision smaller. In fact, as we look at the evolution of flap-cutting devices, the uncertainty has gone from 100 microns down to 90 with the more modern microkeratomes, down to 55 with the introduction of the laser, and now, with the most modern versions of these lasers, down to a very small fraction of the whole thickness of the cornea, so our chances of going too deep are greatly reduced.
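The arithmetic behind "cutting too deep" is simple subtraction once the flap's tolerance is budgeted for the worst case. In this sketch the 250-micron floor is a commonly quoted rule of thumb used purely for illustration, and the example numbers are hypothetical; the point is that a 100-micron flap uncertainty eats directly into the residual bed.

```python
# Worst-case residual stromal bed: corneal thickness minus the thickest
# flap the tolerance band allows, minus the tissue the laser removes.
# All values in microns; floor_um is an illustrative safety floor.

def worst_case_residual_bed(cornea_um, flap_um, flap_tolerance_um, ablation_um):
    """Residual bed assuming the flap came out at the thick end of its
    tolerance band - the case that matters for long-term stability."""
    return cornea_um - (flap_um + flap_tolerance_um) - ablation_um

def is_safe(cornea_um, flap_um, flap_tolerance_um, ablation_um, floor_um=250):
    """True if even the worst-case bed stays at or above floor_um."""
    bed = worst_case_residual_bed(cornea_um, flap_um, flap_tolerance_um, ablation_um)
    return bed >= floor_um
```

With a 550-micron cornea, a planned 110-micron flap and an 80-micron ablation, a 100-micron flap tolerance leaves 260 microns in the worst case; push the ablation higher and the same tolerance takes the eye below the floor, which is exactly why shrinking the imprecision mattered so much.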
These femtosecond lasers, which use infrared light, work by creating little micro-bubble explosions within the stroma of the cornea. Here is an image of the bubbles being formed inside the cornea... As those bubbles are formed, they separate and cut through tissue, and so flaps are now being created through this micro-bubble structure, which we as surgeons then separate, and because we are now able to cut very thin flaps, we no longer cut too deep.
That means that our treatment range has gone up. If you look, from the early ‘90s, when we could only correct up to about minus eight safely, now we are able to correct – well, if we look at the distribution of refractive error in the population, we are able to correct practically everybody with short-sightedness and 98% of people with long-sightedness. Of course, astigmatism has been corrected since 1993 – most people ask about that.
The thing that is left is the condition that everyone gets, called presbyopia, or ageing eyes. Many of us in the room have learnt that, as we get older, the zoom mechanism within our eyes starts to function less well and we need to hold things further away to be able to see them. Why is this? It is because of the loss of the ability to change the shape of the lens inside the eye, which is called accommodation. Accommodation is accompanied by pupil constriction, which gives a pinhole effect and increases depth of field, and of course by convergence – the eyes have to turn in towards the point that is near.
Without going into too much detail, we are now using something that we thought was a bad thing for eyes: an optical defect called spherical aberration. In a perfect optical system like the eye, you cannot really see things that are focused in front of or behind the retina, because perfect focusing means that if you move an object slightly behind or in front of the focus, it is out of focus. Whereas if you have some imperfections, and the focus is a little sprayed out, then you actually get more depth of field: you can detect edges anywhere inside that focal range.
If you look here, this is an image of how vision degrades with increasing short-sightedness. This is how vision degrades with the same amount of short-sightedness but with this spherical aberration added, and you can see that the increase in depth of field means that we can still detect edges. We are using this now in surgery to increase the depth of field, giving each eye the ability to see over a range: rather than focusing the eye on a point, we are focusing it on a zone. When we combine the spherical aberration with the pupil constriction that is there anyway, we get a fairly good retinal image, which the brain is able to interpret as a completely sharp image, and by combining the 1.5-dioptre depth of field that we have between the eyes, we are able to cover the entire range from near to intermediate to distance, and so we now have a solution for the condition that everyone gets, presbyopia.
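The "zone rather than point" idea can be worked as simple dioptre arithmetic. The 1.5-dioptre depth of field comes from the talk; the zone centres below are hypothetical, chosen only to show how offsetting the two eyes' zones lets their union span distance to near.

```python
# Toy model of blended-zone presbyopia correction: each eye covers an
# interval of focal demand (in dioptres) rather than a point, and the
# two eyes' intervals are offset so their union covers the whole range.

def focus_zone(centre_d, depth_of_field_d=1.5):
    """Focal interval covered by one eye, centred at centre_d dioptres."""
    half = depth_of_field_d / 2
    return (centre_d - half, centre_d + half)

def combined_range(zone_a, zone_b):
    """Union of the two eyes' zones, or None if a blur gap separates them."""
    if max(zone_a[0], zone_b[0]) > min(zone_a[1], zone_b[1]):
        return None  # a gap between the zones would leave a blurred band
    return (min(zone_a[0], zone_b[0]), max(zone_a[1], zone_b[1]))
```

With one eye's zone centred at 0.75 D and the other at 2.25 D, the two 1.5 D intervals meet edge to edge and together cover 0 to 3 dioptres, roughly distance out to a 33 cm reading position; centre them too far apart and a blur gap opens up between them.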
Just to finish, I am going to talk about the future. The future is to be able to do all of this through a keyhole and to remove the surgeon. This is the newest level of the procedure, where the femtosecond laser is used to cut the two sides of the lenticule of tissue that we want to remove and to create a keyhole, allowing a surgeon, without needing more than basic eye-surgery dexterity, to remove this tissue through that keyhole. This procedure takes us to the next level because, whereas the cornea is built so that the front is the strongest part and the back the weakest, and we were removing the front in the PRK procedure and most of the front in the LASIK procedure, now we are able to remove these lenticules from further back. We are taking out the weaker tissue to do the correction, leaving the cornea in a much stronger state, for long-term stability and the ability to correct higher amounts.
Of course, Barraquer had predicted this 50 years ago and had developed manual instruments to try to do it but, with the limitations of those manual instruments, never managed it. Now, with lasers, we have managed to do exactly what his genius predicted.
© Professor William Ayliffe FRCS 2013