The Search for Dark Matter

Unlike the stars and galaxies, dark matter does not give off any radiation – we can only detect it through its gravitational pull. It accounts for around a quarter of the Universe, but we do not yet understand what it is made of. The search for a better understanding of dark matter is carried out both out in space and deep underground, where astrophysics meets particle physics.



22 OCTOBER 2014


Astronomers are only able to unlock the secrets of cosmic objects beyond the confines of the Solar System by analysing the light that happens to fall towards Earth. Everything that we infer about the contents of the Universe is learnt from the way that light is emitted, absorbed, reflected and refracted. Surprisingly, one of the most important things that we have learnt over the last century is that only a very small fraction of the Universe interacts with light (in any waveband) through any of these processes – there is far more to the Universe than meets the eye… or even the telescope! In previous lectures (such as The Age of the Universe) I have presented the case for the dominant component of the cosmos that we refer to as dark energy. Today we shall concentrate on the next most prevalent (and almost equally mysterious) component, dark matter.

Dark matter pervades the Universe on the largest length-scales. It may not reveal its presence through any interaction with electromagnetic radiation, but it does have mass, and that mass – whether invisible or not – exerts a gravitational pull. This gravity influences the movement and behaviour of nearby cosmic objects that do radiate and can thus be studied through our telescopes. The requirement for an invisible component of matter was first mooted in the 1930s, and since then we have made huge progress in understanding its distribution and behaviour. But the answer to the fundamental question of what dark matter actually is still eludes us and, even eighty-five years after its discovery, remains a topic of major research. Welcome to the world of astro-particle physics, where observations of the very largest structures in the Universe (such as galaxies, clusters, and even superclusters of galaxies) inform our understanding about the nature and behaviour of particles and forces on the subatomic scale… and of course, vice-versa.



Physicists infer that there are four basic forces at work to control the way that both matter and radiation behave throughout the Universe.

The fundamental force that concerns astronomers is the attractive pull of gravity. This force is weak compared to the others, and it diminishes sharply with increasing distance according to the inverse square law. Gravity is important because it operates over the largest possible distances: within the body of a star; the path that planets take around it; the orbit of the star within its host galaxy; and how that galaxy interacts with its wider environment. The strength of the gravitational force is in direct proportion to the amount of mass present, and it is always additive, as you can't get 'negative' mass. Finally, gravity affects the motions of all matter, whether it be 'ordinary' or 'dark', electrically charged or neutral.
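For the numerically minded, the inverse square law is easy to illustrate. The sketch below is not from the lecture; the constants and the `gravitational_force` helper are purely illustrative:

```python
# Illustrative sketch of Newton's inverse-square law, F = G*m1*m2 / r^2.
G = 6.674e-11          # gravitational constant, N m^2 kg^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
AU = 1.496e11          # Earth-Sun distance in metres

def gravitational_force(m1_kg, m2_kg, r_m):
    """Attractive force (in newtons) between two point masses."""
    return G * m1_kg * m2_kg / r_m**2

f_near = gravitational_force(M_SUN, M_EARTH, AU)       # ~3.5e22 N
f_far = gravitational_force(M_SUN, M_EARTH, 2 * AU)
# Doubling the distance quarters the force:
assert abs(f_near / f_far - 4.0) < 1e-12
```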

Gravity is formulated by the laws of General Relativity. The other three forces – the electromagnetic, weak and strong nuclear forces – inhabit a world that is encapsulated instead by quantum physics. We can describe their behaviour using the 'Standard Model' of particle physics, much of which has been validated by experimental results. It remains an incomplete description of all of physics, however, due to its failure to incorporate the fourth force of gravity. On the basis that simplicity brings elegance, many scientists believe that the four forces can be united in a single theory, such as those which fall under the umbrella of supersymmetry (or SUSY).


Of the three forces that rule the quantum world, the electromagnetic force operates over the longest range (well, where 'long' is compared to the size of an atom…). Like gravity, its strength falls rapidly with distance following the inverse square law; but it doesn't affect all subatomic particles, only those which carry an electric charge. Depending on the polarity of that charge, the force can be either attractive or repulsive. This means that unlike gravity, increasing the number of particles does not necessarily increase the force, as different charges produce opposing forces that can cancel each other out. The electromagnetic force ties the (negatively charged) electrons in orbit around the (positively charged) nucleus, and strongly influences the behaviour of a hot ionised plasma (of which there are so many in the Universe). As the name suggests, it is also responsible for the absorption and production of all kinds of light: from radio to gamma-rays, right across the electromagnetic spectrum.


The remaining two forces operate only over tiny distances, and hence dominate physics only on a subatomic scale. The strong force binds the subatomic quarks together to make up protons and neutrons. At close range, the strong force can override the repulsion between positively-charged protons, enabling them to join together with neutrons to make the atomic nucleus. The weak force controls the processes involved in the breakdown of atomic nuclei in radioactive decay, the process that can transform a nucleus into that of a different element by the emission of electrons or alpha particles.


The appeal of unification lies in the elegance of the idea that the four forces merge into one at the very highest particle energies, such as those at the instant of the Big Bang. As the Universe expands and inevitably cools, the energy of the particles drops, and the forces gradually decouple from one another. Gravity is the first to become distinct from the others, followed by the strong force. Finally the electromagnetic and weak forces (which up to then are regarded as one 'electroweak' force) separate from each other and we end up with the four forces of the present Universe.



The ubiquity of dark matter on the largest scales is most apparent through a glaring discrepancy that arises between the amount of matter observed to be present through the light that it emits, and the amount of gravitational force experienced (which is strictly related to the amount of mass present). 



The great Swiss astronomer Fritz Zwicky was the first to remark on this mismatch, in 1933. Zwicky was famous for discovering and cataloguing the properties of huge numbers of clusters of galaxies identified from photographic plates of the sky. He examined the relatively close, and particularly rich, Coma cluster, using the relative motions of the individual galaxies it contains to infer the total mass of the cluster. Over a thousand galaxies inhabit a volume of space spanning tens of millions of light-years, each swarming about on an orbit in response to the combined gravitational force of all the galaxies in the cluster. To his surprise, Zwicky discovered that the galaxies were moving too fast to remain attached to the cluster, at typical speeds of the order of 1000 km/s. The galaxies should have dispersed long since, and there should be no coherent cluster left… unless there was more gravity present (and thus more mass) than that obviously contained in the visible galaxies alone.
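The scale of Zwicky's argument can be sketched numerically: balancing the galaxies' motions against the cluster's gravity gives a mass of order v²R/G. The helper below is a hypothetical order-of-magnitude sketch with Coma-like numbers, not his actual calculation:

```python
# Rough virial estimate of a cluster mass, M ~ v^2 * R / G (order of
# magnitude only; geometric factors of order unity are ignored).
G = 6.674e-11          # N m^2 kg^-2
M_SUN = 1.989e30       # kg
LY = 9.461e15          # metres per light-year

def virial_mass_solar(v_km_s, radius_ly):
    """Cluster mass (solar masses) implied by galaxy speed v and radius R."""
    v = v_km_s * 1e3
    r = radius_ly * LY
    return v**2 * r / G / M_SUN

# Coma-like numbers: ~1000 km/s galaxy speeds across ~10 million light-years
# give a mass of order 10^15 suns, far beyond the visible galaxies alone.
mass = virial_mass_solar(1000, 1e7)
```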

Zwicky expressed the problem as one of 'missing mass' (though arguably it's not the mass that's missing, but any light emitted by it…) and in a 1937 research paper he introduced the term dunkle Materie ('dark matter'). Zwicky suggested that the dark matter was another, unseen component of the cluster mass; a form of matter that still generated the attractive pull of gravity, but which had to be some 500 times less luminous than the matter that was typically in the form of stars. Soon afterwards it became apparent that the Coma cluster was no exception. Plenty of other clusters also required the presence of dark matter to account for the dynamics of the galaxies they contained.


We now know that the stars in the galaxies do not even account for all the ordinary matter in a cluster. Although the contribution from cold gas within the few spirals is negligible, there is far more mass contained in a hot (multi-million degree) atmosphere which forms the sparse but all-pervasive intra-cluster medium completely filling the space between the galaxies. Observable only in the X-ray wavebands, this intracluster medium contains around 5-10 times more mass than is in the galaxies. But it's still not enough – inclusion of all the X-ray gas still can't account for the gravitational deficit. All the ordinary matter can only account for 17% of the mass required in a cluster.



Roughly 30% of all the galaxies in the nearby Universe (including our own) are shaped into a flat disc which is constantly spinning. In our own Milky Way the Sun lies half-way out from the centre to the edge, and it takes us 220 million years to make a single orbit of the galaxy (meaning we've only made about 20 circumnavigations since the Sun was formed!). The rotational motion of stars within the disc allows us to weigh a galaxy. In the same way that the speed of the Earth in its orbit around the Sun is dictated both by the mass it is in orbit around and by how far away it is from that mass, the orbital speed of a star revolving around the centre of a spiral galaxy is set by the combined gravitational pull from all the mass contained within that orbit.


We measure how fast parts of a galaxy are moving either towards or away from us by using the Doppler effect, measuring the red/blue shifts in the spectra of stars and gas clouds contained in the spiral arms of the disc. The radio emission from cold clouds of neutral hydrogen can extend the measurements to an even greater distance, beyond the extent of the visible structure in the disc. These observations can be carried out both for the Milky Way and for other spiral galaxies outside our own (which are, perversely, easier to observe, as our view of the outer reaches of our own galaxy is obscured by intervening gas and dust clouds).

The pattern of motion is plotted as a rotation curve, which graphs the orbital speed as a function of the radial distance from the heart of the galaxy. The shape of this rotation curve reveals the distribution of matter in the galaxy. If the mass distribution followed that of the light, the orbital motion of stars outside the central bulge would decrease sharply with distance, in the same way that the inner planets of the Solar System orbit faster than the outer ones. The first serious attempt to weigh a galaxy was made by Horace Babcock in the mid-1930s, using observations of our nearest neighbouring spiral, the Andromeda galaxy. Counter-intuitively, the rotational velocity of the stars in the disc did not drop at larger radii as expected, but seemed to stay more or less level. It wasn't until the 1970s that the ubiquity of this general behaviour was confirmed, mainly due to the systematic study of distant spiral galaxies carried out by Vera Rubin, Kent Ford and their collaborators. No spiral galaxies show the sharp decline in rotational velocity expected with distance. In fact nearly all the stars in the disc are orbiting far too quickly to stay attached – they should have been flung out from their host galaxy unless there is more gravity, and thus more mass, than is seen to be contained in all the luminous matter, the stars and gases of the galaxy.
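The logic of the rotation-curve argument can be sketched with the circular-orbit relation v = √(GM(<r)/r). The masses and radii below are illustrative placeholders, not measurements:

```python
import math

G = 6.674e-11          # N m^2 kg^-2
M_SUN = 1.989e30       # kg
LY = 9.461e15          # metres per light-year

def orbital_speed_km_s(enclosed_mass_solar, radius_ly):
    """Circular-orbit speed v = sqrt(G*M(<r)/r), in km/s."""
    m = enclosed_mass_solar * M_SUN
    r = radius_ly * LY
    return math.sqrt(G * m / r) / 1e3

# Keplerian expectation: if essentially all the mass sat near the centre,
# quadrupling the radius would halve the orbital speed...
v_inner = orbital_speed_km_s(1e11, 25_000)
v_outer = orbital_speed_km_s(1e11, 100_000)
assert abs(v_inner / v_outer - 2.0) < 1e-9

# ...whereas a flat rotation curve needs the enclosed mass M(<r) to grow
# in proportion to r, as an extended dark matter halo provides:
v_halo = orbital_speed_km_s(4e11, 100_000)   # 4x the mass at 4x the radius
assert abs(v_halo - v_inner) < 1e-9          # same speed: a flat curve
```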

The absolute requirement for the presence of dark matter had by now become unavoidable, and this is true also of our own Milky Way, where even the Sun is moving about 70 km/s faster than one would expect. Confirmation of the total gravitational mass of our Milky Way comes from observations of the motions of objects not just in the flat disc of stars and gas, but of other neighbouring matter. Astronomers study the motion and distribution of globular clusters - tightly bound balls of hundreds of thousands of stars that orbit in a roughly spherical cloud around the bulge of a galaxy - and small satellite galaxies such as the Magellanic clouds, which orbit at distances well beyond the extent of the flat disc. Even the gravitational response of Andromeda to our own Galaxy is used.


Detailed mathematical modelling of the resulting rotation curve suggests that each spiral galaxy is enveloped within a much larger, roughly spherical (though somewhat flattened) halo of invisible dark matter that extends way beyond (both above and below) its visible disc. The dark matter halo around the Milky Way, for example, is thought to extend out further than 600,000 light-years – some ten times further than the spiral disc, and a third of the distance to the nearby Andromeda galaxy!

The distribution of the halo is smooth, and the mass of dark matter required is so large that it is the gravity of the dark matter, rather than that of the stars and gas, that holds the whole structure of a spiral galaxy together. The total mass inferred for the Milky Way is 1000-1500 billion solar masses, which when compared to the total luminous mass of 120 billion solar masses implies there is as much as 10 times the mass contained in dark matter as in luminous matter. Overall, the general form of the halo appears to roughly track that of the contained galaxy: dark matter halos tend to have triaxial (i.e. ellipsoidal) shapes, where the shortest axis of the visible part of the galaxy is in the same direction as the shortest axis of the dark matter halo.



Although the stars within the much larger elliptical galaxies don't show the same systematic rotational motion that is seen in spirals, they still follow orbits, albeit more random ones that take them looping either around or through the galaxy, with each star again responding to the combined gravitational mass contained within its orbit. Analysis of their motions is rather more complicated than for those in spiral galaxies, but it also reveals a stronger gravitational pull than can be accounted for by the luminous mass of the galaxy alone.


This result is supported by observations of the extensive halo of hot X-ray gas that envelops the visible stars of an elliptical galaxy. This gas is so hot that it is in the form of a plasma, an enormous cloud of charged particles moving at such incredibly high speeds that, again, all the matter should have completely dispersed long ago. The fact that the gas remains bound to its host galaxy once more implies the presence of a far stronger gravitational force than you would infer from the observable components of the galaxy alone.



We have already mentioned how the luminous mass in clusters of galaxies is dominated by the hot intra-cluster medium by a factor of about 5-10 times in mass, and we can use observations of the hot gas to confirm the need for a large gravitational mass. The luminosity of the X-ray emission given off by the gas depends directly on its temperature and density, and from these physical properties we can deduce the pressure within the gas. The pressure at any depth in the atmosphere depends on how much the gas is being squeezed by the gravitational pull acting on all the outer layers: it has to support that weight, so the outward pressure produced within the gas rises until it balances the inward gravitational pull. By measuring the X-ray radiation we can calculate the pressure in the gas at different radii, and thus the gravity required to produce this pressure, and hence the total mass of the cluster. Again we typically deduce a ratio of ordinary to total mass of approximately 15 per cent.
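This reasoning amounts to the standard hydrostatic-equilibrium mass estimate: for an (idealised) isothermal gas, the mass inside radius r is roughly M(<r) ≈ (kTr/Gμm_p)·|dlnρ/dlnr|. The sketch below uses plausible but made-up numbers to show the scale that comes out:

```python
# Hydrostatic mass estimate for cluster gas (isothermal idealisation):
#   M(<r) ~ (k*T*r / (G*mu*m_p)) * |d ln(rho) / d ln(r)|
# The temperature, radius and density slope below are illustrative only.
K_B = 1.381e-23        # Boltzmann constant, J/K
G = 6.674e-11          # N m^2 kg^-2
M_P = 1.673e-27        # proton mass, kg
M_SUN = 1.989e30       # kg
LY = 9.461e15          # metres per light-year

def hydrostatic_mass_solar(temp_K, radius_ly, dlnrho_dlnr=-2.0, mu=0.6):
    """Total mass (solar masses) needed to hold hot gas in pressure balance."""
    r = radius_ly * LY
    m = (K_B * temp_K * r / (G * mu * M_P)) * abs(dlnrho_dlnr)
    return m / M_SUN

# 100-million-degree gas extending to ~5 million light-years gives a total
# mass of order 10^15 suns - far beyond the luminous matter.
mass = hydrostatic_mass_solar(1e8, 5e6)
```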


Einstein expressed gravity as a distortion in the shape of space, with more extreme curvature closer in to a mass. Should light from a distant background source – such as a quasar or a galaxy – travel through the misshapen space in the proximity of a large mass, it will take a distorted route rather than follow a straight line, forming a mirage in a process we term gravitational lensing. Indeed, confirmation of Einstein’s prediction that gravity could affect the course of light through space was important in establishing the pre-eminence of relativity. The image of the background source is greatly magnified as many more beams of light (that would have dispersed into other directions) have been focussed towards the observer. Very often the deflection of the light results in multiple and distorted images, where the shapes of distant galaxies are smeared out into arcs and lines. The amount of gravitational lensing we observe again traces the total gravitating mass present; thus by examining the distribution and shapes of the images produced by gravitational lensing of background galaxies by a foreground cluster we can obtain an independent estimate of the cluster’s total mass. Despite relying on different types of observations, and assuming completely different physics from other methods (the pressure in the X-ray gas, the motions of the galaxies), the results agree in the requirement of a total mass greatly in excess of that contained in the visible matter. 
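The deflection underlying gravitational lensing follows Einstein's small-angle formula α = 4GM/(c²b), for light passing a mass M with impact parameter b. As a sketch, this reproduces the famous 1.75 arcseconds for light grazing the Sun, the test that helped establish relativity:

```python
import math

G = 6.674e-11          # N m^2 kg^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # kg

def deflection_arcsec(mass_solar, impact_m):
    """Small-angle light deflection alpha = 4GM/(c^2 b), in arcseconds."""
    alpha_rad = 4 * G * (mass_solar * M_SUN) / (C**2 * impact_m)
    return math.degrees(alpha_rad) * 3600

# Light grazing the Sun's limb (solar radius ~6.96e8 m):
assert abs(deflection_arcsec(1.0, 6.96e8) - 1.75) < 0.02
```

The same formula, scaled up to cluster masses, is what turns background galaxies into the arcs and multiple images described above.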


Clusters are extended sources, and observations of gravitational lensing occurring at different locations across a cluster allows one to map out the distribution of the gravitational mass and compare it to that of the stellar mass. In the same way as found for the dark matter halos of spiral galaxies, it seems that the dark matter has the same approximate shape as the visible matter; the two track each other, but the dark matter structure is far larger.  


A distant pair of clusters of galaxies, observed to be colliding together at high velocity and known as the 'Bullet' cluster, demonstrates the behaviour of dark matter, and how it doesn't react to any other mass except through the pull of gravity. The galaxies in each cluster pass by one another largely unaffected, as they are separated by distances typically much larger than their size. Most of the observable mass in both clusters is contained in their X-ray intracluster media, which collide and compress, slowing down so that they separate out from the galaxies and are left behind to fill the space between the two clusters. If there were no dark matter present, nearly all of the mass from the two clusters would be concentrated at this mid-way point, marked by the location of the X-ray gas. But the gravitational lensing of background sources seen through the clusters shows that instead, most of the mass is still clumped along with the galaxies, having emerged largely unaffected from the collision.


Dark matter is everywhere. 



The dominating presence of dark matter was clearly first inferred from observations, but it also emerges as a necessary component of the Universe from separate theoretical studies.



There is too much helium in the Universe for it all to have been created through nuclear fusion reactions at the cores of stars. Instead, most of the helium around today had to have been formed in one brief window during the first few minutes after the Big Bang, when the conditions within the early fireball Universe were sufficiently hot and dense for nuclear reactions to occur. The relative proportions of the light elements predicted from the calculations are strongly sensitive to the density of ordinary matter in the Universe at that time, and only match those observed if the ordinary matter makes up a small fraction of the total – that is, if it is contained in a Universe comprehensively dominated by dark matter.



All the observational evidence shows us that it is the gravity of the dark (rather than the ordinary) matter that holds large (galaxy-size and above) structures together. Thus the nature of the dark matter strongly determines the scales of structure that develop, as well as when they develop in the Universe. Unfortunately this is not something that we can (yet) observe happening directly - even though we view further back in time the further away we look, the timescales involved are so enormous that we cannot follow the evolution of any single galaxy. Instead we try to reproduce what has happened using cosmological simulations, which aim to mimic the evolution of the Universe and the growth of structures within it… but of course, much speeded up! The simulations use physical conditions we measure to be present at the time of the cosmic microwave background as the starting point. The tiny fluctuations from the average density that were around then, when the Universe was only 380,000 years old, provide the focus for gravitational collapse to occur. The simulations then let this early version of the Universe evolve, carefully following the laws of physics, but making an assumption about the nature of the dark matter, to try to predict a Universe that resembles what we observe around us today. In particular we try to match the pattern of structure: ideally on both the largest scales, which is lumpy, separated out into walls and filaments of clusters and galaxies cobwebbed around huge voids of empty space; and on the smaller scale of individual galaxies. 


Dark matter is essential for any structures to form in the first place. If the original perturbations we see in the cosmic microwave background were due to ordinary matter alone, it would not provide sufficient gravity by itself for these condensations to grow quickly enough. They would not evolve into anything that resembled the current Universe in the time available. This is because the visible matter is subject to forces other than gravity – forces due to thermal pressure, magnetism or centrifugal motion, for example, that resist its concentration – whereas the dark matter remains unaffected by these and can thus contract under gravity much more efficiently. Dark matter is required to drive the initial gravitational concentration of matter around which all other matter can aggregate. The simulations that are successful are those that include dark matter in quantities of over ten times the mass contained in the ordinary matter, in agreement with the observational results. Furthermore, the results show that the fraction of ordinary to total mass in the Universe can't have changed since the time of the cosmic microwave background, and probably much earlier. Dark matter can't turn into ordinary matter or vice versa.


Many such numerical simulations have been carried out over the past couple of decades, and they have improved enormously with better access to the huge amount of computer power that is required. One of the most recent and most successful of these computer simulations was published in early 2014. Known as Illustris, it follows 13 billion years of cosmic history within a volume of space 350 million light-years on a side to produce a Universe which compares well to the one around us. In particular, this simulation was the first to reproduce the scales and patterns of structure in the Universe on both the large (clusters and superclusters forming the pattern of the cosmic web) and small (individual galaxy) scales. The more than 41,000 galaxies 'grown' within the space show a healthy mix of both spiral and elliptical shapes. The simulation models not only the gravitational concentration of matter into structures, but also includes many of the complex and chaotic physical processes that happen subsequently within the galaxies, such as chemical enrichment and the energy input from star formation, supernova explosions and supermassive black holes. Like all other simulations it confirms that you'll only get something that resembles the real Universe if the visible structures emerge from a framework of dominant dark matter.



In summary, many different lines of evidence – both from a variety of observations and from theoretical modelling – require that dark matter is widespread throughout the Universe, and has been since the Big Bang. It outweighs the observable, ordinary matter by a ratio of at least ten to one, and it is not smoothly distributed, but clumped to follow the overall shape and distribution of the ordinary matter: it is just far more widely spread and less concentrated. It is difficult to rule out the possibility of dark matter that could be completely separate from the ordinary matter, but this would be the hardest of all to detect, given our reliance on the behaviour of the visible matter under the influence of gravity to detect the dark matter at all. Attempts to measure the curvature of space, by looking for a statistical excess of minute distortions or smears in the expected shapes of galaxies viewed along lines of sight that pass through some of the emptiest voids of the cosmos, have failed to find dark matter in a distribution very different from that of the ordinary matter.



The need to invoke the presence of invisible dark matter has survived many decades of thorough testing through both observations and theory, and it is now accepted by the mainstream scientific community. There are a few astronomers who continue to challenge the idea, aiming to explain away the observations by instead making slight changes to the laws of gravity: for example, questioning the assumption that the inverse square law is always valid, or suggesting that Newton's second law changes in nature at tiny accelerations. As yet, none of these modifications has been anywhere near as successful as dark matter in explaining all the observations, and particularly explaining what is seen simultaneously on the scales of both galaxies and clusters.



At the simplest level we have only two possible alternative explanations for dark matter. Either it is ordinary matter that is in a form that is just so much darker than everything else - perhaps not completely invisible, but it can hardly radiate any light in any waveband. Or it is a completely different type of matter that does not interact with light in any way at all. 



Ordinary matter is the type of matter we are familiar with, and best understand. It is made of protons and neutrons – particles known as baryons – together with electrons, and there is, of course, plenty of this 'baryonic' matter that is much less luminous than an average star! The most obvious candidates first considered for the role of dark matter were objects which are much smaller than stars. These are known collectively as MACHOs, where the acronym stands for Massive Astrophysical Compact Halo Objects.


A large population of isolated rocky planets could be responsible if they were small, ranging from perhaps half the mass of the Earth down to asteroids. Alternatively, on slightly larger scales one could invoke brown dwarfs, which are 'failed stars'; they form from the interstellar medium in the same way as stars, but never amass sufficient material to reach high enough temperatures at their core to start the process of hydrogen fusion that makes a star shine. These will be perhaps 60-80 times heavier than Jupiter. There is a natural trend in any population for the smaller objects to be the most abundant, and astronomers trying to map out the number of brown dwarfs in our neighbourhood are finding them (of course!) very difficult to detect in any great number. There are also many exotic types of very compact object that are formed when a star runs out of the fuel sustaining it against collapse under gravity. Solar-mass stars create white dwarfs that eventually cool to become less luminous, and much more massive stars produce neutron stars and black holes.


A fundamental problem with interpreting dark matter as either rocky planets or stellar remnants, however, is that you can't make them without starting off with stars. The vital ingredients of rocky planets are the heavy elements such as carbon, silicon and oxygen, which are only created in the cores of very massive stars (ones far more massive than the Sun). Thus in order to make sufficient rocky planets to begin to account for the dark matter (and remember, this outweighs the mass in stars in a galaxy by at least ten times…) you have to postulate enormous numbers of early massive stars, in order not only to turn primordial gas rapidly into suitable rocky-planet material, but also to distribute it widely into the surrounding interstellar medium ready for the formation of planets. The same argument applies to making the compact objects: to create these in sufficient number, you would need the same early, short-lived, and extremely massive stars. We don't see any observational evidence for such a stellar population – so many supernovae would be required that the light from the earliest galaxies would blaze across the Universe far brighter than those in the present day. Such an early population of stars would only have accentuated the mismatch between ordinary and dark matter in the early Universe still further, which is ruled out by the conditions required for primordial nucleosynthesis.

There are two further problems. All three of these populations are associated with star-forming regions, where material is densest in a galaxy, so it is an additional challenge to understand how the dark material created could then migrate out to be distributed as a vastly more extended halo. Lastly, even though baryonic matter may be dim, it will still interact with electromagnetic radiation to absorb and radiate energy, even if not at visible wavelengths. All baryonic matter above absolute zero will emit some level of thermal radiation. Objects such as brown dwarfs or rocky planets (or comets, gas clouds, dust particles…) are at temperatures where they will give off infrared radiation, and won't be sufficiently 'dark' if they are present in the numbers required to provide the gravity needed. And even if black holes themselves remain invisible, material accreting onto them will be heated and will radiate.


Nonetheless, searches have been undertaken, using gravitational microlensing, to detect any MACHOs that could be living well outside the rest of the Galaxy, at the edges of the stellar halo. This is the small cousin of the gravitational lensing mentioned earlier, where the same process occurs around a much smaller and more compact mass. Accordingly, detecting it is a much greater observational task. The distortion of the image of any background object such as a star is far too small to be resolved. But while the background source and lensing mass are in line, the light that would otherwise be spread out in all directions is directed forward to greatly magnify the source. The level of brightness increase depends on the relative distances of the lens and source from the observer, and the alignment required is very precise. Thus such alignments are a rare occurrence, and given that the outskirts of the galaxy are in continuous motion, any alignment will be very brief. The duration of any microlensing event depends on how fast the lens and source are moving relative to each other, and on the mass of the lensing object. Oh, and if that weren't enough, you have to remember that ordinary stars will also produce microlensing of background sources, as well as having intrinsic variability of brightness on a range of timescales!

In order to maximise the number of possible events, microlensing surveys monitor fields crowded with millions of background stars, such as along the lines of sight to the two nearby satellite galaxies known as the Magellanic clouds, which lie at distances of 170,000 and 200,000 light-years from us; the microlensing of a star in one of these galaxies by a MACHO in our halo could last anywhere between 20 and 200 days (for a lensing mass 1-100% of the Sun's). Surveys also look towards the Andromeda galaxy 2.5 million light-years away, where you have the added possibility of detecting events due to MACHOs in the halos of both Andromeda and the Milky Way!
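Those timescales follow from the time taken to cross the 'Einstein radius' of the lens, R_E = √(4GM·D/c²), where D is an effective lens-source distance. The numbers below (a lens roughly halfway to the Magellanic clouds, a typical halo speed) are illustrative assumptions, not survey values:

```python
import math

G = 6.674e-11          # N m^2 kg^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # kg
LY = 9.461e15          # metres per light-year

def event_days(lens_mass_solar, d_eff_ly, v_km_s=200):
    """Time (days) to cross the Einstein ring of a lens of the given mass."""
    r_e = math.sqrt(4 * G * lens_mass_solar * M_SUN / C**2 * d_eff_ly * LY)
    return 2 * r_e / (v_km_s * 1e3) / 86400

# Effective distance ~40,000 light-years, transverse speed 200 km/s:
light = event_days(0.01, 4e4)   # ~1% of a solar mass: a few weeks
heavy = event_days(1.0, 4e4)    # a full solar mass: several months
```

Because the duration scales as the square root of the lens mass, the observed range of event timescales directly constrains the masses of any MACHOs present.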


As yet, only a very few microlensing events that could plausibly be due to MACHOs have been observed, and each one involves great complexity in the analysis and interpretation of the data. The observed events last on timescales of less than a month, setting an upper limit on the mass of the lensing objects of around 100 Solar masses, and the consensus is that certainly less than 20% (some surveys place a much tighter limit of less than 5%) of the dark matter in the halo of our Milky Way can be locked into MACHOs. They do not make a significant contribution to the dark matter mass of our Galaxy, so we need to come up with another explanation.


Anti-matter, made up of the anti-particles that mirror every known particle, is still part of the baryonic Universe. Dark matter thus cannot simply be the antimatter counterpart of ordinary matter; dark matter particles would have their own corresponding antimatter, and must be something different. The only remaining wriggle room is a much more speculative possibility, that of primordial black holes, which would have been created in the very first fractions of a second after the Big Bang. They would have to be much smaller than the astronomical black holes we currently study, ranging from microscopic masses up to perhaps about that of the Earth. Today these would just hang around as unresponsive relics of the early Universe; they would be very difficult to discover, and are correspondingly hard to rule out. They would, however, be expected to evaporate via Hawking radiation, giving flashes of gamma-rays of a type that has not been seen.



If the dark matter is not baryonic, then we have to appeal to a more complex explanation that invokes non-baryonic matter, usually in the form of vast numbers of tiny particles. A dark matter particle must have mass, as it produces and experiences the force of gravity; as it is dark, it does not respond to the electromagnetic force; as it is not bound into ordinary atomic nuclei, it cannot respond to the strong nuclear force; but it can still react through the weak nuclear force.



The most obvious such particle to consider (in that it at least is known to exist, and in vast quantities!) is the neutrino. It also seems to tick some of the right boxes: neutrinos are electrically neutral, so do not interact through the electromagnetic force; they respond to the weak nuclear force but not the strong force (which means that neutrinos fell out of equilibrium before nucleosynthesis, so their numbers were fixed in place before the elemental abundances of hydrogen and helium were determined); and they respond to gravity. The first question is whether these tiny particles have enough mass to account for the dark matter. Estimates of their density are around 400 per cubic centimetre, so there are plenty of them around, and they don't need to be at all heavy: a mass about 10 million times less than that of a proton would be sufficient to make them a viable candidate. It is currently difficult to measure the masses of neutrinos experimentally, although upper limits suggest that they are far too light; the mass limit already sits at around 1% of that needed for neutrinos to account for the dark matter content of the Universe.
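The required mass follows from simple division: the mean dark matter density of the Universe shared out among all the relic neutrinos. The sketch below uses assumed round numbers (H0 = 70 km/s/Mpc, dark matter as 25% of the critical density, 400 neutrinos per cm³); the precise target shifts with the parameters chosen (older closure-density arguments gave ~100 eV, modern numbers a few eV), but either way it is a minuscule fraction of a proton's mass:

```python
import math

# Assumed round-number inputs (illustrative, not from the lecture's arithmetic):
H0 = 70e3 / 3.086e22      # Hubble constant in s^-1  (70 km/s/Mpc)
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
OMEGA_DM = 0.25           # dark matter fraction of the critical density
N_NU = 400e6              # relic neutrinos per cubic metre (~400 per cm^3)
M_PROTON = 1.673e-27      # kg
KG_PER_EV = 1.783e-36     # mass of 1 eV/c^2 expressed in kg

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_dm = OMEGA_DM * rho_crit               # mean dark matter density
m_required = rho_dm / N_NU                 # mass each neutrino would need, kg

print(f"required neutrino mass ~ {m_required / KG_PER_EV:.1f} eV")
print(f"  = {m_required / M_PROTON:.1e} of a proton mass")
```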


This lack of weight, however, presents a much more fundamental problem, as it means that neutrinos travel very fast, at nearly the speed of light; and as they are not slowed like normal matter, they don't shed this energy. It is thus easy for them to escape from small gravitational concentrations, and they can only remain trapped in much larger condensations. Remember, the dark matter provides the gravitational framework needed as the focus for the ordinary matter to collect into visible structures: the dark matter clumps and concentrates to form the seeds for the first galaxies, and the ordinary matter of stars, planets and galaxies only emerges later.

'Hot' dark matter, i.e. dark matter that is moving at high speed (like the neutrinos), is less easily confined in one place. The movement of the particles will 'wash out' any small-scale fluctuations in density, smearing them out to much bigger scales, with crucial consequences for the size (and thus mass) scale of the first structures to condense in the early Universe. Hot dark matter will produce structure in the cosmos following a top-down hierarchy: large entities, on the scale of superclusters and clusters, will be the first to form, with individual galaxies only forming much later through the fragmentation of the larger structures. Slower-moving 'cold' dark matter leads to a distinctly different, bottom-up scenario, where the smaller structures (on the scale of dwarf galaxies) collapse first, gradually merging and congregating to grow into the present-day galaxies and then clusters of galaxies.

Observations show the distribution of the large-scale structure to be far more consistent with the idea that galaxies form first: many of the earliest galaxies we observe are smaller, and appear more blobby, than present-day galaxies, and the results from cosmological simulations strongly confirm the 'bottom-up' scenario. These simulations are run under assumptions about the nature of the dark matter, which can be taken to be cold, warm, hot, or a mix of these possibilities.


Hot dark matter models predict much more structure on the largest scales than is seen. The computer simulations (including that of the Illustris collaboration already mentioned) that are successful in matching the present-day Universe demonstrate that structures can form under gravity rapidly and effectively enough only if the dark matter is 'cold'. The first structures in the early Universe would have a mass of around a million solar masses, and these protogalactic fragments merge together to form today's galaxies, while larger-scale features take longer to develop by further collapse. So, along with its behaviour with regard to the four forces, we can add to our wish-list that any dark matter candidate has to move slowly. No such stable, heavy particles that interact only weakly with other matter are currently known.



An alternative possibility which naturally arises out of the theory of quantum chromodynamics (which you may be relieved to know that I'm not going to go into here!) is the axion. Like all sensible dark matter candidates, axions are just about impossible to detect directly. Specific detection experiments such as ADMX (the Axion Dark Matter eXperiment) are being developed, which aim instead to observe the photons into which axions can be converted within a strong magnetic field. Just this week (Oct 2014) an exciting preliminary result was announced by researchers analysing 15 years' worth of data taken with the XMM-Newton X-ray satellite. A slightly higher intensity of X-radiation was recorded whenever the telescope observed the boundary of the Earth's magnetic field that faces towards the Sun. This is at odds with the uniform glow of the X-ray background light seen everywhere else in the sky, and has so far defied traditional explanation. A tentative idea is that the X-rays could instead be created when axions produced at the core of the Sun crash into the Earth's magnetic field. If this is eventually confirmed as a bona fide detection of axions, then they would have a mass of only about a hundred billionth that of an electron.



The remaining non-baryonic candidate for dark matter is the one that is taken most seriously by astronomers, and it moves us from the realm of the MACHOs to the even more exotic world of WIMPs (Weakly Interacting Massive Particles) – and even further into the wilds of speculation.  


The driving idea is that such particles could have been produced in the tiniest fraction of time after the Big Bang, at the very highest energies. And while they might have played an important role in reactions occurring during the very first moments of history, any WIMPs would have rapidly decoupled from the ordinary matter and radiation, persisting to today only as some kind of unreactive left-overs. The candidates for WIMPs are thus often considered in the context of theories attempting to extend the standard model to unify the forces, such as supersymmetry (SUSY). SUSY uses a framework of mathematical symmetries to predict that every particle we know about has a heavier 'supersymmetric' partner. These would exist at the highest energies, with many annihilating each other early on. Most would decay through a chain of reactions until they transmute into lighter particles stable enough to survive to the present day, typically expected to have masses between a few tens and a few tens of thousands of times that of a proton; examples of possible WIMPs mooted include neutralinos or winos.


The challenge presented by such candidates concerns both astronomers and particle physicists. First we have to determine whether these supersymmetric particles actually exist; then we have to determine whether they are present in sufficient amounts, and in the right places, to account for dark matter; and finally whether they really are stable enough to hang around for 13.8 billion years!



By their very definition, WIMPs are almost impossible to detect directly; remember, they only interact with anything else through either gravity or the weak force, which means they will happily travel through everything, including the Earth and any detectors we try to build! Many of the experiments therefore focus on searching for indirect evidence of their existence. Either they seek the expected by-products resulting from WIMP annihilations (whether from WIMPs generated in particle physics experiments or of cosmic origin), or they look for evidence of the incredibly rare interactions between WIMPs and particles of ordinary matter.


WIMPs should be created in high-energy subatomic collisions such as those undertaken in particle accelerators like the Large Hadron Collider (LHC) at CERN. Protons are accelerated to enormous speeds (and thus energies) around a 27-km-long underground ring at the French-Swiss border on the outskirts of Geneva, and then smashed together, the whole experiment attempting to recreate particle reactions in extreme physical conditions, such as those that prevailed in the early Universe. Any WIMPs produced in a collision won't themselves react directly with the detectors. Observers can only infer that they have been created in a reaction if a discrepancy is discovered between the amount of energy and momentum carried by the detectable particles into and out of the smash. These quantities are always conserved, so a shortfall implies that some of the energy and momentum have been stolen away by an invisible WIMP. So far the collisions in particle accelerators have not produced any evidence for supersymmetric particles, but the limits from the lack of signal are not restrictive, and only rule out the less massive possibilities. The LHC is currently being upgraded, and is due to resume operations in March 2015, when it will run higher-energy collisions that hold the possibility of creating heavier particles. If those results aren't conclusive, then we shall have to await future developments in yet more energetic particle-collider facilities.
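The bookkeeping behind this inference is simple vector arithmetic: the colliding protons carry no net momentum transverse to the beam, so the transverse momenta of all visible particles must sum to zero, and any imbalance is attributed to something invisible. A toy illustration (the particle list is invented for the example, not real LHC data):

```python
import math

def missing_transverse_momentum(visible):
    """Given the (px, py) transverse momenta of all detected particles
    (in GeV), return the magnitude and direction of the momentum that
    must have been carried off invisibly for the event to balance."""
    px = -sum(p[0] for p in visible)   # whatever is needed to cancel
    py = -sum(p[1] for p in visible)   # the visible sum to zero
    return math.hypot(px, py), math.atan2(py, px)

# Invented event: three visible particles whose transverse momenta
# do not balance, hinting at an invisible escapee.
event = [(55.0, 10.0), (-20.0, 30.0), (-5.0, -15.0)]
met, phi = missing_transverse_momentum(event)
print(f"missing transverse momentum: {met:.1f} GeV at phi = {phi:.2f} rad")
```

Only the transverse components are used because much of the longitudinal momentum escapes undetected down the beam pipe, making the balance along the beam direction impossible to check.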


Ideally we want not just to discover dark matter particles, but to prove they are of cosmic origin. As the particles themselves are so elusive, an alternative approach is to try to detect the secondary products created as dark matter particles decay, or annihilate each other. Much of this action will have happened long ago in the early Universe, but some annihilations might still be occurring in places where the particles are particularly numerous, such as the far outer halo of the Galaxy, or its central core. The experiments are thus trying to detect the end-products of these processes, such as photons, anti-particles and energetic neutrinos, and then to associate any detected excesses with directions on the sky where we might expect high concentrations of dark matter. Exactly which by-products result, and in what quantity, depends on assumptions about the dark matter itself.


One of the predicted products of dark matter annihilation is high-energy photons known as gamma-rays. These do not penetrate the atmosphere, and are best detected directly from space by satellite missions such as NASA's Fermi telescope, or ESA's INTEGRAL (International Gamma-Ray Astrophysics Laboratory). A gamma-ray excess across a wide spread of energies has recently been confirmed by the Fermi satellite, associated with the centre of the Galaxy, where some models predict a concentration of dark matter. There was a particularly exciting initial report of an excess of gamma-rays at one particular energy which, if true, could reveal the mass of the WIMP responsible for producing that radiation; however, the detection has decreased in significance with subsequent observation. The general gamma-ray excess could well be due to collisions and annihilations between WIMPs, but there are many other possible sources of gamma-rays in this direction on the sky (for example pulsars or supernova remnants) that might confuse and contaminate the detections. A different kind of analysis of the gamma-ray excess now most likely awaits the international Cherenkov Telescope Array, a future ground-based gamma-ray detector that should be sensitive to photons with the energies expected from the very highest WIMP masses.


Other possible end products of WIMP annihilations include energetic particles such as pairs of electrons and positrons (anti-electrons) that could be amongst those that bombard the top of Earth's atmosphere from outer space as 'cosmic rays'. Electrons are common, but positrons are very rare; thus a large flux of positrons could suggest high levels of dark matter annihilation. Cosmic rays are best studied before they enter the atmosphere, and the first indications of an excess of positrons were found by detectors carried aloft on balloons, subsequently supported by data collected by the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) satellite experiment, as well as by the Fermi satellite, and most recently by the first results from the Alpha Magnetic Spectrometer (AMS-02). AMS-02 is a 7-tonne particle physics detector mounted as an external module on the International Space Station, and it measures the composition and flux of around 1,000 incoming cosmic rays per second. All three experiments report an excess of positrons in cosmic rays over what would otherwise be expected, which could indeed be due to dark matter particle annihilations. The proportion of positrons relative to electrons increases with energy in a way that conforms to some of the theory, although the actual ratio is not consistent with standard SUSY WIMP candidates without some modification of the theory. In addition, there is no association of the positron excess with any preferred location on the sky. This is not surprising, given that positrons are electrically charged and so suffer electromagnetic interference en route, scrambling the direction they appear to arrive from. We thus have no confirmation that these cosmic rays come from dark matter, and there are other consistent (and still astrophysical) explanations for the excess, such as energetic winds from pulsars, which are copious emitters of electrons and positrons.

The cosmic-ray experiments will continue: AMS-02 is expected to operate for about 10 years and collect billions more cosmic rays, and the statistical results will only improve with time.


Neutrinos are electrically neutral, and so follow a straight path towards us, unaffected by magnetic fields; the location of any detected excess relates directly to its direction of origin on the sky. One experiment looking for neutrino excesses is IceCube at the South Pole, where thousands of optical sensors are buried deep under the Antarctic ice. The experiment has been running since 2011, and in that time has detected 28 neutrinos thought to originate beyond the Solar System. While this holds great promise, it may be a few years yet before it has accumulated enough detections to produce a definitive result.


The final strategy used to search experimentally for evidence of dark matter candidates is to look for the incredibly rare interactions between dark particles and ordinary matter. We expect that dark matter particles rain through the Earth all the time, at a rate of thousands of WIMPs passing through every square centimetre each second. Occasionally one will undergo a head-on smash with the atomic nucleus of ordinary baryonic material, but the probability of this happening is incredibly low: both the WIMP and an atomic nucleus are physically so small that they will far more often miss each other than meet, and the chance of a collision also depends on the mass of the WIMP. If a collision does happen, however, the WIMP scatters off the nucleus, which in turn recoils. The aim of the experiments is to detect the tiny amount of energy imparted to the nucleus in the collision, revealed either as a microscopic change in the amount of heat energy, or as the release of a flash of light in a process known as scintillation, where the recoil energy excites the atoms of the detector material. In both cases the strength of the signal depends on the amount of energy released in the collision.
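Just how tiny that energy deposit is follows from elastic two-body kinematics: the maximum recoil occurs in a head-on collision and depends on the reduced mass of the WIMP-nucleus pair. A sketch with assumed illustrative values (a 100 GeV WIMP, a xenon nucleus, and a typical galactic-halo speed of ~220 km/s, none taken from the lecture) shows why the detectors must be sensitive to energies of only tens of keV:

```python
# Maximum nuclear recoil energy in an elastic WIMP-nucleus collision:
#   E_max = 2 * mu^2 * v^2 / m_N,   where mu is the reduced mass.
# Masses are in GeV/c^2; the velocity is a fraction of the speed of light.

def max_recoil_kev(m_wimp_gev, m_nucleus_gev, v_over_c):
    mu = m_wimp_gev * m_nucleus_gev / (m_wimp_gev + m_nucleus_gev)
    e_max_gev = 2 * mu**2 * v_over_c**2 / m_nucleus_gev
    return e_max_gev * 1e6   # convert GeV -> keV

# Assumed values for illustration:
M_WIMP = 100.0        # GeV, a typical SUSY-motivated WIMP mass
M_XENON = 122.3       # GeV, roughly the mass of a xenon-131 nucleus
V = 220e3 / 2.998e8   # ~220 km/s halo speed as a fraction of c

print(f"max recoil energy ~ {max_recoil_kev(M_WIMP, M_XENON, V):.0f} keV")
```

For comparison, a single visible-light photon carries only a few eV, so a recoil of tens of keV, while minute in everyday terms, is just within reach of a sufficiently quiet detector.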


The experiments thus consist of sensors that patiently observe a 'detector' consisting of vast vats of very pure ordinary material, waiting to catch the tiny energy release that signals a collision. The rate of collision events is expected to be fewer than one a year per 10 kg of ordinary material, so the likelihood of recording a collision increases with the amount of ordinary material used to net the dark matter, and with how long you can monitor it. The type of material used as the trap is also crucial: choosing the most appropriate involves balancing the amount of energy expected to be released (which affects how easy a collision is to detect) against the expected frequency of collisions.

Experiments that track heat release (such as CDMS, CRESST, EDELWEISS and EURECA) have to be run under extremely cold, cryogenic conditions; they use germanium or silicon in the detector. Experiments using highly sensitive photo-receptors to catch the faint flash of light from scintillation (such as ZEPLIN, XENON, DEAP, ArDM, WARP, DarkSide and LUX) use liquids of elements with heavier nuclei, such as xenon or argon, as the trap. All the experiments are susceptible to contamination from background radiation, and are best located deep beneath the ground so that they are shielded by the rock layers overhead. Sometimes even that is insufficient protection, as natural radioactivity from the surrounding rock, and even from the instrumental equipment itself, can produce false signals; it can then be necessary to treat the outer regions of the detector material as an extra shield, and trust only signals detected in the very core.


Currently the most sensitive detector of this kind is the LUX (Large Underground Xenon) experiment, buried a mile underground in the Homestake Mine in South Dakota. It is a light-scintillation experiment able to detect the release of a single photon within 370 kg of ultra-pure liquid xenon. In 2013 the first results, following three months of continuous operation, were announced: no definitive signal of WIMP collisions had been detected. However, it is still early days; the longer the xenon is monitored, the better the statistics and the scientific results that will emerge. Beyond that, a future upgrade in five years' time will increase the amount of xenon to a full 7 tonnes!


The same problem remains with direct detection as with indirect detection: even if we succeed in detecting the presence of WIMPs through particle collisions, we still need to establish their distribution around the sky to determine that they are of cosmic origin. One experiment seeking to do this is the Italian DAMA/LIBRA experiment, one of the underground scintillation experiments, which aims to detect an expected annual change in the rate of detection events. Because the Sun orbits the centre of the Milky Way, moving through the Galaxy's halo of dark matter, there should be a steady 'wind' of dark matter particles streaming past us. The direction of this wind remains pretty constant relative to the Sun's motion around the centre of the Galaxy, but its apparent direction at the detector changes as the Earth moves in its yearly orbit around the Sun. Our motion relative to this wind should produce a variation in the rate of detection events that follows an annual cycle, with about 5-7% more detections in one half of the year than in the other.

This is analogous to the way a cyclist riding around a circular track alternates between experiencing a head wind and a tail wind as they move into and away from a steady breeze.
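The expected signature is a small cosine modulation superimposed on a constant rate. A minimal sketch of this, with an assumed 6% modulation amplitude and the peak placed in early June, when the Earth's orbital velocity adds most directly to the Sun's motion through the halo (both illustrative choices, not DAMA/LIBRA's fitted values):

```python
import math

def expected_rate(day_of_year, mean_rate=1.0, amplitude=0.06, peak_day=152):
    """Annually modulated detection rate (arbitrary units).

    Assumed illustrative parameters: ~6% modulation peaking around
    2 June (day ~152), when the Earth's orbital motion adds to the
    Sun's velocity through the dark matter halo.
    """
    phase = 2 * math.pi * (day_of_year - peak_day) / 365.25
    return mean_rate * (1 + amplitude * math.cos(phase))

for day, label in [(152, "early June (peak)"), (335, "early December (trough)")]:
    print(f"{label}: rate = {expected_rate(day):.3f}")
```

The experimental task is then to show that the measured count rate, binned over many years, fits this cosine with the right period and phase rather than any seasonal background.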

The scientific team does seem to detect this modulation at high significance over 13 years of accumulated data, i.e. a different count rate during the summer months than during the winter. If due to WIMPs, the particles responsible would have a mass some 10 times that of a proton. This sounds promising, but many scientists are reluctant to draw firm conclusions, as the DAMA/LIBRA results seem incompatible with limits from a range of other experiments, such as LUX, that use completely different detectors and set-ups. The situation is confused: maybe the modulation is real, but caused by a background signal, such as contaminating radioactive material in the immediate surroundings of the experiment whose presence or activity depends on seasonal temperature variation.



There is still much to determine about dark matter. Experimental results remain at best tantalising, and even though detector sensitivity is improving, real progress may yet have to wait for the next generation of experiments. These facilities will not only have to detect and define the nature of the WIMP (if that really is the responsible agent), but also provide some directional information: for example, the direction of the recoil of an atomic nucleus following a collision in a detector reveals the origin of the incoming dark matter; and any modulations in the detection events should not just be seasonal (and ideally observed simultaneously from both hemispheres of the Earth), but should also follow the daily rotation of our planet through a steady headwind of dark matter. Most of all, we require the wide variety of experiments to provide consistent results; results arrived at assuming different physics, using different detectors and different tracers. Only then might we begin to believe that we finally understand the nature of dark matter.



© Professor Carolin Crawford, 2014

This event was on Wed, 22 Oct 2014


Professor Carolin Crawford

Professor of Astronomy

Outreach Officer at the Institute of Astronomy and Fellow of Emmanuel College, University of Cambridge, and Gresham Professor of Astronomy (2011–2015), Carolin Crawford is one...

