Comparing Britain against other countries:
Can we measure if and how we differ?
Professor Roger Jowell CBE
One of the things I do is to look at national differences throughout Europe, in the European Social Survey. We are trying to find out about value change and attitude change over the years in 34 countries, which is also very difficult. The real question is – and I need to try and answer that first – how valuable is it to do this work at all? The factors that make countries different, such as cultural differences, also make them extremely difficult to measure, so I shall also discuss those problems. The reason we bother is that, despite all those formidable difficulties, not all of which can be solved, it is actually important to compare differences and similarities across countries. We find out, if we can, how our country differs, in what respects, and whether one wants it to differ or not, because that is often within the power of governments. Sometimes, it is not within their power, because these differences are influenced by very strong, long-lasting cultural factors. Furthermore, it helps to challenge national stereotypes. There are many more myths that we hold about Britain than there are facts about ourselves. There is a very strong belief that Britain believes entirely in fairness and is an extremely fair society. Well, it may be true. It is certainly not one of the most unfair societies in the world, if you compare measures of fairness, tolerance and so on, but it certainly is not one of the very fairest either. I think you would need to go to the Scandinavian countries, or elsewhere, for that. Our sense of fair play always comes out, in virtually all conversations about Britain. Well, on all sorts of things, that does not happen to be as true as we think it is. You only have to listen to the Chancellor’s speech yesterday to see the kinds of references to Britain’s generosity to the poor and its traditional welfare state values and so on. 
However, you discover that, actually, we are not as impressive as I think we like to think we are in some respects, and in other respects, we are impressive. So, it is worth finding out, and the only way you can find out is to put Britain up against other countries to actually see how we fare and how they fare on different aspects of society. That is a great help to governance as well. There is no longer an excuse for governing in ignorance of what the rest of the world is doing. We have complete access to other countries’ information, through the web and through all sorts of other channels, and it is important to find out whether we have things right or whether we could do things better. That is particularly true in Europe of course, where we also have cross-national government, through the European Union, which needs to know about the particular characteristics and foibles of different countries. The one thing that one sees when one does see countries compared, for instance, on GDP, which is the favourite one, is that league tables are not enough. They describe one aspect, but they do not actually describe what is going on underneath. Finding out a country’s overall wealth says nothing about the distribution of wealth in that country and how that compares with other countries. For instance, we find that India is in the top nine countries in the world in terms of income, but its distribution also includes a very high proportion of extremely poor people. Also, Britain, America and other countries like that have very large numbers of very poor people in countries that are relatively rich. So, league tables between countries actually give rise to gee-whiz findings: “Look at that, isn’t Holland irreligious!” without any explanation of the historical and sociological factors that might explain why it is more secular than other countries. Ideally, one wants comparisons both between countries and across time. 
Countries change, and there are some changes which we call period changes, where all countries change together, and the world changes accordingly. The rise of the internet would be one such change. The increase in mobile phones is another. Technology changes are infectious in their effects, even in poor countries, in many respects. There are other countries that are much more resistant to change; they are encapsulated in their own particular governance arrangements. An example is China not allowing Google access for so many years. The fact is that one needs to make these comparisons often, like forecasting often, in order to actually be sure that you are still on the right track. So, as I say, if you are going to compare, compare regularly, and compare in the same way across different countries at different times. There are formidable obstacles to doing this, and I will just mention some of them. Some of them are technical, and some of them are, in their nature, cultural. The first one is the very fact that countries differ, often for good cultural, constitutional and institutional reasons. Countries differ in their governance arrangements, in their voting systems, in their economic arrangements, in their distributions of income, and therefore comparing one country with another on lots of factors means that you are often comparing apples with pears – you are not comparing things on the same basis. If you are trying, as a social scientist, to actually make comparisons that are valid between two different units, you have to try to neutralise your measuring instruments, so that you are measuring like with like. Institutional incompatibilities make that very hard. Ours is an attitude survey. We measure value and attitudinal change, and we ask some questions about the nature of democracy, people’s appreciation of it in their own country, how it has changed and so on. 
We suddenly realised, when we were piloting these things, that the word “democracy” had very different interpretations between the old communist countries of Eastern Europe and the traditional democracies of the West. When you ask people about democracy in the West, they think of things like civil liberties. When you ask about democracy in Eastern Europe, the populations view it as giving people the right to vote in free and fair elections. The people were answering questions about different things, and if you ask in Eastern Europe, the responses are usually positive, with people saying that it has hugely improved in recent years, because they are not necessarily thinking of civil liberties improving but thinking simply of the fact that elections now are a great deal fairer and more open than they used to be. So those are the difficulties about trying to measure across countries. Another problem is something I call the principle of equivalence. All social science measurements or comparisons are based on equivalent measurements. Equivalence between the samples that we are measuring – who is being measured, what is being measured – is necessary for good measurements. That principle of equivalence is breached regularly in cross-national surveys, simply by the nature of the institutions we are trying to measure. Balancing high standards and consistent methods is very hard. Countries have their own ways of measuring. We use survey measurements, where we have a biennial survey in each of the 34 countries. We go along and administer a questionnaire that lasts about an hour. We make sure that we have the same rigorous sampling standards in each country, so that, in each case, we have got a representative sample of the population, based on random procedures. We are insistent that questions get considered and reconsidered and tested and retested before they are incorporated into the questionnaire. 
So we go to a great deal of effort to try and ensure that our surveys are good science, but translation is another hurdle to cross. We do lots of testing in English, and we do it in some other countries too, but you cannot do it in 34 countries to the same degree. Therefore, once you begin translating these questions, or translating the sampling method that you have devised so lovingly, into other countries’ circumstances and cultures, they begin to go wrong, and one has to take all sorts of remedial action before one can get them right and get the measurement as a consistent, equivalent measurement. The higher your standards, the less likely it is that you are going to get consistent methods, because lots of countries do not have the same standards as Britain and Germany, for example, have in terms of social measurement. There are national differences in methodological capacity. The social scientists in other countries do not always have the same sorts of interest and the same sorts of capacity to do the work. Furthermore, there are national differences in methodological habits. Some countries do it one way, other countries do it another. Often, they are both legitimate methods, but they are not the same, and we are looking for similar methods. So we had an awful problem trying to persuade the Scandinavian countries that they should use face-to-face interviewing as their method of getting the information in. They said, “But we’ve left that to a large extent – we now use the telephone or the web to collect our data,” and you point out to them that, even in a country like Britain, which is a highly developed country in the context of the European nations, only about 55% of people have weekly access to the internet. So it would be inappropriate to use the internet as the only source – you have got to use the same method. 
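The coverage problem the speaker describes can be made concrete with a toy calculation. Only the roughly 55% weekly internet-access figure comes from the talk; the attitude scores below are invented purely for illustration, not ESS results:

```python
# Toy illustration of coverage bias in a web-only survey.
# The ~55% internet-access share is the figure quoted in the talk;
# the attitude scores are hypothetical, chosen only to show the mechanism.

internet_share = 0.55   # share of population with weekly internet access
mean_online = 6.2       # hypothetical mean attitude score (0-10) of those online
mean_offline = 4.8      # hypothetical mean score of those without access

# The true population mean is the coverage-weighted average of both groups.
true_mean = internet_share * mean_online + (1 - internet_share) * mean_offline

# A web-only survey can reach only the online group, so however large
# its sample, its expected estimate is simply the online group's mean.
web_only_estimate = mean_online

coverage_bias = web_only_estimate - true_mean
print(f"true mean:         {true_mean:.2f}")
print(f"web-only estimate: {web_only_estimate:.2f}")
print(f"coverage bias:     {coverage_bias:.2f}")
```

Whenever the uncovered group differs systematically from the covered one, no amount of extra sample size removes this bias, which is why a single shared mode of data collection matters for equivalence.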
So we insist on face-to-face interviewing, which is extremely unpopular, and they were saying, “We’re losing the ability to do that,” so we are looking at that again, over time. Overcoming the obstacles therefore means we must strive for equivalence in sampling methods. We need to obtain a representative sample and ensure that the modes of data collection are similar. We then have the almost impossible task of trying to translate words from English into other languages, so that they have the same weight and the same meaning. One of the most difficult parts of any kind of cross-national comparison is getting over that language barrier, because languages carry an awful lot of historical and cultural cues. English clichés do not work in other languages. There is a famous example of an American survey that used the phrase “out of sight, out of mind”, and it came back in translation as “blind and demented”! That is simply an extreme example of something that happens all the time, and it is very important to get across the same flavour that you are aiming at – the democracy example was a case in point. When we design the questionnaire, before it is translated, we give notes to translators as to what we mean by the words and phrases and concepts, whenever there might be any chance of ambiguity. So, for instance, on democracy, we said: “By democracy, we mean the existence of civil liberties in the country, like freedom of speech, freedom of association and so on, not the ability to vote in elections.” That means that when the Eastern Europeans, who still see the ability to vote in elections as the primary salient feature of democracy in their country, translate it, they use the annotated translation rather than what they might otherwise have put. That is the way we ensure equivalence, but language defies equivalence, to a large extent. I always wonder how the phrase “quite a lot” translates into Serbo-Croat or some other language! 
Are there equivalents of all these things? We try very carefully to find English words which have a precise meaning or resonance, and then you realise that there is no similar resonance in other countries. It means we have to meticulously document everything. I mentioned the notes on the questionnaires which explain what we mean by different concepts, but everything else has to be documented in the same way – our methods and so on. This is so that the user of the data, the ultimate user, can see that there is an obvious deviation of methodology in Serbia from what is being used in Russia, or whatever. They can then choose not to include Serbia in their analysis, or they might use one or other of them, or try to take it into account in different ways and annotate it accordingly. The other thing it involves is consultative design. The opposite has been dubbed “the safari approach”. The “safari approach” is one developed country, usually America, going into different countries with their scientists, plonking their tents there and saying, “You will ask these questions,” and then, dutifully, they get the data in and think it is all comparable. We get so much from consulting with the countries. We have big meetings of all the countries together, talking about the nature of what we are trying to do, but of course these meetings take place in one language. They take place in English because that is the official language and the most widely-spoken language, so some points may be missed in translation, even during our discussions, and this is a problem we cannot quite overcome. Very importantly, we include contextual information alongside the data we produce. This is an essential part of it all, because we need to know the GDP of each country, which can affect a lot of the answers. 
We want to know a great deal about the educational system and a great deal about various aspects of society, which have to be readily to hand for the person who is using the data and trying to understand the differences. Is the political interest of people simply a function of different educational levels between countries? The answer to that question is yes, to a large extent, it is. The higher your education, the more likely you are to be interested in politics, the more likely you are to participate in politics and to vote in elections and so on. So, unless you begin to know these factors about countries, you draw conclusions about differences between the people that are actually system differences between the countries, so these are very important background variables. I want to use the European Social Survey, which is the one I run, as an example of what we are doing and how it happened, because it is relatively new – only about 10 years old. There was a growing recognition in Europe, expressed through the research councils and national academies of countries, and via the European Science Foundation, which is the federal body, that public attitudes mattered a great deal and were mattering more in the modern world, and that they were very poorly measured in Europe. There were some good examples, but they were few and far between, and it was very important to try to get rigorous data to measure how similar we were to one another and at what rate we were changing in our values and in our attitudes. That was the origin of the European Social Survey. It was a recognition, by various countries simultaneously, that this was a great absence in our ability to analyse ourselves in relation to one another. The aim was to tap climate change in attitudes and value formation, not to look at mere changes in the weather. 
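The point about background variables can be sketched numerically. In the toy example below (all figures hypothetical, not ESS data), two countries have an identical education-to-political-interest gradient at the individual level, yet their overall averages differ purely because their education distributions differ – exactly the kind of "system difference" that would be misread as a difference between the peoples:

```python
# Toy illustration of why contextual variables matter.
# Hypothetical mean political interest (0-10) by education level,
# assumed identical for individuals in both countries.
interest_by_education = {"low": 3.0, "medium": 5.0, "high": 7.0}

# Hypothetical shares of each education level in two countries:
# Country B simply has fewer highly educated people than Country A.
education_shares = {
    "Country A": {"low": 0.2, "medium": 0.4, "high": 0.4},
    "Country B": {"low": 0.5, "medium": 0.4, "high": 0.1},
}

def overall_interest(shares):
    """Population mean interest: share-weighted average over education levels."""
    return sum(share * interest_by_education[level]
               for level, share in shares.items())

for country, shares in education_shares.items():
    print(f"{country}: mean political interest = {overall_interest(shares):.1f}")
```

A naive league table would report Country B as markedly less interested in politics, even though like-for-like individuals are identical; only with the educational composition to hand can the user of the data see that the gap is a system difference.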
The opinion polls were already monitoring who likes whom as Prime Minister, from day to day, which parties people support, and what their values are, from one minute to the next. There was a huge shortage of comparative material, because all the opinion polls differ in their methodologies and so on, but what was needed was a rigorous, scientific approach to actually trying to chart change over time, in many countries simultaneously, and that is what we were asked to do. We avoid, like the plague, any topical issues and concentrate merely on big changes in values and so on – underlying values rather than top-of-the-mind attitudes about whether you were in favour of the Conservatives or Labour yesterday or today. It started in 2001. 34 countries now participate biennially in this survey, although a lot of the work is done in the non-survey years in preparation, and the aim is to contribute both to scholarship and to governance across Europe, and that does not just come from the results. It also extends from the methodology, because a rigorous methodology that was used equivalently by many countries throughout Europe was what was missing. It includes a large training component, which is heavily oversubscribed, and a large methodological component. We discuss methodological tests of one sort or another and how best to capture the sorts of information that we want to capture. We cover virtually the whole terrain of attitude clusters, subjects and value changes that can be measured. We also examine a great deal of contextual information, like education and occupation details, financial circumstances of the country and of the respondents, household circumstances, demographic composition, and so on. This covers a great number of things on both political and sociological aspects. The demand for that sort of data is shown by the fact that we already have 34,000 registered data users. 
This has been taken up in huge volume by academics, students, think tanks and government people in lots of different countries, not only in Europe but in America too. I think virtually all the PhDs in the last few years in social science have been using the ESS data. We have an online bibliography of publications that have used the ESS data as their basis, and it contains 236 journal articles at present, about 36 books, and about 90 chapters in books, so far. It is changing, literally, daily. The data are increasingly deployed in policy debates – again, not just in Britain and Germany but throughout Europe, and a great deal within the European Commission, for instance, where the data are simply brought into policy. We cannot claim, and I do not want to claim, that the data have changed any actual policies. That is not what they are meant to do. They are supposed to inform policy rather than change policy. However, what one can say is that, in several cases, the data we are producing are changing the atmosphere of the debate, and that is as much as we should be hoping for. As I said, the training courses are oversubscribed, and we are having a very profound influence on methods of comparative measurement well beyond Europe. We are holding seminars all the time in America, and running joint tests with American scholars. The survey has won various awards, and the latest thing, which is very important, is that it has become one of the potential new European research infrastructures. This is a new scheme that the European Commission has developed. It picks major scientific projects in Europe that require long-term support and funding, and we have been shortlisted as one of those and are now preparing to change into that kind of structure, where we will be under the aegis, if you like, of a range of research councils or governments throughout Europe who are interested in pursuing independent academic research on this subject. 
The central coordinating team is based at City University, and we have other institutions in other countries that we work with. It is a very wide and disparate group of countries that are involved, from very small countries to very large ones. Nearly all of them speak a different language, and that makes it all very complicated and absolutely fascinating. I am going to give some summary findings. We measure what is becoming, and has been identified as, a very important part of democratic societies: how much we trust our institutions of government. We only have to look back a year, to the terrible scandal of MPs’ expenses, to wonder what that was doing to people’s trust in Parliament, and what the effect of lower trust in Parliament will actually be. We need to trust the institutions that govern us. Lack of trust and lack of confidence in these institutions has led to revolutions in the past in lots of countries, although I am not suggesting for one minute that one is due in Britain. We have chosen a few institutions to look at the extent to which the public has confidence in them. We began by looking at the Police, and the spider graph shows the level of trust in the Police in 34 different countries. At the very outside of the curve is a score of 8 points out of 10 – 10 being the highest, total trust more or less – while 0, no trust at all, lies nearer the middle of the diagram. The countries that are skirting the outside of the diagram, with a lot of trust, are the Scandinavian countries. You will find that in virtually all of the distributions, on all of the factors that we measure, the Scandinavians always come out best. We have Finland, Norway, Sweden, Denmark and Iceland, all with very considerable trust in their government. The inside is brought up largely by the countries that were formerly part of the Communist bloc, and that came out very strongly. 
It is not entirely the case, but what was interesting was that it was very largely the case. When I first saw that distribution, I thought how interesting it is that the countries are so similar in their levels of trust, but then I realised that they can only score up to 10 out of 10. What we find is that there are very few countries in the very lower reaches, and all of those are Eastern European countries. Then we superimpose on that trust in the legal system. That is the trust people have in their system of fairness – the judicial system, the courts and the independence of judges. What is fascinating is the very similar shape of those two distributions. It is always interesting that the legal system is always a little less trusted than the Police. The Police come out top. The legal system comes out next. The relationship is very much in the same order as before, and that is not surprising, as there is a relationship between the Police and the legal system. However, the degree of similarity is certainly surprising. Then we superimpose upon that trust in national parliaments. They were all, or virtually all, even lower. So the Police are the most trusted, followed by the legal system, which is followed by national parliaments. Once again, the old Eastern European countries are in the lower parts of that diagram and the Scandinavian countries are near the top. There is a consistent pattern in all these questions of trust in the authorities. Finally, I superimpose trust in politicians, which is lower still! That was taken, in Britain, incidentally, before the expenses crisis, so God knows where it is now – it is probably down near Estonia or Bulgaria. It is fascinating that the shape of those distributions is so consistently the same, over a lot of different countries. The countries are always in much the same position relative to one another. 
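The pattern behind those superimposed spider graphs – trust declining steadily from the Police to the legal system to Parliament to politicians, in country after country – can be sketched as a simple check on summary scores. The figures below are invented for illustration only; the real values come from the ESS data files:

```python
# Sketch of the summary behind the spider graphs: mean trust scores on
# a 0-10 scale (0 = no trust, 10 = complete trust) per country and
# institution. All numbers are hypothetical, chosen only to mirror the
# pattern described in the talk, not actual ESS results.

trust_scores = {
    # country: (police, legal system, parliament, politicians)
    "Denmark":  (8.0, 7.5, 6.5, 5.5),
    "Britain":  (6.0, 5.3, 4.4, 3.5),
    "Bulgaria": (3.9, 3.0, 2.2, 1.8),
}

institutions = ("police", "legal system", "parliament", "politicians")

for country, scores in trust_scores.items():
    # The pattern from the talk: trust declines monotonically from the
    # police down to politicians within each country.
    declining = all(a > b for a, b in zip(scores, scores[1:]))
    print(f"{country:8s} {scores} declining order: {declining}")
```

The striking finding in the talk is not any single score but that this same ordering, and much the same relative position of each country, recurs across all four institutions.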
So that is, in many ways, the fascinating thing about looking at international comparisons of data. It shows how similar we are, in lots of respects, and where there are differences, how consistent the differences tend to be. The other aspect we are looking at is attitudes to migration. We have been measuring that over the last 10 years, and what is most interesting about it, I suppose, is the association of xenophobia with economic downturns. You only have to see that now, with the recent recession in Europe being followed by parties of the right on migration in Holland and Sweden coming into power, or making deals, often because of coalition politics, with others. Therefore, they are influencing the migration policies of those countries. When the economic crisis is over, you can expect that kind of xenophobia to recede. We looked at that very carefully, because the question is whether the xenophobia is simply linked to fears of high unemployment and a competitive labour market – a fear of more people – or whether it is a fear of foreigners as such. The answer is, once again, that education matters, and that happens in so many of our distributions. It is one of the big factors, I am glad to say, that actually affects human values. Education matters a great deal here, because what you find is that it is largely the uneducated who are the most xenophobic, and that, again, is something that is extremely similar across all European countries. It is stronger in some countries than in others. If you were making internal comparisons, it is nearly always true that the more education you have, the less xenophobia you tend to suffer from and the greater sympathy you have towards cultural diversity. That is true of all newcomers. 
The educated are also more tolerant and more sympathetic towards the migration to their country of people who are going to compete with them. So, professional classes, for instance, in all the countries, are much more sympathetic to immigration, even of other professionals who may compete with them. So it is not simply a question of labour market competition; it is also a system of norms – different norms that different levels of education bring. So I want to just summarise and then sit down, and I would be interested, of course, in your criticisms and questions. The first thing I want to say is that precise cross-national comparisons are always elusive. We are producing some; we have more confidence in some than in others, but we do not have total confidence in any. It is almost impossible to achieve that. Anybody who does have total confidence in these sorts of data should, I think, be highly suspect. There is still considerable scope for improvement. We have been charged with trying to improve the position, and I think we have, without being stupidly modest. It is true that we have done quite a lot to improve cross-national measurements, and I think that they will be better as a result of this project having existed, but there is still a huge amount of ground to cover, and some of it will never be covered entirely. However, we have to carry on and get good comparisons between countries on these sorts of issues, for the sake both of good science and of better governance. I just want to ask myself the question I asked at the start – have we discovered a set of quintessentially British values from all this, from having compared Britain against all these other countries? The answer is: decidedly not. Britain tends to be about in the middle of all those distributions, on virtually every subject – on tolerance, migration, institutional trust. 
You look at these diagrams, and when you want to find the nicest people in Europe, you look at the Scandinavian countries, and if you want to find the average kind of country in Europe, look for Britain in the distributions. It is always somewhere near the median of every distribution. Sorry to give you bad news!
©Professor Roger Jowell, Gresham College 2010