
Ranking: right or wrong?
Some problems in comparing national statistical offices and systems

Willem de Vries


At the 47th plenary session of the Conference of European Statisticians at Neuchâtel, Switzerland, in June 1999, the topic for substantive discussion was ‘Performance indicators’. A couple of the papers discussed in this regard generated some excitement. One in particular, an invited paper by Robert Chote, Economics Editor of the Financial Times (Chote, 1999), was rather furiously attacked by several participants, some of whom even requested that it be excluded from the official proceedings of the conference. My own paper (De Vries, 1999) for the conference was less controversial, except for a small table I showed, which was not part of the paper itself, comparing statistical systems on costs. In particular, some eminent statisticians, whom I greatly admire, said that ‘benchmarking on costs’ was nonsense. Admittedly, it is a very difficult exercise, and in fact my table was taken from a report that is mostly about precisely these difficulties. The purpose of this article, therefore, is to explain how the table came about and what it may or may not be worth.

2. Risks of ranking

When the newspaper The Economist published rankings (or league tables) of national statistical offices some years ago, quite a few senior statisticians were upset. Although there was little public discussion about the method The Economist had used, there was broad agreement that the assessment had been superficial and that the results were debatable, to say the least. I will not go into the technicalities of The Economist’s ranking approach, except to say that a combination of ‘objective’ criteria and judgement by peers was used.

Robert Chote’s ranking was based on assessing the performance of national statistical systems against the IMF Special Data Dissemination Standard (SDDS), a voluntary standard now agreed to by about 50 countries. Its purpose is to make the quality of official statistics more transparent to users. The main dimensions of the SDDS are:
- data characteristics (coverage, periodicity, timeliness);
- access by the public (advance dissemination of release calendars, simultaneous release to all interested parties);
- integrity and transparency (government access to data before public release, government comments on data, revisions etc.);
- quality (methodology, sources, possibilities to crosscheck).

Chote’s ranking results were based on a kind of weighted aggregate score on five main dimensions (coverage, periodicity, timeliness, release calendar and the presence of a hyperlink allowing users to move directly from the IMF bulletin board to national statistical websites). He himself admits that the approach has its flaws: ‘The table at the end of this paper illustrates just one way in which this (ranking) might be done ... This framework is open to all the same criticisms of incompleteness and inappropriateness that was levelled at The Economist ... But I am the first to concede that it would have to be a rather more rigorous and carefully thought out exercise than the back-of-an-envelope attempt I have made here’.
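Chote’s actual weights and scoring rules are not reproduced in this article, so the dimension weights and scores below are purely hypothetical; the sketch merely illustrates the mechanics of a weighted aggregate score of this kind, and why its ‘composite’ nature invites criticism (the ordering can change entirely with a different choice of weights).

```python
def composite_score(scores, weights):
    """Weighted aggregate of per-dimension scores (each on a 0-1 scale)."""
    return sum(weights[d] * scores[d] for d in weights)

# Hypothetical weights over Chote's five dimensions (not his actual weights).
WEIGHTS = {"coverage": 0.25, "periodicity": 0.25, "timeliness": 0.25,
           "release_calendar": 0.15, "hyperlink": 0.10}

# An invented country: strong on the data dimensions, but no hyperlink
# from the IMF bulletin board to its national website.
example = {"coverage": 0.9, "periodicity": 0.8, "timeliness": 0.7,
           "release_calendar": 1.0, "hyperlink": 0.0}

print(round(composite_score(example, WEIGHTS), 3))
```

With these invented figures, the missing hyperlink alone costs the country a tenth of the maximum score, which is exactly the effect the Netherlands noticed at 20th place.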

In spite of all these reservations from the author, the table came as a shock. Latvia headed the table, followed by Canada, Slovenia, Peru, the United Kingdom and Japan. Well-respected statistical offices such as those of Switzerland (27th), Australia (30th) and Iceland (40th) were shown as lagging far behind. The Netherlands found itself in 20th place, although if we took more care of our hyperlinks, we could get into the top ten. The ‘composite’ nature of the total score, in particular, was one of the main points of criticism raised against the list.

One may question the significance of statistical ranking exercises as a matter of principle, but the fact of the matter is that statistics are used for ranking all the time. And rarely does this seem to shock the statisticians very much, even when the ranking is based on highly composite and/or relatively ‘soft’ measures, such as per capita GDP, research and development expenditure as a percentage of GDP, illiteracy rates, or poverty and development indicators etc. (1) So it seems curiously paradoxical that while statisticians can live with many kinds of rankings, they get jumpy when they themselves are the subjects of such an exercise. And a second paradox, which I shall discuss further below, is that official statisticians are supposed to be good at measuring nearly anything of relevance to society, but are definitely not very good at measuring their own activities and performance, at least not in my experience.

3. Why performance measurement?

Government budgets are under continuous pressure in most countries. There is a general tendency towards reducing government spending and therefore the spending on official statistics is also regularly scrutinised. Statistical offices are often relatively large and costly operations, and, as everyone knows, continuous technological and other developments make it easier to compile statistics more efficiently. So governments are likely to ask us, from time to time: can’t you do it cheaper? And in asking this question, it is practically inevitable that they also ask: and how do you compare with similar institutions in other countries? (2)

In exactly such a scenario, the Dutch government and parliament recently asked us to carry out a summary comparison exercise. The remainder of this article presents our answer to this request. It should be noted that the results have little to do with ‘ranking’ in the proper sense, although the order of countries in the small comparison table of the document may be construed as such (from ‘expensive’ to ‘less expensive’ statistical systems).

4. Comparing statistical offices: the difficulties

The question was whether Statistics Netherlands is big and costly, compared with similar organisations in other countries. It is difficult to answer this question with any degree of precision as only a very global comparison is possible. The following reasons make comparisons of this kind rather problematic:

- The general organisational structure of statistics (centralised or decentralised);
- Coverage of the statistical work programme, i.e. which subject matter areas are covered (the work programme of Statistics Netherlands, for example, covers a very wide range of subject matter areas);
- The size of the country (there are certain economies of scale; on the other hand it should be recognised that large countries often have some sort of regionalised administrative and statistical structure, which may imply inefficiencies);
- Administrative and legal infrastructure (which may be relevant for the possibilities to use registers for statistical purposes);
- Special responsibilities that some national statistical offices may have (e.g. economic analysis).

Some explanations of concepts

- What is meant by ‘central’ and ‘decentral’? In some countries, for example the Netherlands, Sweden, Australia and Canada, the production of national statistics is the responsibility of one single organisation, regardless of how many ‘offices’ it may have. In other countries, however, there is some kind of decentralisation, either regionally - often, but not always, in combination with a ‘federal’ administrative structure - or departmentally, i.e. ministries producing statistics for their own policy areas, or a combination of various forms of decentralisation. Pure forms of ‘central’ and ‘decentral’ are relatively rare, but it is widely believed that any form of decentralisation implies certain negative efficiency effects.

- A factor that is difficult to measure is the role and significance of statistics in individual countries. To what extent are political and administrative decisions (e.g. the financing of regional and local authorities, budget policies, wage bargaining etc.) based on statistics? Many countries apparently have forms of ‘formula use’ of statistics, which give statistics some extra weight. This is certainly true in countries with some kind of federal structure, but it also applies to the European Union. It is hard to say what effect this phenomenon has on statistical expenditure. The countries of the European Union included in the comparison in this article all have to comply with the so-called acquis communautaire (for Statistics Netherlands, 70% of the work programme is covered by European regulations and directives), and are therefore in a more or less comparable situation.

- As to ‘administrative and legal infrastructure’, two questions are important: first, to what extent registrations exist (in the Anglo-Saxon countries, for instance, there are no population registers); and secondly, whether there are legal and other arrangements that enable statistical offices to use these registrations for statistical purposes. In Scandinavia these arrangements are very well developed. If there are no registrations, or if they may not be used for statistics, other - relatively more expensive - forms of data collection are necessary: population censuses, for example.

The comparison presented here deals with some larger and medium-sized, economically developed countries, which have a statistical system that is generally considered to be good or adequate (e.g. according to the league table developed by The Economist in 1993, which ranked Canada, Australia and the Netherlands as the top three). Besides the Netherlands, eight countries were compared, six in Europe: Sweden, Finland, Denmark, France, Germany, and the United Kingdom, and two outside: Canada and Australia.

Brief outline of the statistical systems involved

Germany, France and the United Kingdom have fairly decentralised statistical systems (although the United Kingdom has recently taken measures towards more centralisation).

The Netherlands, Sweden, Finland, Denmark, Canada and Australia have a centralised system, though some of these countries have more than one statistical ‘office’, either offices per state or province (Australia, Canada), or two locations (Sweden, the Netherlands).

In Germany there is a Statistisches Bundesamt in Wiesbaden, roughly 500 km from the new government centre Berlin. The former fairly large branch office in Berlin recently moved to Bonn, the former government centre, as a compensation for the ministries that are being relocated from Bonn to Berlin. However, most of the data collection and dissemination is done by the statistical offices of the Länder.

France has both regional and departmental decentralisation. Apart from the central office INSEE (Institut national de la statistique et des études économiques) in Paris, the ministries have their own statistical departments, which are loosely connected with INSEE; it is INSEE, however, that appoints their senior statistical managers. In addition, INSEE has some dozens of regional offices for data collection and dissemination.

The United Kingdom traditionally had a small central statistical office, set up by Winston Churchill during World War II, but most statistics used to be produced by ministries. Co-ordination and confidence problems in the nineties led to centralisation of the most important economic and social statistics in one office: the Office for National Statistics (ONS). Part of ONS is located in London, but there are some other offices as well, including one in Newport, Wales, roughly 250 km from London.

Statistics Canada has its central office in Ottawa, but there are branch offices in each of the Canadian provinces as well.

In Australia the central office of the Australian Bureau of Statistics (ABS) is located in Canberra, but in addition there is an ABS office in each of the states. These regional offices do some specific statistical work for the government of their state, as well as some data collection and dissemination for the ABS, but they are also responsible for a certain part of the national statistical programme, for example business registers and service statistics (Melbourne), mining (Adelaide), or the financial sector (Sydney).

Sweden (Stockholm/Örebro) and the Netherlands (Voorburg/Heerlen) each have one statistical office with two locations (in both cases about 200 km apart). Norway and Ireland have a similar structure. Finland and Denmark, lastly, have one organisation and one location.

5. Comparison of some indicators

The table below compares some indicators for the nine countries mentioned: population, the number of ‘official statisticians’, the increase or decrease in this number over a certain period, and government spending on official statistics relative to GDP (1998).

Explanation and commentary

In the table, expenditure as a percentage of GDP is based on gross budgets. The share of ‘own’ income that statistical offices may have from sales of products or specially financed projects varies, as a rule, from 10 to 20 per cent of the overall budget; in the case of Statistics Netherlands it is about 10 per cent.
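The two headline indicators in the table, statisticians per million inhabitants and expenditure as a percentage of GDP (based on gross budgets), are simple ratios. The sketch below shows how they are computed; the population, staff and budget figures are invented placeholders, not the article’s data.

```python
def staff_per_million(staff, population):
    """Official statisticians per million inhabitants."""
    return staff / (population / 1_000_000)

def expenditure_share_of_gdp(gross_budget, gdp):
    """Statistical expenditure as a percentage of GDP, using gross budgets
    (i.e. before deducting any 'own' income from sales or projects)."""
    return 100 * gross_budget / gdp

# Hypothetical country: 16 million inhabitants, 2,400 official statisticians,
# a gross statistical budget of 180 million against a GDP of 400 billion.
print(round(staff_per_million(2_400, 16_000_000), 1))    # statisticians per million
print(round(expenditure_share_of_gdp(180e6, 400e9), 3))  # per cent of GDP
```

Note that using gross rather than net budgets matters: an office earning 20 per cent of its budget from contract work would look a fifth cheaper on a net basis.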

Germany: The increase in the number of statisticians between 1983 and 1998 is mainly a result of the re-unification of Germany, when a number of East-German statisticians were taken over by the Bundesamt.

United Kingdom: The recent history of British statistics has been turbulent. In the Thatcher period severe budget cuts were implemented, based on the philosophy that official statistics were to serve government interests only. This policy was later partly reversed. In addition many important statistical offices merged, making comparisons over time rather difficult. However, it would seem that official statistics in the UK are remarkably inexpensive.

Canada: The increase in staff between 1988 and 1998 partly reflects new statistical work to support the redistribution of VAT among some Canadian provinces, which involves some 700 staff. Excluding this effect, the number of statisticians per million inhabitants for Canada would be 237.

France: The numbers include the départements d’outre-mer. In addition to production of statistics, INSEE is also charged with economic analysis. It is difficult to say precisely how many staff are engaged in this work, but 200 would seem to be a fair estimate.

As for the Scandinavian countries: in Sweden, Finland and Denmark a substantial number of official statistics (80 to 90 per cent) are compiled on the basis of register information. In the Netherlands this part is estimated at 60 per cent. In Sweden, official statistics are financed in a rather unusual way: instead of a central budget, a substantial part of the program (40 to 50 per cent) is financed on the basis of ‘contracts’ between Statistics Sweden and other agencies. This makes it difficult to compare expenditure.

Some general conclusions

Compared with some other countries, the ratio of statisticians to inhabitants and of statistical expenditure to GDP, i.e. the cost level, of Dutch statistics is ‘average’. In some countries - which, by the way, have excellent statistical systems - statistics are clearly more expensive. (3)

Of all the countries in the comparison, the costs of Dutch statistics have been reduced the most over the last ten to fifteen years. Only Sweden and the United Kingdom have experienced similar developments.

6. Can we make better comparisons?

As I have mentioned before, I do not think statisticians are very good at and/or really interested in measuring themselves, and definitely not in a way that makes comparisons across countries easy. Over the last fifteen years or so I have been involved in various comparison exercises of this kind and most of them have been complete failures. In the eighties we tried to compare the costs of external trade statistics and the consumer price index between a few countries in Europe, and after some time the effort was aborted because it proved too difficult and too time-consuming.

Eurostat and a Eurostat working party have tried for many years now to make cost-benefit comparisons between EU national statistical offices and some of their specific products. Clearly, benefits of statistics are very difficult to measure, but even as far as cost is concerned, the results so far are practically nil, one of the main reasons being that the experts involved could not agree on definitions and various measurement issues. The latest development in this respect is that a final effort is to be made to continue with cost comparisons, forgetting about the benefits.

Another example is the Mahalanobis Committee, created by the International Statistical Institute in 1995. Its aim was to develop ‘statistics about statistics’. There have been no results so far, the reasons being on the one hand a total lack of interest in participating in the committee’s work, and on the other hand widespread disagreement about how to tackle the issue. The most recent important initiative that I know of was taken by the Australian Bureau of Statistics, which invited some sister agencies (including Statistics Netherlands) to participate in a benchmarking exercise on the cost and quality of some sets of statistics. At this stage it is too early to say whether the ABS exercise will work.
I believe it is obvious that governments will go on asking statistical offices about their performance and efficiency compared with statistical offices abroad. For that reason, and just as importantly because it is worthwhile for statistical offices themselves to know how cost-effectively colleagues in other countries do their work, I think the international statistical community would be well advised to make a real effort to improve its performance and cost-accounting measurements, and to do so in an internationally comparable manner. Perhaps there is a challenge here for the Conference of European Statisticians or indeed the United Nations Statistical Commission.

After all, I think there is some irony in the fact that while statisticians are constantly trying to agree in great detail which internationally comparable information (in terms of definitions, classifications, other measurement methodology etc.) to ask from businesses, institutions and households, they do not wish to agree on the measurement of their own operations, and are apparently unable - up to now - to agree on universally accepted definitions of concepts such as non-response or, indeed, statistician.

For further information or comments on this contribution, please contact Willem de Vries:


(1) GDP is clearly one of the most complex and composite statistical measures in the world today, even though one may argue that there are detailed, internationally agreed rules on how to calculate GDP, in the UN System of National Accounts.

(2) An interesting point here is, clearly, that national statistical offices are ‘unique institutions’ in their own countries, rather difficult to compare with other government agencies, but in principle far easier to compare with sister offices abroad.

(3) The Canadian Chief Statistician’s spontaneous comment when I said that Statistics Canada seemed to be expensive was: ‘But worth every penny’, which may well be true.


Chote, Robert. Performance indicators for national statistical systems. CES/1999/14. 30 March 1999.

De Vries, Willem. Are we measuring up? Questions on the performance of national statistical systems. CES/1999/15, 26 March 1999.

Copyright © United Nations, 2014