How are we doing?
Performance indicators for national statistical systems

Willem F.M. de Vries (1)

This paper proposes a systematic approach to evaluating the performance of national statistical systems. Its starting points are the so-called Fundamental Principles of Official Statistics, which were adopted by the United Nations some time ago. The aim is to translate the principles into operational terms and concrete questions about ‘how we are doing’.

1. Introduction

The rankings (or league tables, as they were called) of national statistical offices, published by The Economist newspaper some years ago, caused mild shock waves among official statisticians around the world. The first Economist ranking (1991) was primarily based on the timeliness and accuracy of some major statistical series. The second round, in 1993, also took into account judgements of chief government statisticians about the objectivity of statistics (in terms of absence of political interference), reliability of the numbers, the statistical methodology applied and the relevance of the published figures.

The appreciation of the ratings varied, of course. The national statistical offices mentioned in The Economist’s list were more or less pleased, depending on their relative position. Offices not in the list wondered why they had not been mentioned. Some offices argued that their rating was questionable or incorrect, because the information used had been incomplete or outdated. However, there was little discussion about the criteria The Economist had used, even though there was fairly broad agreement that the assessment had been somewhat superficial.

From The Economist’s point of view, as a newspaper primarily voicing the interests of the users of macro-economic statistics, the applied ‘objective’ criteria (average size of revisions to GDP growth, timeliness, and value for money in terms of number of statisticians per 10,000 population as well as the government statistics budget per head of the population) made good sense. Adding senior statisticians’ views to these criteria was perhaps not a bad idea either. However, it was clear to most insiders that the overall ratings at best presented an incomplete picture. This article proposes a more comprehensive, systematic checklist of points to be considered when evaluating a national statistical office or national statistical system.(2)

This ‘checklist’ is mainly based on the so-called Fundamental Principles of Official Statistics, first adopted by the Economic Commission for Europe during its 47th session in 1992, and subsequently endorsed by the United Nations Statistical Commission (with some minor amendments). These ten principles are now a universally agreed framework for the mission of national statistical offices and indeed also for the statistical work of official international organisations.

After quoting the official wording of each of the Fundamental Principles of Official Statistics, a brief explanation in simple words of the essence of each Principle will be given. In addition, I have tried to make the principles more operational by raising some questions about them. The answers to these questions should indicate whether and to what extent a principle is adhered to in a given NSI. The article does not discuss all aspects of each of the Principles in any depth. It only raises some points which are thought to be of key interest.(3) Neither does it discuss measurement issues (in other words: how to ‘score’ on the questions) in a strictly quantitative sense, although suggestions for a very primitive scoring system are given at the end.(4)

The question has been raised, and rightfully so, whether the approach that I am advocating here ultimately produces real indications about which are ‘good’ or ‘better’ statistical systems. A statistical system that scores high on ‘my indicators’, it is argued, may have a high ethical and professional standard and may do its very best in many ways, but is there any guarantee that it produces good, relevant, timely statistics? The answer to that question would probably be: no, but nevertheless I am convinced that there is a high positive correlation between scoring well on ‘my indicators’ and being a successful system in terms of output.

However, to accommodate the above views, I have divided the paper into two parts. Part 1 is about the Fundamental Principles, Part 2 is about real statistical output as such.

PART 1

Relevance, impartiality and equal access

1. Official statistics provide an indispensable element in the information system of a society, serving the government, the economy and the public with data about the economic, demographic, social and environmental situation. To this end, official statistics that meet the test of practical utility are to be compiled and made available on an impartial basis by official statistical agencies to honour citizens’ entitlement to public information.

In other words, Principle 1 means that official statistics should be relevant for society, compiled in an impartial manner, free from political interference and accessible for everyone under equal conditions.

One of the reasons why Britain and the USA were rated relatively low (despite their good performance in other respects) by The Economist in 1993 was: ‘the lingering suspicion that statistics in America and Britain are subject to political meddling’. Despite recent moves towards more centralisation of official statistics in Britain, a large part of the statistical work is still scattered across some 30 government departments, where the statisticians report directly to ministers. This (wrote The Economist) ‘allows politicians to take an unhealthy interest in statistics...’.

Several questions can be asked in the context of judging national statistical offices against the background of the principles of relevance, impartiality and equal access.

The ultimate question pertaining to relevance would of course be: to what extent do the users think that the activities (data collections, or ultimately outputs and products) of statistical systems are relevant for them? It is, however, extremely difficult to express this aspect of ‘user satisfaction’ in terms of one or a few simple indicators (which does not mean one should not try to do so). Some users may consider some activities to be very relevant (while others may not), and may be very dissatisfied with other activities (much liked by others). Therefore, I would propose a more general question which has to do more with the general attitude of NSI’s in this regard than with concrete indicators or measures. That question is:

1. How well developed are mechanisms to ensure that statistical work programmes are relevant for the various user groups?

In many countries, there is something like a national advisory board for statistics, but whether this works satisfactorily or not is a different matter. In addition, however, there are many other possible mechanisms to foster the relations between users and producers of official statistics. The basic question to be asked here is: are national statistical offices making a real effort to find out what their users need and to adapt their statistical programmes accordingly? And the next question would be: how flexible are they in practice when it comes to tackling ‘new’ (and probably quite relevant) subject matter areas such as the services sector, the environment, the ‘information technology sector’ and other matters relating to the economy of the ‘intangibles’, and last but not least ‘the global economy’ (including phenomena such as foreign direct investment and correct measurement of the activities of multinationals in general).

Another, more specific question regarding ‘user satisfaction’ would be:

2. How well developed are mechanisms to assess user satisfaction with statistical products and their dissemination?

Statistical programmes often describe what statistical offices are doing or planning to do: the subject matter areas to be covered, the content and coverage of data collections, and sometimes the methodology to be used and the timing and expected quality of statistical results. Apart from these programmes, there are also the actual statistical outputs to consider, and how the users appreciate them: news releases, printed publications of various kinds, data in electronic formats, including databases, etc. In other words: do statistical offices have a well developed dissemination system? Are the statistical products what the users want in terms of quality, timeliness, price, distribution modes? Are sales of statistical products increasing or declining? Is there any real, systematic marketing effort?(5)

As to impartiality, the question is:

3. How well do national statistical offices adhere to their obligation of impartiality?

This may sound relatively simple, but in fact rather complex issues are at stake. The complexity largely depends on one’s general notion of ‘impartiality’. Very orthodox official statisticians may believe that even undertaking a survey at the special request of a ministerial department may affect the impartiality of a national statistical office, especially if this department (usually paying for the extra work to be done) wants to have a say in the methodology of the survey. However, most statisticians may tend to interpret ‘impartiality’ more loosely as: to avoid taking any partisan view in the choice of definitions or methodology, and, most particularly, to avoid adopting a biased stand as to the release of statistical numbers and commentary on those numbers.

Most national statistical offices have a strong tradition of not making any non-statistical comments on their figures. Sometimes this principle is adhered to very strictly. In a press release about the latest unemployment numbers, the comment given will then be restricted to something like: ‘Compared with the previous quarter, unemployment has decreased by 0.7 percentage points’, leaving any additional comments to politicians and others. Nowadays, as many statistical offices wish to improve press coverage of their numbers, some may comment as follows: ‘The decrease of unemployment in this quarter was 0.7 percentage points compared with the previous quarter. This is the strongest quarterly decrease since the second quarter of 1982’.
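
Such a purely factual comment can in principle be generated mechanically from the series itself. The following minimal sketch in Python, with invented figures, illustrates the idea; it describes no actual office's release system.

```python
# Invented quarterly unemployment rates in %, oldest first.
rates = [6.1, 5.9, 6.0, 5.8, 5.1]

# Quarter-on-quarter changes in percentage points.
changes = [b - a for a, b in zip(rates, rates[1:])]
latest = changes[-1]

comment = (f"Compared with the previous quarter, unemployment has "
           f"changed by {latest:+.1f} percentage points.")

# Add the historical qualifier only when the latest decrease is the
# strongest one observed so far in the series.
if latest < 0 and all(latest <= c for c in changes[:-1]):
    comment += " This is the strongest quarterly decrease in the series."

print(comment)
```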

As a general principle, however, statistical offices should (and indeed most will) avoid making any comments referring to the success or failure of government policy, even if the numbers may seem obvious in revealing this.

As far as the issue of ‘political interference with statistics’ is concerned, the pertinent question is:

4. How well are statistical offices shielded from political intervention as to the content and the release of statistical results?

Some of the most common forms of unwanted political intervention seem to be:
- Pressure to change definitions in order to obtain statistics which put government policies in a better light;
- Tampering with the release of key statistical figures, in order to select a moment for release which is politically favourable or least damaging;
- Leaking to the media of ‘favourable’ statistics by politicians before the data are made available for everyone;
- Pressure to release identifiable micro-data to policy researchers in the case of statistical collections intended for and financed by specific clients (e.g. ministries).(6)

Apart from the first category (for which it is hard to formulate general rules of good practice), the highest risk of political interference with statistics therefore occurs at the stage when figures are (about to be) released. To avoid tampering with releases of fresh statistical figures, many countries have now adopted a system of announcing release dates of key statistics well (a month or even a year) in advance. Avoiding leaks may prove to be more difficult. It is the custom in many countries to give ministers a head start with respect to fresh key statistics by supplying them with the figures some time before these are officially released. This may be anything from an hour to several days and the list of recipients of these ‘pre-releases’ may be quite extensive. There is general agreement among statisticians, however, that it is commendable to restrict both the list and the time lag as much as possible.

In view of the important role of the media in making statistics available to the general public, it is sometimes argued that it should be possible to supply information to the media ‘under embargo’ (i.e. some hours before the official release time), in order to give them a better opportunity to prepare an attractive news item; this applies in particular to television news programmes, where preparation may take some time.(7)

As for ‘equal access’ the question is:

5. How well is the principle of ‘equal access under equal conditions’ adhered to?

Apart from the political considerations under the previous point, there is also the general principle of safeguarding that all users are treated equally. Some aspects of this equality are not trivial. Obviously, for certain figures a head start of a few minutes, for one user over another, may generate a considerable (financial) advantage. Therefore, statistical offices have to find ways to give all users access to new figures at virtually exactly the same moment. Apart from recently developed possibilities of simultaneous electronic distribution (e.g. by e-mailing statistical releases to the media), some countries use a system of ‘lock-ups’ for the release of certain sensitive figures.(8)

Another aspect of equality is that, in principle, all users should pay the same prices for the same statistical products and that the number of ‘privileged users’ who receive the data free of charge (government agencies, members of parliament) should be restricted as much as possible.

A slightly distinct point, which is not covered by the principle of ‘equal access’ as such, but which is nevertheless essential, is the notion that official statistics are (intended as) a public good, which should in principle be freely available for all citizens. Most NSI’s put this notion into practice through various means. First of all, as mentioned before, building up good relations with the media is important to serve the general public with basic statistical information. Secondly, it is a generally accepted practice for NSI’s to make arrangements for the most important statistics to be freely accessible in their own libraries and in university and public libraries. Thirdly, most NSI’s will give free information on the telephone (including follow-ups by sending free copies of tables etc. by mail) or by electronic channels, such as the Internet.(9)

Professionalism

2. To retain trust in official statistics, the statistical agencies need to decide according to strictly professional considerations, including scientific principles and professional ethics, on the methods and procedures for the collection, processing, storage and presentation of statistical data.

Principle 2 simply says that official statistics should be compiled by using professional methods and also that statistical results should be presented to the users in a professional manner.

The real issue here is: to what extent is the professional integrity of NSI’s safeguarded? Measuring professionalism and adherence to professional ethics(10), and even more so comparing these characteristics between national statistical offices, is obviously very difficult. On a subjective level, there may be some agreement among statisticians that national statistical office X or Y is relatively active in terms of methodological innovation in this or that area, but agreeing on some objective measure is an entirely different matter. The number of university graduates and their percentage share in the total staff of a national statistical institute may be an indication of its ‘methodological potential’, as may the number of research and methodology papers produced and published in respected scientific journals, but few would agree that this is a sound basis for comparisons between different statistical offices.

The importance of analysis and research for methodological progress and for increased efficiency and effectiveness of statistical operations is widely recognised. A United Nations report(11), which is still the standard manual on the organisation of official statistics at the national level, underlines the significance of research and analysis for various reasons, including getting a clearer picture of the value of statistics, in particular for discovering gaps and inconsistencies. An important American report(12) states that ‘It became quite clear that it is analysis that holds a statistical system in place, makes possible most communication with decision-makers about their data needs, and informs them of current statistical capability. Analysis is the glue that holds all information systems together’. Lastly, Sir Claus Moser, then director of the Central Statistical Office, said in a speech to the Royal Statistical Society (1979) that ‘One more aspect needs mentioning, namely the need for the government statistical service to devote more attention and resources to methodological work....The CSO has much to gain from constantly improving its technical standards, indeed has a duty to do so and to publish its findings’.

Therefore, some general questions may be asked to assess (the focus on) professionalism in national statistical offices.

6. How well is professionalism systematically promoted and shared by such mechanisms as analytical work, circulating and publishing methodological papers, and organising lectures and conferences?(13)
7. Are statistical methods well documented and are methodological improvements made on the basis of scientific criteria?
8. Are decisions about survey design, survey methods and techniques etc. made on the basis of professional considerations (or do other - e.g. political - considerations play a role)?
9. Is training and re-training of professional and other staff a real policy issue for the organisation and is enough effort (e.g. in a percentage of the overall budget) spent on training?
10. Is statistical quality management a real policy issue and are real and systematic efforts (including the promotion of well documented quality management guidelines) made to enhance the quality of statistics?

As for the aspect of ‘professional presentation’ of statistics, some comments have already been made under ‘impartiality’. Some other points will be made under the next paragraph on ‘accountability’.

Accountability

3. To facilitate a correct interpretation of the data, the statistical agencies are to present information according to scientific standards on the sources, methods and procedures of the statistics.

Accountability is understood in the sense that statisticians should systematically and thoroughly explain to the users of statistics what the numbers exactly represent and what their quality is.

To some extent this principle may seem trivial, but considering that the issue has long been (and still is) a topic for lively debate among statisticians, some non-trivial aspects are involved as well. The triviality lies in the fact that it is obvious that if you produce and publish figures, you should inform the user in some way what these figures are about. The debate is on how to do this in the best possible manner.(14)

In terms of so-called meta-data (information about the data, i.e. definition of the population covered, definition of the variables, description of the data sources used, description of survey methodology, etc.), there is broad agreement that it is essential for the users of statistics to have access to as complete a set of meta-data as possible. Therefore, national statistical offices should see to it that full descriptions of the complete methodology for all their collections are documented and kept up-to-date. This does not imply, obviously, that all statistical publications must contain a full set of meta-data, as that would be both impractical and user-unfriendly. Statistical databases, however, should preferably contain all the meta-data in some user-friendly form, because it would be a burden for the users to have to consult separate publications to see what the data are worth.(15)
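
To make the notion of meta-data more concrete, the sketch below shows a hypothetical record for a single series; the field names are invented for illustration and follow no particular official standard.

```python
# A hypothetical meta-data record for one statistical series.
metadata = {
    "series": "Monthly unemployment rate",
    "population": "Resident labour force aged 15-74",
    "variables": {
        "unemployment_rate": "Unemployed persons as % of the labour force",
    },
    "sources": ["Labour Force Survey", "Population register"],
    "methodology": "Rotating panel sample survey; seasonally adjusted",
    "last_updated": "1998-03-01",
}
```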

A good example of meta-data is the Sources and Methods accompanying the OECD Short Term Economic Indicators publications. Also, the initiative taken by the International Monetary Fund in 1996 to set standards(16) (general standards for all countries, plus so-called special standards for the most developed countries) for meta-data about a set of major statistical series, must be mentioned in this respect. A large number of countries have now endorsed these standards.

The question to be asked with regard to meta-data is therefore:

11. How well does a statistical office provide the users with information about what the data really mean and about the methodology used to collect and process them?

Another issue, which is closely related to the previous paragraphs on meta-data, but which is nevertheless slightly different, is how statistical offices inform the users about the quality of the data they produce. Proper meta-data may tell a lot about the quality of statistics (at least for ‘professional’ users), but they do not give the whole picture. Therefore, though there may be a certain overlap between the two, explicit statements about the quality of statistics are an additional aspect of principle 3. Quality particularly concerns such aspects as sampling and non-sampling error, any biases the data may have, information about non-response and its treatment, about imputations etc. In the eighties, the Conference of European Statisticians of the United Nations Economic Commission for Europe adopted ‘Guidelines for quality presentation’, which are still very useful and are applied in some form or other, but often not systematically, by quite a few statistical offices. The question is therefore:

12. How well developed and applied is the presentation of the quality of statistics?
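
As a minimal, hypothetical illustration of what such a presentation may involve at its simplest, the sketch below computes two common quality measures, a response rate and an approximate sampling margin of error, from invented survey figures.

```python
import math

# Invented figures: quality measures that might accompany an estimate.
gross_sample = 15_000   # units approached
respondents = 12_000    # completed questionnaires
estimate = 0.058        # estimated unemployment rate (5.8%)

response_rate = respondents / gross_sample

# Approximate 95% margin of error for a proportion under simple random
# sampling; real designs need design-effect and non-response corrections.
margin = 1.96 * math.sqrt(estimate * (1 - estimate) / respondents)

print(f"Response rate: {response_rate:.1%}")
print(f"Estimate: {estimate:.1%}, margin of error: +/-{margin:.2%} (95%)")
```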

Prevention of misuse

4. The statistical agencies are entitled to comment on erroneous interpretation and misuse of statistics.

Principle 4 means simply that statisticians may react to any wrongful use of statistics that they perceive. Although the official wording of the principle is ‘entitled’, the general understanding of the principle is that statistical agencies indeed have a duty to comment.

There are of course many different ways to define ‘erroneous interpretation’ and ‘misuse’, and not all forms of these are equally bad or harmful. Moreover: most instances of misuse will escape the attention of statistical offices. Many users know ‘how to lie with statistics’, but this need not always be a concern for statistical offices.

However, some kinds of misuse may require corrective actions: in particular misuse by government agencies and by the media. For both categories, it is commendable for statistical offices to undertake immediate corrective actions in whatever way. At Statistics Canada it used to be (and probably still is) standard policy that when any misrepresentation or misinterpretation of official statistical figures in the media was noticed, the Chief Statistician wrote a letter to the editor explaining that a mistake had been made and how the numbers ought to have been correctly presented.

Similar steps were also taken for government misuse. It was felt that this general attitude has had positive effects by ‘educating important users of statistics’.(17) So, while it may be difficult to prescribe a standard recipe for these situations, the general question that may be asked is:

13. How well and systematically do statistical offices educate their key users in order to promote proper use of statistics and to prevent misuse?

Cost-effectiveness

5. Data for statistical purposes may be drawn from all types of sources, be they statistical surveys or administrative records. Statistical agencies are to choose the sources with regard to quality, timeliness, costs and the burden on respondents.

Principle 5 means that statistical offices must try to be as cost-effective as possible by making the best choice of sources and methods, aiming at improved timeliness and also data quality, at spending tax-money as efficiently as possible and at reducing the response burden.

To some extent, possibilities to achieve cost-effectiveness depend on national circumstances. In countries with good administrative registers which are also available for statistical use, the need to have censuses or indeed traditional sample surveys will be less than in countries where such registers do not exist, are of poor quality or are not put at the disposal of the statisticians.

One of the most eloquent examples of how the national administrative infrastructure affects statistical expenditure very directly is the population census. Whereas in countries which do not have a population register (such as the United States) very costly periodic population censuses remain necessary, other countries (such as the Scandinavian countries and the Netherlands) nowadays produce very much the same statistics that were previously collected through a census by using registers and some additional sample surveys, at a mere fraction of the cost.

In terms of data input, making the best possible, balanced choice of data sources, given national circumstances, should therefore be an important issue for all statistical offices. The general question to be asked is:

14. How well considered is the ‘data sources mix’ used by statistical offices, and is achieving the best possible mix (also taking cost-effectiveness into account) a subject of systematic improvement effort?

In the different phases of data throughput (the data editing process, aggregation, analysis etc.), there are also many possibilities to increase timeliness, efficiency and/or to improve data quality. There are organisational issues to be considered, as well as methodological and technological aspects and many of these issues and aspects are inter-related. For example: introducing macro-editing instead of the more traditional micro-editing approach is only possible if statisticians are well-trained in this new approach and can make use of advanced information technology (software and hardware). It is impossible to give brief general guidelines, but the central question here seems to be fairly straightforward:

15. How effective and efficient is the data throughput in statistical offices, in terms of organisation, methodology and technology?

And an additional question of perhaps equal importance may be:

16. Is improving timeliness an issue of serious and systematic effort?
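
As an aside, the macro-editing approach mentioned above can be illustrated with a minimal sketch (the data and threshold are invented): aggregates are checked first, and only suspicious cells are sent back for micro-level review.

```python
# Provisional aggregates for the current period versus the last one.
previous = {"retail": 1040.0, "construction": 550.0, "transport": 310.0}
current = {"retail": 1065.0, "construction": 890.0, "transport": 305.0}

THRESHOLD = 0.15  # flag cells whose aggregate moved by more than 15%

flagged = [
    sector
    for sector, value in current.items()
    if abs(value - previous[sector]) / previous[sector] > THRESHOLD
]
print("Cells to review at micro level:", flagged)  # ['construction']
```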

The response burden generated by statistical offices is another aspect of their cost-effectiveness, as data collection, apart from costing taxpayers’ money, also implies costs for data providers. Therefore, reducing the response burden, in particular for data providers from the private sector, is presently an issue of concern in many countries. There are many different techniques to reduce the response burden(18), some of them fairly simple, others of a more ‘high-tech’ nature.

Comparison of the level of response burden generated by different statistical offices is very difficult, because the response burden depends on several factors, many of which are related to very specific national conditions and requirements. It is possible, however, to compare the overall development (upwards or downwards) of the response burden, as well as the general attitude of statistical offices with respect to the issue. A general question that could be asked is therefore:

17. How successful has a statistical office been in systematically reducing the response burden it imposes on data providers?

Cost-effectiveness is obviously also a matter of organisation, management and even ‘corporate culture’. It is very difficult to measure the ‘productivity’ of statistical workers and even more so to compare ‘productivity’ between different statistical offices. Efforts to compare the cost of specific, rather comparable statistical operations (such as the Labour Force Survey or the Consumer Price Index) in a few countries of the European Union have so far been unsuccessful.

Because better standards to measure productivity and cost-effectiveness in statistics do not exist, The Economist was probably right in defining a couple of simple indicators to compare these issues between countries. Therefore, I propose to stick with these indicators: number of official statisticians per 10,000 population and the government statistics budget per head of the population.(19) For countries which have a decentralised statistical system, the numbers should of course include both the central and the decentralised parts of the system. The problem is, of course, that the question ‘how are we doing in this respect’ can only be answered if comparable data for other countries are available. Nevertheless, the question must be asked:

18. How cost-effective is a national statistical system (in terms of relative cost indicators such as statisticians per 10,000 population and statistics budget per head of the population)?
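
For illustration, the arithmetic behind these two indicators is trivial; all figures in the sketch below are invented.

```python
# Invented national figures.
population = 16_000_000   # inhabitants
statisticians = 2_400     # official statisticians, whole system
budget = 180_000_000      # annual statistics budget, national currency

per_10k = statisticians / (population / 10_000)   # 1.50
budget_per_head = budget / population             # 11.25

print(f"Statisticians per 10,000 population: {per_10k:.2f}")
print(f"Statistics budget per head of population: {budget_per_head:.2f}")
```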

Confidentiality

6. Individual data collected by statistical agencies for statistical compilation, whether they refer to natural or legal persons, are to be strictly confidential and used exclusively for statistical purposes.

Again, this seems to be a very simple principle, but it has many ramifications, some of which may involve very complex issues.(20) There is a well known joke, often told in countries which used to have a centrally planned economy, but are now moving towards a more market-oriented system. It is: ‘In our country, individual data used to be widely known, while aggregates always were top secret’. This is a clear illustration of how the principle of confidentiality should not be interpreted and applied. Unfortunately, it does not say much about how it should.

Various questions can be raised about the concepts ‘individual’ and ‘confidential’. The interpretation of the concepts may also vary from country to country. However, one should first of all consider what the true meaning of the principle is: self-interest of statistical offices. The simple reason why statistical offices must adhere to confidentiality of individual data is that it is the only way to safeguard the trust of the respondents. Respondents must be certain that the information they give is used for statistical purposes only and that they therefore have no interest in supplying anything but true data.

One may look at the issues from various angles. At the general policy level one may take into account what the law (if any) says. In many countries there is legislation about the protection of the privacy of citizens. This often includes provisions for statistics and these provisions may be more or less strict. In the Netherlands, for example, the general ‘personal data protection law’ makes some exceptions for statistics and research.(21) Equally, the confidentiality of individual business data is often safeguarded legally, be it under a general statistics law or in separate legislation. However, in this respect there may be some more or less essential differences between countries, in particular as far as the legal possibilities for exchange of company data between various government agencies are concerned.

At a more basic and practical level, it seems that most statistical offices have some official policy, or at the very least an accepted practice, about how to prevent disclosure of individual data in disseminating their statistical products. A distinction may be made here between disclosure protection in the case of traditional, printed publications, and the more complex issue of disclosure protection with respect to electronic files with micro-data.(22) For printed publications, the rules are in practice often relatively simple, such as (in particular in the case of business statistics) suppressing cells in tables which contain information about just a few (e.g. three or fewer) individual entities.
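
As an illustration of such a simple suppression rule, the following sketch blanks out any cell based on three or fewer reporting units; the table layout and threshold are invented.

```python
# Invented table: cell values with the number of reporting units behind
# each cell. Cells based on three or fewer entities are suppressed.
table = {
    ("Region A", "Manufacturing"): {"value": 1520, "units": 42},
    ("Region A", "Mining"):        {"value": 310,  "units": 2},
    ("Region B", "Manufacturing"): {"value": 980,  "units": 17},
}

MIN_UNITS = 4  # publish a cell only if at least four entities contribute

for cell, data in table.items():
    published = data["value"] if data["units"] >= MIN_UNITS else "x"
    print(cell, published)  # the Region A mining cell prints as 'x'
```

A real disclosure control system would in addition need secondary suppression, so that suppressed cells cannot simply be recovered from row and column totals.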

For electronic files the rules may be more sophisticated, particularly in the case of so-called micro-data: files containing (anonymous) information about individual entities. In several countries (e.g. in the United States) such files are made generally available for research purposes: so-called public data files. The structure of these files is such that disclosure of individual data is considered to be virtually impossible. A variety of techniques is applied to prevent disclosure. In the Netherlands a distinction is made between such public data files and another type of micro-data: research-files which are not 100% ‘disclosure proof’, and which are only made available to certain categories of researchers and under very strict legal provisions.

So some general questions can be asked:

19. How well developed and practised are the rules to prevent disclosure of individual data in printed publications?

20. How well developed are techniques and systems to make statistical files available for research purposes, while preventing disclosure in the best possible manner?

Another issue regarding confidentiality is the prevention of non-statistical use of statistical data and guaranteeing administrative immunity of respondent groups. This is a rather complex problem area. When the draft of a Regulation for Community statistics (better known as the ‘European Statistical Law’) was discussed by the member states of the European Union, prolonged debates took place about the definition of and wording around such concepts as ‘statistical data’, ‘use for statistical purposes’ and ‘non-statistical use’. For a discussion of the concept of ‘administrative immunity’, one may consult Begeer et al. (1984).

Yet another issue related to the confidence of citizens in the national statistical office concerns the perception of the public that databases and networks within these offices are in practice secure against external intrusions (by ‘hacking’ or otherwise). At Statistics Netherlands great care is taken to ‘waterproof’ the internal systems from the outside world.

I suggest that we do not include all these points, however relevant and even important they may be, in the ‘performance indicator system’ which is the subject of this article.

Legislation

7. The laws, regulations and measures under which the statistical systems operate are to be made public.

Principle 7 means that the position of statistical offices, including their rights and obligations, should be codified in proper, publicly available legislation, in order to show the public what it may expect from the national statistical system.

It is impossible to set out very specific rules for statistical legislation. Much depends on national legal culture and traditions. Many countries have a formal ‘general statistics law’, but in others the statistical legislation may be scattered over a series of specific laws and various other government documents. Neither situation, however, guarantees that official statistics are in good shape; laws obviously cannot solve all problems. In some countries which do not have a ‘general statistics law’ (e.g. the United States or the United Kingdom), many of the best possible statistical practices may be adhered to, while other countries may have a statistical law which is perfectly formulated, but in practice is not much more than just another piece of paper.

Nevertheless, it is suggested that statistical legislation and/or other legislation which is also relevant for official statistics, should cover all or most of the following basic points:

- The general position of the national statistical office/system (including points such as who decides on the work programme, who decides on methodological issues, how data are collected, what are the relations between the national statistical office -if any- and other government agencies doing statistical work, what are the relations between the statistical system and government/parliament etc.)
- The position of the head of the national statistical office/system (including points such as who appoints and dismisses, to whom does the ‘national statistician’ report and about what, does he/she have any specific responsibilities etc.)
- Basic rules of data collection and confidentiality (voluntary and statutory data collection, any penalties for non-compliance with compulsory data collections, general and specific confidentiality rules)

In view of this, the question to be asked about statistical legislation may be:

21. How good is the statistical legislation in a country, in terms of clearly setting out the mission and the competencies of statistical agencies, legal obligations to provide information for statistical purposes and the protection of confidentiality of individual data?

In addition, some implementation aspects of statistical legislation or of the principles for good statistical conduct are to be taken into account where the ‘performance’ of statistical systems is concerned. In particular, it is generally considered no more than sensible and decent always to inform respondents properly of the legitimate basis for statistical data collections and other activities of statistical agencies, for instance by briefing them explicitly about the statutory or non-statutory nature of data collections. In the longer run, this is once again a matter of self-interest: ‘honesty is the best policy’. A special issue in this regard is ‘informed consent’ of respondents as to any use of the provided (individual) information for non-statistical or research purposes.

The question to be answered would be:

22. How well developed are the policies and practices of dealing with respondents, in terms of ensuring that they are fully informed of their rights and duties with regard to statistical data collection?

National coordination

8. Coordination among statistical agencies within countries is essential to achieve consistency and efficiency in the statistical system.

In other words, Principle 8 means that in order to prevent inefficiency, undue response burden and the compilation of incomparable statistics, effective mechanisms for national coordination of statistics should be in place.

Statistical coordination has two main aspects: coordination of programmes (in particular with respect to data collections) and coordination of statistical concepts. Coordination of programmes aims at achieving efficiency (avoiding duplication of efforts) and at reducing the response burden (avoiding the same information being collected several times). Coordination of standards (in particular definitions and classifications) also has efficiency and response burden effects, but aims primarily at compilation of comparable statistics.

In this latter respect it is important that the national statistical office is recognised as the ‘bureau of standards’, standards which are respected and followed by all other agencies which may be active in official statistics.

Obviously, coordination is easier to achieve in countries with a centralised statistical system (such as Canada, Australia, the Netherlands) than in countries where official statistics are highly decentralised (such as the United States, where more than 70 federal agencies are active in statistics) or relatively decentralised (such as the United Kingdom, France or Japan).

Nevertheless, coordination mechanisms in countries with decentralised systems may be well developed and successful, while coordination in countries with a centralised system does not always function perfectly.(23) The question to be asked is therefore:

23. How well developed are national statistical coordination mechanisms and to what extent do they produce the envisaged results?

International coordination

9. The use by statistical agencies in each country of international concepts, classifications and methods promotes the consistency and efficiency of statistical systems at all official levels.

Principle 9 basically means that statistical offices should as much as possible adhere to international statistical standards and best practices, not only in order to produce internationally comparable statistics, but also in order to enhance efficiency of statistical operations and the overall quality of statistics.

There are two different aspects to international statistical coordination.

First of all, it is important that national statistical systems follow international definitions and classifications, in order to achieve cross-country comparability of statistics. This may seem simple and obvious, but poses considerable problems in practice. International statistical definitions and classifications are by definition the result of a complex process of compromise. The compromise may be such that some countries can live with it better than others. In particular, developing countries may have difficulties in applying the standards fully, because the process of developing the standards is usually dominated by the more advanced countries.

Also, some ‘blocks’ of countries (e.g. the European Union) may wish to have their own specific standards, which sometimes slightly differ from the world (UN) standards.(24) Therefore, there is general international agreement that international coordination in this respect should be ‘flexible’, in the sense that countries or groups of countries are entitled to diverge from the world standards, as long as they ensure that the linkage between their standards and the world standards is straightforward and transparent.

The second aspect of international coordination is that countries should benefit as much as possible from methodological, organisational and other practical developments elsewhere. This form of coordination is aimed at improving efficiency and enhancing the quality of statistical products and operations.

Taking both aspects in one stride, the question to be asked with respect to this principle would be:

24. How well does a statistical system adhere to agreed international standards and does it contribute to the best of its abilities to the further development and promulgation of best statistical practices?

International statistical cooperation

10. Bilateral and multilateral cooperation in statistics contributes to the improvement of official statistics in all countries.

Principle 10 means that international cooperation is a prerequisite to enhance the overall, world-wide quality of official statistics. Therefore, national statistical agencies should regard it as part of their core activities to assist other countries to the best of their abilities.

Apart from international meetings of statisticians, where (the improvement of) statistical standards is discussed, quite a lot of other international statistical cooperation is going on. International organisations are trying to promote the use of standards and best practices by issuing handbooks and guidelines in many languages. Some of them also organise and finance technical cooperation programmes for developing countries or countries in transition from a centrally planned economy to a market economy. A considerable number of training institutions exist, in all continents, where statisticians are trained in statistical methods, techniques and practices. In addition, there is a lot of bilateral cooperation between countries, sometimes financed from international funds, sometimes from national aid programmes.

The efficiency and effectiveness of international technical cooperation in statistics, in terms of avoiding duplication and promoting a systematic, goal-oriented approach, is also a topic of continuous discussion between national statistical agencies and international organisations.

The question to be asked with regard to this principle would be:

25. How actively is a statistical agency involved in international technical assistance?


PART 2
And what about the figures?

(The proof of the pudding is in the eating)

Some users may think that all these noble Fundamental Principles are of course all very well, and that respecting them may certainly help to improve the statistical system in the shorter or longer run, but that they really care more about the bottom line: do national statistical offices produce good statistics? And they have a point. So, in the footsteps of The Economist I suggest we also take into account the quality of some key statistics. Without disregard for all other valuable statistics, I propose a list of ten key statistics whose importance is probably undisputed and which are produced, in some form or other, by almost all national statistical systems: annual national accounts, quarterly national accounts, labour statistics (in particular monthly or quarterly unemployment rates), income statistics, basic demographic statistics, external trade statistics, the retail trade index, statistics on the services sector, the industrial production index and the consumer price index. So the questions to be answered are:

1. How good are the annual national accounts?
2. How good are the quarterly national accounts?
3. How good are the labour statistics (unemployment rates)?
4. How good are the statistics on the distribution of income?
5. How good are the basic demographic statistics?
6. How good are the external trade statistics?
7. How good is the retail trade index?
8. How well developed are statistics on the services sector?
9. How good is the industrial production index?
10. How good is the consumer price index?

And finally: may we have your points, please?

As indicated before, the aim of the above checklist was not really to generate ‘scores’, let alone rankings of statistical offices on the basis of those scores. The primary intention of the list was rather to propose an instrument for systematic ‘self-evaluation’.

However, it may be tempting to use the results for some sort of comparison as well. Before discussing this issue, something has to be said about a general point of criticism that may be put forward against the list as such. Some people may rightly maintain that the items in the list are to some extent not entirely independent of each other. For example: the chances are that countries with good statistical legislation will also have good provisions with regard to confidentiality and prevention of political interference. Nevertheless, it is suggested that the inter-dependence of the items is not so strong that scores for individual items become meaningless, or that the overall results will be strongly biased by these inter-dependencies.

If this is accepted, three other measurement questions remain to be solved: the weights of the items, the points to be given and who sets the scores.

Obviously, not all the above issues will be considered as having the same importance. Nevertheless, since it will be impossible to agree on what weight should be given to each individual item, it is proposed to simply use equal weights.

As for points, an equally simple solution is proposed: a five point scale, in which 5 points are given for ‘very good’, 4 for ‘good’, 3 for ‘fair’, 2 for ‘poor’ and 1 for ‘very poor’.

With respect to ‘who sets the scores’, the reality is that only the senior manager(s) of each national statistical office will be in a position to judge their own agency’s performance on each of the criteria.(25)

The maximum score to be achieved, then, is 125 points on the Principles and 50 points on the Practice. Assuming that managers set the scores fairly and as objectively as possible, I would suggest that scores of 100 and 40 are perhaps too good to be true. The principal worry of statistical offices in that category should probably be not to become complacent.
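
The underlying arithmetic is simply 25 and 10 items at a maximum of 5 points each. For offices wishing to tally their own results, a trivial sketch (the ratings are invented):

```python
# Invented ratings for the 25 Principles items and the 10 Practice
# items, each on the 1-5 scale proposed above (a blank counts as 0).
principles = [4, 3, 5, 4, 3, 4, 4, 3, 4, 3,
              4, 3, 3, 4, 3, 4, 3, 3, 4, 4,
              4, 3, 4, 4, 3]
practice = [4, 3, 4, 3, 4, 4, 3, 3, 4, 4]

assert len(principles) == 25 and len(practice) == 10

print(f"Principles: {sum(principles)}/125")  # 90/125 for these ratings
print(f"Practice:   {sum(practice)}/50")     # 36/50 for these ratings
```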

For the benefit of those offices interested in finding out what their own score would be, the appendix contains a scoring card.(26)

For further information, please contact Willem de Vries at wvrs@cbs.nl.

Scoring card

Very good = 5
Good = 4
Fair = 3
Poor = 2
Very poor = 1
Blank = 0

Part 1

1. Development of mechanisms to ensure that work programmes are relevant
2. Development of mechanisms to assess user satisfaction with statistical products
3. Adherence to the obligation of impartiality
4. Freedom from political interference with statistical results
5. Adherence to the principle of equal access under equal conditions
6. Systematic promotion and sharing of professionalism
7. Improving methodology on a scientific basis
8. Survey design and methodology based on professional criteria only
9. Systematic efforts to train and re-train staff
10. Systematic promotion of statistical quality management
11. Systematic providing of adequate meta-data
12. Systematic presentation of the quality of statistics
13. Systematic education of key users in order to prevent misuse of statistics
14. Systematic efforts to achieve the best possible ‘data sources mix’
15. Systematic efforts to improve cost-effectiveness
16. Systematic efforts to improve timeliness of statistics
17. Systematic efforts to reduce the response burden
18. Cost-effectiveness in terms of statisticians/budget/ population ratios
19. Rules and practices to prevent disclosure from printed publications
20. Development of methods to supply micro data files, preventing disclosure
21. Quality of the statistical legislation
22. Development of practices for honestly dealing with respondents
23. Development of national statistical coordination mechanisms
24. (Flexible) Adherence to international statistical standards
25. Involvement in international statistical cooperation

Part 2

1. Quality of the annual national accounts
2. Quality of the quarterly national accounts
3. Quality of the labour statistics (unemployment rates)
4. Quality of statistics on income distribution
5. Quality of basic demographic statistics
6. Quality of the external trade statistics
7. Quality of the retail trade index
8. Quality of statistics on the services sector
9. Quality of the industrial production index
10. Quality of the consumer price index


Notes:

(1) Deputy Director-General of Statistics Netherlands. The opinions expressed here are personal and do not necessarily reflect Statistics Netherlands’ position or policies. The author thanks Ad Willeboordse, Lidwine Dellaert, Henk van Tuinen, Wouter Keller and Johan Lock for their useful comments on a first draft.

(2) Theoretically, there is of course a distinction to be made between ‘system’ and ‘office’. In countries with a decentralised statistical system, the ‘system’ consists of a collection of ‘national statistical offices’. Throughout this article I refer to the systems as a whole, even though I may from time to time use the term ‘national statistical office’ or ‘institute’ (NSI for short, this being the commonly used international term). Obviously, measuring the performance of a (decentralised) ‘system’ may in practice be more complex than measuring the performance of single ‘offices’, but this article is not so much about the technicalities of measuring.

(3) Some Principles (e.g. the one on confidentiality) involve so many complex issues that they may be (and indeed sometimes are) the subject for regular meetings or full-fledged conferences of experts.

(4) Statisticians are, naturally, keen on ‘how to measure or quantify’. The discussion of measuring techniques, however, is beyond the scope of this paper. The author would hope that this article stimulates the debate about ‘how best to measure performance in practice’.

(5) One may argue that this is an awkward and tricky question. What if an NSI is very active in marketing and measuring user satisfaction, but gets poor results (low user satisfaction) in return? Does it score high on this issue or not? My assumption is, however, that an NSI which shows this kind of real user orientation, will in the end almost unavoidably improve its performance in this regard.

(6) This may be a specific Dutch problem. The policy of Statistics Netherlands is not to give in to such pressure.

(7) At Statistics Netherlands, this is still under discussion. It will definitely not be applied to really sensitive statistics, such as the CPI and others.

(8) Under this system, members of the press are literally locked up in a room, some time before the moment of official, pre-announced release of the statistics. The journalists are then presented with the statistics to enable them to compose their article or message. The room is equipped with computer facilities and telecommunication equipment. However, telecommunications are of course blocked until a central switch is turned on.

(9) Discussions on how far ‘free’ should go, however, are still inconclusive. Some argue that all available statistics should be supplied free of charge on the Internet, others think that only some basic information should be free, while for further details some charge should be paid. The second point of view would be consistent with the most common practices followed for printed material: limited sets of material (e.g. some photocopies) are free, users who need more have to pay the marginal cost of the data carrier plus postage, occasionally even for the extra work involved to compile alternative tabulations etc.

(10) The universally agreed standards of professional ethics for statisticians are laid down in the Declaration of professional ethics of the International Statistical Institute, 1985.

(11) The organization of national statistical services: a review of major issues (New York, 1977).

(12) Report of the American President’s Reorganization Project for the Federal statistical system (Washington, DC, 1981).

(13) Having units in statistical offices whose main tasks are analytical work and giving methodological advice may not be essential in this respect, but is certainly helpful to promote professionalism.

(14) Which of course includes the question: how far and how deep should this information go? Experience shows that some users are very deeply interested in ‘what’s behind the numbers’, while others, to put it bluntly, couldn’t care less.

(15) A special point of concern is to ensure that the data elements in a time series are consistent and if not, to inform the users clearly about the exact nature of any inconsistencies.

(16) This process was initiated with a paper about Development of Standards for Dissemination of Economic and Financial Statistics to the Public by Member Countries, IMF, 1995.

(17) It may be argued that the fundamental principle in question is perhaps too defensively worded and that the real issue is that NSI’s, more in general, should make an effort to educate and train the users, not so much in order to prevent misuse, but to promote the best possible use.

(18) See for example: ‘Reducing the response burden; some developments in the Netherlands’, by Willem de Vries et al.; International Statistical Review, 2/1996.

(19) The Economist used the cost of statistics as such more as a background variable than as a performance indicator in its own right. Performing well at a relatively low cost was of course regarded as an additional positive feature. It may be argued that The Economist’s ‘formula’ is unfair for smaller countries and that something like statistical budget / √population is a more adequate measure.

(20) For a comprehensive analysis of some major issues one may wish to read Administration and statistics by W. Begeer et al.; Eurostat, 1984.

(21) In the sense that data files which are kept for statistical or research purposes only, are not subject to the general rule that individuals are entitled to check what is registered about them in the files, as well as to correct this information if they so wish.

(22) Statistics Netherlands has developed the so-called ARGUS software to check files for disclosure risks.

(23) In the Netherlands statistical activities outside Statistics Netherlands are insignificant in size. Nevertheless, Statistics Netherlands has recently set up a small unit to monitor such activities and to advise other government agencies on how their statistical needs may best be fulfilled.

(24) In the case of the European Union, moreover, these standards are legally imposed on Member States.

(25) A better alternative would perhaps be to ask statisticians to score statistical systems other than their own, but the reality is that this would take too much effort, because one would need to collect and study quite some material to do so in a more or less satisfactory manner.

(26) Eleven senior managers at Statistics Netherlands, scoring in accordance with this list in 1997, produced an average item score of 3.6 points, the extremes being 3.8 on the high end and 3.5 on the low.

