
Quality systems and statistical auditing - A pragmatic approach to statistical quality management

Willem de Vries and Richard van Brakel

This article(1) is about quality management at Statistics Netherlands (SN), and about statistical auditing in particular. Statistics Netherlands adopted an overall quality programme in 1996: ‘CBS 2000’. One of the objectives of this business plan is to introduce quality systems in all statistical departments. A standardised model for such a system will be finalised by the end of 1998. Provisional guidelines for quality systems were issued in 1997. Simultaneously, a system of ‘statistical auditing’ was set up to check how quality management in statistical departments is functioning, and how the quality of statistical products and procedures may be improved.

Introduction

As providers of essential information for public debate and for decision making at various levels of society, national statistical institutes (NSIs) have to take the quality of their products and services very seriously. This has always been so, and quality issues have been around as long as NSIs have existed. However, particularly since the performance of NSIs, and indeed of government services in general, has come under closer scrutiny in many countries, quality management has lately become a focal point for many NSIs. Obviously, there are many sides to the ‘quality’ of official statistics. To mention some of the most important aspects: official statistics must be

- relevant
- timely and
- accurate,
but they should also be
- produced cost-effectively, and
- without too much of a burden for data providers.

Each of these major quality aspects of official statistics requires its own quality management approach.

Quality systems

NSIs appear to adopt various approaches to quality management. Some have opted for a system of Total Quality Management (TQM); others aim at certification along the lines of the ISO 9000 system.

In 1996, Statistics Netherlands (SN) adopted a comprehensive quality programme of its own, laid down in the form of a ‘business plan’ for the next decade(2). Apart from some general background information about this business plan, this article highlights two specific components of the overall quality programme. On the one hand it is about the quality guidelines we are introducing; on the other it is about a system of what we have named ‘statistical auditing’. The focus of statistical auditing in this sense is on the quality of the statistical production process. This implies that it relates primarily to the quality elements ‘timely’, ‘accurate’, ‘produced in a cost-effective manner’ and ‘without too much of a burden for data providers’. ‘Relevance’, though an important part of the quality guidelines, is usually not covered in depth by the statistical audits. There are other mechanisms to measure user satisfaction with the SN work programme in general and with individual sets of statistics in particular. These too are set out in the business plan.

We certainly do not claim to have ‘invented’ statistical auditing at Statistics Netherlands. In fact, the approach we have taken was partly inspired by similar activities that have been going on at Statistics Canada for a number of years.(3)

What is quality?

Essentially, quality is a subjective measure of the properties of a product or service. Quality assessments by users of products and services depend to a large extent on their specific needs, and also on their expectations. This implies that what one user considers insufficient may at the same time be excellent for another. Another useful definition of quality is therefore ‘fit for purpose’ (i.e. the purpose of a specific user). To illustrate this with an example from official statistics: macro-economic policymakers will generally be satisfied with fairly quick, reasonably accurate, highly aggregated statistics about international trade, while these same statistics, in that particular format, will be virtually worthless for a user who needs numbers for market research in the area of cosmetics or Scotch whisky. It is therefore rather difficult to assess the quality of statistics in simple ‘objective’ terms. In addition, as mentioned before, there are quite a few aspects to the ‘quality of statistics’. Statistics are indeed a fairly complex product. If McDonald’s needs several pages of text to properly define the quality of a Big Mac, which is, with all due respect, just a bun with two hamburgers inside, it is no wonder that it takes a whole book to properly describe, say, the quality of the Consumer Price Index, let alone the quality of the national accounts.

ISO or not ISO

Quality systems encompass the organisational structures, responsibilities, procedures, processes and infrastructure for the implementation of quality care. There are several types of quality systems, differing in philosophy, degree of ‘regulatory ambition’ and applicability. Nearly all activities and instruments of an organisation affect the quality of its products; quality care is therefore strongly linked to organisation and management issues. This is not the place to go into these different systems in depth and detail, but some remarks about the so-called ISO norms are appropriate. The general philosophy of ISO is that, in order to ensure a certain minimal quality level of final products, an organisation must be able to demonstrate that all details of the production process are in some way formalised and are thus in principle kept ‘under control’. Under ISO, quality audits are used to monitor whether the system is actually in place, whether it is respected by staff and whether it is maintained by managers.

Some authors, at least in the Netherlands, are critical of the ISO system (Swinkels et al., 1995). Indeed, ISO seems to be rather bureaucratic and costly. Other authors think that certain preconditions have to be met before it is useful to introduce any quality system at all (Spoelstra et al., 1993). They believe the necessary conditions include: a shared strategic vision of top management, effective communication, effective management at all levels (in particular the lower management levels), clear targets and objectives, and lastly an organisational climate and culture in which success and good performance are systematically recognised and rewarded.

After careful consideration, Statistics Netherlands decided not to go for ISO, but to adopt a more modest, pragmatic approach. One particular reason for this decision was that the statistical process is in a phase of such dramatic change (redesign of many statistical processes, introduction of electronic data interchange [EDI], combining external registrations with Statistics Netherlands data collections, organisational changes) that investing in a system that would primarily describe the present situation and procedures was deemed inefficient. However, we did expect local managers to operate some kind of quality system, and to promote the introduction of such systems we issued provisional guidelines. These will be discussed later. Ultimately, we want to establish more binding guidelines for quality systems, but in the meantime one of the main purposes of statistical auditing as we see it is to find out exactly which quality systems are in place to guarantee a certain level of quality of the statistical product. The final guidelines will be developed on the basis of the best practices applied in our office.

SN Business Plan

As mentioned before, SN adopted a general quality programme (business plan) in 1996. The SN Business Plan (SN 2000 for short) sets out six major objectives:


A relevant work programme.

This objective has to do with all mechanisms to ensure that the work programme of Statistics Netherlands meets the needs of the users. Decisions about the work programme are made by the Central Commission for Statistics. To assess user satisfaction, regular ‘evaluation rounds’ will be held among all major user groups: ministries, government research and planning institutions, organisations representing employers and employees, academia etc. In addition, and this has to do in particular with the aim of Statistics Netherlands to respond flexibly to new user needs, each four-year work programme presented to the Central Commission for Statistics (starting with the work programme for 2001-2004) will contain proposals to exchange 10 per cent (in budgetary terms) of ‘old’ for ‘new’ statistics, enabling the Commission to make real choices and to set priorities.


A substantially reduced response burden.

This objective will be achieved by a mix of different instruments, such as increased use of registrations kept by other government institutions, more cooperation with other data-collecting agencies, ‘speaking the language of the respondents’ better, introduction of EDI techniques and the systematic promotion of advanced statistical methods. To monitor the response burden, a ‘response burden meter’ was introduced in 1996. It shows the development of the statistical response burden as an index (1994=100). Each year, parliament is informed about how and to what extent the response burden has been reduced, with a target of a 12.5 per cent reduction by 1998. In addition, despite the fact that the Dutch system of official statistics is highly centralised, a Centre for Government Surveys has been created. This Centre has two basic functions: a) to detect where statistical activities (in particular surveys) are taking place or are being planned elsewhere within central government and b) to help the people making such plans obtain the data via SN, by either adapting or expanding existing data collections or applying special analyses to data already collected.
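The mechanics of such a response burden meter can be illustrated with a short calculation. The sketch below (in Python) uses invented yearly burden figures purely for illustration; only the base year (1994 = 100) and the 12.5 per cent reduction target are taken from the plan.

    # Sketch of a response burden index (1994 = 100). The yearly totals are
    # invented for illustration; they are not Statistics Netherlands figures.
    BASE_YEAR = 1994
    burden_hours = {1994: 100_000, 1995: 97_500, 1996: 94_000,
                    1997: 90_500, 1998: 87_000}

    def burden_index(year: int) -> float:
        """Response burden as an index relative to the base year (1994 = 100)."""
        return 100.0 * burden_hours[year] / burden_hours[BASE_YEAR]

    for year in sorted(burden_hours):
        print(year, round(burden_index(year), 1))

    # Target: a 12.5 per cent reduction by 1998, i.e. an index of 87.5 or lower.
    print("Target met:", burden_index(1998) <= 87.5)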


Effective statistical information.

To measure the effectiveness of the statistical information it produces, Statistics Netherlands uses some specific targets and indicators. For example: the sales of printed publications in the year 2000 must be twenty per cent higher than in 1994. Sixty per cent of SN’s press releases, on average, must be picked up by the seven main national daily newspapers. More importantly, by the end of 1997, all important SN data, including all necessary meta-information, must be available through the user database StatLine. Lastly, by the year 2000 a broad range of ‘client satisfaction monitoring instruments’ will be in place.

Comprehensive quality management system.

The aim is to have a comprehensive ‘quality management manual’ by the end of 1998. In the year 2000, over half of all statistical projects will comply with this manual. In addition, and this is what this paper is specifically about, each statistical project will be subjected to an auditing exercise every five years, including a follow-up to see whether deficiencies have been corrected. Some other specific targets are: a) the response rates in household surveys should be 8 percentage points higher in 2000 than in 1996 and b) SN will publish at least one hundred research papers a year externally.


Adequately trained and motivated staff.

Owing to dramatic changes in the statistical process, we foresee substantial changes in our staff as well over the next five years or so. Increased mobility will be required, both internally (the target is ten per cent yearly) and externally. Quite a lot of tasks will no longer be required; some people may be re-trained for new positions, but others will have to look for employment elsewhere. A major programme of ‘empowerment’ and training (at present, two per cent of SN’s budget is earmarked for training) is in place to promote internal and external mobility, and some financial incentives are also available. On the other hand, we wish to recruit a substantial number of young, highly trained professionals. Another, more specific target is a reduction in sick leave to an average of five per cent. Lastly, general job satisfaction among staff will be monitored systematically.

An efficient, well managed, flexible organisation.

The present system of management contracts will be further developed. This also requires further improvement of our accounting structure and management information systems.

The ultimate aim of SN 2000 is to create a vital organisation with a manageable budget.


Provisional quality guidelines

As stated above, there are many aspects to ‘statistical quality’. In order to measure and monitor quality effectively in any systematic way, it is therefore necessary to define statistical quality more concretely and precisely. We think the quality criteria listed below are the most relevant and important for the production process of official statistics. We would therefore expect quality systems at all levels of the organisation to cover most of these aspects. In the provisional quality guidelines, a number of points to be taken into account are listed for each of five major aspects. The listing here is merely meant as an illustration and is therefore not exhaustive. The aspects covered in the list are:

1. purpose of the statistical collection
2. the survey design
3. data input
4. data throughput
5. data output

The list explicitly does not yet cover marketing, the appearance of publications or management systems (other than statistical management in a narrow sense).

1. The purpose of statistical collections

-- Who are the most important internal and external users of the statistics?
-- When were they last consulted about their needs?
-- What are their needs as regards: detail of variables, levels of aggregation, periodicity, coverage, comparability with other statistics, timeliness, accuracy etc.?
-- What is the real value of the statistics in relation to what the users would want?
-- Which user needs cannot be met?
-- What may be done to improve this situation?

2. The survey design

-- Is the survey design documented?
-- Are the statistics based on data collection in the field, or on integration of existing data sources?

Data collection:

-- What type of sampling (if any) is applied and why?
-- Which sampling frames are used?
-- To what extent is the sampling frame complete and up-to-date?
-- Does the frame contain the right kind of statistical units?
-- How do you cope with imperfections in these respects?
-- What is the medium of data collection (EDI, mailed questionnaires, interviews etc.)?

Data sources:

-- Which data sources are used?
-- Are there alternatives and why are they not used?

Structure of questionnaires:

-- Have questions been tested for clarity? Are they answerable?
-- Are questions assessed for validity?

3 and 4. Data input and throughput

Input planning and procedures:

-- Is there a time schedule for the different phases of the statistical process?
-- How is the input process organised and monitored?
-- Have any efforts been made to optimise the input and throughput process?
-- Are there documented procedures for non-response treatment, imputation, data editing, raising, estimation, cross-checking data?
-- For integration processes: is the relationship between statistics and their sources documented?
-- For data editing: are all questionnaires checked/cleaned individually and if not, what are the criteria for selection?
-- How are sampling frame errors treated?
-- Imputation: how are non-response gaps filled? (A minimal illustration follows this list.)
-- Weighting and raising: are intermediate results calculated and how are they used?
-- How are statistics matched with other numbers and time series?
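The imputation question above can be made concrete with a small sketch. The Python fragment below shows the simplest textbook approach, mean imputation; it is an illustration only and not SN’s actual procedure, which the documented procedures asked for in this list would describe.

    # Illustration only: mean imputation of item non-response.
    from statistics import mean

    def impute_mean(values):
        """Replace missing observations (None) with the mean of observed values."""
        observed = [v for v in values if v is not None]
        if not observed:
            raise ValueError("cannot impute: no observed values")
        fill = mean(observed)
        return [fill if v is None else v for v in values]

    # Reported turnover with two non-respondents:
    print(impute_mean([120.0, None, 95.0, 110.0, None]))
    # -> [120.0, 108.33..., 95.0, 110.0, 108.33...]

In practice a statistical department would use more refined methods (for instance ratio or donor imputation); the point of the guideline question is that whichever method is used should be documented.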

5. Output

-- Does the final product meet the users’ needs?
-- Are there any discrepancies with other, related SN statistics and what has been done to minimise these?
-- Are analyses of discrepancies well documented and publicly available?
-- Are efforts made to avoid misinterpretation of the statistics?
-- How is the quality of the statistics presented to users?
-- Is a complete quality description available for users?
-- What exactly is known about non-sampling errors? Is this knowledge well documented?

The questions in these provisional guidelines are meant to increase quality awareness in statistical departments. In the course of an audit, a checklist for self-evaluation is given to the auditees. This is just one illustration of an important principle of auditing as we see it: auditing is not in the first place a corrective ‘policing’ instrument, but rather a coaching tool to enhance the general feeling that quality is important, and it should therefore ultimately be perceived as a preventive mechanism. As such, these guidelines are only a first step towards the more definitive quality guidelines mentioned before.
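As an illustration, a self-evaluation checklist of this kind could be represented as a simple scored structure. The sketch below is hypothetical: the five aspects follow the provisional guidelines above, but the questions shown and the 1-5 scoring scale are our own invention.

    # Hypothetical self-evaluation checklist; the aspects follow the provisional
    # guidelines, the questions and 1-5 scores are invented for illustration.
    checklist = {
        "purpose": {"Users consulted recently?": 4, "Unmet needs identified?": 3},
        "survey design": {"Design documented?": 5, "Questions tested for clarity?": 2},
        "data input": {"Time schedule in place?": 4},
        "data throughput": {"Imputation procedure documented?": 3},
        "data output": {"Quality description available to users?": 2},
    }

    def aspect_scores(cl):
        """Average self-assessment score per quality aspect."""
        return {aspect: sum(q.values()) / len(q) for aspect, q in cl.items()}

    print(aspect_scores(checklist))

Low-scoring aspects would then be natural points of attention in the audit interviews.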

Introduction of auditing

Now we come to statistical auditing in the sense of this paper. As mentioned before, statistical auditing as it has been introduced in SN is a technique which has three purposes:

-- to actually find out what is being done about quality management in statistical departments
-- to generate suggestions on how to improve quality management
-- to find out what the best quality practices are and to incorporate these into the guidelines for quality systems that will be issued by the end of 1998.

It should be stressed, and this has been made clear over and over within SN, that auditing is not intended to be a form of ‘policing’ to find out where things are not going as they should. On the contrary: statistical auditing should be (and increasingly is) perceived as a form of help and advice to achieve improvements. However, this does not mean that it is entirely innocent and harmless. If the auditors discover weaknesses and unprofessional approaches, they will certainly report these and discuss them with management. Also, in the final discussion about the audit reports, agreements are made about how to achieve specific improvements. Finally, there is a systematic follow-up to check whether the agreements are implemented.

To gain experience with statistical auditing, two pilots were carried out in 1996: one on the statistics on Performing Arts, the other on statistics of the Transport industry. The aim of the pilots was to better define the scope of future regular audits and to develop a set of rules on procedures and instruments. The pilot audits were done by two teams of three SN staff members each. A private consulting company with broad experience in auditing and quality management was commissioned to train the auditors and to moderate the process. Auditors were selected by the Audit secretariat on the basis of their statistical and managerial qualities. The techniques applied during the audits were interviews on the one hand and analysis of documentation on the other. The findings of the audits and the recommendations made on the basis of these findings were laid down in reports.
As to the selection of auditors, the idea was that all audits would be done by SN’s own staff. The aim was to create a ‘pool’ of about 25 auditors from various divisions, selected on the basis of their expertise but to some extent also their personality. We want the auditors to come from various divisions to ensure that a variety of experiences and expertise is represented in the audit teams. The auditors do this work on a part-time basis only, because we want them to remain involved in regular statistical activities as well. The disadvantage of full-time auditors would be that such people may ‘lose touch’ with current practices and new developments. Ideally, an audit team consists of one person who is a specialist in statistical methodology, one who is well versed in statistical organisation aspects and one who has a special affinity with producing outputs. In addition, some of the qualities looked for are:

-- good communicative skills at various levels; diplomatic skills
-- good analytic qualities
-- openness for change
-- knowledge of statistical processes
-- good editorial qualities and the ability to present results orally

It appeared to be relatively difficult to find a sufficient number of (potential) auditors, not only because of the qualities that auditors must have, but also because people who have these qualities are usually much in demand for other priority issues. The selected auditors were subsequently trained for four days in such areas as audit philosophy, norms, rules and procedures, interview techniques, and reporting and presenting audit results.

Evaluation of pilots

A first evaluation of the pilot audits showed the following main points:

-- most auditors had liked the work
-- they were received well in the audited sectors and cooperation of staff and management had been good
-- the training had been enjoyed
-- in one of the two cases, the terms of reference for the audit had not been explicit enough, which had resulted in an incomplete audit
-- drafting a systematic audit plan for the sector to be audited was important (including questions such as: who supplies/reads documentation, who interviews whom etc.)
-- auditing takes time; therefore it is not possible to combine it with other tasks during the audit period
-- to remain distant and objective can be difficult, in particular when auditees become emotional and when the auditor is in some way familiar with the audited sector
-- on the part of the auditees, more than 70% were convinced of the usefulness of audits; 90% felt that the atmosphere during audit interviews had been relaxed; 71% felt that the evaluation session had been good; 90% thought that the audit report was well written and clear, although some auditees thought that the conclusions could have been firmer; and 65% thought that the recommendations of the audit report had been useful. It was also noted that, in general, most auditees very much like to talk about their work and enjoy sharing experiences and problems with others.

Code of conduct

As one of the results of the pilot audits, the following code of conduct for audits was agreed.

-- The main purpose of statistical audits within SN is to help statistical sectors identify the weak and strong points of their statistical processes and how these may be improved. In a way, audits are like holding up a ‘mirror’ to the auditees.
-- There will be an audit plan, as part of the management contracts between division managers and the Director-General. Each statistical process in a statistical department will be audited once every five years.
-- Audits are organised and moderated by an audit secretariat, which is part of the DG staff.
-- Audits are carried out by teams of three auditors, selected on the basis of specific expertise. A pool of about 25 auditors will be trained and regularly employed. Their performance will be monitored regularly by the audit secretariat.
-- Before an audit starts, the procedures and planning will be agreed with the department manager.
-- The department manager is responsible for the supply of proper documentation, including a list of employees and their tasks, work instructions, checklists, handbooks and existing guidelines for quality control. He or she also appoints a contact person from the sector.
-- In a workshop, the audit secretariat briefs the audit-team on how the audit will be carried out. Also, the scope of the audit (including any points which deserve special attention) is formulated.
-- The audit secretariat organises an introductory meeting, in which the scope and procedures are discussed. After that an interview scheme is drafted (implying, among other things, the final selection of the people to be interviewed). The maximum number of interviews per day is three, each conducted by two auditors, because interviews are meant to be relaxed. Interview reports are for the auditors only; however, all reports are given to the auditees for correction.
-- The audit team drafts a first report, which is discussed with the audit secretariat.
-- One audit secretary and the lead auditor then discuss this first draft with the department head and the contact person.
-- The audit report is subsequently discussed in a meeting with department head and auditees.
-- The final audit report is then written and sent to the department manager. A copy is sent to the Director-General of SN.
-- The department manager has three months to react and to draft a plan for improvements on the basis of the recommendations.
-- One year after the audit has taken place a questionnaire is sent to the department manager in order to check what has been done with the recommendations.
-- After every five audits, the Audit Secretariat writes a summary report about important results, which may be beneficial for other departments as well. This report is discussed by the Management Committee for Auditing and Quality Care and is also widely circulated.

Planning of future audits

In 1997 a start was made with a regular audit programme. The programme is rather ambitious: two audits take place each month (except July and August). So far, however, the schedule has been kept.(4)
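The figures quoted in this article allow a back-of-the-envelope check that this programme is feasible with a part-time pool. The sketch below simply combines numbers mentioned elsewhere in the text: two audits per month for ten months, teams of three, a pool of about 25 auditors, and the 80 auditor-hours per audit mentioned in footnote 4.

    # Back-of-the-envelope workload check, using figures quoted in this article.
    audits_per_year = 2 * 10            # two audits per month, except July/August
    team_size = 3                       # auditors per audit team
    pool_size = 25                      # envisaged pool of part-time auditors
    hours_per_audit = 80                # per auditor, see footnote 4

    assignments = audits_per_year * team_size        # 60 team slots per year
    audits_per_auditor = assignments / pool_size     # 2.4 audits per auditor
    print(audits_per_auditor * hours_per_audit)      # ~192 hours per auditor per year

    # At 20 audits a year, a five-year cycle covers about 100 statistical processes.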

It is felt that some points still require further discussion and care:

-- The way of reporting audit results, in particular whether more openness (so far the reports have been treated as rather confidential) might be useful to enhance the ‘learning effects’ of audits for departments other than the audited sector.
-- Whether specific priorities will have to be set to audit particular aspects of statistical processes that seem to be a ‘weak spot’ or an urgent problem. An effort is now made to formulate this expressly in each individual audit instruction.
-- The follow-up of audits; is a plan to remedy weak points drawn up and implemented and how should the implementation be monitored?

References

Spoelstra, G.J.P. and J. de Vries, ‘Met de lijn de vloer op: het realiseren van integrale kwaliteitszorg’ (Management on the work floor: the realisation of integral quality care). In: Personeelsbeleid 29, 1993, no. 1, pp. 26-31.

Swinkels, G.J.P., J.G. Verrijdt and G.J. van der Pijl, ‘Kwaliteitszorg, kwaliteitssysteem en certificering’ (Quality care, quality systems and certification). In: Maandblad Accountancy en Bedrijfseconomie, no. 4, 1995, pp. 253-261.
____________________________________________________

Footnotes:

(1) This article is based on a paper presented at the DGINS conference in Stockholm, May 1998. It was written on a personal basis and does not necessarily reflect the views or policies of Statistics Netherlands.

(2) CBS 2000: doeltreffende diensten, lage lasten (the SN business plan ‘SN 2000’: effective services, low burden on society).

(3) The authors thank Henk van Tuinen, Director for Statistical Policy and Dick Kroeze of the Audit Secretariat for their comments, as well as René Huigen and Josée Nollen, who laid the groundwork for more systematic quality thinking in Statistics Netherlands.

(4) One point that is not entirely in conformity with the original planning is that audits (from start to finish) take about two months, while the expectation was one month. The reason is not so much that auditors have insufficient time (each auditor may spend 80 working hours on an audit), but that it takes more time than expected to write summaries and to formulate recommendations that are applicable to other sectors as well.

