Monitoring the Performance of a National Statistical Institute (NSI)

Ivan P. Fellegi and Gordon J. Brackstone
Statistics Canada [Footnote 1: Prepared for the Plenary Session of the Conference of European Statisticians, June 1999]

1. Introduction

As NSIs we take pride in our ability to monitor the performance of our countries, with all their complexities and interdependencies. It would be ironic indeed if we failed to monitor our own performance effectively. And yet it is easy to underestimate the importance of performance monitoring to a NSI, to think of performance reports as a necessary but tiresome administrative burden, and not to bring the same thought and attention to performance measurement as we try to bring to the measurement challenges that our economies and societies present. In this paper we consider the objectives of performance monitoring for a NSI and some approaches towards meeting those objectives. We emphasize the need for an integrated system of performance monitoring to underlie any narrower set of performance indicators that may be required or produced. Our emphasis here is on medium to long-term performance monitoring and management, and not on day to day operational management of individual programs.

2. Objectives and purpose

The design of any measurement program has to start with some purpose and objectives. Why is performance measurement needed? Who will use the results and for what? As with the measurement of a national economy, there are no simple or single answers to questions like these. The performance of a NSI is a multidimensional concept and different audiences are interested in different dimensions of that performance. For example, financial performance may be of interest to the Treasury, quality of output will be of interest to users, and respondent burden will concern the business community. Individual performance indicators may be useful for specific purposes but inevitably reflect a one-dimensional view of the NSI. We will argue, first, that the design of a performance measurement process should be holistic and driven primarily by the needs of the NSI itself, and second, that the design of the process itself is as important as any particular set of "indicators".

Understanding its own performance is crucial to almost all the decisions and trade-offs that the management of the NSI has to make. Without such understanding, corporately and within individual programs, we are groping in the dark when it comes to deciding where to adjust and where to invest. We ourselves use similar arguments when we emphasize the necessity of a sound statistical base for managing the affairs of a nation. Therefore, the management information needs of the NSI should be the principal determinant of what performance information is gathered and used. Chances are that if there is adequate information for management purposes, all other needs can also be satisfied.

Nevertheless, the external audiences for performance information are also very important and often influential, but, as we have said, their interests are usually confined to specific dimensions of performance. It would be an unusual situation for any of these specific interests not to be a subset of what is of interest to an informed NSI management, but bizarre external interests can sometimes arise! In any event, NSI management, concerned with all dimensions of performance, can ensure that the performance information system they put in place "spins off" the specific indicators required by external audiences. This helps to avoid duplication and even inconsistency when different parts of the NSI become involved in reporting related indicators to different audiences.

This integration of specific indicators in an overall performance assessment process has a second important advantage. It enables the NSI to put into context any unidimensional performance indicators that it is required, or chooses, to report. The risk of misinterpretation of isolated indicators is real and the existence of a context to explain changes in an individual indicator is valuable. For example, an increase in sampling error could be the result of a conscious decision to reduce sample size in favour of devoting more resources to nonresponse follow-up. An auditor monitoring sampling errors alone and one monitoring response rates alone would obtain quite different impressions of performance. This same context can also counteract the distorting effects of a "management by indicator" syndrome whereby all effort is devoted to optimizing one particular indicator without due regard to the impact on other indicators.

For these reasons, we argue that the management information needs of the NSI should be the principal determinant of what performance measures are put in place. An integrated and credible self-imposed system of performance assessment is much preferable to a reactive accumulation of arbitrary indicators imposed on the NSI at various times and by various bodies.

Lest we leave an impression that only the needs of the NSI matter, we should observe that the assessment of performance information is not an optional activity for the NSI. Quite apart from any legislated or other obligatory requirements, there is a moral responsibility to report on dimensions of performance that are not visible from outside the NSI. Some aspects of performance (e.g. timeliness, financial performance, relevance) may be visible to outsiders from observation or public records, but others (especially many aspects of data quality) are not - unless the NSI makes them known. The NSI has to make information on data quality and methodology known to its users, whether or not it recognizes that this information is also important for its own internal resource allocation processes.

Note that we emphasize the notion of performance assessment, rather than performance measurement. Indeed, what is crucial is that the top management of the NSI have in place systematic and all-encompassing processes designed to highlight different aspects of the organization's performance. The various processes should provide signals about all aspects of performance - both globally and by individual program or product. These signals can then be integrated within a comprehensive planning system that facilitates the necessarily subjective decision-making that is the inescapable essence of management: making trade-offs among different "goods" (such as sampling errors, timeliness, reporting burden, costs, user needs, etc.). Within such a planning system, not all aspects of performance are quantitatively measurable (and hence capable of summary through "indicators" in the usual sense). Yet this element of subjectivity should not be equated with being haphazard in choosing which elements of performance to consider. To re-emphasize, the process of gathering performance information, both quantitative and qualitative, should be comprehensive.

Coming back to our initial analogy with measuring the performance of the national economy or society, the foregoing discussion highlights an important difference: given its much smaller scale of activity, the management of the NSI is much better able to cope with the integration of qualitative information in its decision-making.

To expand this notion of a target system of performance information, we next outline the principal dimensions of performance that we think should be assessed. Then we go on to describe some specific approaches that can be taken towards the assessment of each dimension of performance.

3. Dimensions of Performance

Though we have stressed that performance assessment should first of all be of use to NSI management, we can nonetheless associate each of the four primary dimensions of performance that we advocate with a particular stakeholder group that has an interest in our performance.

1. The users of our information products have an interest in the quality of those products, where "quality" is broadly defined as fitness for use.

2. The funders of our activities, the taxpayers of Canada and those in Government charged with managing public funds, have an interest in our financial performance, including efficiency, good management and proper use of taxpayers' money.

3. The respondents to our surveys, and their representatives, have an interest in the response burden we impose on them, in how we interact with them, and in the care with which we protect the information they have confided to us.

4. Our employees on whom we depend, and the agencies charged with human resource management standards in Government, have an interest in our performance in human resources management.

In Sections 4 to 7 we address approaches and information that may be used for assessing performance in each of these four main dimensions. In Section 8 we briefly mention four important additional cross-cutting aspects of performance that merit attention, namely:

- innovation - the incidence of positive change in the programs of the organization;
- key findings - analytic conclusions that inform important public policy issues;
- professional standards - independent professional review practices; and
- service delivery standards - performance in handling individual client transactions.

It is not possible to produce direct quantitative results or output measures for all aspects of performance. Where we can meaningfully do so we should. But where we can't, it is still useful and valid to report on the processes in place as demonstration of good practice even if the impact or results of those processes cannot be quantified. In what follows, we use the term "performance information" to refer to both descriptive and quantitative information. The latter may also be referred to as "performance measures", and summarized in "performance indicators".

4. Information Quality

There is no universally accepted definition of quality for official statistics. We have chosen a broad concept of information quality based on fitness for use. We identify six aspects of information quality that are pertinent to the use of information: relevance; accuracy; timeliness; accessibility; interpretability; coherence (Statistics Canada, 1998a). Some of these aspects are directly observable by users; others only the NSI can assess. Some of these aspects can be quantified in numerical indicators; others can be assessed only in terms of the processes followed by the NSI.

Relevance refers to the degree to which the information produced responds to the needs of the user community that the NSI aims to service. While one can speak of the relevance of an individual statistic, relevance is more meaningfully assessed in terms of how well the full repertoire of available information satisfies user needs. Relevance is not a concept that lends itself to precise quantitative measurement. Rather, performance in this domain has to be assessed in terms of processes in place and broadly defined user satisfaction.

Three primary processes need to be in place. First, mechanisms are needed whereby the NSI stays abreast of the current and future information needs of its main user communities. Typically an array of consultative and intelligence mechanisms is required to keep tuned to the changing needs of users, and even to anticipate these changes. Mechanisms used by Statistics Canada have been described elsewhere (Fellegi, 1996). These mechanisms should lead to the recognition of gaps in the statistical system - that is to say, information required by users that is not currently available or not good enough for the desired purposes.

The second process is a periodic independent assessment of the extent to which individual statistical programs are meeting the needs of their users. For example a cyclical program of evaluations using external consultants who collect and analyze user views and present their recommendations may serve this purpose.

Thirdly, there has to be a process internal to the NSI that periodically integrates the signals coming in to the Agency from the first two processes and converts them into initiatives that will address the most important gaps or weaknesses. At Statistics Canada, the regular annual planning cycle is the core of this process, supplemented by periodic exercises to obtain support and funding from key federal data users for addressing the major data gaps (Statistics Canada, 1998b).

The performance information for this aspect of quality will normally consist of (a) descriptions of the processes outlined above; (b) evidence of the workings of these processes (records of consultations, assessment reports, decision records, etc.); and (c) demonstration of impact in the form of changes to programs as a result of feedback from these processes.

One further technique for demonstrating relevance is to point out the association of major information releases from the NSI with the topical public policy or societal issues on which they shed light. This emphasizes the significance of the Agency's outputs to questions that are clearly important to the country.

Accuracy refers to the degree to which data correctly estimate or describe the quantities or characteristics that the statistical activity was designed to measure. Users presented with statistical estimates alone cannot usually judge their accuracy from the numbers themselves. An obligation rests with the NSI to provide information about the methodology used to produce the data (see under Interpretability below) and about the accuracy of the data. Accuracy itself has many dimensions and a single measure rarely captures the full picture in a useful way. Typically, measures of accuracy reflect sources of error in survey processes (coverage, sampling, response, nonresponse, etc.) and distinguish variance from bias. Not all accuracy measures will be quantitative; qualitative assessments can be valuable in the absence of useful quantitative estimates of accuracy.

Statistics Canada's approach has been through a Policy on Informing Users of Data Quality and Methodology (Statistics Canada, 1992) which requires each data release to be accompanied by or make reference to descriptions of methodology and indicators of data quality. Indicators of coverage, sampling error, and response rates are regarded as mandatory (where they apply), while an array of additional measures may be provided depending on the size of the program and the importance of the estimates.

In addition to program by program measures of accuracy, it is also useful to monitor some key accuracy indicators across programs within the NSI. For example, tracking trends in response rates across surveys of a similar type can provide valuable management information on a changing respondent climate, or on difficulties in particular surveys. Regular measures of the coverage of major survey frames such as a business register or an address register also provide information that is important both to individual programs using these frames, and to NSI management.
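
As a minimal sketch of this kind of cross-program tracking, the following Python fragment computes unit response rates per survey and per cycle so that trends can be compared side by side; the survey names and counts are hypothetical, not Statistics Canada figures.

```python
# Illustrative sketch only: survey names and counts are hypothetical.
# Computes unit response rates per survey and cycle so that trends across
# surveys of a similar type can be compared.

records = [
    # (survey, cycle, units_in_sample, units_responding)
    ("retail_trade", "1997", 12000, 10680),
    ("retail_trade", "1998", 12000, 10320),
    ("wholesale_trade", "1997", 8000, 7280),
    ("wholesale_trade", "1998", 8000, 6960),
]

rates = {}
for survey, cycle, in_sample, responding in records:
    rates.setdefault(survey, {})[cycle] = responding / in_sample

for survey, by_cycle in rates.items():
    trend = ", ".join(f"{cycle}: {rate:.1%}" for cycle, rate in sorted(by_cycle.items()))
    print(f"{survey:16s} {trend}")
```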

The choice of how much effort to invest in measuring accuracy is a management decision that has to be made in the context of the usual trade-offs in survey design. But a corporate policy that defines minimum requirements, and higher goals to aim towards, helps to promote consistency within the NSI.

Timeliness can refer to two distinct phenomena. For continuing programs it normally refers to the length of time between the end of the reference period and the appearance of the data. For one-time or new surveys it can refer to the interval between the time when the need is made known (or funded) and the appearance of data. This latter sense may be better called "responsiveness". We will concentrate on the first sense. Unlike accuracy, timeliness is clearly visible to users and easy to track. Since there is often a trade-off between timeliness and accuracy, one should track them together.

The choice of a timeliness target is closely related to relevance since information may not be useful if not available in time. Given timeliness targets, two performance measures may be useful. The first is the existence of pre-announced release dates for regular series, and adherence to these dates. The second is improvements in the timeliness achieved - either through changes to the targets, or due to exceeding the targets. However, this measure has to be considered in conjunction with other factors since improvements that are achieved at the expense of accuracy, or at undue cost, may not represent an overall performance improvement.
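
A minimal sketch of how these two measures might be tracked is given below; the dates are hypothetical. For each release it computes the lag from the end of the reference period to the actual release date, and whether the pre-announced date was met.

```python
# Illustrative sketch only: dates are hypothetical. For each release, computes
# the lag from the end of the reference period to the actual release, and
# whether the pre-announced release date was met.

from datetime import date

releases = [
    # (reference_period_end, announced_release, actual_release)
    (date(1998, 12, 31), date(1999, 2, 26), date(1999, 2, 26)),
    (date(1999, 3, 31), date(1999, 5, 28), date(1999, 6, 1)),
]

lags = []
on_time = 0
for ref_end, announced, actual in releases:
    lags.append((actual - ref_end).days)
    on_time += actual <= announced

print(f"average lag: {sum(lags) / len(lags):.0f} days")
print(f"released on or before announced date: {on_time} of {len(releases)}")
```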

Accessibility reflects the availability of information from the holdings of the NSI. It includes the existence of suitable modes of disseminating information to different audiences, the availability of catalogues or searching tools that allow users to know what is available and how to obtain it, and the provision of access that is affordable and convenient to different user groups.

Performance information for this factor is of three broad types: (a) descriptions of dissemination systems that demonstrate the existence and functioning of a variety of channels of dissemination and searching facilities designed to accommodate the differing content and timing needs, the differing levels of usage, the differing levels of technology, and the differing budgets of various groups of users; (b) trends in information usage in terms of, for example, sales, enquiries, Internet hits, customized requests; (c) user feedback through client surveys or through unsolicited comment.

Interpretability refers to the ease with which users can understand and properly use and analyze information. It covers the availability of metadata (or information about the data), particularly descriptions of the underlying concepts and definitions used, of the methodology used in compiling the data, and of the accuracy of the data (as described above).

Performance information in this area includes the existence of a clear policy and guidelines on what meta-information should be made available to users, and measures of the level of compliance with such a policy.

Coherence refers to the degree to which data or information from different programs are compatible and can be analyzed together. It is promoted by the use of common, or at least compatible, conceptual frameworks, definitions, classifications, and collection and processing methodologies across programs. It is increased through regular analytic integration of data within broad frameworks such as the SNA, and is tested by regular comparisons of related series that are tracking the same phenomenon from different angles.

Performance information includes the use of common conceptual frameworks where they exist, and efforts to create them if they do not exist. It also includes the use of standard variables and classification systems, and of common collection and processing methodologies. The systematic use and results of various analytic comparisons also demonstrate attention to coherence - though the results themselves, e.g. residual error in the National Accounts, might also reflect the accuracy of some individual series.
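
As a small illustration of such a confrontation of related series, the following sketch compares two hypothetical series that measure the same phenomenon from different angles and reports the relative discrepancy per period; the figures are invented for illustration only.

```python
# Illustrative sketch only: figures are hypothetical. Confronts two series that
# track the same phenomenon from different angles (e.g. a survey-based and an
# administrative estimate) and reports the relative discrepancy per period.

survey_estimate = {"1997Q4": 102.4, "1998Q1": 104.1, "1998Q2": 106.0}
admin_estimate  = {"1997Q4": 101.1, "1998Q1": 104.9, "1998Q2": 105.2}

for period in sorted(survey_estimate):
    s, a = survey_estimate[period], admin_estimate[period]
    discrepancy = (s - a) / a
    print(f"{period}: survey {s:.1f} vs admin {a:.1f} -> {discrepancy:+.1%}")
```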

The existence of a range of performance information for each aspect of quality serves to emphasize that no single performance measure can summarize even the single performance dimension we have called quality. Information on all aspects of quality is necessary for management to assess weaknesses and decide where investments are needed.

As an example of pulling together these various aspects of quality, we recently undertook, on an experimental basis, comprehensive quality assessments of four of Statistics Canada's major programs. These were conducted by experienced staff from outside the subject programs in the context of a broad audit of Statistics Canada's quality management by the Auditor-General of Canada. They focused on the adequacy of the processes in place within each program for ensuring and/or measuring the six aspects of quality described above. As well as serving the immediate needs of the audit, they also brought together, sometimes for the first time, a variety of documentation and quality measures for each program, identified potential improvements, and recognized practices that could be more widely applied across programs. We are currently considering whether to introduce a regular cyclical program of such assessments for all major programs.

5. Financial Performance

If the NSI hopes to attract continuing support, and even additional funding, it must be able to demonstrate wise and careful management of existing funds. Each NSI operates within its own government financial régime that prescribes practices and reporting requirements on financial matters. Nevertheless, there are several key common components to this demonstration.

Overall macro-financial reporting shows whether the NSI is operating within its allotted budget, and whether, in broad terms, it is becoming more efficient. Such reports may refer both to the NSI as a whole and to major program components within the NSI.

Demonstration of adherence to financial policies and procedures reflects proper regard and accountability for public money. This demonstration may come through periodic auditing activity or through built-in safeguards that prevent or flag deviations.

An effective cost recording system by program, by organizational unit, and by function is essential for management control and efficient design of statistical and administrative processes. For example, since the essence of efficient survey design is a trade-off between cost components and accuracy, reliable information on which to base cost estimates of alternatives is crucial. It is also crucial when resource reallocations have to be considered - whether as a result of externally imposed budget cuts, or triggered by a self-imposed need to free up funds for important new investments.

Tracking of costs and workloads for repetitive operations allows management to recognize efficiency and promote good practice. This may apply to whole programs that are in a steady state, or to specific operations (e.g. data entry, coding, personnel transactions) across programs.
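
A minimal sketch of this kind of tracking follows; the operation names and figures are hypothetical. It computes a cost per unit of workload for each repetitive operation and cycle so that efficiency trends can be compared.

```python
# Illustrative sketch only: operation names and figures are hypothetical.
# Computes cost per unit of workload for repetitive operations so that
# efficiency trends can be compared across cycles.

operations = [
    # (operation, cycle, total_cost, units_of_workload)
    ("data_entry", "1997", 420_000, 1_050_000),
    ("data_entry", "1998", 400_000, 1_100_000),
    ("industry_coding", "1997", 260_000, 130_000),
    ("industry_coding", "1998", 255_000, 140_000),
]

for operation, cycle, cost, units in operations:
    print(f"{operation:16s} {cycle}: {cost / units:.2f} per unit processed")
```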

Tracking the financial performance of cost recovery programs and product sales is crucial to a NSI heavily dependent on revenue generation. The risk that losses from such programs eat into the budget available for regular programs makes it essential to have good information on revenue programs. Gross revenue measures are also a persuasive indicator of the value of the NSI's output to users.

It will also be important for management to track the financial performance of internal cost centres that operate on a cost recovery basis. For example, the financial performance of a computing centre that has to cover its full operating and capital costs will have an impact, not only on costs charged to all other programs, but also on the NSI's cash flow situation as the peaks of capital investment have to be covered.

Finally, the tracking of major development projects is particularly important given their size and tendency to stretch out. Management attention to their utilization of resources in comparison to progress made should provide an early alert to any problems.

Again there is no single measure that should summarize financial performance. The demonstration that certain key processes are in place, together with a set of selected financial indicators that illustrate the performance achieved, constitute an effective management information régime in this area.

6. Respondent Relations

Without respondents we starve, so maintaining positive respondent relations is a survival issue for a NSI. Respondents are a diverse group ranging from individuals and households, through businesses and institutions, to government organizations that provide administrative records. The issues and successful approaches to the issues are different for each group. For direct collection by the NSI (i.e. for censuses and surveys) we might distinguish four primary aspects of performance:

- measures taken at the survey design stage to minimize the need for respondents to provide information;
- procedures applied at the collection stage to make the provision of information as easy as possible for respondents;
- the resulting burden of response that we impose on respondents; and
- procedures to protect the confidentiality of information once provided.

At the survey design stage the measures focus primarily on justifying the need for the survey as a whole, ensuring that no alternative sources of the information already exist, justifying each question proposed, minimizing sample size while assuring adequate quality, and controlling the burden of response on individual respondents across occasions and across surveys[Footnote 2: A longer and expanded list of such measures is included in the Appendix.]. The performance information required is the documentation of the particular measures that are taken both within surveys and across groups of surveys.

Given that a survey of a certain sample size and questionnaire length has been judged sufficiently important to warrant the imposition of the implied reporting burden, measures that might be applied at the collection stage to facilitate response include, for example, development of user-friendly and well-tested questionnaires, offering modes of response that are convenient for the respondent, negotiated reporting arrangements with large businesses, and training of interviewers in survey content and in diplomacy [Footnote 2]. The performance information here too consists of documentation of corporate and program-specific measures used to these ends.

Response burden is a complex concept to measure. Time (person-hours) spent completing a questionnaire is one metric that is common across all surveys, but it does not reflect the value of the time spent (monetarily in the case of businesses, or perceived value of lost time in the case of individuals). It does not reflect the perceived intrusiveness of the questionnaire. Nor does it allow for the possibility that some respondents see value in, and might actually enjoy, completing certain questionnaires. Despite these weaknesses, estimated time to complete is an interpretable measure and the one most commonly used. For businesses, at Statistics Canada, we monitor the total burden hours by adding up over all questionnaires the estimated average time to fill out multiplied by the number sent out. We do not multiply by the response rate since we feel that we are imposing the burden on the full sample even if some choose not to accept it. Measures such as rotating samples or coordinated sampling serve to distribute the burden more evenly between businesses over time but do not affect the total burden as measured in this way. Separate measures may be calculated by size of business to isolate the impact on small businesses. Tracking such a measure over time allows the business community to see the trends in burden, and in particular to see the reductions that result from substituting administrative data or from more efficient sample design.
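
The following sketch follows the computation described above: total burden hours are accumulated as average completion time multiplied by the number of questionnaires sent out, response rates are deliberately not applied, and the total is broken down by size of business. The questionnaire names and figures are hypothetical.

```python
# Illustrative sketch only: questionnaire names and figures are hypothetical.
# Total burden hours = sum over questionnaires of (average completion time in
# hours x number of questionnaires sent out). Response rates are not applied,
# since the burden is imposed on the full sample; a breakdown by business size
# isolates the impact on small businesses.

questionnaires = [
    # (name, size_class, avg_hours_to_complete, number_sent_out)
    ("monthly_sales", "small", 0.25, 20000),
    ("monthly_sales", "large", 0.25, 3000),
    ("annual_financial", "small", 2.0, 8000),
    ("annual_financial", "large", 6.0, 1200),
]

burden_by_size = {}
for _name, size, hours, sent in questionnaires:
    burden_by_size[size] = burden_by_size.get(size, 0.0) + hours * sent

for size, hours in burden_by_size.items():
    print(f"{size:6s} businesses: {hours:,.0f} burden hours")
print(f"total: {sum(burden_by_size.values()):,.0f} burden hours")
```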

For households, the burden of surveys is much less on average and the focus is more on controlling the maximum burden imposed on any one household. This implies carefully monitoring the length and intrusiveness of individual questionnaires at the design stage, ensuring that respondents picked in rotating or longitudinal surveys are not over-burdened, and that households do not suffer as a result of re-use of old samples for new surveys. Response rates, in addition to being an important quality measure, may also provide a symptomatic indicator of response burden.

Custodians of administrative records are usually governmental organizations. Special bilateral arrangements need to be in place with the major providers to ensure that the NSI's needs for timely access to records receive sufficient weight in the operation of the administrative system.

The confidentiality undertaking given to respondents is a pillar of respondent relations and an important determinant of cooperation in surveys. The procedures in place to fulfill that undertaking should be visible to interested respondents. This includes the security measures surrounding questionnaires before and after data are extracted from them, the security and access controls surrounding computer files of respondent data, and the measures applied to outputs to ensure that no individual data are revealed. Details of these procedures should form part of a NSI's performance information.

In summary, performance information on respondent relations should include documentation of measures designed to minimize burden, facilitate response, liaise with major respondents (large businesses, institutions, and holders of administrative records), and protect the confidentiality of individual responses after collection. Explicit measures of response burden, at least for businesses, can also be tracked over time and by size of business.


7. Human Resource Management

A motivated, versatile and well-trained workforce is as essential to a NSI as adequate financial resources. The continuous nurturing, development and replenishment of the NSI's "intellectual capital" is essential to long-term success. Management and development of the human resources of the organization is therefore a high priority for management. Information necessary to monitor performance in human resource development is an essential component of a NSI's performance information database.

As in the case of financial management, government-wide policies and procedures need to be observed and some reports to central agencies may be obligatory. These should be by-products of the NSI's own internal information systems. The type of information that is useful for performance monitoring in this domain can be divided into three main categories:

(a) descriptions of programs and procedures used in human resource management and demonstrations or measures of their use;
(b) quantitative tracking of key human resource statistics;
(c) information on employee opinions.

As with other dimensions of performance, the adequacy of human resource management cannot be assessed by output measures alone. Demonstration that good practices are being applied is a crucial component of performance information in this area. An integrated description of human resource practices not only provides a valuable reference for employees and managers, but also serves to bring cohesion to, and identify gaps in, the set of practices that may have evolved over a long period of time. Statistics Canada's document on this topic (Statistics Canada, 1997) describes its overall principles and strategy for human resource development and provides detailed descriptions under the headings of recruitment and development, training, career-broadening assignments, and the work environment. Such a document constitutes the source of any information on human resource practices that may be required by central agencies.

A current statistical database of the Agency's workforce is also a crucial management tool. As well as providing the current demographic and linguistic profile of employees by group [Footnote 3: Group in the Canadian context refers to occupational groups such as Economists, Clerks, Computer specialists, etc.] and level, it allows the tracking of staffing trends, including recruitment rates, departure rates (by cause), promotion rates, and the incidence of lateral rotations. It provides the basis for micro-modelling the future evolution of the workforce based on various assumptions about recruitment and attrition, thus allowing management to recognize in advance potential areas of shortage or over-supply. Incidentally, this micro-modelling capacity is a service that Statistics Canada has also offered successfully to other large government departments facing similar concerns about the ageing of their workforce. The database may also contain or be linked to records of training taken, of skills acquired, of the employee appraisal process, or other important human resource activities. Such a database provides whatever summary indicators are deemed to be important and meaningful to management, including those required by central agencies for monitoring such issues as bilingualism, application of the merit principle, and employment equity. Importantly, the database also provides a broader context for these indicators.
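
As a much simpler illustration of such projection capability (the micro-modelling described above works at the level of individual records and richer assumptions), the following sketch projects aggregate headcount for a single occupational group under assumed constant recruitment and attrition rates; all figures are hypothetical.

```python
# Illustrative sketch only: a crude aggregate projection, far simpler than the
# micro-modelling described in the text. Recruitment and attrition rates are
# assumed constant and hypothetical. Projects year-end headcount for one
# occupational group to flag potential shortages or over-supply in advance.

def project_headcount(start, recruits_per_year, attrition_rate, years):
    """Project year-end headcount given constant recruitment and attrition."""
    headcount = start
    path = []
    for _ in range(years):
        headcount = headcount * (1 - attrition_rate) + recruits_per_year
        path.append(round(headcount))
    return path

# e.g. a group of 400 employees, 20 recruits per year, 7% annual attrition
print(project_headcount(start=400, recruits_per_year=20, attrition_rate=0.07, years=5))
```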

The third type of information comes from employees themselves through employee opinion surveys, focus groups, or other means of tapping their opinions. These may deal with a wide variety of topics including, but not restricted to, views on human resource practices. For example, Statistics Canada now conducts a regular employee opinion survey every three years covering a range of topics under the broad headings of "The Job Itself", "Immediate Supervisor", "Work Environment", "Official Languages", "Commitment to Clients and Respondents", "Competitions", and "Human Resources Programs". The primary purpose of these surveys is not to produce ratings that can be tracked over time, but to provide a catalyst for discussion, within every division, of the survey results and the measures that need to be taken to address problems identified by the survey. The survey is therefore a census so that reliable results can be produced at the divisional level - and without jeopardizing confidentiality. Each division is required to hold follow-up meetings to review the results and to report on their follow-up actions. Inevitably, comparisons are made between occasions and between divisions, but the focus is on understanding what underlies poor results in some areas, and how improvements were achieved in others, so that lessons and solutions can be shared. Incidentally, this idea of an employee survey designed as a catalyst for employee-management discussions at a divisional level has also been exported by Statistics Canada to other departments.


8. Other Important Aspects of Performance

Here we mention four important aspects of performance that were not emphasized explicitly under the major areas already described.

A successful organization has to adapt both to changing client needs and to changing respondent and technology environments. That means doing new things, or doing old things in new ways. Highlighting selected innovative activities by the NSI as a component of performance information serves to demonstrate the organization's ability to adapt.

As mentioned earlier under relevance, the raison d'être of a NSI is to produce information that responds to important public policy needs. Highlighting important new findings and emphasizing their relevance to major policy issues serves to illustrate directly how the NSI is fulfilling its mandate.

There is a high professional content in the work of a NSI. This raises the question of how the NSI ensures that professional standards are being met. Human resource management practices play an important role in developing a staff of high professional standard, but additional checks and balances are important. Publicizing the NSI's practices in the area of professional review and validation (e.g. advisory committees, external review, internal peer review, technical guidelines) serves to demonstrate its attention to professional standards.

Some parts of a NSI provide direct service to the public. For example, information enquiry officers handle requests for information on a daily basis. These interfaces represent a very visible reflection of an organization's performance in terms of its service to the public. Published service levels to be expected (e.g. in terms of initial response times, or turnaround times) and monitoring of service levels achieved are important elements of performance information. Similarly, service levels may be defined and monitored for key services provided within the NSI.

9. Managing Performance Information

It should be clear from the above that what we regard as "performance information" includes a wide variety of information in different forms. It includes some quantitative databases from which summary statistics or indicators may be drawn, but it also includes a lot of process information that will be in text or document form. Just as we need to manage the statistical information database of the NSI, we also need to manage our performance information database. Three processes appear necessary to make effective use of performance information in the management of the NSI:

(a) organization of performance information, both textual and numeric, in a way that makes it easy to find and accessible to managers when they need it;
(b) processes for regular recording or updating of performance information to ensure that this information remains current; and
(c) a process for periodic assessment of the "messages" coming through performance information with a view to identifying priorities for investing in improved performance.

With respect to the first two processes, we would not argue that a single physical database is necessary or desirable. The work of a NSI involves many different and disparate programs. Some performance information relates to individual programs, while some applies corporately. For information at the program level, some form of regular reporting by programs to management is required. For example, in Statistics Canada, each program manager is required to submit a biennial report on their program including appropriate performance information. For corporate-level information, somebody in the organization has to be charged with responsibility for managing each component of performance assessment. To the extent possible the production of performance information should be built into operational processes so as to minimize the need for special collection efforts, and to reduce reliance on staff remembering to record information they would not otherwise record.

The annual planning system (Statistics Canada, 1998b) is an important part of the third process in Statistics Canada. It provides the opportunity for managers at several levels to assess the status of their programs and put forward proposals for investments designed to improve their performance. Problems in corporate infrastructure and internal services can also be identified and corrective measures proposed.

10. Conclusion

The effectiveness of a NSI depends on its credibility. Confidence in the trustworthiness of the information it produces is essential for it to perform a useful function in society. The credibility and reputation of a NSI may depend on many factors, including some that are beyond its own control. But an openness about its methods and operations is a prerequisite for building that confidence. A balanced and open approach to the measurement of its own performance can only serve to strengthen a NSI's reputation for objectivity and impartiality - even when some performance measures are not as positive as we would like. The converse is also true: any suspicion that the organization is reluctant to expose details of its performance can cast doubt on the quality of its outputs.

We believe that a comprehensive system of performance assessment should serve, first and foremost, the needs of the management of the NSI. But all requirements for external reporting, whether mandatory or voluntary, should flow from this same information base. Performance information must be interpreted broadly to include not just quantitative measures that one can plot on a graph over time, but also descriptive information about practices and processes used, on the basis of which interested observers can determine whether the NSI is performing well. There is a real risk that concentrating on only those aspects of performance that can be easily quantified will provide a partial and potentially distorted picture of the overall performance of the NSI.

References

Fellegi, Ivan P., "Characteristics of an Effective Statistical System", International Statistical Review, Vol. 64, No. 2, August 1996.

Statistics Canada (1992). Policy on Informing Users of Data Quality and Methodology, April 1992. Policy Manual 2.3.

Statistics Canada (1994). Policy on Development, Testing and Evaluation of Questionnaires, January 1994. Policy Manual 2.8.

Statistics Canada (1997). Human Resource Development at Statistics Canada, November 1997. Internal document.

Statistics Canada (1998a). Quality Guidelines, Third Edition, October 1998, Statistics Canada publication no. 12-539-X1E.

Statistics Canada (1998b). Statistics Canada's Corporate Planning and Program Monitoring System, October 1998. Internal document.

Statistics Canada (1998c). Policy on Informing Survey Respondents, September 1998. Policy Manual 1.1.


Appendix: Measures to Minimize Burden and Facilitate Response in Surveys

Measures that can be used at the survey design stage to control respondent burden (essentially to control who gets asked what and how often) include:
- use of sampling to the extent possible;
- rotation of samples in periodic surveys to ensure that the burden of response is shared over time;
- coordinating sample selection across surveys to ensure that the same respondents do not become victims of multiple surveys in the same time period;
- challenging and justifying every question on a questionnaire in terms of its necessity to achieve the information or analytic objectives of the survey program (including verification that the information is not already available from an existing source);
- to the extent possible, integrating surveys so that the same questions are not repeated unnecessarily on multiple surveys;
- requiring that respondents be informed, at the time of initial contact, of the purposes of each survey and of examples of specific uses to which the results will be put (Statistics Canada, 1998c);
- engaging in joint collection agreements with other levels of government, and with other organizations where possible, to avoid duplicative collection by different bodies.

Given that a survey of a certain sample size and questionnaire length has been judged necessary, measures that might be applied at the collection stage to facilitate response include:
- providing options for the mode of response (mail, phone, electronic, etc.) so that respondents can choose the one that suits them best;
- undertaking thorough questionnaire testing with real respondents (for all modes of response) to make them as easy as possible to understand and complete accurately (Statistics Canada, 1994);
- negotiating reporting arrangements with business respondents to ensure that questionnaires are sent to the appropriate persons within the organization and that respondent time and effort is minimized;
- offering feedback of survey results to respondents (e.g., for businesses, in the form of industry averages to which they can compare their own data);
- working with respondent associations, where these exist, to obtain their input and support for necessary surveys;
- ensuring that interviewers are well-trained - in both content and technique;
- at least for large businesses, offering a single point of contact within STC to whom they can turn for assistance or negotiation in fulfilling STC's information demands;
- for small businesses, providing the services of an Ombudsman to act, within the NSI, as a protector of the interests of small business in meeting their obligations as respondents to the various survey programs of the Agency.

