C.  Focusing on quality assurance

19.20.        Quality assurance is a critical part of producing statistics; it ensures that the methods have been correctly applied and that the statistics are robust and fit for their purposes. Given the complexity of compiling statistics within the framework for describing the international supply of services, it is particularly important for organizations to produce, document, implement, monitor and maintain a quality assurance strategy, policy and procedures. These should cover regular publications, new outputs and changes to outputs, as well as statistical outputs derived from surveys, administrative sources and other secondary sources. 

19.21.        Quality assurance should be built in at each step of the statistical process, and should inform the following actions: 

(a) Selecting methods; 

(b) Identifying issues related to the quality outcomes of the chosen methods; 

(c) Carefully checking the outcomes of the application of the methods; 

(d) Ensuring that a sufficient range of stakeholders is engaged in the quality assurance process. 

19.22.        Compilers should adopt quality assurance procedures, including considering whether a statistical product meets the requirements of users and ensuring the coherence of the procedures with those of other statistical products. The quality assurance policy should include such aspects as control, improvement processes, quality measures, documentation and awareness-raising. It is good practice for the quality assurance procedures to specify clear ownership of and accountability for the statistics and related products. 

19.23.        Appropriate validation to minimize the risk of errors should also be incorporated into the quality assurance procedures, including the following actions: 

(a) Building validation into the production processes wherever possible; 

(b) Conducting internal validation checks, for example, by comparing outputs with those previously produced from the same source or, where there is a large degree of manual intervention, by having more than one person carry out the checks (a simple automated check of this kind is sketched after this list); 

(c) Conducting external validation checks, such as “sense-checking” against other relevant sources. 
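To illustrate how an internal validation check of the kind described in point (b) might be automated, the following minimal Python sketch flags series whose current value departs from the previously published value by more than a set tolerance. The series codes, values and the 20 per cent threshold are hypothetical and would need to be adapted to the compiler's own production system.

```python
# Minimal illustrative sketch of an internal validation check: flag series
# whose current value departs from the previous period's value by more than
# a set tolerance. All series codes, values and the 20 per cent threshold
# are hypothetical.

TOLERANCE = 0.20  # maximum accepted relative change between periods


def flag_large_changes(previous, current, tolerance=TOLERANCE):
    """Return the series codes whose relative change exceeds the tolerance."""
    flagged = []
    for code, value in current.items():
        prior = previous.get(code)
        if not prior:  # new series or zero base: always refer for manual review
            flagged.append(code)
            continue
        if abs(value - prior) / abs(prior) > tolerance:
            flagged.append(code)
    return flagged


previous_output = {"SA": 1200.0, "SB": 430.0, "SC": 75.0}
current_output = {"SA": 1225.0, "SB": 610.0, "SD": 40.0}
print(flag_large_changes(previous_output, current_output))  # ['SB', 'SD']
```

Flagged series are not automatically corrected; they are referred to staff for review, in keeping with the principle that validation supports, rather than replaces, expert judgment.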

19.24.        For all regular statistical outputs, a programme of periodic reviews should be planned and undertaken, covering quality, methodologies and processes. More information on national quality assurance can be found in international guidelines, such as the “Guidelines for the Template for a Generic National Quality Assurance Framework”, available on the website of the Statistics Division.[1] At its forty-third session, in 2012, the Statistical Commission fully endorsed the template and encouraged countries to use it. The template is provided in box 19.2. 

Box 19.2

Template for a Generic National Quality Assurance Framework  

1. Quality context   

1a. Circumstances and key issues driving the need for quality management   

1b. Benefits and challenges

1c. Relationship to other statistical agency policies, strategies and frameworks and evolution over time  

 2. Quality concepts and frameworks 

2a. Concepts and terminology

2b. Mapping to existing frameworks

3. Quality assurance guidelines

 3a. Managing the statistical system

[NQAF 1] Coordinating the national statistical system

[NQAF 2] Managing relationships with data users and data providers

[NQAF 3] Managing statistical standards

 3b. Managing the institutional environment

[NQAF 4] Assuring professional independence

[NQAF 5] Assuring impartiality and objectivity

[NQAF 6] Assuring transparency

[NQAF 7] Assuring statistical confidentiality and security

[NQAF 8] Assuring the quality commitment

[NQAF 9] Assuring adequacy of resources

 3c. Managing statistical processes

[NQAF 10] Assuring methodological soundness

[NQAF 11] Assuring cost-effectiveness

[NQAF 12] Assuring soundness of implementation

[NQAF 13] Managing the respondent burden

 3d. Managing statistical outputs

[NQAF 14] Assuring relevance

[NQAF 15] Assuring accuracy and reliability

[NQAF 16] Assuring timeliness and punctuality

[NQAF 17] Assuring accessibility and clarity

[NQAF 18] Assuring coherence and comparability

[NQAF 19] Managing metadata

4. Quality assessment and reporting

4a. Measuring product and process quality: use of quality indicators, quality targets and process variables and descriptions

4b. Communicating about quality: quality reports

4c. Obtaining feedback from users

4d. Conducting assessments; labelling and certification

4e. Assuring continuous quality improvement

 5. Quality and other management frameworks

5a. Performance management

5b. Resource management

5c. Ethical standards

5d. Continuous improvement

5e. Governance

19.25.        Within the NQAF, four components of quality assurance are particularly relevant for statistics on the international supply of services: 

(a) Assuring methodological soundness. This requires the use of statistical methodologies based on the internationally agreed recommendations contained in MSITS 2010 and the good practices described in the present Guide; 

(b) Assuring cost-effectiveness. Cost-effectiveness is assured by implementing standardized solutions that increase effectiveness and efficiency, documenting the cost of data production at each stage of the statistical process and carrying out cost-benefit analyses to determine the appropriate trade-offs in terms of data quality; 

(c) Assuring soundness of implementation. This is achieved by carrying out such activities as selecting staff; conducting training programmes that emphasize the importance of statistics fit for the purpose; building data quality checkpoints into the production process and, as appropriate, sign-offs to be completed before proceeding to subsequent stages in the statistical life cycle; documenting procedures for the design, development, implementation and evaluation of the statistical compilations; and consulting with stakeholders, especially users and potential respondents, at all appropriate stages of the statistical life cycle; 

(d) Managing the respondent burden. The respondent burden is managed by raising awareness that the requirement to collect information (user needs) should be balanced against production costs and the burden placed on respondents (supplier costs). Compilers of statistics on the international supply of services should be proactive in managing the respondent burden, ensure that there are mechanisms in place to assess the necessity of undertaking new statistical surveys and take care to reduce or distribute the response burden. It is important for compilers to inform respondents about (a) the purpose of the survey (including the expected uses and users of the statistics to be produced from it), (b) the authority under which the survey is conducted, (c) the collection registration details, (d) the mandatory or voluntary nature of the survey, (e) confidentiality protection and (f) any record linkage plans and the identity of the parties to any agreements to share the information provided by respondents. Mechanisms to maintain good relationships with individual data providers and to proactively manage the response burden are essential for improving quality. 

19.26.        Concerning outputs produced within the statistical framework for describing the international supply of services, the NQAF lists six groups of activities that should be carried out in the following manner: 

(a) Assuring relevance in the context of the varying needs of users. Relevance can be assured by consulting users about the content of the statistical work programme, prioritizing among different users’ needs in the work programme, establishing an advisory council to assist in setting overall statistical priorities, conducting periodic reviews of the continuing relevance and cost-effectiveness of individual statistical programmes/domains, ensuring a good understanding of the interdependencies among individual statistical programmes/domains and coordinating, harmonizing and providing full coverage of the statistical information produced by the national statistical system; 

(b) Assuring the accuracy and reliability of outputs. This involves assessing and validating source data, intermediate results and final outputs using appropriate statistical techniques; comparing the data obtained with other existing sources of information in order to confirm their validity; clearly identifying preliminary and revised data; and providing explanations of the timing of revisions and of their reasons and nature; 

(c) Assuring timeliness and punctuality. This involves establishing a clear release policy (taking into account the need for different dissemination formats and data frequencies), considering trade-offs between the various quality dimensions and clearly identifying preliminary data so that users are given the information they need to assess their quality; 

(d) Assuring accessibility and clarity. This involves releasing statistical results with readily accessible and up-to-date metadata covering the concepts used and any deviations from them, scope, classifications, the basis of recording, data sources, compilation methods and statistical techniques, to allow for a better understanding of the data;

(e) Assuring coherence and comparability. It is very important to establish well-defined relationships among statistics on resident/non-resident transactions in services, FATS and additional indicators on the international supply of services; the relationships between statistics on the international supply of services and other economic statistics should likewise be well articulated. Compilers should ensure the application of efficient and documented procedures for combining data from various sources, provide clear identification and explanations of breaks in series and provide methods for ensuring necessary data reconciliation; 

(f) Managing metadata. This includes the provision of information covering the underlying concepts, variables and classifications used, the methodology of data collection and processing and indications of the quality of the statistical information, so as to enable users to understand all of the attributes of the statistics on the international supply of services, including their limitations, for informed decision-making. 

19.27.    In the context of point (e), on assuring coherence and comparability, a number of compiling agencies have created a unit in charge of ensuring the quality and coherence of data collection and the coordination of work across economic statistics concerning data on large multinational enterprises. The unit is often referred to as the “large and complex enterprise unit” and may have different tasks assigned to it. In many countries, a relatively small number of multinational enterprises account for a major part of total services production and trade. For that reason, they are generally included in most surveys carried out in the area of economic statistics and are, consequently, covered in the work of the large and complex enterprise unit. Several statistical offices have also found that a more proactive dialogue with important respondents can improve large enterprises’ understanding of statistical data requirements. In countries in which a small number of multinational enterprises account for a large part of national production, those enterprises are also likely to be important services traders. Compilers are therefore encouraged to determine whether there is a need for such a unit and, if one already exists, to approach it to identify how to collect and compile data in a coherent way.[2]

Evaluating the validity of reported data

19.28.    It is important for due attention to be paid to evaluating the validity of reported data. Techniques for such evaluation include comparing reported values for the current period with those for prior periods to ensure consistency; calculating ratios of key items, such as sales per employee or value added to sales, to ensure reasonable results; and establishing ranges and tolerances to identify outliers for review by survey staff. Automated checks can be included in an electronic questionnaire or in the survey processing system to detect unusual or large changes in the data, internal inconsistencies or invalid responses.[3] Questions about the validity of reported data can subsequently be resolved through consultation with respondents and by other means (e.g., use of relevant financial statements, regulatory filings and other surveys). It is also good practice to dedicate staff to evaluating the data reported by specific respondents: establishing a relationship between the compiler and the reporter can lead to improved reporting. 
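As a rough illustration of the ratio checks and tolerance ranges mentioned above, the following Python sketch computes two key ratios for a single respondent record and reports which checks fail. The field names, ratios and bounds are hypothetical examples, not prescribed values.

```python
# Illustrative micro-level validity checks: compute key ratios for a
# respondent record and flag values outside preset tolerance ranges.
# Field names, ratios and bounds are hypothetical examples.

RATIO_BOUNDS = {
    "sales_per_employee": (10_000.0, 5_000_000.0),  # plausible range, currency units
    "value_added_to_sales": (0.0, 1.0),             # value added should not exceed sales
}


def check_respondent(record):
    """Return the names of the ratio checks that the record fails."""
    ratios = {}
    if record.get("employees"):
        ratios["sales_per_employee"] = record["sales"] / record["employees"]
    if record.get("sales"):
        ratios["value_added_to_sales"] = record["value_added"] / record["sales"]
    failures = []
    for name, value in ratios.items():
        low, high = RATIO_BOUNDS[name]
        if not low <= value <= high:
            failures.append(name)
    return failures


respondent = {"sales": 900_000.0, "employees": 3, "value_added": 1_100_000.0}
print(check_respondent(respondent))  # ['value_added_to_sales']
```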

19.29.    Validated responses provided by respondents at the individual company (micro) level must be aggregated to higher (macro) levels for further review and evaluation and, ultimately, public release.  For universe (benchmark) surveys, validated microdata (including estimated values for non-respondents) can simply be summed over all units. For sample surveys or other non-benchmark surveys, aggregation of microdata will depend primarily on how the sample was selected and the associated sample weighting factors used to develop universe estimates. For a probability sample with weights, individual values are multiplied by the weighting factor, and the weighted values are summed.  For cut-off surveys or other surveys not based on probability samples, growth factors can be used to extrapolate the aggregate value forward from the most recent benchmark year, using values from a matched sample for adjacent years.  That method assumes that the growth rates for the firms in the matched sample are representative of growth for firms that were excluded from the sample because they fall below the sample cut-off threshold.
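The two aggregation approaches described above can be summarized in a short sketch: a weighted sum for a probability sample, and a growth-factor extrapolation for a cut-off survey, in which a benchmark aggregate is carried forward using a matched sample of firms that reported in both years. All figures below are hypothetical.

```python
# Illustrative sketch of the two aggregation approaches described above.
# Data values, weights and the matched sample are hypothetical.

def weighted_total(values, weights):
    """Probability sample: multiply each reported value by its sample weight and sum."""
    return sum(v * w for v, w in zip(values, weights))


def extrapolated_total(benchmark_total, matched_previous, matched_current):
    """Cut-off survey: carry the benchmark aggregate forward with a growth
    factor derived from a matched sample of firms reporting in both years.
    Assumes the matched firms' growth is representative of excluded firms."""
    growth = sum(matched_current) / sum(matched_previous)
    return benchmark_total * growth


# Probability sample with weights
print(weighted_total([120.0, 45.0, 300.0], [8.0, 8.0, 2.0]))  # 1920.0

# Cut-off survey extrapolated from the most recent benchmark year
print(extrapolated_total(5_000.0, [400.0, 250.0], [440.0, 260.0]))  # ~5384.6
```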

 


 




[2] More information can be found in the ECE “Guide to measuring global production” (chapter 6).

[3] See chapter 21 for more information on the use of information and communications technology.