
Managing Quality in Statistics New Zealand

Production and presentation of official statistics:
strategies for managing quality

John Cornish & Robert Templeton
Statistics New Zealand
PO Box 2922
Wellington, New Zealand
E-mail john_cornish@stats.govt.nz

1 Introduction

1 The main uses of economic and social statistics produced by official statistical offices carry specific and quite demanding quality requirements. Regular revisions, inconsistencies or other persistent data quality problems can result in loss of confidence in the information produced by an organisation. Other aspects of quality are also important for maintaining user confidence: coherence of presentation with other related data, whether adjustments have been made for, say, seasonality or compositional differences, and the accessibility of information on methods and quality. The quality of official statistics therefore needs to be managed as a key organisational strategy. Its many aspects need to be well understood, and its management needs to be deliberate and pervasive across all aspects of production and delivery.

2 Quality, however, is just one part of the trio managers constantly balance:- budgets, time constraints and quality. To ensure quality receives due priority in this balancing, Statistics New Zealand has introduced several corporate-wide initiatives. They build on the many strategies and practices already followed in Statistics New Zealand to achieve quality in the day-to-day production of statistics (eg sound statistical methodologies, use of standards, peer reviews, training, validation of results etc). The end goal is to raise the profile of management of quality and to establish it as a focus in the planning and ongoing production of outputs.

3 The two initiatives described in the paper are directed at a) assessing the quality of statistical outputs for the various subject fields, and b) managing quality in the ongoing production of statistics. They are supported by various policies, tools etc which are also listed.

2 Assessing the quality of statistical outputs

4 Statistical products and services provided by Statistics New Zealand are produced by ‘output’ areas (which cover subject fields of statistics eg national accounts, business statistics, labour etc). These output areas take statistics from ongoing and ad hoc collections (including administrative records), and present the results in statistical products and services (ie outputs). Sometimes the data are presented on their own as the results of a survey, but more likely they are either presented along with other related statistics for the period concerned (eg quarterly national accounts), or along with the results for previous periods (eg economic indicators).

5 Generally the statistics are presented in accordance with a conceptual framework. This framework might be relatively narrow or as comprehensive as the system of national accounts or balance of payments. In many cases, outputs are simply the results of ongoing collections presented as the latest measures in time series. Various degrees of analysis will be undertaken and reported on.

6 The quality of the various outputs is judged by users on a variety of aspects relating to their use of the statistics. To help an output area judge its collective efforts at meeting user quality requirements across all the different quality aspects, and to identify areas of performance risk, a framework for assessing the quality of outputs for a field of statistics has been developed. The results are also being used in corporate planning to point to where there is a need for additional investment in statistical or IT infrastructure to reduce quality risk or improve performance in areas of corporate concern.

7 The framework has about 100 indicators grouped under the following aspects of quality:-

1. relevance
2. expertise
3. adaptability and responsiveness
4. accuracy
5. coherence
6. interpretability (analysis, presentation)
7. timeliness
8. accessibility of data and metadata
9. client service
10. efficiency
11. respondent management
12. management of risk and performance

The last 3 relate to performance factors that can indirectly affect the other aspects of quality (eg efficiency influences costs and timeliness, poor respondent relations can result in less than satisfactory response rates). The full framework is provided as Attachment A.

8 An important premise underlying the framework is that the environment in which we operate and user needs are constantly changing. Hence, some emphasis is placed on the importance of having a sense of direction, being adaptive and having the right skills, infrastructure and practices to enable an output area to meet expectations into the future as well as now. There is also some emphasis on the extent of corporate fit through achievement of corporate goals and adherence to policies such as the use of statistical standards to facilitate integration of data, setting release dates in advance and meeting them, and responsiveness to the statistical needs of the Maori population.

9 The framework has been prepared to allow for self-assessment by an output manager, and the results are to be used to underpin the regular reporting of the position and strategic direction of the various statistical programs used for planning purposes by Statistics New Zealand. After the initial assessment, subsequent reports are expected to show where progress has been made to alleviate the risks arising from areas of underachievement of quality expectations and performance identified by the framework.

10 Initial use has indicated that, at the very least, the framework provides a useful and comprehensive list of the many aspects relating to quality and performance that managers are expected to achieve for their outputs. A challenge has been to produce a succinct summary of the key aspects shown by the many indicators, so that the main areas for improvement stand out and progress can be monitored. There are usually some common themes running through the results, as shown in the example provided as Attachment B.
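The summarisation step described above can be sketched in code. The following is a minimal illustration, not the actual tool used by Statistics New Zealand: it rolls up per-indicator self-assessments ('OK' / 'not OK') into a count per quality aspect, so that aspects with problem indicators stand out. All indicator names and ratings below are hypothetical.

```python
from collections import defaultdict

def summarise_assessment(assessments):
    """Roll up per-indicator self-assessments ('OK' / 'not OK')
    into counts per quality aspect, keeping only aspects that
    have at least one 'not OK' indicator."""
    summary = defaultdict(lambda: {"ok": 0, "not_ok": 0})
    for aspect, indicator, rating in assessments:
        key = "ok" if rating == "OK" else "not_ok"
        summary[aspect][key] += 1
    # Aspects with any 'not OK' ratings are candidate areas for improvement
    return {a: c for a, c in summary.items() if c["not_ok"] > 0}

# Hypothetical excerpt from an output area's self-assessment
assessments = [
    ("accuracy",   "sampling errors monitored", "OK"),
    ("accuracy",   "data rebased regularly",    "not OK"),
    ("timeliness", "release dates met",         "OK"),
    ("coherence",  "concordances available",    "not OK"),
]
print(summarise_assessment(assessments))
```

A real summary would also carry free-text themes, as in Attachment B, but even a simple tally like this makes the areas needing attention visible at a glance.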

11 The requirement to include the assessment along with the strategic program reports provides an opportunity for senior management to discuss with program managers areas of quality risk and areas where priorities and funding limitations are putting at risk any aspect of quality. Such considerations are feeding into priority setting and resource allocation for the annual planning round. The initial results have also led to questioning of whether self-review has been critical enough or the assessments need to be done by an independent assessor.

3 Managing quality in the production of statistics

12 The focus of the above framework is on the quality of the statistical products and services collectively provided for a subject field, with a very broad view taken of quality as it relates to a user perspective. In particular, there is an emphasis on providing the right information and presenting it in ways which readily support informed use. A key determinant of the quality of a statistical output product is, of course, the quality of the underlying data source(s). The aspects of quality that are most important for source data relate to those of measurement (ie the right measures, accuracy, coherence, and timeliness).

13 In Statistics New Zealand, the different stages of data collection (from design through to respondent management, data collection and output production) are usually undertaken by different work areas, which are often in different offices. To manage the quality of what is produced requires a total system approach, with attention paid to each of the components, how they interact and how they come together as a whole, as depicted in Figure 1.




14 Overall responsibility for what is produced from a collection and its quality is vested in an output manager in a subject area. So that an output area can meet the standards required for its output products and services, it needs to specify the standards required of its source data (eg standard error of movements, response rates, time between reference period and release, use of standard classifications). The source data collections should be designed and developed to meet these standards.

15 The output manager also needs assurance that the required quality is being achieved and that consistent decisions are being made at all stages with regard to achieving the specified quality standards. This is being managed in SNZ by the implementation of a Quality Management Model, the essential elements of which are:-
-- quality standards specified and known to all process managers and staff
-- processes and rules documented and the information readily accessible
-- owners of the outcomes of each stage identified along with their responsibilities
-- ‘outputs’ at each stage defined along with standards and tolerances
-- the production and monitoring of indicators to ensure the standards are met and users can be informed of the quality achieved
-- a system for registering process problems and managing action taken
-- a system for change management.

16 The quality management model is based on a simple but essential premise viz "how can you manage something if you do not know what standard you are aiming at and what you are achieving?" The quality management model requires all collections to have statements of the quality standards required to meet the key uses of the data and the indicators to be produced for various processes of production. It also requires monitoring of the indicators against the standards, with managers expected to take action when there are unacceptable variances. Key indicators will also be included in regular reporting to senior managers, as is done with budgets and other aspects of performance. Attachment C provides an example of a quality standard and the indicators of quality to be monitored.
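The monitoring loop at the heart of the model can be sketched as follows. This is an illustrative outline only, not SNZ's production system: each collection declares standards with tolerances, achieved indicator values are checked against them, and breaches are logged to a problem register for management action. All indicator names, targets and values here are hypothetical.

```python
# Hypothetical quality standards for a collection: each indicator has a
# target, a tolerance, and a direction ('min' = must not fall below the
# target minus tolerance, 'max' = must not exceed target plus tolerance).
standards = {
    "response_rate":   {"target": 0.85, "tolerance": 0.03, "direction": "min"},
    "days_to_release": {"target": 30,   "tolerance": 5,    "direction": "max"},
}

# Register of process problems, with action to be tracked per entry
problem_register = []

def check_indicators(collection, achieved):
    """Compare achieved indicator values against standards and log
    any unacceptable variances to the problem register."""
    for name, value in achieved.items():
        std = standards[name]
        if std["direction"] == "min":
            breach = value < std["target"] - std["tolerance"]
        else:
            breach = value > std["target"] + std["tolerance"]
        if breach:
            problem_register.append({
                "collection": collection, "indicator": name,
                "achieved": value, "standard": std["target"],
            })

check_indicators("example survey, 2000Q2",
                 {"response_rate": 0.78, "days_to_release": 28})
# 0.78 is below 0.85 - 0.03, so response_rate is logged for action;
# 28 days is within 30 + 5, so days_to_release passes.
```

The essential point matches the premise above: until the standard and the achieved value are both explicit, there is nothing to manage.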

17 Design is usually an area of strength for official statistical offices, arising from a history of statistical professionalism founded on the use of sound mathematical statistical techniques in the design of a sample survey, and the use of standards such as frameworks and classifications, robust computer systems and training. The designs, however, are based on assumptions which need to be checked regularly to ensure they remain valid. This includes assumptions about coverage of frames, representativeness of respondents, imputation rules, understanding by respondents of concepts underlying questions, and relevance of classifications, to name a few areas. These assumptions are required to be spelt out in the quality standard statement, as they point to some of the indicators of quality to be monitored.

18 A standard set of indicators has been developed for each of business surveys, household surveys and administrative records. The relevant set, along with the design assumptions, is used to decide on the indicators to be monitored for a particular collection. Examples of indicators to be monitored are:-
-- sample errors
-- response rates (by key strata/groups)
-- proportion of proxy interviews
-- special treatments
-- impact of births, deaths & rotation
-- units out of strata selection boundaries (size, industry etc)
-- value of outliers & imputation made
-- scope & coverage adjustments
-- population change
-- irregularity in time series
-- respondent load
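Several of the indicators above fall out of unit-level processing records almost for free, which is what makes producing them as a by-product of production practical. As a sketch, two of them (response rates by key strata, and the proportion of proxy interviews) can be computed like this; the record layout and field names are hypothetical.

```python
def response_rates_by_stratum(units):
    """Response rate per stratum from unit-level records.
    Each unit is a dict with hypothetical fields 'stratum',
    'responded' (bool) and 'proxy' (bool, household surveys only)."""
    tallies = {}
    for u in units:
        t = tallies.setdefault(u["stratum"], {"selected": 0, "responded": 0})
        t["selected"] += 1
        t["responded"] += u["responded"]
    return {k: v["responded"] / v["selected"] for k, v in tallies.items()}

def proxy_proportion(units):
    """Proportion of responding interviews completed by proxy."""
    responded = [u for u in units if u["responded"]]
    return sum(u["proxy"] for u in responded) / len(responded)

# Hypothetical processing records for one collection cycle
units = [
    {"stratum": "large", "responded": True,  "proxy": False},
    {"stratum": "large", "responded": True,  "proxy": True},
    {"stratum": "small", "responded": True,  "proxy": False},
    {"stratum": "small", "responded": False, "proxy": False},
]
print(response_rates_by_stratum(units))  # large 2/2, small 1/2
print(proxy_proportion(units))           # 1 of 3 respondents by proxy
```
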

19 Production systems are being modified so that such indicators are produced as a by-product of production and available for monitoring the quality of the processes and the resulting statistics. The indicators are to be stored in one central location for access by all interested areas. Key information on quality should also be stored in the Survey Information Manager (SIM - a corporate repository of metadata), so that an historical record is kept and readily available to users of the data.

20 It is expected that the indicators will form the basis of reports prepared for ‘signing-off’ the quality of survey results for publication. Some of the information should be regularly made available to users (as currently done with sample errors), and most of the information should be accessible by users on demand to allow them to assess quality for use.

4 Support

21 The management of quality of ongoing collections needs the support of tools and infrastructure. The key tools and infrastructure which have been developed by Statistics New Zealand for assisting the management of outputs to quality standards include:-
-- expert service areas with a focus on standards and understanding of techniques for reducing error (eg sampling methodologists, questionnaire designers, time series analysts, subject experts)
-- the Statistical Project Management (SPM) methodology, which is used to manage the development of collections
-- Protocols for Official Statistics, with associated standards and guidelines for graphs, tables, time series presentation, revisions, releasing data with error, form design, etc
-- standard frames, frameworks, definitions, questions, classifications, codefiles, coders
-- the Classifications and Related Standards system (CARS), which is used to manage classifications
-- Survey Information Manager (SIM) and other Lotus Notes databases for easy access to documentation
-- peer reviews of the design of various aspects of a collection
-- registers for logging problems and tracking their resolution

22 Frames such as the Business Frame, which service many projects, have their own quality standards to ensure they can support the needs of particular collections as regards coverage of units, up-to-dateness of design information etc. The challenge with such infrastructure, and with expert service areas, is to ensure they do not take on a life of their own, striving for quality beyond what is needed by the collections.

23 Sound designs require a good understanding of the many sources of non-sample error and their relative contribution to overall error in the absence of quantification of the errors. To assist survey designers, as well as operational managers, a framework is being developed which sets out the strategies and work done within Statistics New Zealand to minimise each of the components of non-sample error, and the likely end impact on output statistics.

24 High level management support and leadership is also needed, particularly as quality measurement and improvement are often done alongside the more demanding operational aspects of production according to a time schedule. The main principles are:-
-- output managers actively taking responsibility for the quality for their products, and encouraging continuous improvement based on team work
-- efficiency and budget management should be applied within the context of quality management
-- encourage statistical thinking through analysis of data and information on performance
-- information on quality regularly collected and used in decisions on developments, production etc
-- customer/end user orientation
-- quality is stressed in plans and communications
-- documentation is encouraged
-- good practices sought out and promoted
-- regular reviews of performance of systems
-- interest taken in how well systems are working
-- extension of solutions/innovations to other processes
-- staff development and support.

25 Finally, no matter how carefully managed the above quality initiatives and strategies are, quality problems arise, often during tight production schedules, that need to be investigated and resolved. The key to success here is staff with the right skills, able to take a broad perspective and willing to step back and question assumptions made in designs etc. Often experience learnt from past problems provides the key to quick resolution.

ATTACHMENT A: FRAMEWORK FOR THE ASSESSMENT OF QUALITY AND PERFORMANCE RISK OF A SUBJECT FIELD OF STATISTICS

INTRODUCTION

This framework has been developed as a basis for assessing the risk of a statistical output area not achieving expected quality or performance standards for the statistical products and services produced for a field of statistics (eg national accounts, health statistics, demographic statistics). It has been produced primarily as a guide for assessing the risk of quality problems, but can also point to where there is a need for additional investment in statistical or IT infrastructure to reduce risk or improve performance in areas of corporate concern.

Risk is judged across four broad dimensions:-

1 RELEVANCE AND DIRECTION

1. relevance
2. expertise
3. adaptability and responsiveness

2 QUALITY

1. accuracy
2. coherence
3. interpretability (analysis, presentation)
4. timeliness

3 ACCESSIBILITY AND SERVICE

1. accessibility of data and metadata
2. client service

4 MANAGEMENT

1. efficiency
2. respondent management
3. management of risk and performance

No one of these aspects can be considered in isolation from the others. In particular, a balance is required across aspects of quality, timeliness and cost.

The framework has been prepared to allow for self-assessment by an output area, with most assessments being simply 'OK' or 'not OK'. However, some of the assessments may be best made with input from an independent assessor. Regular assessments (say annually) should show improvements in those areas underachieving, particularly areas needing attention to ameliorate risk of inadequate performance.

While the framework relates to an output area (which generally brings together data from more than one source), it can be applied with some tailoring to the output of a specific ongoing collection which produces regular outputs presented as time series.

1 RELEVANCE AND DIRECTION

1.1 Relevance

For an output area, it is important that it is producing the right statistical information that informs current government and public decision making, research and discussion on issues in the subject field. It is also important that there is a sense of direction whereby the statistics are being enhanced or further developed so that they remain relevant or improve their usefulness.

It is usually the case that not all statistical needs in a field are able to be met from within allocated budgets and so there must be up-to-date understanding of priorities and outputs produced reflecting those priorities. There must also be a balance between the frequency and detail of high priority needs and the less important needs so that a range of information needs are being met.

Key indicators are:-

-- good understanding of who are the key users and emerging new stakeholders
-- information needs and associated quality standards are regularly assessed and defined through user consultation
-- good understanding exists of public policy directions and issues, and the context within which such policy is developed
-- information supports the Maori Statistics Framework and the government's Maori outcomes assessment policy
-- highest priority needs are being met
-- information is produced at the right frequency to allow timely monitoring of changes
-- information provides an estimate of what the key users want to measure, rather than a proxy that leaves users to make adjustments
-- questionnaires, definitions and classifications reflect contemporary needs and situations
-- a balance exists in the frequency and detail of outputs across priority needs so that less important needs are met less frequently
-- key user commitment and funding of lower priority needs when necessary

Other factors to consider are

-- what are the key unmet needs in priority order?
-- what would be dropped with 20% less budget, and what extra would be done with 20% more?

1.2 Expertise

The production of relevant and reliable statistical output requires expertise in the relevant subject field and of the underlying statistical frameworks, concepts etc. It also requires understanding of the data (both source and outputs) and of the basic statistical principles underlying production of the data. In some cases the expertise can be provided by specialised staff or contracted in.

Key indicators are:-

-- staff with good knowledge and understanding of subject field, source data, relevant concepts and classifications, current and emerging issues, statistical elements of operations
-- expertise is maintained and not vulnerable to key staff changes
-- good, up-to-date and accessible documentation
-- staff in touch with counterparts in other statistical agencies, key users and relevant experts
-- good understanding of key uses of data, including by Maori
-- good understanding of the concepts and measurement issues related to Maori statistics and small populations in general
-- contributions made to relevant professional associations, international developments

1.3 Adaptability and responsiveness

This refers to the ability of data input systems to be adapted to collect different information and the time taken to respond to new demands. This could mean exploiting the collection infrastructure in a planned way so that there is a program of core surveys and of 'supplementary' surveys conducted at marginal cost to meet more detailed or lower priority needs, as well as an ability to collect unplanned new information with short lead times.

Key indicators are:-

-- source collection infrastructure support a mix of core and 'supplementary' data collection
-- flexible systems able to quickly respond to requests for new information needs
-- low cost adaptability of input collections and output systems to changing needs

2 QUALITY

Quality has many dimensions, all of which should be determined by the uses made of the data by the key users. From a user's perspective, quality of statistics is their 'fitness for use'. Relevance and accessibility are important characteristics, and these are covered as separate aspects of performance because they have wider implications. The key dimensions of quality to be assessed here are accuracy, coherence, interpretability and timeliness.

2.1 Accuracy

Accuracy of information produced by an output area relates to the degree to which the information correctly estimates what the statistical processes producing the information were designed to measure. The accuracy of an estimate covers both variability arising from measurement (sample and non-sample error) and any biases in the measures. Whether the statistical processes are measuring the right thing is covered under Relevance.
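Since the definition above covers both variability and bias, the natural single summary is the mean square error mentioned in the indicators that follow: MSE = sampling variance + bias². A small illustrative calculation, with entirely hypothetical figures:

```python
import math

def mean_square_error(sampling_variance, bias):
    """MSE = sampling variance + bias^2. Its square root (the root
    MSE) is on the same scale as the estimate itself, so it can be
    compared directly against a quality standard for the estimate."""
    return sampling_variance + bias ** 2

# Hypothetical estimate with a standard error of 1.5 and a known
# bias of 0.8 (eg from frame undercoverage)
mse = mean_square_error(1.5 ** 2, 0.8)
rmse = math.sqrt(mse)
print(round(mse, 2), round(rmse, 2))  # 2.89 1.7
```

The point of the decomposition is that a precise but biased estimate can still fail the accuracy standard, which is why the indicators below monitor bias sources (assumptions, revisions) as well as sampling error.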

Key indicators are:-

-- quality measures (eg sampling errors) and indicators (eg non-response rates) regularly produced and monitored for source data and derived measures
-- mean square error meets standard for key user needs
-- definitions of data consistent with both user and provider understanding
-- assumptions on which key measures are based remain valid
-- a revisions policy which balances the need to inform users of improved estimates with possible confusion from insignificant changes
-- insignificant and consistent (size and direction) differences between preliminary and final estimates
-- data rebased regularly
-- data source samples redesigned or reselected regularly to maintain sample errors within quality standards

2.2 Coherence

This refers to the degree to which information is brought together, confronted and presented in a logical and comprehensive way, either from a single source or more likely brought together from different surveys or sources. Time series are a particularly common method of presentation where coherency is important. Other common examples are industries, areas, and population groups such as Maori. Common definitions, classifications and methodologies are important to achieving coherency.

Key indicators are:-

-- all source data measured in accordance with standards (frame, statistical units, definitions, classifications, processes)
-- data presented in a framework, along with other relevant data from other sources
-- long term time series are available for repeated measures, with explanations or adjustments for breaks in the series
-- input data from different sources are confronted and reconciled
-- output data is consistent or reconcilable with other sources
-- concordances available to allow data to be related to previous classification versions or related classifications
-- consistency between aggregates and components
-- key classifications are maintained so that comparability is maintained over time to allow comparisons required by users while providing measures of contemporary and emerging events
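The concordance indicator above has a simple mechanical core: mapping data coded to a previous classification version onto the current one so the two can be compared. A minimal sketch, with hypothetical codes and a hypothetical many-to-one concordance:

```python
# Hypothetical many-to-one concordance: old classification code -> new code
concordance = {
    "A011": "A01",
    "A012": "A01",
    "B020": "B02",
}

def reclassify(counts_old):
    """Aggregate counts coded to an old classification version
    onto the current version via the concordance table."""
    counts_new = {}
    for old_code, count in counts_old.items():
        new_code = concordance[old_code]
        counts_new[new_code] = counts_new.get(new_code, 0) + count
    return counts_new

print(reclassify({"A011": 40, "A012": 25, "B020": 10}))
# {'A01': 65, 'B02': 10}
```

Real concordances are often many-to-many and need splitting factors rather than a simple lookup, which is one reason systems such as CARS manage them centrally.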

2.3 Interpretability

This refers to the ease with which a user may understand and properly interpret information produced by an output area.

Key indicators are:-

-- analysis undertaken to find and present key findings of, and relationships within, the source data
-- information of significance to Maori and Maori public policy are clearly presented
-- seasonal and trend analysis and other standard adjustment techniques (eg population standardisation) are undertaken to enhance the usefulness of the data and reduce problems of interpretation and comparisons
-- preliminary or early estimates are clearly indicated as such and information provided on limitations and expected level of revisions
-- major changes to the data from revisions, rebasing, etc are published separately from, and where appropriate ahead of, the release of new information so that the impact on the data is able to be separated out
-- all revisions are clearly marked and explained
-- presentation standards used for tables and graphs
-- information on methods, concepts, data sources, etc is up-to-date and readily available to users
-- information on quality achieved is readily available

2.4 Timeliness

This is primarily determined by the length of time between the reference period of the data and its availability, but must be considered along with other aspects of quality as there are usually trade-offs.

Key indicators are:-

-- achieved timeliness meets key user needs
-- the relative importance between timeliness and accuracy is understood, and this balance is met
-- preliminary results of acceptable quality produced to meet time constraints of key users
-- release dates are published in advance and met

3 ACCESSIBILITY AND SERVICE

3.1 Accessibility of information

Accessibility refers to the degree to which the information required by users is known and is easily obtainable in the format required for use.

Key indicators are:-

-- main findings are made widely available (eg through press releases, website, public libraries)
-- information on data availability (published and unpublished) is readily known
-- catalogues/directories are available for field of statistics
-- information is available in formats and media required by users
-- first release of key data are made available in all media at the same time
-- release dates are announced in advance
-- access by Maori is facilitated by dissemination of relevant results and in appropriate formats
-- information is affordable
-- new information and communication technology is being used to improve the presentation and accessibility of data

3.2 Client service

As well as making information readily accessible, support services need to be provided so that users get the information they need, when they want it and how they want it.

Key indicators are:-

-- expert assistance readily available
-- standards set for time taken to meet requests
-- indicators of service produced and monitored
-- standards met for time taken to meet requests

In addition, outline
-- signals of user satisfaction and dissatisfaction with products and services
-- support for user funded surveys to meet lower priority or special needs
-- trends in usage (sales of standard products and consultancies, enquiries, website usage, etc)

4 MANAGEMENT

4.1 Efficiency

Efficiency relates to the cost of production and focuses on the cost of inputs used for an output. For an output area, the cost incurred directly by the organisation for the supply of data, as well as indirectly by respondents, would usually be a critical input to the overall cost of production.

There would be an expectation of productivity improvements over time so that either costs are declining or approved outputs increasing (not output creep which relates to unapproved outputs being achieved instead of declared cost reductions).

Best practice methods and contestable supply of inputs are a key to achieving efficient production.

Key indicators are:-

-- measures of efficiency monitored for all production processes
-- cost parameters available and used in design of source data collections
-- quality and costs for inputs set on the basis of their contribution to overall error and quality of outputs
-- low cost systems maintenance required
-- use of latest IT or statistical methodology to achieve efficiency, timeliness or quality standards

In addition, outline

-- extent of use of contestable inputs
-- degree of automation of processes
-- improvements made over time in efficiency, quality and/or timeliness.

4.2 Respondent management

The long term viability of production of any output depends on maintaining good relationships with the suppliers of the data used to produce the outputs (both respondents and custodians of administrative records). This means ensuring that demands made are reasonable, that steps are taken to spread demands, and that assurances given to respondents are honoured.

Key indicators are:-

-- forms in source data collections tested with respondents
-- special collection needs of Maori addressed
-- selection methods used for source data collections which manage overlap and rotation of respondents
-- administrative data used wherever possible
-- good, up-to-date understanding exists of respondent information sources
-- respondents able to provide requested information via preferred method
-- help/support system available
-- measures of load produced regularly
-- sound security of information
-- confidentiality checking of releases

4.3 Management of risk and performance

Statistics New Zealand has many policies and standards which are expected to be followed by each output area and collection to achieve corporate goals. There are also expectations of practices to be followed to achieve cost-effective outputs produced to quality standards.

Key indicators are:-

-- communication channels established and used regularly with key and other users
-- regular participation and exposure at professional conferences etc
-- survey infrastructure exploited to meet irregular and less frequent demands at marginal cost off core surveys
-- implementation of SNZ Maori Responsiveness Plan
-- quality standards set and regularly reviewed for key outputs
-- indicators of quality (including timeliness) regularly measured and monitored
-- problems and suggestions for process improvement are logged and action tracked
-- systematic feedback to collection design from errors and problems identified in analysis, editing, field etc
-- documentation on standards, processes, etc up-to-date and accessible
-- outputs systematically analysed and validated before release
-- systems integrated with corporate statistical infrastructure systems (CARS, BF, etc)
-- information on methods, etc in SIM and available to users
-- information on achieved quality reported to users
-- continuous improvement philosophy to maintain relevance
-- regular independent review
-- data custodian
-- data archived
-- adherence to security and confidentiality policies
-- release process so that all releases consistently available at release time
-- metadata on SIM updated at time of release
-- requirements of Statistics Act met

Finally,

-- what are the priorities for improvement in performance?
-- what are the major barriers to further improvement in performance (eg a new IT system needed to improve responsiveness to users or timeliness)?

ATTACHMENT B: EXAMPLE SUMMARY OF QUALITY RISK ASSESSMENT
FOR BUSINESS STATISTICS

RELEVANCE & DIRECTION

-- In a rapidly changing economy, need to improve understanding of user needs beyond the macro level, and have flexible systems to be responsive
-- Need to introduce a Maori perspective.

QUALITY

-- Survey designs need updating more regularly
-- Frame updating & sample maintenance problems

SERVICE

-- Improve understanding of access needs of business users

MANAGEMENT OF RISK

-- Systems could be more flexible, and output systems more automated
-- Management of quality of input systems needs to be institutionalised

ATTACHMENT C: QUALITY STANDARD FOR QUARTERLY EMPLOYMENT SURVEY

Content

Objectives
The prime objective of the Quarterly Employment Survey (QES) is to regularly collect from employers information on the average ordinary time and overtime payments to employees, the average hours worked by employees, and the number of filled jobs for the purpose of monitoring wage inflation and labour costs.

A second objective is to monitor such information by broad industry and gender.

Key uses

-- Private sector hourly earnings data are used by the Reserve Bank in its economic models as an indicator of wage inflation. For such use, the data must be able to show the size and direction of movements and indicate turning points early and accurately.
-- Ordinary time weekly earnings are used as the benchmark in setting the floor for National Superannuation levels, and for setting stand-down periods for unemployment benefits. Very high levels of budget expenditure depend on this use, so accurate estimates of level, free of revision, are required.
-- Total paid hours are used to monitor economic activity and for the calculation of labour productivity to monitor competitiveness and performance.
-- Gross payout figures are used in the measure of compensation of employees in GDP measurement. Accurate estimates of movement are required as this measure is a significant component of quarterly national income which is required to be estimated to a high degree of accuracy for monitoring the economy.

Accuracy

To support the key uses, the following accuracy is required quarterly at the national level, at the 95% confidence level:

-- Average weekly ordinary time earnings, qtr-qtr change: +/- 0.5%
-- Average ordinary time weekly earnings: +/- 1.8%
-- Total filled jobs: +/- 0.5%
-- Average weekly ordinary time earnings: +/- 0.6%
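A tolerance stated "at the 95% confidence level" can be checked against a survey's achieved sampling error. The sketch below (with hypothetical figures, not actual QES results) compares the relative 95% margin of error of an estimate to a required tolerance:

```python
# Sketch: check achieved sampling error against a 95% confidence
# tolerance from the quality standard. All figures are illustrative.

Z_95 = 1.96  # normal critical value for a 95% confidence interval

def meets_standard(estimate: float, std_error: float, tolerance: float) -> bool:
    """True if the 95% margin of error, expressed relative to the
    estimate, is within the required tolerance (e.g. 0.006 for +/- 0.6%)."""
    relative_margin = Z_95 * std_error / estimate
    return relative_margin <= tolerance

# hypothetical achieved values: estimate 750.0, standard error 2.0,
# checked against the +/- 0.6% level tolerance
print(meets_standard(750.0, 2.0, 0.006))
```

If the achieved margin exceeds the tolerance, the design (sample size, stratification) rather than the estimate would normally be revisited.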

As well as high levels of accuracy, revisions should be kept to a minimum as they can result in loss of faith in the data as an indicator of economic activity and can have major implications for government expenditure.

Because the prime need is to monitor changes over time, consistency of the data produced from measures and operations is of prime importance.

Timeliness

To provide timely information on trends, the survey is conducted four times a year with the reference period being the payweek immediately preceding the 20th of February, May, August, and November.

Release days are published in the corporate plan and in the publishing calendar. They are to be within 13 weeks of the survey reference date for the February provisional results, and within 14 weeks for the May, August and November sample survey results.
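The timeliness rule above can be expressed as a simple date calculation. This sketch assumes the 20th of the survey month as a proxy for the reference date (the actual reference is the payweek immediately preceding the 20th):

```python
# Illustrative deadline calculation from the timeliness standard:
# results are due within 13 weeks (February) or 14 weeks (May, August,
# November) of the survey reference date. The 20th of the month is used
# here as a stand-in for the preceding payweek.
from datetime import date, timedelta

def release_deadline(year: int, month: int) -> date:
    reference = date(year, month, 20)
    weeks = 13 if month == 2 else 14
    return reference + timedelta(weeks=weeks)

print(release_deadline(2007, 2))  # latest release day for February results
```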

Presentation

Internal
Each quarter…

-- One frozen/final Sprocet view, used to create a final dataset, final TSM loading files (with a final esttab listing of absolute values for TSM loaded series), final sample errors of change (quarter and year), and errors on level for all TSM loaded series.
-- One final hand-over report specifying information about births and deaths and their impact on results, outliers and how they were dealt with, imputation, special treatments, and any other available information about data, respondents, and sample maintenance that has occurred in the quarter.
-- One expectations report (to EBFS) detailing expectations for the following quarter's results:
expected movement and why
acceptable variations to expectations
major contributors
major movers against the trend and why
by industry

External
Each quarter…

Hot Off the Press Release
Media Release
TSM data

Tolerances and conditions

Labour Market Division and line management to be notified when...
-- Timetables are unable to be met.
-- The full range of edits cannot be worked through.
-- Changes are made to questionnaires, post-out, methodology, processing system, editing system, imputation system, key firms list, scope and coverage adjustments etc (ie anything that could potentially change the outputs)
-- Key firm respondents have not responded (also requires notification as to what is being done to correct this).

Key assumptions in survey design

Frame Assumptions

-- Only a very small percentage of the target population cannot be referenced on the frame. When this assumption does not hold, estimation adjustments should be considered.
-- Quality control processes are in place to ensure that there is no duplication of units on the frame.
-- The frame should contain only a very small percentage of ineligible units (when such units are selected into a survey, it is assumed that their ineligibility would be correctly identified if there is a response).
-- Regular processes are in place to identify all new businesses and these operate at a more or less continuous frequency. It is assumed these processes operate evenly and consistently across industry, size, sector etc. It is also assumed that the time lag before such units are available for selection is within known and manageable limits.
-- Regular processes are in place to identify all units on the frame (including re-validating business structure, contact details, industry, size etc) and that these operate at least annually. It is assumed these updates are fed to the frame in an even and consistent manner across industry, size, sector etc.
-- Quality control processes are in place to ensure that the accuracy of address, phone, industry codes are within known and manageable limits.

Response
-- Non-respondents of a certain type (i.e. within a given estimation cell) behave similarly to respondents of the same type.
-- Regular processes are in place to ensure that units which cannot be accurately imputed (i.e. too volatile) or are very significant are prioritised for non-response follow-up.
-- The combined contribution of processing and response errors is very small.
-- Outlying units can be determined to be unique or not.
-- There is only a small percentage of special treatments (but this number can increase over the life of the design).
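The first response assumption is the basis of cell-based imputation: if non-respondents behave like respondents in the same estimation cell, a missing value can be imputed from the cell's respondent mean. The sketch below illustrates the idea with invented cell labels and figures; it is not the actual QES imputation system:

```python
# Sketch of cell-mean imputation under the assumption that, within an
# estimation cell, non-respondents behave like respondents.
# Cells and values are illustrative only.
from collections import defaultdict

def impute_cell_means(records):
    """records: list of (cell, value) pairs, value is None for
    non-respondents. Returns a new list with missing values replaced
    by the mean of the responding units in the same cell."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for cell, value in records:
        if value is not None:
            sums[cell] += value
            counts[cell] += 1
    means = {cell: sums[cell] / counts[cell] for cell in counts}
    return [(cell, value if value is not None else means[cell])
            for cell, value in records]

data = [("manufacturing", 40.0), ("manufacturing", 44.0),
        ("manufacturing", None), ("retail", 30.0)]
print(impute_cell_means(data))
```

This also makes clear why volatile or very significant units are prioritised for follow-up instead: the cell mean is a poor predictor for them.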

Standards to be applied to achieve the accuracy standards

Response rates

No less than 85% of responses by weighted FTE for the February Census
No less than 89% of responses by weighted FTE for the May, August and November samples, and no less than 85% of responses by geographic unit.
100% of responses for key firms
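Because the standard weights responses by full-time equivalents (FTE), large employers count for more than small ones. A minimal sketch of an FTE-weighted response-rate check, with hypothetical units and weights:

```python
# Illustrative FTE-weighted response-rate calculation. Units, FTE
# weights and the threshold used below are hypothetical examples.

def weighted_response_rate(units):
    """units: list of (fte_weight, responded) pairs.
    Returns the FTE-weighted response rate as a proportion."""
    total = sum(fte for fte, _ in units)
    responded = sum(fte for fte, ok in units if ok)
    return responded / total

sample = [(120.0, True), (80.0, True), (15.0, False), (5.0, True)]
rate = weighted_response_rate(sample)
print(round(rate, 3), rate >= 0.85)  # e.g. February census standard: >= 85%
```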

Editing

The data is to be edited as specified in the document titled ‘Quarterly Employment Survey Training Guides’.

At output stage macro editing procedures are detailed in the document titled ‘Operating Instructions Quarterly Employment Survey’. Generally, movements in excess of 2% at the 2 digit NZSIC level for each GEO are looked at. Significant changes are automatically sent to Survey Methods for consideration of special treatment. Data tables are also independently peer reviewed.
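The macro-editing rule described above can be sketched as a simple movement filter over aggregated series. Series labels and values here are illustrative, not the actual operating system:

```python
# Sketch of macro editing: flag any aggregate series whose
# quarter-on-quarter movement exceeds the threshold (2% per the
# operating instructions). Series keys and values are illustrative.

def flag_large_movements(previous, current, threshold=0.02):
    """previous, current: dicts mapping a series key (e.g. 2-digit
    industry by GEO) to its value. Returns keys whose absolute
    relative movement exceeds the threshold."""
    flagged = []
    for key, prev in previous.items():
        movement = abs(current[key] - prev) / prev
        if movement > threshold:
            flagged.append(key)
    return flagged

prev_qtr = {"NZSIC 21 / North Island": 1000.0, "NZSIC 32 / South Island": 500.0}
this_qtr = {"NZSIC 21 / North Island": 1035.0, "NZSIC 32 / South Island": 504.0}
print(flag_large_movements(prev_qtr, this_qtr))
```

Flagged series would then go to analysts (or, for significant changes, to Survey Methods) rather than being adjusted automatically.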

At output analysis stage, investigations are detailed in the document titled ‘QES how to’ (how to produce the QES release and QES checklist). The purpose of investigations at this stage is to:

-- check the QES data are consistent with other available information and, if not, establish why
-- check the data are consistent with market expectations and, if not, establish why
-- find out what is driving changes
-- ensure there are no gross errors

All key firms with movements in total earnings, hours or FTEs of over 20% must have a comment in Sprocet.
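The key-firm commenting rule can be verified mechanically: any key firm whose total earnings, hours or FTEs moved by more than 20% should carry a comment. The firm records below are invented for illustration; the actual comments live in Sprocet:

```python
# Sketch of a completeness check for the key-firm rule: find key firms
# with a movement over 20% in earnings, hours or FTEs that have no
# comment attached. Firm names and figures are illustrative.

def firms_missing_comment(firms, threshold=0.20):
    """firms: list of dicts with prev_/curr_ values for earnings, hours
    and ftes, plus a 'comment' string. Returns names of firms that
    exceed the movement threshold on any measure without a comment."""
    missing = []
    for firm in firms:
        for field in ("earnings", "hours", "ftes"):
            prev, curr = firm[f"prev_{field}"], firm[f"curr_{field}"]
            if prev and abs(curr - prev) / prev > threshold and not firm["comment"]:
                missing.append(firm["name"])
                break
    return missing

firms = [
    {"name": "Firm A", "prev_earnings": 100.0, "curr_earnings": 130.0,
     "prev_hours": 40.0, "curr_hours": 41.0,
     "prev_ftes": 10.0, "curr_ftes": 10.0, "comment": ""},
    {"name": "Firm B", "prev_earnings": 100.0, "curr_earnings": 105.0,
     "prev_hours": 40.0, "curr_hours": 40.0,
     "prev_ftes": 10.0, "curr_ftes": 10.0, "comment": ""},
]
print(firms_missing_comment(firms))
```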


Copyright © United Nations, 2007