A. Quality assurance
1. An Overview of the main elements of national quality assurance frameworks
9.3. Systematic approach to quality assurance. IMTS 2010 promotes a systematic approach to data quality. This means that all aspects of the entire trade statistics programme are to be examined and evaluated against certain principles and standards in order to identify and implement more effectively appropriate action to further improve data quality (see IMTS 2010, para. 9.4). The treatment of quality assurance in the present section takes a narrower focus on data quality, as the section examines only some of the specific issues considered and actions commonly taken by customs and the compiling agency to assure the accuracy of the statistical information within the existing setting of data compilation.
9.4. Methodological soundness. Quality assurance requires the adoption, application and enforcement of a conceptual framework for foreign trade statistics, preferably in line with the international recommendations. Decisions in respect of the treatment of transactions in specific categories of goods (scope of trade statistics) and of transactions destined for or originating in certain territorial elements, their classification and valuation, quantity measurement and attribution of partner country are part of the daily work of customs officers and trade statisticians and require the existence of a clear methodological framework. Any automated quality assurance and data validation must be based on and derived from the conceptual framework adopted by a country.
9.5. Data processing and validation: types of checks and tools. Statistical data processing requires the capture of individual trade transactions, the creation of trade records, and their validation and integration into data sets encompassing all the records of a specific period. Validation checks are commonly used for: completeness, validity of codes, range check of values, internal consistency and aggregate consistency. Often, the estimation and insertion of missing values and codes are integrated into the completeness check. Tools for validation include validation at data entry (“in dialog”), batch validation with the creation of error lists, generation of error statistics, flagging of significant transactions, classification of errors into certain versus possible errors and automated versus manual error correction. The inclusion of additional sources of information usually requires manual corrections as such information is external to the system. In some offices, manual corrections will always require that additional and/or external sources be used (e.g., by contacting the declarant). However, it might not always be possible to obtain the additional information in the time available, and manual corrections might be made without the use of such additional sources.
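The categories of checks listed above can be illustrated with a minimal sketch. The record fields, code lists and value ranges used here are hypothetical assumptions for illustration only, not those of any particular customs or statistical system.

```python
# Minimal sketch of the validation checks described above: completeness,
# validity of codes, range check of values and internal consistency.
# All field names, code lists and limits are illustrative assumptions.

VALID_FLOWS = {"1", "2"}          # 1 = imports, 2 = exports
VALID_HS6 = {"090111", "271019"}  # stand-in for the full HS code list
VALUE_RANGE = (1, 10_000_000)     # hypothetical plausible statistical value

def validate_record(rec):
    """Return a list of error messages for one trade record."""
    errors = []
    # Completeness: every mandatory field must be present.
    for field in ("flow", "hs6", "value", "net_weight"):
        if rec.get(field) in (None, ""):
            errors.append(f"missing field: {field}")
    # Validity of codes.
    if rec.get("flow") is not None and rec.get("flow") not in VALID_FLOWS:
        errors.append("invalid flow code")
    if rec.get("hs6") is not None and rec.get("hs6") not in VALID_HS6:
        errors.append("invalid commodity code")
    # Range check of values.
    value = rec.get("value")
    if isinstance(value, (int, float)) and not (VALUE_RANGE[0] <= value <= VALUE_RANGE[1]):
        errors.append("value out of range")
    # Internal consistency: the implied value per kilogram must be plausible.
    weight = rec.get("net_weight")
    if isinstance(value, (int, float)) and isinstance(weight, (int, float)) and weight > 0:
        if not (0.01 <= value / weight <= 100_000):
            errors.append("implausible value per kg")
    return errors

record = {"flow": "1", "hs6": "090111", "value": 50_000, "net_weight": 2_000}
print(validate_record(record))  # an error-free record yields an empty list
```

In a batch setting, the same function would be run over all records of a period, with the resulting error lists feeding the error statistics and correction workflows mentioned above.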
9.6. The information problem at data entry. The starting point of the statistical quality assurance process is the point at which the information is provided. This usually occurs when the customs declaration is completed, as customs records are the main, and normally the preferred, data source for merchandise trade statistics. Customs declarations are themselves administrative records containing selected information about commercial (or non-commercial) transactions and the logistics of moving the goods from the seller to the buyer (or from the sender to the receiver). Usually, the information on the customs declaration is entered separately and is not, for example, derived electronically from existing information; hence, those completing the customs declarations (commonly the shipping agent or trader) might not have complete information about the transaction, logistics and subsequent transactions.
9.7. Data entry. The most important stage of quality assurance for trade statistics occurs when the required information is entered into the customs declaration, as the agent or the person completing the customs declaration should have available all the required information to the best possible extent. Electronic data entry systems allow the implementation of comprehensive validation rules which can prevent certain types of typing errors, entry of invalid or implausible codes, and entry of values outside a certain range, as well as invalid or implausible combinations of entries. The development and implementation of such rules require significant knowledge and investment in the IT system. Also, the validation systems need to be carefully designed so as not to obstruct the entry of accurate information or invite “gaming with” or circumventing the validation system, thereby leading to a deterioration of data quality.
2. Quality assurance at customs
9.8. Priorities. Security and safety and the collection of revenue are the core functions of customs and can be viewed as the prime objectives of data quality assurance at customs. Therefore, the customs information on imports is in many countries considered as being of higher quality than the data for exports, as customs duties usually apply only to imports and not to exports. However, this traditional view is not an adequate description of the situation in many countries. Many customs offices have statistical units that aim to ensure comprehensively the quality of statistical information. Quality assurance, seen as a comprehensive concept and supported by the automation at customs, will lead to an improvement in quality of all elements of the data. Further, the concept of an integrated data pipeline extending from the buyer to the seller (see para. 8.8 above) demonstrates that an emphasis solely on import information is outdated, since in a possible future global customs system, the information for exports and imports will be integrated and treated as two sides of one transaction.
3. Quality assurance at the responsible agency
9.9. Characteristics. The responsible agency is expected to conduct a systematic quality assurance programme covering all elements of the statistical information, using the full range of validation checks and tools as specified in paragraph 9.5 above and ensuring the timeliness of the information provided to users. A special focus is often given to the aggregated data and the final results which are compared with the ones from previous periods. However, frequently, special attention is also given to certain transactions that might be of particular importance or of high value, or might be potentially outside the scope of IMTS (e.g., goods for repairs and transactions in ships and aircraft). Often, the responsible agency has or can gain access to the original record and its accompanying information at customs. In many ways, the quality assurance at the responsible agency depends on data provision by and cooperation with customs, unless, of course, customs itself is the responsible agency.
4. Major quality issues and how to approach them
9.10. Main quality issues from the user’s perspective. Gaps in coverage, asymmetries in partner information, unreliable quantity information and insufficient timeliness are often perceived as the major quality issues associated with international merchandise trade data. The issues raised are discussed briefly below. However, certain country practices, discussed further on in this publication, address some of these issues in more detail.
9.11. Coverage. Some major coverage issues, such as the application of the special trade system or the need for confidentiality of certain transactions, are beyond the scope of the regular quality assurance at the responsible agency. However, in many countries, transactions in certain commodities, such as oil, gas, electricity, raw materials, ships and aircraft, are not or not adequately captured by customs or by the responsible agency. In other countries, border or shuttle trade may be important but is not fully recorded by the responsible agency. Lack of coverage can also arise from the application of various thresholds for simplification purposes at customs (see chap. XIX, sect. E). Possible approaches to these issues of coverage entail use of additional data sources and, if necessary and appropriate, addressing them with the relevant governmental authorities, which, for example, can mandate that information be made available to statistical authorities. In the case of trade below certain reporting thresholds, appropriate estimation methods might need to be developed.
9.12. Asymmetries in partner data. Asymmetries in partner data, that is, differences between the compiling country’s own data on exports and imports and the partner country’s data on imports and exports, can have multiple causes, including differences in the time of recording, differences in the classification of commodities, partner-country attribution, trade system, confidentiality, etc., and many bilateral studies have been conducted to examine this issue and to reduce these asymmetries. However, an important factor in these asymmetries is trading partner information, which may be impossible to align owing to conceptual as well as practical factors, in particular in the case of global value and supply chains. In order to improve the situation, IMTS 2010 strengthened the recommendation to record the country of consignment as a second partner country not only for imports but also for exports (see IMTS 2010, para. 6.26). As indicated, one way to examine and address these asymmetries is to conduct reconciliation studies (see sect. C).
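A reconciliation exercise typically begins by quantifying the discrepancy between mirror flows. The sketch below computes, per commodity code, a simple relative asymmetry between one country’s reported exports and the partner’s mirror imports; all figures are hypothetical, and conceptual differences such as CIF/FOB valuation are deliberately ignored.

```python
# Sketch of a mirror-statistics comparison: country A's reported exports
# to country B versus B's reported imports from A, by commodity code.
# All figures are hypothetical assumptions for illustration.

def mirror_asymmetry(exports, imports):
    """Relative asymmetry per commodity: (mirror imports - exports) / mean flow."""
    result = {}
    for hs6 in sorted(set(exports) | set(imports)):
        exp = exports.get(hs6, 0)
        imp = imports.get(hs6, 0)
        # Discrepancy expressed as a share of the average of the two flows.
        result[hs6] = (imp - exp) / ((imp + exp) / 2)
    return result

exports_a_to_b = {"090111": 1_000, "271019": 5_000}    # reported by country A
imports_b_from_a = {"090111": 1_150, "271019": 4_600}  # reported by country B
print(mirror_asymmetry(exports_a_to_b, imports_b_from_a))
```

Large positive or negative values would single out the commodity codes worth investigating in a bilateral reconciliation study.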
9.13. Quality of quantity information. Many users and producers of trade statistics agree that quantity information (quantity in WCO standard units of quantity and net weight, where the standard unit is different from net weight) is the weakest data element in the core data set for trade statistics. In some countries, the provision of quantity or net weight is not mandatory, and often the information is not complete for other reasons. Information on quantity is internationally comparable only when reported by countries in a uniform manner. However, often quantities are reported in units different from the ones recommended by WCO for each specific commodity. An important quality problem is the incorrect reporting of the quantity or net weight, which might be difficult or impossible to determine. There are several possible means of improving the quantity information. For example, as part of a standardized quality assurance procedure, suspiciously high quantity values could be identified and the data provider contacted to verify them; or suspicious or missing quantities could be replaced with estimates based on the data provided by the same firm or other reporters. A further option is to use additional data sources such as shipping documents to verify the quantity information. Yet another possibility is to allow data providers to estimate missing information using empirical values, or to allow the provision of quantities in quantity units from which standard units of quantity or net weight could be derived using appropriate conversion factors. Whatever the method used, it should be documented in the metadata that are made available to users.
9.14. Quantity aggregation. The quantity and net weight information provided by countries at the six-digit level of HS is frequently an aggregation of multiple trade transactions. Usually each transaction has a trade value, but the same is not true for net weight and quantity values, which can be missing. Further, quantity data for various transactions within the same six-digit commodity code might be reported in different quantity units. Hence, countries generally need to apply estimations for any missing net weight and quantity data and conversions or estimations for any non-standard quantity units in order to provide information on net weight and quantity at different levels of aggregation, or to refrain from providing aggregations that are not of sufficient quality. The difficulties in quantity and net weight aggregation constitute a quality issue on their own which has to be carefully addressed in view of the multiple and growing uses of these data, including for health and environmental policymaking. It is good practice for the responsible agency to work closely with customs on this issue.
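The aggregation issue described above can be sketched as follows: trade values can always be summed within a six-digit code, but a net-weight aggregate that would mix present and missing weights is suppressed rather than published. The records and the suppression policy are hypothetical; a real compiler might instead estimate the missing weights before aggregating.

```python
# Sketch of aggregation within HS six-digit codes: values always sum,
# but an incomplete net-weight aggregate is suppressed rather than
# published. Records and the suppression rule are illustrative only.

transactions = [
    {"hs6": "090111", "value": 1_000, "net_weight": 400},
    {"hs6": "090111", "value": 2_000, "net_weight": None},  # weight missing
    {"hs6": "271019", "value": 5_000, "net_weight": 7_000},
]

def aggregate(records):
    totals = {}
    for rec in records:
        agg = totals.setdefault(rec["hs6"], {"value": 0, "net_weight": 0, "complete": True})
        agg["value"] += rec["value"]
        if rec["net_weight"] is None:
            agg["complete"] = False  # aggregate would not be of sufficient quality
        else:
            agg["net_weight"] += rec["net_weight"]
    # Suppress net-weight aggregates that are incomplete.
    for agg in totals.values():
        if not agg["complete"]:
            agg["net_weight"] = None
    return totals

print(aggregate(transactions))
```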
9.15. Timeliness. The relevance of trade statistics is greatly increased if the data are provided in a timely manner. However, in many countries, the information is provided much later than suggested (see IMTS 2010, para. 10.7), requiring data users to make their own estimates. One means of improving the timeliness of information is to review the data production process in the light of existing best practices and to publish preliminary data (ibid., para. 10.8).
5. Country examples and best practices
9.16. Effect of mandatory electronic filing on export data: experience of the United States of America. Effective 2 July 2008, the United States Census Bureau began requiring mandatory filing of export information through the Automated Export System (AES) for all shipments where a Shipper’s Export Declaration (SED) was required. The implementation of the regulations and the subsequent move to an all-automated data collection process had an overall positive impact on the quality, coverage and timeliness of export data. These improvements have been achieved through more complete and timely data collection through the AES, upfront validation checks of data, and reduced reporting and keying errors. Most errors involve missing or invalid commodity classification codes and missing or incorrect quantities or shipping weights. The AES contains online validation checks which immediately detect reporting errors and refer those errors back to the filer for correction before the data can be submitted. This has resulted in a significant decrease in reporting error rates on export transactions. The improved timeliness of the data has also eliminated the need to estimate data that were received too late. Risks associated with electronic filing include unresolved edit failures, which could result in undercoverage and underestimations by filers. The AES Report Card provides a tool for the monitoring of filers, thereby identifying further actions to improve quality.
9.17. Statistical quality assurance in the case of Brazil. Brazil publishes its international merchandise trade statistics one day after the end of the reference period (monthly and weekly), which is made possible mainly because of the use of a “preventive validation” methodology in Brazil’s SISCOMEX system. SISCOMEX is a computerized system which integrates customs, commercial and foreign-exchange information. Annex IX.A explains its main functions.
9.18. ASYCUDA: data quality assurance, measurement and reporting. In any computer system, the quality of the data entered for processing or storage is of paramount importance, as wrong data can jeopardize the entire data-processing operation and can yield incorrect results. In this regard, the Automated System for Customs Data (ASYCUDA) ensures the highest quality of the keyed-in or imported data by performing several types of data validation and control. Some of them are set to be mandatory and others are configurable (to be mandatory or remain optional) depending on specific needs and national circumstances. The following types of data validation and control are integrated into ASYCUDA: (a) existence controls, (b) data format controls, (c) referential and validity controls, and (d) consistency controls; in addition, ASYCUDA provides a statistical reporting module that can also be used for validation purposes (see annex IX.B for details).
9.19. Harmonized framework for data validation: Eurostat. Eurostat proposed for utilization by European Union member States a harmonized framework for data validation in external trade statistics which covers the trade in goods not only between countries of the European Union (intra-EU trade statistics) but also between member countries and countries outside the European Union (extra-EU trade statistics). Regarding extra-EU trade statistics, the following is covered: (a) validation of the input data by the customs offices, (b) validation of the input data by the competent national authorities (responsible statistical authority), (c) validation of the output data by the competent national authorities and (d) validation of the output data by Eurostat.
9.20. Validation rules. The validation rules specify acceptable values for the different variables, the appropriate controls and checking rules, metadata, possible errors and actions in case of errors. The fields (or variables) of a record (single administrative document (SAD)) are checked for whether the values (codes) comply with the permitted entries (i.e., 1: imports; and 2: exports), whether the combination of values (codes) of two or more fields is permitted (e.g., commodity code against mode of transport), whether numerical values or combinations of numerical values (e.g., statistical value against quantity expressed in net mass) are within a certain range, and whether aggregated numerical values (e.g., aggregated statistical value by flow and commodity code) are within a certain range.
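The combination check described above can be sketched as a rule table mapping commodity chapters to permitted modes of transport. The chapter/mode pairings used here are hypothetical assumptions, not actual Eurostat rules.

```python
# Sketch of a combination check: certain commodity codes are plausible
# only with certain modes of transport. The rule table below is a
# hypothetical assumption, not an actual Eurostat validation rule.

# Hypothetical rule table: HS chapter -> permitted transport mode codes.
PERMITTED_MODES = {
    "27": {"1", "3", "7"},       # e.g. mineral fuels: sea, road, pipeline
    "71": {"4", "5"},            # e.g. precious stones: air, post
}

def check_combination(hs_code, transport_mode):
    """Return True when the commodity/transport combination is permitted."""
    chapter = hs_code[:2]
    allowed = PERMITTED_MODES.get(chapter)
    # Chapters without a rule are not rejected here; other checks apply.
    return allowed is None or transport_mode in allowed

print(check_combination("271019", "7"))  # True: pipeline is plausible
print(check_combination("271019", "4"))  # False: air transport is flagged
```

Range checks on combinations of numerical values (e.g., statistical value against net mass) follow the same pattern, with per-commodity lower and upper limits in place of the code sets.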
9.21. Application of the validation rules. These validations referring to individual SADs are expected to be performed through automated systems at customs, while the responsible agency would perform at the input data stage only a few additional checks on the aggregated data. However, in respect of the output data, the responsible agency would perform similar validation checks of all output fields (sections) to ensure correct values. The output data validation at Eurostat would not repeat the previous checks but would mainly check for outliers, in particular as Eurostat would strive to harmonize settings (calibration) of validation limits and thresholds for automatic correction in the validation rules for numerical fields (variables). The annex to the harmonized framework presents in summary the statistical methods proposed for the validation and correction of numerical variables, as well as the methodological procedure to be followed for the editing of combinations of categorical variables.
9.22. Validation rules of the Eurotrace DBMS. The Eurotrace DBMS allows the definition of validation rules to control and maintain the quality of data within a data set. The rules are established as so-called tests and usually take the form of combinations of logical or numerical queries ranging from simple to complex. The test language supported by Eurotrace is SQL. The validation rules are based on the concept that a record is made up of codes and values. The codes can be checked against “dictionary” lists of valid codes, while values can be validated against ranges of acceptable values. A very simple check would consist of verifying whether a record has important values missing. However, much more complex tests can be defined and various tools for doing so are provided. A set of validation rules (called an algorithm) can be applied when importing data into a data set from a file, when exporting data out to the Eurotrace editor, and when importing data back into a data set from the Eurotrace editor after an editing session. Errors are best corrected by using the Eurotrace Editor program which has been designed to edit Eurotrace data easily.
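The notion of a validation rule expressed as an SQL test can be sketched with Python’s built-in sqlite3 module: a query selects the offending records, and a non-empty result signals errors to be corrected. The table layout and the rule itself are illustrative assumptions, not Eurotrace’s actual schema or syntax.

```python
# Sketch of an SQL-based validation test in the spirit described above:
# a query that selects offending records from a data set. The table
# layout and rule are illustrative, not Eurotrace's actual schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trade (flow TEXT, hs6 TEXT, value REAL, net_weight REAL)")
con.executemany("INSERT INTO trade VALUES (?, ?, ?, ?)", [
    ("1", "090111", 1000.0, 400.0),
    ("2", "271019", 5000.0, None),   # missing net weight
    ("3", "090111", 2000.0, 800.0),  # invalid flow code
])

# A "test": flag records with an invalid flow code or a missing net weight.
errors = con.execute(
    "SELECT rowid, flow, hs6 FROM trade "
    "WHERE flow NOT IN ('1', '2') OR net_weight IS NULL"
).fetchall()
print(errors)  # the two offending records
```

In Eurotrace terms, a set of such tests (an algorithm) would be applied when importing data into a data set, and the flagged records would then be corrected in the editor.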
 In fact, quality assurance starts even before, when data providers and, in particular, agents at customs are informed and trained regarding the statistical (customs) requirements regarding the information provided.
 For imports, it is recommended that the country of origin be recorded. However, there is no uniform definition of “country of origin”. Further, it might become more difficult to determine the country of origin if a country belongs to a customs union, as more and more countries do. For exports, it is recommended that the country of last known destination be recorded, although the objective is to obtain the country of last or final destination. As indicated in para. 9.6 above, those completing the customs declaration might not have full information about the transactions to which the goods have been subjected, nor about future transactions.
 In the European Union Intrastat system, it is not mandatory to provide net weight if the supplementary quantity (WCO standard units of quantity) is different from net weight; however, member States are obliged to estimate net weight when not collected.
 The information presented is derived from the information note entitled “Effect of mandatory electronic filing on export data”, 14 January 2009. Available from http://www.census.gov/foreign-trade/aip/mandatoryelectronicfiling.html.
 The Government control of Brazil’s foreign trade, which is decentralized, consists of three elements: commercial, customs and foreign-exchange controls. The administrative (commercial) control determines what goods can or cannot enter or leave the country, and is under the responsibility of the Secretariat of Foreign Trade of the Ministry of Development, Industry and Foreign Trade and other consenting agencies. The customs control covers the verification of documents and the examination of goods under the regular tax by Brazil’s customs (Ministry of Finance). The exchange control is performed by the Central Bank of Brazil on the delivery or receipt of foreign exchange related to goods imported and exported. SISCOMEX is the administrative tool that integrates the activities of registration, monitoring and controlling foreign trade operations by a computerized single flow of information. Exports are controlled by two registers in SISCOMEX: export registration (RE) and credit registration (RC). Export registration confers the “authorization to export” and must be requested before the goods are shipped abroad. The export processing begins with export registration and the provision of commercial, financial, fiscal and exchange-rate information. Currently, the agency of Brazil responsible for producing foreign trade statistics, the Foreign Trade Secretariat, Foreign Trade Planning and Development Department (SECEX/DEPLA), is responsible for the validation of specific fields of export registration.
 See Eurostat, “Proposal for a harmonized framework for data validation in external trade statistics: 2010 version”.
 See Eurostat, “Eurotrace DBMS, version 2.2: user guide” (European Commission, 2003), pp. 220-275.
 More specifically, SQL for the Microsoft Access Jet Database Engine.
 See “Eurotrace Editor: user guide” (European Commission, 2003).