Statistical disclosure control (SDC), also known as statistical disclosure limitation (SDL) or disclosure avoidance, is a technique used in data-driven research to ensure no person or organization is identifiable from the results of an analysis of survey or administrative data, or in the release of microdata. The purpose of SDC is to protect the confidentiality of the respondents and subjects of the research.
There are two main approaches to SDC: principles-based and rules-based. In principles-based systems, disclosure control attempts to uphold a specific set of fundamental principles (for example, "no person should be identifiable in released microdata"). Rules-based systems, in contrast, are defined by a specific set of rules that a person performing disclosure control follows, after which the data are presumed safe to release. Under this taxonomy, proposed by Ritchie and Elliot in 2013, disclosure control based on differential privacy can be seen as a principles-based approach, whereas controls based on de-identification, such as the Safe Harbor method for de-identifying protected health information under the US Health Insurance Portability and Accountability Act's Privacy Rule, can be seen as rules-based.
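To illustrate the principles-based idea mentioned above, differential privacy adds calibrated random noise to a statistic so that no single individual's data can noticeably change the released value. The sketch below is a minimal, hypothetical Laplace mechanism; the parameter values are illustrative and not drawn from any cited standard.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a statistic with Laplace noise of scale sensitivity/epsilon.

    The principle enforced is that one individual's presence or absence
    can only shift the output's distribution by a bounded amount; no rule
    about the form of specific outputs is required.
    """
    scale = sensitivity / epsilon
    # A Laplace(0, scale) draw is the difference of two i.i.d. exponentials.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# Releasing a count of 42 with sensitivity 1 and privacy budget epsilon = 0.5:
noisy_count = laplace_mechanism(42, sensitivity=1, epsilon=0.5)
print(noisy_count)  # a noisy value near 42
```

Smaller values of epsilon give stronger privacy but noisier (less useful) releases, which is the same confidentiality-versus-utility balance discussed below.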
Many kinds of social, economic and health research use potentially sensitive data, such as survey or census data, tax records, health records and educational information. Such information is usually given in confidence and, in the case of administrative data, was not always collected for the purpose of research.
Researchers are not usually interested in information about any single person or business; they are looking for trends among larger groups of people. However, the data they use are, in the first place, linked to individual people and businesses, and SDC ensures that these cannot be identified from published results, no matter how detailed or broad.
It is possible that at the end of data analysis, the researcher somehow singles out one person or business through their research. For example, a researcher may identify the exceptionally good or bad service in a geriatric department within a hospital in a remote area, where only one hospital provides such care. In that case, the data analysis 'discloses' the identity of the hospital, even if the dataset used for analysis was properly anonymised or de-identified.
Statistical disclosure control will identify this disclosure risk and ensure the results of the analysis are altered to protect confidentiality. It requires a balance between protecting confidentiality and ensuring the results of the data analysis are still useful for statistical research.
In rules-based SDC, a rigid set of rules determines whether or not the results of data analysis can be released. The rules are applied consistently, which makes it obvious what kinds of output are acceptable. However, because the rules are inflexible, either disclosive information may still slip through, or the rules are over-restrictive and may only allow results that are too broad for useful analysis to be published.
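A rules-based check can be sketched as a simple function applied to each table cell. The thresholds below, a minimum cell count and a dominance limit, are hypothetical examples of the kinds of rules agencies publish, not actual values from any statute or guidance:

```python
# Illustrative rules-based output check (hypothetical thresholds).

MIN_CELL_COUNT = 10   # hypothetical minimum-frequency rule
MAX_DOMINANCE = 0.9   # hypothetical dominance rule: largest unit < 90% of cell

def cell_is_releasable(count, largest_contribution=0.0, total=None):
    """Apply two common rule types to one table cell.

    - Threshold rule: a cell must describe at least MIN_CELL_COUNT units.
    - Dominance rule: no single unit may account for almost all of the
      cell's value (otherwise that unit's value is effectively disclosed).
    """
    if count < MIN_CELL_COUNT:
        return False
    if total is not None and largest_contribution / total > MAX_DOMINANCE:
        return False
    return True

# A cell describing only 3 businesses fails the threshold rule:
print(cell_is_releasable(3))                                           # False
# A cell of 25 firms where one firm holds 95% of turnover fails dominance:
print(cell_is_releasable(25, largest_contribution=95.0, total=100.0))  # False
print(cell_is_releasable(25, largest_contribution=40.0, total=100.0))  # True
```

The appeal of this approach is that the check is mechanical and consistent; the drawback, as noted above, is that fixed thresholds cannot adapt to context.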
In principles-based SDC, both the researcher and the output checker are trained in SDC. They receive a set of rules, which are rules of thumb rather than the hard rules of rules-based SDC. This means that, in principle, any output may be approved or refused. The rules of thumb are a starting point for the researcher: they explain from the beginning which outputs would be deemed safe and non-disclosive, and which outputs are unsafe. It is up to the researcher to prove that any 'unsafe' outputs are non-disclosive, but the checker has the final say. Since there are no hard rules, this requires specialist knowledge of disclosure risks from both the researcher and the checker, and it encourages the researcher to produce safe results in the first place. However, it also means that the outcome may be inconsistent and uncertain, and it requires extensive training and a strong understanding of statistics and data analysis.
Many contemporary statistical disclosure control techniques, such as generalization and cell suppression, have been shown to be vulnerable to attack by a hypothetical data intruder. For example, Cox showed in 2009 that complementary cell suppression typically leads to "over-protected" solutions, because of the need to suppress both primary and complementary cells, and even then can lead to the compromise of sensitive data when exact intervals are reported.
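The need for complementary suppression can be seen in a toy example: when marginal totals are published, suppressing only the sensitive ("primary") cell does not protect it, because an intruder can recover it by subtraction. This is an illustrative sketch, not Cox's construction:

```python
# Toy frequency table row with a published row total. The sensitive cell
# is suppressed (None), but the remaining visible cells betray it.

row = [None, 12, 30]   # first cell suppressed (true value was 3)
row_total = 45         # marginal total is published

# Subtraction attack: total minus the visible cells recovers the value.
recovered = row_total - sum(v for v in row if v is not None)
print(recovered)  # 3 -- the "suppressed" value, recovered exactly

# To block this, a second, complementary cell in the same row (and in its
# column) must also be suppressed, which removes non-sensitive data too:
# the "over-protection" Cox (2009) describes. Even then, publishing exact
# feasibility intervals for suppressed cells can narrow the intruder's
# uncertainty again.
```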
- Skinner, Chris (2009). "Statistical Disclosure Control for Survey Data" (PDF). Handbook of Statistics Vol. 29A: Sample Surveys: Design, Methods and Applications. Retrieved March 2016.
- Ritchie, Felix; Elliott, Mark (2015). "Principles- Versus Rules-Based Output Statistical Disclosure Control in Remote Access Environments" (PDF). IASSIST Quarterly. 39: 5–13. Retrieved March 2016.
- "ADRN » Safe results". adrn.ac.uk. Retrieved 2016-03-08.
- "Government Statistical Services: Statistical Disclosure Control". Retrieved March 2016.
- Templ, Matthias; et al. (2014). "International Household Survey Network" (PDF). IHSN Working Paper. Retrieved March 2016.
- "Archived: ONS Statistical Disclosure Control". Office for National Statistics. Archived from the original on 2016-01-05. Retrieved March 2016.
- "Census 2001 - Methodology" (PDF). Northern Ireland Statistics and Research Agency. 2001. Retrieved March 2016.
- Afkhamai, Reza; et al. (2013). "Statistical Disclosure Control Practice in the Secure Access of the UK Data Service" (PDF). United Nations Economic Commission for Europe. Retrieved March 2016.
- Cox, Lawrence H. (2009). "Vulnerability of Complementary Cell Suppression to Intruder Attack". Journal of Privacy and Confidentiality. 1 (2): 235–251. http://repository.cmu.edu/jpc/vol1/iss2/8/