Synthetic Datasets for Statistical Disclosure Control


Book Description

The aim of this book is to give the reader a detailed introduction to the different approaches to generating multiply imputed synthetic datasets. It describes all approaches that have been developed so far, provides a brief history of synthetic datasets, and gives useful hints on how to deal with real-data problems such as nonresponse, skip patterns, or logical constraints. Each chapter is dedicated to one approach, first describing the general concept and then presenting a detailed application to a real dataset, with guidelines on how to implement the theory in practice. The multiple imputation approaches discussed include imputation for nonresponse, generating fully synthetic datasets, generating partially synthetic datasets, generating synthetic datasets when the original data is subject to nonresponse, and a two-stage imputation approach that helps to better address the ever-present trade-off between analytical validity and the risk of disclosure. The book concludes with a glimpse into the future of synthetic datasets, discussing the potential benefits and possible obstacles of the approach and ways to address the concerns of data users, including their understandable discomfort with data that do not consist only of the originally collected values. The book is intended for researchers and practitioners alike. It gives researchers the state of the art in synthetic data summarized in one volume, with full references to the relevant papers on the topic. It is equally useful for practitioners at statistical agencies who are considering the synthetic data approach for future data dissemination and want to become familiar with the topic.
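To make the partially synthetic idea concrete, here is a minimal sketch (my illustration, not code from the book) of replacing one sensitive variable with draws from a model fitted to the original data. The variables are hypothetical, and a proper implementation would also draw the model parameters from their posterior so that uncertainty propagates across the m copies.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical original microdata: age, education (years), and a
    # sensitive variable, income, that the agency wants to synthesize.
    n = 1000
    age = rng.uniform(18, 65, n)
    edu = rng.integers(8, 21, n).astype(float)
    income = 2000 + 300 * age + 1500 * edu + rng.normal(0, 5000, n)

    # Fit a simple regression of the sensitive variable on the
    # non-sensitive predictors (ordinary least squares).
    X = np.column_stack([np.ones(n), age, edu])
    beta, *_ = np.linalg.lstsq(X, income, rcond=None)
    resid = income - X @ beta
    sigma = resid.std(ddof=X.shape[1])

    # Generate m partially synthetic copies: keep age and education,
    # replace income with draws from the fitted predictive distribution.
    # (A proper synthesizer would redraw beta and sigma per copy.)
    m = 5
    synthetic = [
        np.column_stack([age, edu, X @ beta + rng.normal(0, sigma, n)])
        for _ in range(m)
    ]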




Synthetic Data for Confidentiality


Book Description

This paper reviews methodology for creating and analyzing synthetic data files, as implemented for several US Census Bureau survey programs -- particularly a linked SIPP/SSA/IRS file and group quarters data from the American Community Survey. The motivation for synthetic data is to release a public-use microdata file that is suitable for analysis yet does not disclose confidential information. Methods for data synthesis and for proper analysis of synthesized data are reviewed, and issues of confidentiality and analytical validity are discussed. The paper concludes with a literature review.
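The "proper analysis" mentioned above relies on combining rules across the m released synthetic copies. A minimal sketch of the rule commonly used for partially synthetic data (point estimates averaged over copies; between-copy variance divided by m added to the average within-copy variance) might look as follows; the numeric inputs are illustrative only.

    import numpy as np

    def combine_partially_synthetic(q, u):
        """Combining rules for m partially synthetic datasets.

        q : m point estimates, one per synthetic copy
        u : m within-copy variance estimates
        Returns the combined estimate and its total variance.
        """
        q = np.asarray(q, dtype=float)
        u = np.asarray(u, dtype=float)
        m = len(q)
        q_bar = q.mean()          # combined point estimate
        b = q.var(ddof=1)         # between-copy variance
        u_bar = u.mean()          # average within-copy variance
        T = u_bar + b / m         # total variance (partial synthesis)
        return q_bar, T

    # Example: estimates of a mean from m = 5 synthetic copies.
    q_bar, T = combine_partially_synthetic(
        q=[10.2, 9.8, 10.5, 10.1, 9.9],
        u=[0.20, 0.22, 0.19, 0.21, 0.20],
    )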




Handbook of Sharing Confidential Data


Book Description

Statistical agencies, research organizations, companies, and other data stewards that seek to share data with the public face a challenging dilemma. They need to protect the privacy and confidentiality of data subjects and their attributes while providing data products that are useful for their intended purposes. In an age when information on data subjects is available from a wide range of data sources, as are the computational resources to obtain that information, this challenge is increasingly difficult. The Handbook of Sharing Confidential Data helps data stewards understand how tools from the data confidentiality literature -- specifically, synthetic data, formal privacy, and secure computation -- can be used to manage trade-offs between disclosure risk and data usefulness. Key features:
• Provides overviews of the potential and the limitations of synthetic data, differential privacy, and secure computation
• Offers an accessible review of methods for implementing differential privacy, from both methodological and practical perspectives
• Presents perspectives from both computer science and statistical science for addressing data confidentiality and privacy
• Describes genuine applications of synthetic data, formal privacy, and secure computation to help practitioners implement these approaches
The handbook is accessible to both researchers and practitioners who work with confidential data. It requires familiarity with basic concepts from probability and data analysis.
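Of the three tools the handbook covers, secure computation is perhaps the easiest to illustrate compactly. Below is a toy sketch of additive secret sharing, one classic building block: parties hand out random shares of their values so that sums can be computed without any single holder seeing an input. This is my illustration, not a protocol from the handbook.

    import secrets

    P = 2**61 - 1  # a large prime; all arithmetic is modulo P

    def share(value, n=2):
        """Split an integer into n additive shares modulo P."""
        shares = [secrets.randbelow(P) for _ in range(n - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    # Two data holders share their confidential values.
    a_shares = share(1234)
    b_shares = share(5678)

    # Each share holder adds the shares it holds; only the final
    # recombination reveals the sum, never the individual inputs.
    sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
    assert reconstruct(sum_shares) == 1234 + 5678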




Innovations in Federal Statistics


Book Description

Federal government statistics provide critical information to the country and serve a key role in a democracy. For decades, sample surveys with instruments carefully designed for particular data needs have been one of the primary methods for collecting data for federal statistics. However, the costs of conducting such surveys have been increasing while response rates have been declining, and many surveys are not able to fulfill growing demands for more timely information and for more detailed information at state and local levels. Innovations in Federal Statistics examines the opportunities and risks of using government administrative and private sector data sources to foster a paradigm shift in federal statistical programs that would combine diverse data sources in a secure manner to enhance federal statistics. This first publication of a two-part series discusses the challenges faced by the federal statistical system and the foundational elements needed for a new paradigm.




Practical Synthetic Data Generation


Book Description

Building and testing machine learning models requires access to large and diverse data. But where can you find usable datasets without running into privacy issues? This practical book introduces techniques for generating synthetic data -- fake data generated from real data -- so you can perform secondary analysis to do research, understand customer behaviors, develop new products, or generate new revenue. Data scientists will learn how synthetic data generation provides a way to make such data broadly available for secondary purposes while addressing many privacy concerns. Analysts will learn the principles and steps for generating synthetic data from real datasets. And business leaders will see how synthetic data can help accelerate time to a product or solution. This book describes:
• Steps for generating synthetic data using multivariate normal distributions (a minimal sketch follows this list)
• Methods for distribution fitting, covering different goodness-of-fit metrics
• How to replicate the simple structure of original data
• An approach for modeling data structure to consider complex relationships
• Multiple approaches and metrics you can use to assess data utility
• How analysis performed on real data can be replicated with synthetic data
• Privacy implications of synthetic data and methods to assess identity disclosure
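As promised above, here is a minimal sketch of the first two listed steps: fitting a multivariate normal to (hypothetical) real data, sampling synthetic records from it, and checking fit with one common goodness-of-fit metric. It illustrates the general technique and is not code from the book.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical real data: two correlated numeric columns.
    real = rng.multivariate_normal([50.0, 30.0],
                                   [[25.0, 12.0], [12.0, 16.0]],
                                   size=2000)

    # Step 1: fit a multivariate normal to the real data.
    mu = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)

    # Step 2: sample synthetic records from the fitted distribution.
    synth = rng.multivariate_normal(mu, cov, size=2000)

    # Step 3: goodness of fit, here a two-sample Kolmogorov-Smirnov
    # test per column (one of several possible metrics).
    for j in range(real.shape[1]):
        ks = stats.ks_2samp(real[:, j], synth[:, j])
        print(f"column {j}: KS statistic = {ks.statistic:.3f}")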




Synthetic Data for Deep Learning


Book Description

This is the first book on synthetic data for deep learning, and its breadth of coverage may make it the default reference on synthetic data for years to come. The book can also serve as an introduction to several other important subfields of machine learning that are seldom touched upon in other books. Machine learning as a discipline would not be possible without optimization at its core, and the book covers the necessary elements of optimization, though the crux of the discussion centers on an increasingly popular tool for training deep learning models: synthetic data. The field of synthetic data is expected to undergo exponential growth in the near future, and this book serves as a comprehensive survey of it. In the simplest case, synthetic data refers to computer-generated graphics used to train computer vision models, but there are many more facets of synthetic data to consider. In the section on basic computer vision, the book discusses fundamental computer vision problems, both low-level (e.g., optical flow estimation) and high-level (e.g., object detection and semantic segmentation); synthetic environments and datasets for outdoor and urban scenes (autonomous driving), indoor scenes (indoor navigation), and aerial navigation; and simulation environments for robotics. Additionally, it touches upon applications of synthetic data outside computer vision (in neural programming, bioinformatics, NLP, and more). It also surveys work on improving synthetic data development and alternative ways to produce it, such as GANs. The book introduces and reviews several different approaches to synthetic data in various domains of machine learning, most notably domain adaptation, for making synthetic data more realistic and/or adapting models to be trained on synthetic data, and differential privacy, for generating synthetic data with privacy guarantees. This discussion is accompanied by introductions to generative adversarial networks (GANs) and to differential privacy.
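To ground the "simplest case" mentioned above, the toy sketch below procedurally renders labeled images of two shape classes; real pipelines render far richer scenes, but the principle is the same: labels come free because the generator knows what it drew. This is my illustration, not code from the book.

    import numpy as np

    rng = np.random.default_rng(2)
    SIZE = 32

    def render(label):
        """Render a 32x32 image: label 0 = filled square, 1 = filled disk.
        The ground-truth label is known by construction, which is the
        central appeal of synthetic training data for vision."""
        img = np.zeros((SIZE, SIZE), dtype=np.float32)
        cx, cy = rng.integers(10, 22, size=2)
        r = rng.integers(4, 8)
        yy, xx = np.mgrid[:SIZE, :SIZE]
        if label == 0:
            mask = (abs(xx - cx) <= r) & (abs(yy - cy) <= r)
        else:
            mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        img[mask] = 1.0
        return img + rng.normal(0, 0.05, img.shape)  # sensor-like noise

    # An endless stream of labeled training pairs, no annotation needed.
    labels = rng.integers(0, 2, size=10_000)
    images = np.stack([render(l) for l in labels])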




Privacy and Synthetic Datasets


Book Description

Sharing is a virtue, instilled in us from childhood. Unfortunately, when it comes to big data -- i.e., databases possessing the potential to usher in a whole new world of scientific progress -- the legal landscape prefers a hoggish motif. The historic approach to the resulting database-privacy problem has been anonymization, a subtractive technique that yields not only poor privacy results but also lackluster utility. In anonymization's stead arose differential privacy, which provides better, near-perfect privacy but is nonetheless subtractive in terms of utility. Today, another solution is coming to the fore: synthetic data. Using the magic of machine learning, synthetic data offers a generative, additive approach -- the creation of almost-but-not-quite replica data. In fact, as we recommend, synthetic data may be combined with differential privacy to achieve a best-of-both-worlds scenario. After unpacking the technical nuances of synthetic data, we analyze its legal implications, finding both over- and under-inclusive applications: privacy statutes either overweight or downplay the potential for synthetic data to leak secrets, inviting ambiguity. We conclude that synthetic data is a valid, privacy-conscious alternative to raw data, but not a cure-all for every situation. In the end, progress in computer science must be met with proper policy in order to move the area of useful data dissemination forward.
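The combination the authors recommend, synthetic data generated under a differential privacy guarantee, has a particularly simple textbook instance: perturb a histogram of the confidential data with Laplace noise, then sample synthetic records from the noisy histogram. A sketch, with an illustrative privacy budget and category set:

    import numpy as np

    rng = np.random.default_rng(3)
    epsilon = 1.0  # privacy budget; smaller = stronger privacy

    # Confidential categorical data (hypothetical).
    categories = ["A", "B", "C", "D"]
    data = rng.choice(categories, p=[0.5, 0.3, 0.15, 0.05], size=5000)

    # Laplace mechanism on the histogram: adding or removing one record
    # changes one count by 1, so noise with scale 1/epsilon suffices.
    counts = np.array([(data == c).sum() for c in categories], dtype=float)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=len(counts))
    noisy = np.clip(noisy, 0, None)  # post-processing keeps the guarantee

    # Sample a synthetic dataset from the noisy distribution.
    probs = noisy / noisy.sum()
    synthetic = rng.choice(categories, p=probs, size=5000)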




Federal Statistics, Multiple Data Sources, and Privacy Protection


Book Description

The environment for obtaining information and providing statistical data for policy makers and the public has changed significantly in the past decade, raising questions about the fundamental survey paradigm that underlies federal statistics. New data sources provide opportunities to develop a new paradigm that can improve timeliness, geographic or subpopulation detail, and statistical efficiency, and that has the potential to reduce the costs of producing federal statistics. The panel's first report described the current paradigm of federal statistical agencies, which relies heavily on sample surveys for producing national statistics, and the challenges agencies are facing; the legal frameworks and mechanisms for protecting the privacy and confidentiality of statistical data and for providing researchers access to data, and the challenges to those frameworks and mechanisms; and statistical agencies' access to alternative sources of data. The panel recommended a new approach for federal statistical programs that would combine diverse data from government and private-sector sources, and the creation of a new entity to provide the foundational elements needed for this approach, including the legal authority to access data and protect privacy. This second of the panel's two reports builds on the analysis, conclusions, and recommendations of the first. It assesses alternative methods for implementing an approach that combines diverse data from government and private-sector sources: describing statistical models for combining data from multiple sources; examining statistical and computer science approaches that foster privacy protection; evaluating frameworks for assessing the quality and utility of alternative data sources; and considering various models for implementing the recommended new entity. Together, the two reports offer ideas and recommendations to help federal statistical agencies examine and evaluate data from alternative sources and combine them as appropriate to provide the country with more timely, actionable, and useful information for policy makers, businesses, and individuals.




Putting People on the Map


Book Description

Precise, accurate spatial information linked to social and behavioral data is revolutionizing social science by opening new questions for investigation and improving understanding of human behavior in its environmental context. At the same time, precise spatial data make it more likely that individuals can be identified, breaching the promise of confidentiality made when the data were collected. Because norms of science and government agencies favor open access to all scientific data, the tension between the benefits of open access and the risks associated with potential breach of confidentiality poses significant challenges to researchers, research sponsors, scientific institutions, and data archivists. Putting People on the Map finds that several technical approaches for making data available while limiting risk have potential, but none is adequate on its own or in combination. This book offers recommendations for education, training, research, and practice to researchers, professional societies, federal agencies, institutional review boards, and data stewards.




Data Privacy


Book Description

The book covers data privacy in depth with respect to data mining, test data management, synthetic data generation, and related areas. It formalizes principles of data privacy that are essential for good anonymization design based on the data format and discipline. The principles outline best practices and reflect on the conflicting relationship between privacy and utility. From a practice standpoint, it provides practitioners and researchers with a definitive guide to approaching anonymization of various data formats, including multidimensional, longitudinal, time-series, transaction, and graph data. In addition to helping CIOs protect confidential data, it offers guidance on how anonymization can be implemented for a wide range of data at the enterprise level.
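As one small example of the kind of anonymization check such a guide involves for multidimensional data, the sketch below tests k-anonymity, i.e., that every combination of quasi-identifiers appears at least k times. The column names and threshold are hypothetical; this is my illustration rather than the book's own method.

    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        """True if every quasi-identifier combination occurs >= k times."""
        combos = Counter(
            tuple(r[q] for q in quasi_identifiers) for r in records
        )
        return all(count >= k for count in combos.values())

    # Hypothetical multidimensional microdata.
    records = [
        {"age_band": "30-39", "zip3": "941", "diagnosis": "flu"},
        {"age_band": "30-39", "zip3": "941", "diagnosis": "asthma"},
        {"age_band": "40-49", "zip3": "100", "diagnosis": "flu"},
    ]
    print(is_k_anonymous(records, ["age_band", "zip3"], k=2))  # False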