Incomplete Data and Data Dependencies in Relational Databases


Book Description

The chase has long been used as a central tool to analyze data dependencies and their effect on queries. It has been applied to a range of relevant problems in database theory, such as query optimization, query containment and equivalence, dependency implication, and database schema design. Recent years have seen renewed interest in the chase as an important tool in several database applications, such as data exchange and integration, query answering over incomplete data, and many others. It is well known that the chase algorithm may not terminate; thus, for it to find practical applicability, it is crucial to identify cases where its termination is guaranteed. Another important aspect of the chase is that it can introduce null values into the database, thereby leading to incomplete data. Thus, in several scenarios where the chase is used, the problem of dealing with data dependencies and incomplete data arises. This book discusses fundamental issues concerning data dependencies and incomplete data, with a particular focus on the chase and its applications in different database areas. We report recent results on the crucial issue of identifying conditions that guarantee termination of the chase. Different database applications where the chase is a central tool are discussed, with particular attention devoted to query answering in the presence of data dependencies and to database schema design. Table of Contents: Introduction / Relational Databases / Incomplete Databases / The Chase Algorithm / Chase Termination / Data Dependencies and Normal Forms / Universal Repairs / Chase and Database Applications
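As an illustration of the behavior the description refers to (this is a sketch, not an excerpt from the book), the following applies one chase step for a hypothetical tuple-generating dependency, showing how the chase introduces labeled nulls and hence incomplete data. The relation and attribute names (Emp, Dept) are invented for the example.

```python
# Illustrative sketch: one chase step applying a tuple-generating
# dependency (TGD) to a small instance. The hypothetical TGD is
#   Emp(e, d) -> exists m: Dept(d, m)
# i.e. every department mentioned in Emp must appear in Dept with a
# (possibly unknown) manager. Unknown values become labeled nulls.

from itertools import count

_null_ids = count(1)

def fresh_null():
    """Invent a labeled null such as 'N1', 'N2', ... for an unknown value."""
    return f"N{next(_null_ids)}"

def chase_step(emp, dept):
    """Apply the TGD once for every violating Emp tuple and return the
    (possibly extended) Dept relation."""
    known_depts = {d for (d, _m) in dept}
    new_dept = set(dept)
    for (_e, d) in emp:
        if d not in known_depts:
            new_dept.add((d, fresh_null()))  # labeled null = incomplete data
            known_depts.add(d)
    return new_dept

emp = {("alice", "sales"), ("bob", "hr")}
dept = {("sales", "carol")}
print(sorted(chase_step(emp, dept)))
# 'hr' gains a Dept tuple whose manager is a labeled null
```

Repeating such steps until no dependency is violated is the chase; with existentially quantified dependencies like the one above, that loop need not terminate in general, which is exactly the termination issue the book studies.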




The Problem of Incomplete Information in Relational Databases


Book Description





Foundations of Information and Knowledge Systems


Book Description

This book constitutes the refereed proceedings of the 11th International Symposium on Foundations of Information and Knowledge Systems, FoIKS 2020, held in Dortmund, Germany, in February 2020. The 19 revised full papers presented were carefully reviewed and selected from 33 submissions. The papers address various topics such as big data; database design; dynamics of information; information fusion; integrity and constraint management; intelligent agents; knowledge discovery and information retrieval; knowledge representation, reasoning and planning; logics in databases and AI; mathematical foundations; security in information and knowledge systems; semi-structured data and XML; social computing; the semantic web and knowledge management; and the world wide web.




Complex Pattern Mining


Book Description

This book discusses the challenges facing current research in knowledge discovery and data mining posed by the huge volumes of complex data now gathered in various real-world applications (e.g., business process monitoring, cybersecurity, medicine, language processing, and remote sensing). The book consists of 14 chapters covering the latest research by the authors and the research centers they represent. It illustrates techniques and algorithms that have recently been developed to preserve the richness of the data and allow us to efficiently and effectively identify the complex information it contains. Presenting the latest developments in complex pattern mining, this book is a valuable reference resource for data science researchers and professionals in academia and industry.




Robust Data Profiling and Schema Design for Incomplete Relational Databases


Book Description

After more than 40 years in use, relational database systems provide a mature and sophisticated technology for effective and efficient data management. For example, SQL is the number one skill that data scientists possess. Relational database systems model a given domain of interest. One of the core problems that still remains is the correct handling of incomplete information. For example, we still do not know how database schemata for data with missing values should be designed. Many approaches to this problem exist, most of which depend on a specific interpretation of missing values. Already in classical applications such as accounting, but even more so in modern-day applications such as data integration, a reasonable approach should not depend on a specific interpretation of missing values. This thesis establishes a new approach to the design of incomplete relational databases. The new approach is robust under different interpretations of missing values and depends only on the complete fragments of an incomplete database. The thesis introduces the classes of embedded uniqueness constraints and embedded functional dependencies, which enable users to declare the completeness and integrity requirements of a given application within a single framework. Embedded functional dependencies are sources of redundant data value occurrences, while embedded uniqueness constraints avoid redundant data value occurrences. This makes them particularly interesting for database design, as data redundancy largely determines how efficiently queries and updates can be processed. We study several computational problems related to the combined class, including axiomatic and algorithmic solutions to their implication problem, structural and computational aspects of Armstrong relations, their discovery problem, and the development of normal forms and decompositions that subsume the classical Boyce-Codd and Third normal forms, respectively, as special cases.
Many of our solutions are implemented as prototypes, and theoretical investigations of computational complexity are complemented by empirical investigations in the form of performance studies on benchmark datasets. As a side result, we also develop a new algorithm that discovers all classical functional dependencies that hold on a given incomplete relation, and show that it outperforms the state of the art in efficiency as well as row and column scalability. Overall, the thesis offers a completeness-tailored approach to the design of relational databases.
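To make the "complete fragments" idea concrete (this is a minimal sketch, not the thesis's algorithm), the following checks whether a classical functional dependency holds on the complete fragment of an incomplete relation, i.e. ignoring any tuple with a missing value in a relevant attribute. The relation and attribute names are invented for the example.

```python
# Minimal sketch: does the FD lhs -> rhs hold on the tuples of `rows`
# that are null-free (None-free) in every attribute of lhs and rhs?

def fd_holds_on_complete_fragment(rows, lhs, rhs):
    """rows: list of dicts; lhs, rhs: lists of attribute names.
    Returns True iff no two null-free tuples agree on lhs but differ on rhs."""
    seen = {}
    for row in rows:
        if any(row[a] is None for a in lhs + rhs):
            continue  # tuple lies outside the complete fragment
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False  # two complete tuples violate the FD
        seen[key] = val
    return True

rows = [
    {"emp": "alice", "dept": "sales", "city": "berlin"},
    {"emp": "bob",   "dept": "sales", "city": "berlin"},
    {"emp": "carol", "dept": "hr",    "city": None},  # incomplete tuple
]
print(fd_holds_on_complete_fragment(rows, ["dept"], ["city"]))  # True
```

Because the check never inspects a tuple with a missing value, its verdict is the same under any interpretation of those missing values, which is the robustness property the thesis is after.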




A Guided Tour of Relational Databases and Beyond


Book Description

Addressing important extensions of the relational database model, including deductive, temporal, and object-oriented databases, this book provides an overview of database modeling with the Entity-Relationship (ER) model and the relational model. The book focuses on the primary achievements in relational database theory, including query languages, integrity constraints, database design, computable queries, and concurrency control. This reference will shed light on the ideas underlying relational database systems and the problems that confront database designers and researchers.




Foundations of Data Quality Management


Book Description

Data quality is one of the most important problems in data management. A database system typically aims to support the creation, maintenance, and use of large amounts of data, focusing on the quantity of data. However, real-life data are often dirty: inconsistent, duplicated, inaccurate, incomplete, or stale. Dirty data in a database routinely generate misleading or biased analytical results and decisions, and lead to loss of revenue, credibility, and customers. With this comes the need for data quality management. In contrast to traditional data management tasks, data quality management enables the detection and correction of errors in the data, syntactic or semantic, in order to improve the quality of the data and hence add value to business processes. While data quality has been a longstanding problem for decades, the prevalent use of the Web has increased the risks, on an unprecedented scale, of creating and propagating dirty data. This monograph gives an overview of fundamental issues underlying central aspects of data quality, namely, data consistency, data deduplication, data accuracy, data currency, and information completeness. We promote a uniform logical framework for dealing with these issues, based on data quality rules. The text is organized into seven chapters, focusing on relational data. Chapter One introduces data quality issues. A conditional dependency theory is developed in Chapter Two for capturing data inconsistencies. It is followed by practical techniques in Chapter Three for discovering conditional dependencies, and for detecting inconsistencies and repairing data based on conditional dependencies. Matching dependencies are introduced in Chapter Four, as matching rules for data deduplication. A theory of relative information completeness is studied in Chapter Five, revising the classical Closed World Assumption and the Open World Assumption to characterize incomplete information in the real world.
A data currency model is presented in Chapter Six, to identify the current values of entities in a database and to answer queries with the current values, in the absence of reliable timestamps. Finally, interactions between these data quality issues are explored in Chapter Seven. Important theoretical results and practical algorithms are covered, but formal proofs are omitted. The bibliographical notes contain pointers to papers in which the results were presented and proven, as well as references to materials for further reading. This text is intended for a seminar course at the graduate level. It also serves as a useful resource for researchers and practitioners interested in the study of data quality. The fundamental research on data quality draws on several areas, including mathematical logic, computational complexity, and database theory. It has raised as many questions as it has answered, and is a rich source of questions and vitality. Table of Contents: Data Quality: An Overview / Conditional Dependencies / Cleaning Data with Conditional Dependencies / Data Deduplication / Information Completeness / Data Currency / Interactions between Data Quality Issues
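As a hypothetical illustration of data quality rules of the kind described (not code from the monograph), the following sketch detects violations of a conditional functional dependency (CFD): a functional dependency that is required to hold only on tuples matching a constant pattern. The rule, pattern, and attribute names below are invented for the example.

```python
# Illustrative sketch: find violations of a hypothetical CFD saying
# that on tuples with country = 'UK', the FD  zip -> city  must hold.

def cfd_violations(rows, pattern, lhs, rhs):
    """rows: list of dicts; pattern: dict of attribute -> constant.
    Returns pairs of row indices that match `pattern`, agree on every
    attribute in `lhs`, but disagree on some attribute in `rhs`."""
    matching = [
        (i, r) for i, r in enumerate(rows)
        if all(r[a] == c for a, c in pattern.items())
    ]
    violations = []
    for k, (i, r) in enumerate(matching):
        for j, s in matching[k + 1:]:
            if all(r[a] == s[a] for a in lhs) and any(r[a] != s[a] for a in rhs):
                violations.append((i, j))  # an inconsistency to repair
    return violations

rows = [
    {"country": "UK", "zip": "EH4", "city": "Edinburgh"},
    {"country": "UK", "zip": "EH4", "city": "London"},  # violates the CFD
    {"country": "NL", "zip": "EH4", "city": "Ede"},     # pattern not matched
]
print(cfd_violations(rows, {"country": "UK"}, ["zip"], ["city"]))  # [(0, 1)]
```

The third tuple is exempt from the rule because it does not match the pattern; this conditioning on constants is what distinguishes CFDs from classical FDs and makes them suitable as cleaning rules.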




Semantics in Databases


Book Description

This book presents a coherent survey of exciting developments in database semantics. The origins of the volume date back to a workshop held in Prague, Czech Republic, in 1995. The nine revised full papers and surveys presented were carefully reviewed for inclusion in the book. They address more traditional aspects such as dealing with integrity constraints and conceptual modeling, as well as new areas of databases: object-orientation, incomplete information, database transformations, and other issues are investigated by applying formal semantics, e.g. evolving algebra semantics.