The Foundations for Provenance on the Web


Book Description

Provenance, i.e., the origin or source of something, is becoming an important concern, since it offers the means to verify data products, to infer their quality, and to decide whether they can be trusted. For instance, provenance enables the reproducibility of scientific results; provenance is necessary to track attribution and credit in curated databases; and, it is essential for reasoners to make trust judgements about the information they use over the Semantic Web. As the Web allows information sharing, discovery, aggregation, filtering and flow in an unprecedented manner, it also becomes difficult to identify the original source that produced information on the Web. This survey contends that provenance can and should reliably be tracked and exploited on the Web, and investigates the necessary foundations to achieve such a vision.




Provenance in Databases


Book Description

This survey reviews research over the past ten years on why-, how-, and where-provenance; clarifies the relationships among these notions of provenance; and describes some of their applications in confidence computation, view maintenance and update, debugging, and annotation propagation.




Data Provenance


Book Description

The term provenance is used in the art world to describe a record of the history of ownership of a piece of art. The term has been adapted by the database community to describe a record of the origin of a piece of data. Data provenance emerged as a research topic in the database community in the late 1990s. By explaining how the result of an operation was derived from its inputs, data provenance has proven to be a useful tool applicable in a wide variety of settings. This monograph gives a comprehensive introduction to the data provenance concepts, algorithms, and methodology developed over the last few decades. It introduces the reader to the formalisms, algorithms, and systems developments in this fascinating field, and it provides a collection of relevant literature references for further research. The monograph offers a concise starting point for research into, and the use of, data provenance. Although it focuses on data provenance in databases, pointers to work in other fields are given throughout. The intended audience is researchers and practitioners unfamiliar with the topic who want to develop a basic understanding of provenance techniques and the state of the art in the field, as well as researchers with prior experience in provenance who want to broaden their horizons.




Provenance and Annotation of Data and Processes


Book Description

The 7 revised full papers, 11 revised medium-length papers, 6 revised short papers, and 7 demo papers, presented together with 10 poster/abstract papers describing late-breaking work, were carefully reviewed and selected from numerous submissions. Provenance has been recognized as important in a wide range of areas, including databases, workflows, knowledge representation and reasoning, and digital libraries. Accordingly, many disciplines have proposed a wide range of provenance models, techniques, and infrastructure for encoding and using provenance. The papers investigate many facets of data provenance, process documentation, data derivation, and data annotation.




Secure Data Provenance and Inference Control with Semantic Web


Book Description

With an ever-increasing amount of information on the web, it is critical to understand the pedigree, quality, and accuracy of your data. Using provenance, you can ascertain the quality of data based on its ancestral data and derivations, track back to the sources of errors, allow automatic re-enactment of derivations to update data, and provide attribution of the data source. Secure Data Provenance and Inference Control with Semantic Web supplies step-by-step instructions on how to secure the provenance of your data and keep it safe from inference attacks. It details the design and implementation of a policy engine for data provenance and presents case studies illustrating solutions in a typical distributed health care system for hospitals; although the case studies describe solutions in the health care domain, the methods presented can easily be applied to a range of other domains. The book covers Semantic Web technologies for representing and reasoning about the provenance of data, provides a unifying framework for securing provenance that can help address the various criteria of your information systems, and considers cloud computing technologies that can enhance the scalability of solutions. After reading this book you will be better prepared to keep up with the ongoing development of prototypes, products, tools, and standards for secure data management, the secure Semantic Web, secure web services, and secure cloud computing.




Intertwingled


Book Description

This engaging volume celebrates the life and work of Theodor Holm “Ted” Nelson, a pioneer and legendary figure from the history of early computing. Presenting contributions from world-renowned computer scientists and figures from the media industry, the book delves into hypertext, the docuverse, Xanadu and other products of Ted Nelson’s unique mind. Features: includes a cartoon and a sequence of poems created in Nelson’s honor, reflecting his wide-ranging and interdisciplinary intellect; presents peer histories, providing a sense of the milieu that resulted from Nelson’s ideas; contains personal accounts revealing what it is like to collaborate directly with Nelson; describes Nelson’s legacy from the perspective of his contemporaries from the computing world; provides a contribution from Ted Nelson himself. With a broad appeal spanning computer scientists, science historians and the general reader, this inspiring collection reveals the continuing influence of the original visionary of the World Wide Web.




For Attribution


Book Description

The growth of electronic publishing of literature has created new challenges, such as the need for mechanisms for citing online references in ways that assure discoverability and retrieval for many years into the future. The growth in online datasets presents related, yet more complex, challenges that depend upon the ability to reliably identify, locate, access, interpret, and verify the version, integrity, and provenance of digital datasets. Data citation standards and good practices can form the basis for increased incentives, recognition, and rewards for scientific data activities that are currently lacking in many fields of research. The rapidly expanding universe of online digital data holds the promise of allowing peer examination and review of conclusions or analyses based on experimental or observational data, the integration of data into new forms of scholarly publishing, and the ability for subsequent users to make new and unforeseen uses and analyses of the same data, either in isolation or in combination with other datasets. The problem of citing online data is complicated by the lack of established practices for referring to portions or subsets of data. A number of initiatives in different organizations, countries, and disciplines are already underway, and an important set of technical and policy approaches has already been launched by the U.S. National Information Standards Organization (NISO) and other standards bodies regarding persistent identifiers and online linking. The workshop summarized in For Attribution: Developing Data Attribution and Citation Practices and Standards: Summary of an International Workshop was organized by a steering committee under the National Research Council's (NRC's) Board on Research Data and Information, in collaboration with the international CODATA-ICSTI Task Group on Data Citation Standards and Practices. The purpose of the workshop was to examine a number of key issues related to data identification, attribution, citation, and linking, in order to help coordinate activities in this area internationally and to promote common practices and standards in the scientific community.




Principles of Security and Trust


Book Description

This book constitutes the refereed proceedings of the first International Conference on Principles of Security and Trust, POST 2012, held in Tallinn, Estonia, in March/April 2012, as part of ETAPS 2012, the European Joint Conferences on Theory and Practice of Software. The 20 papers, presented together with the abstract of an invited talk and a joint-ETAPS paper, were selected from a total of 67 submissions. Topics covered by the papers include: foundations of security, authentication, confidentiality, privacy and anonymity, authorization and trust, network security, protocols for security, language-based security, and quantitative security properties.




In Search of Elegance in the Theory and Practice of Computation


Book Description

This Festschrift volume, published in honour of Peter Buneman, contains contributions written by some of his colleagues, former students, and friends. In celebration of his distinguished career, a colloquium was held in Edinburgh, Scotland, on 27-29 October 2013. The articles presented herein belong to some of the many areas of Peter's research interests.




MEDINFO 2019: Health and Wellbeing e-Networks for All


Book Description

Combining and integrating cross-institutional data remains a challenge for both researchers and those involved in patient care. Patient-generated data can contribute valuable information to healthcare professionals by enabling monitoring under normal life conditions and by helping patients play a more active role in their own care. This book presents the proceedings of MEDINFO 2019, the 17th World Congress on Medical and Health Informatics, held in Lyon, France, from 25 to 30 August 2019. The theme of the conference was 'Health and Wellbeing: E-Networks for All', stressing the increasing importance of networks in healthcare on the one hand, and the patient-centered perspective on the other. Over 1100 manuscripts were submitted to the conference and, after a thorough review process by at least three reviewers and assessment by a scientific program committee member, 285 papers and 296 posters were accepted, together with 47 podium abstracts, 7 demonstrations, 45 panels, 21 workshops and 9 tutorials. All accepted paper and poster contributions are included in these proceedings. The papers are grouped under four thematic tracks: interpreting health and biomedical data, supporting care delivery, enabling precision medicine and public health, and the human element in medical informatics. The posters are divided into the same four groups. The book presents an overview of state-of-the-art informatics projects from multiple regions of the world; it will be of interest to anyone working in the field of medical informatics.