Data Management: a gentle introduction


Book Description

The overall objective of this book is to show that data management is an exciting and valuable capability that is worth time and effort. More specifically, it aims to achieve the following goals: 1. To give a “gentle” introduction to the field of DM by explaining and illustrating its core concepts, based on a mix of theory, practical frameworks such as TOGAF, ArchiMate, and DMBOK, and results from real-world assignments. 2. To offer guidance on how to build an effective DM capability in an organization. This is illustrated by various use cases, linked to the aforementioned theoretical exploration as well as the stories of practitioners in the field. The primary target group is busy professionals who are actively involved with managing data. The book is also aimed at Bachelor’s and Master’s students with an interest in data management. The book is industry-agnostic and should be applicable in different industries such as government, finance, and telecommunications. Typical roles for which this book is intended: data governance office/council members, data owners, data stewards, people involved with data governance (data governance board), enterprise architects, data architects, process managers, business analysts, and IT analysts. The book is divided into three main parts: theory, practice, and closing remarks. The chapters are as short and to the point as possible and make a clear distinction between the main text and the examples. Readers already familiar with the topic of a chapter can easily skip it and move on to the next.




Missing Data


Book Description

While most books on missing data focus on applying sophisticated statistical techniques to deal with the problem after it has occurred, this volume provides a methodology for the control and prevention of missing data. In clear, nontechnical language, the authors help the reader understand the different types of missing data and their implications for the reliability, validity, and generalizability of a study’s conclusions. They provide practical recommendations for designing studies that decrease the likelihood of missing data, and for addressing this important issue when reporting study results. When statistical remedies are needed, such as deletion procedures, augmentation methods, and single and multiple imputation procedures, the book also explains how to make sound decisions about their use. Patrick E. McKnight's website offers a periodically updated annotated bibliography on missing data and links to other Web resources that address missing data.




A Gentle Introduction to Effective Computing in Quantitative Research


Book Description

A practical guide to using modern software effectively in quantitative research in the social and natural sciences. This book offers a practical guide to the computational methods at the heart of most modern quantitative research. It will be essential reading for research assistants needing hands-on experience; students entering PhD programs in business, economics, and other social or natural sciences; and those seeking quantitative jobs in industry. No background in computer science is assumed; a learner need only have a computer with access to the Internet. Using the example as its principal pedagogical device, the book offers tried-and-true prototypes that illustrate many important computational tasks required in quantitative research. The best way to use the book is to read it at the computer keyboard and learn by doing. The book begins by introducing basic skills: how to use the operating system, how to organize data, and how to complete simple programming tasks. For its demonstrations, the book uses a UNIX-based operating system and a set of free software tools: the scripting language Python for programming tasks; the database management system SQLite; and the freely available R for statistical computing and graphics. The book goes on to describe particular tasks: analyzing data, implementing commonly used numerical and simulation methods, and creating extensions to Python to reduce cycle time. Finally, the book describes the use of LaTeX, a document markup language and preparation system.
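As a hypothetical illustration of the workflow this description sketches (Python driving the SQLite database engine), a minimal session might look like the following; the table `obs` and its columns are invented for the example and do not come from the book:

```python
# Minimal sketch, assuming only Python's built-in sqlite3 module;
# the table "obs" and its data are invented for this illustration.
import sqlite3

con = sqlite3.connect(":memory:")  # in-memory database keeps the example self-contained
con.execute("CREATE TABLE obs (subject TEXT, value REAL)")
con.executemany("INSERT INTO obs VALUES (?, ?)",
                [("a", 1.5), ("a", 2.5), ("b", 4.0)])

# A simple per-subject summary: the kind of small analysis the book begins with
averages = {subject: avg for subject, avg in con.execute(
    "SELECT subject, AVG(value) FROM obs GROUP BY subject")}
con.close()
print(averages)
```

The blurb's later stages, statistical work in R and document preparation in LaTeX, would pick up where such a script leaves off.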




Executing Data Quality Projects


Book Description

Executing Data Quality Projects, Second Edition presents a structured yet flexible approach for creating, improving, sustaining and managing the quality of data and information within any organization. Studies show that data quality problems are costing businesses billions of dollars each year, with poor data linked to waste and inefficiency, damaged credibility among customers and suppliers, and an organizational inability to make sound decisions. Help is here! This book describes a proven Ten Steps approach that combines a conceptual framework for understanding information quality with techniques, tools, and instructions for practically putting the approach to work – with the end result of high-quality trusted data and information, so critical to today's data-dependent organizations. The Ten Steps approach applies to all types of data and all types of organizations – for-profit in any industry, non-profit, government, education, healthcare, science, research, and medicine. This book includes numerous templates, detailed examples, and practical advice for executing every step. At the same time, readers are advised on how to select relevant steps and apply them in different ways to best address the many situations they will face. The layout allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, best practices, and warnings. The experience of actual clients and users of the Ten Steps provides real examples of outputs for the steps, plus highlighted sidebar case studies called "Ten Steps in Action."
This book uses projects as the vehicle for data quality work and uses the word broadly to include: 1) focused data quality improvement projects, such as improving data used in supply chain management, 2) data quality activities in other projects, such as building new applications and migrating data from legacy systems, integrating data because of mergers and acquisitions, or untangling data due to organizational breakups, and 3) ad hoc use of data quality steps, techniques, or activities in the course of daily work. The Ten Steps approach can also be used to enrich an organization's standard SDLC (whether sequential or Agile), and it complements general improvement methodologies such as Six Sigma or Lean. No two data quality projects are the same, but the flexible nature of the Ten Steps means the methodology can be applied to all. The new Second Edition highlights topics such as artificial intelligence and machine learning, Internet of Things, security and privacy, analytics, legal and regulatory requirements, data science, big data, data lakes, and cloud computing, among others, to show their dependence on data and information and why data quality is more relevant and critical now than ever before. - Includes concrete instructions, numerous templates, and practical advice for executing every step of the Ten Steps approach - Contains real examples from around the world, gleaned from the author's consulting practice and from those who implemented based on her training courses and the earlier edition of the book - Allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, and best practices - A companion Web site includes links to numerous data quality resources, including many of the templates featured in the text, quick summaries of key ideas from the Ten Steps methodology, and other tools and information that are available online




SAS Applications Programming


Book Description

Intended for use as a core text or to supplement any introductory or intermediate level statistics course, this book presents the basics of the SAS system in a well-paced, structured, non-threatening manner. It provides an introduction to the SAS system for data management, analysis, and reporting using the subset of the language ideally suited for beginning students, while at the same time serving as a useful reference for intermediate or advanced users. Students learn the language's power and flexibility with many real-world examples drawn from the author's industry experience. Beginning with an overview of the system, this text shows students how to read data, perform simple analyses, and produce simple reports. More complex topics are carefully introduced, guiding students to manage multiple datasets and write custom reports. More advanced statistical techniques such as correlation, regression, and analysis of variance are presented in later chapters.




A General Introduction to Data Analytics


Book Description

A guide to the principles and methods of data analysis that does not require knowledge of statistics or programming. A General Introduction to Data Analytics is an essential guide to understanding and using data analytics. This book is written in easy-to-understand terms and does not require familiarity with statistics or programming. The authors, noted experts in the field, explain the intuition behind the basic data analytics techniques. The text also contains exercises and illustrative examples. Written to be easily accessible to non-experts, the book motivates the need to analyze data. It explains how to visualize and summarize data, and how to find natural groups and frequent patterns in a dataset. The book also explores predictive tasks, whether classification or regression. Finally, the book discusses popular data analytics applications, like mining the web, information retrieval, social network analysis, working with text, and recommender systems. The learning resources offer: A guide to the reasoning behind data mining techniques A unique illustrative example that extends throughout all the chapters Exercises at the end of each chapter and larger projects at the end of each of the text’s two main parts Together with these learning resources, the book can serve as the guide for a 13-week course, one chapter per course topic. The book was written in a format that allows the understanding of the main data analytics concepts by non-mathematicians, non-statisticians, and non-computer scientists interested in getting an introduction to data science. A General Introduction to Data Analytics is a basic guide to data analytics written in highly accessible terms.
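To make the blurb's phrase "finding natural groups" concrete, here is a minimal sketch, not taken from the book, of a tiny one-dimensional k-means-style grouping in plain Python; the data values and the choice of two groups are invented for the illustration:

```python
# Minimal 1-D k-means sketch with two groups; the data are invented.
from statistics import mean

def two_groups(data, iters=10):
    """Split numeric data into two 'natural groups' by iterated reassignment."""
    centers = [min(data), max(data)]   # start from the extremes
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for x in data:
            # assign each point to its nearest center
            i = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            groups[i].append(x)
        # move each center to the mean of its group (keep it if the group is empty)
        centers = [mean(g) if g else c for g, c in zip(groups, centers)]
    return sorted(groups[0]), sorted(groups[1])

low, high = two_groups([1.0, 8.3, 1.2, 0.9, 8.0, 7.9])
print(low, high)
```

Real analytics libraries generalize this idea to many dimensions and many groups; the sketch only shows the intuition the book's description refers to.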




Data Management at Scale


Book Description

As data management and integration continue to evolve rapidly, storing all your data in one place, such as a data warehouse, is no longer scalable. In the very near future, data will need to be distributed and available for several technological solutions. With this practical book, you’ll learn how to migrate your enterprise from a complex and tightly coupled data landscape to a more flexible architecture ready for the modern world of data consumption. Executives, data architects, analytics teams, and compliance and governance staff will learn how to build a modern scalable data landscape using the Scaled Architecture, which you can introduce incrementally without a large upfront investment. Author Piethein Strengholt provides blueprints, principles, observations, best practices, and patterns to get you up to speed. Examine data management trends, including technological developments, regulatory requirements, and privacy concerns Go deep into the Scaled Architecture and learn how the pieces fit together Explore data governance and data security, master data management, self-service data marketplaces, and the importance of metadata




An Introduction to Data Science


Book Description

An Introduction to Data Science is an easy-to-read data science textbook for those with no prior coding knowledge. It features exercises at the end of each chapter, author-generated tables and visualizations, and R code examples throughout.




Metadata Management for Information Control and Business Success


Book Description

By describing how to establish metadata management within an organization, this book provides examples of data structure architectures and reviews issues associated with metadata management in relation to the Internet and data warehousing. It helps readers control the factors that make data usable throughout an organization and manage data so that it becomes a valuable corporate asset. The book examines real-world business departments that can benefit from this approach and ways in which sets ...




Web Data Management


Book Description

The Internet and World Wide Web have revolutionized access to information. Users now store information across multiple platforms, from personal computers to smartphones and websites. As a consequence, data management concepts, methods, and techniques are increasingly focused on distribution concerns. Now that information largely resides in the network, so do the tools that process this information. This book explains the foundations of XML with a focus on data distribution. It covers the many facets of distributed data management on the Web, such as description logics, that are already emerging in today's data integration applications and herald tomorrow's semantic Web. It also introduces the machinery used to manipulate the unprecedented amount of data collected on the Web. Several 'Putting into Practice' chapters describe detailed practical applications of the technologies and techniques. The book will serve as an introduction to the new global information systems for Web professionals and master's-level courses.
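As a small, hypothetical illustration of the tree-shaped XML data this description is concerned with (the document below is invented, not an example from the book), Python's standard-library ElementTree can parse and query such data:

```python
# Minimal XML sketch using Python's built-in xml.etree; the document is invented.
import xml.etree.ElementTree as ET

doc = """<catalog>
  <book lang="en"><title>Web Data Management</title></book>
  <book lang="fr"><title>Gestion de donnees Web</title></book>
</catalog>"""

root = ET.fromstring(doc)
# All titles, in document order
titles = [b.findtext("title") for b in root.findall("book")]
# Limited XPath predicate support: only the English-language entries
english = [b.findtext("title") for b in root.findall("book[@lang='en']")]
print(titles, english)
```

The book's distributed-data machinery (XPath, XQuery, and beyond) generalizes this kind of tree navigation to data spread across the Web.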