Registries for Evaluating Patient Outcomes


Book Description

This User’s Guide is intended to support the design, implementation, analysis, interpretation, and quality evaluation of registries created to increase understanding of patient outcomes. For the purposes of this guide, a patient registry is an organized system that uses observational study methods to collect uniform data (clinical and other) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves one or more predetermined scientific, clinical, or policy purposes. A registry database is a file (or files) derived from the registry. Although registries can serve many purposes, this guide focuses on registries created for one or more of the following purposes: to describe the natural history of disease, to determine clinical effectiveness or cost-effectiveness of health care products and services, to measure or monitor safety and harm, and/or to measure quality of care. Registries are classified according to how their populations are defined. For example, product registries include patients who have been exposed to biopharmaceutical products or medical devices. Health services registries consist of patients who have had a common procedure, clinical encounter, or hospitalization. Disease or condition registries are defined by patients having the same diagnosis, such as cystic fibrosis or heart failure. The User’s Guide was created by researchers affiliated with AHRQ’s Effective Health Care Program, particularly those who participated in AHRQ’s DEcIDE (Developing Evidence to Inform Decisions About Effectiveness) program. Chapters were subject to multiple internal and external independent reviews.




Executing Data Quality Projects


Book Description

Executing Data Quality Projects, Second Edition presents a structured yet flexible approach for creating, improving, sustaining, and managing the quality of data and information within any organization. Studies show that data quality problems cost businesses billions of dollars each year, with poor data linked to waste and inefficiency, damaged credibility among customers and suppliers, and an organizational inability to make sound decisions. Help is here! This book describes a proven Ten Steps approach that combines a conceptual framework for understanding information quality with techniques, tools, and instructions for practically putting the approach to work – with the end result of high-quality, trusted data and information, so critical to today's data-dependent organizations.

The Ten Steps approach applies to all types of data and all types of organizations – for-profit in any industry, non-profit, government, education, healthcare, science, research, and medicine. The book includes numerous templates, detailed examples, and practical advice for executing every step. At the same time, readers are advised on how to select relevant steps and apply them in different ways to best address the many situations they will face. The layout allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, best practices, and warnings. The experiences of actual clients and users of the Ten Steps provide real examples of outputs for the steps, plus highlighted sidebar case studies called Ten Steps in Action.

The book uses projects as the vehicle for data quality work and uses the word "project" broadly to include: 1) focused data quality improvement projects, such as improving data used in supply chain management; 2) data quality activities in other projects, such as building new applications, migrating data from legacy systems, integrating data because of mergers and acquisitions, or untangling data due to organizational breakups; and 3) ad hoc use of data quality steps, techniques, or activities in the course of daily work. The Ten Steps approach can also be used to enrich an organization's standard SDLC (whether sequential or Agile), and it complements general improvement methodologies such as Six Sigma or Lean. No two data quality projects are the same, but the flexible nature of the Ten Steps means the methodology can be applied to all. The new Second Edition highlights topics such as artificial intelligence and machine learning, the Internet of Things, security and privacy, analytics, legal and regulatory requirements, data science, big data, data lakes, and cloud computing, among others, to show their dependence on data and information and why data quality is more relevant and critical now than ever before.

- Includes concrete instructions, numerous templates, and practical advice for executing every step of the Ten Steps approach
- Contains real examples from around the world, gleaned from the author's consulting practice and from those who implemented the approach based on her training courses and the earlier edition of the book
- Allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, and best practices
- A companion Web site includes links to numerous data quality resources, including many of the templates featured in the text, quick summaries of key ideas from the Ten Steps methodology, and other tools and information that are available online
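The Ten Steps methodology itself is tool-agnostic, but the kind of data quality assessment it formalizes (completeness, validity, uniqueness) is easy to sketch in code. The following is a minimal, hypothetical illustration in Python using pandas; the column names and rules are invented for the example and are not taken from the book.

```python
# Minimal data-quality profiling sketch (hypothetical columns and rules,
# not from the book): check completeness, validity, and uniqueness.
import pandas as pd

def profile_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-column completeness and distinct-value counts."""
    return pd.DataFrame({
        "non_null": df.notna().sum(),
        "completeness_pct": (df.notna().mean() * 100).round(1),
        "distinct_values": df.nunique(),
    })

if __name__ == "__main__":
    customers = pd.DataFrame({
        "customer_id": [101, 102, 102, 104],           # duplicate id
        "email": ["a@x.com", None, "c@x", "d@x.com"],  # missing and malformed values
        "country": ["US", "US", None, "DE"],
    })

    print(profile_quality(customers))

    # Validity rule: emails must contain "@" followed by a dot somewhere after it.
    valid_email = customers["email"].str.contains(r"@.+\.", na=False)
    print("invalid or missing emails:", (~valid_email).sum())

    # Uniqueness rule: customer_id should be unique.
    print("duplicate ids:", customers["customer_id"].duplicated().sum())
```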




Data Management for Researchers


Book Description

A comprehensive guide to everything scientists need to know about data management, this book is essential for researchers who need to learn how to organize, document, and take care of their own data. Researchers in all disciplines are faced with the challenge of managing the growing amounts of digital data that are the foundation of their research. Kristin Briney offers practical advice and clearly explains policies and principles in an accessible and in-depth text that will allow researchers to understand and achieve the goal of better research data management. Data Management for Researchers includes sections on:

* The data problem – an introduction to the growing importance and challenges of using digital data in research. Covers both the inherent problems with managing digital information and how the research landscape is changing to give more value to research datasets and code.
* The data lifecycle – a framework for data's place within the research process and how data's role is changing. Greater emphasis on data sharing and data reuse will change not only the way we conduct research but also how we manage research data.
* Planning for data management – covers the many aspects of data management and how to put them together in a data management plan. This section also includes sample data management plans.
* Documenting your data – an often overlooked part of the data management process, but one that is critical to good management; data without documentation are frequently unusable.
* Organizing your data – explains how to keep your data in order using organizational systems and file naming conventions. This section also covers using a database to organize and analyze content.
* Improving data analysis – covers managing information through the analysis process. This section starts by comparing the management of raw and analyzed data and then describes ways to make analysis easier, such as spreadsheet best practices. It also examines practices for research code, including version control systems.
* Managing secure and private data – many researchers are dealing with data that require extra security. This section outlines which data fall into this category and some of the policies that apply, before addressing best practices for keeping data secure.
* Short-term storage – deals with the practical matters of storage and backup and covers the many options available. This section also goes through best practices to ensure that data are not lost.
* Preserving and archiving your data – digital data can have a long life if properly cared for. This section covers managing data in the long term, including choosing good file formats and media, as well as determining who will manage the data after the end of the project.
* Sharing/publishing your data – addresses how to make data sharing across research groups easier, as well as how and why to publicly share data. This section covers intellectual property and licenses for datasets, before ending with the altmetrics that measure the impact of publicly shared data.
* Reusing data – as more data are shared, it becomes possible to use outside data in your research. This section discusses strategies for finding datasets and lays out how to cite data once you have found them.

This book is designed for active scientific researchers, but it is useful for anyone who wants to get more from their data: academics, educators, professionals, or anyone who teaches data management, sharing, and preservation.
"An excellent practical treatise on the art and practice of data management, this book is essential to any researcher, regardless of subject or discipline." —Robert Buntrock, Chemical Information Bulletin




Best Practices in Data Cleaning


Book Description

Many researchers jump straight from data collection to data analysis without realizing how analyses and hypothesis tests can go profoundly wrong without clean data. This book provides a clear, step-by-step process for examining and cleaning data in order to decrease error rates and increase both the power and replicability of results. Jason W. Osborne, author of Best Practices in Quantitative Methods (SAGE, 2008), provides easily implemented suggestions that are research-based and will motivate change in practice by empirically demonstrating, for each topic, the benefits of following best practices and the potential consequences of not following these guidelines. If your goal is to do the best research you can do, draw conclusions that are most likely to be accurate representations of the population(s) you wish to speak about, and report results that are most likely to be replicated by other researchers, then this basic guidebook will be indispensable.
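Osborne's recommendations are statistical rather than tied to any particular software, but the screening steps he advocates (checking for missing data, impossible values, and extreme scores before analysis) translate naturally into code. The following is a rough, hypothetical sketch using pandas; the columns, values, and cutoffs are invented for illustration.

```python
# Rough, hypothetical pre-analysis screening sketch: missing values,
# out-of-range values, and extreme scores. Columns, values, and the
# z-score cutoff are assumptions, not taken from the book.
import pandas as pd

scores = pd.DataFrame({
    "age":  [17, 18, 16, 210, 17, 18, 16, 17],    # 210 is an impossible value (likely a typo)
    "test": [82, 79, None, 88, 90, 76, 84, 300],  # a missing value and an extreme score
})

# 1. Missing data: report how much is missing before deciding how to handle it.
print(scores.isna().sum())

# 2. Range checks: flag values outside plausible bounds for the sample.
print(scores[(scores["age"] < 10) | (scores["age"] > 25)])

# 3. Extreme scores: flag observations far from the mean (|z| > 2 here;
#    the appropriate cutoff is a judgment call).
z = (scores["test"] - scores["test"].mean()) / scores["test"].std()
print(scores[z.abs() > 2])
```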




The Ultimate School Counselor's Guide to Assessment and Data Collection


Book Description

Showcases assessments that specifically support the unique work of school counselors! Written specifically for school counselors and those in training, this is the first book to highlight the use of assessment and data collection to effectively advocate for student success. It bridges the gap in relevant knowledge and skills by not only delineating the requirements for formulating a data-driven approach, but also presenting actual assessments that can immediately be implemented. Underscoring the professional and ethical responsibilities of practicing school counselors to be data-driven, the book delivers the guidance and instruments needed to access multiple levels of data, including individual student data, school-level data, school counseling program-level data, and data regarding school counselors' practices or beliefs.

This practical, user-friendly book is organized step by step, starting with foundational knowledge and progressing towards application. It introduces readers to both formal and informal assessments and provides examples of how to integrate assessments within comprehensive school counseling programs (CSCPs). It addresses a variety of approaches to assessment and data collection across the domains of academic, career, and social-emotional development, and examines needs assessment and program evaluation to drive the development and implementation of a CSCP. Additionally, the resource explains each type of data, reinforced with examples across domains and school levels. Also included are technology tools that can aid in the assessment and data collection process, as well as accountability reporting.

Key Features:
- Provides specific, concrete steps for using assessment and data collection to advocate for student success and develop effective CSCPs
- Includes examples of data collection tools, assessments, charts, tables, and illustrations
- Delivers hands-on application tasks throughout
- Delineates valid and reliable instruments to bolster effectiveness
- Includes a downloadable appendix with formal assessments and templates to complete tasks described throughout the text




Cochrane Handbook for Systematic Reviews of Interventions


Book Description

Healthcare providers, consumers, researchers, and policy makers are inundated with unmanageable amounts of information, including evidence from healthcare research. Few have the time and resources to find, appraise, and interpret this evidence and to incorporate it into healthcare decisions. Cochrane Reviews respond to this challenge by identifying, appraising, and synthesizing research-based evidence and presenting it in a standardized format, published in The Cochrane Library (www.thecochranelibrary.com). The Cochrane Handbook for Systematic Reviews of Interventions contains methodological guidance for the preparation and maintenance of Cochrane intervention reviews. Written in a clear and accessible format, it is the essential manual for all those preparing, maintaining, and reading Cochrane reviews. Many of the principles and methods described here are also appropriate for systematic reviews of other types of research and for systematic reviews of interventions undertaken by others. It is hoped, therefore, that this book will be invaluable to all those who want to understand the role of systematic reviews, critically appraise published reviews, or perform reviews themselves.




Data Mining: Concepts and Techniques


Book Description

Data Mining: Concepts and Techniques provides the concepts and techniques for processing gathered data or information, which can be used in a wide range of applications. Specifically, it explains data mining and the tools used in discovering knowledge from collected data, a process also referred to as knowledge discovery from data (KDD). It focuses on the feasibility, usefulness, effectiveness, and scalability of techniques for large data sets. After describing data mining, this edition explains the methods for getting to know, preprocessing, processing, and warehousing data. It then presents information about data warehouses, online analytical processing (OLAP), and data cube technology. Next, it describes the methods involved in mining frequent patterns, associations, and correlations in large data sets. The book details the methods for data classification and introduces the concepts and methods for data clustering. The remaining chapters discuss outlier detection and the trends, applications, and research frontiers in data mining. This book is intended for computer science students, application developers, business professionals, and researchers who seek information on data mining.

- Presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects
- Addresses advanced topics such as mining object-relational databases, spatial databases, multimedia databases, time-series databases, text databases, the World Wide Web, and applications in several fields
- Provides a comprehensive, practical look at the concepts and techniques you need to get the most out of your data
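The book presents its algorithms in pseudo-code rather than in any particular language. Purely as an illustrative sketch (not code from the book), the snippet below runs k-means, one of the partitioning methods covered under cluster analysis, on a toy two-dimensional data set using scikit-learn; the data and parameters are invented for the example.

```python
# Illustrative sketch only (not from the book): k-means clustering, a
# partitioning method the text covers, applied to a toy 2-D data set.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "natural" groups of points.
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("cluster centers:\n", kmeans.cluster_centers_.round(2))
```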




MongoDB: The Definitive Guide


Book Description

Manage the huMONGOus amount of data collected through your web application with MongoDB. This authoritative introduction, written by a core contributor to the project, shows you the many advantages of using document-oriented databases, and demonstrates how this reliable, high-performance system allows for almost infinite horizontal scalability. This updated second edition provides guidance for database developers, advanced configuration for system administrators, and an overview of the concepts and use cases for other people on your project. Ideal for NoSQL newcomers and experienced MongoDB users alike, this guide provides numerous real-world schema design examples.

- Get started with MongoDB core concepts and vocabulary
- Perform basic write operations at different levels of safety and speed
- Create complex queries, with options for limiting, skipping, and sorting results
- Design an application that works well with MongoDB
- Aggregate data, including counting, finding distinct values, grouping documents, and using MapReduce
- Gather and interpret statistics about your collections and databases
- Set up replica sets and automatic failover in MongoDB
- Use sharding to scale horizontally, and learn how it impacts applications
- Delve into monitoring, security and authentication, backup/restore, and other administrative tasks
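The book's own examples use the mongo shell and various language drivers; as a quick, hypothetical taste of the query options listed above (filtering, sorting, skipping, limiting) and of simple aggregation, here is a sketch using the PyMongo driver. The database, collection, and field names are invented, and a local MongoDB server is assumed.

```python
# Hypothetical sketch using the PyMongo driver (database, collection, and field
# names are invented); assumes a MongoDB server running on localhost:27017.
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")
posts = client["blog"]["posts"]

# Basic write: insert a document.
posts.insert_one({"author": "ada", "title": "Hello", "votes": 3})

# Complex query: filter, then sort, skip, and limit the results.
top_posts = (
    posts.find({"votes": {"$gte": 1}})
         .sort("votes", DESCENDING)
         .skip(0)
         .limit(10)
)
for doc in top_posts:
    print(doc["title"], doc["votes"])

# Simple aggregation: count posts per author.
for row in posts.aggregate([{"$group": {"_id": "$author", "n": {"$sum": 1}}}]):
    print(row)
```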




The SAGE Handbook of Qualitative Data Collection


Book Description

The SAGE Handbook of Qualitative Data Collection is a timely overview of the methodological developments available to social science researchers, covering key themes including:

- Concepts, Contexts, Basics
- Verbal Data
- Digital and Internet Data
- Triangulation and Mixed Methods
- Collecting Data in Specific Populations




Safety and Health for Engineers


Book Description

A comprehensive resource for making products, facilities, processes, and operations safe for workers, users, and the public.

Ensuring the health and safety of individuals in the workplace is vital on an interpersonal level but is also crucial to limiting the liability of companies in the event of an onsite injury. The Bureau of Labor Statistics reported over 4,700 fatal work injuries in the United States in 2020, most frequently in transportation-related incidents. The same year, approximately 2.7 million workplace injuries and illnesses were reported by private industry employers. According to the National Safety Council, the cost in lost wages, productivity, and medical and administrative expenses is close to 1.2 trillion dollars in the US alone. It is imperative, by law and ethics, for engineers and safety and health professionals to drive down these statistics by creating safe workplaces and safe products, as well as maintaining a safe environment.

Safety and Health for Engineers is considered the gold standard for engineers in all specialties, teaching an understanding of the many components necessary to achieve safe workplaces, products, facilities, and methods, and to secure safety for workers, users, and the public. Each chapter offers information relevant to helping safety professionals and engineers achieve the first canon of professional ethics: to protect the health, safety, and welfare of the public. The textbook examines the fundamentals of safety, legal aspects, hazard recognition and control, the human element, and techniques to manage safety decisions. In doing so, it covers the primary safety essentials necessary for certification examinations for practitioners.

Readers of the fourth edition of Safety and Health for Engineers will also find:
- Updates to all chapters, informed by research and references gathered since the last publication
- The most up-to-date information on current policy, certifications, regulations, agency standards, and the impact of new technologies such as wearable technology, automation in transportation, and artificial intelligence
- New international information, including U.S. and foreign standards agencies, professional societies, and other organizations worldwide
- Expanded sections with real-world applications, exercises, and 164 case studies
- An extensive list of references to help readers find more detail on chapter contents
- A solution manual available to qualified instructors

Safety and Health for Engineers is an ideal textbook for courses in safety engineering around the world, in undergraduate or graduate studies, or in professional development learning. It is also a useful reference for professionals in engineering, safety, health, and associated fields who are preparing for credentialing examinations in safety and health.