Classification as a Tool for Research


Book Description

Clustering and Classification, Data Analysis, Data Handling and Business Intelligence are research areas at the intersection of statistics, mathematics, computer science and artificial intelligence. They cover general methods and techniques that can be applied to a vast set of applications, such as business and economics, marketing and finance, engineering, linguistics, archaeology, musicology, biology and medical science. This volume contains the revised versions of selected papers presented during the 11th Biennial IFCS Conference and 33rd Annual Conference of the German Classification Society (Gesellschaft für Klassifikation - GfKl). The conference was organized in cooperation with the International Federation of Classification Societies (IFCS), and was hosted by Dresden University of Technology, Germany, in March 2009.




Classification and Data Analysis


Book Description

This volume gathers peer-reviewed contributions on data analysis, classification and related areas presented at the 28th Conference of the Section on Classification and Data Analysis of the Polish Statistical Association, SKAD 2019, held in Szczecin, Poland, on September 18–20, 2019. Providing a balance between theoretical and methodological contributions and empirical papers, it covers a broad variety of topics, ranging from multivariate data analysis, classification and regression, symbolic (and other) data analysis, visualization, data mining, and computer methods to composite measures, and numerous applications of data analysis methods in economics, finance and other social sciences. The book is intended for a wide audience, including researchers at universities and research institutions, graduate and doctoral students, practitioners, data scientists and employees in public statistical institutions.




Validity and Inter-Rater Reliability Testing of Quality Assessment Instruments


Book Description

The internal validity of a study reflects the extent to which the design and conduct of the study have prevented biases. One of the key steps in a systematic review is assessment of a study's internal validity, or potential for bias. This assessment serves to: (1) identify the strengths and limitations of the included studies; (2) investigate, and potentially explain, heterogeneity in findings across different studies included in a systematic review; and (3) grade the strength of evidence for a given question. The risk of bias assessment directly informs one of four key domains considered when assessing the strength of evidence. With the increase in the number of published systematic reviews and the development of systematic review methodology over the past 15 years, close attention has been paid to the methods for assessing internal validity. Until recently this has been referred to as "quality assessment" or "assessment of methodological quality." In this context "quality" refers to "the confidence that the trial design, conduct, and analysis has minimized or avoided biases in its treatment comparisons." To facilitate the assessment of methodological quality, a plethora of tools has emerged. Some of these tools were developed for specific study designs (e.g., randomized controlled trials (RCTs), cohort studies, case-control studies), while others were intended to be applied to a range of designs. The tools often incorporate characteristics that may be associated with bias; however, many tools also contain elements related to reporting (e.g., was the study population described) and design (e.g., was a sample size calculation performed) that are not related to bias. The Cochrane Collaboration recently developed a tool to assess the potential risk of bias in RCTs. The Risk of Bias (ROB) tool was developed to address some of the shortcomings of existing quality assessment instruments, including over-reliance on reporting rather than methods.
Several systematic reviews have catalogued and critiqued the numerous tools available to assess the methodological quality or risk of bias of primary studies. In summary, few existing tools have undergone extensive inter-rater reliability or validity testing. Moreover, the focus of much of the tool development or testing that has been done has been on criterion or face validity. Therefore, it is unknown whether, or to what extent, the summary assessments based on these tools differentiate between studies with biased and unbiased results (i.e., studies that may over- or underestimate treatment effects). There is a clear need for inter-rater reliability testing of different tools in order to enhance consistency in their application and interpretation across different systematic reviews. Further, validity testing is essential to ensure that the tools being used can identify studies with biased results. Finally, there is a need to determine inter-rater reliability and validity in order to support the uptake and use of individual tools that are recommended by the systematic review community, and specifically the ROB tool within the Evidence-based Practice Center (EPC) Program. In this project we focused on two tools that are commonly used in systematic reviews. The Cochrane ROB tool was designed for RCTs and is the instrument recommended by The Cochrane Collaboration for use in systematic reviews of RCTs. The Newcastle-Ottawa Scale is commonly used for nonrandomized studies, specifically cohort and case-control studies.
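Inter-rater reliability of the kind described above is commonly quantified with chance-corrected agreement statistics such as Cohen's kappa, which compares the observed agreement between two raters against the agreement expected by chance. The sketch below is illustrative only — the function name and the sample "low/high risk of bias" ratings are hypothetical, not taken from the book:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e the agreement expected by chance
    (from each rater's marginal label frequencies).
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of marginal frequencies, summed over labels.
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two reviewers rate four studies for risk of bias.
reviewer_1 = ["low", "low", "high", "high"]
reviewer_2 = ["low", "low", "high", "low"]
print(cohens_kappa(reviewer_1, reviewer_2))  # → 0.5
```

Values near 1 indicate near-perfect agreement beyond chance, values near 0 indicate agreement no better than chance; published guidelines for the ROB tool and the Newcastle-Ottawa Scale typically report kappa (or weighted variants) per assessment domain.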




Theory and Practice of Business Intelligence in Healthcare


Book Description

Business intelligence supports managers in enterprises in making informed business decisions at various levels and in domains such as healthcare. These technologies can handle large volumes of structured and unstructured data (big data) in the healthcare industry. Because of the complex nature of healthcare data and the significant impact of healthcare data analysis, it is important to understand both the theories and practices of business intelligence in healthcare. Theory and Practice of Business Intelligence in Healthcare is a collection of innovative research that introduces data mining, modeling, and analytic techniques to health and healthcare data; articulates the value of big volumes of data to health and healthcare; evaluates business intelligence tools; and explores business intelligence use and applications in healthcare. While highlighting topics including digital health, operations intelligence, and patient empowerment, this book is ideally designed for healthcare professionals, IT consultants, hospital directors, data management staff, data analysts, hospital administrators, executives, managers, academicians, students, and researchers seeking current research on the digitization of health records and health systems integration.




Data Classification


Book Description

Comprehensive coverage of the entire area of classification. Research on the problem of classification tends to be fragmented across such areas as pattern recognition, databases, data mining, and machine learning. Addressing the work of these different communities in a unified way, Data Classification: Algorithms and Applications explores the underlying…




Handbook of Research on Geoinformatics


Book Description

"This book discusses the complete range of contemporary research topics such as computer modeling, geometry, geoprocessing, and geographic information systems"--Provided by publisher.




Research Anthology on Usage and Development of Open Source Software


Book Description

The rapid growth of computer technology has kept software development in a constant state of change and advancement. This advancement meant that many types of software would be developed in order to excel in usability and efficiency. Among these different types of software is open source software, which grants users permission to use, study, change, and distribute it freely. Due to its availability, open source software has quickly become a valuable asset to the world of computer technology and across various disciplines including education, business, and library science. The Research Anthology on Usage and Development of Open Source Software presents comprehensive research on the design and development of open source software as well as the ways in which it is used. The text discusses in depth the way in which this computer software has been made into a collaborative effort for the advancement of software technology. Discussing topics such as ISO standards, big data, fault prediction, open collaboration, and software development, this anthology is essential for computer engineers, software developers, IT specialists and consultants, instructors, librarians, managers, executives, professionals, academicians, researchers, and students.