DB2 11 for z/OS Technical Overview


Book Description

IBM® DB2® Version 11.1 for z/OS® (DB2 11 for z/OS, or just DB2 11 throughout this book) is the fifteenth release of DB2 for IBM MVS™. It brings performance improvements, synergy with IBM System z® hardware, and opportunities to drive business value in the following areas.

DB2 11 can provide unmatched reliability, availability, and scalability:
- Improved data sharing performance and efficiency
- Less downtime by removing growth limitations
- Simplified management, improved autonomics, and reduced planned outages

DB2 11 can save money and save time:
- Aggressive CPU reduction goals
- Additional utilities performance and CPU improvements
- Time and resource savings through new autonomic and application development capabilities

DB2 11 provides simpler, faster migration:
- SQL compatibility that decouples system migration from application migration
- Access path stability improvements
- Better application performance with SQL and XML enhancements

DB2 11 includes enhanced business analytics:
- Faster, more efficient performance for query workloads
- Accelerator enhancements
- More efficient inline database scoring that enables predictive analytics

The DB2 11 environment is available either for new installations of DB2 or for migrations from DB2 10 for z/OS subsystems only. This IBM Redbooks® publication introduces the enhancements made available with DB2 11 for z/OS. The contents help database administrators to understand the new functions and performance enhancements, to plan for ways to use the key new capabilities, and to justify the investment in installing or migrating to DB2 11.




Enabling Real-time Analytics on IBM z Systems Platform


Book Description

For online transaction processing (OLTP) workloads, the IBM® z Systems™ platform, with IBM DB2®, data sharing, Workload Manager (WLM), geoplex, and other high-end features, is the widely acknowledged leader. Most customers now integrate business analytics with OLTP, for example by running scoring functions from a transactional context for real-time analytics, or by applying machine-learning algorithms to enterprise data that is kept on the mainframe. As a result, IBM continues to invest so that clients can keep the complete lifecycle of data analysis, modeling, and scoring under z Systems control in a cost-efficient way, while preserving the qualities of service in availability, security, and reliability that z Systems solutions offer. Because of the changed architecture and tighter integration, IBM has shown in a customer proof of concept that a particular client was able to achieve an orders-of-magnitude improvement in performance, allowing that client's data scientists to investigate the data in a more interactive way.

Open technologies such as the Predictive Model Markup Language (PMML) can help customers update single components instead of being forced to replace everything at once. As a result, you can combine your preferred tool for model generation (such as SAS Enterprise Miner or IBM SPSS® Modeler) with a different technology for model scoring (such as Zementis, a company focused on PMML scoring). IBM SPSS Modeler is a leading data mining workbench that can apply various algorithms for data preparation, cleansing, statistics, visualization, machine learning, and predictive analytics. It reflects over 20 years of experience and continued development, and it is integrated with z Systems. With IBM DB2 Analytics Accelerator 5.1 and SPSS Modeler 17.1, complete predictive model creation, including data transformation, can be done within DB2 Analytics Accelerator. So, instead of moving the data to a distributed environment, algorithms can be pushed to the data, using the cost-efficient DB2 Analytics Accelerator for the required resource-intensive operations.

This IBM Redbooks® publication explains the overall z Systems architecture, how the components can be installed and customized, how the new IBM DB2 Analytics Accelerator loader can help efficient data loading for z Systems data and external data, how in-database transformation, in-database modeling, and in-transaction real-time scoring can be used, and what other related technologies are available.

This book is intended for technical specialists, architects, and data scientists who want to use the technology on the z Systems platform. Most of the technologies described in this book require IBM DB2 for z/OS®. For acceleration of the data investigation, data transformation, and data modeling process, DB2 Analytics Accelerator is required. Most value can be achieved if most of the data already resides on the z Systems platform, although adding external data (for example, from social sources) poses no problem at all.
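To make the PMML-based decoupling described above concrete, here is a minimal illustrative sketch, not taken from the book, of scoring a model that a modeling workbench has exported to PMML. It assumes the open-source pypmml Python package as the scoring engine; the file name and field names are hypothetical placeholders.

```python
# Illustrative sketch only: scores a PMML model exported by a modeling tool
# (for example, SPSS Modeler or SAS Enterprise Miner). The pypmml package,
# the file name, and the field names are assumptions for this example,
# not details taken from the book.
from pypmml import Model

# Load the exchanged PMML artifact; any PMML-compliant scoring engine
# could perform this step in place of pypmml.
model = Model.load("churn_model.pmml")

# Score one record. In a transactional context, a call like this would sit
# in the application's request path to produce a real-time prediction.
record = {"age": 42, "tenure_months": 18, "avg_monthly_spend": 57.30}
print(model.predict(record))
```

Because the scoring step depends only on the exchanged PMML document, the model generation tool and the scoring technology can be chosen and upgraded independently, which is the flexibility the description above attributes to open formats such as PMML.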




A Handbook of Statistical Analyses Using R, Second Edition


Book Description

A Proven Guide for Easily Using R to Effectively Analyze Data

Like its bestselling predecessor, A Handbook of Statistical Analyses Using R, Second Edition provides a guide to data analysis using the R system for statistical computing. Each chapter includes a brief account of the relevant statistical background, along with appropriate references.

New to the Second Edition:
- New chapters on graphical displays, generalized additive models, and simultaneous inference
- A new section on generalized linear mixed models that completes the discussion on the analysis of longitudinal data where the response variable does not have a normal distribution
- New examples and additional exercises in several chapters
- A new version of the HSAUR package (HSAUR2), which is available from CRAN

This edition continues to offer straightforward descriptions of how to conduct a range of statistical analyses using R, from simple inference to recursive partitioning to cluster analysis. Focusing on how to use R and interpret the results, it provides students and researchers in many disciplines with a self-contained means of using R to analyze their data.




Confirmatory Factor Analysis for Applied Research, Second Edition


Book Description

This accessible book has established itself as the go-to resource on confirmatory factor analysis (CFA) for its emphasis on practical and conceptual aspects rather than mathematics or formulas. Detailed, worked-through examples drawn from psychology, management, and sociology studies illustrate the procedures, pitfalls, and extensions of CFA methodology. The text shows how to formulate, program, and interpret CFA models using popular latent variable software packages (LISREL, Mplus, EQS, SAS/CALIS); understand the similarities ...




Basic Statistics


Book Description

This introductory statistics textbook for non-statisticians covers basic principles, concepts, and methods routinely used in applied research. What sets this text apart is its incorporation of the many advances and insights from the last half century when explaining basic principles. These advances provide a foundation for vastly improving our ability to detect and describe differences among groups and associations among variables, and they give a deeper and more accurate sense of when basic methods perform well and when they fail. Assuming no prior training, Wilcox introduces students to basic principles and concepts in a simple manner that makes these advances and insights, as well as standard ideas and methods, easy to understand and appreciate.




Monte-Carlo Simulation-Based Statistical Modeling


Book Description

This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.




Guidelines for Producing Statistics on Violence Against Women


Book Description

This publication provides national statistical offices with detailed guidance on how to collect, process, disseminate, and analyse data on violence against women. It lays out the role of statistical surveys in meeting policy objectives related to violence against women; the essential features of these surveys; the steps required to plan, organize, and execute them; the concepts that are essential for ensuring the reliable, valid, and consistent measurement of women's experiences in accordance with core topics; and a plan for data analysis and dissemination.




Analyzing Media Messages


Book Description

Analyzing Media Messages provides a comprehensive and comprehensible guide to conducting content analysis research. It establishes a formal definition of quantitative content analysis; gives step-by-step instruction on designing a content analysis study; and explores in depth research questions that recur in content analysis, in such areas as measurement, sampling, reliability, data analysis, validity, and technology. This Second Edition maintains the concise, accessible approach of the first edition while offering an updated discussion and new examples. The goal of this resource is to make content analysis understandable, and to produce a useful guide for novice and experienced researchers alike. Accompanied by detailed, practical examples of current and classic applications, this volume is appropriate for use as a primary text for content analysis coursework, or as a supplemental text in research methods courses. It is also an indispensable reference for researchers in mass communication fields, political science, and other social and behavioral sciences.




Data Mining with Rattle and R


Book Description

Data mining is the art and science of intelligent data analysis. By building knowledge from information, data mining adds considerable value to the ever-increasing stores of electronic data that abound today. In performing data mining, many decisions need to be made regarding the choice of methodology, data, tools, and algorithms. Throughout this book the reader is introduced to the basic concepts and some of the more popular algorithms of data mining. With a focus on the hands-on, end-to-end process of data mining, Williams guides the reader through various capabilities of the easy-to-use, free, and open source Rattle data mining software, built on the sophisticated R statistical software. The focus on doing data mining rather than just reading about data mining is refreshing. The book covers data understanding, data preparation, data refinement, model building, model evaluation, and practical deployment. The reader will learn to rapidly deliver a data mining project using software easily installed for free from the Internet. Coupling Rattle with R delivers a very sophisticated data mining environment with all the power, and more, of the many commercial offerings.




Mostly Harmless Econometrics


Book Description

In addition to econometric essentials, this book covers important new extensions as well as how to get standard errors right. The authors explain why fancier econometric techniques are typically unnecessary and even dangerous.