Concurrent Data Processing in Elixir


Book Description

Learn different ways of writing concurrent code in Elixir and increase your application's performance, without sacrificing scalability or fault-tolerance. Most projects benefit from running background tasks and processing data concurrently, but the world of OTP and various libraries can be challenging. Which Supervisor and what strategy to use? What about GenServer? Maybe you need back-pressure, but is GenStage, Flow, or Broadway a better choice? You will learn everything you need to know to answer these questions, start building highly concurrent applications in no time, and write code that's not only fast, but also resilient to errors and easy to scale.

Whether you are building a high-frequency stock trading application or a consumer web app, you need to know how to leverage concurrency to build applications that are fast and efficient. Elixir and the OTP offer a range of powerful tools, and this guide will show you how to choose the best tool for each job, and use it effectively to quickly start building highly concurrent applications.

Learn about Tasks, supervision trees, and the different types of Supervisors available to you. Understand why processes and process linking are the building blocks of concurrency in Elixir. Get comfortable with the OTP and use the GenServer behaviour to maintain process state for long-running jobs. Easily scale the number of running processes using the Registry. Handle large volumes of data and traffic spikes with GenStage, using back-pressure to your advantage. Create your first multi-stage data processing pipeline using producer, consumer, and producer-consumer stages. Process large collections with Flow, using MapReduce and more in parallel. Thanks to Broadway, you will see how easy it is to integrate with popular message broker systems, or even existing GenStage producers. Start building the high-performance and fault-tolerant applications Elixir is famous for today.

What You Need: You'll need Elixir 1.9+ and Erlang/OTP 22+ installed on a Mac OS X, Linux, or Windows machine.
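Since the description highlights back-pressure (a consumer signaling a producer to slow down), a brief illustration may help. The book implements this idea in Elixir with GenStage; the sketch below shows only the underlying principle, in Python, where a bounded queue blocks a fast producer until the consumer catches up. All names here are hypothetical and not from the book.

    # Illustrative only, not from the book: back-pressure via a bounded queue.
    import queue
    import threading

    def producer(q: queue.Queue, n_items: int) -> None:
        for i in range(n_items):
            q.put(i)      # blocks when the queue is full: back-pressure
        q.put(None)       # sentinel signals the end of the stream

    def consumer(q: queue.Queue) -> None:
        while True:
            item = q.get()
            if item is None:   # sentinel received: stop consuming
                break
            print(f"processed {item * 2}")  # stand-in for real work

    q = queue.Queue(maxsize=8)  # small buffer keeps the producer from racing ahead
    t1 = threading.Thread(target=producer, args=(q, 100))
    t2 = threading.Thread(target=consumer, args=(q,))
    t1.start(); t2.start()
    t1.join(); t2.join()

GenStage inverts the mechanism (consumers ask producers for demand rather than producers blocking on a buffer), but the effect is the same: the pipeline runs at the pace of its slowest stage instead of accumulating unbounded work.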




Development Research in Practice


Book Description

Development Research in Practice leads the reader through a complete empirical research project, providing links to continuously updated resources on the DIME Wiki as well as illustrative examples from the Demand for Safe Spaces study. The handbook is intended to train users of development data how to handle data effectively, efficiently, and ethically.

“In the DIME Analytics Data Handbook, the DIME team has produced an extraordinary public good: a detailed, comprehensive, yet easy-to-read manual for how to manage a data-oriented research project from beginning to end. It offers everything from big-picture guidance on the determinants of high-quality empirical research, to specific practical guidance on how to implement specific workflows—and includes computer code! I think it will prove durably useful to a broad range of researchers in international development and beyond, and I learned new practices that I plan on adopting in my own research group.” —Marshall Burke, Associate Professor, Department of Earth System Science, and Deputy Director, Center on Food Security and the Environment, Stanford University

“Data are the essential ingredient in any research or evaluation project, yet there has been too little attention to standardized practices to ensure high-quality data collection, handling, documentation, and exchange. Development Research in Practice: The DIME Analytics Data Handbook seeks to fill that gap with practical guidance and tools, grounded in ethics and efficiency, for data management at every stage in a research project. This excellent resource sets a new standard for the field and is an essential reference for all empirical researchers.” —Ruth E. Levine, PhD, CEO, IDinsight

“Development Research in Practice: The DIME Analytics Data Handbook is an important resource and a must-read for all development economists, empirical social scientists, and public policy analysts. Based on decades of pioneering work at the World Bank on data collection, measurement, and analysis, the handbook provides valuable tools to allow research teams to more efficiently and transparently manage their work flows—yielding more credible analytical conclusions as a result.” —Edward Miguel, Oxfam Professor in Environmental and Resource Economics and Faculty Director of the Center for Effective Global Action, University of California, Berkeley

“The DIME Analytics Data Handbook is a must-read for any data-driven researcher looking to create credible research outcomes and policy advice. By meticulously describing detailed steps, from project planning via ethical and responsible code and data practices to the publication of research papers and associated replication packages, the DIME handbook makes the complexities of transparent and credible research easier.” —Lars Vilhuber, Data Editor, American Economic Association, and Executive Director, Labor Dynamics Institute, Cornell University




The Coding Manual for Qualitative Researchers


Book Description

The Coding Manual for Qualitative Researchers is unique in providing, in one volume, an in-depth guide to each of the multiple approaches available for coding qualitative data. In total, 29 different approaches to coding are covered, ranging in complexity from beginner to advanced level and covering the full range of types of qualitative data, from interview transcripts to field notes. For each approach profiled, Johnny Saldaña discusses the method’s origins in the professional literature, describes the method itself, offers recommendations for practical applications, and provides a clearly illustrated example.




Storage Systems


Book Description

Storage Systems: Organization, Performance, Coding, Reliability and Their Data Processing was motivated by the 1988 Redundant Array of Inexpensive/Independent Disks (RAID) proposal to replace large form factor mainframe disks with an array of commodity disks. Disk loads are balanced by striping data into strips—with one strip per disk—and storage reliability is enhanced via replication or erasure coding, which at best dedicates k strips per stripe to tolerate k disk failures. Flash memories have resulted in a paradigm shift, with Solid State Drives (SSDs) replacing Hard Disk Drives (HDDs) for high-performance applications. RAID and flash have led to the emergence of new storage companies, namely EMC, NetApp, SanDisk, and Pure Storage, and a multibillion-dollar storage market. Key new conferences and publications are reviewed in this book.

The goal of the book is to expose students, researchers, and IT professionals to the more important developments in storage systems, while covering the evolution of storage technologies, traditional and novel databases, and novel sources of data. We describe several prototypes: FAWN at CMU, RAMCloud at Stanford, and Lightstore at MIT; Oracle's Exadata, AWS' Aurora, Alibaba's PolarDB, and the Fungible Data Center; and the author's paper designs for cloud storage, namely heterogeneous disk arrays and hierarchical RAID.

- Surveys storage technologies and lists sources of data: measurements, text, audio, images, and video
- Familiarizes readers with paradigms to improve performance: caching, prefetching, log-structured file systems, and log-structured merge (LSM) trees
- Describes RAID organizations and analyzes their performance and reliability
- Shows how to conserve storage via data compression, deduplication, and compaction, and how to secure data via encryption
- Specifies the implications of storage technologies for performance and power consumption
- Exemplifies database parallelism for big data, analytics, and deep learning via multicore CPUs, GPUs, FPGAs, and ASICs, e.g., Google's Tensor Processing Units
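The description notes that erasure coding dedicates k strips per stripe to tolerate k disk failures. The simplest case, k = 1, is the single-parity scheme used in RAID 4 and RAID 5, and a minimal sketch in Python (illustrative only, not code from the book) makes it concrete: one XOR parity strip per stripe lets any single lost strip be rebuilt from the survivors.

    # Single-parity striping, the k = 1 case of erasure coding (illustrative only).
    def parity(strips: list[bytes]) -> bytes:
        """XOR all strips together byte by byte to form the parity strip."""
        out = bytearray(len(strips[0]))
        for strip in strips:
            for i, b in enumerate(strip):
                out[i] ^= b
        return bytes(out)

    # A stripe of three data strips, one strip per disk, plus one parity strip.
    stripe = [b"AAAA", b"BBBB", b"CCCC"]
    p = parity(stripe)

    # If the disk holding strip 1 fails, XOR-ing the surviving strips with
    # the parity strip recovers the lost data.
    recovered = parity([stripe[0], stripe[2], p])
    assert recovered == stripe[1]

Tolerating k > 1 failures requires a stronger erasure code, such as Reed-Solomon, which generalizes this XOR construction.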




The SAGE Encyclopedia of Communication Research Methods


Book Description

Communication research is evolving and changing in a world of online journals, open access, and new ways of obtaining data and conducting experiments via the Internet. Although there are generic encyclopedias describing basic social science research methodologies in general, until now there has been no comprehensive A-to-Z reference work exploring methods specific to communication and media studies. The entries, authored by key figures in the field, focus on the special considerations that arise when these methods are applied to communication research, and are accompanied by engaging examples from the literature of communication, journalism, and media studies. Entries cover every step of the research process, from the creative development of research topics and questions, to literature reviews, to the selection of the best methods (whether quantitative, qualitative, or mixed), to the analysis of research results, and to the publication of research findings, whether in traditional media or via new media outlets. In addition to expected entries covering the basics of theories and methods traditionally used in communication research, other entries discuss important trends influencing the future of that research, including contemporary practical issues students will face in communication professions, the influences of globalization on research, the use of new recording technologies in fieldwork, and the challenges and opportunities related to studying online multimedia environments. Email, texting, cellphone video, and blogging are shown not only as topics of research but also as means of collecting and analyzing data. Still other entries delve into considerations of accountability, copyright, confidentiality, data ownership and security, privacy, and other aspects of conducting an ethical research program.

Features:
- 652 signed entries are contained in an authoritative work spanning four volumes, available in a choice of electronic or print formats.
- Although organized A-to-Z, the front matter includes a Reader’s Guide grouping entries thematically, to help students interested in a specific aspect of communication research more easily locate directly related entries.
- Back matter includes a Chronology of the development of the field of communication research; a Resource Guide to classic books, journals, and associations; a Glossary introducing the terminology of the field; and a detailed Index.
- Entries conclude with References/Further Readings and Cross-References to related entries to guide students further in their research journeys.
- The Index, Reader’s Guide themes, and Cross-References combine to provide robust search-and-browse capabilities in the e-version.




Data Analysis for Business, Economics, and Policy


Book Description

A comprehensive textbook on data analysis for business, applied economics, and public policy that uses case studies with real-world data.




Analyzing and Interpreting Qualitative Research


Book Description

Drawing on the expertise of major names in the field, this text provides comprehensive coverage of the key methods for analyzing, interpreting, and writing up qualitative research in a single volume.




R for Data Science


Book Description

Learn how to use R to turn raw data into insight, knowledge, and understanding. This book introduces you to R, RStudio, and the tidyverse, a collection of R packages designed to work together to make data science fast, fluent, and fun. Suitable for readers with no previous programming experience, R for Data Science is designed to get you doing data science as quickly as possible. Authors Hadley Wickham and Garrett Grolemund guide you through the steps of importing, wrangling, exploring, and modeling your data and communicating the results. You'll get a complete, big-picture understanding of the data science cycle, along with basic tools you need to manage the details. Each section of the book is paired with exercises to help you practice what you've learned along the way.

You'll learn how to:
- Wrangle—transform your datasets into a form convenient for analysis
- Program—learn powerful R tools for solving data problems with greater clarity and ease
- Explore—examine your data, generate hypotheses, and quickly test them
- Model—provide a low-dimensional summary that captures true "signals" in your dataset
- Communicate—learn R Markdown for integrating prose, code, and results




Hands-On Data Preprocessing in Python


Book Description

Get your raw data cleaned up and ready for processing to design better data analytic solutions.

Key Features:
- Develop the skills to perform data cleaning, data integration, data reduction, and data transformation
- Make the most of your raw data with powerful data transformation and massaging techniques
- Perform thorough data cleaning, including dealing with missing values and outliers

Book Description:
Hands-On Data Preprocessing is a primer on the best data cleaning and preprocessing techniques, written by an expert who's developed college-level courses on data preprocessing and related subjects. With this book, you'll be equipped with the optimum data preprocessing techniques from multiple perspectives, ensuring that you get the best possible insights from your data.

You'll learn about the different technical and analytical aspects of data preprocessing – data collection, data cleaning, data integration, data reduction, and data transformation – and get to grips with implementing them using the open source Python programming environment. The hands-on examples and easy-to-follow chapters will help you gain a comprehensive articulation of data preprocessing, its whys and hows, and identify opportunities where data analytics could lead to more effective decision making. As you progress through the chapters, you'll also understand the role of data management systems and technologies in effective analytics and how to use APIs to pull data.

By the end of this Python data preprocessing book, you'll be able to use Python to read, manipulate, and analyze data; apply data cleaning, integration, reduction, and transformation techniques; and handle outliers or missing values to effectively prepare data for analytic tools.

What you will learn:
- Use Python to perform analytics functions on your data
- Understand the role of databases and how to effectively pull data from databases
- Perform data preprocessing steps defined by your analytics goals
- Recognize and resolve data integration challenges
- Identify the need for data reduction and execute it
- Detect opportunities to improve analytics with data transformation

Who this book is for:
This book is for junior and senior data analysts, business intelligence professionals, engineering undergraduates, and data enthusiasts looking to perform preprocessing and data cleaning on large amounts of data. You don't need any prior experience with data preprocessing to get started with this book. However, basic programming skills, such as working with variables, conditionals, and loops, along with beginner-level knowledge of Python and simple analytics experience, are a prerequisite.
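Since the description centers on handling missing values and outliers in Python, a small sketch may help set expectations. It is illustrative only, not code from the book; the column names, the imputation choice (median), and the percentile thresholds are all hypothetical.

    # Illustrative only, not from the book: median imputation for missing
    # values and percentile clipping for outliers, using pandas.
    import pandas as pd

    df = pd.DataFrame({
        "age":    [34, 29, None, 41, 200],            # None = missing; 200 looks like an outlier
        "income": [52000, None, 48000, 61000, 58000],
    })

    # Missing values: impute each column with its median.
    df["age"] = df["age"].fillna(df["age"].median())
    df["income"] = df["income"].fillna(df["income"].median())

    # Outliers: clip "age" to its 1st-99th percentile range.
    low, high = df["age"].quantile([0.01, 0.99])
    df["age"] = df["age"].clip(lower=low, upper=high)

    print(df)

Median imputation and percentile clipping are only two of many options the book's topic list covers; the right choice depends on why values are missing and whether extreme values are errors or genuine observations.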