Resource Management for Big Data Platforms


Book Description

Serving as a flagship driver towards advanced research in the area of Big Data platforms and applications, this book provides a platform for the dissemination of advanced topics in theory, research efforts and analysis, and implementation-oriented methods, techniques, and performance evaluation. Across 23 chapters, several important formulations of architecture design, optimization techniques, advanced analytics methods, and biological, medical, and social media applications are presented. These chapters present the research of members of the ICT COST Action IC1406 High-Performance Modelling and Simulation for Big Data Applications (cHiPSet). This volume is ideal as a reference for students, researchers, and industry practitioners working in, or interested in joining, interdisciplinary work in the area of intelligent decision systems built on emergent distributed computing paradigms. It will also allow newcomers to grasp the key concerns and their potential solutions.




Big Data Platforms and Applications


Book Description

This book provides a review of advanced topics relating to the theory, research, analysis, and implementation of big data platforms and their applications, with a focus on methods, techniques, and performance evaluation. The explosive growth in the volume, speed, and variety of data produced every day requires a continuous increase in the processing speed of servers and of entire network infrastructures, as well as new resource management models. This poses significant challenges (and provides striking development opportunities) for data-intensive and high-performance computing: how to efficiently turn extremely large datasets into valuable information and meaningful knowledge. The task of context data management is further complicated by the variety of sources such data derives from, which results in different data formats with varying storage, transformation, delivery, and archiving requirements. At the same time, rapid responses are needed for real-time applications. With the emergence of cloud infrastructures, achieving highly scalable data management in such contexts is a critical problem, as overall application performance depends heavily on the properties of the data management service.







Data Science in Agriculture and Natural Resource Management


Book Description

This book aims to address emerging challenges in the field of agriculture and natural resource management using the principles and applications of data science (DS). The book is organized in three sections and comprises fourteen chapters dealing with specialized areas. The chapters are written by experts who share their experience lucidly through case studies, suitable illustrations, and tables. The contents are designed to serve the needs of geospatial, data science, agricultural, natural resource, and environmental science programs at traditional universities, agricultural universities, technological universities, research institutes, and academic colleges worldwide. It will help planners, policymakers, and extension scientists in the planning and sustainable management of agriculture and natural resources. The authors believe that, with its unique scope, the book is an important contribution to contemporary work on cyber-physical systems.




Resource Management in Cluster Computing Platforms for Large Scale Data Processing


Book Description

In the era of big data, one of the most significant research areas is cluster computing for large-scale data processing. Many cluster computing frameworks and cluster resource management schemes have recently been developed to satisfy the increasing demand for large-volume data processing. Among them, Apache Hadoop has become the de facto platform, widely adopted in both industry and academia thanks to prominent features such as scalability, simplicity, and fault tolerance. The original Hadoop platform was designed to closely resemble the MapReduce framework, a programming paradigm for cluster computing proposed by Google. More recently, the Hadoop platform has evolved into its second generation, Hadoop YARN, which serves as a unified cluster resource management layer supporting the multiplexing of different cluster computing frameworks. A fundamental issue in this field is how, given the limited computing resources in a cluster, to efficiently manage and schedule the execution of a large number of data processing jobs. This dissertation therefore focuses on improving system efficiency and performance for cluster computing platforms, namely Hadoop MapReduce and Hadoop YARN, by designing the following new scheduling algorithms and resource management schemes.

First, we developed a Hadoop scheduler (LsPS) that aims to improve average job response times by leveraging the job size patterns of different users, both to tune resource sharing between users and to choose a good scheduling policy for each user. We further presented a self-adjusting slot configuration scheme, named TuMM, for Hadoop MapReduce to improve the makespan of batch jobs. TuMM abandons the static, manual slot configurations of the existing Hadoop MapReduce framework; instead, using a feedback control mechanism, it dynamically tunes the map and reduce slot numbers on each cluster node based on monitored workload information, so as to align the execution of the map and reduce phases.

The second main contribution of this dissertation lies in the development of a new scheduler and resource management scheme for the next-generation Hadoop, i.e., Hadoop YARN. We designed a YARN scheduler, named HaSTE, which can effectively reduce the makespan of MapReduce jobs on the YARN platform by leveraging information about requested resources, resource capacities, and the dependencies between tasks. Moreover, we proposed an opportunistic scheduling scheme that reassigns reserved but idle resources to other waiting tasks. The major goal of this new scheme is to improve system resource utilization without incurring severe resource contention due to resource over-provisioning.

We implemented all of our resource management schemes in Hadoop MapReduce and Hadoop YARN, and evaluated the effectiveness of these new schedulers and schemes on different cluster systems, including our local clusters and large clusters in the cloud, such as Amazon EC2. Representative benchmarks were used for sensitivity analysis and performance evaluation. Experimental results demonstrate that our new Hadoop/YARN schedulers and resource management schemes successfully improve performance in terms of job response times, job makespan, and system utilization on both the Hadoop MapReduce and Hadoop YARN platforms.
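
The TuMM mechanism described above lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of a feedback-driven slot tuner: the function name, the proportional control rule, and the pending-work inputs are assumptions made for exposition only, not the dissertation's actual controller or its Hadoop integration.

    # Hypothetical sketch of feedback-driven slot tuning in the spirit of TuMM.
    # The function name and the proportional rule are illustrative assumptions,
    # not the dissertation's actual controller.
    def retune_slots(total_slots, pending_map_work, pending_reduce_work):
        """Split one node's task slots between the map and reduce phases in
        proportion to the monitored outstanding work in each phase.
        Assumes total_slots >= 2."""
        total_work = pending_map_work + pending_reduce_work
        if total_work == 0:
            map_slots = total_slots // 2  # no backlog: fall back to an even split
        else:
            map_slots = round(total_slots * pending_map_work / total_work)
        # Keep at least one slot per phase so neither phase can starve.
        map_slots = max(1, min(total_slots - 1, map_slots))
        return map_slots, total_slots - map_slots

    # Example: 8 slots per node with a heavy map backlog early in a batch.
    print(retune_slots(8, pending_map_work=120, pending_reduce_work=40))  # (6, 2)

In a real deployment, the pending-work signals would come from the monitored workload information mentioned in the abstract, and the controller would be re-run periodically as a batch shifts from map-heavy to reduce-heavy execution.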







Big Data for Big Decisions


Book Description

Building a data-driven organization (DDO) is an enterprise-wide initiative that may consume and lock up resources for the long term. Understandably, any organization considering such an initiative would insist on a roadmap and business case being prepared and evaluated prior to approval. This book presents a step-by-step methodology for creating that roadmap and business case, and recounts the constraints and experiences of managers who have attempted to set up DDOs. The emphasis is on the big decisions, the key decisions that influence 90% of business outcomes: starting from the decision first and reengineering the data-to-decisions process chain and data governance, so as to ensure the right data are available at the right time, every time.

Investing in artificial intelligence and data-driven decision making is now considered a survival necessity for organizations that want to stay competitive. Yet while every enterprise aspires to become 100% data-driven and every Chief Information Officer (CIO) has a budget, Gartner estimates that over 80% of all analytics projects fail to deliver their intended value. Most CIOs regard a data-driven organization as a distant dream, especially while they are still struggling to explain the value delivered by analytics. They know that a few isolated successes, or a one-time use of big data for decision making, do not make an organization data-driven. As of now, there is no precise definition of a data-driven organization, or of what qualifies an organization to call itself data-driven. Given the market hype around big data, analytics, and AI, every CIO has a budget for analytics but very little clarity on where to begin or how to choose and prioritize analytics projects. Most end up investing in a visualization platform such as Tableau or QlikView, which in essence is an improved version of the BI dashboard the organization had invested in not long before. The most important stakeholders, the decision makers, are rarely kept in the loop when analytics projects are chosen.

This book provides a fail-safe methodology for deriving the intended value from investments in analytics. It is a practitioners' handbook for creating a step-by-step transformational roadmap that prioritizes the big data for the big decisions, i.e., the 10% of decisions that influence 90% of business outcomes, and for delivering material improvements in the quality of decisions as well as measurable value from analytics investments. The acid test of a data-driven organization is that all the big decisions, especially top-level strategic decisions, are taken based on data rather than on the collective gut feeling of the organization's decision makers.




Data Science and Big Data Analytics in Smart Environments


Book Description

Most applications generate large datasets: social networking and social influence programs, smart city applications, smart house environments, cloud applications, public web sites, scientific experiments and simulations, data warehouses, monitoring platforms, and e-government services. Data grows rapidly, since applications produce continuously increasing volumes of both unstructured and structured data. Large-scale interconnected systems aim to aggregate and efficiently exploit the power of widely distributed resources. In this context, major solutions for scalability, mobility, reliability, fault tolerance, and security are required to achieve high performance and to create a smart environment. The impact on data processing, transfer, and storage is the need to re-evaluate existing approaches and solutions so that they better answer user needs. A variety of solutions exist for specific applications and platforms, so a thorough and systematic analysis of existing solutions for data science, data analytics, and the methods and algorithms used in Big Data processing and storage environments is essential when designing and implementing a smart environment. Fundamental issues pertaining to smart environments (smart cities, ambient assisted living, smart houses, greenhouses, cyber-physical systems, etc.) are reviewed. Most current efforts still do not adequately address the heterogeneity of different distributed systems, the interoperability between them, or the resilience of these systems. This book primarily encompasses practical approaches that promote research in all aspects of data processing and data analytics across different types of systems: Cluster Computing, Grid Computing, Peer-to-Peer, and Cloud/Edge/Fog Computing, all involving elements of heterogeneity and a large variety of tools and software to manage them. The main role of resource management techniques in this domain is to create suitable frameworks for the development and deployment of applications in smart environments, with respect to high performance. The book focuses on topics covering algorithms, architectures, management models, high-performance computing techniques, and large-scale distributed systems.




Adaptive Resource Management and Scheduling for Cloud Computing


Book Description

This book constitutes the thoroughly refereed post-conference proceedings of the Second International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing, ARMS-CC 2015, held in conjunction with the ACM Symposium on Principles of Distributed Computing, PODC 2015, in Donostia-San Sebastián, Spain, in July 2015. The 12 revised full papers, including 1 invited paper, were carefully reviewed and selected from 24 submissions. The papers address several important aspects of the problem tackled by ARMS-CC: self-* and autonomous cloud systems; cloud quality management and service level agreements (SLAs); scalable computing; mobile cloud computing; cloud computing techniques for big data; high-performance cloud computing; resource management in big data platforms; scheduling algorithms for big data processing; cloud composition, federation, bridging, and bursting; cloud resource virtualization and composition; load balancing and co-allocation; and fault tolerance, reliability, and availability of cloud systems.




Big Data Management and Processing


Book Description

From the Foreword: "Big Data Management and Processing is [a] state-of-the-art book that deals with a wide range of topical themes in the field of Big Data. The book, which probes many issues related to this exciting and rapidly growing field, covers processing, management, analytics, and applications... [It] is a very valuable addition to the literature. It will serve as a source of up-to-date research in this continuously developing area. The book also provides an opportunity for researchers to explore the use of advanced computing technologies and their impact on enhancing our capabilities to conduct more sophisticated studies." (Sartaj Sahni, University of Florida, USA)

"Big Data Management and Processing covers the latest Big Data research results in processing, analytics, management, and applications. Both fundamental insights and representative applications are provided. This book is a timely and valuable resource for students, researchers, and seasoned practitioners in Big Data fields." (Hai Jin, Huazhong University of Science and Technology, China)

Big Data Management and Processing explores a range of big data related issues and their impact on the design of new computing systems. Its twenty-one chapters were carefully selected and feature contributions from several outstanding researchers. The book endeavors to strike a balance between theoretical and practical coverage of innovative problem-solving techniques for a range of platforms. It serves as a repository of paradigms, technologies, and applications that target different facets of big data computing systems. The first part of the book explores energy and resource management issues, as well as legal compliance and quality management for Big Data; it covers in-memory computing and in-memory data grids, as well as co-scheduling for high-performance computing applications. The second part includes comprehensive coverage of Hadoop and Spark, along with security, privacy, and trust challenges and solutions. The latter part covers mining and clustering in Big Data, and includes applications in genomics, hospital big data processing, and vehicular cloud computing. The book also analyzes funding for Big Data projects.