Building Cloud Data Platforms Solutions


Book Description

"Building Cloud Data Platforms Solutions: An End-to-End Guide for Designing, Implementing, and Managing Robust Data Solutions in the Cloud" comprehensively covers a wide range of topics related to building data platforms in the cloud. This book provides a deep exploration of the essential concepts, strategies, and best practices involved in designing, implementing, and managing end-to-end data solutions. The book begins by introducing the fundamental principles and benefits of cloud computing, with a specific focus on its impact on data management and analytics. It covers various cloud services and architectures, enabling readers to understand the foundation upon which cloud data platforms are built. Next, the book dives into key considerations for building cloud data solutions, aligning business needs with cloud data strategies, and ensuring scalability, security, and compliance. It explores the process of data ingestion, discussing various techniques for acquiring and ingesting data from different sources into the cloud platform. The book then delves into data storage and management in the cloud. It covers different storage options, such as data lakes and data warehouses, and discusses strategies for organizing and optimizing data storage to facilitate efficient data processing and analytics. It also addresses data governance, data quality, and data integration techniques to ensure data integrity and consistency across the platform. A significant portion of the book is dedicated to data processing and analytics in the cloud. It explores modern data processing frameworks and technologies, such as Apache Spark and serverless computing, and provides practical guidance on implementing scalable and efficient data processing pipelines. The book also covers advanced analytics techniques, including machine learning and AI, and demonstrates how these can be integrated into the data platform to unlock valuable insights. Furthermore, the book addresses an aspects of data platform monitoring, security, and performance optimization. It explores techniques for monitoring data pipelines, ensuring data security, and optimizing performance to meet the demands of real-time data processing and analytics. Throughout the book, real-world examples, case studies, and best practices are provided to illustrate the concepts discussed. This helps readers apply the knowledge gained to their own data platform projects.




Designing Cloud Data Platforms


Book Description

Centralized data warehouses, the long-time de facto standard for housing data for analytics, are rapidly giving way to multi-faceted cloud data platforms. Companies that embrace modern cloud data platforms benefit from an integrated view of their business using all of their data and can take advantage of advanced analytic practices to drive predictions and as-yet-unimagined data services. Designing Cloud Data Platforms is a hands-on guide to envisioning and designing a modern, scalable data platform that takes full advantage of the flexibility of the cloud. As you read, you'll learn the core components of a cloud data platform design, along with the role of key technologies like Spark and Kafka Streams. You'll also explore how to set up processes to manage cloud-based data, keep it secure, and use advanced analytics and BI tools to analyze it.

about the technology
Access to affordable, dependable, serverless cloud services has revolutionized the way organizations can approach data management, and companies both big and small are raring to migrate to the cloud. But without a properly designed data platform, data in the cloud can remain just as siloed and inaccessible as it is today for most organizations. Designing Cloud Data Platforms lays out the principles of a well-designed platform that uses the scalable resources of the public cloud to manage all of an organization's data and present it as useful business insights.

about the book
In Designing Cloud Data Platforms, you'll learn how to integrate data from multiple sources into a single, cloud-based, modern data platform. Drawing on their real-world experiences designing cloud data platforms for dozens of organizations, cloud data experts Danil Zburivsky and Lynda Partner take you through a six-layer approach to creating cloud data platforms that maximizes flexibility and manageability and reduces costs. Starting with foundational principles, you'll learn how to get data into your platform from different databases, files, and APIs, the essential practices for organizing and processing that raw data, and how best to take advantage of the services offered by major cloud vendors. As you progress past the basics, you'll take a deep dive into advanced topics to get the most out of your data platform, including real-time data management, machine learning analytics, schema management, and more.

what's inside
- The tools of different public clouds for implementing data platforms
- Best practices for managing structured and unstructured data sets
- Machine learning tools that can be used on top of the cloud
- Cost optimization techniques

about the reader
For data professionals familiar with the basics of cloud computing and distributed data processing systems like Hadoop and Spark.

about the authors
Danil Zburivsky has over 10 years of experience designing and supporting large-scale data infrastructure for enterprises across the globe. Lynda Partner is the VP of Analytics-as-a-Service at Pythian and has been on the business side of data for over 20 years.
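
The ingestion layer described above pulls data into the platform from databases, files, and APIs, often by landing raw records on a streaming backbone such as Kafka. The snippet below is a minimal sketch of that idea under assumed names (it is not code from the book): it polls a hypothetical REST endpoint and publishes each record unchanged to a raw Kafka topic using the kafka-python client; the endpoint URL, broker address, and topic name are placeholders.

    # Sketch of an ingestion-layer step: REST API -> raw Kafka topic.
    # Endpoint, broker, and topic names are hypothetical placeholders.
    import json

    import requests
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers="broker:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    resp = requests.get("https://api.example.com/v1/orders", timeout=30)
    resp.raise_for_status()

    for record in resp.json():
        # Records land in the raw topic unchanged; cleansing and schema
        # management happen later, in the processing layer.
        producer.send("ingest.orders.raw", value=record)

    producer.flush()

Keeping the ingestion step this thin is deliberate: it isolates source-specific quirks at the edge of the platform so the downstream layers can stay generic.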




Architecting Modern Data Platforms


Book Description

There’s a lot of information about big data technologies, but splicing these technologies into an end-to-end enterprise data platform is a daunting task not widely covered. With this practical book, you’ll learn how to build big data infrastructure both on-premises and in the cloud and successfully architect a modern data platform. Ideal for enterprise architects, IT managers, application architects, and data engineers, this book shows you how to overcome the many challenges that emerge during Hadoop projects. You’ll explore the vast landscape of tools available in the Hadoop and big data realm in a thorough technical primer before diving into:

- Infrastructure: Look at all component layers in a modern data platform, from the server to the data center, to establish a solid foundation for data in your enterprise
- Platform: Understand aspects of deployment, operation, security, high availability, and disaster recovery, along with everything you need to know to integrate your platform with the rest of your enterprise IT
- Taking Hadoop to the cloud: Learn the important architectural aspects of running a big data platform in the cloud while maintaining enterprise security and high availability




Rise of the Data Cloud


Book Description

The rise of the Data Cloud is ushering in a new era of computing. The world’s digital data is mass migrating to the cloud, where it can be more effectively integrated, managed, and mobilized. The data cloud eliminates data siloes and enables data sharing with business partners, capitalizing on data network effects. It democratizes data analytics, making the most sophisticated data science tools accessible to organizations of all sizes. Data exchanges enable businesses to discover, explore, and easily purchase or sell data—opening up new revenue streams. Business leaders have long dreamed of data driving their organizations. Now, thanks to the Data Cloud, nothing stands in their way.




Data Mesh


Book Description

Many enterprises are investing in a next-generation data lake, hoping to democratize data at scale to provide business insights and ultimately make automated intelligent decisions. In this practical book, author Zhamak Dehghani reveals that, despite the time, money, and effort poured into them, data warehouses and data lakes fail when applied at the scale and speed of today's organizations. A distributed data mesh is a better choice. Dehghani guides architects, technical leaders, and decision makers on their journey from monolithic big data architecture to a sociotechnical paradigm that draws from modern distributed architecture. A data mesh considers domains as a first-class concern, applies platform thinking to create self-serve data infrastructure, treats data as a product, and introduces a federated and computational model of data governance. This book shows you why and how.

- Examine the current data landscape from the perspective of business and organizational needs, environmental challenges, and existing architectures
- Analyze the landscape's underlying characteristics and failure modes
- Get a complete introduction to data mesh principles and its constituents
- Learn how to design a data mesh architecture
- Move beyond a monolithic data lake to a distributed data mesh
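
One way to make the "data as a product" principle described above concrete is for each domain team to publish a small, machine-readable contract alongside its dataset. The sketch below is purely illustrative and assumes hypothetical field names and values; it is not a schema defined in the book.

    # Illustrative "data product" contract a domain team might publish.
    # All fields and example values are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class DataProductContract:
        name: str                      # e.g. "orders.daily_summary"
        owning_domain: str             # domain team accountable for the product
        output_port: str               # where consumers read it (table, topic, URI)
        schema_version: str            # versioned so consumers can depend on it
        freshness_sla_hours: int       # how stale the data may be
        quality_checks: list = field(default_factory=list)

    contract = DataProductContract(
        name="orders.daily_summary",
        owning_domain="order-management",
        output_port="warehouse://analytics.orders_daily_summary",
        schema_version="1.2.0",
        freshness_sla_hours=24,
        quality_checks=["row_count > 0", "order_id is never null"],
    )
    print(contract)

Publishing contracts like this is one plausible building block for federated, computational governance: policies can be checked against the contract automatically rather than enforced by a central team.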




Building the Infrastructure for Cloud Security


Book Description

For cloud users and providers alike, security is an everyday concern, yet there are very few books covering cloud security as a main subject. This book helps address that information gap from an Information Technology solution- and usage-centric view of cloud infrastructure security. It highlights the fundamental technology components necessary to build and enable trusted clouds, and explains the security and compliance challenges organizations face as they migrate mission-critical applications to the cloud and how trusted clouds, with their integrity rooted in hardware, can address these challenges. This book provides:

- Use cases and solution reference architectures to enable infrastructure integrity and the creation of trusted pools leveraging Intel Trusted Execution Technology (TXT)
- Trusted geo-location management in the cloud, enabling workload and data location compliance and boundary control usages in the cloud
- An OpenStack-based reference architecture for tenant-controlled virtual machine and workload protection in the cloud
- A reference design to enable secure hybrid clouds for a cloud bursting use case, providing infrastructure visibility and control to organizations

"A valuable guide to the next generation of cloud security and hardware-based root of trust. More than an explanation of the what and how is the explanation of why. And why you can’t afford to ignore it!" —Vince Lubsey, Vice President, Product Development, Virtustream Inc.

"Raghu provides a valuable reference for the new 'inside out' approach, where trust in hardware, software, and privileged users is never assumed—but instead measured, attested, and limited according to least privilege principles." —John Skinner, Vice President, HyTrust Inc.

"Traditional perimeter-based defenses are insufficient in the cloud. Raghu's book addresses this problem head-on by highlighting unique usage models to enable trusted infrastructure in this open environment. A must-read if you are exposed in the cloud." —Nikhil Sharma, Sr. Director of Cloud Solutions, Office of CTO, EMC Corporation




The Enterprise Big Data Lake


Book Description

The data lake is a daring new approach for harnessing the power of big data technology and providing convenient self-service capabilities. But is it right for your company? This book is based on discussions with practitioners and executives from more than a hundred organizations, ranging from data-driven companies such as Google, LinkedIn, and Facebook, to governments and traditional corporate enterprises. You’ll learn what a data lake is, why enterprises need one, and how to build one successfully with the best practices in this book. Alex Gorelik, CTO and founder of Waterline Data, explains why old systems and processes can no longer support data needs in the enterprise. Then, in a collection of essays about data lake implementation, you’ll examine data lake initiatives, analytic projects, experiences, and best practices from data experts working in various industries.

- Get a succinct introduction to data warehousing, big data, and data science
- Learn various paths enterprises take to build a data lake
- Explore how to build a self-service model and best practices for providing analysts access to the data
- Use different methods for architecting your data lake
- Discover ways to implement a data lake from experts in different industries




Cloud Computing


Book Description

This book describes cloud computing as a service that is "highly scalable" and operates in "a resilient environment". The authors emphasize architectural layers and models, as well as business and security factors.




Data Engineering with Google Cloud Platform


Book Description

Build and deploy your own data pipelines on GCP, make key architectural decisions, and gain the confidence to boost your career as a data engineer.

Key Features
- Understand data engineering concepts, the role of a data engineer, and the benefits of using GCP for building your solution
- Learn how to use the various GCP products to ingest, consume, and transform data and orchestrate pipelines
- Discover tips to prepare for and pass the Professional Data Engineer exam

With this book, you'll understand how the highly scalable Google Cloud Platform (GCP) enables data engineers to create end-to-end data pipelines, from storing and processing data and orchestrating workflows to presenting data through visualization dashboards. Starting with a quick overview of the fundamental concepts of data engineering, you'll learn the various responsibilities of a data engineer and how GCP plays a vital role in fulfilling those responsibilities. As you progress through the chapters, you'll leverage GCP products to build a sample data warehouse using Cloud Storage and BigQuery and a data lake using Dataproc. The book gradually takes you through operations such as data ingestion, data cleansing, transformation, and integrating data with other sources. You'll learn how to design IAM for data governance, deploy ML pipelines with Vertex AI, leverage pre-built GCP models as a service, and visualize data with Google Data Studio to build compelling reports. Finally, you'll find tips on how to boost your career as a data engineer, take the Professional Data Engineer certification exam, and get ready to become an expert in data engineering with GCP. By the end of this data engineering book, you'll have developed the skills to perform core data engineering tasks and build efficient ETL data pipelines with GCP.

What you will learn
- Load data into BigQuery and materialize its output for downstream consumption
- Build data pipeline orchestration using Cloud Composer
- Develop Airflow jobs to orchestrate and automate a data warehouse
- Build a Hadoop data lake, create ephemeral clusters, and run jobs on the Dataproc cluster
- Leverage Pub/Sub for messaging and ingestion in event-driven systems
- Use Dataflow to perform ETL on streaming data
- Unlock the power of your data with Data Studio
- Estimate GCP costs for your end-to-end data solutions

Who this book is for
This book is for data engineers, data analysts, and anyone looking to design and manage data processing pipelines using GCP. You'll find this book useful if you are preparing to take Google's Professional Data Engineer exam. A beginner-level understanding of data science, the Python programming language, and Linux commands is necessary. A basic understanding of data processing and cloud computing in general will help you make the most out of this book.
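
The first item under "What you will learn" above is loading data into BigQuery. As a minimal, hedged sketch of that task using the google-cloud-bigquery client library (the project, dataset, table, and bucket names are hypothetical placeholders, not values from the book), a batch load from Cloud Storage might look like this:

    # Load a CSV file from Cloud Storage into a BigQuery table.
    # Project, dataset, table, and bucket names are hypothetical.
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client(project="example-project")

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,  # let BigQuery infer the schema for this sketch
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )

    load_job = client.load_table_from_uri(
        "gs://example-landing-bucket/orders/2024-01-01.csv",
        "example-project.raw_zone.orders",
        job_config=job_config,
    )
    load_job.result()  # block until the load job completes

    table = client.get_table("example-project.raw_zone.orders")
    print(f"Loaded {table.num_rows} rows into raw_zone.orders")

In a production pipeline this step would typically be wrapped in a Cloud Composer (Airflow) task so that scheduling and retries are handled by the workflow engine rather than by ad hoc scripts.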




Designing Big Data Platforms


Book Description

Provides expert guidance and valuable insights on getting the most out of Big Data systems.

An array of tools is currently available for managing and processing data—some are ready-to-go solutions that can be immediately deployed, while others require complex and time-intensive setups. With such a vast range of options, choosing the right tool to build a solution can be complicated, as can determining which tools work well with each other. Designing Big Data Platforms provides clear and authoritative guidance on the critical decisions necessary for successfully deploying, operating, and maintaining Big Data systems. This highly practical guide helps readers understand how to process large amounts of data with well-known Linux tools and database solutions, use effective techniques to collect and manage data from multiple sources, transform data into meaningful business insights, and much more.

Author Yusuf Aytas, a software engineer with extensive big data experience, discusses the design of the ideal Big Data platform: one that meets the needs of data analysts, data engineers, data scientists, software engineers, and a spectrum of other stakeholders across an organization. Detailed yet accessible chapters cover key topics such as stream data processing, data analytics, data science, data discovery, and data security.

This real-world manual for Big Data technologies:
- Provides up-to-date coverage of the tools currently used in Big Data processing and management
- Offers step-by-step guidance on building a data pipeline, from basic scripting to distributed systems
- Highlights and explains how data is processed at scale
- Includes an introduction to the foundation of a modern data platform

Designing Big Data Platforms: How to Use, Deploy, and Maintain Big Data Systems is a must-have for all professionals working with Big Data, as well as researchers and students in computer science and related fields.
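
The description above mentions step-by-step guidance on building a data pipeline, from basic scripting to distributed systems. As a minimal sketch of the "basic scripting" end of that spectrum (the file path and column names are hypothetical, and this is not code from the book), the snippet below streams a large delimited file and aggregates an amount per key without loading the whole file into memory:

    # Stream a large CSV once and aggregate revenue per customer.
    # The file path and column names are hypothetical placeholders.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)

    with open("/data/raw/orders.csv", newline="") as f:
        for row in csv.DictReader(f):
            # Single pass; in-memory state stays at one entry per customer.
            totals[row["customer_id"]] += float(row["amount"])

    # Print the ten largest customers by total amount.
    for customer_id, amount in sorted(totals.items(), key=lambda kv: -kv[1])[:10]:
        print(customer_id, round(amount, 2))

When the data outgrows a single machine, the same group-and-aggregate logic is what a distributed engine parallelizes across many workers, which is the step up to distributed systems that the description refers to.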