Data Processing and Modeling with Hadoop: Mastering Hadoop Ecosystem Including ETL, Data Vault, DMBok, GDPR, and Various Data-Centric Tools (English Edition)
Understand data in a simple way using a data lake.

Key Features
- In-depth practical demonstration of Hadoop/YARN concepts with numerous examples.
- Graphical illustrations and visual explanations of Hadoop commands and parameters.
- Details of dimensional modeling and Data Vault modeling.
- Details of how to create and define the structure of a data lake.

Description
The book 'Data Processing and Modeling with Hadoop' explains, in a straightforward and clear manner, how a distributed system works and its benefits in the big data era. After reading the book, you will be able to plan and organize projects involving a massive amount of data. The book describes the standards and technologies that aid in data management and compares them to other business technology standards. The reader receives practical guidance on how to segregate and separate data into zones, as well as how to develop a model that can support data evolution. It discusses security and the measures used to reduce the impact of security incidents. Self-service analytics, the Data Lake, Data Vault 2.0, and Data Mesh are also covered. After reading this book, the reader will have a thorough understanding of how to structure a data lake, as well as the ability to plan, organize, and carry out the implementation of a data-driven business with full governance and security.

What you will learn
- Learn the basics of the components of the Hadoop ecosystem.
- Understand the structure, files, and zones of a data lake.
- Learn to implement security across the Hadoop ecosystem.
- Learn to work with Data Vault 2.0 modeling.
- Learn to develop a strategy that defines good governance.
- Learn new tools for working with data and big data.

Who this book is for
This book caters to big data developers, technical specialists, consultants, and students who want to build good proficiency in big data. Knowledge of basic SQL concepts, modeling, and development is helpful, although not mandatory.

Table of Contents
1. Understanding the Current Moment
2. Defining the Zones
3. The Importance of Modeling
4. Massive Parallel Processing
5. Doing ETL/ELT
6. A Little Governance
7. Talking About Security
8. What Are the Next Steps?