Coursera NoSQL, Big Data, and Spark Foundations Specialization
Last updated: 2/2024
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 kHz, 2 Ch
Genre: eLearning | Language: English + srt | Duration: 93 Lessons (9h 5m) | Size: 912 MB

Springboard your Big Data career. Master the fundamentals of NoSQL, Big Data, and Apache Spark with hands-on, job-ready skills in machine learning and data engineering.

What you'll learn
Work with NoSQL databases to insert, update, delete, query, index, aggregate, and shard/partition data.
Develop hands-on NoSQL experience working with MongoDB, Apache Cassandra, and IBM Cloudant.
Develop foundational knowledge of Big Data and gain hands-on lab experience using Apache Hadoop, MapReduce, Apache Spark, Spark SQL, and Kubernetes.
Perform Extract, Transform, and Load (ETL) processing and Machine Learning model training and deployment with Apache Spark.

Skills you'll gain
Cloud Database, MongoDB, Cassandra, NoSQL, Cloudant, Machine Learning, Machine Learning Pipelines, Data Engineer, SparkML, Apache Spark, Big Data, SparkSQL, Apache Hadoop

Big Data Engineers and professionals with NoSQL skills are highly sought after in the data management industry. This Specialization is designed for those seeking to develop fundamental skills for working with Big Data, Apache Spark, and NoSQL databases. Three information-packed courses cover popular NoSQL databases like MongoDB and Apache Cassandra, the widely used Apache Hadoop ecosystem of Big Data tools, and the Apache Spark analytics engine for large-scale data processing. This Specialization is suitable for beginners in the fields of NoSQL and Big Data, whether you are, or are preparing to be, a Data Engineer, Software Developer, IT Architect, Data Scientist, or IT Manager.

Applied Learning Project
The emphasis in this Specialization is on learning by doing. As such, each course includes hands-on labs to practice and apply the NoSQL and Big Data skills you learn during the lectures.
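To give a feel for the document-style insert/update/delete/query operations listed above, here is a conceptual sketch in plain Python. It uses an in-memory dictionary as a stand-in for a NoSQL document store; the class and method names (TinyCollection, insert_one, find, update_one, delete_one) are illustrative assumptions that merely mimic the shape of a document-database client, not any real driver API.

```python
# Conceptual in-memory sketch of document-style CRUD operations.
# This is a teaching stand-in for a NoSQL document store, not a real client.

import itertools

class TinyCollection:
    """A toy document collection supporting insert/find/update/delete."""
    def __init__(self):
        self._docs = {}
        self._ids = itertools.count(1)

    def insert_one(self, doc):
        # Create: assign an _id and store the document.
        _id = next(self._ids)
        self._docs[_id] = {**doc, "_id": _id}
        return _id

    def find(self, query=None):
        # Read: return documents whose fields match all query key/value pairs.
        query = query or {}
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in query.items())]

    def update_one(self, query, changes):
        # Update: merge new fields into the first matching document.
        for d in self.find(query):
            d.update(changes)
            return 1
        return 0

    def delete_one(self, query):
        # Delete: remove the first matching document.
        for d in self.find(query):
            del self._docs[d["_id"]]
            return 1
        return 0

books = TinyCollection()
books.insert_one({"title": "Moby-Dick", "year": 1851})
books.insert_one({"title": "Dracula", "year": 1897})
books.update_one({"title": "Dracula"}, {"genre": "gothic"})
books.delete_one({"title": "Moby-Dick"})
```

Real document databases expose the same four verbs; the labs in this Specialization exercise them against actual MongoDB, Cassandra, and Cloudant instances.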
In the first course, you will work hands-on with several NoSQL databases: MongoDB, Apache Cassandra, and IBM Cloudant. You will perform a variety of tasks: creating databases, adding documents, querying data, using the HTTP API, performing Create, Read, Update, and Delete (CRUD) operations, limiting and sorting records, indexing, aggregation, and replication, as well as using the CQL shell, keyspace operations, and other table operations.

In the next course, you'll launch a Hadoop cluster using Docker and run MapReduce jobs. You'll explore working with Spark using Jupyter notebooks on a Python kernel. You'll build your Spark skills using DataFrames and Spark SQL, and scale your jobs using Kubernetes.

In the final course, you will use Spark for ETL processing, and for Machine Learning model training and deployment using IBM Watson.
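The MapReduce jobs mentioned for the Hadoop course follow a map, shuffle, and reduce pattern. As a rough sketch of the idea (plain Python, no Hadoop cluster, and not Hadoop's actual API), a word count looks like this:

```python
# Conceptual MapReduce word count: the map phase emits (word, 1) pairs,
# the shuffle phase groups pairs by key, and the reduce phase sums each group.

from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big spark", "spark big"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
# counts == {"big": 3, "data": 1, "spark": 2}
```

On a real Hadoop cluster the map and reduce functions run in parallel across nodes, and the framework performs the shuffle over the network; the data flow is the same.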
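The final course performs ETL with Spark, but the extract-transform-load pattern itself can be sketched with nothing but the standard library. In this illustrative sketch, sqlite3 stands in for the target data store, and the sample rows, table name, and schema are assumptions made up for the example:

```python
# Minimal ETL sketch: extract rows from a source, transform (cleanse) them,
# and load the result into a SQLite table (a stand-in for a real warehouse).

import sqlite3

def extract():
    # In a real pipeline this would read from files, APIs, or a source database.
    return [("alice", "42"), ("bob", "17"), ("carol", "notanumber")]

def transform(rows):
    # Cleanse: normalize names, cast ages to int, drop malformed records.
    out = []
    for name, age in rows:
        try:
            out.append((name.title(), int(age)))
        except ValueError:
            continue  # skip rows that fail validation
    return out

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
```

Spark's advantage is that each of these stages is distributed across a cluster and expressed over DataFrames, which is what the course's labs practice.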