Download Taming Big Data with Apache Spark and Python - Hands On! For Free

Description
“Big data” analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark, and specifically PySpark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You’ll learn those same techniques, using your own Windows system right at home. It’s easier than you might think.
In this course, you’ll learn and master the art of framing data analysis problems as Spark problems through over 20 hands-on examples, then scale them up to run on cloud computing services. You’ll be learning from an ex-engineer and senior manager from Amazon and IMDb.
Learn the concepts of Spark’s DataFrames and Resilient Distributed Datasets (RDDs)
Develop and run Spark jobs quickly using Python and PySpark (see the sketch after this list)
Translate complex analysis problems into iterative or multi-stage Spark scripts
Scale up to larger data sets using Amazon’s Elastic MapReduce service
Understand how Hadoop YARN distributes Spark across computing clusters
Learn about other Spark technologies, like Spark SQL, Spark Streaming, and GraphX
Practice using Spark’s latest features, including Pandas-On-Spark, Spark Connect, and User-Defined Table Functions (UDTFs)
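As a taste of what those hands-on examples look like, here is a minimal PySpark sketch of a classic first Spark job: counting word frequencies in a text file. The input path "book.txt" is a placeholder for any local text file, not a file from the course.

```python
# Minimal PySpark word count -- a sketch, not course material.
# "book.txt" is a placeholder path; point it at any local text file.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()

# Read the file as an RDD of lines, then split, pair, and reduce by key.
lines = spark.sparkContext.textFile("book.txt")
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word.lower(), 1))
               .reduceByKey(lambda a, b: a + b))

# Show the ten most frequent words.
for word, count in counts.takeOrdered(10, key=lambda pair: -pair[1]):
    print(word, count)

spark.stop()
```

The same script runs unchanged on a laptop or, pointed at a different input path, on a cluster.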
By the end of this course, you’ll be running code that analyzes gigabytes’ worth of information – in the cloud – in a matter of minutes.
This course uses the familiar Python programming language; if you’d rather use Scala to get the best performance out of Spark, see my “Apache Spark with Scala - Hands On with Big Data” course instead.
What you’ll learn
Use DataFrames and Structured Streaming in Spark 3
Use the MLlib machine learning library to answer common data mining questions
Understand how Spark Streaming lets you process continuous streams of data in real time
Frame big data analysis problems as Spark problems
Use Amazon’s Elastic MapReduce service to run your job on a cluster with Hadoop YARN
Install and run Apache Spark on a desktop computer or on a cluster
Use Spark’s Resilient Distributed Datasets to process and analyze large data sets across many CPUs
Implement iterative algorithms such as breadth-first-search using Spark
Understand how Spark SQL lets you work with structured data (a short sketch follows this list)
Tune and troubleshoot large jobs running on a cluster
Share information between nodes on a Spark cluster using broadcast variables and accumulators, as sketched after this list
Understand how the GraphX library helps with network analysis problems
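For a sense of how Spark SQL handles structured data, here is a minimal sketch; the table and column names are illustrative, not from the course. A DataFrame is registered as a temporary view and queried with ordinary SQL.

```python
# Minimal Spark SQL sketch: query a DataFrame with SQL.
# The "people" view and its columns are made up for illustration.
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName("SparkSQLDemo").getOrCreate()

people = spark.createDataFrame([
    Row(name="Alice", age=34),
    Row(name="Bob", age=19),
])

# Register the DataFrame as a temporary view, then query it with SQL.
people.createOrReplaceTempView("people")
adults = spark.sql("SELECT name, age FROM people WHERE age >= 21")
adults.show()

spark.stop()
```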
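And here is a minimal sketch of broadcast variables and accumulators; the movie lookup table and IDs are invented for illustration. A broadcast ships a small read-only table to every node once instead of with every task, while an accumulator lets tasks on any node increment a shared counter.

```python
# Minimal sketch of broadcast variables and accumulators.
# The movie lookup table and ratings data below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("BroadcastDemo").getOrCreate()
sc = spark.sparkContext

# Broadcast a small lookup table (movie ID -> title) to every node once.
movie_names = sc.broadcast({1: "Toy Story", 2: "GoldenEye"})

# An accumulator that worker tasks can increment from anywhere on the cluster.
unknown_ids = sc.accumulator(0)

def to_title(movie_id):
    name = movie_names.value.get(movie_id)
    if name is None:
        unknown_ids.add(1)  # count IDs missing from the lookup table
        return "UNKNOWN"
    return name

ratings = sc.parallelize([(1, 5.0), (2, 3.0), (99, 4.0)])  # (movieID, rating)
titled = ratings.map(lambda pair: (to_title(pair[0]), pair[1]))

print(titled.collect())
print("IDs not found in lookup table:", unknown_ids.value)

spark.stop()
```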