Big Data Processing with Apache Spark
Course Description
Processing big data in real time is challenging because of scalability, information consistency, and fault tolerance. This course shows you how to use Spark to make your overall analysis workflow faster and more efficient. You'll learn the core concepts and tools of the Spark ecosystem, including Spark Streaming and its API, the machine learning extensions, and Structured Streaming.
Overview
You'll begin by learning data processing fundamentals with the Resilient Distributed Dataset (RDD), SQL, Dataset, and DataFrame APIs. After grasping these fundamentals, you'll move on to using the Spark Streaming APIs to consume data in real time from TCP sockets, and integrate Amazon Web Services (AWS) for stream consumption.
By the end of this course, you'll not only understand how to use machine learning extensions and structured streams, but you'll also be able to apply Spark to your own upcoming big data projects.
After completing this course, you will be able to:
- Write your own Python programs that can interact with Spark
- Implement data stream consumption using Apache Spark
- Recognize common operations in Spark to process known data streams
- Integrate Spark streaming with Amazon Web Services
- Create a collaborative filtering model with Python and the MovieLens dataset
- Apply processed data streams to Spark machine learning APIs
Course Length
2 days
Scope
This course is aimed at IT professionals seeking to learn Spark to process big data. It gets you up and running with Apache Spark and Python: you'll integrate Spark with AWS for real-time analytics, and finally apply processed data streams to Apache Spark's machine learning APIs.
Target Audience
Big Data Processing with Apache Spark is for you if you are a software engineer, architect, or IT professional who wants to explore distributed systems and big data analytics. You don't need any prior knowledge of Spark, but prior experience working with Python is recommended.
Technical Requirements
Hardware:
For an optimal experience with the hands-on labs and other practical activities, we recommend the following hardware configuration:
- Processor: Intel Core i5 or equivalent
- Memory: 4 GB RAM
- Storage: 35 GB available space
Software:
- OS: Windows 7 SP1 64-bit, Windows 8.1 64-bit or Windows 10 64-bit
- PostgreSQL 9.0 or above
- Python 3.0 or above
- Spark 2.3.0
- Amazon Web Services (AWS) account
Course Outline
Lesson 1: Introduction to Spark Distributed Processing
- Introduction to Spark and Resilient Distributed Datasets
- Operations Supported by the RDD API
- Self-Contained Python Spark Programs
- Introduction to SQL, Datasets, and DataFrames
Lesson 2: Introduction to Spark Streaming
- Streaming Architectures
- Introduction to Discretized Streams
- Windowing Operations
- Introduction to Structured Streaming
Lesson 3: Spark Streaming Integration with AWS
- Spark Integration with AWS Services
- Integrating AWS Kinesis and Python
- AWS S3 Basic Functionality
Lesson 4: Spark Streaming, ML, and Windowing Operations
- Spark Integration with Machine Learning
SKU | 035435I
---|---
Weight | 0.7320
Coming Soon | No
Days of Training | 2.0
Audience | Instructor
Product Family | Partnerware
Product Type | Print and Digital Courseware
Electronic | Yes
ISBN | 1789530695
Language | English
Page Count | 124
Curriculum Library | No
Year | No
Manufacturer's Product Code | No
Current Revision | 1.0
Revision Notes | No Revision Information Available
Original Publication Date | 2018-12-14
(Full Color) Big Data Processing with Apache Spark
(035435SC) Student Print and Digital Courseware: $144.00