Learning Patterns: Your Source for Quality Technology Courseware

Introduction to Spark 3 with Python

This course introduces the Apache Spark distributed computing engine, and is suitable for developers, data analysts, architects, technical managers, and anyone who needs to use Spark in a hands-on manner. It is based on the Spark 3.x release. All examples and labs use Python for programming.

The course provides a solid technical introduction to the Spark architecture and how Spark works. It covers the basic building blocks of Spark (e.g., RDDs and the distributed compute engine) as well as the higher-level constructs that provide a simpler and more capable interface (e.g., DataFrames and Spark SQL). Spark SQL and DataFrames, now the preferred programming APIs, receive in-depth coverage, including an exploration of possible performance issues and strategies for optimization.

The course also covers more advanced capabilities, such as using Spark Structured Streaming to process streaming data and integrating with Apache Kafka.

The course is very hands-on, with many labs. Participants will interact with Spark through the pyspark shell (for interactive, ad-hoc processing) as well as through programs using the Spark API. After taking this course, you will be ready to work with Spark in an informed and productive manner.

Course Information:

Availability: NEW

Course Code: SPARK3-PYTHON

Price: $200

Duration: 4 days

Labs: Many hands-on labs (at least 50% of course time)

Prerequisites: Working knowledge of a programming language; no Java experience is needed

Supported Software Environments:

  • Standard: We supply a virtual machine with all software and labs installed (generally a VirtualBox VM)
  • Contact us for details on other environments (non-standard software may require additional lead time and incur additional charges)

Course Objectives:

  • Understand the need for Spark in data processing
  • Understand the Spark architecture and how it distributes computations to cluster nodes
  • Be familiar with basic installation / setup / layout of Spark
  • Use the Spark shell for interactive and ad-hoc operations
  • Understand RDDs (Resilient Distributed Datasets), and data partitioning, pipelining, and computations
  • Understand and use RDD operations such as map(), filter(), and others
  • Understand and use Spark SQL and the DataFrame API
  • Understand DataFrame capabilities, including the Catalyst query optimizer and Tungsten memory/CPU optimizations
  • Be familiar with performance issues, and use DataFrames and Spark SQL for efficient computations
  • Understand Spark’s data caching and use it for efficient data reuse
  • Write/run standalone Spark programs with the Spark API
  • Use Spark Structured Streaming to process streaming (real-time) data
  • Ingest streaming data from Kafka, and process via Spark Structured Streaming
  • Understand performance implications and optimizations when using Spark

Course Outline:

  • Session 1: Introduction to Spark
    • Overview, Motivations, Spark Systems
    • Spark Ecosystem
    • Spark vs. Hadoop
    • Acquiring and Installing Spark
    • The Spark Shell, SparkContext
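
    A minimal sketch of the kind of interactive session this material leads to. The shell pre-creates the SparkContext (sc) and SparkSession (spark); the file path below is hypothetical:

        $ pyspark
        >>> sc.version                                 # SparkContext pre-created by the shell
        '3.3.0'                                        # exact version depends on the installation
        >>> lines = sc.textFile("data/README.md")      # hypothetical path
        >>> lines.filter(lambda l: "Spark" in l).count()
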
  • Session 2: RDDs and Spark Architecture
    • RDD Concepts, Lifecycle, Lazy Evaluation
    • RDD Partitioning and Transformations
    • Working with RDDs - Creating and Transforming (map, filter, etc.)
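
    A short sketch of the Session 2 material (the app name is illustrative). Transformations such as map() and filter() are lazy and pipelined; only an action such as collect() triggers computation across the partitions:

        from pyspark import SparkContext

        sc = SparkContext(appName="rdd-demo")            # illustrative app name

        nums = sc.parallelize(range(10), numSlices=4)    # distribute a local collection

        # map()/filter() only record lineage; nothing runs yet
        evens_squared = nums.filter(lambda n: n % 2 == 0).map(lambda n: n * n)

        print(evens_squared.collect())                   # action: [0, 4, 16, 36, 64]
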
  • Session 3: Spark SQL, DataFrames, and DataSets
    • Overview
    • SparkSession, Loading/Saving Data, Data Formats (JSON, CSV, Parquet, text ...)
    • Introducing DataFrames (Creation and Schema Inference)
    • Working with the DataFrame (untyped) Query DSL (Column, Filtering, Grouping, Aggregation)
    • SQL-based Queries
    • Mapping and Splitting (flatMap(), explode(), and split())
    • DataFrames vs. RDDs
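
    A sketch of the DataFrame and SQL interfaces covered in Session 3; the JSON file and its columns (age, city) are hypothetical:

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("df-demo").getOrCreate()

        df = spark.read.json("people.json")        # schema inferred from the data
        df.printSchema()

        # Untyped query DSL: filtering, grouping, aggregation
        (df.filter(F.col("age") > 21)
           .groupBy("city")
           .agg(F.count("*").alias("n"), F.avg("age").alias("avg_age"))
           .show())

        # The equivalent SQL-based query against a temporary view
        df.createOrReplaceTempView("people")
        spark.sql(
            "SELECT city, COUNT(*) AS n FROM people WHERE age > 21 GROUP BY city"
        ).show()
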
  • Session 4: Shuffling Transformations and Performance
    • Grouping, Reducing, Joining
    • Shuffling, Narrow vs. Wide Dependencies, and Performance Implications
    • Exploring the Catalyst Query Optimizer (explain(), Query Plans, Issues with lambdas)
    • The Tungsten Optimizer (Binary Format, Cache Awareness, Whole-Stage Code Gen)
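
    A sketch of how the plans discussed in Session 4 can be inspected. groupBy() forces a shuffle (a wide dependency), which appears as an Exchange node in the plan:

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("plan-demo").getOrCreate()

        agg = (spark.range(1_000_000)                    # one column: id
               .withColumn("bucket", F.col("id") % 10)
               .groupBy("bucket").count())               # wide dependency -> shuffle

        # Look for Exchange (shuffle) and WholeStageCodegen (Tungsten
        # code generation) nodes in the formatted plan.
        agg.explain(mode="formatted")

    Note that a Python lambda in an RDD operation or UDF is opaque to Catalyst, which cannot optimize through it; this is one of the lambda issues the session explores.
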
  • Session 5: Spark Streaming
    • Introduction and Streaming Basics
    • Structured Streaming (Spark 2+)
      • Continuous Applications
      • Table Paradigm, Result Table
      • Steps for Structured Streaming
      • Sources and Sinks
    • Consuming Kafka Data
      • Kafka Overview
      • Structured Streaming - "kafka" format
      • Processing the Stream
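
    A minimal Structured Streaming sketch using the "kafka" source format. The broker address and topic are hypothetical, and the Kafka connector package (spark-sql-kafka) must be on the application's classpath:

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

        # Source: subscribe to a Kafka topic
        events = (spark.readStream
                  .format("kafka")
                  .option("kafka.bootstrap.servers", "localhost:9092")
                  .option("subscribe", "events")
                  .load())

        # Kafka delivers binary key/value columns; cast the value and
        # aggregate into the continuously updated result table
        counts = (events.selectExpr("CAST(value AS STRING) AS line")
                  .groupBy("line").count())

        # Sink: emit the full result table to the console on each trigger
        query = (counts.writeStream
                 .outputMode("complete")
                 .format("console")
                 .start())
        query.awaitTermination()
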
  • Session 6: Performance Tuning
    • Caching - Concepts, Storage Type, Guidelines
    • Minimizing Shuffling for Increased Performance
    • Using Broadcast Variables and Accumulators
    • General Performance Guidelines
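
    A sketch of the caching and shared-variable techniques listed above; the dataset, column name, and country codes are illustrative:

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("tuning-demo").getOrCreate()
        sc = spark.sparkContext

        df = spark.read.parquet("events.parquet")    # hypothetical dataset
        df.cache()                                   # keep partitions in executor memory
        df.count()                                   # first action materializes the cache

        # Broadcast variable: ship a small lookup table to each executor once
        countries = sc.broadcast({"US": "United States", "DE": "Germany"})

        # Accumulator: executors add to it; the driver reads the total
        # (task retries can over-count, so treat it as diagnostic)
        unknown = sc.accumulator(0)

        def label(code):
            if code not in countries.value:
                unknown.add(1)
            return countries.value.get(code, "unknown")

        df.rdd.map(lambda row: label(row["country"])).count()   # assumes a country column
        print("unknown country codes:", unknown.value)
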
  • Session 7: Creating Standalone Applications
    • Core API, SparkSession.builder
    • Configuring and Creating a SparkSession
    • Building and Running Applications - spark-submit
    • Application Lifecycle (Driver, Executors, and Tasks)
    • Cluster Managers (Standalone, YARN, Mesos)
    • Logging and Debugging
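
    A skeletal standalone application of the kind built in Session 7; the app name and configuration value are illustrative:

        from pyspark.sql import SparkSession

        if __name__ == "__main__":
            # Unlike the shell, a standalone program builds its own session
            spark = (SparkSession.builder
                     .appName("standalone-demo")
                     .config("spark.sql.shuffle.partitions", "8")
                     .getOrCreate())

            spark.range(100).selectExpr("sum(id) AS total").show()

            spark.stop()       # release the application's resources

    The driver is then launched with spark-submit, for example: spark-submit --master local[4] app.py (the master URL and file name are placeholders); the chosen cluster manager allocates the executors that run the application's tasks.
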