LearningPatterns: Your Global Source for Java Training, Mentoring, and Consulting

Course Description (Course code - SPARK):

Introduction to Spark Programming - Courseware for License

This course introduces the Apache Spark distributed computing engine, and is suitable for developers, data analysts, architects, technical managers, and anyone who needs to use Spark in a hands-on manner.

The course provides a solid technical introduction to the Spark architecture and how Spark works. It covers the basic building blocks of Spark (e.g. RDDs and the distributed compute engine), as well as higher-level constructs that provide a simpler and more capable interface (e.g. Spark SQL and DataFrames). It also covers more advanced capabilities such as the use of Spark Streaming to process streaming data, and provides an overview of Spark GraphX (graph processing) and Spark MLlib (machine learning). Finally, the course explores possible performance issues and strategies for optimization.

The course is very hands-on, with many labs. Participants will interact with Spark through the Spark shell (for interactive, ad-hoc processing) as well as through programs using the Spark API. Labs are currently provided in Scala - contact us for Python/Java support.

The Apache Spark distributed computing engine is rapidly becoming a primary tool for processing and analyzing large-scale data sets. It has many advantages over existing engines such as Hadoop, including runtime speeds that are 10-100x faster and a much simpler programming model. After taking this course, you will be ready to work with Spark in an informed and productive manner.

Course Information

Duration: 3 days

Labs: Minimum 50% hands-on labs

Prerequisites: Reasonable programming experience. An overview of Scala is provided for those who don't know it.

Supported Platforms: Spark 1.4+ on Windows and Linux

Knowledge and Skills Gained:

  • Understand the Spark architecture and how work is distributed across a cluster
  • Use the Spark shell for interactive, ad-hoc data processing
  • Program with RDDs - creating, transforming, and caching them through the Spark API
  • Use Spark SQL and DataFrames to query structured data
  • Process streaming data with Spark Streaming
  • Describe the capabilities of Spark GraphX and Spark MLlib at an overview level
  • Recognize common performance issues and apply basic optimization strategies

Course Details:

Session 1 (Optional): Scala Ramp Up

  • Scala Introduction, Variables, Data Types, Control Flow
  • The Scala Interpreter
  • Collections and their Standard Methods (e.g. map())
  • Functions, Methods, Function Literals
  • Class, Object, Trait
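
By way of illustration (this snippet is not taken from the course labs; all names are made up), the ramp-up covers Scala along these lines: collections and their standard methods, function literals, and simple classes and traits.

    // Collections and standard methods with function literals
    val nums    = List(1, 2, 3, 4)
    val doubled = nums.map(n => n * 2)        // explicit function literal
    val evens   = nums.filter(_ % 2 == 0)     // underscore shorthand

    // A method, a trait, and a class implementing it
    def square(x: Int): Int = x * x

    trait Greeter { def greet(name: String): String }
    class FriendlyGreeter extends Greeter {
      def greet(name: String) = s"Hello, $name"
    }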

Session 2: Introduction to Spark

  • Overview, Motivations, Spark Systems
  • Spark Ecosystem
  • Spark vs. Hadoop
  • Acquiring and Installing Spark
  • The Spark Shell
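
As a taste of the interactive style used throughout the course, here is a minimal word-count sketch as it might be typed into the Spark shell. The input path is a placeholder; the shell's pre-created SparkContext is available as sc.

    // Inside the Spark shell, 'sc' is created for you
    val lines  = sc.textFile("data/sample.txt")          // placeholder path
    val counts = lines.flatMap(_.split("\\s+"))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)
    counts.take(10).foreach(println)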

Session 3: RDDs and Spark Architecture

  • RDD Concepts, Lifecycle, Lazy Evaluation
  • RDD Partitioning and Transformations
  • Working with RDDs - Creating and Transforming (map, filter, etc.)
  • Key-Value Pairs - Definition, Creation, and Operations
  • Caching - Concepts, Storage Types, Guidelines
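
The sketch below (illustrative only; the file and field layout are hypothetical) ties these ideas together: an RDD is created lazily, transformed into key-value pairs, cached, and only evaluated when an action runs.

    val raw    = sc.textFile("data/orders.txt")         // creation - nothing runs yet
    val parsed = raw.map(_.split(","))                  // transformations are lazy
                    .filter(_.nonEmpty)
    val pairs  = parsed.map(fields => (fields(0), 1))   // key-value pair RDD
    val counts = pairs.reduceByKey(_ + _)               // wide transformation (shuffle)
    counts.cache()                                      // cache before reusing
    println(counts.count())                             // action triggers evaluation
    println(counts.take(5).mkString(", "))              // served from the cache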

Session 4: Spark API

  • Overview, Basic Driver Code, SparkConf
  • Creating and Using a SparkContext
  • RDD API
  • Building and Running Applications
  • Application Lifecycle
  • Cluster Managers
  • Logging and Debugging
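
To show what a standalone driver looks like (the application name and argument paths here are hypothetical, not lab code), a minimal program creates a SparkConf and SparkContext, builds an RDD pipeline, and is then packaged and launched with spark-submit, which is where cluster managers and logging settings come into play.

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCountApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("WordCountApp")
        val sc   = new SparkContext(conf)            // entry point to the RDD API
        val counts = sc.textFile(args(0))
                       .flatMap(_.split("\\s+"))
                       .map((_, 1))
                       .reduceByKey(_ + _)
        counts.saveAsTextFile(args(1))
        sc.stop()
      }
    }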

Session 5: Spark SQL

  • Introduction and Usage
  • DataFrames and SQLContext
  • Working with JSON
  • Querying - The DataFrame DSL and SQL
  • Data Formats
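
A brief, illustrative sketch (the JSON file is hypothetical) of the two query styles covered here, using the Spark 1.x SQLContext:

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    val people = sqlContext.read.json("data/people.json")   // schema inferred from JSON
    people.printSchema()

    // The DataFrame DSL ...
    people.filter(people("age") > 21).select("name", "age").show()

    // ... and plain SQL against a registered temporary table
    people.registerTempTable("people")
    sqlContext.sql("SELECT name FROM people WHERE age > 21").show()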

Session 6: Spark Streaming

  • Overview and Streaming Basics
  • DStreams (Discretized Streams)
  • Architecture, Stateless, Stateful, and Windowed Transformations
  • Spark Streaming API
  • Programming and Transformations
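
To make the DStream model concrete, here is a small, illustrative sketch (host, port, and durations are placeholders) that counts words over a sliding window of a socket stream:

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc   = new StreamingContext(sc, Seconds(5))        // 5-second batches
    val lines = ssc.socketTextStream("localhost", 9999)     // placeholder source
    val counts = lines.flatMap(_.split("\\s+"))
                      .map((_, 1))
                      .reduceByKeyAndWindow((a: Int, b: Int) => a + b,
                                            Seconds(30), Seconds(10))
    counts.print()
    ssc.start()
    ssc.awaitTermination()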

Session 7: Performance Characteristics and Tuning

  • The Spark UI
  • Narrow vs. Wide Dependencies
  • Minimizing Data Processing and Shuffling
  • Using Caching
  • Using Broadcast Variables and Accumulators
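
As an illustration of the last two bullets (the lookup table and file are made up), a broadcast variable ships read-only data to each executor once, while an accumulator gathers simple counts back to the driver:

    val codes   = Map("US" -> "United States", "FR" -> "France")
    val lookup  = sc.broadcast(codes)               // sent to each executor once
    val badRows = sc.accumulator(0, "bad rows")     // Spark 1.x accumulator API

    val rows = sc.textFile("data/customers.csv").map(_.split(","))
    val named = rows.map { fields =>
      if (fields.length < 2) {
        badRows += 1
        (fields(0), "Unknown")
      } else {
        (fields(0), lookup.value.getOrElse(fields(1), "Unknown"))
      }
    }
    named.count()                                   // action runs the job
    println(s"Bad rows seen: ${badRows.value}")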

Session 8 (Optional): Spark GraphX Overview

  • Introduction
  • Constructing Simple Graphs
  • GraphX API
  • Shortest Path Example
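
A minimal, made-up example of constructing a property graph and reading its triplets:

    import org.apache.spark.graphx.{Edge, Graph}

    val vertices = sc.parallelize(Seq(
      (1L, "Alice"), (2L, "Bob"), (3L, "Carol")))
    val edges = sc.parallelize(Seq(
      Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows")))

    val graph = Graph(vertices, edges)
    println(graph.numVertices)                       // 3
    graph.triplets.collect().foreach(t =>
      println(s"${t.srcAttr} ${t.attr} ${t.dstAttr}"))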

Session 9 (Optional): Spark MLlib Overview

  • Introduction
  • Feature Vectors
  • Clustering / Grouping, K-Means
  • Recommendations
  • Classifications
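
And finally, an illustrative K-Means sketch on a few made-up 2-D feature vectors:

    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    val points = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.2),
      Vectors.dense(9.0, 9.1), Vectors.dense(9.2, 8.9)))

    val model = KMeans.train(points, k = 2, maxIterations = 20)
    model.clusterCenters.foreach(println)
    println(model.predict(Vectors.dense(0.2, 0.1)))  // cluster for a new point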