Apache Spark Big Data Boot Camp

3 Day Classroom  •  3 Day Live Online
3 Day Training at your location.
Adjustable to meet your needs.
When training eight or more people, onsite team training offers a more affordable and convenient option.

Apache Spark: the second-generation big data platform that supplants and improves on Hadoop

Its speed, versatility, and rich APIs and libraries have made Apache Spark the leading toolset for powering big data solutions with distributed cluster computing. Spark also brings serious data science capabilities to applications, with R-style DataFrames and big data streaming that overcome the time constraints of earlier platforms.

Learn to use Spark for your own applications in three days packed with hands-on labs.

This fast-paced 3-day course is for data engineers, data analysts, data scientists, developers, and operations teams. It provides a thorough, hands-on overview of the Apache Spark platform and the technologies and paradigms it comprises.

  • We will explore Apache Spark: how it came into existence, how it compares with Apache Hadoop – currently the de facto big data standard – the new use cases Spark makes possible, and how your current use cases can be made faster and more powerful.
  • We will also look at Spark’s streaming architecture, which can address most of your business’s real-time needs, and Spark’s SQL architecture, which makes migrating from slower traditional analytical tools such as Hive to Spark SQL straightforward.
  • We will spend some time on Spark MLlib and ML, which provide a fully integrated architecture for both real-time and batch analytics.
  • Finally, we will look at Spark GraphX, which handles graph algorithms.

All these workshops are delivered with guided hands-on labs allowing attendees to explore the data and the techniques and familiarize themselves with the various paradigms.

Upcoming Dates and Locations
Guaranteed To Run
Mar 2, 2020 – Mar 4, 2020    8:30am – 4:30pm Live Online
Mar 2, 2020 – Mar 4, 2020    8:30am – 4:30pm Philadelphia, Pennsylvania

Hyatt Place
440 American Avenue
King Of Prussia, PA 19406
United States

May 4, 2020 – May 6, 2020    8:30am – 4:30pm Live Online
May 4, 2020 – May 6, 2020    8:30am – 4:30pm Washington, District of Columbia

Attune, formerly Microtek Washington, DC
1110 Vermont Avenue NW
Suite 700
Washington, DC 20005
United States

Jul 8, 2020 – Jul 10, 2020    8:30am – 4:30pm Live Online
Jul 8, 2020 – Jul 10, 2020    8:30am – 4:30pm Atlanta, Georgia

Attune, formerly Microtek Atlanta
1000 Abernathy Rd. NE Ste 194
Northpark Bldg 400
Atlanta, GA 30328
United States

Sep 1, 2020 – Sep 3, 2020    8:30am – 4:30pm San Francisco, California

Learn IT
33 New Montgomery St.
Suite 300
San Francisco, CA 94105
United States

Sep 1, 2020 – Sep 3, 2020    11:30am – 7:30pm Live Online
Nov 2, 2020 – Nov 4, 2020    8:30am – 4:30pm Chicago, Illinois

Attune, formerly Microtek Chicago
230 W. Monroe
Suite 900
Chicago, IL 60606
United States

Nov 2, 2020 – Nov 4, 2020    9:30am – 5:30pm Live Online
Course Outline

Part 1: Introduction to Big Data & Apache Spark

  1. Introduce Data Analysis
  2. Introduce Big Data
  3. Big Data Definition
  4. Introduce the techniques and challenges in Big Data
  5. Introduce the techniques and challenges in Distributed Computing
  6. Show how the functional programming approach is particularly useful in tackling these challenges
  7. Short overview of previous solutions: Google’s MapReduce and Apache Hadoop
  8. Introduce Apache Spark

Hands-on practice: We will get exposure to Spark administration and setup
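
To make the MapReduce idea from this part concrete, here is a word count sketched in plain Python (not Hadoop or Spark code): the map phase emits (word, 1) pairs and the reduce phase sums the counts per word, which is exactly the division of labor the distributed frameworks parallelize.

```python
from itertools import chain

# A plain-Python illustration of the MapReduce paradigm (not Hadoop/Spark code).
def map_phase(line):
    # Map: emit a (word, 1) pair for every word in a line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each key (word).
    counts = {}
    for word, n in pairs:
        counts[word] = counts.get(word, 0) + n
    return counts

lines = ["Spark improves on Hadoop", "Hadoop popularized MapReduce"]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
print(counts["hadoop"])  # 2
```

In a real cluster, many mappers run in parallel over data partitions and the framework shuffles pairs by key to the reducers.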

Part 2: Deploying & Understanding Apache Spark Architecture

  1. Spark Architecture in a Cluster
  2. Spark Ecosystem and Cluster Management
  3. Deploying Spark on a Cluster
  4. Deploying Spark on a Standalone Cluster
  5. Deploying Spark on a Mesos Cluster
  6. Deploying Spark on YARN cluster
  7. Cloud-based Deployment

Hands-on practice: Learn to deploy and begin using Spark

Part 3: Spark Core, RDDs and Spark Shell

  1. Dig deeper into Apache Spark
  2. Introduce Resilient Distributed Datasets (RDDs)
  3. Apache Spark installation (basic, local)
  4. Introduce the Spark Shell
  5. Actions and Transformations (Laziness)
  6. Caching
  7. Loading and Saving data files from the file system

Hands-on practice: Get hands-on with Spark Core and RDDs
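
One idea from this part worth previewing is laziness: RDD transformations only build an execution plan, and nothing runs until an action is called. Python generators give a rough analogy (this is plain Python, not Spark API code):

```python
# Plain-Python analogy for Spark's lazy evaluation (not actual Spark code):
# generator expressions, like RDD transformations, only describe work;
# nothing runs until an "action" consumes the pipeline.

data = range(1, 6)                           # stands in for a small dataset
squared = (x * x for x in data)              # "transformation": nothing computed yet
evens = (x for x in squared if x % 2 == 0)   # chained transformation, still lazy

result = list(evens)                         # the "action" triggers the whole pipeline
print(result)  # [4, 16]
```

Laziness lets Spark see the whole chain of transformations before executing, so it can optimize and pipeline the work.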

Part 4: Deep Dive into RDD

  1. Tailored RDD
  2. Pair RDD
  3. NewHadoopRDD

  4. Aggregations
  5. Partitioning
  6. Broadcast Variables
  7. Accumulators

Hands-on practice: You’ll learn expanded RDD capabilities
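
As a taste of Pair RDD aggregations, here is what `reduceByKey` computes conceptually, sketched in plain Python (not Spark code); on a real cluster Spark also combines values locally within each partition before shuffling partial results by key:

```python
from collections import defaultdict
from functools import reduce

# Plain-Python sketch of what a Pair RDD's reduceByKey computes (not Spark
# code). Here everything lives in a single "partition".
def reduce_by_key(pairs, combine):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: reduce(combine, values) for key, values in grouped.items()}

totals = reduce_by_key([("a", 1), ("b", 2), ("a", 3), ("b", 4)],
                       lambda x, y: x + y)
print(totals)  # {'a': 4, 'b': 6}
```

The per-partition pre-combining is why `reduceByKey` is usually preferred over `groupByKey` in Spark: far less data crosses the network.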

Part 5: Spark SQL and DataFrames

  1. SparkSQL & DataFrames
  2. DataFrame & SQL API
  3. DataFrame Schema
  4. Datasets and Encoders
  5. Loading and Saving data
  6. Aggregations
  7. Joins

Hands-on practice: You’ll learn to use one of Spark’s most powerful features: DataFrames, which bring R-style data modeling to data distributed across computing clusters
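
To preview the relational operations in this part, the sketch below mimics a DataFrame group-by aggregation and an inner join in plain Python (not Spark SQL code); the tables and column names are made up for illustration. In Spark these would be `df.groupBy(...)` and `df.join(...)` running across a cluster.

```python
from collections import defaultdict

# Plain-Python sketch of two relational operations Spark SQL / DataFrames
# provide (not Spark code). Tables and columns are invented for illustration.
sales = [
    {"region": "east", "amount": 100},
    {"region": "west", "amount": 50},
    {"region": "east", "amount": 25},
]
managers = [
    {"region": "east", "manager": "Ann"},
    {"region": "west", "manager": "Bo"},
]

# Like a groupBy("region") with a sum over "amount"
totals = defaultdict(int)
for row in sales:
    totals[row["region"]] += row["amount"]

# Like an inner join on the "region" column
by_region = {m["region"]: m["manager"] for m in managers}
joined = [
    {"region": r, "total": t, "manager": by_region[r]}
    for r, t in totals.items()
    if r in by_region
]
print(joined[0])  # {'region': 'east', 'total': 125, 'manager': 'Ann'}
```

Spark's Catalyst optimizer plans how such aggregations and joins execute over partitioned data, which is what makes the DataFrame API both concise and fast.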

Part 6: Spark Streaming

  1. A brief introduction to streaming
  2. Spark Streaming
  3. Discretized Streams
  4. Structured Streaming
  5. Stateful / Stateless Transformations
  6. Checkpointing
  7. Interoperability with Streaming Platforms (Apache Kafka)

Hands-on practice: You’ll work with another of Spark’s most exciting features: big data streaming, which overcomes the time constraints of earlier big data solutions
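
The micro-batch model behind Spark Streaming can be previewed in plain Python (not Spark code): the stream arrives as a sequence of small batches, and a stateful transformation carries state, such as a running count, across batches.

```python
# Plain-Python sketch of Spark Streaming's micro-batch model (not Spark code).
# A stateful transformation (compare updateStateByKey / mapGroupsWithState)
# carries a running count across micro-batches.
batches = [["error", "info"], ["error"], ["warn", "error"]]

state = {}                             # key -> running count across batches
for batch in batches:                  # each micro-batch is processed in turn
    for event in batch:
        state[event] = state.get(event, 0) + 1
    # In real Spark Streaming, checkpointing persists this state so a failed
    # job can recover and keep counting where it left off.

print(state["error"])  # 3
```

This is why checkpointing appears in the outline: without persisted state, a restarted streaming job would lose its running aggregates.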

Part 7: Spark MLlib and ML

  1. Introduction to Machine Learning
  2. Spark Machine Learning APIs
  3. Feature Extractor and Transformation
  4. Classification using Logistic Regression
  5. Best Practice in ML for the Practitioners

Hands-on practice: Use Spark to make production-ready machine learning and predictive analytics calls
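
As a preview of the classification unit, here is a tiny from-scratch logistic regression in plain Python (not Spark MLlib code), fit by gradient descent on four hand-made points; Spark's ML APIs fit the same kind of model, but over distributed data.

```python
import math

# A tiny from-scratch logistic regression in plain Python (not Spark MLlib).
# One feature, four hand-made points; the label flips around x = 1.5.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0, 0, 1, 1]

w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):                                # stochastic gradient descent
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))     # sigmoid prediction
        w -= lr * (p - y) * x                        # gradient of the log-loss
        b -= lr * (p - y)

def predict(x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

print([predict(x) for x in xs])  # [0, 0, 1, 1]
```

In Spark, the same loss is minimized across partitions of a large dataset, and the feature extraction and transformation steps in the outline prepare the input vectors.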

Part 8: GraphX

  1. Brief Introduction to Graph Theory
  2. GraphX
  3. Vertex and Edge RDDs
  4. Graph operators
  5. Pregel API
  6. PageRank / Travelling Salesman Problem

Hands-on practice: Work through graph problems using GraphX
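
The PageRank algorithm covered here can be sketched in a few lines of plain Python (not GraphX code); GraphX runs the same iteration over distributed vertex and edge RDDs.

```python
# A minimal PageRank in plain Python (not GraphX code), on a made-up
# three-page graph.
links = {            # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
damping = 0.85
ranks = {page: 1.0 for page in links}

for _ in range(30):
    contribs = {page: 0.0 for page in links}
    for page, outgoing in links.items():
        share = ranks[page] / len(outgoing)       # split rank across out-links
        for target in outgoing:
            contribs[target] += share
    ranks = {page: (1 - damping) + damping * contribs[page] for page in links}

# "c" is linked from both "a" and "b", so it ends up ranked highest.
best = max(ranks, key=ranks.get)
print(best)  # c
```

Each pass propagates rank along edges, which maps naturally onto GraphX's Pregel-style message passing between vertices.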

Part 9: Testing and Debugging Spark

  1. Testing in a Distributed Environment
  2. Testing Spark Application
  3. Debugging Spark Application

Hands-on practice: You’ll get lab practice supporting Spark solutions, with best practices for testing, debugging, and handling day-to-day production issues
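
One testing practice this part builds on: keep per-record logic in pure functions so it can be unit-tested without a SparkContext, then pass those functions to `rdd.map` / `rdd.filter` in the real job. A plain-Python sketch (the "LEVEL message" log format below is made up for illustration):

```python
# Testing-pattern sketch in plain Python: per-record logic lives in pure
# functions that can be unit-tested without a cluster. In the real Spark job
# these functions would be passed to rdd.map / rdd.filter.
# The "LEVEL message" log format is invented for illustration.

def parse_log_line(line):
    level, _, message = line.partition(" ")
    return (level, message)

def is_error(record):
    return record[0] == "ERROR"

# Unit tests: no SparkContext required.
assert parse_log_line("ERROR disk full") == ("ERROR", "disk full")
assert is_error(("ERROR", "disk full"))
assert not is_error(("INFO", "job started"))
```

Keeping the distributed plumbing separate from the record-level logic makes failures far easier to reproduce and debug locally.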

Who should attend
  • Developers and Team Leads
  • Software Engineers
  • Business Analysts
  • System Analysts
  • Data Analysts
  • Data Scientists
  • Operations and DevOps Engineers
  • Java Developers
  • Big Data Engineers

Everyone can access the labs through a cloud environment set up by the instructor. Participation is not mandatory; attendees who prefer can simply observe the instructor work through the lab examples. Scala or Python skills are nice to have for a better understanding of what the labs are doing.

Download the brochure