Big Data Fundamentals

2 Day Classroom  •  2 Day Live Online
Adjustable to meet your needs.
Group Rate:
When training eight or more people, onsite team training offers a more affordable and convenient option.

This course is a survey of big data – the landscape, the technology behind it, business drivers and strategic possibilities. “Big data” is a hot buzzword, but most organizations struggle to put it to practical use. Assuming no prior knowledge of Apache Hadoop or big data management, this course teaches professionals in a wide range of roles how to tap into and manage the potential benefits of big data, including:

  • Discovering customer insights buried in your existing data
  • Uncovering product opportunities from data insights
  • Pinpointing decision points and criteria
  • Scaling your existing workflows and operations
  • Learning to ask questions that drive tangible business value from big data tools
  • Navigating the technology stacks and tools used to work with big data
  • Establishing a common vocabulary on your teams for applying big data practices
  • Getting an overview of how big data technologies work: Apache Hadoop, Spark, Pig, Hive, Sqoop, Oozie, and Flume
  • Designing both functional and non-functional requirements for working with big data
  • Understanding common business cases for big data
  • Differentiating between hype and what’s truly possible
  • Looking at examples of real-world big data use cases
  • Selecting initiatives and projects with high potential to benefit from big data applications
  • Understanding what staffing, technical skills, and training are required for projects that incorporate or focus on big data
Upcoming Dates and Locations
Guaranteed To Run
Sep 26, 2019 – Sep 27, 2019    8:30am – 4:30pm Live Online
Oct 24, 2019 – Oct 25, 2019    8:30am – 4:30pm Live Online
Dec 5, 2019 – Dec 6, 2019    8:30am – 4:30pm Live Online
Course Outline

Part 1: Introduction to Big Data

  1. Academic
  2. Early web
  3. Web-scale
    • 1994 – 2012
    • 2016
    • 2020

Part 2: Sources (Examples)

  1. Internet
  2. Transport systems
  3. Medical, healthcare
  4. Insurance
  5. Military and others

Part 3: Hadoop – the open-source platform for working with big data

  1. History
  2. Yahoo
  3. Platform fragmentation
  4. What usage looks like in the enterprise

Part 4: The concepts

  1. Load data how you find it
  2. Process it when you can
  3. Project it into various schemas on the fly
  4. Push it back to where you need it
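The four steps above describe the “schema-on-read” idea behind Hadoop: store raw data as-is, then project it into whatever schema a given job needs at read time. A minimal plain-Python sketch of that idea (the records and field names here are made up for illustration; this is not a Hadoop API):

```python
import json

# Raw records are loaded exactly as found -- no upfront schema (schema-on-read).
raw_lines = [
    '{"user": "alice", "page": "/home", "ms": 12}',
    '{"user": "bob", "page": "/cart", "ms": 40}',
]

def project(line, fields):
    """Project a raw record into an ad-hoc schema at read time."""
    record = json.loads(line)
    return {f: record.get(f) for f in fields}

# Two different "schemas" over the same stored data, applied on the fly.
clickstream = [project(l, ["user", "page"]) for l in raw_lines]
latency = [project(l, ["page", "ms"]) for l in raw_lines]
```

The same stored bytes serve both queries; nothing was transformed at load time.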

Part 5: The basics

  1. What it’s good for
  2. What it can’t do / disadvantages
  3. Most common use cases for big data

Part 6: Introduction to HDFS

  1. Robustness
  2. Data Replication
  3. Gotchas
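HDFS gets its robustness from replication: each block of a file is copied to several DataNodes (the default replication factor is 3), so losing one node does not lose data. A toy sketch of that placement idea, with made-up node names and a tiny block size for illustration (real HDFS placement is rack-aware and more involved):

```python
from itertools import cycle

def place_blocks(file_bytes, block_size, nodes, replication=3):
    """Split a file into blocks and assign each block to several nodes."""
    blocks = [file_bytes[i:i + block_size]
              for i in range(0, len(file_bytes), block_size)]
    node_ring = cycle(range(len(nodes)))  # rotate starting node per block
    placement = {}
    for block_id, _ in enumerate(blocks):
        start = next(node_ring)
        # Copy each block onto `replication` distinct nodes.
        placement[block_id] = [nodes[(start + r) % len(nodes)]
                               for r in range(replication)]
    return placement

# A 300-byte "file" with 128-byte blocks -> 3 blocks, each on 3 DataNodes.
layout = place_blocks(b"x" * 300, block_size=128,
                      nodes=["dn1", "dn2", "dn3", "dn4"])
```

With any single node down, every block still has two live replicas.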

Part 7: MapReduce – the core big data function

  1. Map explained
  2. Sort and shuffle explained
  3. Reduce explained
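The three phases above can be shown with the classic word-count example in plain Python. This mirrors the programming model only; a real Hadoop job distributes each phase across the cluster:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Sort and shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values -- here, sum the counts.
    return {word: sum(counts) for word, counts in groups.items()}

counts = reduce_phase(shuffle_phase(map_phase(["big data", "big deal"])))
# counts == {"big": 2, "data": 1, "deal": 1}
```

The framework owns the shuffle; the programmer writes only the map and reduce functions.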

Demonstration: Hadoop, HDFS, and MapReduce - Let’s try it!

Part 8: YARN

  1. How it fits
  2. How it works
  3. Resource Manager
  4. Application Master

Part 9: PIG

  1. What it is
  2. How it works
  3. Compatibilities
  4. Advantages
  5. Disadvantages

Demonstration: YARN and PIG - Let’s try it!

Part 10: Processing Data

  1. The Piggy Bank
  2. Loading and Illustrating the data
  3. Writing a Query
  4. Storing the Result
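In Pig these steps become a short LOAD / FILTER / GROUP / STORE script in Pig Latin. A plain-Python analogue of the same load–query–store flow, with made-up data and field names for illustration:

```python
import csv
import io

# Load the data (stands in for Pig's LOAD on a file in HDFS).
raw = "alice,/home,12\nbob,/cart,40\nalice,/cart,55\n"
rows = [{"user": u, "page": p, "ms": int(ms)}
        for u, p, ms in csv.reader(io.StringIO(raw))]

# Write a query: total time per user for slow hits (FILTER + GROUP + SUM).
totals = {}
for row in rows:
    if row["ms"] > 20:
        totals[row["user"]] = totals.get(row["user"], 0) + row["ms"]

# Store the result (stands in for Pig's STORE).
out = io.StringIO()
csv.writer(out).writerows(sorted(totals.items()))
# totals == {"bob": 40, "alice": 55}
```

Pig's value is that the equivalent script compiles down to MapReduce jobs automatically.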

Part 11: HIVE

  1. Data warehousing
  2. What it is, what it’s not
  3. Language compatibilities
  4. Advantages

Demonstration: HIVE - Let’s try it!

Example demo walkthrough: Contextual advertising

Part 12: OOZIE

  1. What it is
  2. Complex workflow environments
  3. Reducing time-to-market
  4. Frequency-based execution
  5. How it works with other big data tools

Example demo walkthrough: How to run a job

Part 13: FLUME – stream, collect, store and analyze high-volume log data

  1. How it works: Event, source, sink, channel, agent and client
  2. How it works illustrated
  3. How it works demonstrated
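The moving parts above fit together as a pipeline: a client hands events to a source, the source buffers them on a channel, and a sink drains the channel toward a store such as HDFS. A toy in-memory model of that flow (class and variable names are illustrative, not Flume's Java API):

```python
from collections import deque

class Agent:
    """Toy Flume-style agent: source -> channel -> sink."""

    def __init__(self):
        self.channel = deque()  # buffers events between source and sink
        self.store = []         # stands in for HDFS or another sink target

    def source(self, events):
        # Source: receives events from a client, appends them to the channel.
        for event in events:
            self.channel.append(event)

    def sink(self):
        # Sink: drains the channel, delivering events to the store.
        while self.channel:
            self.store.append(self.channel.popleft())

agent = Agent()
agent.source(["log line 1", "log line 2"])  # client delivers events
agent.sink()                                # events land in the store
# agent.store == ["log line 1", "log line 2"]
```

The channel is what decouples ingestion rate from delivery rate in real Flume.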

Part 14: SPARK

  1. Move over, 2012 big data tools: Apache Spark is the new power tool
  2. The new open source cluster framework
  3. When SPARK performs 100 times faster
  4. Performance comparison of Spark and Hadoop
  5. What else can it do?
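Two ideas behind Spark's speed advantage on iterative jobs are worth sketching: it keeps the working dataset in memory rather than re-reading disk each pass, and its transformations are lazy, running only when an action is called. A toy RDD-like pipeline illustrating the lazy-transformation idea (class and method names are illustrative, not Spark's API):

```python
class ToyRDD:
    """Toy sketch of a lazy, in-memory dataset with deferred transformations."""

    def __init__(self, data, ops=()):
        self.data = data       # the in-memory dataset
        self.ops = list(ops)   # recorded, not-yet-run transformations

    def map(self, fn):
        # Transformation: recorded lazily, nothing executes yet.
        return ToyRDD(self.data, self.ops + [("map", fn)])

    def filter(self, fn):
        return ToyRDD(self.data, self.ops + [("filter", fn)])

    def collect(self):
        # Action: run the whole recorded pipeline in memory.
        items = list(self.data)
        for kind, fn in self.ops:
            if kind == "map":
                items = [fn(x) for x in items]
            else:
                items = [x for x in items if fn(x)]
        return items

rdd = ToyRDD(range(10))
evens_squared = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
result = evens_squared.collect()
# result == [0, 4, 16, 36, 64]
```

Because transformations are only recorded, Spark can plan a whole chain before touching data; a classic MapReduce pipeline would write intermediate results to disk between steps instead.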

Part 15: HBASE

  1. What it is
  2. Common use cases

Part 16: Using External Tools

Who should attend

This class is for anyone involved in project, product, or IT work who is actively consuming or considering big data services. No specific technical experience or prerequisites are needed. 
•    Software Engineers and Team Leads
•    Project Managers
•    Business Analysts
•    DBAs and Data Engineering teams
•    Business Customers
•    System Analysts

