Big Data Boot Camp

2 Day Classroom  •  2 Day Live Online
2 Day Training at your location.
Adjustable to meet your needs.
Individual:
$1795.00
Group Rate:
$1595.00
GSA Discount:
$1310.35
When training eight or more people, onsite team training offers a more affordable and convenient option.
Overview

This big data training course provides a technical overview of Apache Hadoop for project managers, business managers, and data analysts. Students will understand the overall big data space and the technologies involved, and will get a detailed overview of Apache Hadoop. The course exposes students to real-world use cases that demonstrate the capabilities of Apache Hadoop. Students will also learn about YARN and HDFS, and how to develop applications and analyze Big Data stored in Apache Hadoop using Apache Pig and Apache Hive. Each topic provides hands-on experience.

The course is developed and taught by certified Hadoop consultants who have a passion for teaching and who deliver value to clients using Big Data and Hadoop technologies on a daily basis.

Learn about the big data ecosystem
Understand the benefits and ROI you can get from your existing data
Learn about Hadoop and how it is transforming the workspace
Learn about MapReduce and the Hadoop Distributed File System (HDFS)
Learn about using Hadoop to identify new business opportunities
Learn about using Hadoop to improve data management processes
Learn about using Hadoop to clarify results
Learn about using Hadoop to expand your data sources
Learn about scaling your current workflow to handle more users while lowering your overall cost
Learn about the various technologies that comprise the Hadoop ecosystem
Upcoming Dates and Locations
Guaranteed To Run
Sep 24, 2018 – Sep 25, 2018    8:30am – 4:30pm Live Online
Oct 29, 2018 – Oct 30, 2018    8:30am – 4:30pm Kansas City, Kansas

Centriq Training
8700 State Line Road
Suite 200
Leawood, KS 66206
United States

Oct 29, 2018 – Oct 30, 2018    9:30am – 5:30pm Live Online
Nov 26, 2018 – Nov 27, 2018    8:30am – 4:30pm Live Online
Nov 26, 2018 – Nov 27, 2018    8:30am – 4:30pm Washington, District of Columbia

Microtek-Washington, DC
1110 Vermont Avenue NW
Suite 700
Washington, DC 20005
United States

Dec 17, 2018 – Dec 18, 2018    8:30am – 4:30pm Live Online
Dec 17, 2018 – Dec 18, 2018    8:30am – 4:30pm Raleigh, North Carolina

ASPE Training
114 Edinburgh South Dr
Suite 200
Cary, NC 27511
United States

Course Outline

Part 1: Introduction to Big Data

  1. Big Data - beyond the obvious trends
    • Technologies involved
    • Business drivers
    • Implications for enterprise computing
  2. Exponentially increasing data
    • ERP Data
    • CRM Data
    • Web Data
    • Big Data
  3. Big data sources
    • Sensors
    • Social
    • Geospatial
    • Video
    • Machine to machine
    • Others
  4. Data warehousing, business intelligence, analytics, predictive statistics, data science

Part 2: Survey of Big Data technologies

  1. First generation systems
    • RDBMS systems
    • ETL systems
    • BI systems
  2. Second generation systems
    • Columnar databases with compression
    • MPP architectures
    • Data warehousing appliances
  3. Enterprise search
  4. Visualizing and understanding data with processing
    • Streaming processing
    • Statistical processing
    • Data visualization
  5. NoSQL databases
    • How do technologies like MongoDB, MarkLogic, and CouchDB fit in?
    • What is polyglot persistence?
  6. Apache Hadoop

Part 3: Introduction to Hadoop

  1. What is Hadoop? Who are the major vendors? 
  2. A dive into the Hadoop Ecosystem
  3. Benefits of using Hadoop
  4. How do you use Hadoop within your infrastructure?
    • Where do we use Hadoop?
    • Where do we look at options besides Hadoop?

Part 4: Introduction to MapReduce

  1. What is MapReduce?
  2. Why do you need MapReduce?
  3. Using MapReduce with Java and Ruby

Lab: How to use MapReduce in Hadoop?
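The idea behind MapReduce can be previewed without a cluster: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The sketch below is a conceptual illustration in plain Python, not the Hadoop Java API used in the lab:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the values for each key."""
    return {word: sum(counts) for word, counts in groups.items()}

# The classic word-count example, entirely in memory (illustrative only).
docs = ["big data big ideas", "big clusters"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])  # "big" appears three times
```

In Hadoop, the same three phases run in parallel across many machines, which is where the framework earns its keep.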

Part 5: Introduction to YARN

  1. What is YARN?
  2. What are the advantages of using YARN over classic MapReduce?
  3. Using YARN with Java and Ruby

Lab: How to use YARN within Hadoop?
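YARN's advantage over classic MapReduce is that a central ResourceManager hands out generic containers of memory and cores to any application, rather than fixed map/reduce slots. The toy allocator below sketches that idea; it is illustrative Python, not the YARN API:

```python
def allocate(node_free_mb, requests_mb):
    """Greedily place container requests on nodes with enough free memory.

    Returns a list of (request_index, node_index) placements.
    Toy model of YARN's ResourceManager; real scheduling is far richer
    (queues, locality, fairness, CPU as well as memory).
    """
    placements = []
    free = list(node_free_mb)
    for i, need in enumerate(requests_mb):
        for n, avail in enumerate(free):
            if avail >= need:
                free[n] -= need
                placements.append((i, n))
                break
    return placements

# Two nodes with 4 GB free each; three container requests of varying size.
print(allocate([4096, 4096], [2048, 3072, 1024]))
```

Because containers are generic, the same cluster can run MapReduce jobs alongside other frameworks, which is the practical payoff discussed in this part.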

Part 6: Introduction to HDFS

  1. What is HDFS?
  2. Why do you need a distributed file system?
  3. How is a distributed file system different from a traditional file system?
  4. What is unique about HDFS when compared to other file systems?
  5. HDFS and reliability
  6. Does it offer support for compression, checksums, and data integrity?

Lab: Overview of HDFS commands
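What makes HDFS different from a traditional file system is its block layout: each file is split into large fixed-size blocks (128 MB by default in recent Hadoop versions) and each block is replicated (three copies by default) across DataNodes for reliability. A rough sketch of that bookkeeping, in Python purely for illustration:

```python
BLOCK_SIZE = 128 * 1024 * 1024   # HDFS default block size: 128 MB
REPLICATION = 3                  # HDFS default replication factor

def block_plan(file_size_bytes):
    """Return (number of blocks, total bytes stored after replication).

    Purely illustrative arithmetic -- the real NameNode tracks block
    locations, replica health, and much more.
    """
    # Ceiling division: the last block may be smaller than BLOCK_SIZE.
    blocks = -(-file_size_bytes // BLOCK_SIZE)
    return blocks, file_size_bytes * REPLICATION

blocks, stored = block_plan(300 * 1024 * 1024)  # a 300 MB file
print(blocks)  # 3 blocks: 128 MB + 128 MB + 44 MB
```

This is why HDFS favors a modest number of large files over millions of tiny ones, a point that comes up repeatedly when planning data layout.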

Part 7: Data Transformation 

  1. Why do you need to transform data?
  2. What is Pig?
  3. Use cases for Pig

Lab: Hands-on activities with Pig

Part 8: Structured Data Analysis

  1. How do you handle structured data with Hadoop?
  2. What is Hive/HCatalog?
  3. Use cases for Hive/HCatalog

Lab: Hands-on activities with Hive/HCatalog

Part 9: Loading data into Hadoop

  1. How do you move your existing data into Hadoop?
  2. What is Sqoop?

Lab: Hands-on activities with Sqoop

Part 10: Automating workflows in Hadoop

  1. Benefits of Automation
  2. What is Oozie?
  3. Automatically running workflows
  4. Setting up workflow triggers

Lab: Demonstration of Oozie

Part 11: Exploring opportunities in your own organization

  1. Framing scenarios
  2. Understanding how to ask questions
  3. Tying possibilities to your own business drivers
  4. Common opportunities
  5. Real world examples

Hands-on Exercises

You'll experience "in-the-trenches" practice built around actual big data implementations. You'll learn to avoid pitfalls and do it right the first time. Your instructor will help you map the tools and techniques you learn in this class to your own business, so they can be applied in your own organization immediately after the class.

How to use MapReduce in Hadoop?

  1. How does it work from languages like Java?
  2. How does it work with languages like Ruby?

How to use YARN within Hadoop?

  1. How does it work from languages like Java?
  2. How does it work with languages like Ruby?

Overview of HDFS commands

  1. Standard file system commands
  2. Moving data to and from HDFS

Hands-on activities with Pig

  1. Joining Data
  2. Filtering Data
  3. Storing and Loading Data
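Pig Latin expresses exactly the relational operations listed above: LOAD/STORE, FILTER, and JOIN over collections of tuples. For readers without a cluster handy, the same operations can be sketched over Python tuples; this is a conceptual analogue, not Pig syntax:

```python
# Two small "relations", as Pig would LOAD them:
# users(user_id, name) and visits(user_id, page).
users = [(1, "ada"), (2, "grace"), (3, "alan")]
visits = [(1, "/home"), (1, "/docs"), (3, "/home")]

# FILTER: keep only visits to /home (Pig: FILTER visits BY page == '/home').
home_visits = [v for v in visits if v[1] == "/home"]

# JOIN users BY user_id, home_visits BY user_id (an inner join).
joined = [(name, page)
          for uid, name in users
          for vid, page in home_visits
          if uid == vid]

print(joined)  # [('ada', '/home'), ('alan', '/home')]
```

In the lab, the equivalent Pig script runs these operations as MapReduce jobs over data far too large to fit in one machine's memory.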

Hands-on activities with Hive/HCatalog

  1. Storing and Loading Data
  2. Select expressions
  3. Hive vs SQL
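HiveQL is close enough to SQL that the "Hive vs SQL" comparison above is mostly about execution (queries compiled to distributed jobs vs. a single database engine), not syntax. A select expression of the kind practiced in this lab can be tried against any SQL engine; here Python's built-in sqlite3 stands in for Hive purely for illustration:

```python
import sqlite3

# An in-memory SQL database standing in for a Hive table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (user TEXT, bytes INTEGER)")
conn.executemany("INSERT INTO logs VALUES (?, ?)",
                 [("ada", 100), ("ada", 50), ("grace", 75)])

# Essentially the same statement one would write in HiveQL:
# SELECT user, SUM(bytes) FROM logs GROUP BY user;
rows = conn.execute(
    "SELECT user, SUM(bytes) FROM logs GROUP BY user ORDER BY user"
).fetchall()
print(rows)  # [('ada', 150), ('grace', 75)]
```

The difference in the lab is scale: Hive runs the same aggregation over data spread across HDFS.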

Hands-on activities with Sqoop

  1. Running evaluation commands with Sqoop
  2. Importing data from relational databases
  3. Exporting data to relational databases
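Sqoop's import reads rows from a relational table over JDBC and writes them into Hadoop as delimited text (or other formats); export runs the other way. The core transformation, rows to delimited lines, can be sketched in Python with sqlite3 standing in for the source database; this is an illustration of the concept, not Sqoop itself:

```python
import sqlite3

# A relational table standing in for the source database (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "grace")])

# "Import": dump each row as a comma-delimited line, the shape Sqoop
# writes into HDFS files.
lines = [",".join(str(col) for col in row)
         for row in conn.execute("SELECT id, name FROM users ORDER BY id")]
print(lines)  # ['1,ada', '2,grace']
```

Sqoop parallelizes this across mappers, which is why it can move large tables in and out of Hadoop quickly.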

Demonstration of Oozie

  1. Creating a workflow
  2. Running a workflow automatically at regular intervals
  3. Running a workflow automatically when some events are triggered
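Oozie workflows are graphs of actions defined in XML, run on demand or fired by a coordinator on a schedule or event trigger. The control flow practiced above, run each action in order but only when the trigger fires, can be sketched as a tiny Python runner; this is a conceptual stand-in, since real Oozie submits Hadoop jobs:

```python
def run_workflow(actions, trigger=lambda: True):
    """Run a list of named actions in order, but only if the trigger fires.

    `actions` is a list of (name, callable) pairs; returns the names that ran.
    Conceptual stand-in for an Oozie coordinator plus workflow, not the real API.
    """
    if not trigger():          # e.g. "today's input data has arrived"
        return []
    ran = []
    for name, action in actions:
        action()
        ran.append(name)
    return ran

# A typical nightly pipeline shape (hypothetical step names).
steps = [("import", lambda: None),
         ("transform", lambda: None),
         ("export", lambda: None)]
print(run_workflow(steps))  # ['import', 'transform', 'export']
```
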
Who should attend

Anybody who is involved with databases or data analysis, or who is wondering how to deal with mountains of data (anywhere from gigabytes of user/log data to petabytes), will benefit from this program.

This course is perfect for:

  • Business Analysts
  • Software Engineers
  • Project Managers
  • Data Analysts
  • Business Customers
  • Team Leaders
  • System Analysts
Pre-Requisites

No prior knowledge of big data or Hadoop is required for this class. Some prior programming experience is a plus, but not necessary.
