Spark Administration and Monitoring Workshop
What You Will Learn (aka Goals)
The goal of the Spark Administration and Monitoring Workshop is to give you a practical, comprehensive, hands-on introduction to Apache Spark and the tools to manage, monitor, troubleshoot, and fine-tune Spark infrastructure.
NOTE: The workshop uses a tailor-made Docker image, but it is also possible to use a commercial Spark distribution such as Cloudera’s CDH, Hortonworks Data Platform (HDP), or MapR Sandbox.
The workshop uses an intense learn-by-doing approach in which the modules start with just enough knowledge to get you going and quickly move on to applying the concepts in assignments. There are a lot of practical exercises.
The workshop comes with many practical sessions that should meet (and possibly exceed) the expectations of administrators, operators, devops, and other technical roles such as system architects or technical leads. Software developers may find the Spark and Scala (Application Development) Workshop a better fit.
CAUTION: The workshop is very hands-on and practical, i.e. not for the faint-hearted. Seriously! After just a couple of days, your mind, eyes, and hands will all be trained to recognise the patterns of how to set up and operate Spark infrastructure in your Big Data projects.
CAUTION: I have already trained people who expressed concern that there were too many exercises. Your dear drill sergeant, Jacek.
Duration
5 days
Target Audience
- Aspiring Spark administrators, operators, devops
- Perhaps system architects or technical leads
Agenda
- Anatomy of Spark Data Processing (see the sketch after this section)
  - `SparkContext`
  - Transformations and Actions
  - Units of Physical Execution: Jobs, Stages, and Tasks
  - RDD Lineage
  - DAG View of RDDs
  - Logical Execution Plan
  - Spark Execution Engine
    - `DAGScheduler`
    - `TaskScheduler`
    - Scheduler Backends
    - Executor Backends
  - Partitions and Partitioning
  - Shuffle
  - Wide and Narrow Dependencies
  - Caching and Persistence
  - Checkpointing
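For reference, here is a minimal `spark-shell` sketch that touches most of the topics above: lazy transformations vs. actions, inspecting RDD lineage with `toDebugString`, caching, and checkpointing. The numbers and the checkpoint directory are made up for illustration.

```scala
// A minimal spark-shell sketch (sc is the SparkContext the shell provides).
val rdd = sc.parallelize(1 to 1000, numSlices = 8)   // 8 partitions

// Transformations are lazy -- nothing runs yet.
val doubled = rdd.map(_ * 2)
val evens   = doubled.filter(_ % 4 == 0)

// Inspect the RDD lineage (the DAG view you will see in the web UI).
println(evens.toDebugString)

// Cache the RDD so later actions reuse it instead of recomputing it.
evens.cache()

// Checkpointing truncates the lineage; it needs a checkpoint directory.
sc.setCheckpointDir("/tmp/spark-checkpoints")  // hypothetical path
evens.checkpoint()

// An action triggers a job, which is split into stages and tasks.
println(evens.count())
```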
- Elements of Spark Runtime Environment
  - The Driver
  - Executors
  - Deploy Modes
  - Spark Clusters
  - RPC Environment (`RpcEnv`)
  - `BlockManager`s
- Spark Tools
  - `spark-shell`
  - `spark-submit`
  - web UI
  - `spark-class`
- Monitoring Spark Applications using web UI (see the example after this section)
  - The Different Tabs in web UI
  - Exercise: Monitoring using web UI
    - Executing Spark Jobs to Enable Different Statistics and Statuses
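A quick way to have something to monitor: start a local `spark-shell`, run a small job with a shuffle, and open the application's web UI. Port 4040 is the default for the first application on a host; `SPARK_HOME` is assumed to point at your Spark installation.

```bash
# Start a local Spark shell with 4 threads.
$SPARK_HOME/bin/spark-shell --master local[4]

# Inside spark-shell: trigger a job with a shuffle to populate the
# Jobs, Stages, and Storage tabs.
#   sc.parallelize(1 to 10000, 8).map(n => (n % 10, n)).groupByKey().count()

# Then open the web UI of the running application (default port):
#   http://localhost:4040
```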
- Spark on Hadoop YARN cluster
  - Exercise: Setting up Hadoop YARN
    - Accessing Resource Manager’s web UI
  - Exercise: Submitting Applications using `spark-submit --master yarn` (see the example after this section)
    - `yarn-site.xml`
    - `yarn application -list`
    - `yarn application -status`
    - `yarn application -kill`
  - Runtime Properties - Meaning and Application
  - Troubleshooting
  - `YarnShuffleService` – `ExternalShuffleService` on YARN
  - Multi-tenant YARN Cluster Setup and Spark
    - Overview of YARN Schedulers (e.g. Capacity Scheduler)
    - `spark-submit --queue`
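As a reference for the exercise, a typical submission to YARN might look as follows. The example class ships with Spark; the jar path, Spark/Scala versions, and queue name are placeholders to adjust for your installation.

```bash
# Submit the bundled SparkPi example to YARN in cluster deploy mode.
$SPARK_HOME/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --queue default \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.0.jar 1000

# Inspect and manage the application from the YARN side:
yarn application -list
yarn application -status <applicationId>
yarn application -kill <applicationId>
```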
- Clustering Spark using Spark Standalone
  - Exercise: Setting up Spark Standalone
    - Using standalone Master’s web UI
  - Exercise: Submitting Applications using `spark-submit --master spark://...` with `--deploy-mode` `client` and `cluster` (see the example after this section)
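For the standalone exercise, the two deploy modes differ only in where the driver runs. `master-host` is a placeholder (7077 is the standalone Master’s default port), and the jar path and versions are illustrative.

```bash
# Client deploy mode: the driver runs on the machine you submit from.
$SPARK_HOME/bin/spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.0.jar 1000

# Cluster deploy mode: the driver runs on one of the workers.
$SPARK_HOME/bin/spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.0.jar 1000
```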
- Tuning Spark Infrastructure
  - Exercise: Configuring CPU and Memory for Master and Executors
  - Exercise: Observing Shuffling using `groupByKey`-like operations (see the sketch after this section)
  - Scheduling Modes: FIFO and FAIR
  - Exercise: Configuring Pools in FAIR Scheduling Mode
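A sketch for the shuffle exercise, to be pasted into `spark-shell`: the two jobs compute the same result, but the Stages tab of the web UI should show markedly different shuffle read/write sizes. The data set is made up.

```scala
// One million (key, 1) pairs across 8 partitions, 100 distinct keys.
val pairs = sc.parallelize(1 to 1000000, 8).map(n => (n % 100, 1))

// groupByKey ships every value across the network before aggregating.
pairs.groupByKey().mapValues(_.sum).count()

// reduceByKey combines values on the map side first, shuffling far less.
pairs.reduceByKey(_ + _).count()
```

For the FAIR scheduling exercise, the relevant properties are `spark.scheduler.mode=FAIR` and `spark.scheduler.allocation.file` (pointing at a pools XML file).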
- Monitoring Spark using `SparkListener`s
  - `LiveListenerBus`
  - `StatsReportListener`
  - Event Logging using `EventLoggingListener` and History Server
  - Exercise: Event Logging using `EventLoggingListener`
  - Exercise: Developing Custom `SparkListener` (see the sketch after this section)
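A minimal sketch of a custom listener for the last exercise, assuming Spark 2.x where `SparkListener` is an abstract class with no-op defaults; the class name and output format are made up for illustration.

```scala
import org.apache.spark.scheduler._

// Counts completed tasks and reports finished jobs.
class TaskCountingListener extends SparkListener {
  private var completedTasks = 0L

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    completedTasks += 1
  }

  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
    println(s"Job ${jobEnd.jobId} done; $completedTasks tasks completed so far")
  }
}

// Register programmatically (e.g. in spark-shell):
//   sc.addSparkListener(new TaskCountingListener)
// or declaratively at submit time:
//   spark-submit --conf spark.extraListeners=my.pkg.TaskCountingListener ...
```

For the event-logging exercise, the related properties are `spark.eventLog.enabled` and `spark.eventLog.dir`; the History Server reads the same directory via `spark.history.fs.logDirectory`.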
- Dynamic Allocation (of Executors)
- External Shuffle Service (see the example below)
- Spark Metrics System
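The two features above work together: dynamic allocation needs the external shuffle service so that shuffle files outlive the executors that wrote them. The property names below are the standard ones; the values, jar path, and versions are illustrative.

```bash
$SPARK_HOME/bin/spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  --conf spark.shuffle.service.enabled=true \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.0.jar 1000
```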
- (optional) Using Spark Streaming and Kafka
- (optional) Clustering Spark using Apache Mesos
  - Exercise: Setting up Mesos cluster
  - Exercise: Submitting Applications using `spark-submit`
Requirements
- Training classes are best for groups up to 8 participants.
- Participants have decent computers, preferably with a Linux or Mac OS operating system.
- Participants have to download the following packages to their computers before the class:
- Participants are requested to `git clone` this project and follow the README.