Tutorial: MapReduce by Pietro Michiardi

Tutorial: MapReduce
Theory and Practice of Data-intensive Applications
Pietro Michiardi
Eurecom
Pietro Michiardi (Eurecom) Tutorial: MapReduce 1 / 131
Introduction
Introduction
Pietro Michiardi (Eurecom) Tutorial: MapReduce 2 / 131
Introduction
What is MapReduce
A programming model:
I Inspired by functional programming
I Allows expressing distributed computations on massive amounts of
data
An execution framework:
I Designed for large-scale data processing
I Designed to run on clusters of commodity hardware
Pietro Michiardi (Eurecom) Tutorial: MapReduce 3 / 131
Introduction
What is this Tutorial About
Design of scalable algorithms with MapReduce
I Applied algorithm design and case studies
In-depth description of MapReduce
I Principles of functional programming
I The execution framework
In-depth description of Hadoop
I Architecture internals
I Software components
I Cluster deployments
Pietro Michiardi (Eurecom) Tutorial: MapReduce 4 / 131
Introduction Motivations
Motivations
Pietro Michiardi (Eurecom) Tutorial: MapReduce 5 / 131
Introduction Motivations
Big Data
Vast repositories of data
I Web-scale processing
I Behavioral data
I Physics
I Astronomy
I Finance
“The fourth paradigm” of science [6]
I Data-intensive processing is fast becoming a necessity
I Design algorithms capable of scaling to real-world datasets
It’s not the algorithm, it’s the data! [2]
I More data leads to better accuracy
I With more data, accuracy of different algorithms converges
Pietro Michiardi (Eurecom) Tutorial: MapReduce 6 / 131
Introduction Big Ideas
Key Ideas Behind MapReduce
Pietro Michiardi (Eurecom) Tutorial: MapReduce 7 / 131
Introduction Big Ideas
Scale out, not up!
For data-intensive workloads, a large number of commodity
servers is preferred over a small number of high-end servers
I Cost of super-computers is not linear
I But datacenter efficiency is a difficult problem to solve [3, 5]
Some numbers (∼ 2010):
I Data processed by Google every day: 20 PB
I Data processed by Facebook every day: 15 TB
Pietro Michiardi (Eurecom) Tutorial: MapReduce 8 / 131
Introduction Big Ideas
Implications of Scaling Out
Processing data is quick, I/O is very slow
I 1 HDD = 75 MB/sec
I 1000 HDDs = 75 GB/sec
Sharing vs. Shared nothing:
I Sharing: manage a common/global state
I Shared nothing: independent entities, no common state
Sharing is difficult:
I Synchronization, deadlocks
I Finite bandwidth to access data from SAN
I Temporal dependencies are complicated (restarts)
Pietro Michiardi (Eurecom) Tutorial: MapReduce 9 / 131
Introduction Big Ideas
Failures are the norm, not the exception
LANL data [DSN 2006]
I Data for 5000 machines, over 9 years
I Hardware: 60%, Software: 20%, Network: 5%
DRAM error analysis [Sigmetrics 2009]
I Data for 2.5 years
I 8% of DIMMs affected by errors
Disk drive failure analysis [FAST 2007]
I Utilization and temperature major causes of failures
Amazon Web Service failure [April 2011]
I Cascading effect
Pietro Michiardi (Eurecom) Tutorial: MapReduce 10 / 131
Introduction Big Ideas
Implications of Failures
Failures are part of everyday life
I Mostly due to the scale and shared environment
Sources of Failures
I Hardware / Software
I Electrical, Cooling, ...
I Unavailability of a resource due to overload
Failure Types
I Permanent
I Transient
Pietro Michiardi (Eurecom) Tutorial: MapReduce 11 / 131
Introduction Big Ideas
Move Processing to the Data
Drastic departure from high-performance computing model
I HPC: distinction between processing nodes and storage nodes
I HPC: CPU intensive tasks
Data intensive workloads
I Generally not processor demanding
I The network becomes the bottleneck
I MapReduce assumes processing and storage nodes to be
colocated: Data Locality
Distributed filesystems are necessary
Pietro Michiardi (Eurecom) Tutorial: MapReduce 12 / 131
Introduction Big Ideas
Process Data Sequentially and Avoid Random Access
Data intensive workloads
I Relevant datasets are too large to fit in memory
I Such data resides on disks
Disk performance is a bottleneck
I Seek times for random disk access are the problem
F Example: a 1 TB database with 10^10 100-byte records. Updating 1% of
the records takes 1 month; reading and rewriting the whole database
would take 1 day¹
I Organize computation for sequential reads
¹ From a post by Ted Dunning on the Hadoop mailing list
Pietro Michiardi (Eurecom) Tutorial: MapReduce 13 / 131
Introduction Big Ideas
Implications of Data Access Patterns
MapReduce is designed for
I batch processing
I involving (mostly) full scans of the dataset
Typically, data is collected “elsewhere” and copied to the
distributed filesystem
Data-intensive applications
I Read and process the whole Internet dataset from a crawler
I Read and process the whole Social Graph
Pietro Michiardi (Eurecom) Tutorial: MapReduce 14 / 131
Introduction Big Ideas
Hide System-level Details
Separate the what from the how
I MapReduce abstracts away the “distributed” part of the system
I Such details are handled by the framework
In-depth knowledge of the framework is key
I Custom data reader/writer
I Custom data partitioning
I Memory utilization
Auxiliary components
I Hadoop Pig
I Hadoop Hive
I Cascading/Scalding
I ... and many many more!
Pietro Michiardi (Eurecom) Tutorial: MapReduce 15 / 131
Introduction Big Ideas
Seamless Scalability
We can define scalability along two dimensions
I In terms of data: given twice the amount of data, the same
algorithm should take no more than twice as long to run
I In terms of resources: given a cluster twice the size, the same
algorithm should take no more than half as long to run
Embarrassingly parallel problems
I Simple definition: independent (shared nothing) computations on
fragments of the dataset
I It’s not easy to decide whether a problem is embarrassingly parallel
or not
MapReduce is a first attempt, not the final answer
Pietro Michiardi (Eurecom) Tutorial: MapReduce 16 / 131
Introduction Big Ideas
Part One
Pietro Michiardi (Eurecom) Tutorial: MapReduce 17 / 131
MapReduce Framework
The MapReduce Framework
Pietro Michiardi (Eurecom) Tutorial: MapReduce 18 / 131
MapReduce Framework Preliminaries
Preliminaries
Pietro Michiardi (Eurecom) Tutorial: MapReduce 19 / 131
MapReduce Framework Preliminaries
Divide and Conquer
A feasible approach to tackling large-data problems
I Partition a large problem into smaller sub-problems
I Independent sub-problems executed in parallel
I Combine intermediate results from each individual worker
The workers can be:
I Threads in a processor core
I Cores in a multi-core processor
I Multiple processors in a machine
I Many machines in a cluster
Implementation details of divide and conquer are complex
Pietro Michiardi (Eurecom) Tutorial: MapReduce 20 / 131
MapReduce Framework Preliminaries
Divide and Conquer: How to?
Decompose the original problem in smaller, parallel tasks
Schedule tasks on workers distributed in a cluster
I Data locality
I Resource availability
Ensure workers get the data they need
Coordinate synchronization among workers
Share partial results
Handle failures
Pietro Michiardi (Eurecom) Tutorial: MapReduce 21 / 131
MapReduce Framework Preliminaries
The MapReduce Approach
Shared memory approach (OpenMP, MPI, ...)
I Developer needs to take care of (almost) everything
I Synchronization, Concurrency
I Resource allocation
MapReduce: a shared nothing approach
I Most of the above issues are taken care of
I Problem decomposition and sharing partial results need particular
attention
I Optimizations (memory and network consumption) are tricky
Pietro Michiardi (Eurecom) Tutorial: MapReduce 22 / 131
MapReduce Framework Programming Model
The MapReduce Programming model
Pietro Michiardi (Eurecom) Tutorial: MapReduce 23 / 131
MapReduce Framework Programming Model
Functional Programming Roots
Key feature: higher order functions
I Functions that accept other functions as arguments
I Map and Fold
Figure: Illustration of map and fold.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 24 / 131
MapReduce Framework Programming Model
Functional Programming Roots
map phase:
I Given a list, map takes as an argument a function f (that takes a
single argument) and applies it to all elements in the list
fold phase:
I Given a list, fold takes as arguments a function g (that takes two
arguments) and an initial value
I g is first applied to the initial value and the first item in the list
I The result is stored in an intermediate variable, which is used as an
input together with the next item to a second application of g
I The process is repeated until all items in the list have been
consumed
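To make the two higher-order functions concrete, here is a minimal Java sketch (plain Java streams, not Hadoop): map applies a function f to every element independently, while reduce plays the role of fold, threading an accumulator through the list.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapFoldDemo {
    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);

        // map: apply f (here, squaring) to every element, each application in isolation
        List<Integer> squares = data.stream()
                                    .map(x -> x * x)
                                    .collect(Collectors.toList());

        // fold: combine elements with g (here, addition), starting from an initial value (0)
        int sum = squares.stream().reduce(0, (acc, x) -> acc + x);

        System.out.println(squares + " -> " + sum);  // prints [1, 4, 9, 16, 25] -> 55
    }
}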
Pietro Michiardi (Eurecom) Tutorial: MapReduce 25 / 131
MapReduce Framework Programming Model
Functional Programming Roots
We can view map as a transformation over a dataset
I This transformation is specified by the function f
I Each functional application happens in isolation
I The application of f to each element of a dataset can be
parallelized in a straightforward manner
We can view fold as an aggregation operation
I The aggregation is defined by the function g
I Data locality: elements in the list must be “brought together”
I If we can group elements of the list, the fold phase can also proceed
in parallel
Associative and commutative operations
I Allow performance gains through local aggregation and reordering
Pietro Michiardi (Eurecom) Tutorial: MapReduce 26 / 131
MapReduce Framework Programming Model
Functional Programming and MapReduce
Equivalence of MapReduce and Functional Programming:
I The map of MapReduce corresponds to the map operation
I The reduce of MapReduce corresponds to the fold operation
The framework coordinates the map and reduce phases:
I Grouping intermediate results happens in parallel
In practice:
I User-specified computation is applied (in parallel) to all input
records of a dataset
I Intermediate results are aggregated by another user-specified
computation
Pietro Michiardi (Eurecom) Tutorial: MapReduce 27 / 131
MapReduce Framework Programming Model
What can we do with MapReduce?
MapReduce “implements” a subset of functional
programming
I The programming model appears quite limited
There are several important problems that can be adapted to
MapReduce
I In this tutorial we will focus on illustrative cases
I We will see in detail “design patterns”
F How to transform a problem and its input
F How to save memory and bandwidth in the system
Pietro Michiardi (Eurecom) Tutorial: MapReduce 28 / 131
MapReduce Framework The Framework
Mappers and Reducers
Pietro Michiardi (Eurecom) Tutorial: MapReduce 29 / 131
MapReduce Framework The Framework
Data Structures
Key-value pairs are the basic data structure in MapReduce
I Keys and values can be: integers, floats, strings, raw bytes
I They can also be arbitrary data structures
The design of MapReduce algorithms involves:
I Imposing the key-value structure on arbitrary datasets
F E.g.: for a collection of Web pages, input keys may be URLs and
values may be the HTML content
I In some algorithms, input keys are not used, in others they uniquely
identify a record
I Keys can be combined in complex ways to design various
algorithms
Pietro Michiardi (Eurecom) Tutorial: MapReduce 30 / 131
MapReduce Framework The Framework
A MapReduce job
The programmer defines a mapper and a reducer as follows²:
I map: (k1, v1) → [(k2, v2)]
I reduce: (k2,[v2]) → [(k3, v3)]
A MapReduce job consists of:
I A dataset stored on the underlying distributed filesystem, which is
split in a number of files across machines
I The mapper is applied to every input key-value pair to generate
intermediate key-value pairs
I The reducer is applied to all values associated with the same
intermediate key to generate output key-value pairs
² We use the convention [· · · ] to denote a list.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 31 / 131
MapReduce Framework The Framework
Where the magic happens
Implicit between the map and reduce phases is a distributed
“group by” operation on intermediate keys
I Intermediate data arrive at each reducer in order, sorted by the key
I No ordering is guaranteed across reducers
Output keys from reducers are written back to the distributed
filesystem
I The output may consist of r distinct files, where r is the number of
reducers
I Such output may be the input to a subsequent MapReduce phase
Intermediate keys are transient:
I They are not stored on the distributed filesystem
I They are “spilled” to the local disk of each machine in the cluster
Pietro Michiardi (Eurecom) Tutorial: MapReduce 32 / 131
MapReduce Framework The Framework
A Simplified view of MapReduce
Figure: Mappers are applied to all input key-value pairs, to generate an
arbitrary number of intermediate pairs. Reducers are applied to all
intermediate values associated with the same intermediate key. Between the
map and reduce phase lies a barrier that involves a large distributed sort and
group by.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 33 / 131
MapReduce Framework The Framework
“Hello World” in MapReduce
Figure: Pseudo-code for the word count algorithm.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 34 / 131
MapReduce Framework The Framework
“Hello World” in MapReduce
Input:
I Key-value pairs: (docid, doc) stored on the distributed filesystem
I docid: unique identifier of a document
I doc: is the text of the document itself
Mapper:
I Takes an input key-value pair and tokenizes the document
I Emits intermediate key-value pairs: the word is the key and the
integer 1 is the value
The framework:
I Guarantees all values associated with the same key (the word) are
brought to the same reducer
The reducer:
I Receives all values associated with a given key
I Sums the values and writes output key-value pairs: the key is the
word and the value is the number of occurrences
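A possible implementation of the algorithm above, sketched with the "old" org.apache.hadoop.mapred API used later in this tutorial; note that with TextInputFormat the input key is a byte offset rather than a docid, which the mapper simply ignores.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Mapper: emits (word, 1) for every token in the document
public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, ONE);
        }
    }
}

// Reducer: sums the counts of each word
class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}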
Pietro Michiardi (Eurecom) Tutorial: MapReduce 35 / 131
MapReduce Framework The Framework
Implementation and Execution Details
The partitioner is in charge of assigning intermediate keys
(words) to reducers
I Note that the partitioner can be customized
How many map and reduce tasks?
I The number of map tasks is essentially determined by the framework
(from the input splits)
I The designer/developer chooses the number of reduce tasks
In this tutorial we will focus on Hadoop
I Other implementations of the framework exist: Google, Disco, ...
Pietro Michiardi (Eurecom) Tutorial: MapReduce 36 / 131
MapReduce Framework The Framework
Handle with care!
Using external resources
I E.g.: Other data stores than the distributed file system
I Concurrent access by many map/reduce tasks
Side effects
I Not allowed in functional programming
I E.g.: preserving state across multiple inputs
I State is kept internal
I/O and execution
I External side effects using distributed data stores (e.g. BigTable)
I Jobs may have no input (e.g. computing π) or no reducers, but never no mappers
Pietro Michiardi (Eurecom) Tutorial: MapReduce 37 / 131
MapReduce Framework The Framework
The Execution Framework
Pietro Michiardi (Eurecom) Tutorial: MapReduce 38 / 131
MapReduce Framework The Framework
The Execution Framework
MapReduce program, a.k.a. a job:
I Code of mappers and reducers
I Code for combiners and partitioners (optional)
I Configuration parameters
I All packaged together
A MapReduce job is submitted to the cluster
I The framework takes care of everything else
I Next, we will delve into the details
Pietro Michiardi (Eurecom) Tutorial: MapReduce 39 / 131
MapReduce Framework The Framework
Scheduling
Each Job is broken into tasks
I Map tasks work on fractions of the input dataset, as defined by the
underlying distributed filesystem
I Reduce tasks work on intermediate inputs and write back to the
distributed filesystem
The number of tasks may exceed the number of available
machines in a cluster
I The scheduler takes care of maintaining something similar to a
queue of pending tasks to be assigned to machines with available
resources
Jobs to be executed in a cluster require scheduling as well
I Different users may submit jobs
I Jobs may be of various complexity
I Fairness is generally a requirement
Pietro Michiardi (Eurecom) Tutorial: MapReduce 40 / 131
MapReduce Framework The Framework
Scheduling
The scheduler component can be customized
I As of today, for Hadoop, there are various schedulers
Dealing with stragglers
I Job execution time depends on the slowest map and reduce tasks
I Speculative execution can help with slow machines
F But data locality may be at stake
Dealing with skew in the distribution of values
I E.g.: temperature readings from sensors
I In this case, scheduling cannot help
I It is possible to work on customized partitioning and sampling to
solve such issues [Advanced Topic]
Pietro Michiardi (Eurecom) Tutorial: MapReduce 41 / 131
MapReduce Framework The Framework
Data/code co-location
How to feed data to the code
I In MapReduce, this issue is intertwined with scheduling and the
underlying distributed filesystem
How data locality is achieved
I The scheduler starts the task on the node that holds a particular
block of data required by the task
I If this is not possible, tasks are started elsewhere, and data will
cross the network
F Note that usually input data is replicated
I Distance rules [11] help dealing with bandwidth consumption
F Same rack scheduling
Pietro Michiardi (Eurecom) Tutorial: MapReduce 42 / 131
MapReduce Framework The Framework
Synchronization
In MapReduce, synchronization is achieved by the “shuffle and
sort” barrier
I Intermediate key-value pairs are grouped by key
I This requires a distributed sort involving all mappers, and taking
into account all reducers
I If you have m mappers and r reducers this phase involves up to
m × r copying operations
IMPORTANT: the reduce operation cannot start until all
mappers have finished
I This is different from functional programming that allows “lazy”
aggregation
I In practice, a common optimization is for reducers to pull data from
mappers as soon as they finish
Pietro Michiardi (Eurecom) Tutorial: MapReduce 43 / 131
MapReduce Framework The Framework
Errors and faults
Using quite simple mechanisms, the MapReduce framework deals
with:
Hardware failures
I Individual machines: disks, RAM
I Networking equipment
I Power / cooling
Software failures
I Exceptions, bugs
Corrupt and/or invalid input data
Pietro Michiardi (Eurecom) Tutorial: MapReduce 44 / 131
MapReduce Framework The Framework
Partitioners and Combiners
Pietro Michiardi (Eurecom) Tutorial: MapReduce 45 / 131
MapReduce Framework The Framework
Partitioners
Partitioners are responsible for:
I Dividing up the intermediate key space
I Assigning intermediate key-value pairs to reducers
→ Specify the task to which an intermediate key-value pair must be
copied
Hash-based partitioner
I Computes the hash of the key modulo the number of reducers r
I This ensures a roughly even partitioning of the key space
F However, it ignores values: this can cause imbalance in the data
processed by each reducer
I When dealing with complex keys, even the base partitioner may
need customization
Pietro Michiardi (Eurecom) Tutorial: MapReduce 46 / 131
MapReduce Framework The Framework
Combiners
Combiners are an (optional) optimization:
I Allow local aggregation before the “shuffle and sort” phase
I Each combiner operates in isolation
Essentially, combiners are used to save bandwidth
I E.g.: word count program
Combiners can be implemented using local data-structures
I E.g., an associative array keeps intermediate computations and
aggregation thereof
I The map function emits its output only after all input records (or even all input
splits) have been processed
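One way to realize the pattern just described (often called in-mapper combining) is sketched below for word count; caching the OutputCollector so that close() can emit the aggregated counts is an assumption of this sketch, not something the framework mandates.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Word count with local aggregation: counts are kept in an associative array
// and emitted only once, when the mapper has processed its whole input split.
public class InMapperCombiningMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

    private final Map<String, Integer> counts = new HashMap<String, Integer>();
    private OutputCollector<Text, IntWritable> out;

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter) {
        out = output;  // remember the collector so that close() can emit
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            String token = itr.nextToken();
            Integer c = counts.get(token);
            counts.put(token, c == null ? 1 : c + 1);
        }
    }

    @Override
    public void close() throws IOException {
        if (out == null) return;  // empty split: nothing to emit
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            out.collect(new Text(e.getKey()), new IntWritable(e.getValue()));
        }
    }
}

When the reduce function is associative and commutative, as in word count, a simpler alternative is to register the reducer class itself as a combiner via JobConf.setCombinerClass().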
Pietro Michiardi (Eurecom) Tutorial: MapReduce 47 / 131
MapReduce Framework The Framework
Partitioners and Combiners, an Illustration
Figure: Complete view of MapReduce illustrating combiners and partitioners.
Note: in Hadoop, partitioners are executed before combiners.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 48 / 131
MapReduce Framework The Framework
The Distributed Filesystem
Pietro Michiardi (Eurecom) Tutorial: MapReduce 49 / 131
MapReduce Framework The Framework
Colocate data and computation!
As dataset sizes increase, more computing capacity is
required for processing
As compute capacity grows, the link between the compute
nodes and the storage nodes becomes a bottleneck
I One could eventually think of special-purpose interconnects for
high-performance networking
I This is often a costly solution as cost does not increase linearly with
performance
Key idea: abandon the separation between compute and
storage nodes
I This is exactly what happens in current implementations of the
MapReduce framework
I A distributed filesystem is not mandatory, but highly desirable
Pietro Michiardi (Eurecom) Tutorial: MapReduce 50 / 131
MapReduce Framework The Framework
Distributed filesystems
In this tutorial we will focus on HDFS, the Hadoop
implementation of the Google distributed filesystem (GFS)
Distributed filesystems are not new!
I HDFS builds upon previous results, tailored to the specific
requirements of MapReduce
I Write once, read many workloads
I Does not handle concurrent writes, but allows replication
I Optimized for throughput, not latency
Pietro Michiardi (Eurecom) Tutorial: MapReduce 51 / 131
MapReduce Framework The Framework
HDFS
Divide user data into blocks
I Blocks are big! [64, 128] MB
I Avoids problems related to metadata management
Replicate blocks across the local disks of nodes in the
cluster
I Replication is handled by storage nodes themselves (similar to
chain replication) and follows distance rules
Master-slave architecture
I NameNode: master maintains the namespace (metadata, file to
block mapping, location of blocks) and maintains overall health of
the file system
I DataNode: slaves manage the data blocks
Pietro Michiardi (Eurecom) Tutorial: MapReduce 52 / 131
MapReduce Framework The Framework
HDFS, an Illustration
Figure: The architecture of HDFS.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 53 / 131
MapReduce Framework The Framework
HDFS I/O
A typical read from a client involves:
1 Contact the NameNode to determine where the actual data is stored
2 NameNode replies with block identifiers and locations (i.e., which
DataNode)
3 Contact the DataNode to fetch data
A typical write from a client involves:
1 Contact the NameNode to update the namespace and verify
permissions
2 NameNode allocates a new block on a suitable DataNode
3 The client directly streams to the selected DataNode
4 Currently, HDFS files are immutable
Data is never moved through the NameNode
I Hence, there is no bottleneck
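A minimal client-side sketch of the two operations, using the Java FileSystem API (the path below is just an example): the NameNode is contacted only for metadata, while the data itself is streamed directly to and from DataNodes.

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsIoExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml, hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);       // handle to the (distributed) filesystem

        // Write: the NameNode allocates blocks, the client streams to DataNodes
        Path out = new Path("/tmp/example.txt");    // hypothetical path
        FSDataOutputStream os = fs.create(out);
        os.writeBytes("hello HDFS\n");
        os.close();

        // Read: block locations come from the NameNode, data from the DataNodes
        FSDataInputStream is = fs.open(out);
        BufferedReader reader = new BufferedReader(new InputStreamReader(is));
        System.out.println(reader.readLine());
        reader.close();
    }
}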
Pietro Michiardi (Eurecom) Tutorial: MapReduce 54 / 131
MapReduce Framework The Framework
HDFS Replication
By default, HDFS stores 3 separate copies of each block
I This ensures reliability, availability and performance
Replication policy
I Spread replicas across different racks
I Robust against cluster node failures
I Robust against rack failures
Block replication benefits MapReduce
I Scheduling decisions can take replicas into account
I Exploit better data locality
Pietro Michiardi (Eurecom) Tutorial: MapReduce 55 / 131
MapReduce Framework The Framework
HDFS: more on operational assumptions
A small number of large files is preferred over a large number
of small files
I Metadata may explode
I Input splits for MapReduce are based on individual files
→ Mappers are launched for every file
F High startup costs
F Inefficient “shuffle and sort”
Workloads are batch oriented
Not full POSIX
Cooperative scenario
Pietro Michiardi (Eurecom) Tutorial: MapReduce 56 / 131
MapReduce Framework The Framework
Part Two
Pietro Michiardi (Eurecom) Tutorial: MapReduce 57 / 131
Hadoop MapReduce
Hadoop implementation of MapReduce
Pietro Michiardi (Eurecom) Tutorial: MapReduce 58 / 131
Hadoop MapReduce Preliminaries
Preliminaries
Pietro Michiardi (Eurecom) Tutorial: MapReduce 59 / 131
Hadoop MapReduce Preliminaries
From Theory to Practice
The story so far
I Concepts behind the MapReduce Framework
I Overview of the programming model
Hadoop implementation of MapReduce
I HDFS in details
I Hadoop I/O
I Hadoop MapReduce
F Implementation details
F Types and Formats
F Features in Hadoop
Hadoop Deployments
I The BigFoot platform (if time allows)
Pietro Michiardi (Eurecom) Tutorial: MapReduce 60 / 131
Hadoop MapReduce Preliminaries
Terminology
MapReduce:
I Job: an execution of a Mapper and Reducer across a data set
I Task: an execution of a Mapper or a Reducer on a slice of data
I Task Attempt: instance of an attempt to execute a task
I Example:
F Running “Word Count” across 20 files is one job
F 20 files to be mapped = 20 map tasks + some number of reduce tasks
F At least 20 attempts will be performed... more if a machine crashes
Task Attempts
I Task attempted at least once, possibly more
I Multiple crashes on the same input imply discarding that input
I Multiple attempts may occur in parallel (speculative execution)
I Task ID from TaskInProgress is not a unique identifier
Pietro Michiardi (Eurecom) Tutorial: MapReduce 61 / 131
Hadoop MapReduce HDFS in details
HDFS in details
Pietro Michiardi (Eurecom) Tutorial: MapReduce 62 / 131
Hadoop MapReduce HDFS in details
The Hadoop Distributed Filesystem
Large dataset(s) outgrowing the storage capacity of a single
physical machine
I Need to partition it across a number of separate machines
I Network-based system, with all its complications
I Tolerate failures of machines
Hadoop Distributed Filesystem[10, 11]
I Very large files
I Streaming data access
I Commodity hardware
Pietro Michiardi (Eurecom) Tutorial: MapReduce 63 / 131
Hadoop MapReduce HDFS in details
HDFS Blocks
(Big) files are broken into block-sized chunks
I NOTE: A file that is smaller than a single block does not occupy a
full block’s worth of underlying storage
Blocks are stored on independent machines
I Reliability and parallel access
Why is a block so large?
I Make transfer times larger than seek latency
I E.g.: Assume seek time is 10ms and the transfer rate is 100 MB/s,
if you want seek time to be 1% of transfer time, then the block size
should be 100MB
Pietro Michiardi (Eurecom) Tutorial: MapReduce 64 / 131
Hadoop MapReduce HDFS in details
NameNodes and DataNodes
NameNode
I Keeps metadata in RAM
I The metadata for each block occupies roughly 150 bytes of memory
I Without NameNode, the filesystem cannot be used
F Persistence of metadata: synchronous and atomic writes to NFS
Secondary NameNode
I Merges the namespace with the edit log
I A useful trick to recover from a failure of the NameNode is to use the
NFS copy of metadata and switch the secondary to primary
DataNode
I They store data and talk to clients
I They report periodically to the NameNode the list of blocks they hold
Pietro Michiardi (Eurecom) Tutorial: MapReduce 65 / 131
Hadoop MapReduce HDFS in details
Anatomy of a File Read
NameNode is only used to get block location
I Unresponsive DataNodes are discarded by clients
I Batch reading of blocks is allowed
“External” clients
I For each block, the NameNode returns a set of DataNodes holding
a copy thereof
I DataNodes are sorted according to their proximity to the client
“MapReduce” clients
I TaskTracker and DataNodes are colocated
I For each block, the NameNode usually³ returns the local DataNode
³ Exceptions exist due to stragglers.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 66 / 131
Hadoop MapReduce HDFS in details
Anatomy of a File Write
Details on replication
I Clients ask NameNode for a list of suitable DataNodes
I This list forms a pipeline: first DataNode stores a copy of a
block, then forwards it to the second, and so on
Replica Placement
I Tradeoff between reliability and bandwidth
I Default placement:
F First copy on the “same” node of the client, second replica is off-rack,
third replica is on the same rack as the second but on a different node
F Since Hadoop 0.21, replica placement can be customized
Pietro Michiardi (Eurecom) Tutorial: MapReduce 67 / 131
Hadoop MapReduce HDFS in details
Network Topology and HDFS
Pietro Michiardi (Eurecom) Tutorial: MapReduce 68 / 131
Hadoop MapReduce HDFS in details
HDFS Coherency Model
Read your writes is not guaranteed
I The namespace is updated
I Block contents may not be visible after a write is finished
I Application design (other than MapReduce) should use sync() to
force synchronization
I sync() involves some overhead: tradeoff between
robustness/consistency and throughput
Multiple writers (for the same block) are not supported
I Instead, different blocks can be written in parallel (using
MapReduce)
Pietro Michiardi (Eurecom) Tutorial: MapReduce 69 / 131
Hadoop MapReduce Hadoop I/O
Hadoop I/O
Pietro Michiardi (Eurecom) Tutorial: MapReduce 70 / 131
Hadoop MapReduce Hadoop I/O
I/O operations in Hadoop
Reading and writing data
I From/to HDFS
I From/to local disk drives
I Across machines (inter-process communication)
Customized tools for large amounts of data
I Hadoop does not use Java's native serialization classes
I Allows flexibility for dealing with custom data (e.g. binary)
What’s next
I Overview of what Hadoop offers
I For an in depth knowledge, use [11]
Pietro Michiardi (Eurecom) Tutorial: MapReduce 71 / 131
Hadoop MapReduce Hadoop I/O
Data Integrity
Every I/O operation on disks or the network may corrupt data
I Users expect data not to be corrupted during storage or processing
I Data integrity usually achieved with checksums
HDFS transparently checksums all data during I/O
I HDFS makes sure that storage overhead is roughly 1%
I DataNodes are in charge of checksumming
F With replication, the last replica performs the check
F Checksums are timestamped and logged for statistics on disks
I Checksumming is also run periodically in a separate thread
F Note that thanks to replication, error correction is possible
Pietro Michiardi (Eurecom) Tutorial: MapReduce 72 / 131
Hadoop MapReduce Hadoop I/O
Compression
Why using compression
I Reduce storage requirements
I Speed up data transfers (across the network or from disks)
Compression and Input Splits
I IMPORTANT: use compression that supports splitting (e.g. bzip2)
Splittable files, Example 1
I Consider an uncompressed file of 1GB
I HDFS will split it in 16 blocks, 64MB each, to be processed by
separate Mappers
Pietro Michiardi (Eurecom) Tutorial: MapReduce 73 / 131
Hadoop MapReduce Hadoop I/O
Compression
Splittable files, Example 2 (gzip)
I Consider a compressed file of 1GB
I HDFS will split it in 16 blocks of 64MB each
I Creating an InputSplit for each block will not work, since it is not
possible to read at an arbitrary point
What’s the problem?
I This forces MapReduce to treat the file as a single split
I Then, a single Mapper is fired by the framework
I For this Mapper, only 1/16-th is local, the rest comes from the
network
Which compression format to use?
I Use bzip2
I Otherwise, use SequenceFiles
I See Chapter 4 (page 84) [11]
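As a sketch of how these choices surface in the job configuration (method names from the classic JobConf API; gzip for intermediate data and bzip2 for splittable final output are assumptions of this example):

import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;

public class CompressionConfig {
    public static void configure(JobConf conf) {
        // Compress intermediate map output: saves bandwidth during shuffle and sort
        conf.setCompressMapOutput(true);
        conf.setMapOutputCompressorClass(GzipCodec.class);

        // Compress the final job output; bzip2 keeps the output splittable
        // if it is fed to a subsequent MapReduce job
        FileOutputFormat.setCompressOutput(conf, true);
        FileOutputFormat.setOutputCompressorClass(conf, BZip2Codec.class);
    }
}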
Pietro Michiardi (Eurecom) Tutorial: MapReduce 74 / 131
Hadoop MapReduce Hadoop I/O
Serialization
Transforms structured objects into a byte stream
I For transmission over the network: Hadoop uses RPC
I For persistent storage on disks
Hadoop uses its own serialization format, Writable
I Comparison of types is crucial (Shuffle and Sort phase): Hadoop
provides a custom RawComparator, which avoids deserialization
I Custom Writable for having full control on the binary
representation of data
I Also “external” frameworks are allowed: enter Avro
Fixed-length or variable-length encoding?
I Fixed-length: when the distribution of values is uniform
I Variable-length: when the distribution of values is not uniform
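As an illustration of a custom Writable, here is a hypothetical composite key holding two ints: the class controls its own binary representation (write/readFields) and its sort order (compareTo), which is what the shuffle and sort phase relies on.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// A hypothetical composite key: full control over the binary representation
// and over the sort order used during shuffle and sort.
public class IntPairWritable implements WritableComparable<IntPairWritable> {
    private int first;
    private int second;

    public IntPairWritable() { }  // Hadoop needs a nullary constructor for deserialization

    public IntPairWritable(int first, int second) {
        this.first = first;
        this.second = second;
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(first);
        out.writeInt(second);
    }

    public void readFields(DataInput in) throws IOException {
        first = in.readInt();
        second = in.readInt();
    }

    public int compareTo(IntPairWritable o) {
        if (first != o.first) {
            return first < o.first ? -1 : 1;
        }
        return second == o.second ? 0 : (second < o.second ? -1 : 1);
    }
}

For speed, one would typically also register a RawComparator that compares the serialized bytes directly, avoiding deserialization, as noted above.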
Pietro Michiardi (Eurecom) Tutorial: MapReduce 75 / 131
Hadoop MapReduce Hadoop I/O
Sequence Files
Specialized data structure to hold custom input data
I Using blobs of binaries is not efficient
SequenceFiles
I Provide a persistent data structure for binary key-value pairs
I Also work well as containers for smaller files, so that the framework
is happier (remember: few large files are better than lots of small
files)
I They come with the sync() method to introduce sync points to
help manage InputSplits for MapReduce
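A minimal sketch of writing a SequenceFile with the classic API (the output path is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/data.seq");  // hypothetical output path

        // A persistent, binary, key-value container; sync points make the
        // resulting file splittable for MapReduce.
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, path, IntWritable.class, Text.class);
        try {
            for (int i = 0; i < 100; i++) {
                writer.append(new IntWritable(i), new Text("record-" + i));
            }
        } finally {
            writer.close();
        }
    }
}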
Pietro Michiardi (Eurecom) Tutorial: MapReduce 76 / 131
Hadoop MapReduce Hadoop MapReduce in details
How Hadoop MapReduce Works
Pietro Michiardi (Eurecom) Tutorial: MapReduce 77 / 131
Hadoop MapReduce Hadoop MapReduce in details
Anatomy of a MapReduce Job Run
Pietro Michiardi (Eurecom) Tutorial: MapReduce 78 / 131
Hadoop MapReduce Hadoop MapReduce in details
Job Submission
JobClient class
I The runJob() method creates a new instance of a JobClient
I Then it calls the submitJob() on this class
Simple verifications on the Job
I Is there an output directory?
I Are there any input splits?
I Can I copy the JAR of the job to HDFS?
NOTE: the JAR of the job is replicated 10 times
Pietro Michiardi (Eurecom) Tutorial: MapReduce 79 / 131
Hadoop MapReduce Hadoop MapReduce in details
Job Initialization
The JobTracker is responsible for:
I Creating an object for the job
I Encapsulating its tasks
I Bookkeeping of the tasks’ status and progress
This is where the scheduling happens
I JobTracker performs scheduling by maintaining a queue
I Queueing disciplines are pluggable
Compute mappers and reducers
I JobTracker retrieves input splits (computed by JobClient)
I Determines the number of Mappers based on the number of input
splits
I Reads the configuration file to set the number of Reducers
Pietro Michiardi (Eurecom) Tutorial: MapReduce 80 / 131
Hadoop MapReduce Hadoop MapReduce in details
Task Assignment
Heartbeat-based mechanism
I TaskTrackers periodically send heartbeats to the JobTracker
I A heartbeat signals that the TaskTracker is alive
I The heartbeat also contains information on the availability of the
TaskTracker to execute a task
I The JobTracker piggybacks a task on the heartbeat reply if the
TaskTracker is available
Selecting a task
I JobTracker first needs to select a job (i.e. scheduling)
I TaskTrackers have a fixed number of slots for map and reduce
tasks
I JobTracker gives priority to map tasks (WHY?)
Data locality
I JobTracker is topology aware
F Useful for map tasks
F Unused for reduce tasks
Pietro Michiardi (Eurecom) Tutorial: MapReduce 81 / 131
Hadoop MapReduce Hadoop MapReduce in details
Task Execution
Task Assignement is done, now TaskTrackers can execute
I Copy the JAR from the HDFS
I Create a local working directory
I Create an instance of TaskRunner
TaskRunner launches a child JVM
I This prevents bugs from stalling the TaskTracker
I A new child JVM is created per InputSplit
F Can be overridden by specifying the JVM reuse option, which is very
useful for custom, in-memory combiners
Streaming and Pipes
I User-defined map and reduce methods need not be in Java
I Streaming and Pipes allow C++ or Python mappers and reducers
I We will cover Dumbo
Pietro Michiardi (Eurecom) Tutorial: MapReduce 82 / 131
Hadoop MapReduce Hadoop MapReduce in details
Handling Failures
In the real world, code is buggy, processes crash and machines fail
Task Failure
I Case 1: map or reduce task throws a runtime exception
F The child JVM reports back to the parent TaskTracker
F TaskTracker logs the error and marks the TaskAttempt as failed
F TaskTracker frees up a slot to run another task
I Case 2: Hanging tasks
F TaskTracker notices no progress updates (timeout = 10 minutes)
F TaskTracker kills the child JVM⁴
I JobTracker is notified of a failed task
F Avoids rescheduling the task on the same TaskTracker
F If a task fails 4 times, it is not re-scheduled⁵
F Default behavior: if any task fails 4 times, the job fails
⁴ With streaming, you need to take care of the orphaned process.
⁵ An exception is made for speculative execution
Pietro Michiardi (Eurecom) Tutorial: MapReduce 83 / 131
Hadoop MapReduce Hadoop MapReduce in details
Handling Failures
TaskTracker Failure
I Types: crash, running very slowly
I Heartbeats will not be sent to JobTracker
I JobTracker waits for a timeout (10 minutes), then it removes the
TaskTracker from its scheduling pool
I JobTracker needs to reschedule even completed tasks (WHY?)
I JobTracker needs to reschedule tasks in progress
I JobTracker may even blacklist a TaskTracker if too many tasks
failed
JobTracker Failure
I Currently, Hadoop has no mechanism for this kind of failure
I In future releases:
F Multiple JobTrackers
F Use ZooKeeper as a coordination mechanisms
Pietro Michiardi (Eurecom) Tutorial: MapReduce 84 / 131
Hadoop MapReduce Hadoop MapReduce in details
Scheduling
FIFO Scheduler (default behavior)
I Each job uses the whole cluster
I Not suitable for shared production-level cluster
F Long jobs monopolize the cluster
F Short jobs can hold back and have no guarantees on execution time
Fair Scheduler
I Every user gets a fair share of the cluster capacity over time
I Jobs are placed in to pools, one for each user
F Users that submit more jobs have no more resources than others
F Can guarantee minimum capacity per pool
I Supports preemption
I “Contrib” module, requires manual installation
Capacity Scheduler
I Hierarchical queues (mimicking an organization)
I FIFO scheduling in each queue
I Supports priority
Pietro Michiardi (Eurecom) Tutorial: MapReduce 85 / 131
Hadoop MapReduce Hadoop MapReduce in details
Shuffle and Sort
The MapReduce framework guarantees the input to every
reducer to be sorted by key
I The process by which the system sorts and transfers map outputs
to reducers is known as shuffle
Shuffle is the most important part of the framework, where
the “magic” happens
I Good understanding allows optimizing both the framework and the
execution time of MapReduce jobs
Subject to continuous refinements
Pietro Michiardi (Eurecom) Tutorial: MapReduce 86 / 131
Hadoop MapReduce Hadoop MapReduce in details
Shuffle and Sort: the Map Side
Pietro Michiardi (Eurecom) Tutorial: MapReduce 87 / 131
Hadoop MapReduce Hadoop MapReduce in details
Shuffle and Sort: the Map Side
The output of a map task is not simply written to disk
I In memory buffering
I Pre-sorting
Circular memory buffer
I 100 MB by default
I Threshold based mechanism to spill buffer content to disk
I Map output continues to be written to the buffer while spilling to disk
I If the buffer fills up while spilling, the map task is blocked
Disk spills
I Written round-robin to a local directory
I Output data is partitioned according to the reducers it will be
sent to
I Within each partition, data is sorted (in-memory)
I Optionally, if there is a combiner, it is executed just after the sort
phase
Pietro Michiardi (Eurecom) Tutorial: MapReduce 88 / 131
Hadoop MapReduce Hadoop MapReduce in details
Shuffle and Sort: the Map Side
More on spills and memory buffer
I Each time the buffer is full, a new spill is created
I Once the map task finishes, there are many spills
I Such spills are merged into a single partitioned and sorted output
file
The output file partitions are made available to reducers over
HTTP
I There are 40 (default) threads dedicated to serve the file partitions
to reducers
Pietro Michiardi (Eurecom) Tutorial: MapReduce 89 / 131
Hadoop MapReduce Hadoop MapReduce in details
Shuffle and Sort: the Map Side
Pietro Michiardi (Eurecom) Tutorial: MapReduce 90 / 131
Hadoop MapReduce Hadoop MapReduce in details
Shuffle and Sort: the Reduce Side
The map output file is located on the local disk of the TaskTracker
Another TaskTracker (in charge of a reduce task) requires
input from many other TaskTrackers (that finished their map
tasks)
I How do reducers know which tasktrackers to fetch map output
from?
F When a map task finishes it notifies the parent tasktracker
F The tasktracker notifies (with the heartbeat mechanism) the jobtracker
F A thread in the reducer polls periodically the jobtracker
F Tasktrackers do not delete local map output as soon as a reduce task
has fetched them (WHY?)
Copy phase: a pull approach
I There is a small number (5) of copy threads that can fetch map
outputs in parallel
Pietro Michiardi (Eurecom) Tutorial: MapReduce 91 / 131
Hadoop MapReduce Hadoop MapReduce in details
Shuffle and Sort: the Reduce Side
The map outputs are copied to the memory of the TaskTracker running
the reducer (if they fit)
I Otherwise they are copied to disk
Input consolidation
I A background thread merges all partial inputs into larger, sorted
files
I Note that if compression was used (for map outputs to save
bandwidth), decompression will take place in memory
Sorting the input
I When all map outputs have been copied, a merge phase starts
I All map outputs are merged, maintaining their sort order, in rounds
Pietro Michiardi (Eurecom) Tutorial: MapReduce 92 / 131
Hadoop MapReduce Hadoop MapReduce in details
Hadoop MapReduce Types and Formats
Pietro Michiardi (Eurecom) Tutorial: MapReduce 93 / 131
Hadoop MapReduce Hadoop MapReduce in details
MapReduce Types
Input / output to mappers and reducers
I map: (k1, v1) → [(k2, v2)]
I reduce: (k2,[v2]) → [(k3, v3)]
In Hadoop, a mapper is created as follows:
I void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
Types:
I K types implement WritableComparable
I V types implement Writable
Pietro Michiardi (Eurecom) Tutorial: MapReduce 94 / 131
Hadoop MapReduce Hadoop MapReduce in details
What is a Writable
Hadoop defines its own classes for strings (Text), integers
(IntWritable), etc.
All keys are instances of WritableComparable
I Why comparable?
All values are instances of Writable
Pietro Michiardi (Eurecom) Tutorial: MapReduce 95 / 131
Hadoop MapReduce Hadoop MapReduce in details
Getting Data to the Mapper
Pietro Michiardi (Eurecom) Tutorial: MapReduce 96 / 131
Hadoop MapReduce Hadoop MapReduce in details
Reading Data
Datasets are specified by InputFormats
I InputFormats define input data (e.g. a file, a directory)
I An InputFormat is a factory for RecordReader objects that extract
key-value records from the input source
InputFormats identify partitions of the data that form an
InputSplit
I InputSplit is a (reference to a) chunk of the input processed by
a single map
F Largest split is processed first
I Each split is divided into records, and the map processes each
record (a key-value pair) in turn
I Splits and records are logical, they are not physically bound to a file
Pietro Michiardi (Eurecom) Tutorial: MapReduce 97 / 131
Hadoop MapReduce Hadoop MapReduce in details
The relationship between InputSplit and HDFS blocks
Pietro Michiardi (Eurecom) Tutorial: MapReduce 98 / 131
Hadoop MapReduce Hadoop MapReduce in details
FileInputFormat and Friends
TextInputFormat
I Treats each newline-terminated line of a file as a value
KeyValueTextInputFormat
I Maps newline-terminated text lines of “key” SEPARATOR “value”
SequenceFileInputFormat
I Binary file of key-value pairs with some additional metadata
SequenceFileAsTextInputFormat
I Same as before, but maps (k.toString(), v.toString())
Pietro Michiardi (Eurecom) Tutorial: MapReduce 99 / 131
Hadoop MapReduce Hadoop MapReduce in details
Filtering File Inputs
FileInputFormat reads all files out of a specified directory
and sends them to the mapper
Delegates filtering this file list to a method subclasses may
override
I Example: create your own “xyzFileInputFormat” to read
*.xyz from a directory list
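Besides subclassing, the same effect can be obtained with a PathFilter registered on the job configuration; a small sketch (the *.xyz extension is just the example above):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// A hypothetical filter that only accepts *.xyz inputs
public class XyzPathFilter implements PathFilter {
    public boolean accept(Path path) {
        return path.getName().endsWith(".xyz");
    }
}

// Registered on the job configuration:
//   FileInputFormat.setInputPathFilter(conf, XyzPathFilter.class);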
Pietro Michiardi (Eurecom) Tutorial: MapReduce 100 / 131
Hadoop MapReduce Hadoop MapReduce in details
Record Readers
Each InputFormat provides its own RecordReader
implementation
LineRecordReader
I Reads a line from a text file
KeyValueRecordReader
I Used by KeyValueTextInputFormat
Pietro Michiardi (Eurecom) Tutorial: MapReduce 101 / 131
Hadoop MapReduce Hadoop MapReduce in details
Input Split Size
FileInputFormat divides large files into chunks
I Exact size controlled by mapred.min.split.size
Record readers receive file, offset, and length of chunk
I Example
On the top of the Crumpetty Tree
The Quangle Wangle sat,
But his face you could not see,
On account of his Beaver Hat.
(0, On the top of the Crumpetty Tree)
(33, The Quangle Wangle sat,)
(57, But his face you could not see,)
(89, On account of his Beaver Hat.)
Custom InputFormat implementations may override the split
size
Pietro Michiardi (Eurecom) Tutorial: MapReduce 102 / 131
Hadoop MapReduce Hadoop MapReduce in details
Sending Data to Reducers
Map function receives OutputCollector object
I OutputCollector.collect() receives key-value elements
Any (WritableComparable, Writable) can be used
By default, the mapper output type is assumed to be the same as
the reducer output type
Pietro Michiardi (Eurecom) Tutorial: MapReduce 103 / 131
Hadoop MapReduce Hadoop MapReduce in details
WritableComparator
Compares WritableComparable data
I Will call the WritableComparable.compareTo() method
I Can provide fast path for serialized data
Configured through:
JobConf.setOutputValueGroupingComparator()
Pietro Michiardi (Eurecom) Tutorial: MapReduce 104 / 131
Hadoop MapReduce Hadoop MapReduce in details
Partitioner
int getPartition(key, value, numPartitions)
I Outputs the partition number for a given key
I One partition == all values sent to a single reduce task
HashPartitioner is used by default
I Uses key.hashCode() to compute the partition number
JobConf used to set Partitioner implementation
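For illustration, a hypothetical custom partitioner using the classic Partitioner interface (the default HashPartitioner does the same thing, but on the hash of the whole key):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Routes keys by their first character instead of hashing the whole key
public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {

    public void configure(JobConf job) { }  // no per-job configuration needed

    public int getPartition(Text key, IntWritable value, int numPartitions) {
        String s = key.toString();
        int firstChar = s.isEmpty() ? 0 : s.charAt(0);
        return (firstChar & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be registered with conf.setPartitionerClass(FirstLetterPartitioner.class).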
Pietro Michiardi (Eurecom) Tutorial: MapReduce 105 / 131
Hadoop MapReduce Hadoop MapReduce in details
The Reducer
void reduce(K2 key, Iterator<V2> values,
OutputCollector<K3, V3> output, Reporter reporter)
Keys and values sent to one partition all go to the same
reduce task
Calls are sorted by key
I “Early” keys are reduced and output before “late” keys
Pietro Michiardi (Eurecom) Tutorial: MapReduce 106 / 131
Hadoop MapReduce Hadoop MapReduce in details
Writing the Output
Pietro Michiardi (Eurecom) Tutorial: MapReduce 107 / 131
Hadoop MapReduce Hadoop MapReduce in details
Writing the Output
Analogous to InputFormat
TextOutputFormat writes tab-separated “key value” lines to the
output file
SequenceFileOutputFormat uses a binary format to pack
key-value pairs
NullOutputFormat discards output
Pietro Michiardi (Eurecom) Tutorial: MapReduce 108 / 131
Hadoop MapReduce Hadoop MapReduce in details
Hadoop MapReduce Features
Pietro Michiardi (Eurecom) Tutorial: MapReduce 109 / 131
Hadoop MapReduce Hadoop MapReduce in details
Developing a MapReduce Application
Pietro Michiardi (Eurecom) Tutorial: MapReduce 110 / 131
Hadoop MapReduce Hadoop MapReduce in details
Preliminaries
Writing a program in MapReduce has a certain flow to it
I Start by writing the map and reduce functions
F Write unit tests to make sure they do what they should
I Write a driver program to run a job
F The job can be run from the IDE using a small subset of the data
F The debugger of the IDE can be used
I Eventually, you can unleash the job on a cluster
F Debugging a distributed program is challenging
Once the job is running properly
I Perform standard checks to improve performance
I Perform task profiling
Pietro Michiardi (Eurecom) Tutorial: MapReduce 111 / 131
Hadoop MapReduce Hadoop MapReduce in details
Configuration
Before writing a MapReduce program, we need to set up and
configure the development environment
I Components in Hadoop are configured with an ad hoc API
I Configuration class is a collection of properties and their values
I Resources can be combined into a configuration
Configuring the IDE
I In the IDE, create a new project and add all the JAR files from the
top level of the distribution and from the lib directory
I Plugins are also available for Eclipse
I Commercial IDEs also exist (Karmasphere)
Alternatives
I Switching between configurations (local, cluster)
I Using a packaged alternative (see the Cloudera documentation for
Ubuntu) is very effective
Pietro Michiardi (Eurecom) Tutorial: MapReduce 112 / 131
Hadoop MapReduce Hadoop MapReduce in details
Local Execution
Use the GenericOptionsParser, Tool and ToolRunner
I These helper classes make it easy to intervene on job
configurations
I These are additional configurations to the core configuration
The run() method
I Constructs and configures a JobConf object and launches the job
How many reducers?
I In a local execution, there is a single reducer (possibly none)
I Even if you set a number of reducers larger than one, the option will
be ignored
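A minimal driver following this pattern (it reuses the hypothetical WordCountMapper and WordCountReducer sketched earlier; ToolRunner invokes GenericOptionsParser under the hood, so -conf and -D options work out of the box):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        JobConf conf = new JobConf(getConf(), WordCountDriver.class);
        conf.setJobName("wordcount");

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setMapperClass(WordCountMapper.class);    // classes from the earlier sketch
        conf.setCombinerClass(WordCountReducer.class);
        conf.setReducerClass(WordCountReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new Configuration(), new WordCountDriver(), args);
        System.exit(exitCode);
    }
}

In a local (standalone) run the same driver works unchanged; only the configuration differs.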
Pietro Michiardi (Eurecom) Tutorial: MapReduce 113 / 131
Hadoop MapReduce Hadoop MapReduce in details
Cluster Execution
Packaging
Launching a Job
The WebUI
Hadoop Logs
Running Dependent Jobs, and Oozie
Pietro Michiardi (Eurecom) Tutorial: MapReduce 114 / 131
Hadoop MapReduce Hadoop Deployments
Hadoop Deployments
Pietro Michiardi (Eurecom) Tutorial: MapReduce 115 / 131
Hadoop MapReduce Hadoop Deployments
Setting up a Hadoop Cluster
Cluster deployment
I Private cluster
I Cloud-based cluster
I AWS Elastic MapReduce
Outlook:
I Cluster specification
F Hardware
F Network Topology
I Hadoop Configuration
F Memory considerations
Pietro Michiardi (Eurecom) Tutorial: MapReduce 116 / 131
Hadoop MapReduce Hadoop Deployments
Cluster Specification
Commodity Hardware
I Commodity ≠ Low-end
F False economy due to failure rate and maintenance costs
I Commodity ≠ High-end
F High-end machines perform better, which would imply a smaller
cluster
F A single machine failure would compromise a large fraction of the
cluster
A 2010 specification:
I 2 quad-cores
I 16-24 GB ECC RAM
I 4 × 1 TB SATA disks⁶
I Gigabit Ethernet
⁶ Why not use RAID instead of JBOD?
Pietro Michiardi (Eurecom) Tutorial: MapReduce 117 / 131
Hadoop MapReduce Hadoop Deployments
Cluster Specification
Example:
I Assume your data grows by 1 TB per week
I Assume you have three-way replication in HDFS
→ You need an additional 3 TB of raw storage per week
I Allow for some overhead (temporary files, logs)
→ This is a new machine per week
How to dimension a cluster?
I Obviously, you won’t buy a machine per week!
I The idea is to project the back-of-the-envelope calculation above
over the (roughly 2-year) lifetime of your system
→ You would need a 100-machine cluster
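A rough version of the arithmetic, under the assumptions above: 3 TB of raw storage per week × ~104 weeks ≈ 312 TB over two years; with 4 × 1 TB disks per machine, and some headroom for temporary files, logs and non-HDFS space, that lands on the order of 100 machines.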
Where should you put the various components?
I Small cluster: NameNode and JobTracker can be colocated
I Large cluster: requires more RAM at the NameNode
Pietro Michiardi (Eurecom) Tutorial: MapReduce 118 / 131
Hadoop MapReduce Hadoop Deployments
Cluster Specification
Should we use 64-bit or 32-bit machines?
I NameNode should run on a 64-bit machine: this avoids the 3GB
Java heap size limit on 32-bit machines
I Other components should run on 32-bit machines to avoid the
memory overhead of large pointers
What’s the role of Java?
I Recent releases (Java6) implement some optimization to eliminate
large pointer overhead
→ A cluster of 64-bit machines has no downside
Pietro Michiardi (Eurecom) Tutorial: MapReduce 119 / 131
Hadoop MapReduce Hadoop Deployments
Cluster Specification: Network Topology
Pietro Michiardi (Eurecom) Tutorial: MapReduce 120 / 131
Hadoop MapReduce Hadoop Deployments
Cluster Specification: Network Topology
Two-level network topology
I Switch redundancy is not shown in the figure
Typical configuration
I 30-40 servers per rack
I 1 Gbit switch per rack
I Core switch or router with 1 Gbit or better
Features
I Aggregate bandwidth between nodes on the same rack is much
larger than for nodes on different racks
I Rack awareness
F Hadoop should know the cluster topology
F Benefits both HDFS (data placement) and MapReduce (locality)
Pietro Michiardi (Eurecom) Tutorial: MapReduce 121 / 131
Hadoop MapReduce Hadoop Deployments
Hadoop Configuration
There are a handful of files for controlling the operation of an
Hadoop Cluster
I See next slide for a summary table
Managing the configuration across several machines
I All machines of an Hadoop cluster must be in sync!
I What happens if you dispatch an update and some machines are
down?
I What happens when you add (new) machines to your cluster?
I What if you need to patch MapReduce?
Common practice: use configuration management tools
I Chef, Puppet, ...
I Declarative language to specify configurations
I Allow also to install software
Pietro Michiardi (Eurecom) Tutorial: MapReduce 122 / 131
Hadoop MapReduce Hadoop Deployments
Hadoop Configuration
hadoop-env.sh (Bash script): environment variables that are used in the scripts to run Hadoop.
core-site.xml (Hadoop configuration XML): I/O settings that are common to HDFS and MapReduce.
hdfs-site.xml (Hadoop configuration XML): settings for the NameNode, the secondary NameNode, and the DataNodes.
mapred-site.xml (Hadoop configuration XML): settings for the JobTracker and the TaskTrackers.
masters (plain text): a list of machines that each run a secondary NameNode.
slaves (plain text): a list of machines that each run a DataNode and a TaskTracker.
Table: Hadoop Configuration Files
Pietro Michiardi (Eurecom) Tutorial: MapReduce 123 / 131
Hadoop MapReduce Hadoop Deployments
Hadoop Configuration: memory utilization
Hadoop uses a lot of memory
I Default values, for a typical cluster configuration
F DataNode: 1 GB
F TaskTracker: 1 GB
F Child JVM map task: 2 × 200MB
F Child JVM reduce task: 2 × 200MB
All the moving parts of Hadoop (HDFS and MapReduce) can
be individually configured
I This is true for cluster configuration but also for job specific
configurations
Hadoop is fast when using RAM
I Generally, MapReduce Jobs are not CPU-bound
I Avoid I/O on disk as much as you can
I Minimize network traffic
F Customize the partitioner
F Use compression (→ decompression is in RAM)
Pietro Michiardi (Eurecom) Tutorial: MapReduce 124 / 131
Hadoop MapReduce Hadoop Deployments
Elephants in the cloud!
Many organizations run Hadoop in private clusters
I Pros and cons
Cloud based Hadoop installations (Amazon biased)
I Use Cloudera + Whirr
I Use Elastic MapReduce
Pietro Michiardi (Eurecom) Tutorial: MapReduce 125 / 131
Hadoop MapReduce Hadoop Deployments
Hadoop on EC2
Launch instances of a cluster on demand, paying by the hour
I You pay for CPU time; in general, bandwidth is used from within the
datacenter, hence it’s free
Apache Whirr project
I Launch, terminate, modify a running cluster
I Requires AWS credentials
Example
I Launch a cluster test-hadoop-cluster, with one master node
(JobTracker and NameNode) and 5 worker nodes (DataNodes
and TaskTrackers)
→ hadoop-ec2 launch-cluster test-hadoop-cluster 5
I See project webpage and Chapter 9, page 290 [11]
Pietro Michiardi (Eurecom) Tutorial: MapReduce 126 / 131
Hadoop MapReduce Hadoop Deployments
AWS Elastic MapReduce
Hadoop as a service
I Amazon handles everything, which becomes transparent
I How this is done remains a mystery
Focus on What not How
I All you need to do is to package a MapReduce Job in a JAR and
upload it using a Web Interface
I Other Jobs are available: python, pig, hive, ...
I Test your jobs locally!!!
Pietro Michiardi (Eurecom) Tutorial: MapReduce 127 / 131
References
References I
[1] Adversarial information retrieval workshop.
[2] Michele Banko and Eric Brill.
Scaling to very very large corpora for natural language
disambiguation.
In Proc. of the 39th Annual Meeting of the Association for
Computational Linguistics (ACL), 2001.
[3] Luiz Andre Barroso and Urs Holzle.
The datacenter as a computer: An introduction to the design of
warehouse-scale machines.
Morgan & Claypool Publishers, 2009.
[4] Monica Bianchini, Marco Gori, and Franco Scarselli.
Inside pagerank.
In ACM Transactions on Internet Technology, 2005.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 128 / 131
References
References II
[5] James Hamilton.
Cooperative expendable micro-slice servers (cems): Low cost,
low power servers for internet-scale services.
In Proc. of the 4th Biennal Conference on Innovative Data
Systems Research (CIDR), 2009.
[6] Tony Hey, Stewart Tansley, and Kristin Tolle.
The fourth paradigm: Data-intensive scientific discovery.
Microsoft Research, 2009.
[7] Silvio Lattanzi, Benjamin Moseley, Siddharth Suri, and Sergei
Vassilvitskii.
Filtering: a method for solving graph problems in mapreduce.
In Proc. of SPAA, 2011.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 129 / 131
References
References III
[8] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos.
Graphs over time: Densification laws, shrinking diameters and
possible explanations.
In Proc. of SIGKDD, 2005.
[9] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry
Winograd.
The PageRank citation ranking: Bringing order to the web.
In Stanford Digital Library Working Paper, 1999.
[10] Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert
Chansler.
The hadoop distributed file system.
In Proc. of the 26th IEEE Symposium on Massive Storage
Systems and Technologies (MSST). IEEE, 2010.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 130 / 131
References
References IV
[11] Tom White.
Hadoop, The Definitive Guide.
O’Reilly, Yahoo, 2010.
Pietro Michiardi (Eurecom) Tutorial: MapReduce 131 / 131

