Hadoop interview questions you may be asked: how many can you answer?
1. How does Hadoop work?
2. What is the principle behind MapReduce?
3. What is the storage mechanism of HDFS?
4. Give a simple example to explain how a MapReduce job runs.
5. The interviewer gives you a problem and asks you to solve it with MapReduce.
For example: there are 10 folders, each containing 1,000,000 URLs; find the top 1,000,000 URLs.
6. What is the role of the Combiner in Hadoop?
Src: http://p-x1984.javaeye.com/blog/859843
Q1. What are the most common InputFormats defined in Hadoop?
The following three are the most common InputFormats defined in Hadoop:
- TextInputFormat
- KeyValueInputFormat
- SequenceFileInputFormat
Q2. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
TextInputFormat:
It reads lines of text files and provides the byte offset of each line as the key and the line itself as the value to the Mapper.
KeyValueInputFormat:
It reads text files and parses each line into a (key, value) pair: everything up to the first tab character is sent to the Mapper as the key, and the remainder of the line as the value.
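As a hedged illustration (the job class and input path below are made up), this is roughly how one of these formats is selected on a job with the old org.apache.hadoop.mapred API; note that in Hadoop's API the key/value variant is actually named KeyValueTextInputFormat.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.TextInputFormat;

public class InputFormatSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf(InputFormatSketch.class);

        // TextInputFormat (the default): key = byte offset of the line, value = the line itself.
        conf.setInputFormat(TextInputFormat.class);

        // KeyValueTextInputFormat: key = text up to the first tab, value = rest of the line.
        // conf.setInputFormat(KeyValueTextInputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path("/user/demo/input")); // hypothetical path
    }
}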
An InputSplit defines a slice of work, but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.
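A minimal sketch, assuming the old mapred API, of what that means in code: the InputFormat hands back a RecordReader for one InputSplit, and that reader is iterated to produce (key, value) pairs, exactly as the framework does before invoking the Mapper.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class RecordReaderSketch {
    // Illustration only: pulls every (key, value) pair out of a single split by hand.
    public static void readSplit(JobConf conf, InputSplit split) throws IOException {
        TextInputFormat format = new TextInputFormat();
        format.configure(conf);
        RecordReader<LongWritable, Text> reader =
                format.getRecordReader(split, conf, Reporter.NULL);
        LongWritable key = reader.createKey();
        Text value = reader.createValue();
        while (reader.next(key, value)) {
            // key = byte offset of the line, value = the line itself
        }
        reader.close();
    }
}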
Partitioning is the process of determining which Reducer instance will receive which intermediate keys and values. Each Mapper must determine, for all of its output (key, value) pairs, which Reducer will receive them. It is necessary that for any key, regardless of which Mapper instance generated it, the destination partition is the same.
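A minimal sketch of a custom Partitioner under the old mapred API; the class name and the partitioning function are illustrative, the only real requirement being that the result is a deterministic function of the key alone.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        if (numReduceTasks == 0 || key.getLength() == 0) {
            return 0;
        }
        // Depends on the key only, so every record with the same key lands in the
        // same partition no matter which Mapper instance emitted it.
        return (key.toString().charAt(0) & Integer.MAX_VALUE) % numReduceTasks;
    }

    public void configure(JobConf job) {
        // nothing to configure in this sketch
    }
}

A job would then select it with conf.setPartitionerClass(FirstCharPartitioner.class); when none is set, the default is the hash-based partitioner.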
After the first map tasks have completed, the nodes may still be performing several more map tasks each, but they also begin exchanging the intermediate outputs from the map tasks with the nodes where they are required by the Reducers. This process of moving map outputs to the Reducers is known as shuffling.
Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before being presented to the Reducer.
Combiner is a "mini-reduce" process which operates only on data
generated by a mapper. The Combiner will receive as input all data
emitted by the Mapper instances on a given node. The output from the
Combiner is then sent to the Reducers, instead of the output from the
Mappers.
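The classic word-count job shows the Combiner in context: because summing counts is associative, the Reducer class itself can be registered as the Combiner. A hedged, self-contained sketch using the old mapred API (input and output paths are taken from the command line):

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountWithCombiner {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, ONE);   // emit (word, 1) for every token
            }
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountWithCombiner.class);
        conf.setJobName("wordcount-with-combiner");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);   // the "mini-reduce" run on each mapper's node
        conf.setReducerClass(Reduce.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}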
The JobTracker submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTrackers. If a task fails, the JobTracker will restart it on some other TaskTracker; only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job.
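The per-task attempt limit mentioned above can be raised or lowered on the job configuration; a minimal sketch, assuming the old JobConf API:

import org.apache.hadoop.mapred.JobConf;

public class RetryConfigSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Allow each map and reduce task up to 6 attempts before the
        // job as a whole is declared failed (the default is 4 per task).
        conf.setMaxMapAttempts(6);
        conf.setMaxReduceAttempts(6);
    }
}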
Since Hadoop achieves parallelism by dividing the tasks across many nodes, it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?
Speculative execution: the JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.
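Speculative execution is enabled by default and can be toggled per job; a minimal sketch, assuming the old JobConf API:

import org.apache.hadoop.mapred.JobConf;

public class SpeculativeExecutionSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Launch speculative copies of slow map tasks, but not of reduce tasks.
        conf.setMapSpeculativeExecution(true);
        conf.setReduceSpeculativeExecution(false);
    }
}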
What is the characteristic of the Streaming API that makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, awk, etc.?
Hadoop Streaming allows arbitrary programs to be used for the Mapper and Reducer phases of a MapReduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.
Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.
The Distributed Cache is used rather than having every task read the file from HDFS because it is much faster: the framework copies the file to each task tracker once, at the start of the job. If a task tracker then runs 10 or 100 mappers or reducers, they all use that same local copy. On the other hand, if you write code in the MapReduce job to read the file from HDFS, every mapper will try to access it from HDFS, so a task tracker running 100 map tasks will read the file 100 times from HDFS. HDFS is also not very efficient when used this way.
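A minimal sketch of the old DistributedCache API with a made-up file path: the file is registered once when the job is configured, and each task then reads its local copy.

import java.net.URI;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class DistributedCacheSketch {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(DistributedCacheSketch.class);
        // Register the file once; the framework copies it to every slave node
        // before any task of this job runs there.
        DistributedCache.addCacheFile(new URI("/user/demo/lookup.txt"), conf); // hypothetical path
    }

    // Inside a Mapper's or Reducer's configure(JobConf), the local copies are retrieved like this:
    static Path[] localCopies(JobConf conf) throws java.io.IOException {
        return DistributedCache.getLocalCacheFiles(conf);
    }
}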
Q24. What mechanism does the Hadoop framework provide to synchronize changes made in the Distributed Cache during the runtime of the application?
This is a trick question. There is no such mechanism; the Distributed Cache is by design read-only during job execution.
Q25. Have you ever used Counters in Hadoop? Give us an example scenario.
Anybody who claims to have worked on a Hadoop project is expected to have used counters.
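A common scenario is counting malformed input records so they appear in the job's counter report; a hedged sketch under the old mapred API (the enum and the field-count check are made up):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CounterSketchMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, NullWritable> {

    // Each enum constant becomes one counter in the group "RecordQuality".
    enum RecordQuality { GOOD, MALFORMED }

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, NullWritable> output, Reporter reporter)
            throws IOException {
        if (value.toString().split("\t").length < 3) {
            // Skip malformed lines but make the problem visible in the job counters.
            reporter.incrCounter(RecordQuality.MALFORMED, 1);
            return;
        }
        reporter.incrCounter(RecordQuality.GOOD, 1);
        output.collect(value, NullWritable.get());
    }
}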
Q26. Is it possible to provide multiple inputs to Hadoop? If yes, how can you give multiple directories as input to a Hadoop job?
Yes. The input format class provides methods to add multiple directories as input to a Hadoop job.
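Two hedged ways of doing this with the old API (the paths are made up): pass several paths to FileInputFormat, or use MultipleInputs when different directories need different input formats or mappers.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.MultipleInputs;

public class MultipleInputSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf(MultipleInputSketch.class);

        // Simplest form: several directories, the same mapper for all of them.
        FileInputFormat.setInputPaths(conf,
                new Path("/data/2010"), new Path("/data/2011")); // hypothetical paths

        // Alternatively, a different format/mapper for a particular directory.
        MultipleInputs.addInputPath(conf, new Path("/data/extra"),
                TextInputFormat.class, IdentityMapper.class);
    }
}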
Q27. Is it possible to have a Hadoop job write its output to multiple directories? If yes, how?
Yes, by using the MultipleOutputs class.
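A minimal sketch of declaring a named output with the old-API MultipleOutputs class (the output name and classes are illustrative). With the newer mapreduce API, MultipleOutputs.write also accepts a base output path, which is what lets records land under different directories.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class MultipleOutputSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf(MultipleOutputSketch.class);
        // Declares an extra named output called "errors"; inside a Mapper or Reducer,
        // a MultipleOutputs instance writes records to it as separate output files.
        MultipleOutputs.addNamedOutput(conf, "errors",
                TextOutputFormat.class, Text.class, LongWritable.class);
    }
}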
Q28. What will a Hadoop job do if you try to run it with an output directory that is already present? Will it:
- overwrite it
- warn you and continue
- throw an exception and exit
The Hadoop job will throw an exception and exit.
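Because of this, driver programs often delete (or refuse to reuse) a pre-existing output directory before submitting the job; a minimal sketch with a made-up path:

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class OutputDirGuard {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(OutputDirGuard.class);
        Path output = new Path("/user/demo/output"); // hypothetical path
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(output)) {
            // Delete recursively so the job does not fail at submission time
            // with an "output directory already exists" exception.
            fs.delete(output, true);
        }
    }
}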
Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?
This is a trick question. You cannot set it directly; the number of map tasks is determined by the number of input splits.
Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?
You can either do it programmatically by using the setNumReduceTasks method of the JobConf class, or set it as a configuration setting.
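A minimal sketch of both options, assuming the old JobConf API and the old mapred.reduce.tasks property name:

import org.apache.hadoop.mapred.JobConf;

public class ReducerCountSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf(ReducerCountSketch.class);

        // Programmatic: ask for 10 reduce tasks.
        conf.setNumReduceTasks(10);

        // Equivalent configuration setting, e.g. also settable as
        // -D mapred.reduce.tasks=10 on the command line.
        conf.setInt("mapred.reduce.tasks", 10);
    }
}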