Source: http://blog.csdn.net/dandingyy/article/details/7490046
As is well known, Hadoop processes a single large file much more efficiently than many small files; moreover, every file, however small, has its own metadata held in the namenode's memory, so masses of small files strain HDFS. It is therefore often necessary to merge them.
1. getmerge
Hadoop ships with a command-line tool, getmerge, which copies a set of files from HDFS to the local machine, concatenating them along the way.
Reference: http://hadoop.apache.org/common/docs/r0.19.2/cn/hdfs_shell.html
Usage: hadoop fs -getmerge <src> <localdst> [addnl]
It takes a source directory and a destination file as input and concatenates every file under the source directory into the single local destination file. The optional addnl flag appends a newline after each file's contents.
A side note: file system (FS) shell commands are invoked as bin/hadoop fs <args>, and all of them take URI paths as arguments. The URI format is scheme://authority/path.
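For example, to merge everything under an HDFS directory into one local file (both paths here are hypothetical): hadoop fs -getmerge /user/kqiao/input /tmp/merged.txt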
2. putmerge
This goes the other way: merge local small files while uploading them to HDFS.
One approach is to write a local script that first concatenates the small files into one large file and then uploads the whole thing; this consumes a large amount of local disk space.
The other approach, shown below, merges during the copy itself, so the large file never exists locally. Reference: *Hadoop in Action*.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Argument 1 is a local directory; argument 2 is the target file on HDFS.
public class PutMerge {

    public static void putMergeFunc(String localDir, String fsFile) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);          // the HDFS file system
        FileSystem local = FileSystem.getLocal(conf);  // the local file system

        Path localPath = new Path(localDir);
        Path hdfsFile = new Path(fsFile);

        // List the input directory; it is assumed to contain only plain files.
        FileStatus[] status = local.listStatus(localPath);
        // Create the output file on HDFS.
        FSDataOutputStream out = fs.create(hdfsFile);

        for (FileStatus st : status) {
            Path temp = st.getPath();
            FSDataInputStream in = local.open(temp);
            // Copy the contents of in into out, 4 KB at a time.
            IOUtils.copyBytes(in, out, 4096, false);
            in.close();  // close the current input stream when done
        }
        out.close();
    }

    public static void main(String[] args) throws IOException {
        String l = "/home/kqiao/hadoop/MyHadoopCodes/putmergeFiles";
        String f = "hdfs://ubuntu:9000/user/kqiao/test/PutMergeTest";
        putMergeFunc(l, f);
    }
}
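Because the input streams are chained straight into the HDFS output stream, only a small buffer lives in memory at any time and nothing is written to local disk. Since the paths are hardcoded in main, a hypothetical invocation (jar name assumed) is simply: hadoop jar putmerge.jar PutMerge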
3. A MapReduce job that packs small files into a SequenceFile
Source: *Hadoop: The Definitive Guide*
First, an InputFormat that treats each whole file as a single record:
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class WholeFileInputFormat
        extends FileInputFormat<NullWritable, BytesWritable> {

    // Returning false guarantees a file is never split,
    // so each map task sees one complete file.
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        WholeFileRecordReader reader = new WholeFileRecordReader();
        reader.initialize(split, context);
        return reader;
    }
}
The custom RecordReader used by the class above:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// A custom RecordReader serving the custom InputFormat above; the six
// overridden methods are the abstract methods RecordReader requires.
public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit fileSplit;
    private Configuration conf;
    private BytesWritable value = new BytesWritable();
    // processed records whether the single key/value pair has been emitted yet
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        this.fileSplit = (FileSplit) split;
        this.conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(conf);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                // Read the entire file into the contents array with
                // IOUtils.readFully.
                IOUtils.readFully(in, contents, 0, contents.length);
                // BytesWritable is a byte sequence usable as a key or value
                // (unlike ByteWritable, which wraps a single byte); set its
                // content to the bytes just read.
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void close() throws IOException {
        // nothing to clean up
    }
}
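Note that nextKeyValue buffers the whole file in a byte array, so this reader is only suitable when each file comfortably fits in a map task's heap, which is exactly the small-files scenario it targets.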
Finally, the job that packs the small files into a SequenceFile:
import java.io.IOException;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class SmallFilesToSequenceFileConverter extends Configured implements Tool {

    // Static nested class serving as the mapper.
    static class SequenceFileMapper
            extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {

        private Text filenameKey;

        // setup runs once before the task starts; it initializes filenameKey
        // to the path of the file backing this task's split.
        @Override
        protected void setup(Context context) {
            InputSplit split = context.getInputSplit();
            Path path = ((FileSplit) split).getPath();
            filenameKey = new Text(path.toString());
        }

        @Override
        public void map(NullWritable key, BytesWritable value, Context context)
                throws IOException, InterruptedException {
            context.write(filenameKey, value);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // Use the Configuration that ToolRunner has already populated.
        Job job = new Job(getConf());
        job.setJarByClass(SmallFilesToSequenceFileConverter.class);
        job.setJobName("SmallFilesToSequenceFileConverter");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // The input/output format classes describe how files are divided
        // into splits and records, and how results are written back out.
        job.setInputFormatClass(WholeFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        // These set the key/value types of the final output -- easy to get wrong!
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);

        job.setMapperClass(SequenceFileMapper.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new SmallFilesToSequenceFileConverter(), args);
        System.exit(exitCode);
    }
}
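The keys of the resulting SequenceFile are the original file paths and the values are the raw file bytes. A hypothetical run (jar name and paths assumed): hadoop jar smallfiles.jar SmallFilesToSequenceFileConverter input/smallfiles output. To spot-check the result, here is a minimal sketch that reads the packed SequenceFile back and prints each record's key and value size; the class name SequenceFileDump and the output-path argument are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Hypothetical verification tool: prints each original file name and its size
// as stored in the packed SequenceFile (e.g. output/part-r-00000).
public class SequenceFileDump {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            Text key = new Text();
            BytesWritable value = new BytesWritable();
            while (reader.next(key, value)) {
                System.out.println(key + "\t" + value.getLength() + " bytes");
            }
        } finally {
            reader.close();
        }
    }
}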