I. Hadoop Support for Snappy
1. Recompile the Hadoop 2.7.2 source so that it supports the Snappy compression/decompression library: http://blog.itpub.net/30089851/viewspace-2120631/
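For reference, a typical native build invocation looks like the sketch below. The -Drequire.snappy, -Dsnappy.lib, and -Dbundle.snappy flags are documented in Hadoop's BUILDING.txt; /usr/local/lib is an assumed libsnappy install location, so adjust to taste:

# Build a native-enabled distribution that bundles libsnappy into lib/native;
# -Drequire.snappy makes the build fail fast if the snappy headers are missing.
mvn clean package -Pdist,native -DskipTests -Dtar \
    -Drequire.snappy -Dsnappy.lib=/usr/local/lib -Dbundle.snappy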
2. Check libsnappy.so.1.2.0:
[root@sht-sgmhadoopnn-01 ~]# ll $HADOOP_HOME/lib/native/
total 4880
-rw-r--r-- 1 root root 1211196 Jun 21 19:11 libhadoop.a
-rw-r--r-- 1 root root 1485756 Jun 21 19:12 libhadooppipes.a
lrwxrwxrwx 1 root root 18 Jun 21 19:45 libhadoop.so -> libhadoop.so.1.0.0
-rwxr-xr-x 1 root root 717060 Jun 21 19:11 libhadoop.so.1.0.0
-rw-r--r-- 1 root root 582128 Jun 21 19:12 libhadooputils.a
-rw-r--r-- 1 root root 365052 Jun 21 19:11 libhdfs.a
lrwxrwxrwx 1 root root 16 Jun 21 19:45 libhdfs.so -> libhdfs.so.0.0.0
-rwxr-xr-x 1 root root 229289 Jun 21 19:11 libhdfs.so.0.0.0
-rw-r--r-- 1 root root 233538 Jun 21 19:11 libsnappy.a
-rwxr-xr-x 1 root root 953 Jun 21 19:11 libsnappy.la
lrwxrwxrwx 1 root root 18 Jun 21 19:45 libsnappy.so -> libsnappy.so.1.2.0
lrwxrwxrwx 1 root root 18 Jun 21 19:45 libsnappy.so.1 -> libsnappy.so.1.2.0
-rwxr-xr-x 1 root root 147726 Jun 21 19:11 libsnappy.so.1.2.0
[root@sht-sgmhadoopnn-01 ~]#
### Assuming the cluster is already installed:
3. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
### This fixes the warning "Unable to load native-hadoop library":
[root@sht-sgmhadoopnn-01 ~]# hadoop fs -ls /
16/06/21 15:08:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
###
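One way to append the line on this node (a sketch; the single quotes keep $HADOOP_OPTS and $HADOOP_HOME unexpanded in the file so they are resolved when Hadoop starts):

echo 'export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"' \
    >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh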
4. Edit $HADOOP_HOME/etc/hadoop/core-site.xml:
<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,
        org.apache.hadoop.io.compress.DefaultCodec,
        org.apache.hadoop.io.compress.BZip2Codec,
        org.apache.hadoop.io.compress.SnappyCodec
    </value>
</property>
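A quick way to confirm the codec list is visible in the merged client configuration:

hdfs getconf -confKey io.compression.codecs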
5. Edit the compression-related properties in $HADOOP_HOME/etc/hadoop/mapred-site.xml to test Snappy:
<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
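These two switches can also be passed per job instead of cluster-wide; the example jars accept -D options through GenericOptionsParser (the output path /output-snappy is just an illustrative name):

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount \
    -Dmapreduce.map.output.compress=true \
    -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
    /input /output-snappy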
6. Add the following to $HADOOP_HOME/etc/hadoop/yarn-site.xml (enable log aggregation, set the YARN log server address, and configure YARN memory and CPU):
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log.server.url</name>
    <value>http://sht-sgmhadoopnn-01:19888/jobhistory/logs</value>
</property>
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1500</value>
    <description>Minimum memory a single task may request; default 1024 MB</description>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2500</value>
    <description>Maximum memory a single task may request; default 8192 MB</description>
</property>
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
</property>
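With 10240 MB per NodeManager and a 2500 MB per-container ceiling, each node can run at most floor(10240/2500) = 4 containers of the maximum size. After the restart in step 8, the ResourceManager REST API reports the resulting totals (assuming the RM runs on this host with the default web port 8088):

curl -s http://sht-sgmhadoopnn-01:8088/ws/v1/cluster/metrics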
7. Sync hadoop-env.sh, core-site.xml, mapred-site.xml, and yarn-site.xml to the other nodes in the cluster (see the sketch below).
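A minimal sync loop (the remote host names here are hypothetical; substitute the cluster's actual nodes):

for h in sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02; do
    scp $HADOOP_HOME/etc/hadoop/{hadoop-env.sh,core-site.xml,mapred-site.xml,yarn-site.xml} \
        $h:$HADOOP_HOME/etc/hadoop/
done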
8. Restart the Hadoop cluster (one way is sketched below).
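For example, with the standard sbin scripts (the JobHistory server backs the yarn.log.server.url set in step 6):

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver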
9. Verification 1: hadoop checknative
[root@sht-sgmhadoopnn-01 ~]# hadoop checknative
16/06/25 12:58:13 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
16/06/25 12:58:13 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /hadoop/hadoop-2.7.2/lib/native/libhadoop.so.1.0.0
zlib: true /usr/local/lib/libz.so.1
snappy: true /hadoop/hadoop-2.7.2/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2: false
openssl: true /usr/lib64/libcrypto.so
[root@sht-sgmhadoopnn-01 ~]#
#### Native libraries load, and Snappy is supported
10. Verification 2: run a wordcount job
[root@sht-sgmhadoopnn-01 ~]# vi test.log
a
c d
c d d d a
1 2
a
[root@sht-sgmhadoopnn-01 ~]# hadoop fs -mkdir /input
[root@sht-sgmhadoopnn-01 ~]# hadoop fs -put test.log /input/
[root@sht-sgmhadoopnn-01 ~]# hadoop jar /hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output1
To verify that the job succeeded and that Snappy was actually used, check the output in /output1 and the job's logs.
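A minimal check (a sketch: the application id placeholder must be replaced with the real id printed by the run above; yarn logs works here because log aggregation was enabled in step 6):

hadoop fs -cat /output1/part-r-00000
# Grep the aggregated job logs for the native snappy loader message:
yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX | grep -i snappy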