Hadoop 2.x Cluster Setup

======================================================
Basic environment setup (CentOS 7 as the example):

1.Configure /etc/sysconfig/network-scripts/ifcfg-ens33 to bind a static IP
2.Configure hostname-to-IP resolution by editing /etc/hosts
3.Set the hostname. On CentOS 7 the preferred way is hostnamectl set-hostname <hostname>; the legacy method is to edit /etc/sysconfig/network and add the line

HOSTNAME=hostname

4.Disable iptables, SELinux, and firewalld
5.Install the JDK and set $JAVA_HOME
6.Extract Hadoop 2.x to /opt/app and set $HADOOP_HOME
7.Set up passwordless SSH between all hosts, including from each host to itself (all three machines share the same user, beifeng)
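The steps above can be sketched as a shell session. The hostname, user beifeng, and paths come from this guide; run as root on each node, and repeat the ssh-copy-id for every host in the cluster:

```shell
# Step 3: set the hostname (CentOS 7 way)
hostnamectl set-hostname hadoop-master

# Step 4: disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0        # takes effect now; also set SELINUX=disabled in /etc/selinux/config

# Step 7: passwordless SSH for the beifeng user
# (run ssh-copy-id once per target host, including this machine itself)
su - beifeng -c 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'
su - beifeng -c 'ssh-copy-id beifeng@hadoop-master'
```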

========================================================
hadoop 2.x distributed deployment plan

HOSTNAME       IPADDR         HDFS                         YARN                         MAPREDUCE

hadoop-master  192.168.1.129  NameNode, DataNode           NodeManager                  JobHistoryServer
hadoop-slave1  192.168.1.130  DataNode                     ResourceManager, NodeManager
hadoop-slave2  192.168.1.131  SecondaryNameNode, DataNode  NodeManager
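The host plan above translates directly into the /etc/hosts entries of step 2 of the base setup. A small sketch that writes them to a sample file first (append the same lines to /etc/hosts on every node):

```shell
# Write the name-resolution entries for the three-node plan to a sample file;
# the same three lines must go into /etc/hosts on every node in the cluster.
cat > /tmp/hosts.sample <<'EOF'
192.168.1.129 hadoop-master
192.168.1.130 hadoop-slave1
192.168.1.131 hadoop-slave2
EOF
cat /tmp/hosts.sample
```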

==========================================================
hadoop 2.x configuration files for each daemon

hdfs:

hadoop-env.sh   -->   set $JAVA_HOME
core-site.xml   -->   set the NameNode address (fs.defaultFS)
                      set Hadoop's temporary directory (hadoop.tmp.dir)
hdfs-site.xml   -->   set the SecondaryNameNode address (dfs.namenode.secondary.http-address)
slaves          -->   list the DataNode nodes' IPs/hostnames

yarn:

yarn-env.sh     -->   set $JAVA_HOME
yarn-site.xml   -->   set the ResourceManager node (yarn.resourcemanager.hostname)
                      enable log aggregation (yarn.log-aggregation-enable)
                      set the MapReduce shuffle service (yarn.nodemanager.aux-services = mapreduce_shuffle)
slaves          -->   list the NodeManager nodes' IPs/hostnames

mapreduce:

mapred-site.xml -->   configure the JobHistory server
                      configure MapReduce to run on YARN (mapreduce.framework.name = yarn)
                     

===============================================================
Configure hdfs, yarn, and mapreduce on the hadoop-master node

1.Configure hdfs
(if $JAVA_HOME is already set system-wide, hadoop-env.sh usually needs no further changes)
a.$HADOOP_HOME/etc/hadoop/core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-master:8020</value>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/opt/data/tmp</value>
</property>

</configuration>

b.$HADOOP_HOME/etc/hadoop/hdfs-site.xml

The replication factor (dfs.replication) can be left at its default of 3, which matches the three DataNodes. Note that dfs.namenode.secondary.http-address takes host:port, not a URL:

<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-slave2:50090</value>
</property>

c.$HADOOP_HOME/etc/hadoop/slaves

(this file also determines which nodes run a NodeManager)

hadoop-master
hadoop-slave1
hadoop-slave2
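The slaves file is just the three hostnames, one per line. A sketch that generates it (writing to /tmp here for illustration; point SLAVES_FILE at $HADOOP_HOME/etc/hadoop/slaves on a real cluster):

```shell
# Generate the slaves file for the three-node plan.
SLAVES_FILE=/tmp/slaves        # real target: $HADOOP_HOME/etc/hadoop/slaves
cat > "$SLAVES_FILE" <<'EOF'
hadoop-master
hadoop-slave1
hadoop-slave2
EOF
cat "$SLAVES_FILE"
```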

2.Configure yarn

a.yarn-site.xml


<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-slave1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>


<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>  <!-- keep aggregated logs for 7 days -->
</property>

3.Configure MapReduce

a.mapred-site.xml

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>


<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop-master:10020</value>
</property>


<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop-master:19888</value>
</property>

======================================================================

Copy Hadoop to hadoop-slave1 and hadoop-slave2

scp -r $HADOOP_HOME hadoop-slave1:/opt/app
scp -r $HADOOP_HOME hadoop-slave2:/opt/app
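The two scp commands can be generated in a loop over the worker hosts. This dry-run sketch only prints the commands (the default path is an example; pipe to sh, or drop the echo, to actually copy):

```shell
# Print the distribution commands for each worker host (dry run).
dist_cmds() {
  local h
  for h in hadoop-slave1 hadoop-slave2; do
    echo "scp -r ${HADOOP_HOME:-/opt/app/hadoop} $h:/opt/app"
  done
}
dist_cmds    # review the output, then: dist_cmds | sh
```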

========================================================================
Start the Hadoop cluster

1.On hadoop-master, format the NameNode (first-time setup only)

hdfs namenode -format
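Formatting a second time regenerates the NameNode clusterID and strands the existing DataNodes, so it is worth guarding the format command. A sketch, assuming the /opt/data/tmp directory set as hadoop.tmp.dir above:

```shell
# Only format if no NameNode metadata exists yet under hadoop.tmp.dir;
# the dfs/name/current subdirectory appears after the first format.
if [ ! -d /opt/data/tmp/dfs/name/current ]; then
  hdfs namenode -format
fi
```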

2.Start the HDFS cluster (run on hadoop-master)

start-dfs.sh

3.Start the YARN cluster (run on hadoop-slave1: start-yarn.sh starts the ResourceManager on the local machine)

start-yarn.sh

4.Start the JobHistory server (on hadoop-master)

mr-jobhistory-daemon.sh start historyserver

5.Check the running daemons on each node

jps
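With everything up, each node's jps output should match the HDFS/YARN/MAPREDUCE columns of the deployment plan. A sketch that checks all three nodes from one shell (assumes the passwordless SSH from the base setup and jps on each host's PATH):

```shell
# Print the Java daemons running on every node in the plan.
for host in hadoop-master hadoop-slave1 hadoop-slave2; do
  echo "== $host =="
  ssh "$host" jps    # compare against the deployment plan table
done
```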

===================================================================

END
