Starting a Hadoop HA Cluster

Experiment 1: Starting the High-Availability Cluster

Experiment Task 1: Starting HA

Step 1: Start the journalnode daemons

[hadoop@master hadoop]$ hadoop-daemons.sh start journalnode

master: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-root-journalnode-master.out

slave1: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-root-journalnode-slave1.out

slave2: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-root-journalnode-slave2.out
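Note: hadoop-daemons.sh (plural) runs the given subcommand on every host listed in etc/hadoop/slaves, which is why a single invocation from master brings up journalnodes on all three nodes.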

Step 2: Format the namenode

[hadoop@master ~]$ hdfs namenode -format

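If formatting succeeds, the log should contain a line similar to the following (the exact storage directory comes from dfs.namenode.name.dir in hdfs-site.xml; the path shown matches the one used in Step 6):

INFO common.Storage: Storage directory /usr/local/src/hadoop/tmp/hdfs/nn has been successfully formatted.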

Step 3: Register the ZNode

The zkfc -formatZK command creates the HA znode in ZooKeeper, so this step assumes the ZooKeeper quorum on master, slave1, and slave2 is already running.

[hadoop@master ~]$ hdfs zkfc -formatZK

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/src/hadoop/lib:/usr/local/src/hadoop/lib/native

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-693.el7.x86_64

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:user.name=root

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:user.home=/root

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/src/hadoop/etc/hadoop

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181,slave1:2181,slave2:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@27ce24aa

20/07/01 17:23:15 INFO zookeeper.ClientCnxn: Opening socket connection to server slave2/192.168.1.8:2181. Will not attempt to authenticate using SASL (unknown error)

20/07/01 17:23:15 INFO zookeeper.ClientCnxn: Socket connection established to slave2/192.168.1.8:2181, initiating session

20/07/01 17:23:15 INFO zookeeper.ClientCnxn: Session establishment complete on server slave2/192.168.1.8:2181, sessionid = 0x373099bfa8c0000, negotiated timeout = 5000

20/07/01 17:23:15 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns in ZK.

20/07/01 17:23:15 INFO zookeeper.ZooKeeper: Session: 0x373099bfa8c0000 closed

20/07/01 17:23:15 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x373099bfa8c0000

20/07/01 17:23:15 INFO zookeeper.ClientCnxn: EventThread shut down
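To double-check that the znode was created, you can list it with the ZooKeeper client (a quick sketch; zkCli.sh ships in ZooKeeper's bin directory and is assumed to be on the PATH):

[hadoop@master ~]$ zkCli.sh -server master:2181
[zk: master:2181(CONNECTED) 0] ls /hadoop-ha
[ns]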

Step 4: Start HDFS

[hadoop@master ~]$ start-dfs.sh

Starting namenodes on [master slave1]

master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-root-namenode-master.out

slave1: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-root-namenode-slave1.out

master: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-root-datanode-master.out

slave1: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-root-datanode-slave1.out

slave2: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-root-datanode-slave2.out

Starting journal nodes [master slave1 slave2]

master: journalnode running as process 1787. Stop it first.

slave2: journalnode running as process 1613. Stop it first.

slave1: journalnode running as process 1634. Stop it first.

Starting ZK Failover Controllers on NN hosts [master slave1]

slave1: starting zkfc, logging to /usr/local/src/hadoop/logs/hadoop-root-zkfc-slave1.out

master: starting zkfc, logging to /usr/local/src/hadoop/logs/hadoop-root-zkfc-master.out
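The "journalnode running as process ... Stop it first" messages are expected and harmless: the journalnodes were already started in Step 1, and start-dfs.sh merely detects that they are running.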

Step 5: Start YARN

[hadoop@master ~]$ start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-root-resourcemanager-master.out

master: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-root-nodemanager-master.out

slave1: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-root-nodemanager-slave1.out

slave2: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-root-nodemanager-slave2.out

Step 6: Synchronize data from master

Copy the namenode metadata to the other nodes (run on the master node):

[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave1:/usr/local/src/hadoop/tmp/hdfs/nn/

[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave2:/usr/local/src/hadoop/tmp/hdfs/nn/
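Of the two copy targets, only slave1 runs the second namenode (see Step 4), so that is the copy that matters. As an aside, the usual way to seed a standby namenode is the built-in bootstrap command, run on the standby host while the active namenode is up; it copies the current namespace over RPC instead of relying on scp:

[hadoop@slave1 ~]$ hdfs namenode -bootstrapStandby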


Step 7: Start the resourcemanager and namenode processes on slave1

[hadoop@slave1 ~]$ yarn-daemon.sh start resourcemanager

starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-root-resourcemanager-slave1.out

[hadoop@slave1 ~]$ hadoop-daemon.sh start namenode

starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-slave1.out
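The namenode on slave1 most likely failed to stay up during Step 4 because its metadata directory was only populated by the copy in Step 6, hence the manual restart here. The standby resourcemanager always needs a manual start: start-yarn.sh only starts a resourcemanager on the host where it is executed.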

Step 8: Start the YARN web proxy and the MapReduce job history service

[hadoop@master ~]$ yarn-daemon.sh start proxyserver

starting proxyserver, logging to /usr/local/src/hadoop/logs/yarn-root-proxyserver-master.out

[hadoop@master ~]$ mr-jobhistory-daemon.sh start historyserver

starting historyserver, logging to /usr/local/src/hadoop/logs/mapred-root-historyserver-master.out
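By default the job history server's web UI listens on port 19888 (master:19888 here), and the web proxy fronts access to the web UIs of running applications.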

Step 9: Check ports and processes

[hadoop@master ~]$ jps


[hadoop@slave1 ~]$ jps


[hadoop@slave2 ~]$ jps

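Given the daemons started above, jps on each node should list roughly the following processes (PIDs omitted; QuorumPeerMain assumes the ZooKeeper quorum runs on all three nodes, as the connect string in Step 3 suggests):

master: NameNode, DataNode, JournalNode, DFSZKFailoverController, ResourceManager, NodeManager, WebAppProxyServer, JobHistoryServer, QuorumPeerMain

slave1: NameNode, DataNode, JournalNode, DFSZKFailoverController, ResourceManager, NodeManager, QuorumPeerMain

slave2: DataNode, JournalNode, NodeManager, QuorumPeerMain

Then open the web UIs below: of the two namenode pages on port 50070, one should report "active" and the other "standby", and master:8088 is the resourcemanager UI.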

master:50070


slave1:50070


master:8088


Experiment Task 2: Testing HA

Step 1: Create a test file

[hadoop@master ~]$ vi a.txt
// with the following contents:

Hello World

Hello Hadoop

Step 2: Create a directory in HDFS

[hadoop@master ~]$ hadoop fs -mkdir /input

Step 3: Upload a.txt to /input

[hadoop@master ~]$ hadoop fs -put ~/a.txt /input
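To confirm the upload, print the file back from HDFS:

[hadoop@master ~]$ hadoop fs -cat /input/a.txt
Hello World
Hello Hadoop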

Step 4: Change to the directory containing the example jars

[hadoop@master ~]$ cd /usr/local/src/hadoop/share/hadoop/mapreduce/

Step 5: Test MapReduce

[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount /input/a.txt /output
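Note that the output directory (/output) must not exist beforehand; if it does, the job aborts with FileAlreadyExistsException, so remove it with hadoop fs -rm -r /output before re-running.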

If the job succeeds, the console log reports that the job completed successfully, followed by the job counters.

Step 6: Check the output in HDFS

[hadoop@master mapreduce]$ hadoop fs -lsr /output
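(-lsr is deprecated in Hadoop 2.x; hadoop fs -ls -R /output is the equivalent modern form. Either way, the listing should show a _SUCCESS marker and a part-r-00000 result file.)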
 


Step 7: View the contents of the result file

[hadoop@master mapreduce]$ hadoop fs -cat /output/part-r-00000

Hadoop 1

Hello 2

World 1

Experiment Task 3: Verifying High Availability

Step 1: Switch the service state manually with haadmin

Enter the following:

[hadoop@master mapreduce]$ cd

# Usage: hdfs haadmin -failover --forcefence --forceactive <currently active NN> <NN to make active>

[hadoop@master ~]$ hdfs haadmin -failover --forcefence --forceactive slave1 master

Check the state:

[hadoop@master ~]$ hdfs haadmin -getServiceState slave1


[hadoop@master ~]$ hdfs haadmin -getServiceState master
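After the failover, slave1 should report standby and master should report active.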

Step 2: Verify automatic failover by stopping the namenode

Stop and then restart the namenode on master:

[hadoop@master ~]$ hadoop-daemon.sh stop namenode

stopping namenode

Check the state:

[hadoop@master ~]$ hdfs haadmin -getServiceState master

[hadoop@master ~]$ hdfs haadmin -getServiceState slave1
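With the master namenode down, the query against master fails to connect, while the ZKFC should have promoted slave1 to active automatically.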


[hadoop@master ~]$ hadoop-daemon.sh start namenode


Check the state:

[hadoop@master ~]$ hdfs haadmin -getServiceState slave1


[hadoop@master ~]$ hdfs haadmin -getServiceState master
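After the restart, slave1 should remain active and master should come back as standby; a restarted namenode does not reclaim the active role on its own.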


Check the web UIs:

master:50070


slave1:50070

 

