Before performing any upgrades or uninstalling software, stop all of the Hadoop services in the following order:
Ranger
Knox
Oozie
WebHCat
HiveServer2
Hive Metastore
HBase
YARN
HDFS
ZooKeeper
Storm
Kafka
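If you are unsure which of these services are running on a given host, a quick check before you begin is to look for the corresponding daemon processes. This is a minimal sketch, not part of the original procedure; adjust the pattern to the services installed on that host:
# List Hadoop-related daemon processes on this host (no output means none are running)
ps -ef | grep -iE 'namenode|datanode|resourcemanager|nodemanager|hmaster|hregionserver|hiveserver2|oozie|knox|ranger' | grep -v grep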
Instructions
- Stop Ranger. Execute the following commands on the Ranger host machine:
sudo service ranger-admin stop
sudo service ranger-usersync stop
- Stop Knox. Execute the following command on the Knox host machine:
su -l knox -c "/usr/hdp/current/knox-server/bin/gateway.sh stop"
- Stop Oozie. Execute the following command on the Oozie host machine:
su -l oozie -c "/usr/hdp/current/oozie-server/bin/oozied.sh stop"
- Stop WebHCat. On the WebHCat host machine, execute the following command:
su -l hcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh stop"
- Stop Hive. Execute this command on the Hive Metastore and HiveServer2 host machines:
ps aux | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1
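The pipeline above collects the process IDs of every process owned by the hive user and sends them the default TERM signal. If the pkill utility from procps is available on the host (an assumption, not part of the original guide), a roughly equivalent and more direct form is:
# Send SIGTERM to every process owned by the hive user (roughly equivalent to the pipeline above)
pkill -u hive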
- Stop HBase:
  - Execute this command on all RegionServers:
su -l hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh stop regionserver"
  - Execute this command on the HBase Master host machine:
su -l hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh stop master"
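To confirm that the HBase daemons have exited on a host, one simple check (a sketch, not part of the original procedure) is to look for their JVM process names:
# No output means neither the HBase Master nor a RegionServer is still running on this host
ps -ef | grep -E 'HMaster|HRegionServer' | grep -v grep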
- Stop YARN:
  - Execute this command on all NodeManagers:
su -l yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh stop nodemanager"
  - Execute this command on the History Server host machine:
su -l yarn -c "/usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh stop historyserver"
  - Execute this command on the ResourceManager host machine(s):
su -l yarn -c "/usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh stop resourcemanager"
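A related pre-check, not part of the original procedure: before stopping the ResourceManager you can list any applications that are still running. This must be run while the ResourceManager is still up, and it assumes the yarn client wrapper is on the PATH:
# List applications that are still submitted, accepted, or running
su -l yarn -c "yarn application -list"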
- Stop HDFS:
  - Execute this command on all DataNodes:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh stop datanode"
  - If you are not running NameNode HA (High Availability), stop the Secondary NameNode by executing this command on the Secondary NameNode host machine:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop secondarynamenode"
  - Execute this command on the NameNode host machine(s):
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop namenode"
  - If you are running NameNode HA, stop the ZooKeeper Failover Controllers (ZKFC) by executing this command on the NameNode host machines:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop zkfc"
  - If you are running NameNode HA, stop the JournalNodes by executing this command on the JournalNode host machines:
su $HDFS_USER /usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh stop journalnode
where $HDFS_USER is the HDFS user. For example, hdfs.
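Several of the HDFS steps above depend on whether NameNode HA is configured. If you are unsure, one way to check (a sketch, not part of the original guide) is to query the configured nameservices:
# Prints the nameservice ID(s) when NameNode HA is configured
su -l hdfs -c "hdfs getconf -confKey dfs.nameservices"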
- Stop ZooKeeper. Execute this command on the ZooKeeper host machine(s):
su - zookeeper -c "export ZOOCFGDIR=/usr/hdp/current/zookeeper-server/conf ; export ZOOCFG=zoo.cfg; source /usr/hdp/current/zookeeper-server/conf/zookeeper-env.sh ; /usr/hdp/current/zookeeper-server/bin/zkServer.sh stop"
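To verify that ZooKeeper is down, the same zkServer.sh script also accepts a status argument. The following check mirrors the environment setup of the stop command above (a sketch, not part of the original procedure):
# Reports an error about not being able to contact the service once ZooKeeper has stopped
su - zookeeper -c "export ZOOCFGDIR=/usr/hdp/current/zookeeper-server/conf ; export ZOOCFG=zoo.cfg; source /usr/hdp/current/zookeeper-server/conf/zookeeper-env.sh ; /usr/hdp/current/zookeeper-server/bin/zkServer.sh status"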
- Stop Storm services using a process controller, such as supervisord. See "Installing and Configuring Apache Storm" in the Manual Install Guide. For example, to stop the storm-nimbus service:
sudo /usr/bin/supervisorctl
storm-drpc RUNNING pid 9801, uptime 0:03:20
storm-nimbus RUNNING pid 9802, uptime 0:03:20
storm-ui RUNNING pid 9800, uptime 0:03:20
supervisor> stop storm-nimbus
storm-nimbus: stopped
where $STORM_USER is the operating system user that installed Storm. For example, storm.
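supervisorctl can also be driven non-interactively. Assuming the service names shown in the session above are the ones defined in your supervisord configuration, a single command that stops all three Storm services would be:
# Stop the Storm daemons managed by supervisord in one call
sudo /usr/bin/supervisorctl stop storm-nimbus storm-ui storm-drpc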
- Stop Kafka. Execute this command on the Kafka host machine(s):
su $KAFKA_USER /usr/hdp/current/kafka-broker/bin/kafka stop
where $KAFKA_USER is the operating system user that installed Kafka. For example, kafka.
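To confirm that the broker has exited, one simple check (a sketch, not part of the original procedure) is to look for the broker JVM, whose main class is kafka.Kafka:
# No output means no Kafka broker process is running on this host
ps -ef | grep 'kafka\.Kafka' | grep -v grep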