Memory: master node 10 GB, the other two nodes 2 GB each.
Disk layout:
[root@hadoop104 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        47G  5.8G   39G  13% /
tmpfs           4.9G   72K  4.9G   1% /dev/shm
/dev/sda1       190M   39M  142M  22% /boot
1. Install the JDK and configure the environment variables.
2. Install and configure MySQL
[root@hadoop104 software]# rpm -qa | grep mysql
[root@hadoop104 software]# rpm -e --nodeps mysql-libs-5.1.73-7.el6.x86_64
[root@hadoop104 software]# rpm -ivh MySQL-server-5.6.24-1.el6.x86_64.rpm
# cat /root/.mysql_secret
Q3MnNS54TvuMpPgU        ## the randomly generated initial root password
# service mysql start
[root@hadoop104 software]# rpm -ivh MySQL-client-5.6.24-1.el6.x86_64.rpm
3. Disable SELinux
1) Temporarily: setenforce 0
2) Edit /etc/selinux/config (takes effect after a reboot): change SELINUX=enforcing to SELINUX=disabled
4. Passwordless SSH login
Configure passwordless SSH between hadoop104, hadoop105 and hadoop106 in both directions (a sketch follows this step list).
5. Download third-party dependencies
Run the download of the third-party dependencies on all three nodes (every agent node):
yum -y install chkconfig python bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse fuse-libs redhat-lsb httpd mod_ssl
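Step 4 above only states the goal; one common way to exchange the keys is sketched below. This is a minimal sketch, not part of the original notes: it assumes ssh-keygen and ssh-copy-id are available and that you can log in as root on all three hosts.
# run on hadoop104, then repeat the same two commands on hadoop105 and hadoop106
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa          # generate a key pair with no passphrase
for host in hadoop104 hadoop105 hadoop106; do
    ssh-copy-id root@$host                        # append the public key to that node's authorized_keys
done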
Create the databases used by CM:
(1) Cluster monitoring database
create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
(2) hive database
create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;   ## needed when installing Hive; if the Hive install fails, drop this database and create it again
(3) oozie database
create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
(4) hue database
create database hue DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
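The four statements above can also be run non-interactively from the shell; a minimal sketch, assuming the MySQL root password 123456 that is used later for scm_prepare_database.sh:
mysql -uroot -p123456 -e "
create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
create database hue DEFAULT CHARSET utf8 COLLATE utf8_general_ci;"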
CM installation and deployment
Install CM
1. Unpack cloudera-manager-el6-cm5.12.1_x86_64.tar.gz
[root@hadoop104 module]# mkdir /opt/module/cloudera-manager
[root@hadoop104 module]# tar -zxvf /opt/software/cloudera-manager-el6-cm5.12.1_x86_64.tar.gz -C /opt/module/cloudera-manager/
2. Create the cloudera-scm user (on all nodes, i.e. on all three nodes)
[root@hadoop104 cloudera-manager]# useradd --system --home=/opt/module/cloudera-manager/cm-5.12.1/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
[root@hadoop104 cloudera-manager]# id cloudera-scm
uid=495(cloudera-scm) gid=492(cloudera-scm) groups=492(cloudera-scm)
##### hadoop104, hadoop105 and hadoop106 all need the cloudera-scm user; otherwise, during Parcel distribution, the parcel cannot be activated on any node where the user was not created.
3. Configure the CM Agent
Edit /opt/module/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini
[root@hadoop104 cloudera-scm-agent]# vim config.ini
[General]
# Hostname of the CM server.
server_host=hadoop104
4. Configure the CM database (creating it on the master node is enough)
Copy the MySQL JDBC jar into /usr/share/java/
[root@hadoop104 share]# mkdir /usr/share/java/
[root@hadoop104 cm-5.12.1]# cp /opt/software/mysql-libs/mysql-connector-java-5.1.27/mysql-connector-java-5.1.27-bin.jar /usr/share/java/
[root@hadoop104 share]# mv /usr/share/java/mysql-connector-java-5.1.27-bin.jar /usr/share/java/mysql-connector-java.jar
• Note: the jar must be renamed to mysql-connector-java.jar.
Create the cm database in MySQL:
[root@hadoop104 cm-5.12.1]# /opt/module/cloudera-manager/cm-5.12.1/share/cmf/schema/scm_prepare_database.sh mysql cm -hhadoop104 -uroot -p123456 --scm-host hadoop104 scm scm scm
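The notes above do not show how the CM processes are started once the database is prepared. A minimal sketch of the usual next steps for a tarball install, assuming the cm-5.12.1 tree keeps its init scripts under etc/init.d and that xsync is the same rsync wrapper used later for /opt/cloudera:
# distribute the extracted CM directory to the agent nodes (hypothetical use of the xsync helper)
xsync /opt/module/cloudera-manager/
# on hadoop104 only: start the CM server
/opt/module/cloudera-manager/cm-5.12.1/etc/init.d/cloudera-scm-server start
# on all three nodes: start the CM agent
/opt/module/cloudera-manager/cm-5.12.1/etc/init.d/cloudera-scm-agent start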
Create the Parcel-repo directory
Create the Parcel-repo directory
1. On the Server node, create /opt/cloudera/parcel-repo
[root@hadoop104 module]# mkdir -p /opt/cloudera/parcel-repo
[root@hadoop104 module]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo
2. Copy the downloaded files into /opt/cloudera/parcel-repo
(1) CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel
(2) CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel.sha1: must be renamed to CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel.sha
(3) manifest.json
[root@hadoop104 cm-5.12.1]# mv /opt/software/CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel.sha1 /opt/software/CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel.sha
[root@hadoop104 module]# cp /opt/software/CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel.sha /opt/cloudera/parcel-repo/
[root@hadoop104 module]# cp /opt/software/CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel /opt/cloudera/parcel-repo/
[root@hadoop104 module]# cp /opt/software/manifest.json /opt/cloudera/parcel-repo/
3. On the Agent nodes (hadoop104, hadoop105, hadoop106), create /opt/cloudera/parcels
[root@hadoop104 module]# mkdir -p /opt/cloudera/parcels
[root@hadoop104 module]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcels
[root@hadoop105 module]# mkdir -p /opt/cloudera/parcels
[root@hadoop105 module]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcels
[root@hadoop106 module]# mkdir -p /opt/cloudera/parcels
[root@hadoop106 module]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcels
4. Distribute Parcel-repo
[root@hadoop104 cloudera]# xsync /opt/cloudera/
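Before letting CM distribute the parcel, it is worth checking that the .sha file really matches the parcel; a minimal sketch, assuming the .sha file contains only the hex digest, as CDH parcel .sha files normally do:
[root@hadoop104 parcel-repo]# sha1sum CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel    # compute the actual digest
[root@hadoop104 parcel-repo]# cat CDH-5.12.1-1.cdh5.12.1.p0.3-el6.parcel.sha    # digest CM expects; the two values must be identical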
Cloudera Manager
When reinstalling HDFS, its dfs data folder has to be deleted first.
When reinstalling Hive, drop the hive database in MySQL first.
Two ways to fix the permission problem:
First way:
# Second way
[root@hadoop104 init.d]# su - hdfs    ## when switching users with su, adding - also switches the environment over; this form is recommended
[hdfs@hadoop104 init.d]$ hadoop fs -chmod -R 777 /
[hdfs@hadoop104 init.d]$ exit    ### do not su straight into yet another user; exit first to end this session
exit
[root@hadoop104 init.d]#
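The same fix can also be applied in one line, without opening an interactive hdfs shell; a minimal sketch, assuming it is run as root and that hdfs is the HDFS superuser, as in a default CDH install:
[root@hadoop104 init.d]# sudo -u hdfs hadoop fs -chmod -R 777 /    # run the chmod directly as the hdfs user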
[root@hadoop104 ~]# hive    ## hive can be started directly
Java HotSpot(TM) 64-Bit
[root@hadoop104 ~]# beeline    ## beeline can also be started directly; there is no need to start hiveserver2 yourself, because CDH already runs it for us
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
beeline> !connect jdbc:hive2://hadoop104:10000
scan complete in 13ms
Connecting to jdbc:hive2://hadoop104:10000
Enter username for jdbc:hive2://hadoop104:10000: hive    #### the username is the user that started hiveserver2, not root
Enter password for jdbc:hive2://hadoop104:10000:
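The interactive !connect step can be replaced with a single command using beeline's standard -u/-n options; a minimal sketch, assuming the same hiveserver2 endpoint on hadoop104:10000:
[root@hadoop104 ~]# beeline -u jdbc:hive2://hadoop104:10000 -n hive    # -u gives the JDBC URL, -n the user that started hiveserver2; password left empty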