Learning CDH Deployment on Alibaba Cloud Servers

1. Tools to prepare:

Software installation packages:

Download the ossutil64 tool:

[root@hadoop001 ~]# wget http://gosspublic.alicdn.com/ossutil/1.6.3/ossutil64
--2019-06-29 09:48:21--  http://gosspublic.alicdn.com/ossutil/1.6.3/ossutil64
Resolving gosspublic.alicdn.com (gosspublic.alicdn.com)... 205.204.104.233, 205.204.104.242
Connecting to gosspublic.alicdn.com (gosspublic.alicdn.com)|205.204.104.233|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9741374 (9.3M) [application/octet-stream]
Saving to: ‘ossutil64’

100%[=========================================================================================================================================================>] 9,741,374   3.21MB/s   in 2.9s

2019-06-29 09:48:25 (3.21 MB/s) - ‘ossutil64’ saved [9741374/9741374]

The software packages were collected in advance and stored on OSS; the ossutil64 command-line tool is used to download them to the local machine.

[root@hadoop001 ~]# ./ossutil64 cp -r oss://20190616 /root
Succeed: Total num: 9, size: 3,701,212,166. OK num: 9(download 9 objects).  
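On a fresh machine, ossutil64 has to be made executable and configured with your own OSS endpoint and AccessKey before the cp above will work. A minimal sketch follows; the endpoint shown and the angle-bracket values are placeholders, not values from this setup:

# make the tool executable, then configure it once with your own endpoint and credentials
chmod +x ossutil64
./ossutil64 config -e oss-cn-hangzhou.aliyuncs.com -i <AccessKeyID> -k <AccessKeySecret>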

The software required for the installation is as follows:

[root@hadoop001 cdh5.16.1]# ll
total 3614496
-rw-r--r-- 1 root root 2127506677 Jun 29 10:04 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
-rw-r--r-- 1 root root         41 Jun 29 10:04 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1
-rw-r--r-- 1 root root  841524318 Jun 29 10:04 cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz
-rw-r--r-- 1 root root  173271626 Jun 29 10:04 jdk-8u45-linux-x64.gz
-rw-r--r-- 1 root root      66538 Jun 29 10:04 manifest.json
-rw-r--r-- 1 root root  548193637 Jun 29 10:04 mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz
-rw-r--r-- 1 root root    1007502 Jun 29 10:05 mysql-connector-java-5.1.47.jar
-rw-r--r-- 1 root root    9641827 Jun 29 10:05 ossutil64
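Since the parcel ships with a .sha1 file, it does not hurt to check that the download is intact before it is used later; a quick sketch, assuming both files sit in the current directory:

# the computed SHA-1 of the parcel should match the value stored in the .sha1 file
sha1sum CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
cat CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha1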

 

Alibaba Cloud servers:

Four Alibaba Cloud preemptible (spot) instances were purchased. At roughly 0.8 RMB per instance per hour the learning cost is quite low, and the instances can be released at any time once you are done, which makes them a good choice for hands-on practice.


2. Installing the dependency environment:

OS: CentOS 7.4

1. Set up passwordless SSH login

Passwordless SSH makes it easier to distribute files to the other hosts later with the scp command.

As the root user, enter the .ssh folder in the home directory and run the following:

[root@hadoop001 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:QLf04jdE2pLmSEyWlbg/7YKlmuVFAeNAEZE24HKePpA root@hadoop001
The key's randomart image is:
+---[RSA 2048]----+
|  .o*=*++..      |
| .  +Bo=.B       |
|. o. .=.O +      |
| = . ..* =       |
|E o   ..S.o      |
| o     .+...     |
|  o   .+.o       |
|   . +o.. .      |
|    o..  .       |
+----[SHA256]-----+

Write the contents of the generated id_rsa.pub into an authorized_keys file and distribute it to the other hosts with scp. (In the transcript below, the first scp of id_rsa.pub fails because the command is run from the home directory, where that file does not exist; the authorized_keys copies succeed.)

[root@hadoop001 .ssh]# cat id_rsa.pub >> /root/authorized_keys
[root@hadoop001 .ssh]# cd ..
[root@hadoop001 ~]# ll
total 9524
-rw-r--r-- 1 root root     396 Jun 29 11:05 authorized_keys
drwxr-xr-x 2 root root    4096 Jun 29 10:05 cdh5.16.1
-rwxr-xr-x 1 root root 9741374 Jun 20 15:57 ossutil64
[root@hadoop001 ~]# scp id_rsa.pub hadoop003:/root/.ssh/
root@hadoop003's password: 
id_rsa.pub: No such file or directory
[root@hadoop001 ~]# scp authorized_keys hadoop003:/root/.ssh/
root@hadoop003's password: 
authorized_keys                                                100%  396     1.5MB/s   00:00
[root@hadoop001 ~]# scp authorized_keys hadoop004:/root/.ssh/
root@hadoop004's password: 
authorized_keys
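As a side note, the same result can usually be achieved in one step per host with ssh-copy-id, which appends the public key directly to the remote ~/.ssh/authorized_keys. A sketch assuming the same hostnames:

# push the public key to each of the other hosts (prompts for the root password once per host)
for h in hadoop002 hadoop003 hadoop004; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h
done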

2. Configure the hosts file

[root@hadoop001 ~]# vi /etc/hosts

In the hosts file, add the cluster hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# cdh
172.31.188.179 hadoop001
172.31.188.180 hadoop002
172.31.188.181 hadoop003
172.31.188.182 hadoop004

 

Test connectivity between the cluster hosts; all four hosts respond normally.

[root@hadoop001 cdh5.16.1]# ping hadoop002
PING hadoop002 (172.31.188.180) 56(84) bytes of data.
64 bytes from hadoop002 (172.31.188.180): icmp_seq=1 ttl=64 time=0.146 ms
^Z
[2]+  Stopped                 ping hadoop002
[root@hadoop001 cdh5.16.1]# ping hadoop003
PING hadoop003 (172.31.188.181) 56(84) bytes of data.
64 bytes from hadoop003 (172.31.188.181): icmp_seq=1 ttl=64 time=0.295 ms
^Z
[3]+  Stopped                 ping hadoop003
[root@hadoop001 cdh5.16.1]# ping hadoop004
PING hadoop004 (172.31.188.182) 56(84) bytes of data.
64 bytes from hadoop004 (172.31.188.182): icmp_seq=1 ttl=64 time=0.331 ms
^Z
[4]+  Stopped                 ping hadoop004
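If you would rather not interrupt each ping by hand (^Z only suspends the process), a fixed packet count covers all hosts in one loop; a sketch assuming the same hostnames:

# send a single echo request to each host and report whether it is reachable
for h in hadoop002 hadoop003 hadoop004; do
    ping -c 1 -W 2 $h > /dev/null && echo "$h reachable" || echo "$h unreachable"
done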

Distribute the modified hosts file to the other three hosts with scp:

[root@hadoop001 ~]# scp /etc/hosts hadoop002:/etc
hosts                                                          100%  264     1.2MB/s   00:00
[root@hadoop001 ~]# scp /etc/hosts hadoop003:/etc
hosts                                                          100%  264     1.2MB/s   00:00
[root@hadoop001 ~]# scp /etc/hosts hadoop004:/etc
hosts                                                          100%  264     1.0MB/s   00:00

3. Install Java

Extract the Java archive into /usr/java:

[root@hadoop001 ~]# tar -xzvf cdh5.16.1/jdk-8u45-linux-x64.gz -C /usr/java/
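One caveat: tar does not create the target directory for -C, so on a fresh system /usr/java has to be created before the extraction (this assumes it does not already exist on your instance):

# create the install directory first, otherwise tar fails with "Cannot chdir"
mkdir -p /usr/java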

Append the Java environment variables to /etc/profile:

#env
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$PATH:$JAVA_HOME/bin

Run the source command so the variables take effect:

[root@hadoop001 ~]# source /etc/profile
[root@hadoop001 ~]# java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
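Every node in the cluster needs the same JDK, so the same steps have to be repeated on hadoop002 through hadoop004. A sketch of pushing the already-extracted JDK and the profile changes there with scp, assuming passwordless SSH from the earlier step and identical paths on every host:

# copy the extracted JDK and the updated profile to the other hosts, then verify java there
for h in hadoop002 hadoop003 hadoop004; do
    scp -r /usr/java root@$h:/usr/
    scp /etc/profile root@$h:/etc/profile
    ssh root@$h "source /etc/profile && java -version"
done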

 
