Straight to the point!
My cluster consists of bigdatamaster (192.168.80.10), bigdataslave1 (192.168.80.11), and bigdataslave2 (192.168.80.12).
The installation directory is /home/hadoop/app.
The official docs recommend installing Hue on the master node, and I follow that: it goes on bigdatamaster.
Hue version: hue-3.9.0-cdh5.5.4
Hue must be compiled before use (network access required). A word of advice: if your machine has enough resources, install Cloudera Manager instead, since all these components come from the same vendor. I am currently a graduate student with at most 8 GB of RAM on my laptop, so I do everything by hand for practice.
This walkthrough is aimed at students with limited hardware. On my lab's machines I have already set up both Cloudera Manager and Ambari.
The two most popular cluster management tools in big data: Ambari and Cloudera Manager
Deploying a big data cluster with Cloudera on CentOS 6.5 (illustrated in five major steps) (strongly recommended)
Deploying a big data cluster with Ambari on CentOS 6.5 (illustrated in five major steps) (strongly recommended)
Deploying a big data cluster with Ambari on Ubuntu 14.04 (illustrated in five major steps) (strongly recommended)
Deploying a big data cluster with Cloudera on Ubuntu 14.04 (illustrated in five major steps) (strongly recommended) (online or offline)
This post uses CentOS as the example system.
http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.4/manual.html#_install_hue
On other systems such as Ubuntu, only the installation of these dependencies differs; see the official manual for details.
1. Download address for hue-3.9.0-cdh5.5.4.tar.gz
http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.4.tar.gz
2. Before installing Hue, install the following dependency packages
ant
asciidoc
cyrus-sasl-devel
cyrus-sasl-gssapi
gcc
gcc-c++
krb5-devel
libtidy (for unit tests only)
libxml2-devel
libxslt-devel
make
mvn (from the maven package or a maven3 tarball)
MySQL (optional here; I already installed MySQL on bigdatamaster when setting up Hive)
mysql-devel (optional here, for the same reason)
openldap-devel
python-devel
sqlite-devel
openssl-devel (for version 7+)
gmp-devel
Check whether the packages above are already on the system:
rpm -qa | grep package_name
Note: use the actual package name, not the placeholder, as in the session below.
[hadoop@bigdatamaster app]$ rpm -qa | grep ant
wpa_supplicant-0.7.-.el6_3.x86_64
anthy-9100h-10.1.el6.x86_64
ibus-anthy-1.2.-.el6.x86_64
enchant-1.5.-.el6.x86_64
[hadoop@bigdatamaster app]$ rpm -qa | grep asciidoc
[hadoop@bigdatamaster app]$ rpm -qa | grep cyrus-sasl-devel
[hadoop@bigdatamaster app]$ rpm -qa | grep gcc
libgcc-4.4.-.el6.x86_64
[hadoop@bigdatamaster app]$ rpm -qa | grep gcc-c++
[hadoop@bigdatamaster app]$ rpm -qa | grep krb5-devel
krb5-devel-1.10.-.el6.x86_64
[hadoop@bigdatamaster app]$ rpm -qa | grep libtidy
[hadoop@bigdatamaster app]$ rpm -qa | grep libxml2-devel
[hadoop@bigdatamaster app]$ rpm -qa | grep libxslt-devel
[hadoop@bigdatamaster app]$ rpm -qa | grep make
make-3.81-.el6.x86_64
[hadoop@bigdatamaster app]$ rpm -qa | grep mvn
[hadoop@bigdatamaster app]$ rpm -qa | grep mysql-devel
mysql-devel-5.1.-.el6_8.x86_64
[hadoop@bigdatamaster app]$ rpm -qa | grep openldap-devel
[hadoop@bigdatamaster app]$ rpm -qa | grep python-devel
[hadoop@bigdatamaster app]$ rpm -qa | grep sqlite-devel
[hadoop@bigdatamaster app]$ rpm -qa | grep openssl-devel
openssl-devel-1.0.1e-.el6.x86_64
[hadoop@bigdatamaster app]$ rpm -qa | grep gmp-devel
[hadoop@bigdatamaster app]$
Some references say the bundled versions of these packages must be removed first, or they will cause version conflicts later in the installation.
If a plain yum install reports dependency version conflicts, uninstall the existing versions first and then install again.
Remove a bundled package with:
rpm -e --nodeps package_name
Check for and remove the bundled ant-related packages:
[hadoop@bigdatamaster app]$ su root
Password:
[root@bigdatamaster app]# rpm -qa | grep ant
wpa_supplicant-0.7.-.el6_3.x86_64
anthy-9100h-10.1.el6.x86_64
ibus-anthy-1.2.-.el6.x86_64
enchant-1.5.-.el6.x86_64
[root@bigdatamaster app]# rpm -e --nodeps wpa_supplicant-0.7.-.el6_3.x86_64
[root@bigdatamaster app]# rpm -e --nodeps anthy-9100h-10.1.el6.x86_64
[root@bigdatamaster app]# rpm -e --nodeps ibus-anthy-1.2.-.el6.x86_64
[root@bigdatamaster app]# rpm -e --nodeps enchant-1.5.-.el6.x86_64
[root@bigdatamaster app]#
Check for and remove the bundled asciidoc, cyrus-sasl-devel, and gcc packages:
[root@bigdatamaster app]# rpm -qa | grep asciidoc
[root@bigdatamaster app]# rpm -qa | grep cyrus-sasl-devel
[root@bigdatamaster app]# rpm -qa | grep gcc
libgcc-4.4.-.el6.x86_64
[root@bigdatamaster app]# rpm -e --nodeps libgcc-4.4.-.el6.x86_64 (I regretted this command the moment I ran it)
[root@bigdatamaster app]#
I will not walk through the rest one by one; you may hit the following problem while removing packages:
[root@bigdatamaster app]# rpm -qa | grep krb5-devel
rpm: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory
[root@bigdatamaster app]# rpm -qa | grep libtidy
rpm: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory
Recklessly removing RPMs causes an "error while loading shared libraries: libgcc_s.so.1" error the next time you try to install a package.
The fix
First, search for the libgcc_s.so.1 shared library; it does exist:
[root@bigdatamaster app]# locate libgcc_s.so.1
It is at /lib64/libgcc_s.so.1.
Since mine was still under /lib64, see:
Causes and fixes for the "error while loading shared libraries: xxx.so.x" error
Per option 1) in that post: if the shared library is installed under /lib or /usr/lib, run ldconfig.
Frankly, none of those references helped much.
The simplest fix: we run a cluster, after all, so just copy libgcc_s-4.4.6-20110824.so.1 from another node into /lib64 and everything is back to normal.
rpm: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory — the fix (see my own summary post)
Add the Maven repository:
wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
[root@bigdatamaster app]# wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
Resolving repos.fedorapeople.org... 152.19.134.199
Connecting to repos.fedorapeople.org|152.19.134.199|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo [following]
Connecting to repos.fedorapeople.org|152.19.134.199|:443... connected.
HTTP request sent, awaiting response... 200 OK
Saving to: "/etc/yum.repos.d/epel-apache-maven.repo"

"/etc/yum.repos.d/epel-apache-maven.repo" saved

[root@bigdatamaster app]#
Install the dependencies (note: mysql and mysql-devel are omitted because I already installed them for Hive; installing them again here can cause version conflicts — don't say I didn't warn you):
yum install -y ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel make mvn openldap-devel python-devel sqlite-devel openssl-devel gmp-devel
Or download the required system packages:
yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel sqlite-devel openssl-devel mysql-devel gmp-devel
(yum then verifies and installs the packages together with their dependencies; the long transaction log is omitted here.)
Complete!
[root@bigdatamaster app]#
Upload hue-3.9.0-cdh5.5.4.tar.gz (I downloaded it in advance and upload it here).
Alternatively, you can clone and compile the Hue 3.9 source; compilation takes quite a while:
git clone https://github.com/cloudera/hue.git branch-3.9
cd branch-3.9
make apps
After compiling, you can also install it:
make install
I chose to upload the tarball instead.
[hadoop@bigdatamaster app]$ pwd
/home/hadoop/app
[hadoop@bigdatamaster app]$ ll
(listing of the existing components — flume, elasticsearch, filebeat, hadoop, hbase, hive, jdk, kafka, kibana, logstash, scala, spark, sqoop, zookeeper — omitted)
[hadoop@bigdatamaster app]$ rz
[hadoop@bigdatamaster app]$ ll
(same listing as above, now also containing:)
-rw-r--r-- hadoop hadoop May  hue-3.9.0-cdh5.5.4.tar.gz
[hadoop@bigdatamaster app]$
Extract the archive:
[hadoop@bigdatamaster app]$ tar -zxvf hue-3.9.0-cdh5.5.4.tar.gz
[hadoop@bigdatamaster app]$ ll
(listing omitted; it now contains both the extracted directory and the tarball:)
drwxr-xr-x hadoop hadoop Apr  hue-3.9.0-cdh5.5.4
-rw-r--r-- hadoop hadoop May  hue-3.9.0-cdh5.5.4.tar.gz
[hadoop@bigdatamaster app]$ rm hue-3.9.0-cdh5.5.4.tar.gz
Create a symlink:
[hadoop@bigdatamaster app]$ ln -s hue-3.9.0-cdh5.5.4/ hue
[hadoop@bigdatamaster app]$ ll
(listing omitted; it now contains:)
lrwxrwxrwx hadoop hadoop May  hue -> hue-3.9.0-cdh5.5.4/
drwxr-xr-x hadoop hadoop Apr  hue-3.9.0-cdh5.5.4
[hadoop@bigdatamaster app]$
Enter the Hue installation directory and compile:
make apps
This takes several minutes, depending on your network speed.
Why run make apps? It builds the Python environment and application bundles that Hue needs at runtime.
First, be absolutely clear about my three-node cluster layout (if you skip this, you will regret it later).
(The configuration table below is something I put together for readability; the values are dynamic and must track your own machines.)
| Hue section | Hue key | Hue value | Notes |
| --- | --- | --- | --- |
| desktop | default_hdfs_superuser | hadoop | HDFS superuser; set it to your own Hadoop user (mine is hadoop). Some guides instead edit DEFAULT_HDFS_SUPERUSER = 'hdfs' in desktop/libs/hadoop/src/hadoop/fs/webhdfs.py, or change default_hdfs_superuser=hdfs to default_hdfs_superuser=root in hue.ini (note: their cluster user happens to be root). |
| desktop | http_host | 192.168.80.10 | Host/IP of the Hue web server |
| desktop | http_port | 8000 | Hue web server port |
| desktop | server_user | hue | User the Hue web server runs as |
| desktop | server_group | hue | Group the Hue web server runs as |
| desktop | default_user | hue | Hue administrator |
| hadoop/hdfs_clusters | fs_defaultfs | hdfs://bigdatamaster:9000 | Matches fs.defaultFS in core-site.xml |
| hadoop/hdfs_clusters | hadoop_conf_dir | /home/hadoop/app/hadoop/etc/hadoop/conf | Hadoop configuration directory |
| hadoop/yarn_clusters | resourcemanager_host | bigdatamaster | Matches yarn.resourcemanager.hostname in yarn-site.xml |
| hadoop/yarn_clusters | resourcemanager_port | 8032 | ResourceManager IPC port |
| hadoop/yarn_clusters | resourcemanager_api_url | http://bigdatamaster:23188 | Matches yarn.resourcemanager.webapp.address in yarn-site.xml (I moved it to avoid a port clash with Spark; you can pick another port) |
| hadoop/yarn_clusters | proxy_api_url | http://bigdatamaster:8888 | Matches yarn.web-proxy.address in yarn-site.xml |
| hadoop/yarn_clusters | history_server_api_url | http://bigdatamaster:19888 | Matches mapreduce.jobhistory.webapp.address in mapred-site.xml |
| zookeeper | host_ports | bigdatamaster:2181,bigdataslave1:2181,bigdataslave2:2181 | ZooKeeper ensemble |
| beeswax | hive_server_host | bigdatamaster | Hostname/IP of the Hive node |
| beeswax | hive_server_port | 10000 | HiveServer2 port |
| beeswax | hive_conf_dir | /home/hadoop/app/hive/conf | Hive configuration directory |
Remember: my Hue is installed only on bigdatamaster (192.168.80.10)!
Edit the Hue configuration file (this is the critical step; be careful):
$HUE_HOME/desktop/conf/hue.ini
[hadoop@bigdatamaster conf]$ pwd
/home/hadoop/app/hue/desktop/conf
[hadoop@bigdatamaster conf]$ ll
total
-rw-r--r-- hadoop hadoop Apr hue.ini
-rw-r--r-- hadoop hadoop Apr log4j.properties
-rw-r--r-- hadoop hadoop Apr log.conf
[hadoop@bigdatamaster conf]$ vim hue.ini
http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.4/manual.html#_install_hue
The [desktop] section is configured as follows:
[desktop]
# Hue web server secret key
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o
# Hue web server address and port
http_host=192.168.80.10
http_port=8000
time_zone=Asia/Shanghai
# Webserver runs as this user
server_user=hue
server_group=hue
# This should be the Hue admin and proxy user
default_user=hue
# This should be the hadoop cluster admin
default_hdfs_superuser=hadoop
Note: you can also leave this section at its defaults.
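The secret_key above is just an example string. If you prefer to generate your own random key for hue.ini, here is a small sketch using only Python's standard library:

```python
import random
import string

def make_secret_key(length=60):
    """Generate a random alphanumeric string suitable for hue.ini's secret_key."""
    alphabet = string.ascii_letters + string.digits
    rng = random.SystemRandom()  # backed by os.urandom, suitable for secrets
    return "".join(rng.choice(alphabet) for _ in range(length))

print(make_secret_key())
```

Paste the printed string into the secret_key= line.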
http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.4/manual.html#_install_hue
The [hadoop] section (note: the official docs allow either WebHdfs or HttpFS; WebHdfs is the usual choice for a non-HA cluster, while an HA cluster additionally requires HttpFS). The post below gives the detailed configuration and the reasoning.
HUE configuration file hue.ini: the hdfs_clusters section explained (illustrated) (HA and non-HA)
Moving on: since this post is built on a non-HA three-node cluster (bigdatamaster, bigdataslave1, and bigdataslave2), I use WebHdfs.
Apply the same change on bigdataslave1 and bigdataslave2; I will not repeat it.
After modifying hdfs-site.xml on all three machines, modify core-site.xml as well.
Again, do the same on bigdataslave1 and bigdataslave2.
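For reference, the hdfs-site.xml and core-site.xml changes usually needed for Hue's WebHDFS access look like the sketch below (standard settings from the Hue/Hadoop docs; the proxy-user name hue must match the server_user you configured in hue.ini):

```xml
<!-- hdfs-site.xml: enable WebHDFS on the NameNode and DataNodes -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

<!-- core-site.xml: allow the hue user to impersonate other users -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
```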
The hadoop section:
[hadoop]
# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs
[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://bigdatamaster:9000
# NameNode logical name.
## logical_name=
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
webhdfs_url=http://bigdatamaster:50070/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True
# Directory of the Hadoop configuration
hadoop_conf_dir=/home/hadoop/app/hadoop/etc/hadoop/conf
Note that my fs_defaultfs=hdfs://bigdatamaster:9000; configure this for your own machines and keep your thinking straight, instead of blindly copying other people's blogs.
Online you will see values such as fs_defaultfs=hdfs://mycluster or fs_defaultfs=hdfs://master:8020 — those match other people's clusters.
In short, keep it identical to the fs.defaultFS property in your own core-site.xml.
Here, bigdatamaster is the hostname of the machine where I installed Hue, and 192.168.80.10 is its static IP.
To understand why it is configured this way, see the Hadoop documentation:
http://hadoop.apache.org/docs/r2.5.2/
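As a quick sanity check, you can extract fs.defaultFS from core-site.xml and compare it with the fs_defaultfs you set in hue.ini. A minimal sketch (the sample XML below is illustrative; in practice, read your real /home/hadoop/app/hadoop/etc/hadoop/conf/core-site.xml):

```python
import xml.etree.ElementTree as ET

def read_fs_defaultfs(core_site_xml):
    """Return the fs.defaultFS value from core-site.xml content (a string)."""
    root = ET.fromstring(core_site_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value")
    return None

# Minimal core-site.xml matching this post's cluster:
sample = """
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdatamaster:9000</value>
  </property>
</configuration>
"""
hue_ini_value = "hdfs://bigdatamaster:9000"  # fs_defaultfs from hue.ini
assert read_fs_defaultfs(sample) == hue_ini_value
print("fs_defaultfs matches core-site.xml")
```

In practice, pass open(".../core-site.xml").read() instead of the sample string.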
The [yarn_clusters] section:
[[yarn_clusters]]
[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=192.168.80.10
# The port where the ResourceManager IPC listens on
resourcemanager_port=8032
# Whether to submit jobs to this cluster
submit_to=True
# Resource Manager logical name (required for HA)
## logical_name=
# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false
# URL of the ResourceManager API
resourcemanager_api_url=http://192.168.80.10:8088
# URL of the ProxyServer API
proxy_api_url=http://192.168.80.10:8088
# URL of the HistoryServer API
history_server_api_url=http://192.168.80.10:19888
For a deeper dive, see my post:
HUE configuration file hue.ini: the yarn_clusters section explained (illustrated) (HA and non-HA)
The [zookeeper] section:
[zookeeper]
host_ports=bigdatamaster:2181,bigdataslave1:2181,bigdataslave2:2181
The [beeswax] (Hive) section:
[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=bigdatamaster
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/home/hadoop/app/hive/conf
My Hive is installed on bigdatamaster; be sure to configure these values for your own machines!
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>bigdatamaster</value>
</property>
At the same time, make sure the hive.server2.thrift.port and hive.server2.thrift.bind.host properties in hive-site.xml, shown above, match this section; my Hive is installed on bigdatamaster.
For more depth, see:
HUE configuration file hue.ini: the hive and beeswax sections explained (illustrated) (HA and non-HA)
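Before pointing Hue at HiveServer2, it is worth confirming that something is actually listening on the configured host and port. A small sketch using only the standard library (the demo connects to a throwaway local listener; against the real cluster you would call port_open("bigdatamaster", 10000)):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo against a throwaway local listener:
server = socket.socket()
server.bind(("127.0.0.1", 0))        # pick any free port
server.listen(1)
port = server.getsockname()[1]
print(port_open("127.0.0.1", port))  # True
server.close()
```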
The database section
HUE configuration file hue.ini: the database section explained (covering sqlite, mysql, psql, and oracle) (illustrated) (HA and non-HA)
###########################################################################
# Settings for the RDBMS application
###########################################################################
[librdbms]
# The RDBMS app can have any number of databases configured in the databases
# section. A database is known by its section name
# (IE sqlite, mysql, psql, and oracle in the list below).
[[databases]]
# sqlite configuration.
[[[sqlite]]]
# Name to show in the UI.
nice_name=SQLite
# For SQLite, name defines the path to the database.
name=/home/hadoop/app/hue/desktop/desktop.db
# Database backend to use.
engine=sqlite
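If you want to confirm that Hue's desktop.db is a readable SQLite database, you can list its tables with Python's built-in sqlite3 module. A minimal sketch (the demo uses a throwaway database file; for Hue you would pass the desktop.db path from the config above):

```python
import os
import sqlite3
import tempfile

def list_tables(db_path):
    """Return the table names in an SQLite database file."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [r[0] for r in rows]
    finally:
        conn.close()

# Demo on a throwaway database; for Hue, pass
# "/home/hadoop/app/hue/desktop/desktop.db" instead.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username TEXT)")
conn.commit()
conn.close()
print(list_tables(path))  # ['auth_user']
```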
hive> show databases;
OK
default
hive
Time taken: 0.074 seconds, Fetched: 2 row(s)
hive>
# mysql, oracle, or postgresql configuration.
[[[mysql]]]
# Name to show in the UI.
nice_name="My SQL DB"
# For MySQL and PostgreSQL, name is the name of the database.
# For Oracle, Name is instance of the Oracle server. For express edition
# this is 'xe' by default.
name=hive
# Database backend to use. This can be:
# 1. mysql
# 2. postgresql
# 3. oracle
engine=mysql
# IP or hostname of the database to connect to.
host=bigdatamaster
# Port the database server is listening to. Defaults are:
# 1. MySQL: 3306
# 2. PostgreSQL: 5432
# 3. Oracle Express Edition: 1521
port=3306
# Username to authenticate with when connecting to the database.
user=hive
# Password matching the username to authenticate with when
# connecting to the database.
password=hive
# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}
The pig section: see
HUE configuration file hue.ini: the pig section explained (illustrated) (HA and non-HA)
The zookeeper section: see
HUE configuration file hue.ini: the zookeeper section explained (illustrated) (HA and non-HA)
The spark section: see
HUE configuration file hue.ini: the Spark section explained (illustrated) (HA and non-HA)
The impala section: see
HUE configuration file hue.ini: the impala section explained (illustrated) (HA and non-HA)
The liboozie and oozie sections: see
HUE configuration file hue.ini: the liboozie and oozie sections explained (illustrated) (HA and non-HA)
The sqoop section: see
HUE configuration file hue.ini: the sqoop section explained (illustrated) (HA and non-HA)
The hbase section (I hit a small problem here)
1. Configure HBase
Hue reads HBase data over Thrift, and HBase's Thrift service is not started by default, so you must start it manually (typically with hbase-daemon.sh start thrift).
The Thrift service uses port 9090 by default; check whether the port is already in use with:
[hadoop@bigdatamaster conf]$ netstat -nl | grep 9090
[hadoop@bigdatamaster conf]$
It is best to keep the default port.
My HBase configuration contained a small mistake; see:
HUE configuration file hue.ini: the hbase section explained (illustrated) (HA and non-HA)
Starting Hue
In other words, every service referenced in your Hue configuration must already be running before you start Hue.
build/env/bin/supervisor
[hadoop@bigdatamaster hue]$ pwd
/home/hadoop/app/hue
[hadoop@bigdatamaster hue]$ ll
total
-rw-rw-r-- hadoop hadoop May : app.reg
drwxr-xr-x hadoop hadoop May : apps
drwxrwxr-x hadoop hadoop May : build
drwxr-xr-x hadoop hadoop Apr cloudera
drwxr-xr-x hadoop hadoop May : desktop
drwxr-xr-x hadoop hadoop Apr docs
drwxr-xr-x hadoop hadoop Apr ext
-rw-r--r-- hadoop hadoop Apr LICENSE.txt
drwxrwxr-x hadoop hadoop May : logs
-rw-r--r-- hadoop hadoop Apr Makefile
-rw-r--r-- hadoop hadoop Apr Makefile.sdk
-rw-r--r-- hadoop hadoop Apr Makefile.vars
-rw-r--r-- hadoop hadoop Apr Makefile.vars.priv
drwxr-xr-x hadoop hadoop Apr maven
-rw-r--r-- hadoop hadoop Apr NOTICE.txt
-rw-r--r-- hadoop hadoop Apr README
drwxr-xr-x hadoop hadoop Apr tools
-rw-r--r-- hadoop hadoop Apr VERSION
[hadoop@bigdatamaster hue]$ build/env/bin/supervisor
[INFO] Not running as root, skipping privilege drop
starting server with options:
{'daemonize': False,
'host': '192.168.80.10',
'pidfile': None,
'port': 8888,
'server_group': 'hue',
'server_name': 'localhost',
'server_user': 'hue',
'ssl_certificate': None,
'ssl_certificate_chain': None,
'ssl_cipher_list': 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA',
'ssl_private_key': None,
'threads': ,
'workdir': None}
/home/hadoop/app/hue-3.9.-cdh5.5.4/build/env/lib/python2./site-packages/django_axes-1.4.-py2..egg/axes/decorators.py:: DeprecationWarning: The use of AUTH_PROFILE_MODULE to define user profiles has been deprecated.
profile = user.get_profile()
Then open http://bigdatamaster:8888 in a browser.
I will not elaborate further here.
If you run into problems after starting Hue, see my post:
A summary of fixes for common issues after installing Hue (recommended)
You can also follow my personal blogs:
http://www.cnblogs.com/zlslch/ , http://www.cnblogs.com/lchzls/ , and http://www.cnblogs.com/sunnyDream/
For details, see: http://www.cnblogs.com/zlslch/p/7473861.html