I. Preparation
- Hadoop 3.1.3 installed, with environment variables configured
- HBase 2.3.0 installed, with environment variables configured
- JDK 1.8 installed, with environment variables configured
- Set the hostname (hostnamectl set-hostname **). For this installation the hostname is node02.example.com, and the user that starts Hadoop and HBase is ambari
- Configure SSH, which is needed when starting Hadoop. As the ambari user, generate a key pair (ssh-keygen -t rsa -C "ambari.rsa") and distribute the public key to this host (ssh-copy-id -i ~/.ssh/id_rsa.pub node02.example.com)
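The SSH steps above, collected into one sketch (run as the ambari user); the final login should succeed without a password prompt:

```shell
# generate an RSA key pair for the ambari user (accept the defaults)
ssh-keygen -t rsa -C "ambari.rsa"
# install the public key into this host's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub node02.example.com
# verify: this should run without asking for a password
ssh node02.example.com hostname
```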
II. Configure and start Hadoop
1. Configure hadoop-env.sh
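A minimal hadoop-env.sh sketch. The JDK path is an assumption (point it at your JDK 1.8 install); the HDFS_*_USER variables are what Hadoop 3's start-dfs.sh checks when deciding which user may launch each daemon:

```shell
# JAVA_HOME path is an assumption — adjust to your JDK 1.8 install location
export JAVA_HOME=/usr/java/jdk1.8.0
# run the HDFS daemons as the ambari user (required by Hadoop 3 start scripts)
export HDFS_NAMENODE_USER=ambari
export HDFS_DATANODE_USER=ambari
export HDFS_SECONDARYNAMENODE_USER=ambari
```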
2. Configure core-site.xml
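A minimal core-site.xml sketch. fs.defaultFS must match the HDFS URI that hbase.rootdir points at later (hdfs://node02.example.com:9000); the hadoop.tmp.dir path is an assumption, so adjust it to your own layout:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node02.example.com:9000</value>
  </property>
  <property>
    <!-- assumed local path; change to wherever you keep Hadoop data -->
    <name>hadoop.tmp.dir</name>
    <value>/ambari/bigdata/hadoop/tmp</value>
  </property>
</configuration>
```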
3. Configure hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/ambari/bigdata/hadoop/local/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/ambari/bigdata/hadoop/local/dfs/data</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/ambari/bigdata/hadoop/local/dfs/namesecondary</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node02.example.com:50090</value>
</property>
4. Configure workers
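For this single-node setup, $HADOOP_HOME/etc/hadoop/workers only needs to list the host itself:

```text
node02.example.com
```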
5. Format the NameNode
Go to the $HADOOP_HOME/bin directory
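The formatting step, run once as the ambari user (re-running it wipes existing HDFS metadata):

```shell
cd $HADOOP_HOME/bin
./hdfs namenode -format
```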
6. Start HDFS (as a non-root user)
cd $HADOOP_HOME/sbin
./start-dfs.sh
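If startup succeeded, running jps as the ambari user should list the three HDFS daemons (PIDs will differ):

```shell
jps
# expected processes: NameNode, DataNode, SecondaryNameNode (plus Jps itself)
```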
7. Check the Hadoop web UI
http://IP:9870
III. Configure and start HBase
1. Configure hbase-env.sh
* Note: here I use HBase's bundled ZooKeeper
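A minimal hbase-env.sh sketch. The JDK path is an assumption; HBASE_MANAGES_ZK=true tells HBase to run its bundled ZooKeeper, matching the note above:

```shell
# JDK path is an assumption — point it at your JDK 1.8 install
export JAVA_HOME=/usr/java/jdk1.8.0
# let HBase start and stop its own (bundled) ZooKeeper
export HBASE_MANAGES_ZK=true
```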
2. Configure hbase-site.xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://node02.example.com:9000/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/ambari/zookeeper/data</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/ambari/hbase-2.3.0/tmp</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node02.example.com</value>
</property>
3. Start HBase
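Start HBase as the same ambari user, after HDFS is up; jps should then additionally show HMaster, HRegionServer and HQuorumPeer (the bundled ZooKeeper):

```shell
cd $HBASE_HOME/bin
./start-hbase.sh
```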
4. Check the HBase web UI
http://IP:16010