Study notes on the book《Hadoop+Spark大数据巨量分析与机器学习整合开发》(Hadoop + Spark: Big Data Analysis and Integrated Machine Learning Development).
Chapter 5: Hadoop Multi Node Cluster
Simulating a multi-node cluster on Windows using VirtualBox virtual machines.
5.2-5.3 Configure the VirtualBox network adapters and set up the data1 server
1. Configure the network adapters
Adapter 1: NAT (Network Address Translation)
Adapter 2: Host-Only Adapter
2. Edit the network configuration file to set a static IP
sudo gedit /etc/network/interfaces
# NAT interface
auto eth0
iface eth0 inet dhcp
# host only interface
auto eth1
iface eth1 inet static
address 192.168.56.101
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
3. Set the hostname
sudo gedit /etc/hostname
data1
4. Edit the hosts file
sudo gedit /etc/hosts
192.168.56.100 master
192.168.56.101 data1
192.168.56.102 data2
192.168.56.103 data3
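All four entries share one addressing pattern on the host-only network. A small shell sketch that prints the same mapping, just to make the scheme explicit (192.168.56.100 is master, .101-.103 are data1-data3):

```shell
# Print the cluster's IP/hostname plan -- the same lines that go into /etc/hosts.
for i in 0 1 2 3; do
  if [ "$i" -eq 0 ]; then name=master; else name="data$i"; fi
  echo "192.168.56.10$i $name"
done
```

Running it prints the four hosts-file entries shown above.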
5. Edit core-site.xml
sudo gedit /usr/local/hadoop/etc/hadoop/core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
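A note on the property name: fs.default.name is the Hadoop 1.x spelling and is deprecated on Hadoop 2.x in favor of fs.defaultFS; both point clients at the NameNode. The newer spelling, with the same value:

```xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
```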
6. Edit yarn-site.xml
sudo gedit /usr/local/hadoop/etc/hadoop/yarn-site.xml
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
7. Edit mapred-site.xml
sudo gedit /usr/local/hadoop/etc/hadoop/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>master:54311</value>
</property>
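mapred.job.tracker is the old MRv1 JobTracker property. If this cluster runs MapReduce on YARN (as the yarn-site.xml settings above suggest), Hadoop 2.x normally selects the execution framework with mapreduce.framework.name instead; a sketch of that alternative:

```xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
```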
8. Edit hdfs-site.xml
sudo gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
</property>
9. Reboot so the new network and hostname settings take effect
10. Run ifconfig to verify the network settings (the eth0 and eth1 addresses)
5.4 Clone the data1 server to create data2, data3, and master (when cloning in VirtualBox, reinitialize the MAC addresses so each clone gets its own network interfaces)
5.5 Set up the data2 and data3 servers
1. Set a static IP address for data2
sudo gedit /etc/network/interfaces
change the address line to 192.168.56.102
2. Set the hostname
sudo gedit /etc/hostname
data2
3. Set a static IP address for data3
sudo gedit /etc/network/interfaces
change the address line to 192.168.56.103
4. Set the hostname
sudo gedit /etc/hostname
data3
5.6 Set up the master server
1. Set a static IP address for master
sudo gedit /etc/network/interfaces
change the address line to 192.168.56.100
2. Set the hostname
sudo gedit /etc/hostname
master
3. Edit hdfs-site.xml (master runs the NameNode, so replace the dfs.datanode.data.dir property cloned from data1 with dfs.namenode.name.dir)
sudo gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>
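The local directories named in dfs.namenode.name.dir (and dfs.datanode.data.dir on the data servers) must exist and be owned by the Hadoop user before HDFS is formatted. A sketch, assuming the Hadoop user and group are hduser:hadoop (adjust to your installation):

```shell
# On master: create the NameNode storage directory from hdfs-site.xml.
sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode
# On data1-data3: create the DataNode storage directory instead.
# sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
# Hand the whole tree to the Hadoop user (hduser:hadoop is an assumption).
sudo chown -R hduser:hadoop /usr/local/hadoop/hadoop_data
```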
4. Edit the masters file
sudo gedit /usr/local/hadoop/etc/hadoop/masters
master
5. Edit the slaves file (the hosts listed here run the DataNode daemons)
sudo gedit /usr/local/hadoop/etc/hadoop/slaves
data1
data2
data3
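With all four servers configured, the cluster can be formatted and started from master (the book covers this in the sections that follow); a brief sketch, run as the Hadoop user:

```shell
# One-time only: format HDFS on the NameNode (erases any existing HDFS data).
hadoop namenode -format
# Start HDFS (NameNode on master, DataNodes on data1-data3), then YARN.
start-dfs.sh
start-yarn.sh
# jps on each machine lists the running Hadoop daemons, to verify startup.
jps
```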