1. Hadoop memory tuning
Memory settings in mapred-site.xml
<property>
<name>mapreduce.map.memory.mb</name>
<value></value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx1024M</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value></value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx2560M</value>
</property>
yarn-site.xml
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value></value>
<description>Memory available to containers on each node, in MB</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value></value>
<description>Minimum memory a single task can request; default 1024 MB</description>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value></value>
<description>Maximum memory a single task can request; default 8192 MB</description>
</property>
Hadoop daemon heap settings in hadoop-env.sh
export HADOOP_HEAPSIZE_MAX=
export HADOOP_HEAPSIZE_MIN=
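The blank values above relate to one another: each -Xmx must fit inside the matching *.memory.mb container (a common rule of thumb is heap ≈ 80% of the container), YARN rounds each request up to a multiple of the scheduler minimum, and minimum ≤ request ≤ maximum ≤ node total. A sketch of the arithmetic with hypothetical sizes, not this cluster's real values:

```shell
# Hypothetical sizes in MB — illustrations only, not this cluster's settings.
NODE_MB=16384     # yarn.nodemanager.resource.memory-mb
MIN_MB=1024       # yarn.scheduler.minimum-allocation-mb
MAP_MB=1280       # mapreduce.map.memory.mb: the container holding the map JVM
# Rule of thumb: JVM heap ~80% of the container, leaving headroom for JVM
# overhead so YARN does not kill the task for exceeding its container.
echo "suggested map -Xmx: $(( MAP_MB * 8 / 10 ))M"
# YARN rounds each request up to a multiple of the minimum allocation:
GRANT_MB=$(( (MAP_MB + MIN_MB - 1) / MIN_MB * MIN_MB ))
echo "granted per map container: ${GRANT_MB} MB"
echo "max concurrent map containers per node: $(( NODE_MB / GRANT_MB ))"
```

With these sample numbers the suggested -Xmx comes out to 1024M, matching the -Xmx1024M set for map tasks above.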
2. HBase parameter tuning
HBase heap settings in hbase-env.sh
export HBASE_HEAPSIZE=8G
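For reference, HBASE_HEAPSIZE historically took a plain number of megabytes; unit suffixes such as 8G are accepted in recent HBase versions. The two forms below are equivalent:

```shell
# hbase-env.sh — equivalent forms (unit suffix vs. the older plain-MB style):
# export HBASE_HEAPSIZE=8G
# export HBASE_HEAPSIZE=8192
```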
3. Data export and import
Exporting HBase data
hbase org.apache.hadoop.hbase.mapreduce.Export NS1.GROUPCHAT /do1/GROUPCHAT
hdfs dfs -get /do1/GROUPCHAT /opt/GROUPCHAT
Then try deleting the data:
hdfs dfs -rm -r /do1/GROUPCHAT
Importing HBase data
hdfs dfs -put /opt/GROUPCHAT /do1/GROUPCHAT
hdfs dfs -ls /do1/GROUPCHAT
hbase org.apache.hadoop.hbase.mapreduce.Import NS1.GROUPCHAT /do1/GROUPCHAT
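For reference, the Export job also accepts optional trailing arguments limiting the number of cell versions and the timestamp range. The sketch below reuses the table and path from above; the version count is an illustration:

```shell
# Export usage: Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
# e.g. export only the newest version of each cell:
#   hbase org.apache.hadoop.hbase.mapreduce.Export NS1.GROUPCHAT /do1/GROUPCHAT 1
```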
4. Two masters: add this configuration on every node
backup-masters
[root@do1cloud01 conf]# pwd
/do1cloud/hbase-2.0./conf
[root@do1cloud01 conf]# cat backup-masters
do1cloud02
5. The web consoles show whether the parameter settings took effect
10.0.0.99: HBase web console
10.0.0.99: Hadoop web console
10.0.0.99: cluster web console
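The ports are omitted above; the usual defaults are listed below, but they are version-dependent assumptions, so verify them against your install:

```shell
# Default web UI ports — assumptions, not taken from this cluster's config:
#   HBase Master UI:       http://10.0.0.99:16010
#   HDFS NameNode UI:      http://10.0.0.99:9870    # 50070 on Hadoop 2.x
#   YARN ResourceManager:  http://10.0.0.99:8088
```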
6. regionservers
[root@do1cloud01 conf]# cat regionservers
do1cloud02
do1cloud03
do1cloud04
do1cloud05
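Since files like regionservers and backup-masters are read from the local conf directory, the conf directory should be kept identical across nodes. A sketch of driving a sync from the regionservers list — the file is written to /tmp here for illustration, and `echo` is a stand-in for the real scp/ssh command (which assumes passwordless SSH):

```shell
# Iterate the hosts in a regionservers-style file and act on each entry.
# In real use, read $HBASE_HOME/conf/regionservers and replace `echo` with
# e.g.: scp -r "$HBASE_HOME/conf" "root@$host:$HBASE_HOME/"
printf 'do1cloud02\ndo1cloud03\ndo1cloud04\ndo1cloud05\n' > /tmp/regionservers
hosts=""
while read -r host; do
  hosts="$hosts $host"     # stand-in for the scp/ssh against $host
done < /tmp/regionservers
echo "would sync to:$hosts"
```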