Installing Hadoop 2.6.0 in pseudo-distributed mode on CentOS 6.5

Install the JDK

 yum install java-1.7.0-openjdk*
Check the installation with: java -version

Create a hadoop user, and set it up so that it can ssh to localhost without a password.
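The user-creation step itself isn't shown below, so here is a minimal sketch (run as root; the user name hadoop is what the rest of this post assumes):

useradd hadoop
passwd hadoop    # set a login password for the new user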

 su - hadoop
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
cd /home/hadoop/.ssh
chmod 700 ~/.ssh
chmod 600 authorized_keys

Mind the permissions here: the .ssh directory must be 700 and authorized_keys must be 600, otherwise sshd will refuse to use the key.
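A quick way to check (expect drwx------ for the directory and -rw------- for the file):

ls -ld ~/.ssh ~/.ssh/authorized_keys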

Verify (it should log you in without asking for a password):

 [hadoop@localhost .ssh]$ ssh localhost
Last login: Sun Nov ...

Unpack Hadoop and install it under /opt/hadoop.
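If the tarball hasn't been downloaded yet, it can be fetched from the Apache archive first (URL assumed; any mirror also works):

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz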

 tar -xzvf hadoop-2.6.0.tar.gz
mv -i /home/erik/hadoop-2.6.0 /opt/hadoop
chown -R hadoop /opt/hadoop

The files to modify are hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml.

 cd /opt/hadoop/etc/hadoop

Set the Java path in hadoop-env.sh; leaving it as the inherited ${JAVA_HOME} doesn't seem to take effect, so hard-code it:

 export JAVA_HOME={your Java installation path}
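For example, with the OpenJDK path that /etc/profile uses further below:

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64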

core-site.xml

 <configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
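The hadoop.tmp.dir directory is best created up front and owned by the hadoop user so the daemons don't trip over permissions (a precaution; Hadoop can often create it itself):

mkdir -p /opt/hadoop/tmp
chown -R hadoop /opt/hadoop/tmp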

hdfs-site.xml

 <configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
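Same precaution for the name and data directories:

mkdir -p /opt/hadoop/dfs/name /opt/hadoop/dfs/data
chown -R hadoop /opt/hadoop/dfs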

yarn-site.xml

mapreduce.framework.name is a MapReduce setting that is only read from mapred-site.xml (see below); yarn-site.xml itself only needs the shuffle auxiliary service:

 <configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

mapred-site.xml
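Hadoop 2.6.0 ships this file only as a template, so copy it first:

cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml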

Then select YARN as the MapReduce framework (the old MRv1 mapred.job.tracker setting has no effect under YARN):

 <configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Configure the environment variables by editing /etc/profile; appending them at the end is fine. They only take effect in a new login shell, so run source /etc/profile (or log out and back in) afterwards!

 export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_INSTALL=/opt/hadoop
export PATH=${HADOOP_INSTALL}/bin:${HADOOP_INSTALL}/sbin:${PATH}
export HADOOP_MAPRED_HOME=${HADOOP_INSTALL}
export HADOOP_COMMON_HOME=${HADOOP_INSTALL}
export HADOOP_HDFS_HOME=${HADOOP_INSTALL}
export YARN_HOME=${HADOOP_INSTALL}
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_INSTALL}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_INSTALL}/lib:${HADOOP_INSTALL}/lib/native"
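To confirm everything is on the PATH and pointing at the right places:

source /etc/profile
java -version       # the OpenJDK 1.7 installed earlier
hadoop version      # should report Hadoop 2.6.0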

Now for the moment of truth.

 cd /opt/hadoop/

Format HDFS:

 bin/hdfs namenode -format 
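A quick sanity check that the format worked, using the dfs.namenode.name.dir path configured above:

ls /opt/hadoop/dfs/name/current    # should now contain fsimage_* and VERSION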

Start HDFS and YARN:

 sbin/start-dfs.sh
sbin/start-yarn.sh

In theory you'll see something like:

 Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-<hostname>.out
localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-<hostname>.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-<hostname>.out
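jps (which ships with the JDK) should now list all five daemons; if one is missing, check its log under /opt/hadoop/logs:

jps    # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager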

Open 127.0.0.1:50070 in a browser and you should see the Hadoop web UI; that means it worked. (The YARN ResourceManager UI is on port 8088.)

References:

http://www.centoscn.com/hadoop/2015/0118/4525.html

http://blog.csdn.net/yinan9/article/details/16805275

http://www.aboutyun.com/thread-10554-1-1.html
