I. Prerequisites
1. Operating system
(1) Linux can be used both as a development platform and as a production platform.
(2) Win32 can only be used as a development platform, and requires Cygwin.
2. Install JDK 1.6 or later.
3. Install SSH and configure passwordless login:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Notes:
(1) If the .ssh directory does not exist, create it first.
(2) .ssh/ should be mode 700 and authorized_keys mode 600; permissions looser than this will make sshd ignore the key (see the sketch after this list).
4. For a first-time installation, using the root user is recommended to avoid permission problems.
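For the permission notes above, a minimal sketch, assuming the key pair sits in the default ~/.ssh location:
$ mkdir -p ~/.ssh && chmod 700 ~/.ssh       # create the directory if it is missing
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost exit                        # should log in and return without asking for a password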
II. Basic setup
1. Download Hadoop 1.2.1 and unpack it
[root@jediael jediael]$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
[root@jediael jediael]$ tar -zxvf hadoop-1.2.1.tar.gz
A mirror inside China is chosen here, which is noticeably faster.
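The rest of the commands in this post are run from inside the extracted directory; the startup logs further down show it sitting under /opt/jediael, which is just this machine's layout, so adjust the path to your own:
$ mv hadoop-1.2.1 /opt/jediael/             # optional, only to mirror the layout used below
$ cd /opt/jediael/hadoop-1.2.1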
2. Edit conf/hadoop-env.sh and set the JAVA_HOME variable
(1) Add JAVA_HOME
[root@jediael hadoop-1.2.1]$ vi conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_51
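If you are unsure which path to use here, resolving the java binary usually reveals it; the path below is only an example:
$ readlink -f "$(which java)"
/usr/java/jdk1.7.0_51/bin/java              # JAVA_HOME is the directory two levels above the binary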
(2) Run the hadoop command
[root@jediael hadoop-1.2.1]$ bin/hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
datanode run a DFS datanode
dfsadmin run a DFS admin client
mradmin run a Map-Reduce admin client
fsck run a DFS filesystem checking utility
fs run a generic filesystem user client
Output like the above indicates the installation is working.
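As an extra quick check, the version subcommand should report the release that was just unpacked:
$ bin/hadoop version
Hadoop 1.2.1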
III. Configure pseudo-distributed mode
1. Edit conf/core-site.xml and add the following property:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
2. Edit conf/hdfs-site.xml and add the following property:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
3. Edit conf/mapred-site.xml and add the following property:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
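A quick sanity check that the three properties ended up in the files Hadoop will read (run from the hadoop-1.2.1 directory); each command should print the <name> line followed by its <value> line:
$ grep -A1 fs.default.name conf/core-site.xml
$ grep -A1 dfs.replication conf/hdfs-site.xml
$ grep -A1 mapred.job.tracker conf/mapred-site.xml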
IV. Start Hadoop
1. Format HDFS
[root@jediael hadoop-1.2.1]$ bin/hadoop namenode -format
14/08/16 23:50:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = jediael/10.171.29.191
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_51
************************************************************/
14/08/16 23:50:02 INFO util.GSet: Computing capacity for map BlocksMap
14/08/16 23:50:02 INFO util.GSet: VM type = 64-bit
14/08/16 23:50:02 INFO util.GSet: 2.0% max memory = 1013645312
14/08/16 23:50:02 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/08/16 23:50:02 INFO util.GSet: recommended=2097152, actual=2097152
14/08/16 23:50:02 INFO namenode.FSNamesystem: fsOwner=jediael
14/08/16 23:50:02 INFO namenode.FSNamesystem: supergroup=supergroup
14/08/16 23:50:02 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/08/16 23:50:02 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/08/16 23:50:02 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/08/16 23:50:02 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/08/16 23:50:02 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/16 23:50:03 INFO common.Storage: Image file /tmp/hadoop-jediael/dfs/name/current/fsimage of size 113 bytes saved in 0 seconds.
14/08/16 23:50:03 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-jediael/dfs/name/current/edits
14/08/16 23:50:03 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-jediael/dfs/name/current/edits
14/08/16 23:50:03 INFO common.Storage: Storage directory /tmp/hadoop-jediael/dfs/name has been successfully formatted.
14/08/16 23:50:03 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at jediael/10.171.29.191
************************************************************/
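The log above shows the image being written under /tmp/hadoop-jediael/dfs/name; listing that directory confirms the format succeeded (the user name in the path will differ on your machine):
$ ls /tmp/hadoop-*/dfs/name/current         # expect to see fsimage and edits among the files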
2. Start the daemons
[root@jediael hadoop-1.2.1]# bin/start-all.sh
starting namenode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-jediael.out
localhost: starting datanode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-jediael.out
localhost: starting secondarynamenode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-jediael.out
starting jobtracker, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-jediael.out
localhost: starting tasktracker, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-jediael.out
By default the logs are written to ${HADOOP_HOME}/logs, unless ${HADOOP_LOG_DIR} has been changed.
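Before checking the web pages, dfsadmin can confirm that the DataNode has registered with the NameNode (it may take a few seconds after start-all.sh):
$ bin/hadoop dfsadmin -report               # look for one live datanode in the report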
3. Visit the following two pages to verify that the installation succeeded:
- NameNode - http://localhost:50070/
- JobTracker - http://localhost:50030/
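On a server without a browser, the same check can be done from the shell; an HTTP 200 from each port means the corresponding web UI is answering:
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50030/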
4. Use jps to check that all the daemons are running
[root@jediael hadoop-1.2.1]# jps
3148 JobTracker
3280 TaskTracker
3052 SecondaryNameNode
2920 DataNode
2801 NameNode
3442 Jps
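When the cluster is no longer needed, the matching stop script shuts all five daemons down:
$ bin/stop-all.sh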
V. Verify the environment with a simple Hadoop job
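A convenient way to exercise both HDFS and MapReduce at once is the example jar that ships in the release directory, run roughly as in the Apache single-node guide (the output directory must not already exist):
$ bin/hadoop fs -put conf input                                            # copy the conf directory into HDFS as job input
$ bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'  # run the bundled grep example
$ bin/hadoop fs -cat output/*                                              # print the matched strings and their counts
If the job completes and the last command prints a few dfs.* entries, the pseudo-distributed environment is working end to end.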