Hadoop Cluster - A Summary of Common HDFS Commands for Big Data Operations


Author: Yin Zhengjie

Copyright notice: this is an original work; reproduction is not permitted, and violations will be pursued legally.

  This post briefly touches on operations tasks such as rolling the edit log, merging fsimage files, and directory space quotas. Without further ado, here are the commands, kept handy for future reference.

 

I. Viewing the hdfs help information

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs
Usage: hdfs [--config confdir] COMMAND
where COMMAND is one of:
dfs run a filesystem command on the file systems supported in Hadoop.
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
journalnode run the DFS journalnode
zkfc run the ZK Failover Controller daemon
datanode run a DFS datanode
dfsadmin run a DFS admin client
diskbalancer Distributes data evenly among disks on a given node
haadmin run a DFS HA admin client
fsck run a DFS filesystem checking utility
balancer run a cluster balancing utility
jmxget get JMX exported values from NameNode or DataNode.
mover run a utility to move block replicas across
storage types
oiv apply the offline fsimage viewer to an fsimage
oiv_legacy apply the offline fsimage viewer to an legacy fsimage
oev apply the offline edits viewer to an edits file
fetchdt fetch a delegation token from the NameNode
getconf get config values from configuration
groups get the groups which users belong to
snapshotDiff diff two snapshots of a directory or diff the
current directory contents with a snapshot
lsSnapshottableDir list all snapshottable dirs owned by the current user
Use -help to see options
portmap run a portmap service
nfs3 run an NFS version 3 gateway
cacheadmin configure the HDFS cache
crypto configure HDFS encryption zones
storagepolicies list/get/set block storage policies
version print the version

Most commands print help when invoked w/o parameters.
[hdfs@node101.yinzhengjie.org.cn ~]$

  As shown above, hdfs offers quite a few subcommands. If you are new to this, start with dfs: it runs filesystem commands against the file systems Hadoop supports, and those commands look almost identical to the Linux commands we already know. Let's take a look at how to use them.
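For instance, the everyday Linux file commands map almost one-to-one onto dfs subcommands (a quick sketch; the paths are made up for illustration):

ls /tmp                  --->  hdfs dfs -ls /tmp
mkdir -p /tmp/demo       --->  hdfs dfs -mkdir -p /tmp/demo
cat /tmp/demo/a.txt      --->  hdfs dfs -cat /tmp/demo/a.txt
rm -r /tmp/demo          --->  hdfs dfs -rm -r /tmp/demo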

II. Examples of combining hdfs with dfs

  In fact, running hdfs dfs ultimately invokes the hadoop fs command. If you don't believe it, look at the help output for yourself:

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-x] <path> ...]
[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-x] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-usage [cmd ...]]

Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
[hdfs@node101.yinzhengjie.org.cn ~]$

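Note that the usage header literally says "hadoop fs": both entry points run the same FsShell, so the two invocations below are interchangeable when working with HDFS:

hdfs dfs -ls /
hadoop fs -ls /     #same result; hadoop fs also works against any other filesystem Hadoop supports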

1>. Viewing help for an hdfs dfs subcommand

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -help ls
-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...] :
List the contents that match the specified file pattern. If path is not
specified, the contents of /user/<currentUser> will be listed. For a directory a
list of its direct children is returned (unless -d option is specified).

Directory entries are of the form:
permissions - userId groupId sizeOfDirectory(in bytes)
modificationDate(yyyy-MM-dd HH:mm) directoryName

and file entries are of the form:
permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
modificationDate(yyyy-MM-dd HH:mm) fileName

-C Display the paths of files and directories only.
-d Directories are listed as plain files.
-h Formats the sizes of files in a human-readable fashion
rather than a number of bytes.
-q Print ? instead of non-printable characters.
-R Recursively list the contents of directories.
-t Sort files by modification time (most recent first).
-S Sort files by size.
-r Reverse the order of the sort.
-u Use time of last access instead of modification for
display and sorting.
[hdfs@node101.yinzhengjie.org.cn ~]$


2>. Listing files that already exist in the HDFS filesystem

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /
Found 5 items
drwxr-xr-x - hbase hbase -- : /hbase
drwxr-xr-x - hdfs supergroup -- : /jobtracker
drwxr-xr-x - hdfs supergroup -- : /system
drwxrwxrwt - hdfs supergroup -- : /tmp
drwxrwxrwx - hdfs supergroup -- : /user
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$


3>. Creating a file in the HDFS filesystem

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 1 items
drwxr-xr-x - root supergroup -- : /user/yinzhengjie/data/day001
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -touchz /user/yinzhengjie/data/1.txt
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 2 items
-rw-r--r-- hdfs supergroup -- : /user/yinzhengjie/data/1.txt
drwxr-xr-x - root supergroup -- : /user/yinzhengjie/data/day001
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$


4>. Uploading a file to the root directory (during the upload, a temporary file with a "._COPYING_" suffix is created)

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r-- yinzhengjie supergroup -- : /1.txt
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -put hadoop-2.7.3.tar.gz /
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- yinzhengjie supergroup -- : /1.txt
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$

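If you list the destination directory while a large put is still in flight, you can catch that temporary file yourself (a sketch; run the upload in the background and list immediately):

hdfs dfs -put hadoop-2.7.3.tar.gz / &     #start the upload in the background
hdfs dfs -ls /                            #while it runs, the partial file appears as /hadoop-2.7.3.tar.gz._COPYING_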

5>. Downloading a file from the HDFS filesystem

[yinzhengjie@s101 ~]$ ll
total
drwxrwxr-x. yinzhengjie yinzhengjie May : hadoop
drwxrwxr-x. yinzhengjie yinzhengjie May : shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- yinzhengjie supergroup -- : /1.txt
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -get /1.txt
[yinzhengjie@s101 ~]$ ll
total
-rw-r--r--. yinzhengjie yinzhengjie May : 1.txt
drwxrwxr-x. yinzhengjie yinzhengjie May : hadoop
drwxrwxr-x. yinzhengjie yinzhengjie May : shell
[yinzhengjie@s101 ~]$


6>. Deleting a file in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- yinzhengjie supergroup -- : /1.txt
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -rm /1.txt
// :: INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = minutes, Emptier interval = minutes.
Deleted /1.txt
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$


7>. Viewing file contents in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -cat /xrsync.sh
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed an argument
if [ $# -lt 1 ];then
        echo "Please pass in a parameter"
        exit
fi

#Get the file path
file=$@

#Get the file name
filename=`basename $file`

#Get the parent directory
dirpath=`dirname $file`

#Get the absolute path
cd $dirpath
fullpath=`pwd -P`

#Sync the file to the DataNodes
for (( i=102;i<=104;i++ ))
do
        #Turn the terminal text green
        tput setaf 2
        echo =========== s$i $file ===========
        #Restore the terminal to its original color (light gray)
        tput setaf 7
        #Run the command against the remote host
        rsync -lr $filename `whoami`@s$i:$fullpath
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$


8>. Creating a directory in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mkdir /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
drwxr-xr-x - yinzhengjie supergroup -- : /shell
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$


9>. Renaming a file in the HDFS filesystem (you can also use this to move a file into a directory)

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
drwxr-xr-x - yinzhengjie supergroup -- : /shell
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mv /xcall.sh /call.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- yinzhengjie supergroup -- : /call.sh
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
drwxr-xr-x - yinzhengjie supergroup -- : /shell
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$


[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r-- yinzhengjie supergroup -- : /call.sh
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
drwxr-xr-x - yinzhengjie supergroup -- : /shell
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -mv /call.sh /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
drwxr-xr-x - yinzhengjie supergroup -- : /shell
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 1 items
-rw-r--r-- yinzhengjie supergroup -- : /shell/call.sh
[yinzhengjie@s101 ~]$


10>. Copying a file into a directory in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 1 items
-rw-r--r-- yinzhengjie supergroup -- : /shell/call.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
drwxr-xr-x - yinzhengjie supergroup -- : /shell
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -cp /xrsync.sh /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 2 items
-rw-r--r-- yinzhengjie supergroup -- : /shell/call.sh
-rw-r--r-- yinzhengjie supergroup -- : /shell/xrsync.sh
[yinzhengjie@s101 ~]$


11>. Recursively deleting a directory

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
drwxr-xr-x - yinzhengjie supergroup -- : /shell
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -rmr /shell
rmr: DEPRECATED: Please use 'rm -r' instead.
// :: INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = minutes, Emptier interval = minutes.
Deleted /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xrsync.sh
[yinzhengjie@s101 ~]$

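As the INFO line above shows, deletions first land in the trash when fs.trash.interval is enabled, so the space is not reclaimed immediately. If you are sure the data is no longer needed, the -skipTrash flag (listed in the -rm usage earlier) bypasses the trash; a sketch, to be used with care:

hdfs dfs -rm -r -skipTrash /shell     #deletes immediately, with no chance of recovery from the trash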

12>. Listing local files (the default is the HDFS filesystem)

[yinzhengjie@s101 ~]$ hdfs dfs -ls file:///home/yinzhengjie/
Found 9 items
-rw------- yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/.bash_history
-rw-r--r-- yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/.bash_logout
-rw-r--r-- yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/.bash_profile
-rw-r--r-- yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/.bashrc
drwxrwxr-x - yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/.oracle_jre_usage
drwx------ - yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/.ssh
-rw-r--r-- yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/1.txt
drwxrwxr-x - yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/hadoop
drwxrwxr-x - yinzhengjie yinzhengjie -- : file:///home/yinzhengjie/shell
[yinzhengjie@s101 ~]$


[yinzhengjie@s101 ~]$ hdfs dfs -ls hdfs:/
Found 2 items
-rw-r--r-- yinzhengjie supergroup -- : hdfs:///hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : hdfs:///xrsync.sh
[yinzhengjie@s101 ~]$


13>. Appending content to a file in the HDFS filesystem

[yinzhengjie@s101 ~]$ ll
total
drwxrwxr-x. yinzhengjie yinzhengjie May : hadoop
drwxr-xr-x. yinzhengjie yinzhengjie Aug hadoop-2.7.3
-rw-rw-r--. yinzhengjie yinzhengjie Aug hadoop-2.7.3.tar.gz
-rw-rw-r--. yinzhengjie yinzhengjie May jdk-8u131-linux-x64.tar.gz
-rwxrwxr-x. yinzhengjie yinzhengjie May : xcall.sh
-rwxrwxr-x. yinzhengjie yinzhengjie May : xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -appendToFile xrsync.sh /xcall.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r-- yinzhengjie supergroup -- : /hadoop-2.7.3.tar.gz
-rw-r--r-- yinzhengjie supergroup -- : /xcall.sh
[yinzhengjie@s101 ~]$


14>. Formatting the NameNode

[root@yinzhengjie ~]# hdfs namenode
// :: INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = yinzhengjie/211.98.71.195
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/shar
e/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jet
ty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/h
adoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_131
************************************************************/
// :: INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
// :: INFO namenode.NameNode: createNameNode []
// :: INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
// :: INFO impl.MetricsSystemImpl: Scheduled snapshot period at second(s).
// :: INFO impl.MetricsSystemImpl: NameNode metrics system started
// :: INFO namenode.NameNode: fs.defaultFS is hdfs://localhost/
// :: INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
// :: INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
// :: INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
// :: INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
// :: INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
// :: INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
// :: INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
// :: INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
// :: INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
// :: INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
18/05/27 17:23:57 INFO http.HttpServer2: Jetty bound to port 50070
18/05/27 17:23:57 INFO mortbay.log: jetty-6.1.26
18/05/27 17:23:58 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
18/05/27 17:23:58 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
18/05/27 17:23:58 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
18/05/27 17:23:58 INFO namenode.FSNamesystem: No KeyProvider found.
18/05/27 17:23:58 INFO namenode.FSNamesystem: fsLock is fair:true
18/05/27 17:23:58 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/05/27 17:23:58 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/05/27 17:23:58 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/05/27 17:23:58 INFO blockmanagement.BlockManager: The block deletion will start around 2018 May 27 17:23:58
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map BlocksMap
18/05/27 17:23:58 INFO util.GSet: VM type = 64-bit
18/05/27 17:23:58 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/05/27 17:23:58 INFO util.GSet: capacity = 2^21 = 2097152 entries
18/05/27 17:23:58 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/05/27 17:23:58 INFO blockmanagement.BlockManager: defaultReplication = 1
18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxReplication = 512
18/05/27 17:23:58 INFO blockmanagement.BlockManager: minReplication = 1
18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
18/05/27 17:23:58 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/05/27 17:23:58 INFO blockmanagement.BlockManager: encryptDataTransfer = false
18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
18/05/27 17:23:58 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
18/05/27 17:23:58 INFO namenode.FSNamesystem: supergroup = supergroup
18/05/27 17:23:58 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/05/27 17:23:58 INFO namenode.FSNamesystem: HA Enabled: false
18/05/27 17:23:58 INFO namenode.FSNamesystem: Append Enabled: true
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map INodeMap
18/05/27 17:23:58 INFO util.GSet: VM type = 64-bit
18/05/27 17:23:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/05/27 17:23:58 INFO util.GSet: capacity = 2^20 = 1048576 entries
18/05/27 17:23:58 INFO namenode.FSDirectory: ACLs enabled? false
18/05/27 17:23:58 INFO namenode.FSDirectory: XAttrs enabled? true
18/05/27 17:23:58 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/05/27 17:23:58 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map cachedBlocks
18/05/27 17:23:58 INFO util.GSet: VM type = 64-bit
18/05/27 17:23:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/05/27 17:23:58 INFO util.GSet: capacity = 2^18 = 262144 entries
18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/05/27 17:23:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/05/27 17:23:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/05/27 17:23:58 INFO util.GSet: VM type = 64-bit
18/05/27 17:23:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/05/27 17:23:58 INFO util.GSet: capacity = 2^15 = 32768 entries
18/05/27 17:23:58 WARN common.Storage: Storage directory /tmp/hadoop-root/dfs/name does not exist
18/05/27 17:23:58 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:327)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
18/05/27 17:23:58 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
18/05/27 17:23:58 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
18/05/27 17:23:58 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
18/05/27 17:23:58 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
18/05/27 17:23:58 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:327)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
18/05/27 17:23:58 INFO util.ExitUtil: Exiting with status 1
18/05/27 17:23:58 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yinzhengjie/211.98.71.195
************************************************************/
[root@yinzhengjie ~]#

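The startup above fails because dfs.namenode.name.dir (/tmp/hadoop-root/dfs/name here) has never been formatted, so there is no fsimage to load. Formatting creates that initial storage; a sketch of the usual fix, keeping in mind that formatting wipes the entire namespace and should only ever be run on a brand-new cluster:

hdfs namenode -format     #initialize dfs.namenode.name.dir, then start the NameNode again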

15>. Creating a snapshot (for more detailed snapshot usage, see: https://www.cnblogs.com/yinzhengjie/p/9099529.html)

[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x - root supergroup -- : /data
drwxr-xr-x - root supergroup -- : /data/etc
-rw-r--r-- root supergroup -- : /data/index.html
-rw-r--r-- root supergroup -- : /data/name.txt
-rw-r--r-- root supergroup -- : /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# echo "hello" > 1.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# echo "world" > 2.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put 1.txt /data
[root@yinzhengjie ~]# hdfs dfs -put 2.txt /data/etc
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x - root supergroup -- : /data
-rw-r--r-- root supergroup -- : /data/1.txt
drwxr-xr-x - root supergroup -- : /data/etc
-rw-r--r-- root supergroup -- : /data/etc/2.txt
-rw-r--r-- root supergroup -- : /data/index.html
-rw-r--r-- root supergroup -- : /data/name.txt
-rw-r--r-- root supergroup -- : /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapshot /data       #enable the snapshot feature on the directory
Allowing snaphot on /data succeeded
[root@yinzhengjie ~]# hdfs dfs -createSnapshot /data firstSnapshot       #create a snapshot named "firstSnapshot"
Created snapshot /data/.snapshot/firstSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /data/.snapshot/firstSnapshot
-rw-r--r-- root supergroup -- : /data/.snapshot/firstSnapshot/1.txt
drwxr-xr-x - root supergroup -- : /data/.snapshot/firstSnapshot/etc
-rw-r--r-- root supergroup -- : /data/.snapshot/firstSnapshot/etc/2.txt
-rw-r--r-- root supergroup -- : /data/.snapshot/firstSnapshot/index.html
-rw-r--r-- root supergroup -- : /data/.snapshot/firstSnapshot/name.txt
-rw-r--r-- root supergroup -- : /data/.snapshot/firstSnapshot/yinzhengjie.sql
[root@yinzhengjie ~]#

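Once a directory carries snapshots, the snapshotDiff subcommand listed in the hdfs help at the top can compare two of them, or a snapshot against the live tree; a sketch assuming a second snapshot named secondSnapshot has also been taken:

hdfs snapshotDiff /data firstSnapshot secondSnapshot     #differences between the two snapshots
hdfs snapshotDiff /data firstSnapshot .                  #differences between the snapshot and the current contents of /data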

16>. Renaming a snapshot

[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x - root supergroup -- : /data/.snapshot/firstSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -renameSnapshot /data firstSnapshot newSnapshot       #rename the firstSnapshot snapshot of /data to newSnapshot
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x - root supergroup -- : /data/.snapshot/newSnapshot
[root@yinzhengjie ~]#


17>. Deleting a snapshot

[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x - root supergroup -- : /data/.snapshot/newSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -deleteSnapshot /data newSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
[root@yinzhengjie ~]#
[root@yinzhengjie ~]#


18>. Viewing the contents of a Hadoop SequenceFile

[yinzhengjie@s101 data]$ hdfs dfs -text file:///home/yinzhengjie/data/seq
// :: INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
// :: INFO compress.CodecPool: Got brand-new decompressor [.deflate]
yinzhengjie
[yinzhengjie@s101 data]$

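Unlike -cat, -text first detects the file format, so it also decompresses compressed files with the matching codec before printing; a sketch with a hypothetical gzipped log:

hdfs dfs -text /logs/app.log.gz     #prints the decompressed text, whereas -cat would dump raw bytes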

19>. Checking available space with the df command

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -df
Filesystem Size Used Available Use%
hdfs://yinzhengjie-hdfs-ha 1804514672640 4035805184 1800478867456 0%
[hdfs@node101.yinzhengjie.org.cn ~]$


[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -df -h
Filesystem Size Used Available Use%
hdfs://yinzhengjie-hdfs-ha 1.6 T 3.8 G 1.6 T 0%
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$


20>. Lowering the replication factor

[hdfs@node105.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 2 items
-rw-r--r-- hdfs supergroup -- : /user/yinzhengjie/data/1.txt
drwxr-xr-x - root supergroup -- : /user/yinzhengjie/data/day001
[hdfs@node105.yinzhengjie.org.cn ~]$
[hdfs@node105.yinzhengjie.org.cn ~]$ hdfs dfs -setrep -w 2 /user/yinzhengjie/data/1.txt
Replication 2 set: /user/yinzhengjie/data/1.txt
Waiting for /user/yinzhengjie/data/1.txt ... done
[hdfs@node105.yinzhengjie.org.cn ~]$
[hdfs@node105.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 2 items
-rw-r--r-- hdfs supergroup -- : /user/yinzhengjie/data/1.txt
drwxr-xr-x - root supergroup -- : /user/yinzhengjie/data/day001
[hdfs@node105.yinzhengjie.org.cn ~]$

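To confirm the new replication factor without eyeballing the second column of ls, -stat with the %r format string prints it directly (a small sketch):

hdfs dfs -stat %r /user/yinzhengjie/data/1.txt     #prints just the replication factor, e.g. 2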

21>. Checking used space with the du command

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -du /user/yinzhengjie/data/day001
/user/yinzhengjie/data/day001/test_input
/user/yinzhengjie/data/day001/test_output
/user/yinzhengjie/data/day001/ts_validate
[hdfs@node101.yinzhengjie.org.cn ~]$


[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -du -h /user/yinzhengjie/data/day001
953.7 M 2.8 G /user/yinzhengjie/data/day001/test_input
953.7 M 953.7 M /user/yinzhengjie/data/day001/test_output
/user/yinzhengjie/data/day001/ts_validate
[hdfs@node101.yinzhengjie.org.cn ~]$


[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -du -s -h /user/yinzhengjie/data/day001
1.9 G 3.7 G /user/yinzhengjie/data/day001
[hdfs@node101.yinzhengjie.org.cn ~]$


 

III. Examples of combining hdfs with getconf

1>. Getting the NameNode hostname(s) (there may be more than one)

[yinzhengjie@s101 ~]$ hdfs getconf
hdfs getconf is utility for getting configuration information from the config file.

hadoop getconf
[-namenodes] gets list of namenodes in the cluster.
[-secondaryNameNodes] gets list of secondary namenodes in the cluster.
[-backupNodes] gets list of backup nodes in the cluster.
[-includeFile] gets the include file path that defines the datanodes that can join the cluster.
[-excludeFile] gets the exclude file path that defines the datanodes that need to decommissioned.
[-nnRpcAddresses] gets the namenode rpc addresses
[-confKey [key]] gets a specific key from the configuration

[yinzhengjie@s101 ~]$ hdfs getconf -namenodes
s101
[yinzhengjie@s101 ~]$


2>. Getting the HDFS minimum block size (the default is 1 MB, i.e. 1048576 bytes; any new value must be a multiple of 512, because HDFS checksums data every 512 bytes during transfer)

[yinzhengjie@s101 ~]$ hdfs getconf
hdfs getconf is utility for getting configuration information from the config file.

hadoop getconf
[-namenodes] gets list of namenodes in the cluster.
[-secondaryNameNodes] gets list of secondary namenodes in the cluster.
[-backupNodes] gets list of backup nodes in the cluster.
[-includeFile] gets the include file path that defines the datanodes that can join the cluster.
[-excludeFile] gets the exclude file path that defines the datanodes that need to decommissioned.
[-nnRpcAddresses] gets the namenode rpc addresses
[-confKey [key]] gets a specific key from the configuration

[yinzhengjie@s101 ~]$ hdfs getconf -confKey dfs.namenode.fs-limits.min-block-size
1048576
[yinzhengjie@s101 ~]$

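The generic -D option shown in the help output earlier lets you override dfs.blocksize for a single upload; the value must be at least this minimum and a multiple of 512. A sketch, with bigfile.dat standing in for any local file:

hdfs dfs -D dfs.blocksize=67108864 -put bigfile.dat /data     #write bigfile.dat with a 64 MB block size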

3>. Looking up the NameNode RPC addresses

[root@node101.yinzhengjie.org.cn ~]# hdfs getconf -nnRpcAddresses
calculation101.aggrx:
calculation111.aggrx:
[root@node101.yinzhengjie.org.cn ~]#


IV. Examples of combining hdfs with dfsadmin

1>. Viewing the hdfs dfsadmin help information

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin
Usage: hdfs dfsadmin
Note: Administrative commands can only be run as the HDFS superuser.
[-report [-live] [-dead] [-decommissioning]]
[-safemode <enter | leave | get | wait>]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-setQuota <quota> <dirname>...<dirname>]
[-clrQuota <dirname>...<dirname>]
[-setSpaceQuota <quota> <dirname>...<dirname>]
[-clrSpaceQuota <dirname>...<dirname>]
[-finalizeUpgrade]
[-rollingUpgrade [<query|prepare|finalize>]]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh <host:ipc_port> <key> [arg1..argn]
[-reconfig <datanode|...> <host:ipc_port> <start|status|properties>]
[-printTopology]
[-refreshNamenodes datanode_host:ipc_port]
[-deleteBlockPool datanode_host:ipc_port blockpoolId [force]]
[-setBalancerBandwidth <bandwidth in bytes per second>]
[-fetchImage <local directory>]
[-allowSnapshot <snapshotDir>]
[-disallowSnapshot <snapshotDir>]
[-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
[-getDatanodeInfo <datanode_host:ipc_port>]
[-metasave filename]
[-triggerBlockReport [-incremental] <datanode_host:ipc_port>]
[-listOpenFiles [-blockingDecommission] [-path <path>]]
[-help [cmd]]

Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
[hdfs@node101.yinzhengjie.org.cn ~]$


2>. Viewing help for a specific command

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -help rollEdits
-rollEdits: Rolls the edit log.
[hdfs@node101.yinzhengjie.org.cn ~]$


3>. Manually rolling the edit log (for more details on log rolling, see: https://www.cnblogs.com/yinzhengjie/p/9098092.html)

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -rollEdits
Successfully rolled edit logs.
New segment starts at txid
[hdfs@node101.yinzhengjie.org.cn ~]$

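The finalized segment can then be inspected with the offline edits viewer (oev, listed in the hdfs help at the top), which decodes it to XML; a sketch with a hypothetical segment file name taken from dfs.namenode.edits.dir:

hdfs oev -i edits_0000000000000000001-0000000000000000042 -o /tmp/edits.xml     #decode a finalized edits segment to XML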

4>. Checking the current safe mode state

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$


5>. Entering safe mode

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode enter
Safe mode is ON in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is ON in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is ON in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is ON in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$


6>. Leaving safe mode

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is ON in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is ON in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode leave
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$


7>. The wait state of safe mode

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait]
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode wait
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$

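Because -safemode wait simply blocks until the NameNode reports safe mode OFF, it is handy as a guard at the top of post-startup scripts (a minimal sketch; the paths are hypothetical):

#!/bin/bash
hdfs dfsadmin -safemode wait                          #block until HDFS is writable
hdfs dfs -put /data/exports/today.csv /ingest/        #then start the day's loads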

8>. Checking the status of the HDFS cluster

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -help report
-report [-live] [-dead] [-decommissioning]:
Reports basic filesystem information and statistics.
The dfs usage can be different from "du" usage, because it
measures raw space used by replication, checksums, snapshots
and etc. on all the DNs.
Optional flags may be used to filter the list of displayed DNs.
[hdfs@node101.yinzhengjie.org.cn ~]$


[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -report
Configured Capacity: (1.64 TB)           #Configured HDFS capacity of this cluster
Present Capacity: (1.64 TB)             #Capacity currently available to HDFS.
DFS Remaining: (1.64 TB)               #Remaining HDFS capacity.
DFS Used: (3.76 GB)                    #HDFS storage usage, measured by file size.
DFS Used%: 0.22%                            #Same as above, expressed as a percentage.
Under replicated blocks:                       #Whether there are any under-replicated blocks.
Blocks with corrupt replicas:                    #Whether there are any blocks with corrupt replicas.
Missing blocks:                             #Whether there are any missing blocks.
Missing blocks (with replication factor ):            #Same as above
-------------------------------------------------
Live datanodes (4):                          #How many DataNodes in the cluster are live and available.

Name: 10.1.2.102: (node102.yinzhengjie.org.cn)       #Host name or rack name
Hostname: node102.yinzhengjie.org.cn                #Hostname
Rack: /default                             #Default rack
Decommission Status : Normal                    #Decommission status of this DataNode (Normal means it is in service)
Configured Capacity: (420.15 GB)          #Configured and used capacity of this DataNode
DFS Used: (1.01 GB)
Non DFS Used: ( B)
DFS Remaining: (419.13 GB)
DFS Used%: 0.24%
DFS Remaining%: 99.76%
Configured Cache Capacity: (1.66 GB)
Cache Used: ( B)                          #Cache usage statistics (if configured)
Cache Remaining: (1.66 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers:
Last contact: Tue May :: CST

Name: 10.1.2.103: (node103.yinzhengjie.org.cn)
Hostname: node103.yinzhengjie.org.cn
Rack: /default
Decommission Status : Normal
Configured Capacity: (420.15 GB)
DFS Used: (962.52 MB)
Non DFS Used: ( B)
DFS Remaining: (419.21 GB)
DFS Used%: 0.22%
DFS Remaining%: 99.78%
Configured Cache Capacity: (1.66 GB)
Cache Used: ( B)
Cache Remaining: (1.66 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers:
Last contact: Tue May :: CST

Name: 10.1.2.104: (node104.yinzhengjie.org.cn)
Hostname: node104.yinzhengjie.org.cn
Rack: /default
Decommission Status : Normal
Configured Capacity: (420.15 GB)
DFS Used: (936.12 MB)
Non DFS Used: ( B)
DFS Remaining: (419.23 GB)
DFS Used%: 0.22%
DFS Remaining%: 99.78%
Configured Cache Capacity: (1.66 GB)
Cache Used: ( B)
Cache Remaining: (1.66 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers:
Last contact: Tue May :: CST

Name: 10.1.2.105: (node105.yinzhengjie.org.cn)
Hostname: node105.yinzhengjie.org.cn
Rack: /default
Decommission Status : Normal
Configured Capacity: (420.15 GB)
DFS Used: (910.93 MB)
Non DFS Used: ( B)
DFS Remaining: (419.26 GB)
DFS Used%: 0.21%
DFS Remaining%: 99.79%
Configured Cache Capacity: ( MB)
Cache Used: ( B)
Cache Remaining: ( MB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers:
Last contact: Tue May :: CST
[hdfs@node101.yinzhengjie.org.cn ~]$


9>. Name quota (the total count of files and directories under a directory; the quota counts the directory itself plus everything beneath it, so a quota of 1 means no files can be placed in the directory, i.e. it must remain empty!)

[root@yinzhengjie ~]# ll
total
-rw-r--r--. root root May : index.html
-rw-r--r--. root root May : nginx.conf
-rw-r--r--. root root May : yinzhengjie.sql
-rw-r--r--. root root May : zabbix.conf
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls /
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -mkdir -p /data/etc
[root@yinzhengjie ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup -- : /data
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setQuota 3 /data
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put index.html /data
[root@yinzhengjie ~]# hdfs dfs -put yinzhengjie.sql /data
put: The NameSpace quota (directories and files) of directory /data is exceeded: quota=3 file count=4
[root@yinzhengjie ~]# hdfs dfs -ls /data
Found 2 items
drwxr-xr-x - root supergroup -- : /data/etc
-rw-r--r-- root supergroup -- : /data/index.html
[root@yinzhengjie ~]# hdfs dfsadmin -setQuota 5 /data
[root@yinzhengjie ~]# hdfs dfs -put yinzhengjie.sql /data
[root@yinzhengjie ~]# hdfs dfs -ls /data
Found 3 items
drwxr-xr-x - root supergroup -- : /data/etc
-rw-r--r-- root supergroup -- : /data/index.html
-rw-r--r-- root supergroup -- : /data/yinzhengjie.sql
[root@yinzhengjie ~]#

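You can inspect a directory's quota and how much of it is already consumed with hdfs dfs -count -q (the -q flag appears in the -count usage earlier); a sketch:

hdfs dfs -count -q /data
#output columns: QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME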

10>. Space quota (the total size of all files under the directory, replicas included, which gives the rule of thumb: minimum space quota >= actual size of the file to upload * replication factor)

[root@yinzhengjie ~]# ll
total
-rw-r--r--. root root May : jdk-8u131-linux-x64.tar.gz
-rw-r--r--. root root May : name.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x - root supergroup -- : /data
drwxr-xr-x - root supergroup -- : /data/etc
-rw-r--r-- root supergroup -- : /data/index.html
-rw-r--r-- root supergroup -- : /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setSpaceQuota 134217745 /data       #set the space quota on /data to roughly 128 MB; this test machine is pseudo-distributed with a replication factor of 1, so the quota only needs to cover the file size itself
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put name.txt /data       #upload a small file into /data; it uploads without problems
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /       #verify that the upload succeeded
drwxr-xr-x - root supergroup -- : /data
drwxr-xr-x - root supergroup -- : /data/etc
-rw-r--r-- root supergroup -- : /data/index.html
-rw-r--r-- root supergroup -- : /data/name.txt
-rw-r--r-- root supergroup -- : /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put jdk-8u131-linux-x64.tar.gz /data       #when we upload this second, larger file, the following error is thrown!
// :: WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /data is exceeded: quota = B = 128.00 MB but diskspace consumed = B = 128.00 MB
at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:)
at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.DSQuotaExceededException): The DiskSpace quota of /data is exceeded: quota = B = 128.00 MB but diskspace consumed = B = 128.00 MB
at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:)
at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at org.apache.hadoop.ipc.Server$Handler$.run(Server.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:) at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:)
... more
put: The DiskSpace quota of /data is exceeded: quota = B = 128.00 MB but diskspace consumed = B = 128.00 MB
[root@yinzhengjie ~]#

[root@yinzhengjie ~]# hdfs dfsadmin -setSpaceQuota 134217745 /data

11>. Clearing the space quota

[root@yinzhengjie ~]# hdfs dfsadmin -clrSpaceQuota /data
[root@yinzhengjie ~]# echo $?
0
[root@yinzhengjie ~]#

[root@yinzhengjie ~]# hdfs dfsadmin -clrSpaceQuota /data
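
Tip: you can confirm a directory's quota state at any time with "hdfs dfs -count -q"; after -clrSpaceQuota the space-quota columns revert to "none" and "inf". A minimal sketch:

[root@yinzhengjie ~]# hdfs dfs -count -q -h /data        # columns: QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME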

12>. Enabling the snapshot feature on a directory (snapshots are disabled by default)

[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapshot /data
Allowing snaphot on /data succeeded
[root@yinzhengjie ~]#

[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapshot /data
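
Once snapshots are allowed on a directory, you can exercise them with the dfs snapshot subcommands; a minimal sketch (the snapshot name "s0" is arbitrary):

[root@yinzhengjie ~]# hdfs dfs -createSnapshot /data s0        # the snapshot becomes readable under /data/.snapshot/s0
[root@yinzhengjie ~]# hdfs lsSnapshottableDir                  # list snapshottable directories owned by the current user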

13>. Disabling the snapshot feature on a directory

[root@yinzhengjie ~]# hdfs dfsadmin -disallowSnapshot /data
Disallowing snaphot on /data succeeded
[root@yinzhengjie ~]#

[root@yinzhengjie ~]# hdfs dfsadmin -disallowSnapshot /data
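
Note that -disallowSnapshot refuses to run while the directory still has snapshots, so delete them first (a sketch, assuming a snapshot named "s0" exists):

[root@yinzhengjie ~]# hdfs dfs -deleteSnapshot /data s0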

14>. Getting the state of a specific NameNode

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs haadmin -getServiceState namenode23
active
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs haadmin -getServiceState namenode31
standby
[hdfs@node101.yinzhengjie.org.cn ~]$

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs haadmin -getServiceState namenode23        # Note: "namenode23" is a NameNode ID defined in the hdfs-site.xml configuration file
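
If you are not sure which NameNode IDs a cluster defines, they can be read back from the configuration; a sketch (the nameservice name "mycluster" is a placeholder for whatever dfs.nameservices returns on your cluster):

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs getconf -confKey dfs.nameservices               # print the nameservice ID
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs getconf -confKey dfs.ha.namenodes.mycluster     # print the NameNode IDs, e.g. namenode23,namenode31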

15>. dfsadmin -metasave provides more information than dfsadmin -report; use it to obtain a variety of block-related details (e.g. the total block count, blocks waiting for replication, blocks currently being replicated)

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /
Found 5 items
drwxr-xr-x - hbase hbase -- : /hbase
drwxr-xr-x - hdfs supergroup -- : /jobtracker
drwxr-xr-x - hdfs supergroup -- : /system
drwxrwxrwt - hdfs supergroup -- : /tmp
drwxrwxrwx - hdfs supergroup -- : /user
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -metasave /hbase            # -metasave takes an output filename (here "/hbase"), not an HDFS directory to inspect; on success it prints the lines below and creates a file named after the argument, i.e. "hbase", under "/var/log/hadoop-hdfs/" on the NameNode
Created metasave file /hbase in the log directory of namenode node105.yinzhengjie.org.cn/10.1.2.105:
Created metasave file /hbase in the log directory of namenode node101.yinzhengjie.org.cn/10.1.2.101:
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ cat /var/log/hadoop-hdfs/hbase             # the output file contains the following block-related information
files and directories, blocks = total
Live Datanodes:
Dead Datanodes:
Metasave: Blocks waiting for replication:
Mis-replicated blocks that have been postponed:
Metasave: Blocks being replicated:
Metasave: Blocks waiting deletion from datanodes.
Metasave: Number of datanodes:
10.1.2.102: /default IN (420.15 GB) (1.01 GB) 0.24% (419.13 GB) (1.66 GB) ( B) 0.00% (1.66 GB) Tue May :: CST
10.1.2.105: /default IN (420.15 GB) (910.93 MB) 0.21% (419.26 GB) ( MB) ( B) 0.00% ( MB) Tue May :: CST
10.1.2.103: /default IN (420.15 GB) (962.52 MB) 0.22% (419.21 GB) (1.66 GB) ( B) 0.00% (1.66 GB) Tue May :: CST
10.1.2.104: /default IN (420.15 GB) (936.12 MB) 0.22% (419.23 GB) (1.66 GB) ( B) 0.00% (1.66 GB) Tue May :: CST
[hdfs@node101.yinzhengjie.org.cn ~]$

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -metasave /hbase  # -metasave takes an output filename (here "/hbase"); on success it creates a file named "hbase" under "/var/log/hadoop-hdfs/" on the NameNode
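
Since the argument is really just an output filename, a descriptive name is usually clearer than a path-like one; a sketch:

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -metasave blocks-report.txt        # written into the NameNode's log directory, e.g. /var/log/hadoop-hdfs/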

5. Examples of using hdfs fsck

1>. Viewing HDFS file system information

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck /
Connecting to namenode via http://node101.yinzhengjie.org.cn:50070/fsck?ugi=hdfs&path=%2F
FSCK started by hdfs (auth:SIMPLE) from /10.1.2.101 for path / at Thu May :: CST
.......................................
/user/yinzhengjie/data/day001/test_output/_partition.lst: Under replicated BP-1230584423-10.1.2.101-1558513980919:blk_1073742006_1182. Target Replicas is 10 but found 4 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
..................Status: HEALTHY                    # the overall result of this block check
Total size: 2001318792 B (Total open files size: 498 B)        # total size of the data under the checked path
Total dirs: 189                                # number of directories under the checked path
Total files: 57                                 # number of files under the checked path
Total symlinks: 0 (Files currently being written: 7)   # number of symbolic links under the checked path
Total blocks (validated): 58 (avg. block size 34505496 B) (Total open file blocks (not validated): 6)    # number of valid blocks under the checked path
Minimally replicated blocks: 58 (100.0 %)        # blocks that meet the minimum replication requirement
Over-replicated blocks: 0 (0.0 %)          # blocks whose live replica count exceeds the target replication
Under-replicated blocks: 1 (1.7241379 %)       # blocks whose live replica count is below the target replication
Mis-replicated blocks: 0 (0.0 %)          # blocks that violate the block placement policy
Default replication factor: 3               # the default replication factor (the original plus two copies)
Average block replication: 2.3965516          # average number of live replicas per block; a value below the default factor points to under-replicated blocks, a value above it to surplus replicas
Corrupt blocks: 0               # corrupt blocks; a non-zero value means unrecoverable blocks, i.e. data has been lost
Missing replicas: 6 (4.137931 %)       # number of missing replicas
Number of data-nodes: 4               # number of DataNodes
Number of racks: 1               # number of racks
FSCK ended at Thu May :: CST in milliseconds

The filesystem under path '/' is HEALTHY
[hdfs@node101.yinzhengjie.org.cn ~]$
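
Besides the plain health check, fsck takes flags that help track down problem blocks; a few commonly used ones (a sketch; prefer running against a sub-path in production, since fsck over / is expensive on large namespaces):

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck / -list-corruptfileblocks                  # list files that have corrupt blocks
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck /user -files -blocks -locations            # also show which DataNodes hold each block
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck / -openforwrite                            # include files currently open for write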

2>. Displaying HDFS block information with fsck

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck / -files -blocks
Connecting to namenode via http://node101.yinzhengjie.org.cn:50070/fsck?ugi=hdfs&files=1&blocks=1&path=%2F
FSCK started by hdfs (auth:SIMPLE) from /10.1.2.101 for path / at Thu May 23 14:30:51 CST 2019
/ <dir>
/hbase <dir>
/hbase/.tmp <dir>
/hbase/MasterProcWALs <dir>
/hbase/MasterProcWALs/state-00000000000000000002.log 30 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743172_2348 len=30 Live_repl=3 /hbase/WALs <dir>
/hbase/WALs/node102.yinzhengjie.org.cn,60020,1558589829098 <dir>
/hbase/WALs/node102.yinzhengjie.org.cn,60020,1558590692594 <dir>
/hbase/WALs/node103.yinzhengjie.org.cn,60020,1558589826957 <dir>
/hbase/WALs/node103.yinzhengjie.org.cn,60020,1558590692071 <dir>
/hbase/WALs/node104.yinzhengjie.org.cn,60020,1558590690690 <dir>
/hbase/WALs/node105.yinzhengjie.org.cn,60020,1558589830953 <dir>
/hbase/WALs/node105.yinzhengjie.org.cn,60020,1558590695092 <dir>
/hbase/data <dir>
/hbase/data/default <dir>
/hbase/data/hbase <dir>
/hbase/data/hbase/meta <dir>
/hbase/data/hbase/meta/.tabledesc <dir>
/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001 398 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743148_2324 len=398 Live_repl=3 /hbase/data/hbase/meta/.tmp <dir>
/hbase/data/hbase/meta/1588230740 <dir>
/hbase/data/hbase/meta/1588230740/.regioninfo 32 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743147_2323 len=32 Live_repl=3 /hbase/data/hbase/meta/1588230740/info <dir>
/hbase/data/hbase/meta/1588230740/info/4502037817cf408da4c31f38632d386e 1389 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743185_2361 len=1389 Live_repl=3 /hbase/data/hbase/meta/1588230740/info/cc017533033a4b57904c694bf156d9a6 1529 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743168_2344 len=1529 Live_repl=3 /hbase/data/hbase/meta/1588230740/recovered.edits <dir>
/hbase/data/hbase/meta/1588230740/recovered.edits/20.seqid 0 bytes, 0 block(s): OK /hbase/data/hbase/namespace <dir>
/hbase/data/hbase/namespace/.tabledesc <dir>
/hbase/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 312 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743156_2332 len=312 Live_repl=3 /hbase/data/hbase/namespace/.tmp <dir>
/hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7 <dir>
/hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/.regioninfo 42 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743157_2333 len=42 Live_repl=3 /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/info <dir>
/hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/info/2efca8e894a4419f9d6e86bb8c8c736b 1079 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743166_2342 len=1079 Live_repl=3 /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/recovered.edits <dir>
/hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/recovered.edits/11.seqid 0 bytes, 0 block(s): OK /hbase/hbase.id 42 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743146_2322 len=42 Live_repl=3 /hbase/hbase.version 7 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743145_2321 len=7 Live_repl=3 /hbase/oldWALs <dir>
/jobtracker <dir>
/jobtracker/jobsInfo <dir>
/jobtracker/jobsInfo/job_201905221917_0001.info 1013 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743091_2267 len=1013 Live_repl=3 /jobtracker/jobsInfo/job_201905221917_0002.info 1013 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743117_2293 len=1013 Live_repl=3 /tmp <dir>
/tmp/.cloudera_health_monitoring_canary_files <dir>
/tmp/hbase-staging <dir>
/tmp/hbase-staging/DONOTERASE <dir>
/tmp/hive <dir>
/tmp/hive/hive <dir>
/tmp/hive/hive/2cd86efc-ec86-40ac-8472-d78c9e6b90a4 <dir>
/tmp/hive/hive/2cd86efc-ec86-40ac-8472-d78c9e6b90a4/_tmp_space.db <dir>
/tmp/hive/root <dir>
/tmp/logs <dir>
/tmp/logs/root <dir>
/tmp/logs/root/logs <dir>
/tmp/logs/root/logs/application_1558520562958_0001 <dir>
/tmp/logs/root/logs/application_1558520562958_0001/node102.yinzhengjie.org.cn_8041 3197 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742001_1177 len=3197 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0001/node103.yinzhengjie.org.cn_8041 3197 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742002_1178 len=3197 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0001/node104.yinzhengjie.org.cn_8041 35641 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742003_1179 len=35641 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002 <dir>
/tmp/logs/root/logs/application_1558520562958_0002/node102.yinzhengjie.org.cn_8041 68827 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742034_1210 len=68827 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002/node103.yinzhengjie.org.cn_8041 191005 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742037_1213 len=191005 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002/node104.yinzhengjie.org.cn_8041 80248 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742036_1212 len=80248 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002/node105.yinzhengjie.org.cn_8041 64631 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742035_1211 len=64631 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003 <dir>
/tmp/logs/root/logs/application_1558520562958_0003/node102.yinzhengjie.org.cn_8041 23256 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742056_1232 len=23256 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003/node103.yinzhengjie.org.cn_8041 35498 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742055_1231 len=35498 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003/node104.yinzhengjie.org.cn_8041 131199 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742057_1233 len=131199 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003/node105.yinzhengjie.org.cn_8041 19428 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742054_1230 len=19428 Live_repl=3 /tmp/mapred <dir>
/tmp/mapred/system <dir>
/tmp/mapred/system/jobtracker.info 4 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741947_1123 len=4 Live_repl=3 /tmp/mapred/system/seq-000000000002 <dir>
/tmp/mapred/system/seq-000000000002/jobtracker.info 4 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743173_2349 len=4 Live_repl=3 /user <dir>
/user/history <dir>
/user/history/done <dir>
/user/history/done/2019 <dir>
/user/history/done/2019/05 <dir>
/user/history/done/2019/05/22 <dir>
/user/history/done/2019/05/22/000000 <dir>
/user/history/done/2019/05/22/000000/job_1558520562958_0001-1558525119627-root-TeraGen-1558525149229-2-0-SUCCEEDED-root.users.root-1558525126064.jhist 18715 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741999_1175 len=18715 Live_repl=3 /user/history/done/2019/05/22/000000/job_1558520562958_0001_conf.xml 153279 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742000_1176 len=153279 Live_repl=3 /user/history/done/2019/05/22/000000/job_1558520562958_0002-1558525279858-root-TeraSort-1558525343312-8-16-SUCCEEDED-root.users.root-1558525284386.jhist 102347 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742032_1208 len=102347 Live_repl=1 /user/history/done/2019/05/22/000000/job_1558520562958_0002_conf.xml 154575 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742033_1209 len=154575 Live_repl=1 /user/history/done/2019/05/22/000000/job_1558520562958_0003-1558525587458-root-TeraValidate-1558525623716-16-1-SUCCEEDED-root.users.root-1558525591653.jhist 71381 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742052_1228 len=71381 Live_repl=3 /user/history/done/2019/05/22/000000/job_1558520562958_0003_conf.xml 153701 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742053_1229 len=153701 Live_repl=3 /user/history/done_intermediate <dir>
/user/history/done_intermediate/root <dir>
/user/hive <dir>
/user/hive/warehouse <dir>
/user/hive/warehouse/page_view <dir>
/user/hive/warehouse/page_view/PageViewData.csv 1584 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743063_2239 len=1584 Live_repl=3 /user/hue <dir>
/user/hue/.Trash <dir>
/user/hue/.Trash/190523130000 <dir>
/user/hue/.Trash/190523130000/user <dir>
/user/hue/.Trash/190523130000/user/hue <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p0 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p1 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p0 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p1 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p0 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p1 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p0 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p1 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558586146864 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558586450229 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558586750160 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558587050181 <dir>
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558587350162 <dir>
/user/hue/.Trash/190523140000 <dir>
/user/hue/.Trash/190523140000/user <dir>
/user/hue/.Trash/190523140000/user/hue <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p0 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p1 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p0 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p1 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p0 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p1 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p0 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p1 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p0 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p0/p2=420 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p1 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p1/p2=421 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558587650383 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558589871818 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558590170843 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558590470827 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558590772354 <dir>
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591070964 <dir>
/user/hue/.Trash/Current <dir>
/user/hue/.Trash/Current/user <dir>
/user/hue/.Trash/Current/user/hue <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0/p2=420 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1/p2=421 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p0 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p0/p2=420 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p1 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p1/p2=421 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p0 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p0/p2=420 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p1 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p1/p2=421 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p0 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p0/p2=420 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p1 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p1/p2=421 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p0 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p0/p2=420 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p1 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p1/p2=421 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p0 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p0/p2=420 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p1 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p1/p2=421 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591370839 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591670884 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591970864 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558592270913 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558592570869 <dir>
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558592870812 <dir>
/user/hue/.cloudera_manager_hive_metastore_canary <dir>
/user/impala <dir>
/user/root <dir>
/user/root/.staging <dir>
/user/yinzhengjie <dir>
/user/yinzhengjie/data <dir>
/user/yinzhengjie/data/day001 <dir>
/user/yinzhengjie/data/day001/test_input <dir>
/user/yinzhengjie/data/day001/test_input/_SUCCESS 0 bytes, 0 block(s): OK /user/yinzhengjie/data/day001/test_input/part-m-00000 500000000 bytes, 4 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741990_1166 len=134217728 Live_repl=3
1. BP-1230584423-10.1.2.101-1558513980919:blk_1073741992_1168 len=134217728 Live_repl=3
2. BP-1230584423-10.1.2.101-1558513980919:blk_1073741994_1170 len=134217728 Live_repl=3
3. BP-1230584423-10.1.2.101-1558513980919:blk_1073741996_1172 len=97346816 Live_repl=3 /user/yinzhengjie/data/day001/test_input/part-m-00001 500000000 bytes, 4 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741989_1165 len=134217728 Live_repl=3
1. BP-1230584423-10.1.2.101-1558513980919:blk_1073741991_1167 len=134217728 Live_repl=3
2. BP-1230584423-10.1.2.101-1558513980919:blk_1073741993_1169 len=134217728 Live_repl=3
3. BP-1230584423-10.1.2.101-1558513980919:blk_1073741995_1171 len=97346816 Live_repl=3 /user/yinzhengjie/data/day001/test_output <dir>
/user/yinzhengjie/data/day001/test_output/_SUCCESS 0 bytes, 0 block(s): OK /user/yinzhengjie/data/day001/test_output/_partition.lst 165 bytes, 1 block(s): Under replicated BP-1230584423-10.1.2.101-1558513980919:blk_1073742006_1182. Target Replicas is 10 but found 4 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742006_1182 len=165 Live_repl=4 /user/yinzhengjie/data/day001/test_output/part-r-00000 62307000 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742015_1191 len=62307000 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00001 62782700 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742016_1192 len=62782700 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00002 61993900 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742017_1193 len=61993900 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00003 63217700 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742019_1195 len=63217700 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00004 62628600 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742018_1194 len=62628600 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00005 62884100 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742020_1196 len=62884100 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00006 63079700 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742021_1197 len=63079700 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00007 61421800 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742022_1198 len=61421800 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00008 61319800 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742023_1199 len=61319800 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00009 61467300 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742025_1201 len=61467300 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00010 62823400 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742024_1200 len=62823400 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00011 63392200 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742026_1202 len=63392200 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00012 62889200 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742027_1203 len=62889200 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00013 62953000 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742028_1204 len=62953000 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00014 62072800 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742029_1205 len=62072800 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00015 62766800 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742030_1206 len=62766800 Live_repl=1 /user/yinzhengjie/data/day001/ts_validate <dir>
/user/yinzhengjie/data/day001/ts_validate/_SUCCESS 0 bytes, 0 block(s): OK /user/yinzhengjie/data/day001/ts_validate/part-r-00000 24 bytes, 1 block(s): OK
0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742050_1226 len=24 Live_repl=3 Status: HEALTHY
Total size: 2001318792 B (Total open files size: 498 B)
Total dirs: 189
Total files: 57
Total symlinks: 0 (Files currently being written: 7)
Total blocks (validated): 58 (avg. block size 34505496 B) (Total open file blocks (not validated): 6)
Minimally replicated blocks: 58 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (1.7241379 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.3965516
Corrupt blocks: 0
Missing replicas: 6 (4.137931 %)
Number of data-nodes: 4
Number of racks: 1
FSCK ended at Thu May 23 14:30:51 CST 2019 in 8 milliseconds The filesystem under path '/' is HEALTHY
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck / -files -blocks
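
The under-replicated warning above comes from _partition.lst requesting 10 replicas on a 4-DataNode cluster, which can never be satisfied; one way to clear it is to lower that file's target replication with -setrep (a sketch):

[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -setrep -w 3 /user/yinzhengjie/data/day001/test_output/_partition.lst        # -w waits until replication completes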

6. Examples of using hdfs oiv (the offline fsimage viewer)

1>. Viewing the hdfs oiv help information

[yinzhengjie@s101 ~]$ hdfs oiv
Usage: bin/hdfs oiv [OPTIONS] -i INPUTFILE -o OUTPUTFILE
Offline Image Viewer
View a Hadoop fsimage INPUTFILE using the specified PROCESSOR,
saving the results in OUTPUTFILE.

The oiv utility will attempt to parse correctly formed image files
and will abort fail with mal-formed image files.

The tool works offline and does not require a running cluster in
order to process an image file.

The following image processors are available:
* XML: This processor creates an XML document with all elements of
the fsimage enumerated, suitable for further analysis by XML
tools.
* FileDistribution: This processor analyzes the file size
distribution in the image.
-maxSize specifies the range [0, maxSize] of file sizes to be
analyzed (128GB by default).
-step defines the granularity of the distribution. (2MB by default)
* Web: Run a viewer to expose read-only WebHDFS API.
-addr specifies the address to listen. (localhost:5978 by default)
* Delimited (experimental): Generate a text file with all of the elements common
to both inodes and inodes-under-construction, separated by a
delimiter. The default delimiter is \t, though this may be
changed via the -delimiter argument.

Required command line arguments:
-i,--inputFile <arg> FSImage file to process.

Optional command line arguments:
-o,--outputFile <arg> Name of output file. If the specified
file exists, it will be overwritten.
(output to stdout by default)
-p,--processor <arg> Select which type of processor to apply
against image file. (XML|FileDistribution|Web|Delimited)
(Web by default)
-delimiter <arg> Delimiting string to use with Delimited processor.
-t,--temp <arg> Use temporary dir to cache intermediate result to generate
Delimited outputs. If not set, Delimited processor constructs
the namespace in memory before outputting text.
-h,--help Display usage information and exit
[yinzhengjie@s101 ~]$

2>. Using oiv to inspect a Hadoop fsimage file

[yinzhengjie@s101 ~]$ ll
total
drwxrwxr-x. yinzhengjie yinzhengjie May : hadoop
drwxrwxr-x. yinzhengjie yinzhengjie May : shell
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep fsimage
-rw-rw-r--. yinzhengjie yinzhengjie .2K May : fsimage_0000000000000000767
-rw-rw-r--. yinzhengjie yinzhengjie May : fsimage_0000000000000000767.md5
-rw-rw-r--. yinzhengjie yinzhengjie .4K May : fsimage_0000000000000000932
-rw-rw-r--. yinzhengjie yinzhengjie May : fsimage_0000000000000000932.md5
[yinzhengjie@s101 ~]$ hdfs oiv -i ./hadoop/dfs/name/current/fsimage_0000000000000000767 -o yinzhengjie.xml -p XML
[yinzhengjie@s101 ~]$ ll
total
drwxrwxr-x. yinzhengjie yinzhengjie May : hadoop
drwxrwxr-x. yinzhengjie yinzhengjie May : shell
-rw-rw-r--. yinzhengjie yinzhengjie May : yinzhengjie.xml
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ hdfs oiv -i ./hadoop/dfs/name/current/fsimage_0000000000000000767 -o yinzhengjie.xml -p XML
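
The XML dump is plain text, so ordinary tools can query it; for example, pulling the first few inode names recorded in the image (a sketch):

[yinzhengjie@s101 ~]$ grep -o '<name>[^<]*</name>' yinzhengjie.xml | head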

7. Examples of using hdfs oev (the offline edits viewer)

1>. Viewing the hdfs oev help information

[yinzhengjie@s101 ~]$ hdfs oev
Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
Offline edits viewer
Parse a Hadoop edits log file INPUT_FILE and save results
in OUTPUT_FILE.
Required command line arguments:
-i,--inputFile <arg> edits file to process, xml (case
insensitive) extension means XML format,
any other filename means binary format
-o,--outputFile <arg> Name of output file. If the specified
file exists, it will be overwritten,
format of the file is determined
by -p option

Optional command line arguments:
-p,--processor <arg> Select which type of processor to apply
against image file, currently supported
processors are: binary (native binary format
that Hadoop uses), xml (default, XML
format), stats (prints statistics about
edits file)
-h,--help Display usage information and exit
-f,--fix-txids Renumber the transaction IDs in the input,
so that there are no gaps or invalid transaction IDs.
-r,--recover When reading binary edit logs, use recovery
mode. This will give you the chance to skip
corrupt parts of the edit log.
-v,--verbose More verbose output, prints the input and
output filenames, for processors that write
to a file, also output to screen. On large
image files this will dramatically increase
processing time (default is false).

Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
[yinzhengjie@s101 ~]$

2>. Using oev to inspect Hadoop edit log files

[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. yinzhengjie yinzhengjie May : edits_0000000000000001001-0000000000000001002
-rw-rw-r--. yinzhengjie yinzhengjie May : edits_0000000000000001003-0000000000000001004
-rw-rw-r--. yinzhengjie yinzhengjie May : edits_0000000000000001005-0000000000000001006
-rw-rw-r--. yinzhengjie yinzhengjie May : edits_0000000000000001007-0000000000000001008
-rw-rw-r--. yinzhengjie yinzhengjie 1.0M May : edits_inprogress_0000000000000001009
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ll
total
drwxrwxr-x. yinzhengjie yinzhengjie May : hadoop
drwxrwxr-x. yinzhengjie yinzhengjie May : shell
-rw-rw-r--. yinzhengjie yinzhengjie May : yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs oev -i ./hadoop/dfs/name/current/edits_0000000000000001007-0000000000000001008 -o edits.xml -p XML
[yinzhengjie@s101 ~]$ ll
total
-rw-rw-r--. yinzhengjie yinzhengjie May : edits.xml
drwxrwxr-x. yinzhengjie yinzhengjie May : hadoop
drwxrwxr-x. yinzhengjie yinzhengjie May : shell
-rw-rw-r--. yinzhengjie yinzhengjie May : yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ cat edits.xml
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
<EDITS_VERSION>-</EDITS_VERSION>
<RECORD>
<OPCODE>OP_START_LOG_SEGMENT</OPCODE>
<DATA>
<TXID>1007</TXID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_END_LOG_SEGMENT</OPCODE>
<DATA>
<TXID>1008</TXID>
</DATA>
</RECORD>
</EDITS>
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ hdfs oev -i ./hadoop/dfs/name/current/edits_0000000000000001007-0000000000000001008 -o edits.xml -p XML
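
The "stats" processor mentioned in the help above is handy for a quick per-opcode summary of a segment, without generating XML (a sketch):

[yinzhengjie@s101 ~]$ hdfs oev -i ./hadoop/dfs/name/current/edits_0000000000000001007-0000000000000001008 -o edits.stats -p stats
[yinzhengjie@s101 ~]$ cat edits.stats        # counts of each operation type in the segment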

8. An introduction to the hadoop command

  As mentioned above, "hadoop fs" is effectively equivalent to "hdfs dfs", but hadoop also has commands that the hdfs command does not support. Here are a few examples:

1>. Checking the local installation of native compression libraries

[yinzhengjie@s101 ~]$ hadoop checknative
// :: WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
// :: INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /soft/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: false
lz4: true revision:
bzip2: false
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ hadoop checknative
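
The "false" entries usually mean the corresponding system library is missing on the node. On a RHEL/CentOS-style host (an assumption; package names differ by distro), installing the libraries may flip the checks to true after the daemons are restarted, provided the bundled libhadoop.so was built with support for them:

[yinzhengjie@s101 ~]$ sudo yum install -y snappy openssl-libs bzip2-libs        # hypothetical package set; adjust for your distro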

2>. Formatting the NameNode

[root@yinzhengjie ~]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
// :: INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = yinzhengjie/211.98.71.195
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/shar
e/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jet
ty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/h
adoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_131
************************************************************/
INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-36f63542-a60c-46d0-8df1-f8fa32730764
INFO namenode.FSNamesystem: No KeyProvider found.
INFO namenode.FSNamesystem: fsLock is fair:true
INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=
INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to :::00.000
INFO blockmanagement.BlockManager: The block deletion will start around May ::
INFO util.GSet: Computing capacity for map BlocksMap
INFO util.GSet: VM type = -bit
INFO util.GSet: 2.0% max memory MB = 17.8 MB
INFO util.GSet: capacity = ^ = entries
INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
INFO blockmanagement.BlockManager: defaultReplication =
INFO blockmanagement.BlockManager: maxReplication =
INFO blockmanagement.BlockManager: minReplication =
INFO blockmanagement.BlockManager: maxReplicationStreams =
INFO blockmanagement.BlockManager: replicationRecheckInterval =
INFO blockmanagement.BlockManager: encryptDataTransfer = false
INFO blockmanagement.BlockManager: maxNumBlocksToLog =
INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
INFO namenode.FSNamesystem: supergroup = supergroup
INFO namenode.FSNamesystem: isPermissionEnabled = true
INFO namenode.FSNamesystem: HA Enabled: false
INFO namenode.FSNamesystem: Append Enabled: true
INFO util.GSet: Computing capacity for map INodeMap
INFO util.GSet: VM type = -bit
INFO util.GSet: 1.0% max memory MB = 8.9 MB
INFO util.GSet: capacity = ^ = entries
INFO namenode.FSDirectory: ACLs enabled? false
INFO namenode.FSDirectory: XAttrs enabled? true
INFO namenode.FSDirectory: Maximum size of an xattr:
INFO namenode.NameNode: Caching file names occuring more than times
INFO util.GSet: Computing capacity for map cachedBlocks
INFO util.GSet: VM type = -bit
INFO util.GSet: 0.25% max memory MB = 2.2 MB
INFO util.GSet: capacity = ^ = entries
INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes =
INFO namenode.FSNamesystem: dfs.namenode.safemode.extension =
INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets =
INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users =
INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = ,,
INFO namenode.FSNamesystem: Retry cache on namenode is enabled
INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is millis
INFO util.GSet: Computing capacity for map NameNodeRetryCache
INFO util.GSet: VM type = -bit
INFO util.GSet: 0.029999999329447746% max memory MB = 273.1 KB
INFO util.GSet: capacity = ^ = entries
INFO namenode.FSImage: Allocated new BlockPoolId: BP--211.98.71.195-
INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size bytes saved in seconds.
INFO namenode.NNStorageRetentionManager: Going to retain images with txid >=
INFO util.ExitUtil: Exiting with status
INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yinzhengjie/211.98.71.195
************************************************************/
[root@yinzhengjie ~]#

[root@yinzhengjie ~]# hadoop namenode -format
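
  A note on the command above: in Hadoop 2.x, "hadoop namenode -format" still works but is a deprecated alias for "hdfs namenode -format", which is the current entry point. Also notice in the log that the image file was written under /tmp/hadoop-root/dfs/name: dfs.namenode.name.dir was left at its default, so the NameNode metadata sits in /tmp and is lost on reboot. On a real cluster, point dfs.namenode.name.dir at a persistent directory before formatting. Below is a minimal sketch of the re-format procedure on a disposable test cluster; the commands are standard Hadoop 2.x, but the flow itself is illustrative and not taken from the log above:

# Stop HDFS first; never format while the NameNode is running.
stop-dfs.sh
# Format with the current entry point; -nonInteractive makes the command
# abort instead of prompting if a name directory is already formatted.
hdfs namenode -format -nonInteractive
# Bring HDFS back up; DataNode data dirs must be empty or carry a matching clusterID,
# otherwise the DataNodes will refuse to register with the freshly formatted NameNode.
start-dfs.sh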

3>.Running a custom jar

[yinzhengjie@s101 data]$ hadoop jar YinzhengjieMapReduce-1.0-SNAPSHOT.jar cn.org.yinzhengjie.mapreduce.wordcount.WordCountApp /world.txt /out
WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
INFO input.FileInputFormat: Total input paths to process :
INFO mapreduce.JobSubmitter: number of splits:
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528935621892_0001
INFO impl.YarnClientImpl: Submitted application application_1528935621892_0001
INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528935621892_0001/
INFO mapreduce.Job: Running job: job_1528935621892_0001
INFO mapreduce.Job: Job job_1528935621892_0001 running in uber mode : false
INFO mapreduce.Job: map % reduce %
INFO mapreduce.Job: map % reduce %
INFO mapreduce.Job: map % reduce %
INFO mapreduce.Job: Job job_1528935621892_0001 completed successfully
INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Launched reduce tasks=
Data-local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total time spent by all reduce tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total vcore-milliseconds taken by all reduce tasks=
Total megabyte-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all reduce tasks=
Map-Reduce Framework
Map input records=
Map output records=
Map output bytes=
Map output materialized bytes=
Input split bytes=
Combine input records=
Combine output records=
Reduce input groups=
Reduce shuffle bytes=
Reduce input records=
Reduce output records=
Spilled Records=
Shuffled Maps =
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
Shuffle Errors
BAD_ID=
CONNECTION=
IO_ERROR=
WRONG_LENGTH=
WRONG_MAP=
WRONG_REDUCE=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
[yinzhengjie@s101 data]$

[yinzhengjie@s101 data]$ hadoop jar YinzhengjieMapReduce-1.0-SNAPSHOT.jar cn.org.yinzhengjie.mapreduce.wordcount.WordCountApp /world.txt /out
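
  Two things are worth knowing about runs like this. First, the WARN line about the Tool interface is harmless here; it only means the driver parses its arguments directly instead of going through ToolRunner, so generic options such as -D settings are not picked up. Second, FileOutputFormat refuses to write into an existing directory, so the job fails immediately if /out is still there from a previous run. A small sketch of the usual run-and-inspect cycle, reusing the jar, main class, and paths from the example above:

# Clear the output directory from any earlier run; -f suppresses the error if it is absent.
hdfs dfs -rm -r -f /out
# Submit the job: hadoop jar <jar> <main class> <input> <output>
hadoop jar YinzhengjieMapReduce-1.0-SNAPSHOT.jar cn.org.yinzhengjie.mapreduce.wordcount.WordCountApp /world.txt /out
# A successful job leaves an _SUCCESS marker and one part file per reducer.
hdfs dfs -ls /out
hdfs dfs -cat /out/part-r-00000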

  For more commands related to "hadoop fs", see my notes: https://www.cnblogs.com/yinzhengjie/p/9906360.html
