HDFS - Daily Operation Commands (Linux Environment)

(0) Start the Hadoop cluster

[ck@hadoop102 hadoop-2.9.0]$ sbin/start-dfs.sh
[ck@hadoop103 hadoop-2.9.0]$ sbin/start-yarn.sh

(1) -help: print usage information for a command

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -help rm
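
Running -help with no command name prints help for every command, and -usage prints just the one-line synopsis (both are stock hadoop fs options):

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -help
[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -usage rm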

(2) -ls: list directory contents

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -ls /      (list the / directory)
[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -ls -R /   (list / and its subdirectories recursively; the old -lsr form is deprecated)

(3) -mkdir: create a directory on HDFS (-p creates missing parent directories, like Linux mkdir -p)

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -mkdir -p /sanguo/shuguo

(4) -moveFromLocal: cut and paste a file from the local file system to HDFS (the local copy is removed)

[ck@hadoop103 hadoop-2.9.0]$ touch kongming.txt
[ck@hadoop103 hadoop-2.9.0]$ vim kongming.txt
[ck@hadoop103 hadoop-2.9.0]$ bin/hadoop fs -moveFromLocal ./kongming.txt /sanguo/shuguo/

(5) -appendToFile: append a local file to the end of an existing HDFS file (unlike -moveFromLocal, the local file is kept)

[ck@hadoop102 hadoop-2.9.0]$ touch liubei.txt
[ck@hadoop102 hadoop-2.9.0]$ vim liubei.txt
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -appendToFile liubei.txt  /sanguo/shuguo/kongming.txt
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -cat /sanguo/shuguo/kongming.txt

(6) -chgrp, -chmod, -chown: same usage as on a Linux file system; change a file's group, permissions, or owner

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -chgrp ck /sanguo/shuguo/kongming.txt
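
Only -chgrp is shown above; -chmod and -chown follow the same pattern (a minimal sketch, assuming a user and group named ck exist on the cluster):

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -chmod 666 /sanguo/shuguo/kongming.txt
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -chown ck:ck /sanguo/shuguo/kongming.txt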

(7) -copyFromLocal: copy a file from the local file system to an HDFS path

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -copyFromLocal ./caochao.txt /sanguo/shuguo/

(8) -copyToLocal: copy from HDFS to the local file system

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -copyToLocal /sanguo/shuguo/kongming.txt ./

(9) -cp: copy from one HDFS path to another HDFS path

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -cp /sanguo/shuguo/kongming.txt /sanguo/

(10) -mv: move a file within HDFS

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -mv /sanguo/kongming.txt /

(11) -get: equivalent to copyToLocal; download a file from HDFS to local

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -get /kongming.txt ./

(12) -getmerge: merge and download multiple files, e.g., download all files under the HDFS directory /sanguo/shuguo merged into a single local file.

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -getmerge /sanguo/shuguo/* ./zaiyiqi.txt
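
A quick check of the merged result uses plain Linux cat on the downloaded file:

[ck@hadoop102 hadoop-2.9.0]$ cat ./zaiyiqi.txt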

(13) -put: equivalent to copyFromLocal; upload a local file to HDFS

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -put ./LICENSE.txt  /sanguo/shuguo/
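
By default -put refuses to overwrite an existing destination; the -f flag forces the overwrite (re-uploading the same file as a sketch):

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -put -f ./LICENSE.txt /sanguo/shuguo/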

(14) -tail: display the last kilobyte of a file

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -tail /sanguo/shuguo/LICENSE.txt
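
-tail also accepts -f to keep following the file as it grows, like Linux tail -f (stop with Ctrl+C):

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -tail -f /sanguo/shuguo/LICENSE.txt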

(15) -rm: delete files or directories

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -rm /sanguo/shuguo/LICENSE.txt
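
The example above deletes a single file; add -r to delete a directory and everything under it (shown against a hypothetical /tmpdir so the tutorial data stays intact):

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -rm -r /tmpdir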

(16) -rmdir: delete an empty directory (fails if the directory is not empty)

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -mkdir /test
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -rmdir /test

(17) -du: show the size of files and directories

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -du /
366744329  /hadoop-2.9.0.tar.gz
16         /kongming.txt
49         /sanguo
45         /wc.input
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -du -h /    (human-readable units)
349.8 M  /hadoop-2.9.0.tar.gz
16       /kongming.txt
49       /sanguo
45       /wc.input
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -du -h -s /  (summary total for the directory)
   349.8 M  /

(18) -setrep: set the replication factor of a file in HDFS

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -setrep 2 /kongming.txt

        The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines in the current cluster, there can be at most 3 replicas; a replication factor of 10, for example, would only be physically reached once the node count grows to 10.
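
To verify the change, the second column of -ls output is the file's current replication factor, and hdfs fsck reports replication per block (both are stock commands):

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -ls /kongming.txt
[ck@hadoop102 hadoop-2.9.0]$ hdfs fsck /kongming.txt -files -blocks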

 

These examples are compiled from the atguigu tutorial videos.
