When HDFS is running out of space, besides deleting temporary or garbage data, you can also lower the replication factor of selected large directories; combining both approaches frees space faster.
1 View
$ hdfs dfs -ls /user/hive/warehouse/temp.db/test_ext_o
-rwxr-xr-x 3 hadoop supergroup 44324200 2019-02-28 16:36 /user/hive/warehouse/temp.db/test_ext_o/000000_0
The 3 after the permission bits is the replication factor.
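On a real cluster you would pipe `hdfs dfs -ls <path>` into awk; as a minimal sketch, the snippet below extracts the replication column from the sample listing line shown above:

```shell
# The replication factor is the 2nd whitespace-separated field of an
# `hdfs dfs -ls` output line. LINE is copied from the listing above;
# replace the echo with `hdfs dfs -ls <path>` on a live cluster.
LINE='-rwxr-xr-x 3 hadoop supergroup 44324200 2019-02-28 16:36 /user/hive/warehouse/temp.db/test_ext_o/000000_0'
echo "$LINE" | awk '{print $2}'
# prints: 3
```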
2 Modify
$ hadoop fs -setrep [-w] 2 /user/hive/warehouse/temp.db/test_ext_o/000000_0
or
$ hdfs dfs -setrep [-w] 2 /user/hive/warehouse/temp.db/test_ext_o/000000_0
WARNING: the waiting time may be long for DECREASING the number of replications.
This changes the replication factor to 2; the target path can be either a single file or a directory (for a directory, the change applies to the files under it).
The optional -w flag makes the command wait until the replication change completes; as the warning above notes, this can potentially take a very long time, especially when decreasing the replica count.
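The raw capacity freed by lowering replication is simply file size times the number of replicas removed. A back-of-the-envelope sketch using the file size from the listing above:

```shell
# Estimate raw HDFS space freed by dropping replication from 3 to 2.
SIZE=44324200   # bytes, taken from the ls output above
OLD_REP=3
NEW_REP=2
SAVED=$(( SIZE * (OLD_REP - NEW_REP) ))
echo "$SAVED"   # bytes of raw capacity freed
# prints: 44324200
```

So for this ~44 MB file, going from 3 replicas to 2 frees roughly one file-size worth of raw cluster capacity.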