Warning: fs.defaultFS is not set when running "ls" command.

Symptoms

Two new nodes were added with Cloudera Manager and the DataNode role was deployed on them. The deployment succeeded and the Gateway role was also added, but running HDFS commands on the new nodes produces an error.
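The original report does not quote the exact command; a typical command that reproduces the warning on a new node would be:

hdfs dfs -ls /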

Error Message

Warning: fs.defaultFS is not set when running "ls" command.
Found 22 items
-rw-r--r--   1 root root          0 2020-11-03 05:30 /.autorelabel
dr-xr-xr-x   - root root      28672 2021-04-18 21:42 /bin
dr-xr-xr-x   - root root       4096 2020-10-27 04:18 /boot
drwxr-xr-x   - root root       2720 2021-04-14 22:51 /dev
drwxr-xr-x   - root root       8192 2021-04-18 23:11 /etc
drwxr-xr-x   - root root        124 2020-12-18 05:15 /falcon-agent
drwxr-xr-x   - root root         82 2021-04-18 22:32 /home
dr-xr-xr-x   - root root       4096 2021-04-18 21:42 /lib
dr-xr-xr-x   - root root      28672 2021-04-18 21:20 /lib64
drwxr-xr-x   - root root          6 2018-04-10 23:59 /media
drwxr-xr-x   - root root          6 2018-04-10 23:59 /mnt
drwxr-xr-x   - root root        156 2021-04-18 21:04 /opt
dr-xr-xr-x   - root root          0 2021-04-14 22:50 /proc
dr-xr-x---   - root root       4096 2021-04-18 23:11 /root
drwxr-xr-x   - root root        860 2021-04-18 22:29 /run
dr-xr-xr-x   - root root      12288 2021-04-18 21:20 /sbin
drwxr-xr-x   - root root          6 2018-04-10 23:59 /srv
dr-xr-xr-x   - root root          0 2021-04-18 23:12 /sys


Root Cause

Check the /opt/cloudera/parcels/CDH/lib/hadoop/etc/hadoop directory (or /etc/hadoop/conf): the contents of this directory on the newly added nodes may not match what the existing cluster nodes have, so the client configuration, including fs.defaultFS in core-site.xml, is missing. With fs.defaultFS unset, the HDFS client falls back to the local filesystem, which is why the listing above shows local directories such as /bin and /etc instead of HDFS paths. The stale contents therefore need to be removed and the correct configuration put in place manually.
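A quick way to confirm this is to inspect the client configuration on the new node and compare it with a working node (a minimal sketch; the NameNode host and port shown in the comments are examples, not values from this cluster):

# Run on the new node; compare with the same files on an existing DataNode.
ls -l /etc/hadoop/conf/
grep -A1 fs.defaultFS /etc/hadoop/conf/core-site.xml
# On a healthy node this prints something like (host/port are examples):
#   <name>fs.defaultFS</name>
#   <value>hdfs://namenode01:8020</value>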

Solution

Clear the /etc/hadoop/conf directory on the newly added DataNode, then copy the contents of /etc/hadoop/conf from an existing DataNode to the new node, as sketched below.
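A minimal sketch of those steps, assuming root access and SSH to an existing DataNode (the hostname good-datanode is a placeholder):

# Back up rather than delete outright, in case a rollback is needed.
cp -r /etc/hadoop/conf /root/hadoop-conf.bak
rm -rf /etc/hadoop/conf/*
# Copy the client configuration from a working DataNode (placeholder hostname).
scp -r good-datanode:/etc/hadoop/conf/* /etc/hadoop/conf/
# Verify: this should now list HDFS paths instead of the local root.
hdfs dfs -ls /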

