Fixing the HDFS append error: Failed to replace a bad datanode on the existing pipeline due to no more goo......

  While researching this error, most of the advice online is to edit the hdfs-site.xml configuration file and add:

<property>
  <name>dfs.namenode.http.address</name>
  <value>slave1:50070</value>
</property>
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>

However, the error persisted after I added these properties. I finally found a working fix in the following post (https://blog.csdn.net/caiandyong/article/details/44730031?utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7EOPENSEARCH%7Edefault-2.no_search_link&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7EOPENSEARCH%7Edefault-2.no_search_link)

Add the following to the Java code:

// On datanode failure during a write pipeline, keep writing to the remaining
// datanodes instead of trying to replace the bad one (which fails on small clusters).
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
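Putting it together, a minimal sketch of an append that applies these client-side settings might look like the following. The NameNode URI `hdfs://slave1:9000` and the file path are illustrative placeholders, not values from the original post:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same settings as above: do not try to replace a failed datanode
        // in the write pipeline; continue with the remaining ones.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

        // Placeholder cluster address and file path.
        FileSystem fs = FileSystem.get(URI.create("hdfs://slave1:9000"), conf);
        Path file = new Path("/tmp/append-demo.log");

        // Append to an existing file (the file must already exist).
        try (FSDataOutputStream out = fs.append(file)) {
            out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```

Note that setting the policy to NEVER trades durability for availability: with fewer live replicas in the pipeline, it is best suited to small clusters (around three datanodes or fewer), which is exactly the situation where the replacement attempt has no spare datanode to pick and throws this error.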

