Hadoop 2.7.3 cluster setup: problems encountered and solutions

  • OS installation

    1. To install CentOS you need to make a bootable USB drive with UltraISO: open the ISO image first, then choose Write Disk Image from the Bootable menu.
    2. Error: during CentOS 7 installation a long string of starting timeout messages scrolls by, ending with dev/root does not exist

      This means the installer cannot find the installation image. At the error prompt, run cd /dev and list the devices; there are dozens of them, and the ones starting with sd are the storage devices. In my case I saw sda, sda4, and sdb. Reboot, press e on the install menu entry to edit it, and change vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 rd.live.check quiet to vmlinuz initrd=initrd.img inst.stage2=hd:/dev/sda quiet, then press Ctrl+X to start the installation. If that does not work, try sdb or sda4; one of them will get you through.
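      For reference, the boot entry before and after the edit looks roughly like this (the target device is an assumption; substitute sda4 or sdb if sda does not work):

        before: vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 rd.live.check quiet
        after:  vmlinuz initrd=initrd.img inst.stage2=hd:/dev/sda quiet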
  • SSH

1. Passwordless SSH is already configured, so why does start-all.sh still ask for the passwords of s2 and s3?
Answer: the SSH keys were set up for the hadoop user, so run su hadoop to switch to the hadoop user first and then execute start-all.sh.
2. SSH was not configured successfully; how do I start over?
Answer: switch to the hadoop user, go into the ~/.ssh directory and delete every file in it (do this on all three nodes), then run ssh localhost to reinitialize SSH.
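One way to redo the setup as the hadoop user, sketched below under the assumption that the three nodes are named master, s2, and s3 (clear ~/.ssh on every node first as described above, then run these steps on the master):

    su hadoop
    ssh localhost                                # recreate ~/.ssh with the right permissions
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa     # generate a new key pair without a passphrase
    ssh-copy-id hadoop@master                    # authorize the key on every node, including master itself
    ssh-copy-id hadoop@s2
    ssh-copy-id hadoop@s3
    ssh s2                                       # should now log in without asking for a password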
  • Hadoop

    1. start-all.sh fails with: ssh: Could not resolve hostname master: Temporary failure in name resolution
      Answer: make sure the host names of the three machines are spelled correctly in every configuration file; check /etc/hosts, core-site.xml, hdfs-site.xml, and mapred-site.xml.
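      For example, /etc/hosts on all three machines should map each host name to its address; the addresses below are placeholders for illustration, use your cluster's real ones:

        192.168.1.111   master
        192.168.1.112   s2
        192.168.1.113   s3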
    2. The DataNode does not start; its log shows that the path /user/hadoop/dfs/data does not exist
      Answer: make sure the directory given as the value of dfs.datanode.data.dir in hdfs-site.xml has actually been created.
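      For example, assuming dfs.datanode.data.dir points at /usr/hadoop/dfs/data (the path used elsewhere on this cluster), create it on every DataNode before starting HDFS:

        mkdir -p /usr/hadoop/dfs/data            # create the directory the DataNode expects
        chown -R hadoop:hadoop /usr/hadoop/dfs   # and make sure the hadoop user can write to it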
    3. The DataNode does not start; its log shows that it has no permission on the data directory and cannot operate on it
      Answer: run chown -R hadoop:hadoop /usr/hadoop on all three nodes to give ownership of that directory to the hadoop user of the hadoop group.
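      To verify the change took effect on each node:

        chown -R hadoop:hadoop /usr/hadoop   # run as root on every node
        ls -ld /usr/hadoop                   # owner and group should now both be hadoop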
    4. Cannot create file/business1/2017-08-02/15/.logs.1501660021526.tmp. Name node is in safe mode.
      Answer: the NameNode is currently in safe mode; just leave safe mode with hadoop dfsadmin -safemode leave.
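      For example (in 2.x the hdfs command is preferred over the older hadoop dfsadmin form, but both work):

        hdfs dfsadmin -safemode get     # prints whether safe mode is ON or OFF
        hdfs dfsadmin -safemode leave   # force the NameNode out of safe mode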
    5. DataXceiver error processing WRITE_BLOCK operation
        2017-08-03 01:27:55,667 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: s3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.1.113:47061 dst: /192.168.1.113:50010
        java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/192.168.1.113:50010 remote=/192.168.1.113:47061]. 60000 millis timeout left.
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:748)
    Taken literally this looks like a file lease timeout, but what actually happens is that the file gets deleted while the data stream is still writing to it. I have run into this before; it is usually caused by several MapReduce tasks operating on the same file, with one task deleting it once it finishes.
Solution:
Increase xceiverCount to 8192 and restart the cluster for the change to take effect.
Edit hdfs-site.xml (for the 2.x versions; in 1.x the property name should be dfs.datanode.max.xcievers):
<property>
        <name>dfs.datanode.max.transfer.threads</name> 
        <value>8192</value> 
</property>
Copy the file to every DataNode and restart the DataNodes.
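A rough sketch of pushing the file out and restarting the DataNodes from the master, assuming Hadoop is installed under /usr/hadoop on every node:

    for node in s2 s3; do
        scp /usr/hadoop/etc/hadoop/hdfs-site.xml hadoop@$node:/usr/hadoop/etc/hadoop/
        ssh hadoop@$node '/usr/hadoop/sbin/hadoop-daemon.sh stop datanode; /usr/hadoop/sbin/hadoop-daemon.sh start datanode'
    done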

    6. java.io.IOException: Incompatible clusterIDs

        java.io.IOException: Incompatible clusterIDs in /usr/hadoop/dfs/data: namenode clusterID = CID-86e16085-c061-4806-aac1-6f125689d567; datanode clusterID = CID-888eeac4-405f-4e3e-a5c3-c5195da71455
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
        at java.lang.Thread.run(Thread.java:748)

        Solution:

        Copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the old clusterID.
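        A sketch of doing this by hand, assuming the NameNode metadata lives under /usr/hadoop/dfs/name and the DataNode data under /usr/hadoop/dfs/data (adjust to your dfs.namenode.name.dir and dfs.datanode.data.dir values):

        grep clusterID /usr/hadoop/dfs/name/current/VERSION   # on the NameNode: read the authoritative clusterID
        vi /usr/hadoop/dfs/data/current/VERSION                # on each DataNode: replace its clusterID line with that value
        # then restart the DataNode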
  • Flume

    1. ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
      Answer:
      1) Adjust the channel configuration parameters:
      agent.channels.memoryChanne3.keep-alive = 60
      agent.channels.memoryChanne3.capacity = 1000000  (set a value appropriate for your load)
      2) Increase the maximum Java heap size:
      vim bin/flume-ng
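      In the stock bin/flume-ng script the default heap ceiling is tiny; the line to change looks roughly like the first one below (the 2048m value is only an example, size it to your channel capacity):

        JAVA_OPTS="-Xmx20m"      # default shipped with Flume
        JAVA_OPTS="-Xmx2048m"    # raise the maximum heap so a large memory channel fits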