Today someone in a chat group asked a Hadoop question, saying they couldn't find anything about it on Baidu. Fortunately I had played with Hadoop for a while before and had run into this exact problem.
The environment was hadoop-2.2.0 with HBase 0.95-hadoop2. HDFS was running fine, but when starting HBase the HMaster process shut down as soon as it launched:
2013-11-21 20:07:58,892 FATAL [master:master:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "master/192.168.0.102"; destination host is: "master":8010;
Problems like this are almost always version mismatches. hadoop 2.2 is quite new; to pair with HBase 0.95 it is better to use 2.0.2.
Another workaround some people recommend: in the proto file, change callId and status from required to optional, or else always set values for callId and status.
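For context, the two fields named in the exception come from Hadoop's RPC response header definition. The workaround amounts to an edit along these lines (a sketch only; the exact file, message name, and field numbers vary between Hadoop versions):

```protobuf
// Hadoop RPC response header (illustrative fragment, not the exact source)
message RpcResponseHeaderProto {
  optional uint32 callId = 1;          // was: required
  optional RpcStatusProto status = 2;  // was: required
}
```

Note that patching the schema and rebuilding the generated classes only papers over the wire-format mismatch; aligning the Hadoop versions on both sides is the cleaner fix.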
I later switched to HBase 0.96 and that problem went away, but then a new one showed up:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcServerException): Unknown out of band call #-2147483647
This one didn't seem to turn up on Baidu either, so I looked at discussions overseas:
http://search-hadoop.com/m/0Pnu41YGCIi/hbase+tianying+0.96&subj=RE+Hbase+0+96+and+Hadoop+2+2
What is your HBase version and Hadoop version? There is a RPC break change in hadoop 2.2. As a workaround, I removed the hadoop hadoop-common-2.2.0.2.0.6.0-68.jar and hadoop-hdfs-2.2.0.2.0.6.0-68.jar from my Hbase/lib and let it use the one in hadoop path, then this error is gone.
The fix described there is to remove hadoop-common-2.2.0.2.0.6.0-68.jar and hadoop-hdfs-2.2.0.2.0.6.0-68.jar from hbase/lib, so that HBase falls back to the jars on the Hadoop classpath instead.
In my case I was too lazy to pick out individual jars, so I simply copied all of Hadoop's jars into hbase/lib, and that did the trick.
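Either way, the mechanics of the jar swap are the same. A sketch below mocks the directory layout under /tmp purely to show the steps; on a real node, substitute your actual HBASE_HOME and HADOOP_HOME, and the version strings here are just examples.

```shell
# Mocked layout for illustration only -- replace with your real install paths.
HBASE_HOME=/tmp/demo-hbase
HADOOP_HOME=/tmp/demo-hadoop
mkdir -p "$HBASE_HOME/lib" \
         "$HADOOP_HOME/share/hadoop/common" \
         "$HADOOP_HOME/share/hadoop/hdfs"

# HBase 0.96 bundles jars built against hadoop 2.1.0-beta...
touch "$HBASE_HOME/lib/hadoop-common-2.1.0-beta.jar" \
      "$HBASE_HOME/lib/hadoop-hdfs-2.1.0-beta.jar"
# ...while the cluster actually runs 2.2.0:
touch "$HADOOP_HOME/share/hadoop/common/hadoop-common-2.2.0.jar" \
      "$HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar"

# Step 1: remove the bundled (mismatched) Hadoop jars from hbase/lib.
rm -f "$HBASE_HOME"/lib/hadoop-common-*.jar "$HBASE_HOME"/lib/hadoop-hdfs-*.jar

# Step 2: copy in the jars from the running Hadoop install.
find "$HADOOP_HOME/share/hadoop" -name 'hadoop-*.jar' \
     -exec cp {} "$HBASE_HOME/lib/" \;

ls "$HBASE_HOME/lib"   # only the 2.2.0 jars remain
```

After the swap, restart HBase so the HMaster picks up the cluster's own RPC classes.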
There is one more suggestion:
Did you replace the hadoop jars that is under hbase/lib with those of the cluster you are connecting too? 0.96.0 bundles 2.1.0-beta and sounds like you want to connect to hadoop-2.2.0. The two hadoops need to be the same version (the above is a new variant on the mismatch message, each more cryptic than the last).
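Before reaching for any of the fixes, it helps to confirm there actually is a mismatch. A small sketch below, again with the layout mocked under /tmp; on a real node point HBASE_HOME at your install and compare the result against `hadoop version`.

```shell
# Mocked for illustration -- use your real $HBASE_HOME on an actual node.
HBASE_HOME=/tmp/check-hbase
mkdir -p "$HBASE_HOME/lib"
touch "$HBASE_HOME/lib/hadoop-common-2.1.0-beta.jar"  # what 0.96.0 ships

# Strip the bundled jar's name down to its version string.
bundled=$(basename "$HBASE_HOME"/lib/hadoop-common-*.jar \
          | sed 's/^hadoop-common-//; s/\.jar$//')
echo "HBase bundles hadoop-common $bundled"
# prints: HBase bundles hadoop-common 2.1.0-beta
```

If the printed version differs from what `hadoop version` reports on the cluster, you have found the source of the cryptic RPC errors.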
Swapping hadoop 2.2 for hadoop 2.1 should also solve it, although I haven't tried that myself.