Recently we were upgrading Hive in production (CDH4.2.0 Hive 0.10 -> Apache Hive 0.11 with our patches) and rolling out Shark, and we hit quite a few pitfalls along the way. Let's start with a HiveServer2 problem.
After connecting with beeline, any query we ran failed with:
USERxxx don't have write privilegs under /tmp/hive-hdfs
That made no sense: impersonation was already enabled, so why was it still writing temporary files to the scratchdir under /tmp/hive-hdfs on HDFS? Reading the code revealed that CDH4.2's Hive and Apache Hive 0.11 make this decision differently:
Apache Hive 0.11: only when Kerberos authentication is enabled (together with doAs) does it use a per-user hive-xxx scratchdir; otherwise it uses the scratchdir of the user who started the hiveserver:
if (cliService.getHiveConf().getVar(ConfVars.HIVE_SERVER2_AUTHENTICATION)
        .equals(HiveAuthFactory.AuthTypes.KERBEROS.toString())
    && cliService.getHiveConf().getBoolVar(ConfVars.HIVE_SERVER2_ENABLE_DOAS)) {
  String delegationTokenStr = null;
  try {
    delegationTokenStr = cliService.getDelegationTokenFromMetaStore(userName);
  } catch (UnsupportedOperationException e) {
    // The delegation token is not applicable in the given deployment mode
  }
  sessionHandle = cliService.openSessionWithImpersonation(userName, req.getPassword(),
      req.getConfiguration(), delegationTokenStr);
} else {
  sessionHandle = cliService.openSession(userName, req.getPassword(),
      req.getConfiguration());
}
Cloudera CDH4.2.0's Hive 0.10, on the other hand, uses a separate per-user scratchdir whenever impersonation is enabled:
if (cliService.getHiveConf()
    .getBoolVar(HiveConf.ConfVars.HIVE_SERVER2_KERBEROS_IMPERSONATION)) {
  String delegationTokenStr = null;
  try {
    delegationTokenStr = cliService.getDelegationTokenFromMetaStore(userName);
  } catch (UnsupportedOperationException e) {
    // The delegation token is not applicable in the given deployment mode
  }
  sessionHandle = cliService.openSessionWithImpersonation(userName, req.getPassword(),
      req.getConfiguration(), delegationTokenStr);
} else {
  sessionHandle = cliService.openSession(userName, req.getPassword(),
      req.getConfiguration());
}
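The difference between the two branches boils down to the condition guarding openSessionWithImpersonation. Here is a minimal sketch of the two decision rules, with boolean parameters standing in for the real HiveConf lookups (the class and method names are illustrative, not actual Hive APIs):

```java
// Sketch of how the two Hive versions decide whether to open a session
// with impersonation (and thus use a per-user scratchdir).
public class ScratchDirDecision {

    // Apache Hive 0.11: impersonation requires Kerberos auth AND doAs.
    static boolean apache011UsesImpersonation(boolean kerberosAuth, boolean doAs) {
        return kerberosAuth && doAs;
    }

    // CDH 4.2.0 Hive 0.10: impersonation whenever doAs is enabled.
    static boolean cdh010UsesImpersonation(boolean doAs) {
        return doAs;
    }

    public static void main(String[] args) {
        // Our upgrade scenario: doAs on, Kerberos off.
        // 0.11 falls back to the start user's scratchdir (hence the
        // /tmp/hive-hdfs permission error), while 0.10 impersonates.
        System.out.println("0.11 impersonates: " + apache011UsesImpersonation(false, true));
        System.out.println("0.10 impersonates: " + cdh010UsesImpersonation(true));
    }
}
```

With doAs enabled but no Kerberos, 0.11 takes the else branch, so every query writes under the hiveserver start user's scratchdir, which other users cannot write to.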
This was treated as a HiveServer2 bug and fixed in 0.13: https://issues.apache.org/jira/browse/HIVE-5486
The workaround is simple: just chmod /tmp/hive-hdfs to 777. =。= What a trap.
This article was reposted from MIKE老毕's 51CTO blog. Original link: http://blog.51cto.com/boylook/1352929. For reprints, please contact the original author.