I am having a problem with Kerberos credentials. The job runs on a Kerberized cluster, and the keytab is available on every data node. Basically, it is an Oozie workflow shell action whose purpose is to write data to HBase through a Spark job. The job works fine when run in cluster mode without Oozie, but under Oozie it throws the following exception:
WARN AbstractRpcClient: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/11/26 15:30:24 ERROR AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734)
    at java.security.AccessController.doPrivileged(Native Method)
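Since the GSSException says no Kerberos TGT can be found, one diagnostic worth trying (a hedged suggestion on my side, since the Oozie launcher container does not inherit a user's ticket cache) is an explicit login from the keytab at the top of submit.sh:

# Hypothetical diagnostic step: obtain a TGT inside the launcher container
# (PRINCIPAL and KEYTAB are the env-vars the action already passes in)
kinit -kt "${KEYTAB}" "${PRINCIPAL}"
# verify the ticket cache that spark-submit will see
klist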
The Oozie shell action looks like this:
<action name="spark-hbase" retry-max="${retryMax}" retry-interval="${retryInterval}">
    <shell xmlns="uri:oozie:shell-action:0.3">
        <exec>submit.sh</exec>
        <env-var>QUEUE_NAME=${queueName}</env-var>
        <env-var>PRINCIPAL=${principal}</env-var>
        <env-var>KEYTAB=${keytab}</env-var>
        <env-var>VERBOSE=${verbose}</env-var>
        <env-var>CURR_DATE=${firstNotNull(currentDate, "")}</env-var>
        <env-var>DATA_TABLE=${dataTable}</env-var>
        <file>bin/submit.sh</file>
    </shell>
    <ok to="end"/>
    <error to="kill"/>
</action>
The spark-submit command in the submit.sh file looks like this:
CLASS="App class location"
JAR="compiled jar file"
HBASE_JARS="HBase jars"
HBASE_CONF='hbase-site.xml location'
HIVE_JARS="Hive jars"
HIVE_CONF='tez-site.xml location'
HADOOP_CONF='hdfs-site.xml location'
SPARK_BIN_DIR="spark2-client bin directory location"
${SPARK_BIN_DIR}/spark-submit \
--class ${CLASS} \
--principal "${PRINCIPAL}" \
--keytab "${KEYTAB}" \
--master yarn \
--deploy-mode cluster \
--driver-memory 10G \
--executor-memory 4G \
--num-executors 10 \
--conf spark.default.parallelism=24 \
--jars ${HBASE_JARS},${HIVE_JARS} \
--files ${HBASE_CONF},${HIVE_CONF},${HADOOP_CONF} \
--conf spark.ui.port=4042 \
--conf "spark.executor.extraJavaOptions=-verbose:class -
Dsun.security.krb5.debug=true" \
--conf "spark.driver.extraJavaOptions=-verbose:class -
Dsun.security.krb5.debug=true" \
--queue "${QUEUE_NAME}" \
${JAR} \
--app.name "spark-hbase" \
--data.table "${DATA_TABLE}" \
--verbose
1 Answer
Creating soft links on all nodes in the cluster may not always be feasible. We solved this by overriding the SPARK_CONF_DIR environment variable in the shell before the spark-submit command, adding the HBase configuration directory to the Spark configuration.
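A minimal sketch of that override (directory locations here are assumptions and will differ per cluster): copy the default Spark client configuration into a writable directory, place hbase-site.xml next to it, and export SPARK_CONF_DIR before spark-submit so the HBase settings are on the submitter's classpath when HBase delegation tokens are fetched:

# Sketch only; /etc/spark2/conf and /etc/hbase/conf are assumed locations
export SPARK_CONF_DIR="$(pwd)/spark_conf"
mkdir -p "${SPARK_CONF_DIR}"
cp /etc/spark2/conf/* "${SPARK_CONF_DIR}/"                # start from the default client config
cp /etc/hbase/conf/hbase-site.xml "${SPARK_CONF_DIR}/"    # expose the HBase config to Spark
# then run spark-submit as before, unchanged; it reads SPARK_CONF_DIR at launch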