I set up a Hadoop cluster with Hortonworks Data Platform 2.5, which also includes Ambari 2.4, Kerberos, Spark 1.6.2 and HDFS.
I have Kerberos principals and keytabs for, among others, the following users:
spark (created by Ambari while enabling Kerberos)
hdfsuserA (created via kadmin -> add_principal)
The user spark runs the spark-submit command, but the Spark application has to open some files under the HDFS directory /user/hdfsuserA/..., which belongs to hdfsuserA (permissions 700).
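A quick way to see this permission setup in effect (a hedged sketch; the keytab path is the one used for hdfsuserA elsewhere in this post) is to check which principal owns the current ticket and whether it can list the protected directory:
klist                              # show the principal of the current Kerberos ticket (e.g. spark)
hdfs dfs -ls /user/hdfsuserA       # as spark this fails: the directory is mode 700 and owned by hdfsuserA
kinit -kt /etc/security/keytabs/hdfsuserA.keytab hdfsuserA
hdfs dfs -ls /user/hdfsuserA       # as the owner hdfsuserA this succeeds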
Since enabling Kerberos, my Spark application no longer runs; it fails with the following exception:
[Stage 1:> (0 + 92) / 162]Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 55 in stage 1.0 failed 4 times, most recent failure: Lost task 55.3 in stage 1.0 (TID 225, had-data1): org.apache.hadoop.security.AccessControlException: Permission denied: user=spark, access=EXECUTE, inode="/user/hdfsuserA/new/data/Export_PDM_Hadoop_05_2016.csv":hdfsuserA:hadoop:drwx------
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1785)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1862)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
The problem is that I authenticate as the user spark in order to be able to start the Spark application, but inside the application I then get the exception above, because the spark user has no access to the HDFS directory /user/hdfsuserA.
When I run the spark-submit command as the user hdfsuserA instead, I get:
[hdfsuserA@had-job ~]$ kinit -kt /etc/security/keytabs/hdfsuserA.keytab hdfsuserA
[hdfsuserA@had-job ~]$ spark-submit --class spark.sales.TestAnalysis --master yarn --deploy-mode client /home/hdfsuserA/application_new.jar hdfs://had-job:8020/user/hdfsuserA/new/data/*
16/12/03 09:44:46 INFO Remoting: Starting remoting
16/12/03 09:44:46 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@141.79.71.34:46996]
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
spark.driver.cores is set but does not apply in client mode.
16/12/03 09:44:49 INFO metastore: Trying to connect to metastore with URI thrift://had-job:9083
16/12/03 09:44:49 INFO metastore: Connected to metastore.
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
at myutil.SparkContextFactory.createSparkContext(SparkContextFactory.java:34)
at spark.sales.BasketBasedSalesAnalysis.main(BasketBasedSalesAnalysis.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
What is the correct solution for a problem like this? Can I, for example, kinit as a different user from inside the application?
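For the in-application part of the question: a programmatic "kinit as another user" would roughly be a keytab login via Hadoop's UserGroupInformation API. The sketch below is only an illustration, not the accepted fix; the realm, the keytab path and running it in the driver are assumptions.

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class LoginAsHdfsUserA {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Log in from the keytab instead of relying on the ticket cache of the OS user
        // that launched the JVM; principal/realm and keytab path are assumptions.
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "hdfsuserA@EXAMPLE.COM", "/etc/security/keytabs/hdfsuserA.keytab");

        // Perform the HDFS access as hdfsuserA.
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            FileSystem fs = FileSystem.get(conf);
            fs.listStatus(new Path("/user/hdfsuserA")); // passes the permission check as the owner
            return null;
        });
    }
}

Note that this only authenticates the driver JVM; in yarn mode the executors still need valid credentials or delegation tokens for the same principal, which is why simply submitting the job as hdfsuserA (see the answer below) is the more robust route.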
1 Answer
I found the problem: it was a user issue! Because I had only created the OS user hdfsuserA on the namenode host of the cluster (the host from which I ran the spark-submit command), the application could not authenticate as this user via the keytab on the other hosts. So the fix is to add the same user on every host of the cluster, for example as sketched below:
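(The exact commands are not shown in the original post; a minimal sketch, assuming standard Linux user management and the hadoop group seen in the AccessControlException above:)
# run on every cluster node; membership in the "hadoop" group is an assumption
useradd -G hadoop hdfsuserA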
After that, invoking the Spark application works (with the --master yarn argument to spark-submit; with --master local[x] it had always worked anyway)!