Is PySpark failing on groupBy and write because of partitioning?

a64a0gku · posted 2021-07-13 in Spark

I am reading some comments from a Hive data source. The physical plan is shown below as well.

comments = hive.executeQuery("""
    select *
    from mytable
    order by number, entity, seq
""")

comments.explain()
== Physical Plan ==
*(1) DataSourceV2Scan [number, seq, entity, comment#546],
    com.hortonworks.spark.sql.hive.llap.readers.HiveWarehouseDataSourceReader@602bc52a

As you can see above, the physical plan says nothing about a partitioning key, so I am not sure whether any partition-column information is exposed here. Is it?
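
One way to check is to ask Hive for the table metadata directly. A minimal sketch, assuming hive is the same HiveWarehouseSession as above; describeTable should be part of the HWC API, but if it is not available in your build, a plain DESCRIBE FORMATTED query gives the same information:

# Check whether mytable is partitioned: for partitioned tables Hive
# prints a "# Partition Information" section in the describe output.
hive.describeTable("mytable").show(100, truncate=False)

# Equivalent query form, in case describeTable is not available:
hive.executeQuery("describe formatted mytable").show(100, truncate=False)

# The extended explain also prints the parsed/analyzed/optimized plans,
# which may show more than the physical plan alone.
comments.explain(True)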

filtered_comments = comments.filter(comments["number"].isin(nums_list[:10]))

filtered_comments.explain()

== Physical Plan ==
*(1) Filter number#538 IN (1,2,3,4,5,6,7,8,9,10)
+- *(1) DataSourceV2Scan [number, seq, entity, comment#546],
       com.hortonworks.spark.sql.hive.llap.readers.HiveWarehouseDataSourceReader@602bc52a

This filters my comments. nums_list is a list of numbers; to keep it simple I used only the first 10.
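
As an aside, the filter could also be pushed into the Hive query itself, so that Hive/LLAP filters the rows before anything reaches Spark. A minimal sketch of that variant (my own construction, assuming nums_list holds integers):

# Hypothetical variant: build the IN list into the query so filtering
# happens on the Hive/LLAP side rather than in Spark.
in_list = ", ".join(str(n) for n in nums_list[:10])
filtered_comments = hive.executeQuery(f"""
    select *
    from mytable
    where number in ({in_list})
    order by number, entity, seq
""")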

from pyspark.sql.functions import concat_ws, collect_list

comments_group = filtered_comments.groupBy(['number', 'entity']).agg(concat_ws(" ", collect_list("comment")).alias("comments"))
comments_group.explain()

== Physical Plan ==
ObjectHashAggregate(keys=[number#538, entity#544], functions=[collect_list(comment#546, 0, 0)])
+- Exchange hashpartitioning(number#538, entity#544, 200)
   +- ObjectHashAggregate(keys=[number#538, entity#544], functions=[partial_collect_list(comment#546, 0, 0)])
      +- *(1) Project [number#538, entity#544, comments#546]
         +- *(1) Filter number#538 IN (1,2,3,4,5,6,7,8,9,10)
            +- *(1) DataSourceV2Scan [number, seq, entity, comment#546],
               com.hortonworks.spark.sql.hive.llap.readers.HiveWarehouseDataSourceReader@602bc52a

comment is a string column; I group by number and entity to gather all the comments together.

comments_group.write.parquet('group_new.parquet')

Error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-46-8221c51a1614> in <module>
----> 1 comments_group.write.parquet('group_new.parquet')

/usr/hdp/current/spark2-client/python/pyspark/sql/readwriter.py in parquet(self, path, mode, partitionBy, compression)
    802             self.partitionBy(partitionBy)
    803         self._set_opts(compression=compression)
--> 804         self._jwrite.parquet(path)
    805 
    806     @since(1.6)

/usr/hdp/current/spark2-client/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py in deco(*a,**kw)
     61     def deco(*a,**kw):
     62         try:
---> 63             return f(*a,**kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/hdp/current/spark2-client/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o99243.parquet.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:664)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:557)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
ObjectHashAggregate(keys=[number#262, entity#268], functions=[collect_list(comment#270, 0, 0)], output=[number#262, entity#268, comments#378])
+- Exchange hashpartitioning(number#262, entity#268, 200)
   +- ObjectHashAggregate(keys=[number#262, entity#268], functions=[partial_collect_list(comments_x#270, 0, 0)], output=[number#262, entity#268, buf#383])
      +- *(1) Project [number#262, entity#268, comments#270]
         +- *(1) Filter number#262 INSET (1,2,3,4,5,6,7,8,9,10)
            +- *(1) DataSourceV2Scan [number, seq, entity, comment#546],
               com.hortonworks.spark.sql.hive.llap.readers.HiveWarehouseDataSourceReader@602bc52a

    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
    at org.apache.spark.sql.execution.aggregate.ObjectHashAggregateExec.doExecute(ObjectHashAggregateExec.scala:100)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:92)
    at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:128)
    at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
    ... 49 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create temp table: org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, vertexName=Map 4, vertexId=vertex_1610893834202_1574226_310_00, diagnostics=[Vertex vertex_1610893834202_1574226_310_00 [Map 4] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
    at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:71)
    at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:152)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4123)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:208)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2933)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2880)
    at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2862)
    at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
    at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
    at org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
    at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
    at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1958)
    at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:207)
    at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2317)
    at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2303)
    at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:180)
    at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:115)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
    ... 25 more
Caused by: java.lang.IllegalArgumentException: No running LLAP daemons! Please check LLAP service status and zookeeper configuration
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
    at org.apache.hadoop.hive.ql.exec.tez.Utils.getCustomSplitLocationProvider(Utils.java:81)
    at org.apache.hadoop.hive.ql.exec.tez.Utils.getSplitLocationProvider(Utils.java:53)
    at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:140)
    ... 30 more
    ]

Vertex killed, vertexName=Reducer 5, vertexId=vertex_1610893834202_1574226_310_01, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1610893834202_1574226_310_01 [Reducer 5] killed/failed due to:OTHER_VERTEX_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1610893834202_1574226_310_03, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1610893834202_1574226_310_03 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]Vertex killed, vertexName=Reducer 3, vertexId=vertex_1610893834202_1574226_310_04, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1610893834202_1574226_310_04 [Reducer 3] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:3
        at com.hortonworks.spark.sql.hive.llap.readers.HiveWarehouseDataSourceReader.getSplits(HiveWarehouseDataSourceReader.java:315)
        at com.hortonworks.spark.sql.hive.llap.readers.HiveWarehouseDataSourceReader.getDataReaderSplitsFactories(HiveWarehouseDataSourceReader.java:251)
        at com.hortonworks.spark.sql.hive.llap.readers.HiveWarehouseDataSourceReader.createBatchDataReaderFactories(HiveWarehouseDataSourceReader.java:214)
        ... 75 more
    Caused by: java.io.IOException: shadehive.org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create temp table: org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, vertexName=Map 4, vertexId=vertex_1610893834202_1574226_310_00, diagnostics=[Vertex vertex_1610893834202_1574226_310_00 [Map 4] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
        at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:71)
        at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:152)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4123)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:208)
        at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2933)
        at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2880)
        at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2862)
        at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
        at org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
        at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1958)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:207)
        at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2317)
        at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2303)
        at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:180)
        at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:115)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
        ... 25 more
    Caused by: java.lang.IllegalArgumentException: No running LLAP daemons! Please check LLAP service status and zookeeper configuration
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
        at org.apache.hadoop.hive.ql.exec.tez.Utils.getCustomSplitLocationProvider(Utils.java:81)
        at org.apache.hadoop.hive.ql.exec.tez.Utils.getSplitLocationProvider(Utils.java:53)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:140)
        ... 30 more
    ]
    ]Vertex failed, vertexName=Map 1, vertexId=vertex_1610893834202_1574226_310_02, diagnostics=[Vertex vertex_1610893834202_1574226_310_02 [Map 1] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
        at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:71)
        at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:152)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4123)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:208)
        at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2933)
        at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2880)
        at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2862)
        at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
        at org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
        at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1958)
        at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:207)
        at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2317)
        at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2303)
        at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:180)
        at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:115)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
        ... 25 more
    Caused by: java.lang.IllegalArgumentException: No running LLAP daemons! Please check LLAP service status and zookeeper configuration
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
        at org.apache.hadoop.hive.ql.exec.tez.Utils.getCustomSplitLocationProvider(Utils.java:81)
        at org.apache.hadoop.hive.ql.exec.tez.Utils.getSplitLocationProvider(Utils.java:53)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:140)
        ... 30 more
    ]Vertex killed, vertexName=Reducer 5, vertexId=vertex_1610893834202_1574226_310_01, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1610893834202_1574226_310_01 [Reducer 5] killed/failed due to:OTHER_VERTEX_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1610893834202_1574226_310_03, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1610893834202_1574226_310_03 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]Vertex killed, vertexName=Reducer 3, vertexId=vertex_1610893834202_1574226_310_04, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1610893834202_1574226_310_04 [Reducer 3] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:3
        at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.createPlanFragment(GenericUDTFGetSplits.java:358)
        at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplitResult(GenericUDTFGetSplits.java:244)
        at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits2.process(GenericUDTFGetSplits2.java:78)
        ... 27 more

Here are my observations from exploring so far: the table I am querying is actually a view. So my question is: does this fail because the view is not partitioned?
Or could it be that my data is partitioned by number and entity, and I simply do not get enough resources when I try to group by those columns?
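
For what it is worth, the innermost cause in the trace is "No running LLAP daemons! Please check LLAP service status and zookeeper configuration", raised from the HWC reader's getSplits, i.e., while generating input splits for the read. A minimal isolation sketch (my own idea, not a confirmed fix): force an action on the filtered DataFrame alone, without the groupBy; if it fails the same way, the read itself is the problem, and neither the grouping nor the view's partitioning is the culprit.

# Hypothetical isolation test: trigger just the read + filter, no shuffle.
# If this raises the same "No running LLAP daemons" error, the aggregation
# and any partitioning of the view are not to blame.
filtered_comments.limit(10).show()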
