I'm trying to connect from Spark to gRPC. It works fine locally, but when testing on AWS EMR (after running sbt assembly) it conflicts with the packages bundled with Spark on EMR, so I shaded the libraries that Spark already ships with:
assembly / assemblyShadeRules := Seq(
ShadeRule.rename("io.grpc.**" -> "shade.io.grpc.@1").inAll,
ShadeRule.rename("io.netty.**" -> "shade.io.netty.@1").inAll,
ShadeRule.rename("com.google.protobuf.**" -> "shade.com.google.protobuf.@1").inAll,
ShadeRule.rename("com.google.common.**" -> "shade.com.google.common.@1").inAll
)
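A minimal sketch for verifying the relocation, assuming the shade.io.grpc prefix from the rules above (the ShadeCheck object and the chosen class are just illustrative):
// Resolve one of the relocated class names at runtime on the cluster to confirm
// it was shaded into the assembled jar. getCodeSource may be null for bootstrap classes.
object ShadeCheck {
  def main(args: Array[String]): Unit = {
    val clazz = Class.forName("shade.io.grpc.ManagedChannel")
    val location = Option(clazz.getProtectionDomain.getCodeSource).map(_.getLocation)
    println(s"Loaded ${clazz.getName} from ${location.getOrElse("unknown location")}")
  }
}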
Versions used: Spark 3.1.1, Scala 2.12.10, sbt 1.6.2, AWS EMR 6.3.1, Java 8. This is the error we are getting:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 6.0 failed 4 times, most recent failure: Lost task 2.3 in stage 6.0 (TID 27) (ip-10-50-133-143.ec2.internal executor 2): java.lang.VerifyError: Operand stack overflow
Exception Details:
Location:
shade/io/grpc/internal/TransportTracer.getStats()Lshade/io/grpc/InternalChannelz$TransportStats; @102: lload_3
Reason:
Exceeded max stack size.
Current Frame:
bci: @102
flags: { }
locals: { 'shade/io/grpc/internal/TransportTracer', long, long_2nd, long, long_2nd }
stack: { uninitialized 52, uninitialized 52, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd, long, long_2nd }
Bytecode:
0x0000000: 2ab4 0041 c700 0914 0042 a700 0f2a b400
0x0000010: 41b9 0047 0100 b400 4a40 2ab4 0041 c700
0x0000020: 0914 0042 a700 0f2a b400 41b9 0047 0100
0x0000030: b400 4d42 bb00 1259 2ab4 004f 2ab4 0051
0x0000040: 2ab4 0053 2ab4 0055 2ab4 0057 2ab4 0059
0x0000050: 2ab4 0033 b900 5f01 002a b400 612a b400
0x0000060: 632a b400 651f 21b7 0068 b0
Stackmap Table:
same_frame(@13)
same_locals_1_stack_item_frame(@25,Long)
append_frame(@39,Long)
same_locals_1_stack_item_frame(@51,Long)
1 Answer
What version of sbt-assembly are you using? Several shading bugs have been fixed, so make sure you are on the latest release (currently 2.0.0). I have seen very similar exceptions with outdated versions of the plugin.
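A minimal sketch of pinning the plugin version in project/plugins.sbt (2.0.0 being the latest at the time of writing):
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "2.0.0")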