Scala: Task not serializable exception when converting a Dataset to an RDD

8yparm6h posted on 2023-03-30 in Scala

I have a Dataset that looks like this:

dataset.show(10)

+-----------+
|   features|
+-----------+
|[14.378858]|
|[14.388442]|
|[14.384361]|
|[14.386358]|
|[14.390068]|
|[14.423256]|
|[14.425567]|
|[14.434074]|
|[14.437667]|
|[14.445997]|
+-----------+
only showing top 10 rows

However, when I try to convert this Dataset to an RDD using .rdd, like so:

val myRDD = dataset.rdd

I get an exception like the following:

Task not serializable: java.io.NotSerializableException: scala.runtime.LazyRef
Serialization stack:
    - object not serializable (class: scala.runtime.LazyRef, value: LazyRef thunk)
    - element of array (index: 2)
    - array (class [Ljava.lang.Object;, size 3)
    - field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
    - object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class org.apache.spark.sql.catalyst.expressions.ScalaUDF, functionalInterfaceMethod=scala/Function1.apply:(Ljava/lang/Object;)Ljava/lang/Object;, implementation=invokeStatic org/apache/spark/sql/catalyst/expressions/ScalaUDF.$anonfun$f$2:(Lscala/Function1;Lorg/apache/spark/sql/catalyst/expressions/Expression;Lscala/runtime/LazyRef;Lorg/apache/spark/sql/catalyst/InternalRow;)Ljava/lang/Object;, instantiatedMethodType=(Lorg/apache/spark/sql/catalyst/InternalRow;)Ljava/lang/Object;, numCaptured=3])
    - writeReplace data (class: java.lang.invoke.SerializedLambda)

How can I fix this?


6yoyoihd 1#

java.io.NotSerializableException: scala.runtime.LazyRef

clearly points to a runtime version mismatch. You did not mention your Spark version...
This is a Scala version issue; downgrading to Scala 2.11 should make it work.
Check the version compatibility table at https://mvnrepository.com/artifact/org.apache.spark/spark-core and change your Scala version accordingly.
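
For reference, a minimal build.sbt sketch showing how the two versions are kept in sync, assuming an sbt build. The version numbers here are illustrative assumptions, not taken from the question; pick a pairing from the compatibility table above:

// build.sbt -- illustrative versions only; choose a pairing from the version table
scalaVersion := "2.12.8"  // 2.12.8+ avoids the LazyRef lambda-serialization issue

// %% appends the Scala binary suffix (_2.12), so it must match scalaVersion
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.8" % "provided"

The point of using %% is that sbt resolves the artifact whose Scala suffix matches your scalaVersion, so the two cannot silently drift apart.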


brc7rcf0 2#

I ran into a similar problem: my Scala version was 2.12.2, and upgrading to 2.12.8 fixed it.
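
To confirm a mismatch like this first, assuming you can open a spark-shell against the same environment, a quick sanity check is to print both versions:

println(scala.util.Properties.versionString)  // Scala library actually on the classpath
println(org.apache.spark.SPARK_VERSION)       // Spark version of the runtime

The Scala binary version printed there should match the _2.xx suffix of the Spark artifacts you build against.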
