Py4JJavaError when using randomSplit() on Google Colab

yyhrrdl8 · asked 2021-09-08 · Java

I'm trying to split the data in a PySpark DataFrame, using this code:

```
train, validation, test = movie_ratings_spark.randomSplit([6, 2, 2])

# cache data
train.cache()
validation.cache()
test.cache()
```

I get the following error:

```
Py4JJavaError: An error occurred while calling o46.randomSplit.
: java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Double
    at scala.runtime.BoxesRunTime.unboxToDouble(BoxesRunTime.java:116)
    at scala.runtime.ScalaRunTime$.array_update(ScalaRunTime.scala:77)
    at scala.collection.IterableLike.copyToArray(IterableLike.scala:256)
    at scala.collection.IterableLike.copyToArray$(IterableLike.scala:251)
    at scala.collection.AbstractIterable.copyToArray(Iterable.scala:56)
    at scala.collection.TraversableOnce.copyToArray(TraversableOnce.scala:283)
    at scala.collection.TraversableOnce.copyToArray$(TraversableOnce.scala:282)
    at scala.collection.AbstractTraversable.copyToArray(Traversable.scala:108)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:291)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
    at scala.collection.AbstractTraversable.toArray(Traversable.scala:108)
    at org.apache.spark.sql.Dataset.randomSplit(Dataset.scala:2292)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
```

I did some searching and found this might be a compatibility issue between PySpark 2.1 and Python 3. I'm using Google Colab; if anyone has managed to solve this, please help. Thanks.

gkn4icbw1#

This is from the docstring of the `randomSplit` method:

Randomly splits this :class:`DataFrame` with the provided weights.
:param weights: list of doubles as weights with which to split the :class:`DataFrame`. Weights will be normalized if they don't sum up to 1.0.
:param seed: The seed for sampling.

So you should pass a list of doubles, not integers:

```
train, validation, test = movie_ratings_spark.randomSplit([6., 2., 2.])
```
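
Here is a minimal, self-contained sketch of the same pattern, assuming a local SparkSession and a small made-up ratings DataFrame standing in for `movie_ratings_spark`. Float weights avoid the Integer-to-Double cast error, and weights that don't sum to 1.0 (like `[6.0, 2.0, 2.0]`) are normalized automatically:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("randomSplit-demo").getOrCreate()

# Hypothetical small ratings DataFrame (an assumption for this sketch)
movie_ratings_spark = spark.createDataFrame(
    [(i, i % 5, float(i % 10) / 2) for i in range(100)],
    ["userId", "movieId", "rating"],
)

# Pass floats; [6.0, 2.0, 2.0] is normalized to [0.6, 0.2, 0.2]
train, validation, test = movie_ratings_spark.randomSplit([6.0, 2.0, 2.0], seed=42)

# Cache the splits so repeated actions don't recompute them
train.cache()
validation.cache()
test.cache()

print(train.count(), validation.count(), test.count())
```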
