Simple UDF apply function from the pyspark documentation fails in Spark 3.3

sdnqo3pr  posted on 2023-10-15 in Spark

This simple piece of code from the latest documentation does not work on an EMR Studio Spark cluster (current version: 3.3.1-amzn-0):

import pandas as pd

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))

def subtract_mean(pdf: pd.DataFrame) -> pd.DataFrame:
    # pdf is a pandas.DataFrame holding all rows for one id
    v = pdf.v
    return pdf.assign(v=v - v.mean())

df.groupby("id").applyInPandas(subtract_mean, schema="id long, v double").show()
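
For reference, subtract_mean removes each group's mean from v (group 1 has mean 1.5, group 2 has mean 6.0), so on a cluster where pandas UDFs work, show() should print:

+---+----+
| id|   v|
+---+----+
|  1|-0.5|
|  1| 0.5|
|  2|-3.0|
|  2|-1.0|
|  2| 4.0|
+---+----+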

The error looks like this:

An error was encountered:
An error occurred while calling o184.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 59) (ip-10-130-55-119.us-east-1.aws.thousandeyes.com executor 7): java.lang.RuntimeException: Failed to run command: /usr/bin/virtualenv -p python3 --no-pip --system-site-packages virtualenv_application_1693557403809_0024_0
    at org.apache.spark.api.python.VirtualEnvFactory.execCommand(VirtualEnvFactory.scala:125)
    at org.apache.spark.api.python.VirtualEnvFactory.setupVirtualEnv(VirtualEnvFactory.scala:83)
    at org.apache.spark.api.python.PythonWorkerFactory.<init>(PythonWorkerFactory.scala:95)

I am fairly certain this is a Python package version problem, because another user ran into a similar issue with a previous version of Spark (see here). However, I have not managed to find the right versions of pandas/pyarrow to use.
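
Not part of the original post, but a quick sketch for anyone comparing package versions: print the pandas/pyarrow versions seen by the driver and by the executors (this assumes the spark session available in the notebook).

import pandas as pd
import pyarrow as pa

# Versions visible to the driver process.
print("driver:", pd.__version__, pa.__version__)

def _versions(_):
    # Runs inside the Python worker on an executor.
    import pandas, pyarrow
    yield (pandas.__version__, pyarrow.__version__)

# Note: on a cluster where the Python worker cannot start at all (as in the
# virtualenv error above), this executor-side check will fail with the same error.
print("executors:", spark.sparkContext.parallelize([0], 1).mapPartitions(_versions).collect())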

cx6n0qe3 1#

The solution was to open a ticket with AWS Support, who resolved the issue. Part of the solution was to put this in the first notebook cell:

%%configure -f
{
    "conf": {
        "spark.pyspark.python": "python3",
        "spark.pyspark.virtualenv.enabled": "true",
        "spark.pyspark.virtualenv.type": "native",
        "spark.pyspark.virtualenv.bin.path": "/usr/local/bin/virtualenv"
    }
}
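
As a quick sanity check (not from the original answer), once the session has restarted with this configuration you can read the settings back before re-running the applyInPandas example:

# Sketch: confirm the virtualenv-related properties were applied to the live session.
for key in ("spark.pyspark.python",
            "spark.pyspark.virtualenv.enabled",
            "spark.pyspark.virtualenv.type",
            "spark.pyspark.virtualenv.bin.path"):
    print(key, "=", spark.conf.get(key, "<not set>"))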
