"No module named 'pandas'" error when using PySpark pandas_udf on AWS EMR

1tu0hz3e · published 2021-07-13 in Spark
Follow (0) | Answers (1) | Views (590)

I ran the code from the Spark documentation (https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html) using Zeppelin on AWS EMR.

%pyspark
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType

df1 = spark.createDataFrame(
    [(20000101, 1, 1.0), (20000101, 2, 2.0), (20000102, 1, 3.0), (20000102, 2, 4.0)],
    ("time", "id", "v1"))

df2 = spark.createDataFrame(
    [(20000101, 1, "x"), (20000101, 2, "y")],
    ("time", "id", "v2"))

def asof_join(l, r):
    return pd.merge_asof(l, r, on="time", by="id")

df1.groupby("id").cogroup(df2.groupby("id")).applyInPandas(
    asof_join, schema="time int, id int, v1 double, v2 string").show()
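For reference, the cogrouped `applyInPandas` call above runs the `asof_join` function once per `id` group pair; the join itself is ordinary pandas `merge_asof`. A standalone sketch of what it computes, with the data copied from `df1`/`df2` (no Spark needed; `merge_asof` requires both frames to be sorted on the `on` key):

```python
import pandas as pd

# Same rows as df1 and df2 in the Spark example above
l = pd.DataFrame({"time": [20000101, 20000101, 20000102, 20000102],
                  "id":   [1, 2, 1, 2],
                  "v1":   [1.0, 2.0, 3.0, 4.0]})
r = pd.DataFrame({"time": [20000101, 20000101],
                  "id":   [1, 2],
                  "v2":   ["x", "y"]})

# Backward as-of join per id: each left row picks the most recent
# right row with the same id and time <= its own time.
out = pd.merge_asof(l.sort_values("time"), r.sort_values("time"),
                    on="time", by="id")
print(out["v2"].tolist())  # ['x', 'y', 'x', 'y']
```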

Running the last line produces a "ModuleNotFoundError: No module named 'pandas'" error: df1.groupby("id").cogroup(df2.groupby("id")).applyInPandas(asof_join, schema="time int, id int, v1 double, v2 string").show()

> pyspark.sql.utils.PythonException:   An exception was thrown from
> Python worker in the executor. The below is the Python worker
> stacktrace. Traceback (most recent call last):   File
> "/mnt/yarn/usercache/zeppelin/appcache/application_1765329837897_0004/container_1765329837897_0004_01_000026/pyspark.zip/pyspark/worker.py",
> line 589, in main
>     func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type)   File
> "/mnt/yarn/usercache/zeppelin/appcache/application_1765329837897_0004/container_1765329837897_0004_01_000026/pyspark.zip/pyspark/worker.py",
> line 434, in read_udfs
>     arg_offsets, f = read_single_udf(pickleSer, infile, eval_type, runner_conf, udf_index=0)   File
> "/mnt/yarn/usercache/zeppelin/appcache/application_1765329837897_0004/container_1765329837897_0004_01_000026/pyspark.zip/pyspark/worker.py",
> line 254, in read_single_udf
>     f, return_type = read_command(pickleSer, infile)   File "/mnt/yarn/usercache/zeppelin/appcache/application_1765329837897_0004/container_1765329837897_0004_01_000026/pyspark.zip/pyspark/worker.py",
> line 74, in read_command
>     command = serializer._read_with_length(file)   File "/mnt/yarn/usercache/zeppelin/appcache/application_1765329837897_0004/container_1765329837897_0004_01_000026/pyspark.zip/pyspark/serializers.py",
> line 172, in _read_with_length
>     return self.loads(obj)   File "/mnt/yarn/usercache/zeppelin/appcache/application_1765329837897_0004/container_1765329837897_0004_01_000026/pyspark.zip/pyspark/serializers.py",
> line 458, in loads
>     return pickle.loads(obj, encoding=encoding)   File "/mnt/yarn/usercache/zeppelin/appcache/application_1765329837897_0004/container_1765329837897_0004_01_000026/pyspark.zip/pyspark/cloudpickle.py",
> line 1110, in subimport
>     __import__(name)
> ModuleNotFoundError: No module named 'pandas'

The library versions I am using are pyspark 3.0.0, Spark 3.0.0, pyarrow 0.15.1, and Zeppelin 0.9.0, with the zeppelin.pyspark.python config property set to python3.
Since pandas was not installed in the stock EMR environment, I installed it with `sudo python3 -m pip install pandas`. I have confirmed that running `import pandas` in Zeppelin works fine.
However, when I use PySpark's pandas_udf, I get an error saying pandas cannot be found. Why does this happen, and how can I do this correctly?
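The asymmetry (imports fine in Zeppelin, fails inside the UDF) suggests pandas exists on the driver node but not on the executors, since `pandas_udf`/`applyInPandas` functions are deserialized and run in worker Python processes on the other nodes. One way to check this directly is to probe the executor Python; a minimal sketch (the `probe` helper is hypothetical, and on a cluster you would ship it to executors via `mapPartitions`):

```python
import importlib.util
import sys

def probe(_iter):
    """Report which Python interpreter this worker runs and whether it can import pandas."""
    has_pandas = importlib.util.find_spec("pandas") is not None
    return [(sys.executable, has_pandas)]

# On a live EMR cluster (hypothetical usage; requires a SparkContext `sc`):
# sc.parallelize(range(4), 4).mapPartitions(probe).collect()
```

If executors report `False` while the driver imports pandas without error, the package was installed only on the master node.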


lvjbypge1#

Putting `sudo python3 -m pip install pandas` into a shell script used as an EMR bootstrap action solves this problem: bootstrap actions run on every node, so the executors get pandas too, not just the master.
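For completeness, such a bootstrap script might look like the following sketch (the S3 bucket path and script name are placeholders, and pinning pyarrow to the version from the question is an assumption; adjust for your EMR release):

```shell
#!/bin/bash
# EMR bootstrap action: runs on every node (master and core) before
# applications start, so executors get the same packages as the driver.
sudo python3 -m pip install pandas pyarrow==0.15.1
```

Upload the script to S3 and register it when creating the cluster, e.g. via the EMR console's "Bootstrap actions" step or `aws emr create-cluster ... --bootstrap-actions Path=s3://your-bucket/install-pandas.sh` (bucket name is a placeholder). Note that bootstrap actions only apply to new clusters; on an existing cluster the packages must be installed on each node by hand.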
