UDF gives a pyarrow-related error

nlejzf6q · asked 2021-05-27 · in Spark

I have a dataframe, and I want to get the latitude of a given geolocation using the polyline library in PySpark.

+-----------------+--------------------+----------+
|              vid|        geolocations| trip_date|
+-----------------+--------------------+----------+
|58AC21B17LU006754|eurnE||yqU???????...|2020-02-22|
|2T3EWRFV0LW060632|uocbGfjniOK[Fs@rC...|2020-02-25|
|JTDP4RCE0LJ014008|w}wtFpdxtM????Q_@...|2020-02-25|
|4T1BZ1HK8KU029845|}rz_Dp~hhN?@?@???...|2020-03-03|
+-----------------+--------------------+----------+

I am using a pandas UDF and have enabled Apache Arrow:

import polyline
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import ArrayType, StringType

spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
spark.conf.set("spark.sql.execution.arrow.pyspark.fallback.enabled", "true")

lat_long_udf = pandas_udf(lambda geoloc: polyline.decode(geoloc)[0], ArrayType(StringType()))
df1 = df.withColumn('lat_long', lat_long_udf(df.geolocations))

Calling df.count() gives a result, but when I execute df.show() I get the following error:

248, in init_stream_yield_batches
    for series in iterator:
  File "/Users/prantik.pariksha/opt/anaconda3/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 450, in mapper
    result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
  File "/Users/prantik.pariksha/opt/anaconda3/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 450, in <genexpr>
    result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
  File "/Users/prantik.pariksha/opt/anaconda3/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 110, in <lambda>
    verify_result_type(f(*a)), len(a[0])), arrow_return_type)
  File "/Users/prantik.pariksha/opt/anaconda3/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/util.py", line 107, in wrapper
    return f(*args,**kwargs)
  File "<stdin>", line 1, in <lambda>
  File "/Users/prantik.pariksha/opt/anaconda3/lib/python3.8/site-packages/polyline/__init__.py", line 16, in decode
    return PolylineCodec().decode(expression, precision, geojson)
  File "/Users/prantik.pariksha/opt/anaconda3/lib/python3.8/site-packages/polyline/codec.py", line 43, in decode
    lat_change, index = self._trans(expression, index)
  File "/Users/prantik.pariksha/opt/anaconda3/lib/python3.8/site-packages/polyline/codec.py", line 31, in _trans
    byte = ord(value[index]) - 63
TypeError: ord() expected a character, but string of length 87 found

>>> print(pandas.__version__)
1.1.1
>>> print(numpy.__version__)
1.19.1
>>> import pyarrow
>>> print(pyarrow.__version__)
1.0.1

lskq00tm1#

You are most likely getting this error because a pandas_udf receives a pandas Series as input, and your decode function is applied directly to that Series instead of to each value in it.

In the example below I have expanded the lambda function a bit so you can see this. I take the pandas Series, apply the polyline.decode function to each element, and return the resulting Series. Note that I also changed the return type to ArrayType(DoubleType()) instead of ArrayType(StringType()), since the decoded coordinates are floating-point numbers.

import pandas as pd
import polyline

from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import ArrayType, DoubleType

....

df = spark.createDataFrame([["~sqU__pR_jpv@_pR"], ["_~t[__pR~qy@_pR"]], ["geolocations"])

@pandas_udf(ArrayType(DoubleType()))
def lat_long_udf(s: pd.Series) -> pd.Series:
    return s.apply(lambda x: polyline.decode(x)[0])

df1 = df.withColumn('decoded', lat_long_udf(df.geolocations))
df1.collect()
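The failure mode can be reproduced outside Spark with plain pandas. A function that indexes into a string and calls ord() (which is what polyline's internal _trans method does, per the traceback) works when applied elementwise with Series.apply, but raises the same TypeError when handed the whole Series. This is a minimal sketch with a toy stand-in for polyline.decode, so it does not require the polyline package:

```python
import pandas as pd

def first_char_code(value):
    # Mimics polyline's internal ord(value[index]) - 63 call.
    return ord(value[0]) - 63

s = pd.Series(["~sqU__pR", "_~t[__pR"])

# Applied elementwise, each value is a single string -> works.
codes = s.apply(first_char_code)
print(codes.tolist())  # [63, 32]

# Passed the whole Series, value[0] is the first *row* (a full
# multi-character string), so ord() raises the TypeError seen above.
try:
    first_char_code(s)
except TypeError as e:
    print(e)
```

The lambda in the question hits exactly the second case: Spark hands the whole Series to polyline.decode, which eventually calls ord() on a multi-character string.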
