I'm using Python 2.7 and Spark 2.2.0. I created a DataFrame in PySpark with a string-typed column containing URLs.
df = spark.createDataFrame([('example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0',)], ['url'])
df.show(1, False)
+-------------------------------------------------------------------------------------------------------+
|url |
+-------------------------------------------------------------------------------------------------------+
|example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0|
+-------------------------------------------------------------------------------------------------------+
To decode all the URLs in the column, I tried to use urllib. I created a UDF and applied it like this:
import urllib
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf
decode_url = udf(lambda val: urllib.unquote(val).decode('utf8'), StringType())
After applying the UDF to my column, I expected this:
+---------------------------------+
|url |
+---------------------------------+
|example.com?title=правовая+защита|
+---------------------------------+
But I got an error:
UnicodeEncodeError: 'ascii' codec can't encode characters in position 18-33: ordinal not in range(128)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:144)
at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:87)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
If I take a URL out of the column and try to decode it on its own, it works fine:
import urllib
url='example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0'
print urllib.unquote(url).decode('utf8')
example.com?title=правовая+защита
3 Answers
oyxsuwqo1#
There seems to be some strange encoding going on under the hood. Why not explicitly encode it yourself?
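A minimal sketch of that suggestion, assuming Python 2 semantics: Spark hands the UDF a unicode value, so encode it to UTF-8 bytes before unquoting, then decode the unescaped bytes back to unicode:

import urllib
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf

# Encode the unicode value to UTF-8 bytes first, so unquote operates on
# bytes; the implicit ASCII encode that raised the error never happens.
decode_url = udf(lambda val: urllib.unquote(val.encode('utf-8')).decode('utf-8'), StringType())

df.withColumn('url', decode_url('url')).show(1, False)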
zpjtge222#
Note: the encoded field is named dctr
Input:
im_segments%3Debejz4gv%2Ck1GmZLwg%2C8zY92P4g%2Cka6ee4eb%2CgPKlZXXb%2CqkVvpGk9%2Cky1ee4Dk%2CgvqKoW0b%2CgO5l6Zrk%2CgO5lGpdk%2CxkD6AYgm%2CgO5rENWk%2Cg7VrxvDb
Expected output:
im_segments=ebejz4gv,k1GmZLwg,8zY92P4g,ka6ee4eb,gPKlZXXb,qkVvpGk9,ky1ee4Dk,gvqKoW0b,gO5l6Zrk,gO5lGpdk,xkD6AYgm,gO5rENWk,g7VrxvDb
Answer:
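The answer body did not survive extraction. Given the input and expected output (%3D becomes = and %2C becomes ,), the same explicit encode/unquote/decode approach from the answer above should produce it, applied here to a column assumed to be named dctr:

import urllib
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf

# Percent-decode the dctr value: %3D -> '=' and %2C -> ','
decode_dctr = udf(lambda val: urllib.unquote(val.encode('utf-8')).decode('utf-8'), StringType())
df = df.withColumn('dctr', decode_dctr('dctr'))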
mjqavswn3#
Spark 3.5+
Spark 3.4+
Examples for both are sketched below:
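The original example code was lost in extraction. Assuming this answer refers to the built-in url_decode function (added to pyspark.sql.functions in Spark 3.5, and available as a SQL function since Spark 3.4), a sketch of both variants:

from pyspark.sql import functions as F

# Spark 3.5+: url_decode is a native DataFrame function
df = df.withColumn('url', F.url_decode('url'))

# Spark 3.4+: the same SQL function can be called through expr
df = df.withColumn('url', F.expr("url_decode(url)"))

Note that url_decode follows application/x-www-form-urlencoded rules, so a literal + is decoded to a space, unlike urllib.unquote in the question.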