Regex pattern not working in PySpark after applying logic

qrjkbowd asked on 2021-05-27 in Hadoop

My data looks like this:

>>> df1.show()
+-----------------+--------------------+
|     corruptNames|       standardNames|
+-----------------+--------------------+
|Sid is (Good boy)|     Sid is Good Boy|
|    New York Life| New York Life In...|
+-----------------+--------------------+

So, given the data above, I need to apply a regex and create a new column that picks up the data from the second column, standardNames. I tried the following code:

spark.sql("select *, case when corruptNames rlike '[^a-zA-Z ()]+(?![^(]*))' or corruptNames rlike 'standardNames' then standardNames else 0 end as standard from temp1").show()

It throws the following error:

pyspark.sql.utils.AnalysisException: "cannot resolve '`standardNames`' given input columns: [temp1.corruptNames, temp1. standardNames];
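
As an aside, the error message itself contains a clue: the input columns are listed as temp1.corruptNames and temp1. standardNames, with a space after the dot, which suggests the column in df1 is literally named " standardNames" with a leading space. A minimal check and fix, as a sketch (df1 and temp1 are the names from the question):

# Inspect the actual column names; a leading space would show up here.
print(df1.columns)  # e.g. ['corruptNames', ' standardNames']

# If the name does carry a leading space, rename it and rebuild the view.
df1 = df1.withColumnRenamed(' standardNames', 'standardNames')
df1.createOrReplaceTempView('temp1')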

vsmadaxz1#

Try this example instead of the select sql. I'm assuming that when the regex pattern matches you want to create a new column called standardNames based on corruptNames, and otherwise "do something else...".
Note: your pattern will not compile as written, because the second (i.e. the last) ) needs to be escaped with a \:

pattern = '[^a-zA-Z ()]+(?![^(]*))' #this won't compile
pattern = r'[^a-zA-Z ()]+(?![^(]*\))' #this will
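
To see the difference concretely, you can check both patterns with Python's re module as a quick sketch (Spark's rlike actually uses Java regexes, but both engines reject the unbalanced parenthesis in the same way):

import re

bad = '[^a-zA-Z ()]+(?![^(]*))'     # the trailing ) has no opening partner
good = r'[^a-zA-Z ()]+(?![^(]*\))'  # \) is a literal parenthesis

try:
    re.compile(bad)
except re.error as e:
    print('bad pattern rejected:', e)  # e.g. "unbalanced parenthesis"

re.compile(good)
print('escaped pattern compiles')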

Code

import pyspark.sql.functions as F

df_text = spark.createDataFrame([('Sid is (Good boy)',),('New York Life',)], ('corruptNames',))

pattern = r'[^a-zA-Z ()]+(?![^(]*\))'

# Note: .show() returns None, so it must not be part of the assignment;
# build the DataFrame first, then display it.
df = (df_text.withColumn('standardNames',
                         F.when(F.col('corruptNames').rlike(pattern), F.col('corruptNames'))
                          .otherwise('Do something else')))

df.show()

# +-----------------+---------------------+
# |     corruptNames|        standardNames|
# +-----------------+---------------------+
# |Sid is (Good boy)|    Do something else|
# |    New York Life|    Do something else|
# +-----------------+---------------------+
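
For completeness, here is the same fix carried back to the question's original SQL approach, as a sketch (assuming df_text is registered as the temp1 view; note that inside a Spark SQL string literal the backslash itself must be escaped, so the regex \) is written as \\)):

df_text.createOrReplaceTempView('temp1')

# The Python raw string keeps the backslashes intact; Spark SQL then
# unescapes '\\' to '\', so the regex engine receives \) as intended.
spark.sql(r"""
    SELECT *,
           CASE WHEN corruptNames RLIKE '[^a-zA-Z ()]+(?![^(]*\\))'
                THEN corruptNames
                ELSE 'Do something else'
           END AS standardNames
    FROM temp1
""").show()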
