Given the following DataFrame:
+---+-----------+---------+-------+------------+
| id| score|tx_amount|isValid| greeting|
+---+-----------+---------+-------+------------+
| 1| 0.2| 23.78| true| hello_world|
| 2| 0.6| 12.41| false|byebye_world|
+---+-----------+---------+-------+------------+
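For reference, a minimal sketch that reproduces this input; the column types are assumed from the displayed values:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical reconstruction of the sample data shown above
df = spark.createDataFrame(
    [(1, 0.2, 23.78, True, "hello_world"),
     (2, 0.6, 12.41, False, "byebye_world")],
    ["id", "score", "tx_amount", "isValid", "greeting"],
)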
I want to explode these columns into rows in a single column called "col_value" and apply logic to each row, so that I get a result like this:
+---+------------+--------+---------+----------+-------+
| id| col_value|is_score|is_amount|is_boolean|is_text|
+---+------------+--------+---------+----------+-------+
| 1| 0.2| Y| N| N| N|
| 1| 23.78| N| Y| N| N|
| 1| true| N| N| Y| N|
| 1| hello_world| N| N| N| Y|
| 2| 0.6| Y| N| N| N|
| 2| 12.41| N| Y| N| N|
| 2| false| N| N| Y| N|
| 2|byebye_world| N| N| N| Y|
+---+------------+--------+---------+----------+-------+
So far I have been using the F.arrays_zip function from Spark 2.4:
from pyspark.sql import functions as F

# Zip the four columns into an array of one-field structs, explode to one row per
# value, then classify each (string-cast) value with regexes and when/otherwise flags.
df.withColumn("cols", F.explode(F.arrays_zip(F.array("score", "tx_amount", "isValid", "greeting")))) \
  .select("id", F.col("cols.*")) \
  .withColumnRenamed("0", "col_value") \
  .withColumn("text", F.regexp_extract(F.col("col_value"), "([A-Za-z]+)", 1)) \
  .withColumn("boolean", F.when((F.col("text") == "true") | (F.col("text") == "false"), F.col("text")).otherwise(F.lit(""))) \
  .withColumn("text", F.when(F.col("text") == F.col("boolean"), F.lit("")).otherwise(F.col("text"))) \
  .withColumn("numeric", F.regexp_extract(F.col("col_value"), "([0-9]+)", 1)) \
  .withColumn("is_text", F.when(F.col("text") != "", F.lit("Y")).otherwise(F.lit("N"))) \
  .withColumn("is_score", F.when(F.col("numeric") <= 1, F.lit("Y")).otherwise(F.lit("N"))) \
  .withColumn("is_amount", F.when(F.col("numeric") > 1, F.lit("Y")).otherwise(F.lit("N"))) \
  .withColumn("is_boolean", F.when(F.col("boolean") != "", F.lit("Y")).otherwise(F.lit("N"))) \
  .select("id", "col_value", "is_score", "is_amount", "is_boolean", "is_text").show()
How can I do the same without F.arrays_zip on Spark 2.3?
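One possible direction, sketched here rather than a confirmed answer: arrays_zip above is only used to wrap the array in single-field structs, so on Spark 2.3 the same explode can be built from a plain string array instead:

from pyspark.sql import functions as F

cols = ["score", "tx_amount", "isValid", "greeting"]

# Cast every column to string and explode the resulting array:
# one output row per original column, no arrays_zip needed.
df.withColumn("col_value",
              F.explode(F.array(*[F.col(c).cast("string") for c in cols]))) \
  .select("id", "col_value") \
  .show()

The downstream regexp/when logic from the Spark 2.4 version should then apply unchanged to col_value.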