Remove null values from an array after combining columns - PySpark

hs1ihplo · asked 2021-07-09 · Spark

I have this PySpark dataframe df:

+---------+----+----+----+----+----+----+----+----+----+                        
|partition|   1|   2|   3|   4|   5|   6|   7|   8|   9|
+---------+----+----+----+----+----+----+----+----+----+
|        7|null|null|null|null|null|null| 0.7|null|null|
|        1| 0.2| 0.1| 0.3|null|null|null|null|null|null|
|        8|null|null|null|null|null|null|null| 0.8|null|
|        4|null|null|null| 0.4| 0.5| 0.6|null|null| 0.9|
+---------+----+----+----+----+----+----+----+----+----+

After combining the columns into a single array column vec_comb, I got:

+---------+--------------------+                                                
|partition|            vec_comb|
+---------+--------------------+
|        7|      [,,,,,,,, 0.7]|
|        1|[,,,,,, 0.1, 0.2,...|
|        8|      [,,,,,,,, 0.8]|
|        4|[,,,,, 0.4, 0.5, ...|
+---------+--------------------+
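A minimal sketch of one way such a vec_comb column can be produced from the numbered columns (the exact construction is not shown in the question; the sort step is an assumption based on the ordering visible in the output):

import pyspark.sql.functions as F

value_cols = [str(i) for i in range(1, 10)]
# collect the nine value columns into one array per row; nulls are kept as elements
df = df.withColumn('vec_comb', F.sort_array(F.array(*value_cols)))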

How can I remove the null values from the vec_comb column?
Expected output:

+---------+--------------------+                                                
|partition|            vec_comb|
+---------+--------------------+
|        7|               [0.7]|
|        1|     [0.1, 0.2, 0.3]|
|        8|               [0.8]|
|        4|[0.4, 0.5, 0.6, 0.9]|
+---------+--------------------+

I have already tried this (obviously wrong, but I can't wrap my head around it):

from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, FloatType

def clean_vec(array):
    new_array = []
    for element in array:
        # note: the array elements are plain Python floats, so this comparison is never True
        if type(element) == FloatType():
            new_array.append(element)
    return new_array

udf_clean_vec = F.udf(f=(lambda c: clean_vec(c)), returnType=ArrayType(FloatType()))
df = df.withColumn('vec_comb_cleaned', udf_clean_vec('vec_comb'))

zbwhf8kr #1

You can use the higher-order function filter to remove the null elements:

import pyspark.sql.functions as F

df2 = df.withColumn('vec_comb_cleaned', F.expr('filter(vec_comb, x -> x is not null)'))

df2.show()
+---------+--------------------+--------------------+
|partition|            vec_comb|    vec_comb_cleaned|
+---------+--------------------+--------------------+
|        7|      [,,,,,, 0.7,,]|               [0.7]|
|        1|[0.2, 0.1, 0.3,,,...|     [0.2, 0.1, 0.3]|
|        8|      [,,,,,,, 0.8,]|               [0.8]|
|        4|[,,, 0.4, 0.5, 0....|[0.4, 0.5, 0.6, 0.9]|
+---------+--------------------+--------------------+

You could use a UDF instead, but it will be slower, e.g.:

udf_clean_vec = F.udf(lambda x: [i for i in x if i is not None], 'array<float>')
df2 = df.withColumn('vec_comb_cleaned', udf_clean_vec('vec_comb'))
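On Spark 3.1+, the same filter can also be written with the DataFrame API function F.filter instead of an SQL expression string (an equivalent sketch, assuming Spark >= 3.1):

import pyspark.sql.functions as F

# keep only the non-null elements of the array column
df2 = df.withColumn('vec_comb_cleaned', F.filter('vec_comb', lambda x: x.isNotNull()))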

uxhixvfz #2

Without using any PySpark-specific features, you can also build the list by simply filtering out the NaN values (this treats df as a pandas DataFrame):

df['vec_comb'] = df.iloc[:, 1:10].apply(lambda r: list(filter(pd.notna, r)) , axis=1)
df

# Output:

   partition     1     2     3     4     5     6     7     8     9              vec_comb
0          7   NaN   NaN   NaN   NaN   NaN   NaN   0.7   NaN   NaN                 [0.7]
1          1   0.2   0.1   0.3   NaN   NaN   NaN   NaN   NaN   NaN       [0.2, 0.1, 0.3]
2          8   NaN   NaN   NaN   NaN   NaN   NaN   NaN   0.8   NaN                 [0.8]
3          4   NaN   NaN   NaN   0.4   0.5   0.6   NaN   NaN   0.9  [0.4, 0.5, 0.6, 0.9]

Then drop the old columns by selecting only the two columns you need:

df = df[['partition', 'vec_comb']]
df

# Output:

   partition              vec_comb
0          7                 [0.7]
1          1       [0.2, 0.1, 0.3]
2          8                 [0.8]
3          4  [0.4, 0.5, 0.6, 0.9]
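
For completeness, a self-contained sketch of this pandas approach using the sample data from the question (the string column names are assumed; only the column positions matter for iloc):

import numpy as np
import pandas as pd

# rebuild the sample data from the question
df = pd.DataFrame({
    'partition': [7, 1, 8, 4],
    '1': [np.nan, 0.2, np.nan, np.nan],
    '2': [np.nan, 0.1, np.nan, np.nan],
    '3': [np.nan, 0.3, np.nan, np.nan],
    '4': [np.nan, np.nan, np.nan, 0.4],
    '5': [np.nan, np.nan, np.nan, 0.5],
    '6': [np.nan, np.nan, np.nan, 0.6],
    '7': [0.7, np.nan, np.nan, np.nan],
    '8': [np.nan, np.nan, 0.8, np.nan],
    '9': [np.nan, np.nan, np.nan, 0.9],
})

# collect the non-NaN values of columns 1-9 into a list per row, then keep only the two columns
df['vec_comb'] = df.iloc[:, 1:10].apply(lambda r: list(filter(pd.notna, r)), axis=1)
df = df[['partition', 'vec_comb']]
print(df)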
