How do I split a list of dictionaries in a column into two columns in a PySpark DataFrame?

ldioqlga · posted in Spark on 2021-07-13


I want to split the filteredaddress column of the Spark DataFrame below into two new columns, flag and address:

customer_id|pincode|filteredaddress                                                               |flag|address
1000045801 |121005 |[{'flag':'0', 'address':'House number 172, Parvatiya Colony Part-2 , N.I.T'}]|    |
1000045801 |121005 |[{'flag':'1', 'address':'House number 172, Parvatiya Colony Part-2 , N.I.T'}]|    |
1000045801 |121005 |[{'flag':'1', 'address':'House number 172, Parvatiya Colony Part-2 , N.I.T'}]|    |

Can anyone tell me how to do this?


ghg1uchk #1

You can access the map column filteredaddress by its keys:

df2 = df.selectExpr(
    'customer_id', 'pincode',
    "filteredaddress['flag'] as flag", "filteredaddress['address'] as address"
)

Other ways to access the map values are:

import pyspark.sql.functions as F

df.select(
    'customer_id', 'pincode',
    F.col('filteredaddress')['flag'],
    F.col('filteredaddress')['address']
)

# or, more simply

df.select(
    'customer_id', 'pincode',
    'filteredaddress.flag',
    'filteredaddress.address'
)
