Group RDD based on values in PySpark

sqyvllje · posted 2021-05-27 in Spark

I created an RDD and printed the result with the following:

finalRDD = replacetimestampRDD.map(lambda x: (x[1], x[0:]))
print("Partitions structure: {}".format(finalRDD.glom().collect()))

Output (sample):

Partitions structure: [[('a', ['2020-05-22 15:17:10', 'John', '9535175']), 
                        ('b', ['2020-05-22 15:17:10', 'Nick', '7383554']),
                        ('c', ['2020-05-22 15:17:10', 'George', '8915433']),
                        ('a', ['2020-05-22 15:17:10', 'Paul', '9615224'])
                      ]]

I am trying to group the results by key (by key I mean 'a', 'b', 'c'). Expected output:

Partitions structure: [[('a', [['2020-05-22 15:17:10', 'John', '9535175'],['2020-05-22 15:17:10', 'Paul', '9615224']]), 
                        ('b', ['2020-05-22 15:17:10', 'Nick', '7383554']),
                        ('c', ['2020-05-22 15:17:10', 'George', '8915433'])
                          ]]

I tried results = finalRDD.groupByKey().collect(), but it doesn't seem to work. Can anyone help me?


bgibtngc1#

You can apply mapValues(list) after groupByKey() to build the lists of values. On its own, groupByKey() returns a pyspark.resultiterable.ResultIterable for each key rather than a plain list, which is why your collect() output did not look grouped:

finalRDD.groupByKey().mapValues(list).collect()

Output:

[('a',
  [['2020-05-22 15:17:10', 'John', '9535175'],
   ['2020-05-22 15:17:10', 'Paul', '9615224']]),
 ('b', [['2020-05-22 15:17:10', 'Nick', '7383554']]),
 ('c', [['2020-05-22 15:17:10', 'George', '8915433']])]
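
For reference, here is a minimal end-to-end sketch. The local SparkContext setup and the inlined sample data are assumptions added for illustration, mirroring the records shown in the question:

from pyspark import SparkContext

# Assumed local context so the sketch runs standalone.
sc = SparkContext("local", "groupByKey-example")

# Sample records mirroring the question's partition dump.
finalRDD = sc.parallelize([
    ('a', ['2020-05-22 15:17:10', 'John', '9535175']),
    ('b', ['2020-05-22 15:17:10', 'Nick', '7383554']),
    ('c', ['2020-05-22 15:17:10', 'George', '8915433']),
    ('a', ['2020-05-22 15:17:10', 'Paul', '9615224']),
])

# groupByKey() yields a ResultIterable per key; mapValues(list)
# materializes each iterable into a plain Python list.
results = finalRDD.groupByKey().mapValues(list).collect()
print(results)

sc.stop()

Note that groupByKey() shuffles every value for a key to a single executor, which is fine at this scale; for large aggregations, reduceByKey() or aggregateByKey() is usually cheaper.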
