I am trying to randomly permute the order of the elements in an RDD. My current approach is to zip the elements with an RDD of shuffled integers and then join on those integers.
However, pyspark falls over with only 100000000 integers. I am using the code below.
My question is: is there a better way to zip with random indices, or some other way to shuffle?
I have tried sorting by a random key, which works, but it is slow (a sketch of that variant follows the session below).
```
import random

def random_indices(n):
    """
    Return an iterable of random indices in range(0, n).
    """
    indices = range(n)        # Python 2: this builds the full list in driver memory
    random.shuffle(indices)
    return indices
```
The following happens in pyspark:
```
Using Python version 2.7.3 (default, Jun 22 2015 19:33:41)
SparkContext available as sc.
>>> import clean
>>> clean.sc = sc
>>> clean.random_indices(100000000)
Killed
```
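For reference, the slow-but-working sort-by-a-random-key approach mentioned above can be written directly with `RDD.sortBy`. This is only a minimal sketch; the name `rdd` stands in for the input RDD and is an assumption, not code from the question:

```
import random

# Give every element a random sort key; sortBy performs a full shuffle,
# which is why this is correct but slow on large RDDs.
shuffled_rdd = rdd.sortBy(lambda _: random.random())
```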
2 Answers

rqcrx0a61 1#
One possible approach is to use `mapPartitions`:
```
import os
import numpy as np

swap = lambda x: (x[1], x[0])

def add_random_key(it):
    # make sure we get a proper random seed
    seed = int(os.urandom(4).encode('hex'), 16)
    # create separate generator
    rs = np.random.RandomState(seed)
    # Could be randint if you prefer integers
    return ((rs.rand(), swap(x)) for x in it)

rdd_with_keys = (rdd
    # It will be used as the final key. If you don't accept gaps,
    # use zipWithIndex, but this should be cheaper.
    .zipWithUniqueId()
    .mapPartitions(add_random_key, preservesPartitioning=True))

n = rdd.getNumPartitions()

(rdd_with_keys
    # partition by random key to put data on a random partition
    .partitionBy(n)
    # sort partition by random value to ensure random order on partition
    .mapPartitions(sorted, preservesPartitioning=True)
    # extract (unique_id, value) pairs
    .values())
```
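As a usage note, the final pipeline yields `(unique_id, value)` pairs in the new random order; if only the shuffled values are wanted, one extra map strips the ids. A minimal sketch, assuming the pipeline above is assigned to a hypothetical name `shuffled` (the original answer does not assign it):

```
# `shuffled` is assumed to hold the (unique_id, value) pairs produced above.
shuffled_values = shuffled.map(lambda id_value: id_value[1])
print(shuffled_values.take(5))
```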
Theoretically it could be zipped with the input `rdd`, but it would require matching the number of elements per partition.
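To illustrate that constraint, `RDD.zip` only succeeds when both RDDs have the same number of partitions and the same number of elements in each corresponding partition. A small sketch with toy data (not from the answer):

```
a = sc.parallelize(range(6), 3)
b = sc.parallelize(["a", "b", "c", "d", "e", "f"], 3)

# Works: both RDDs have 3 partitions with 2 elements each.
print(a.zip(b).collect())

# Zipping RDDs whose partition counts or per-partition element counts
# differ raises an error instead.
```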
hfwmuf9z2 2#

Pyspark did it!