I want to split a pyspark DataFrame df and save the chunks separately. This is what I did: I define a column id_tmp and split the DataFrame based on it.
from pyspark.sql.functions import row_number, monotonically_increasing_id
from pyspark.sql.window import Window

chunk = 10000
id1 = 0
id2 = chunk
# assign a stable, 0-based row index to every record
df = df.withColumn('id_tmp', row_number().over(Window.orderBy(monotonically_increasing_id())) - 1)
c = df.count()
while id1 < c:
    stop_df = df.filter((df.id_tmp >= id1) & (df.id_tmp < id2))
    # write each chunk to its own subfolder so successive writes do not collide
    stop_df.write.format('com.databricks.spark.csv').save('myFolder/chunk_{}'.format(id1 // chunk))
    id1 += chunk
    id2 += chunk
Is there a more efficient way to do this without defining the column id_tmp?
1 Answer
I suggest you use the partitionBy method of the DataFrameWriter interface built into Spark (docs). Here is an example. Given df as the DataFrame, the chunk identifier must be one or more columns; in my example, id_tmp. The snippet below generates a df with 12 records and 4 chunk ids.
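A minimal sketch of such a snippet, assuming an active SparkSession named spark:

import pyspark.sql.functions as F

# 12 rows; id_tmp = id modulo 4 gives four chunk ids (0-3), three rows each
df = (spark.range(0, 12)
      .withColumn('id_tmp', F.col('id') % 4)
      .orderBy('id_tmp'))
df.show()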
To save each chunk independently, you need:
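A sketch of the corresponding write, assuming the id_tmp column from above; the repartition call keeps each chunk in a single output file, and 'output_folder' is an illustrative path:

(df
 .repartition('id_tmp')
 .write
 .partitionBy('id_tmp')
 .mode('overwrite')
 .format('csv')
 .save('output_folder'))

Spark then writes one subfolder per chunk id: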
output_folder/
output_folder/._SUCCESS.crc
output_folder/id_tmp=0
output_folder/id_tmp=0/.part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv.crc
output_folder/id_tmp=0/part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv
output_folder/id_tmp=1
output_folder/id_tmp=1/.part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv.crc
output_folder/id_tmp=1/part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv
output_folder/id_tmp=2
output_folder/id_tmp=2/.part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv.crc
output_folder/id_tmp=2/part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv
output_folder/id_tmp=3
output_folder/id_tmp=3/.part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv.crc
output_folder/id_tmp=3/part-00000-eba244a4-ce95-4f4d-b9b8-8e5f972b144f.c000.csv
output_folder/_SUCCESS