How to iterate over a grouped PySpark Pandas DataFrame

ih99xse1 posted on 2023-04-10 in Spark

I have a grouped pyspark pandas dataframe => 'groups', and I'm trying to iterate over the groups the same way I would in pandas:

import pyspark.pandas as ps

dataframe = ps.read_excel("data.xlsx")
groups = dataframe.groupby(['col1', 'col2'])
for name, group in groups:
    print(name)
    ...

I get the following error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[29], line 1
----> 1 for name, group in groups:
      2     print(name)

File /opt/spark/python/pyspark/pandas/groupby.py:2806, in DataFrameGroupBy.__getitem__(self, item)
   2803 def __getitem__(self, item: Any) -> GroupBy:
   2804     if self._as_index and is_name_like_value(item):
   2805         return SeriesGroupBy(
-> 2806             self._psdf._psser_for(item if is_name_like_tuple(item) else (item,)),
   2807             self._groupkeys,
   2808             dropna=self._dropna,
   2809         )
   2810     else:
   2811         if is_name_like_tuple(item):

File /opt/spark/python/pyspark/pandas/frame.py:699, in DataFrame._psser_for(self, label)
    672 def _psser_for(self, label: Label) -> "Series":
    673     """
    674     Create Series with a proper column label.
    675 
   (...)
    697     Name: id, dtype: int64
    698     """
--> 699     return self._pssers[label]

KeyError: (0,)

Is there a way to do this, or a workaround?


oxosxuxt 1#

Group by doesn't work the same way in pyspark.pandas as it does in pandas: the GroupBy object isn't iterable (it has no __iter__), so the for loop falls back to Python's legacy protocol and calls __getitem__ with 0, 1, ..., which is exactly the KeyError: (0,) in your traceback. You can convert to pandas, iterate, and convert back afterwards. That's not ideal if you're working with a large dataset, since it pulls everything to the driver, but it is a solution.

import pyspark.pandas as ps

dataframe = ps.read_excel("data.xlsx")
pdf = dataframe.to_pandas()  # collect to the driver as a plain pandas DataFrame
groups = pdf.groupby(['col1', 'col2'])
for name, group in groups:
    print(name)
    ...
    ps_groups = ps.from_pandas(group)  # convert a group back to pyspark.pandas if needed
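
If the dataset is too large to convert to pandas in one go, a possible middle ground (a sketch of my own, not part of the answer above) is to collect only the distinct group keys to the driver and filter the pyspark.pandas DataFrame per key, so each group's rows stay in Spark until you actually use them:

import pyspark.pandas as ps

dataframe = ps.read_excel("data.xlsx")

# Only the distinct (col1, col2) key pairs are pulled to the driver
keys = dataframe[['col1', 'col2']].drop_duplicates().to_pandas()

for _, row in keys.iterrows():
    name = (row['col1'], row['col2'])
    # Boolean filtering is evaluated by Spark; 'group' is a pyspark.pandas DataFrame
    group = dataframe[(dataframe['col1'] == row['col1']) &
                      (dataframe['col2'] == row['col2'])]
    print(name)

Alternatively, if the per-group logic fits in a function, pyspark.pandas supports groupby(...).apply(func), which calls func on each group as a regular pandas DataFrame without any driver-side loop.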
