I'm looking for a solution for using groupBy together with aggregate functions in PySpark. My DataFrame looks like this:
df = sc.parallelize([
('23-09-2020', 'CRICKET'),
('25-11-2020', 'CRICKET'),
('13-09-2021', 'FOOTBALL'),
('20-11-2021', 'BASKETBALL'),
('12-12-2021', 'FOOTBALL')]).toDF(['DATE', 'SPORTS_INTERESTED'])
I want to group by the SPORTS_INTERESTED column and select the MIN of the dates from the DATE column. Below is the query I used:
import pyspark.sql.functions as F

df = df.groupBy('SPORTS_INTERESTED').agg(
    F.count('SPORTS_INTERESTED').alias('FIRST_COUNT'),
    F.min('DATE').alias('MIN_OF_DATE_COLUMN')
).filter(F.col('FIRST_COUNT') > 1)
But when I apply the above query, I don't know why it returns the MAX date instead of the MIN date in the output.

Desired output:
## +-----------------+------------------+
## |SPORTS_INTERESTED|MIN_OF_DATE_COLUMN|
## +-----------------+------------------+
## |          CRICKET|        23-09-2020|
## |         FOOTBALL|        13-09-2021|
## +-----------------+------------------+
Output I am getting:
## +-----------------+------------------+
## |SPORTS_INTERESTED|MIN_OF_DATE_COLUMN|
## +-----------------+------------------+
## |          CRICKET|        25-11-2020|
## |         FOOTBALL|        12-12-2021|
## +-----------------+------------------+
Both columns are of string data type.
1 Answer
First convert the strings to date format, then apply min:
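A minimal sketch of that approach, assuming the DATE strings follow the dd-MM-yyyy pattern seen in the sample data:

import pyspark.sql.functions as F

# Parse the 'dd-MM-yyyy' strings into a proper DateType column so that
# min() compares calendar dates rather than raw strings.
df = df.withColumn('DATE', F.to_date(F.col('DATE'), 'dd-MM-yyyy'))

result = (df.groupBy('SPORTS_INTERESTED')
            .agg(F.count('SPORTS_INTERESTED').alias('FIRST_COUNT'),
                 F.min('DATE').alias('MIN_OF_DATE_COLUMN'))
            .filter(F.col('FIRST_COUNT') > 1))

# Optional: render the dates back in the original dd-MM-yyyy layout.
result = result.withColumn('MIN_OF_DATE_COLUMN',
                           F.date_format('MIN_OF_DATE_COLUMN', 'dd-MM-yyyy'))
result.show()

The conversion matters because min on a string column is lexicographic: '12-12-2021' sorts before '13-09-2021' even though it is the later calendar date. Casting to DateType makes the comparison chronological.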