Given a PySpark DataFrame, is it possible to obtain a list of the source columns that the DataFrame references?

Perhaps a more concrete example will help explain what I'm after. Say I have a DataFrame defined as:
import pyspark.sql.functions as func
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
source_df = spark.createDataFrame(
    [("pru", 23, "finance"), ("paul", 26, "HR"), ("noel", 20, "HR")],
    ["name", "age", "department"],
)
source_df.createOrReplaceTempView("people")
sqlDF = spark.sql("SELECT name, age, department FROM people")
df = sqlDF.groupBy("department").agg(func.max("age").alias("max_age"))
df.show()
which returns:
+----------+-------+
|department|max_age|
+----------+-------+
|   finance|     23|
|        HR|     26|
+----------+-------+
The columns referenced by df are [department, age]. Is it possible to obtain that list of referenced columns programmatically?
Thanks to Capturing the result of explain() in pyspark, I know that I can extract the plan as a string:
df._sc._jvm.PythonSQLUtils.explainString(df._jdf.queryExecution(), "formatted")
which returns:
== Physical Plan ==
AdaptiveSparkPlan (6)
+- HashAggregate (5)
   +- Exchange (4)
      +- HashAggregate (3)
         +- Project (2)
            +- Scan ExistingRDD (1)

(1) Scan ExistingRDD
Output [3]: [name#0, age#1L, department#2]
Arguments: [name#0, age#1L, department#2], MapPartitionsRDD[4] at applySchemaToPythonRDD at NativeMethodAccessorImpl.java:0, ExistingRDD, UnknownPartitioning(0)

(2) Project
Output [2]: [age#1L, department#2]
Input [3]: [name#0, age#1L, department#2]

(3) HashAggregate
Input [2]: [age#1L, department#2]
Keys [1]: [department#2]
Functions [1]: [partial_max(age#1L)]
Aggregate Attributes [1]: [max#22L]
Results [2]: [department#2, max#23L]

(4) Exchange
Input [2]: [department#2, max#23L]
Arguments: hashpartitioning(department#2, 200), ENSURE_REQUIREMENTS, [plan_id=60]

(5) HashAggregate
Input [2]: [department#2, max#23L]
Keys [1]: [department#2]
Functions [1]: [max(age#1L)]
Aggregate Attributes [1]: [max(age#1L)#12L]
Results [2]: [department#2, max(age#1L)#12L AS max_age#13L]

(6) AdaptiveSparkPlan
Output [2]: [department#2, max_age#13L]
Arguments: isFinalPlan=false
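Incidentally, in Spark 3.0 and later the same formatted rendering is also available through the public DataFrame.explain API, which prints the plan to stdout rather than returning it as a string:

# Public API since Spark 3.0; prints the same formatted plan.
df.explain(mode="formatted")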
The plan string is useful, but it isn't what I need; I need a list of the referenced columns. Is that possible?

Perhaps another way of asking the question is: is there a way to obtain the explain plan as an object, so that I can iterate over it and explore it?
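For what it's worth, here is a sketch of one possible route, using the same internal, non-public surface as the explainString call above (these are Spark JVM internals reached via py4j, so the method names may change between versions): the QueryExecution object exposes the analyzed logical plan as a tree of nodes that can be walked from Python.

# A sketch, not a supported API: queryExecution() and the plan object it
# returns are Spark internals accessed through py4j.
qe = df._jdf.queryExecution()
plan = qe.analyzed()              # the analyzed logical plan (a JVM TreeNode)
print(plan.numberedTreeString())  # render the plan tree as a numbered string

def walk(node, depth=0):
    # Recursively visit every node in the JVM plan tree.
    print("  " * depth + node.nodeName())
    children = node.children()    # a Scala Seq of child nodes
    for i in range(children.size()):
        walk(children.apply(i), depth + 1)

walk(plan)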
1 Answer
von4xj4u1:
You can try the code below, which will give you the list of columns in the DataFrame along with their data types:
for field in df.schema.fields:
    print(field.name + ", " + str(field.dataType))
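For the df above, this should print something like the following (the exact rendering of the type names varies slightly across PySpark versions, e.g. StringType vs StringType()):

department, StringType()
max_age, LongType()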