Is there a way to create multiple columns from existing columns of a DataFrame in Scala?

bksxznpy · published 2021-05-29 in Hadoop

I am trying to ingest an RDBMS table into Hive. I obtain the DataFrame as follows:

val yearDF = spark.read.format("jdbc")
  .option("url", connectionUrl)
  .option("dbtable", "(select * from schema.tablename where source_system_name='DB2' and period_year='2017') as year2017")
  .option("user", devUserName)
  .option("password", devPassword)
  .option("numPartitions", 15)
  .load()
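
As a side note, one way to avoid adding extra columns in Spark at all is to push the cast into the JDBC subquery itself, so the database returns the balances as text. The sketch below only builds such a subquery string for the `dbtable` option; the `CAST ... AS VARCHAR(64)` syntax and length are assumptions (DB2-style SQL), not taken from the question:

```scala
// Build a JDBC pushdown subquery that casts the balance columns to text
// on the database side, so Spark already reads them as strings.
// The VARCHAR length and CAST syntax are assumptions (DB2-style SQL).
object PushdownQuery {
  val balanceCols = Seq("ptd_balance", "qtd_balance", "ytd_balance")

  // e.g. "CAST(ptd_balance AS VARCHAR(64)) AS ptd_balance_text"
  def castExpr(c: String): String = s"CAST($c AS VARCHAR(64)) AS ${c}_text"

  // Assemble the full subquery string to pass as the "dbtable" option.
  def build(table: String): String = {
    val casts = balanceCols.map(castExpr).mkString(", ")
    s"(select t.*, $casts from $table t " +
      s"where source_system_name='DB2' and period_year='2017') as year2017"
  }
}
```

The resulting string would replace the literal subquery in `.option("dbtable", ...)` above.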

These are the columns of the DataFrame:

geography:string
project:string
reference_code:string
product_line:string
book_type:string
cc_region:string
cc_channel:string
cc_function:string
pl_market:string
ptd_balance:double
qtd_balance:double
ytd_balance:double
xx_last_update_tms:timestamp
xx_last_update_log_id:int
xx_data_hash_code:string
xx_data_hash_id:bigint

ptd_balance, qtd_balance and ytd_balance are columns of type double. Our project wants to convert their data type from double to string by creating new columns, ptd_balance_text, qtd_balance_text and ytd_balance_text, to avoid any data truncation. withColumn creates a new column in a DataFrame, and withColumnRenamed renames an existing one.
The DataFrame has nearly 10 million records. Is there an efficient way to create multiple new columns that hold the same data as existing columns of the DataFrame but with a different type?
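
Since withColumn returns a new DataFrame, adding several derived columns is often written as a foldLeft over the column names; in Spark that would be `cols.foldLeft(df)((d, c) => d.withColumn(c + "_text", col(c).cast("string")))`, and consecutive projections are collapsed by the optimizer, so row count is not a concern. The snippet below is only a plain-Scala sketch of that fold pattern, with a row modelled as a Map because no Spark session is available here:

```scala
// Pure-Scala sketch of the foldLeft pattern; the Spark equivalent is
//   cols.foldLeft(df)((d, c) => d.withColumn(c + "_text", col(c).cast("string")))
// A row is modelled as a Map[String, Any] since no Spark session is
// available; the column names are the ones from the question.
object FoldSketch {
  val balanceCols = Seq("ptd_balance", "qtd_balance", "ytd_balance")

  // Add a "_text" copy of each balance column, leaving the originals intact.
  def addTextCols(row: Map[String, Any]): Map[String, Any] =
    balanceCols.foldLeft(row) { (r, c) =>
      r + (c + "_text" -> r(c).toString)
    }
}
```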


ctehm74n1#

If I were you, I would make a small change in the extraction query, or ask the BI team to put in some effort :P and add and cast the fields dynamically at extraction time. But in any case, what you are asking for is possible.
You can add columns derived from existing ones as shown below; check the addColsTosampleDF dataframe. I hope the comments below are enough to follow it; if you have any questions, feel free to add them in the comments and I will edit my answer.

scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

scala> val ss = SparkSession.builder().appName("TEST").getOrCreate()
18/08/07 15:51:42 WARN SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect.
ss: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@6de4071b

//Sample dataframe with int, double and string fields
scala> val sampleDf = Seq((100, 1.0, "row1"),(1,10.12,"col_float")).toDF("col1", "col2", "col3")
sampleDf: org.apache.spark.sql.DataFrame = [col1: int, col2: double ... 1 more field]

scala> sampleDf.printSchema
root
 |-- col1: integer (nullable = false)
 |-- col2: double (nullable = false)
 |-- col3: string (nullable = true)

//Adding columns col1_string from col1 and col2_doubletostring from col2 with casting and alias
scala> val addColsTosampleDF = sampleDf.
  select(sampleDf.col("col1"),
    sampleDf.col("col2"),
    sampleDf.col("col3"),
    sampleDf.col("col1").cast("string").alias("col1_string"),
    sampleDf.col("col2").cast("string").alias("col2_doubletostring"))
addColsTosampleDF: org.apache.spark.sql.DataFrame = [col1: int, col2: double ... 3 more fields]

//Schema with added columns
scala> addColsTosampleDF.printSchema
root
 |-- col1: integer (nullable = false)
 |-- col2: double (nullable = false)
 |-- col3: string (nullable = true)
 |-- col1_string: string (nullable = false)
 |-- col2_doubletostring: string (nullable = false)

 scala> addColsTosampleDF.show()
+----+-----+---------+-----------+-------------------+
|col1| col2|     col3|col1_string|col2_doubletostring|
+----+-----+---------+-----------+-------------------+
| 100|  1.0|     row1|        100|                1.0|
|   1|10.12|col_float|          1|              10.12|
+----+-----+---------+-----------+-------------------+

bxjv4tth2#

You can build the query from all the columns like below:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.StringType

//Input: 

scala> df.show
+----+-----+--------+--------+
|  id| name|  salary|   bonus|
+----+-----+--------+--------+
|1001|Alice| 8000.25|1233.385|
|1002|  Bob|7526.365| 1856.69|
+----+-----+--------+--------+

scala> df.printSchema
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- salary: double (nullable = false)
 |-- bonus: double (nullable = false)

//solution approach:
val query = df.columns.toList.map(cl =>
  if (cl == "salary" || cl == "bonus") col(cl).cast(StringType).as(cl + "_text")
  else col(cl))

//Output: 

scala> df.select(query:_*).printSchema
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- salary_text: string (nullable = false)
 |-- bonus_text: string (nullable = false)

scala> df.select(query:_*).show
+----+-----+-----------+----------+
|  id| name|salary_text|bonus_text|
+----+-----+-----------+----------+
|1001|Alice|    8000.25|  1233.385|
|1002|  Bob|   7526.365|   1856.69|
+----+-----+-----------+----------+
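
The same idea can be driven by the schema instead of a hard-coded name list, so every double column is converted automatically. The sketch below works on plain (name, typeName) pairs, which is what `df.dtypes` would supply in real Spark code (here they are assumed sample values):

```scala
// Sketch of deriving the select list from the schema instead of
// hard-coding the column names. In real Spark code the pairs would
// come from df.dtypes; here they are plain (name, typeName) tuples.
object SchemaDriven {
  // For every DoubleType column emit a cast-and-rename expression,
  // otherwise keep the column unchanged.
  def selectExprs(schema: Seq[(String, String)]): Seq[String] =
    schema.map {
      case (name, "DoubleType") => s"cast($name as string) AS ${name}_text"
      case (name, _)            => name
    }
}
```

Against a real DataFrame this could then be applied as `df.selectExpr(SchemaDriven.selectExprs(df.dtypes): _*)` (hypothetical usage, assuming the schema pairs come from `df.dtypes`).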
