I am trying to move data from the table system_releases from Greenplum to Hive in the following way:
val yearDF = spark.read.format("jdbc").option("url", "urltemplate;MaxNumericScale=30;MaxNumericPrecision=40;")
.option("dbtable", s"(${execQuery}) as year2016")
.option("user", "user")
.option("password", "pwd")
.option("partitionColumn","release_number")
.option("lowerBound", 306)
.option("upperBound", 500)
.option("numPartitions",2)
.load()
Schema of the DataFrame yearDF as inferred by Spark:
description:string
status_date:timestamp
time_zone:string
table_refresh_delay_min:decimal(38,30)
online_patching_enabled_flag:string
release_number:decimal(38,30)
change_number:decimal(38,30)
interface_queue_enabled_flag:string
rework_enabled_flag:string
smart_transfer_enabled_flag:string
patch_number:decimal(38,30)
threading_enabled_flag:string
drm_gl_source_name:string
reverted_flag:string
table_refresh_delay_min_text:string
release_number_text:string
change_number_text:string
I have the same table on Hive with the following datatypes:
val hiveCols = "description:string,status_date:timestamp,time_zone:string,table_refresh_delay_min:double,online_patching_enabled_flag:string,release_number:double,change_number:double,interface_queue_enabled_flag:string,rework_enabled_flag:string,smart_transfer_enabled_flag:string,patch_number:double,threading_enabled_flag:string,drm_gl_source_name:string,reverted_flag:string,table_refresh_delay_min_text:string,release_number_text:string,change_number_text:string"
The columns table_refresh_delay_min, release_number, change_number and patch_number come back with far too many decimal places, even though GP stores only a few. So I saved the DataFrame as a CSV file to see how Spark reads the data. For example, the maximum release_number in GP is 306.00, but in the CSV saved from the DataFrame yearDF the value is 306.000000000000000000.
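This matches the inferred type rather than the stored data: a decimal(38,30) column always serializes with 30 fractional digits. A quick check along these lines (assuming yearDF as loaded above) shows the same value without the CSV round trip:

yearDF.select("release_number").show(1, truncate = false)
// prints e.g. 306.000000000000000000; the scale of 30 comes from decimal(38,30)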
I tried taking the Hive table schema and converting it into a StructType so that I could apply it to yearDF, as below:
import org.apache.spark.sql.types._

def convertDatatype(datatype: String): DataType = {
  val convert = datatype match {
    case "string"    => StringType
    case "bigint"    => LongType
    case "int"       => IntegerType
    case "double"    => DoubleType
    case "date"      => TimestampType
    case "boolean"   => BooleanType
    case "timestamp" => TimestampType
  }
  convert
}
val schemaList = hiveCols.split(",")
val schemaStructType = new StructType(
  schemaList.map(_.split(":")).map(e => StructField(e(0), convertDatatype(e(1)), nullable = true))
)
val newDF = spark.createDataFrame(yearDF.rdd, schemaStructType)
newDF.write.format("csv").save("hdfs/location")
But I got the following error:
Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
... 17 more
I also tried casting the decimal columns to DoubleType in the following way, but I still face the same exception:
val pattern = """DecimalType\(\d+,(\d+)\)""".r
val df2 = dataDF.dtypes
  .collect { case (dn, dt) if pattern.findFirstMatchIn(dt).map(_.group(1)).getOrElse("0") != "0" => dn }
  .foldLeft(dataDF)((accDF, c) => accDF.withColumn(c, col(c).cast("Double")))
Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
... 17 more
After trying both approaches above I am out of ideas. Could anyone tell me how to properly cast the columns of a DataFrame to the required datatypes?
1 Answer
In this case, when you convert an RDD to a DataFrame you need to specify exactly the same types that the Spark schema uses. For example, when you run printSchema on your yearDF DataFrame, you get decimal(38,30) for those fields, so when converting the RDD to a DataFrame they must be specified as DecimalType(38,30) instead of the DoubleType you used: createDataFrame only validates the row values against the schema you pass in, it does not convert them. Hope it helps!
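To make the order of operations concrete, here is a minimal sketch (yearDF and spark are assumed from the question, and the column list is the one the question names): keep the inferred DecimalType(38,30) when pairing the RDD with a schema, and cast to double afterwards, on the DataFrame.

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.DoubleType

// createDataFrame validates row values against the schema; it does not
// convert them, so the schema must keep DecimalType(38,30). yearDF.schema
// already contains exactly those types.
val rebuiltDF = spark.createDataFrame(yearDF.rdd, yearDF.schema)

// Casting is a DataFrame operation, so it belongs after construction.
val decimalCols = Seq("table_refresh_delay_min", "release_number",
                      "change_number", "patch_number")
val hiveReadyDF = decimalCols.foldLeft(rebuiltDF)(
  (df, c) => df.withColumn(c, col(c).cast(DoubleType)))

hiveReadyDF.write.format("csv").save("hdfs/location")

Since yearDF is already a DataFrame, the createDataFrame round trip is only needed if you are reshaping it through an RDD; otherwise the foldLeft cast can be applied to yearDF directly.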