What is the Spark-compatible data type for bigint, and how do we convert bigint into a Spark-compatible type?

shyt4zoc · posted 2021-05-27 in Hadoop

I am trying to move data from Greenplum to HDFS using Spark. I can successfully read the data from the source table, and the schema Spark infers for the DataFrame (built from the Greenplum table) is listed below (a sketch of the read itself follows the listing):

DataFrame schema:

je_header_id: long (nullable = true)
 je_line_num: long (nullable = true)
 last_updated_by: decimal(15,0) (nullable = true)
 last_updated_by_name: string (nullable = true)
 ledger_id: long (nullable = true)
 code_combination_id: long (nullable = true)
 balancing_segment: string (nullable = true)
 cost_center_segment: string (nullable = true)
 period_name: string (nullable = true)
 effective_date: timestamp (nullable = true)
 status: string (nullable = true)
 creation_date: timestamp (nullable = true)
 created_by: decimal(15,0) (nullable = true)
 entered_dr: decimal(38,20) (nullable = true)
 entered_cr: decimal(38,20) (nullable = true)
 entered_amount: decimal(38,20) (nullable = true)
 accounted_dr: decimal(38,20) (nullable = true)
 accounted_cr: decimal(38,20) (nullable = true)
 accounted_amount: decimal(38,20) (nullable = true)
 xx_last_update_log_id: integer (nullable = true)
 source_system_name: string (nullable = true)
 period_year: decimal(15,0) (nullable = true)
 period_num: decimal(15,0) (nullable = true)
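For reference, here is a minimal sketch of the kind of read that produces the DataFrame above. This assumes a plain JDBC read against Greenplum's PostgreSQL-compatible endpoint; the URL, table name, and credentials are placeholders, and the asker may in fact be using a dedicated Greenplum-Spark connector instead:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("GreenplumToHdfs")
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical connection details; Greenplum accepts the PostgreSQL JDBC driver.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://gp-host:5432/analytics") // placeholder
  .option("dbtable", "finance.je_lines")                     // placeholder
  .option("user", "gp_user")
  .option("password", "gp_password")
  .load()

df.printSchema() // prints the inferred schema shown above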

The schema of the corresponding Hive table is:

je_header_id: bigint
 je_line_num: bigint
 last_updated_by: bigint
 last_updated_by_name: string
 ledger_id: bigint
 code_combination_id: bigint
 balancing_segment: string
 cost_center_segment: string
 period_name: string
 effective_date: timestamp
 status: string
 creation_date: timestamp
 created_by: bigint
 entered_dr: double
 entered_cr: double
 entered_amount: double
 accounted_dr: double
 accounted_cr: double
 accounted_amount: double
 xx_last_update_log_id: int
 source_system_name: string
 period_year: bigint
 period_num: bigint

Using the Hive table schema above, I built a StructType with the following conversion logic:

import org.apache.spark.sql.types._

// Map a Hive type name to the corresponding Spark SQL DataType.
def convertDatatype(datatype: String): DataType = datatype match {
  case "string"    => StringType
  case "bigint"    => LongType      // Hive bigint is a 64-bit integer, i.e. Spark LongType
  case "int"       => IntegerType
  case "double"    => DoubleType
  case "date"      => TimestampType
  case "boolean"   => BooleanType
  case "timestamp" => TimestampType
  case other       => throw new IllegalArgumentException(s"Unsupported Hive type: $other")
}
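To show how this function is applied, here is a small sketch (my reconstruction, not the asker's exact code) that parses the pipe-delimited "name:type" schema string from the Hive listing above into a StructType:

import org.apache.spark.sql.types.{StructField, StructType}

// Pipe-delimited "name:type" pairs; the full string is the Hive schema listed above.
val hiveSchema = "je_header_id:bigint|je_line_num:bigint|last_updated_by:bigint" // ... truncated for brevity

val newSchema = StructType(
  hiveSchema.split("\\|").map { field =>
    val Array(name, hiveType) = field.split(":", 2)
    StructField(name, convertDatatype(hiveType), nullable = true)
  }
)

newSchema.printTreeString() // prints the prepared schema shown below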

Prepared schema:

je_header_id: long (nullable = true)
 je_line_num: long (nullable = true)
 last_updated_by: long (nullable = true)
 last_updated_by_name: string (nullable = true)
 ledger_id: long (nullable = true)
 code_combination_id: long (nullable = true)
 balancing_segment: string (nullable = true)
 cost_center_segment: string (nullable = true)
 period_name: string (nullable = true)
 effective_date: timestamp (nullable = true)
 status: string (nullable = true)
 creation_date: timestamp (nullable = true)
 created_by: long (nullable = true)
 entered_dr: double (nullable = true)
 entered_cr: double (nullable = true)
 entered_amount: double (nullable = true)
 accounted_dr: double (nullable = true)
 accounted_cr: double (nullable = true)
 accounted_amount: double (nullable = true)
 xx_last_update_log_id: integer (nullable = true)
 source_system_name: string (nullable = true)
 period_year: long (nullable = true)
 period_num: long (nullable = true)

When I try to apply my newSchema to the DataFrame, I get an exception:

java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of bigint

I understand that it is failing while trying to convert a BigDecimal into a bigint: the columns Spark inferred as decimal(15,0) (for example last_updated_by and created_by) carry java.math.BigDecimal values, while my prepared schema declares them as LongType. Can anyone tell me how to convert bigint into a Spark-compatible data type? If not, how can I modify my logic to provide an appropriate data type in the case statement for this bigint/BigDecimal problem?

Answer 1, by 9ceoxa92:

Looking at your problem, it seems you are trying to convert bigint values to BigDecimal, which is not right. A BigDecimal is a decimal number with a fixed precision (the maximum number of digits) and scale (the number of digits to the right of the decimal point), whereas your values look like longs.
Instead of the BigDecimal data type, try using LongType to convert the bigint values correctly, and see whether that solves your problem.
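Concretely, applying a new schema does not convert the values already inside the rows, so the usual way to reconcile the two schemas is to cast each column. A minimal sketch of that approach, assuming df and newSchema are the DataFrame and StructType from the question:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Cast every column of the source DataFrame to the type declared in newSchema:
// decimal(15,0) columns become LongType, decimal(38,20) columns become DoubleType, etc.
val castedDf: DataFrame = df.select(
  newSchema.fields.map(f => col(f.name).cast(f.dataType).as(f.name)): _*
)

castedDf.printSchema() // now matches the prepared schema

The casted DataFrame can then be written out to HDFS or into the Hive table as usual.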
