CSV data is not loading properly as Parquet using Spark

Asked by sh7euo9m on 2021-05-27 in Spark

I have a table in Hive:

CREATE TABLE tab_data (
  rec_id INT,
  rec_name STRING,
  rec_value DECIMAL(3,1),
  rec_created TIMESTAMP
) STORED AS PARQUET;

and I want to populate it with data from a .csv file like this:

10|customer1|10.0|2016-09-07  08:38:00.0
20|customer2|24.0|2016-09-08  10:45:00.0
30|customer3|35.0|2016-09-10  03:26:00.0
40|customer1|46.0|2016-09-11  08:38:00.0
50|customer2|55.0|2016-09-12  10:45:00.0
60|customer3|62.0|2016-09-13  03:26:00.0
70|customer1|72.0|2016-09-14  08:38:00.0
80|customer2|23.0|2016-09-15  10:45:00.0
90|customer3|30.0|2016-09-16  03:26:00.0

Using Spark and Scala, the code is as follows:

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.types.{DataTypes, IntegerType, StringType, StructField, StructType, TimestampType}

object MainApp {

  val spark = SparkSession
    .builder()
    .appName("MainApp")
    .master("local[*]")
    .config("spark.sql.shuffle.partitions","200") 
    .getOrCreate()

  val sc = spark.sparkContext

  val inputPath = "hdfs://host.hdfs:8020/..../tab_data.csv"
  val outputPath = "hdfs://host.hdfs:8020/...../warehouse/test.db/tab_data"

  def main(args: Array[String]): Unit = {

    try {

      val DecimalType = DataTypes.createDecimalType(3, 1)

      /**
        * schema
        */
      val schema = StructType(List(StructField("rec_id", IntegerType, true), StructField("rec_name",StringType, true),
        StructField("rec_value",DecimalType),StructField("rec_created",TimestampType, true)))

      /**
        * Reading the data from HDFS 
        */
      val data = spark
        .read
        .option("sep","|")
        .schema(schema)
        .csv(inputPath)

      data.show(truncate = false)
      data.schema.printTreeString()

      /**
        * Writing the data as Parquet
        */
      data
        .write
        .mode(SaveMode.Append)
        .parquet(outputPath)

    } finally {
      sc.stop()    
      spark.stop()
    }
  }
}

The problem is that I get this output:

+------+--------+---------+-----------+
|rec_id|rec_name|rec_value|rec_created|
+------+--------+---------+-----------+
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |
|null  |null    |null     |null       |

root
 |-- rec_id: integer (nullable = true)
 |-- rec_name: string (nullable = true)
 |-- rec_value: decimal(3,1) (nullable = true)
 |-- rec_created: timestamp (nullable = true)

The schema is fine, but the data is not loaded into the table correctly:

SELECT * FROM tab_data;

+------------------+--------------------+---------------------+-----------------------+--+
| tab_data.rec_id  | tab_data.rec_name  | tab_data.rec_value  | tab_data.rec_created  |
+------------------+--------------------+---------------------+-----------------------+--+
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |
| NULL             | NULL               | NULL                | NULL                  |

What am I doing wrong?
I'm new to Spark and any help would be appreciated.

kse8i1jr1#

You are getting null values in all columns because the String values cannot be converted to the Timestamp type.
To convert the strings to timestamps, specify the timestamp format with the option("timestampFormat","yyyy-MM-dd HH:mm:ss.S") option when loading the CSV data.
Check the code below.
Schema

scala> import org.apache.spark.sql.types._

scala> val schema = StructType(List(
   StructField("rec_id", IntegerType, true),
   StructField("rec_name", StringType, true),
   StructField("rec_value", DecimalType(3,1)),
   StructField("rec_created", TimestampType, true))
)

Loading the CSV data

scala> val df = spark
.read
.option("sep","|")
.option("inferSchema","true")
.option("timestampFormat","yyyy-MM-dd HH:mm:ss.S")
.schema(schema)
.csv("/tmp/sample")

scala> df.show(false)
+------+---------+---------+-------------------+
|rec_id|rec_name |rec_value|rec_created        |
+------+---------+---------+-------------------+
|10    |customer1|10.0     |2016-09-07 08:38:00|
|20    |customer2|24.0     |2016-09-08 10:45:00|
|30    |customer3|35.0     |2016-09-10 03:26:00|
|40    |customer1|46.0     |2016-09-11 08:38:00|
|50    |customer2|55.0     |2016-09-12 10:45:00|
|60    |customer3|62.0     |2016-09-13 03:26:00|
|70    |customer1|72.0     |2016-09-14 08:38:00|
|80    |customer2|23.0     |2016-09-15 10:45:00|
|90    |customer3|30.0     |2016-09-16 03:26:00|
+------+---------+---------+-------------------+

Update
Since the table is a managed table, there is no need to set all of those parameters; you can use the insertInto function to insert the data into the table:

df.write.mode("append").insertInto("tab_data")
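For insertInto to resolve the Hive table, the SparkSession must be built with Hive support enabled. A minimal sketch, assuming the Hive metastore is reachable from the Spark job and tab_data already exists (the app name is just a placeholder):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession
  .builder()
  .appName("CsvToHiveTable")   // placeholder name
  .master("local[*]")
  .enableHiveSupport()         // required so "tab_data" resolves against the metastore
  .getOrCreate()

val schema = StructType(List(
  StructField("rec_id", IntegerType, true),
  StructField("rec_name", StringType, true),
  StructField("rec_value", DecimalType(3, 1)),
  StructField("rec_created", TimestampType, true)))

val df = spark.read
  .option("sep", "|")
  .option("timestampFormat", "yyyy-MM-dd HH:mm:ss.S")
  .schema(schema)
  .csv("hdfs://host.hdfs:8020/..../tab_data.csv")   // same input path as in the question

// insertInto matches columns by position, so the DataFrame column order
// must match the Hive table definition
df.write.mode("append").insertInto("tab_data")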

vltsax252#

To deal with the issues between Spark, Hive, and Parquet, set up your SparkSession as follows:

val spark = SparkSession
    .builder()
    .appName("CsvToParquet")
    .master("local[*]")
    .config("spark.sql.shuffle.partitions","200") // Change to a more reasonable default number of partitions for our data
    // The convention Spark uses to write Parquet data is configurable and is controlled by
    // spark.sql.parquet.writeLegacyFormat (default: false). Setting it to "true" makes Spark
    // use the same convention as Hive for writing Parquet data, which avoids data type issues
    // between Spark and Hive (for example with DECIMAL columns).
    .config("spark.sql.parquet.writeLegacyFormat", true)
    .getOrCreate()

After that, read the .csv data like this:

val data = spark
        .read
        .option("sep","|")
        .option("timestampFormat","yyyy-MM-dd HH:mm:ss.S") // to read timestamp fields
        .option("inferSchema",false) // by default is false
        .schema(schema)
        .csv(inputPath)

Then write the data as Parquet with no compression (by default the data is compressed), as follows:

data
        .write
        .mode(SaveMode.Append)
        .option("compression", "none") // Assuming no data compression
        .parquet(outputPath)
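
To sanity-check the written files before querying them from Hive, a quick read-back of the same output path can help (just a verification sketch):

val written = spark.read.parquet(outputPath)
written.show(truncate = false)
written.printSchema()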

Note: Hive may be unable to query the data because it was compressed with the default snappy codec, while the CREATE TABLE statement expects the data to be stored as Parquet without compression.
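
As an alternative to the per-write compression option, the codec can be set once on the session through the standard spark.sql.parquet.compression.codec property (a sketch; valid values include "none", "snappy" and "gzip"):

// Applies to every subsequent Parquet write on this session
spark.conf.set("spark.sql.parquet.compression.codec", "none")

data
  .write
  .mode(SaveMode.Append)
  .parquet(outputPath)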
