withColumn inserts NULL on SaveMode.Append

jckbn6z7 · posted 2021-06-26 in Hive

I have a Spark application that creates a Hive external table. The first time the table is created in Hive with partitions, everything works fine. I have three partition columns: `event`, `CenterCode` and `ExamDate`.

```
var sqlContext = spark.sqlContext
sqlContext.setConf("hive.exec.dynamic.partition", "true")
sqlContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
import org.apache.spark.sql.functions._

import org.apache.spark.sql.types._

val candidateList = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("nullValue", "null")
  .option("quote", "\"")
  .option("dateFormat", "dd/MM/yyyy")
  .schema(StructType(Array(
    StructField("RollNo/SeatNo", StringType, true),
    StructField("LabName", StringType, true),
    StructField("Student_Name", StringType, true),
    StructField("ExamName", StringType, true),
    StructField("ExamDate", DateType, true),
    StructField("ExamTime", StringType, true),
    StructField("CenterCode", StringType, true),
    StructField("Center", StringType, true))))
  .option("multiLine", "true")
  .option("mode", "DROPMALFORMED")
  .load(filePath(0))
val nef = candidateList.withColumn("event", lit(eventsId))

```

The partition column `event` is not present in the input CSV file, so I add it to the `candidateList` DataFrame with `withColumn("event", lit(eventsId))`. The first time I write it to the Hive table, this works fine: the column added with `withColumn` is written with the event id, and the partitions are created as expected.

```

nef.repartition(1).write.mode(SaveMode.Overwrite).option("path", candidatePath).partitionBy("event", "CenterCode", "ExamDate").saveAsTable("sify_cvs_output.candidatelist")
```

`candidateList.show()` gives:

+-------------+--------------------+-------------------+----------+----------+--------+----------+--------------------+-----+
 |RollNo/SeatNo|             LabName|       Student_Name|  ExamName|  ExamDate|ExamTime|CenterCode|              Center|event|
 +-------------+--------------------+-------------------+----------+----------+--------+----------+--------------------+-----+
 |     80000077|BUILDING-MAIN FLO...|     ABBAS MOHAMMAD|PGECETICET|2018-07-30|10:00 AM|   500098A|500098A-SURYA TEC...| ABCD|
 |     80000056|BUILDING-MAIN FLO...|  ABDUL YASARARFATH|PGECETICET|2018-07-30|10:00 AM|   500098A|500098A-SURYA TEC...| ABCD|

But the second time, when I try to append data to the already created Hive table with a new event "efgh", the column added with `withColumn` is inserted as `NULL`:

`nef.write.mode(SaveMode.Append).insertInto("sify_cvs_output.candidatelist")`

The partitions also don't come out properly, because one of the partition columns becomes `NULL`. So I tried adding one more new column to the DataFrame, `.withColumn("sample", lit("sample"))`. Again, the first time it writes all the extra columns to the table, but the next time, on `SaveMode.Append`, both the `event` column and the `sample` column are inserted into the table as `NULL`.

`show create table` gives the following:
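One detail worth checking here (my assumption, not stated in the post): unlike `saveAsTable`, `insertInto` resolves DataFrame columns by position, not by name. If the appended DataFrame's column order differs from the table's schema (data columns first, partition columns last), values silently land in the wrong columns and the trailing ones show up as `NULL`. A minimal sketch of aligning the order before appending:

```scala
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.col

// insertInto matches columns positionally, so read the target table's
// column order (data columns first, partition columns last) and reorder
// the DataFrame to match it before appending.
val tableCols = spark.table("sify_cvs_output.candidatelist").columns
val aligned   = nef.select(tableCols.map(col): _*)
aligned.write.mode(SaveMode.Append).insertInto("sify_cvs_output.candidatelist")
```

This requires a live Spark session with Hive support, so treat it as a sketch rather than a drop-in fix.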

```
CREATE EXTERNAL TABLE `candidatelist`(
  `rollno/seatno` string,
  `labname` string,
  `student_name` string,
  `examname` string,
  `examtime` string,
  `center` string,
  `sample` string)
PARTITIONED BY (
  `event` string,
  `centercode` string,
  `examdate` date)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'path'='hdfs://172.16.2.191:8020/biometric/sify/cvs/output/candidate/')
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'hdfs://172.16.2.191:8020/biometric/sify/cvs/output/candidate'
TBLPROPERTIES (
  'spark.sql.partitionProvider'='catalog',
  'spark.sql.sources.provider'='parquet',
  'spark.sql.sources.schema.numPartCols'='3',
  'spark.sql.sources.schema.numParts'='1',
  'spark.sql.sources.schema.part.0'='{\"type\":\"struct\",\"fields\":[{\"name\":\"RollNo/SeatNo\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"LabName\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Student_Name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"ExamName\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"ExamTime\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Center\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"sample\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"event\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"CenterCode\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"ExamDate\",\"type\":\"date\",\"nullable\":true,\"metadata\":{}}]}',
  'spark.sql.sources.schema.partCol.0'='event',
  'spark.sql.sources.schema.partCol.1'='CenterCode',
  'spark.sql.sources.schema.partCol.2'='ExamDate',
  'transient_lastDdlTime'='1536040545')
Time taken: 0.025 seconds, Fetched: 32 row(s)
hive>
```

What am I doing wrong?!
Update:
@Pasha701, below is my SparkSession:

```
val spark = SparkSession.builder().appName("splitInput").master("local")
    .config("spark.hadoop.fs.defaultFS", "hdfs://" + hdfsIp)
    .config("hive.metastore.uris", "thrift://172.16.2.191:9083")
    .config("hive.exec.dynamic.partition", "true")
    .config("hive.exec.dynamic.partition.mode", "nonstrict")
    .enableHiveSupport()
    .getOrCreate()
```

If I add `partitionBy` to the `insertInto` call:

```
nef.write.mode(SaveMode.Append).partitionBy("event", "CenterCode", "ExamDate").option("path", candidatePath).insertInto("sify_cvs_output.candidatelist")
```
it throws the exception `org.apache.spark.sql.AnalysisException: insertInto() can't be used together with partitionBy(). Partition columns have already be defined for the table. It is not necessary to use partitionBy().;`
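Given that exception, one option (a sketch under the assumption that `insertInto` picks up the table's existing partitioning on its own) is to drop `partitionBy` and instead make sure the partition columns come last, in the order shown by `show create table` above:

```scala
import org.apache.spark.sql.SaveMode

// insertInto needs no partitionBy: the table already defines its
// partitioning. The DataFrame just has to list the partition columns
// last, in the table's order (event, CenterCode, ExamDate).
nef.select("RollNo/SeatNo", "LabName", "Student_Name", "ExamName",
           "ExamTime", "Center", "sample",        // data columns
           "event", "CenterCode", "ExamDate")     // partition columns
  .write.mode(SaveMode.Append)
  .insertInto("sify_cvs_output.candidatelist")
```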

v64noz0r1#

`partitionBy` must also be used on the second write. The option `hive.exec.dynamic.partition.mode` may also be required.
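The answer is terse; a sketch of what it appears to suggest (my reading, not spelled out in the answer) is to keep appending through the name-based `saveAsTable` path, mirroring the original create, rather than switching to `insertInto`:

```scala
import org.apache.spark.sql.SaveMode

// Enable dynamic partitioning, then append via saveAsTable with the
// same partitionBy as the initial write; saveAsTable resolves columns
// by name, so the extra withColumn columns line up correctly.
spark.conf.set("hive.exec.dynamic.partition", "true")
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")
nef.write.mode(SaveMode.Append)
  .partitionBy("event", "CenterCode", "ExamDate")
  .saveAsTable("sify_cvs_output.candidatelist")
```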
