Spark HiveContext: Avro table with partitions and uppercase field names

Posted by p4tfgftt on 2021-06-02 in Hadoop

For a partitioned Avro Hive table, field names that contain uppercase characters in the Avro schema come back as null. I'd like to know whether there is a setting or workaround I'm missing, or whether this is simply a bug in HiveContext.
I have already tried adding the following to the DDL:

WITH SERDEPROPERTIES ('casesensitive'='FieldName')

... and setting spark.sql.caseSensitive to true/false.
Spark version 1.5.0, Hive version 1.1.0.
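
For reference, this is roughly how the config was applied in spark-shell (a minimal sketch; sqlContext here is the HiveContext that spark-shell creates, and both true and false were tried):

// spark-shell, Spark 1.5.x: sqlContext is a HiveContext
sqlContext.setConf("spark.sql.caseSensitive", "true")
// re-running the query afterwards still returned null for the uppercase field
sqlContext.sql("select * from default.avro_partitions").show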
You can reproduce the problem by running the following DDL in Hive:

-- Hive DDL using partitions
CREATE TABLE avro_partitions (Field string)
PARTITIONED BY (part string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.literal'=
  '{ "type":"record", "name":"avro_partitions", "namespace":"default", "fields":[ {"name":"Field", "type":"string"} ] }');
INSERT INTO avro_partitions PARTITION (part='01') VALUES('test');

-- Hive DDL without partitions
CREATE TABLE avro_no_partitions (Field string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.literal'=
  '{ "type":"record", "name":"avro_no_partitions", "namespace":"default", "fields":[ {"name":"Field", "type":"string"} ] }');
INSERT INTO avro_no_partitions VALUES('test');

... and then try selecting from the tables using Spark SQL (spark-shell):

sqlContext.sql("select * from default.avro_partitions").show
+-----+----+
|field|part|
+-----+----+
| null|  01|
+-----+----+

sqlContext.sql("select * from default.avro_no_partitions").show
+-----+
|field|
+-----+
| test|
+-----+
Answer from vxqlmq5t:

The problem is that avro.schema.literal is specified in TBLPROPERTIES; it should be specified in SERDEPROPERTIES instead:

CREATE TABLE avro_partitions (Field string)
PARTITIONED BY (part string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES ('avro.schema.literal'='{ "type":"record", "name":"avro_partitions", "namespace":"default", "fields":[ {"name":"Field", "type":"string"} ] }')
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat';
INSERT INTO avro_partitions PARTITION (part='01') VALUES('test');

Spark version 1.6.0.
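
As a quick check (a sketch; the exact column-name casing in the output may vary), after recreating the table with the schema literal in SERDEPROPERTIES and re-inserting the partition, the same spark-shell query should return the value instead of null:

// refresh cached metadata if the table was recreated while the shell was open
// (refreshTable is available because sqlContext is a HiveContext)
sqlContext.refreshTable("default.avro_partitions")
sqlContext.sql("select * from default.avro_partitions").show
// expected: the uppercase-named column now shows 'test' instead of null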
