Failed to obtain maxLength value for varchar field from file schema: "string"

64jmpszr asked on 2021-06-25 in Hive

I created two tables in Hive:

CREATE EXTERNAL TABLE avro1 (id INT, name VARCHAR(64), dept VARCHAR(64)) PARTITIONED BY (yoj VARCHAR(64)) STORED AS AVRO;

CREATE EXTERNAL TABLE avro2 (id INT, name VARCHAR(64), dept VARCHAR(64)) PARTITIONED BY (yoj VARCHAR(64)) STORED AS AVRO;

Then I inserted data into table avro1 from the Hive console:

INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id,name,dept) VALUES (1,'Mohan','CS');
INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id,name,dept) VALUES (2,'Rahul','HR');
INSERT INTO TABLE avro1 PARTITION (yoj = 2016) (id,name,dept) VALUES (3,'Kuldeep','EE');

Then I ran a Spark Structured Streaming application that writes data into table avro2. Now, when I read from table avro2, either from the Hive console or with Spark, I get this error:
Failed with exception java.io.IOException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Failed to obtain maxLength value for varchar field from file schema: "string"
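
This error typically means the Avro data files written by Spark embed a writer schema in which the varchar columns are plain "string" types, while Hive's AvroSerDe expects a maxLength attribute on them. Below is a minimal sketch for checking the writer schema embedded in one of the files under avro2, assuming the Avro Java library is on the classpath; the file path is hypothetical:

import java.io.File
import org.apache.avro.file.DataFileReader
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}

object PrintAvroSchema {
  def main(args: Array[String]): Unit = {
    // Hypothetical path: point this at one data file under the avro2
    // partition directory in your warehouse.
    val file = new File("/user/hive/warehouse/avro2/yoj=2015/part-00000.avro")

    // Every Avro data file embeds the schema it was written with.
    val reader = DataFileReader.openReader(file, new GenericDatumReader[GenericRecord]())
    try {
      // A file written through Hive should show extra attributes on the
      // varchar columns (a "varchar" logical type with its maxLength),
      // while a file from Spark's native Avro writer typically shows a
      // plain "string", which is what the AvroSerDe complains about.
      println(reader.getSchema.toString(true))
    } finally {
      reader.close()
    }
  }
}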

mzsu5hc0 1#

Try the following commands from the spark-shell to insert data into the Hive table:
sql("INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id,name,dept) VALUES (1,'Mohan','CS')")
sql("INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id,name,dept) VALUES (2,'Rahul','HR')")
sql("INSERT INTO TABLE avro1 PARTITION (yoj = 2016) (id,name,dept) VALUES (3,'Kuldeep','EE')")
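
Since the failing writer is a Structured Streaming job, the same idea can be applied there by routing each micro-batch through a SQL insert with foreachBatch, instead of letting the sink write Avro files with Spark's own schema. A minimal sketch, assuming a streaming DataFrame with columns id, name, dept, and yoj; the checkpoint path and view name are hypothetical, and a dynamic-partition insert like this needs hive.exec.dynamic.partition.mode=nonstrict:

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder()
  .appName("avro2-writer") // hypothetical app name
  .enableHiveSupport()
  .getOrCreate()

// Replace with the real streaming source; it must produce id, name, dept, yoj.
val stream: DataFrame = ???

// Insert each micro-batch through Hive SQL, the same path as the
// spark-shell commands above.
def writeBatch(batch: DataFrame, batchId: Long): Unit = {
  batch.createOrReplaceTempView("batch_rows") // hypothetical view name
  batch.sparkSession.sql(
    "INSERT INTO TABLE avro2 PARTITION (yoj) SELECT id, name, dept, yoj FROM batch_rows")
}

stream.writeStream
  .option("checkpointLocation", "/tmp/avro2-checkpoint") // hypothetical path
  .foreachBatch(writeBatch _)
  .start()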
