Hive INSERT OVERWRITE into HBase table fails with an error

fykwrbwg · posted 2021-06-02 · in Hadoop
Follow (0) | Answers (1) | Views (364)

I am working through an example that integrates hbase-0.98.19 with hive-1.2.1. I created an HBase-backed table in Hive with the following command:

CREATE TABLE hbase_table_emp(id int, name string, role string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:name,cf1:role")
TBLPROPERTIES ("hbase.table.name" = "emp");
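To confirm how Hive recorded the storage handler and column mapping, the table definition can be inspected directly; a quick check using standard Hive DDL (nothing beyond what the question already sets up):

```sql
-- The storage handler, SERDEPROPERTIES, and TBLPROPERTIES appear
-- near the bottom of the output.
DESCRIBE FORMATTED hbase_table_emp;
```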

Then I created a table "testemp" to stage the data I want to load into "hbase_table_emp". The following code shows how the "testemp" table is created and populated:

create table testemp(id int, name string, role string) row format delimited fields terminated by '\t';
load data local inpath '/home/hduser/sample.txt' into table testemp;
select * from testemp;
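For context, `load data local inpath` copies the file as-is, so each line must already be tab-separated and in column order (id, name, role). A hypothetical sample.txt (the actual file contents are not shown in the question) might look like:

```
1	Alice	manager
2	Bob	engineer
```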

Everything works fine up to this point. But when I execute the command `insert overwrite table hbase_table_emp select * from testemp;` I get the following error:
hive> insert overwrite table hbase_table_emp select * from testemp;
Query ID = hduser_20160613131557_ddef0b47-a773-477b-94d2-5cc070eb0de6
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1117)
    at org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:564)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1139)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1114)
    ... 37 more
Caused by: java.lang.IllegalArgumentException: Must specify table name
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:193)
    at org.apache.hive.common.util.ReflectionUtil.setConf(ReflectionUtil.java:101)
    at org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:87)
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:277)
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:267)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1137)
    ... 38 more
Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
PS: I have included hbase.jar, zookeeper.jar, and guava.jar in the classpath.
Thanks in advance.


vddsk6oq1#

For Hive-HBase integration, inserting data into the HBase table also requires the hbase.mapred.output.outputtable property in TBLPROPERTIES.
From the Hive HBase Integration wiki:
"The hbase.mapred.output.outputtable property is optional; it's needed if you plan to insert data to the table (the property is used by hbase.mapreduce.TableOutputFormat)."
So for your table, you need to run the following command:

ALTER TABLE hbase_table_emp SET TBLPROPERTIES ("hbase.mapred.output.outputtable" = "emp");
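Alternatively, a new HBase-backed table can declare both properties up front; a sketch of the original DDL with the missing property added (same table and column names as in the question):

```sql
CREATE TABLE hbase_table_emp(id int, name string, role string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:name,cf1:role")
-- hbase.table.name is the table Hive reads; hbase.mapred.output.outputtable
-- is what TableOutputFormat uses when writing, hence the insert failure.
TBLPROPERTIES ("hbase.table.name" = "emp",
               "hbase.mapred.output.outputtable" = "emp");
```

Once the property is set, `insert overwrite table hbase_table_emp select * from testemp;` can resolve the output table and should no longer fail with "Must specify table name".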
