insert into table hivetest2 select * from hivetest: fails in Hive 0.14 when both tables are transactional

cu6pst1q · posted 2021-05-30 in Hadoop
I have tried an insert into ... select in Hive 0.14 between the two tables hivetest and hivetest2, both of which are transactional tables. The insert does not work when both tables are transactional. Below are the queries I used.

I set the following parameters:

        -- setting up parameters for ACID transactions
        set hive.support.concurrency=true;
        set hive.enforce.bucketing=true;
        set hive.exec.dynamic.partition.mode=nonstrict;
        set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
        set hive.compactor.initiator.on=true;
        set hive.compactor.worker.threads=2;
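
As a quick sanity check that the DbTxnManager is actually in effect before running the inserts, the transaction and lock listings can be queried; a minimal sketch (both statements exist in Hive 0.13+):

        -- these only return meaningful output once DbTxnManager is active
        show transactions;
        show locks;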

        -- creating the first transactional table
        create table hivetest (key int, value string, department string)
        clustered by (department) into 3 buckets
        stored as orc
        tblproperties ('transactional'='true');

        -- creating the second transactional table
        create table hivetest2 (key int, value string, department string)
        clustered by (department) into 3 buckets
        stored as orc
        tblproperties ('transactional'='true');

        -- inserting data into table hivetest
        insert into table hivetest values (1,'jon','ABC'), (2,'rec','EFG');
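
To confirm that both tables actually registered as transactional, the table parameters can be inspected; a small sketch using the standard describe formatted statement:

        -- 'transactional'='true' should appear under Table Parameters
        describe formatted hivetest;
        describe formatted hivetest2;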

Finally, when I executed the insert query below,

        -- inserting from hivetest into hivetest2
        insert into table hivetest2 select * from hivetest;

I got the following exception:

        Query ID = A567812_20150416131818_1a260b18-f699-4b0a-ae66-94e07fcfa710
        Total jobs = 1
        Launching Job 1 out of 1
        Number of reduce tasks is set to 0 since there's no reduce operator
        java.lang.RuntimeException: serious problem
                at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$Context.waitForTasks(OrcInputFormat.java:478)
                at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:949)
                at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:974)
                at org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat.getSplits(BucketizedHiveInputFormat.java:148)
                at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
                at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
                at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
                at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
                at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:415)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
                at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
                at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
                at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:415)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
                at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
                at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
                at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:429)
                at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
                at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
                at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
                at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)
                at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)
                at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)
                at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
                at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
                at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
                at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
                at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
                at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
                at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
                at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:606)
                at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
                at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
        Caused by: java.lang.IllegalArgumentException: delta_0000352_0000352 does not start with base_
                at org.apache.hadoop.hive.ql.io.AcidUtils.parseBase(AcidUtils.java:136)
                at org.apache.hadoop.hive.ql.io.AcidUtils.parseBaseBucketFilename(AcidUtils.java:164)
                at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.run(OrcInputFormat.java:544)
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
                at java.lang.Thread.run(Thread.java:744)
        Job Submission failed with exception 'java.lang.RuntimeException(serious problem)'
        FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Please help me find a solution to this problem. I know that for a bucketed transactional table there should be a base_ file, but it is not created when I insert data into my table.
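
For what it's worth, the IllegalArgumentException in the trace is thrown by AcidUtils.parseBaseBucketFilename, which in this code path expects a base_ directory, while freshly inserted rows live only in delta_ directories until a compaction has run. One thing worth trying (a sketch, assuming the compactor settings above are in effect) is to force a major compaction on the source table so that a base_ directory gets created, then retry the insert:

        -- request a major compaction to produce a base_ directory for hivetest
        alter table hivetest compact 'major';
        -- monitor progress; retry the insert once the compaction has finished
        show compactions;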
