insert into table hivetest2 select * from hivetest: not working in Hive 0.14 when both tables are transactional

cu6pst1q asked on 2021-05-30 in Hadoop

**Closed.** This question needs debugging details. It is not currently accepting answers.

Closed 5 years ago.
I have tried an `insert into table ... select` in Hive 0.14 between the two tables `hivetest` and `hivetest2`, both of which are transactional. It does not work when both tables are transactional. Below are the queries I used.

First, I set the following parameters:

    -- setting up parameters for ACID transactions
    set hive.support.concurrency=true;
    set hive.enforce.bucketing=true;
    set hive.exec.dynamic.partition.mode=nonstrict;
    set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    set hive.compactor.initiator.on=true;
    set hive.compactor.worker.threads=2;

    -- creating the first transactional table
    create table hivetest (key int, value string, department string)
    clustered by (department) into 3 buckets
    stored as orc
    tblproperties ('transactional'='true');

    -- creating the second transactional table
    create table hivetest2 (key int, value string, department string)
    clustered by (department) into 3 buckets
    stored as orc
    tblproperties ('transactional'='true');

    -- inserting data into table hivetest
    insert into table hivetest values (1,'jon','ABC'), (2,'rec','EFG');
Finally, when I executed the insert query below:
    -- inserting from one transactional table into the other
    insert into table hivetest2 select * from hivetest;
I got the following exception:
    Query ID = A567812_20150416131818_1a260b18-f699-4b0a-ae66-94e07fcfa710
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    java.lang.RuntimeException: serious problem
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$Context.waitForTasks(OrcInputFormat.java:478)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:949)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:974)
    at org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat.getSplits(BucketizedHiveInputFormat.java:148)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:429)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    Caused by: java.lang.IllegalArgumentException: delta_0000352_0000352 does not start with base_
    at org.apache.hadoop.hive.ql.io.AcidUtils.parseBase(AcidUtils.java:136)
    at org.apache.hadoop.hive.ql.io.AcidUtils.parseBaseBucketFilename(AcidUtils.java:164)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.run(OrcInputFormat.java:544)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
    Job Submission failed with exception 'java.lang.RuntimeException(serious problem)'
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
Please help me find a solution to this problem. I know that for a bucketed transactional table there should be a `base_` directory, but one was not created when I inserted data into my table.
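For context on the last point: the `RuntimeException: serious problem` is only a wrapper, and the real failure is the `IllegalArgumentException` from `AcidUtils.parseBase`, which rejects any directory name that does not begin with `base_`. A fresh transactional table that has only been written to (and never compacted) contains only `delta_...` directories, which is exactly the situation in the stack trace. The following Python sketch mimics that filename check; the function and variable names are mine for illustration, not Hive's actual API:

```python
# Sketch of the base_/delta_ filename check that fails in AcidUtils.parseBase
# (Hive 0.14). Names here are illustrative, not Hive's real identifiers.

BASE_PREFIX = "base_"    # directory produced by compaction
DELTA_PREFIX = "delta_"  # directory produced by each insert/update transaction

def parse_base_txn(dirname: str) -> int:
    """Return the transaction id encoded in a base_N directory name.

    Mirrors the check in the stack trace: anything that is not a
    base_ directory is rejected outright.
    """
    if not dirname.startswith(BASE_PREFIX):
        raise ValueError(f"{dirname} does not start with {BASE_PREFIX}")
    return int(dirname[len(BASE_PREFIX):])

# Directory layout right after the INSERT in the question:
# only a delta exists, because no compaction has produced a base_ yet.
table_dirs = ["delta_0000352_0000352"]

for d in table_dirs:
    try:
        parse_base_txn(d)
    except ValueError as e:
        print(e)  # → delta_0000352_0000352 does not start with base_
```

This matches the observation in the question: the `base_` directory is created by the compactor, not by `INSERT`, so a table that has only seen inserts legitimately has no `base_` directory yet.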
