Hive shell hangs when inserting data with the INSERT command

6tdlim6h  posted on 2021-06-02  in  Hadoop
Follow (0) | Answers (3) | Views (444)

I am trying to insert data into an external Hive table (Hive 1.2) from another table using the INSERT command -

    INSERT INTO perf_tech_security_detail_extn_fltr PARTITION (created_date)
    SELECT seq_num,
           action,
           sde_timestamp,
           instrmnt_id,
           dm_lstupddt,
           grnfthr_ind,
           grnfthr_tl_dt,
           grnfthr_frm_dt,
           ftc_chge_rsn,
           substring(sde_timestamp, 0, 10)
    FROM tech_security_detail_extn_fltr
    WHERE substring(sde_timestamp, 0, 10) = '2018-05-02';
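As a side note on the partition expression above: Hive's substring() is 1-based, and a start index of 0 is treated the same as 1, so substring(sde_timestamp, 0, 10) extracts the date prefix of a yyyy-MM-dd HH:mm:ss timestamp. A minimal sketch (the literal timestamp is illustrative):

```sql
-- Hive treats a start index of 0 the same as 1,
-- so both of these return the 10-character date prefix.
SELECT substring('2018-05-02 21:27:59', 0, 10);  -- 2018-05-02
SELECT substring('2018-05-02 21:27:59', 1, 10);  -- 2018-05-02
```

This is not the cause of the hang, but using 1 explicitly avoids relying on the 0-as-1 behavior.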

But the Hive shell just hangs -

    hive> SET hive.exec.dynamic.partition=true;
    hive> set hive.exec.dynamic.partition.mode=nonstrict;
    hive> set hive.enforce.bucketing=true;
    hive> INSERT INTO PERF_TECH_SECURITY_DETAIL_EXTN_FLTR partition (created_date) select seq_num, action, sde_timestamp, instrmnt_id, dm_lstupddt, grnfthr_ind, grnfthr_tl_dt, grnfthr_frm_dt, ftc_chge_rsn, substring (sde_timestamp,0,10) from TECH_SECURITY_DETAIL_EXTN_FLTR where substring (sde_timestamp,0,10)='2018-05-02';
    Query ID = tcs_20180503215950_585152fd-ecdc-4296-85fc-d464fef44e68
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 100
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>

The Hive log is as follows -

    2018-05-03 21:28:01,703 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) -
    2018-05-03 21:28:01,716 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(400)) - Yarn
    2018-05-03 21:28:01,758 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at 0.0.0.0:8032
    2018-05-03 21:28:01,903 INFO  [main]: fs.FSStatsPublisher (FSStatsPublisher.java:init(49)) - created : hdfs://localhost:9000/datanode/nifi_data/perf_tech_security_detail_extn_fltr/.hive-staging_hive_2018-05-03_21-27-59_433_5606951945441160381-1/-ext-10001
    2018-05-03 21:28:01,960 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at 0.0.0.0:8032
    2018-05-03 21:28:01,965 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(389)) - PLAN PATH = hdfs://localhost:9000/tmp/hive/tcs/576b0aa3-059d-4fb2-bed8-c975781a5fce/hive_2018-05-03_21-27-59_433_5606951945441160381-1/-mr-10003/303a392c-2383-41ed-bc9d-78d37ae49f39/map.xml
    2018-05-03 21:28:01,967 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(389)) - PLAN PATH = hdfs://localhost:9000/tmp/hive/tcs/576b0aa3-059d-4fb2-bed8-c975781a5fce/hive_2018-05-03_21-27-59_433_5606951945441160381-1/-mr-10003/303a392c-2383-41ed-bc9d-78d37ae49f39/reduce.xml
    2018-05-03 21:28:22,009 INFO  [main]: ipc.Client (Client.java:handleConnectionTimeout(832)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); maxRetries=45
    2018-05-03 21:28:42,027 INFO  [main]: ipc.Client (Client.java:handleConnectionTimeout(832)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); maxRetries=45
    ...
I have also tried a plain INSERT into a non-partitioned table, but even that does not work -

    INSERT INTO emp values (1 ,'ROB')

mbjcgjjk1#

In a cluster environment, the property yarn.resourcemanager.hostname is the key to avoiding this problem. Setting it worked for me.
Monitor YARN with these commands: yarn application -list and yarn node -list
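The endless retries to 0.0.0.0:8032 in the log mean the client never learned the ResourceManager's real address. A minimal sketch of the yarn-site.xml fix this answer describes; the hostname value is a placeholder for your cluster's actual ResourceManager host:

```xml
<!-- yarn-site.xml: tell Hadoop clients where the ResourceManager runs.
     "rm-host.example.com" is a placeholder, not from the question. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.example.com</value>
</property>
```

Without this (or an explicit yarn.resourcemanager.address), clients fall back to the default 0.0.0.0:8032 seen in the log.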


6mzjoqzu2#

Resolved.
MapReduce was not running because the framework name was wrong, so I edited the property mapreduce.framework.name in mapred-site.xml.
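The answer does not show the corrected value; on a YARN cluster the standard setting is "yarn" (assumed here from a typical Hadoop 2.x setup, not stated in the answer):

```xml
<!-- mapred-site.xml: run MapReduce jobs on YARN rather than the
     default local runner. The value "yarn" is the conventional
     cluster setting, assumed from a standard Hadoop deployment. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```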


soat7uwm3#

I don't know why you didn't write TABLE before the table name, like this:

    INSERT INTO TABLE emp
    VALUES (1 ,'ROB'), (2 ,'Shailesh');

Write the command properly and it will work.
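For completeness, a sketch of the original partitioned insert with the TABLE keyword added. This assumes the table definitions from the question and that the YARN configuration (yarn.resourcemanager.hostname / mapreduce.framework.name from the other answers) has already been fixed, since the 0.0.0.0:8032 retries, not the SQL syntax, were what made the shell hang:

```sql
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- Same statement as the question, with TABLE spelled out and a
-- 1-based substring start (Hive treats 0 as 1 anyway).
INSERT INTO TABLE perf_tech_security_detail_extn_fltr PARTITION (created_date)
SELECT seq_num, action, sde_timestamp, instrmnt_id, dm_lstupddt,
       grnfthr_ind, grnfthr_tl_dt, grnfthr_frm_dt, ftc_chge_rsn,
       substring(sde_timestamp, 1, 10)
FROM tech_security_detail_extn_fltr
WHERE substring(sde_timestamp, 1, 10) = '2018-05-02';
```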
