Job submission failed with exception "org.apache.hadoop.util.DiskChecker$DiskErrorException (No space available in any of the local directories)"

bnlyeluc  posted on 2021-05-29  in Hadoop
Follow (0) | Answers (1) | Views (562)

I get the following error when running a Hive query. Please help me resolve it.
hive> insert overwrite table bucket_emp1 select * from emp;
Query ID = hduser_20160426213038_58cbf1dc-a345-40f8-ab3d-a3258046b279
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:366)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
	at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
	at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:536)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Job Submission failed with exception 'org.apache.hadoop.util.DiskChecker$DiskErrorException(No space available in any of the local directories.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask


68bkxrlz 1#

The MapReduce framework looks up the mapreduce.cluster.local.dir parameter and verifies that the listed directories have enough space to create its intermediate files.
If the directories do not have the required free space, the MapReduce job fails with the error you shared.
Make sure there is enough free space in the local directories.
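As a quick sanity check before rerunning the job, you can verify the usable space on each local directory yourself. This is a minimal sketch, not Hadoop's own check: the directory list and the 1 GiB threshold are placeholders, so substitute the actual comma-separated paths from mapreduce.cluster.local.dir in your mapred-site.xml.

```java
import java.io.File;

public class LocalDirSpaceCheck {
    // Placeholder threshold: warn when a directory has less than 1 GiB free.
    static final long MIN_FREE_BYTES = 1L << 30;

    public static void main(String[] args) {
        // Pass the directories from mapreduce.cluster.local.dir as arguments;
        // /tmp is only a fallback for demonstration.
        String[] localDirs = args.length > 0 ? args : new String[] {"/tmp"};
        for (String dir : localDirs) {
            // getUsableSpace() returns 0 if the path does not exist.
            long usable = new File(dir).getUsableSpace();
            System.out.println(dir + ": " + (usable / (1 << 20)) + " MiB usable");
            if (usable < MIN_FREE_BYTES) {
                System.out.println("WARNING: " + dir + " is below the 1 GiB threshold");
            }
        }
    }
}
```

If a directory is nearly full, either free space on that volume or point mapreduce.cluster.local.dir at a volume that has room.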
It is also a good idea to compress the intermediate output files (e.g. with Gzip compression) so they take up less space during processing:

conf.set("mapred.compress.map.output", "true");
conf.set("mapred.output.compression.type", "BLOCK");
conf.set("mapred.map.output.compression.codec", "org.apache.hadoop.io.compress.GzipCodec");
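Note that the mapred.* names above are the old pre-Hadoop-2 property names; they are still accepted but deprecated. The sketch below records the corresponding mapreduce.* replacements as listed in Hadoop's deprecated-properties table, using a plain map so it runs without Hadoop on the classpath; setting either the old or the new name on a real Configuration has the same effect.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CompressionPropertyNames {
    // Deprecated mapred.* name -> current mapreduce.* name (Hadoop 2.x+).
    public static Map<String, String> renamed() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("mapred.compress.map.output", "mapreduce.map.output.compress");
        m.put("mapred.map.output.compression.codec", "mapreduce.map.output.compress.codec");
        m.put("mapred.output.compression.type", "mapreduce.output.fileoutputformat.compress.type");
        return m;
    }

    public static void main(String[] args) {
        renamed().forEach((oldName, newName) ->
            System.out.println(oldName + " -> " + newName));
    }
}
```

In a Hive session you can flip the same switch without touching Java code, e.g. `set mapreduce.map.output.compress=true;` before running the query.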
