I am running:

apache-hive-1.2.1-bin
hadoop-2.7.1
spark-1.5.1-bin-hadoop2.6

I was able to configure Hive on Spark, but when I try to execute a query, it fails with the error message below.
hive> SELECT COUNT(*) AS rcount, yom From service GROUP BY yom;
Query ID = hduser_20160110105649_4c90528a-76ba-4127-8849-54f2152be817
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Spark Job = b9cbbd47-f41f-48b5-98c3-efcaa145390e
Status: SENT
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
How can I fix this?
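For reference, the Hive-on-Spark wiring follows the usual session settings along these lines (a sketch of the standard properties from the Hive documentation; the exact values in my setup may differ):

-- Switch the Hive execution engine from MapReduce to Spark.
set hive.execution.engine=spark;
-- Run Spark executors inside YARN (requires a working YARN cluster).
set spark.master=yarn-client;
-- Representative resource/serializer settings; tune for your cluster.
set spark.executor.memory=512m;
set spark.serializer=org.apache.spark.serializer.KryoSerializer;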
2 Answers
rta7y2nd #1
I had the same problem, but I had not configured YARN because some jobs were running. I'm not sure that is the solution to the problem.
Have you configured YARN as the documentation says?
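One quick check, assuming the Hadoop binaries are on your PATH, is to confirm the YARN daemons are actually up before re-running the query:

# ResourceManager and NodeManager should appear in the daemon list if YARN is up.
jps

# Ask the ResourceManager which NodeManagers have registered.
yarn node -list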
lf3rwulv #2
yarn-site.xml:
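What follows is a minimal sketch of the kind of yarn-site.xml commonly used for Hive on Spark, not necessarily this answerer's exact file; the fair-scheduler setting comes from the Hive on Spark documentation, and localhost is a placeholder for your ResourceManager host:

<configuration>
  <!-- Shuffle auxiliary service needed by jobs running on YARN. -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- The Hive on Spark docs recommend the fair scheduler. -->
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <!-- Placeholder: point NodeManagers at the ResourceManager host. -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
</configuration>

After editing, restart YARN (stop-yarn.sh, then start-yarn.sh) and retry the Hive query.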