Error creating a Druid data source from Hive

Asked by 0yycz8jy on 2021-06-26, tagged Hive

I am following the Druid integration documentation at https://cwiki.apache.org/confluence/display/hive/druid+integration.
The error I am facing is:

Number of reduce tasks not specified. Estimated from input data size: 1
 In order to change the average load for a reducer (in bytes):
 set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
 set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
 set mapreduce.job.reduces=<number>
 java.io.FileNotFoundException: File does not exist: 
 /usr/lib/hive/lib/hive-druid-handler-2.3.0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1530)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1523)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1

Although I am using Hive 2.3.2, the error says it cannot find "/usr/lib/hive/lib/hive-druid-handler-2.3.0.jar". To work around this, we downloaded the jar and restarted Hadoop, but the problem is still not resolved.
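A workaround sketch, assuming the handler jar actually ships with the Hive 2.3.2 installation (the 2.3.2 jar path below is illustrative, not confirmed by the question; the key point is that the registered jar version should match the installed Hive version, 2.3.2 rather than 2.3.0):

```sql
-- Hypothetical session-level registration; the path must point at a jar
-- that really exists on the local filesystem of the Hive host.
ADD JAR /usr/lib/hive/lib/hive-druid-handler-2.3.2.jar;
```

Alternatively, `hive.aux.jars.path` in hive-site.xml can point at the jar's directory so it is shipped with every MapReduce job, which avoids the HDFS FileNotFoundException seen above.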


Answer 1, by c0vxltue:

It looks like you are using Hive 1. All of the Druid integration work was done in Hive 2.
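For reference, the linked wiki drives Druid ingestion through Hive's Druid storage handler, typically via a CTAS statement. A minimal sketch, with illustrative table and column names (your Druid broker/coordinator addresses must already be configured in Hive for this to work):

```sql
-- Requires the hive-druid-handler jar on Hive's classpath (Hive 2.2+).
-- Druid requires a timestamp column named __time for time partitioning.
CREATE TABLE druid_pageviews
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY")
AS
SELECT CAST(view_time AS TIMESTAMP) AS `__time`, page, views
FROM src_pageviews;
```

If the handler jar is missing or mismatched, this is exactly the statement that fails with the FileNotFoundException shown in the question.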
