Does anyone know why the HiveConf class in the hive-common jar no longer has a HADOOPCONF enum entry?
I am using the HiveConf class from hive-common-1.2.1.jar in my code to access HDFS (HA NameNode), and I get the error below.
I realized that my code never configures HADOOPCONF, which is why it cannot connect to HDFS. But there is no HADOOPCONF entry in hive-common-1.2.1.jar, although I found that earlier versions of hive-common did have one:
http://www.docjar.com/html/api/org/apache/hadoop/hive/conf/hiveconf.java.html
My question is: how do I configure the HiveConf class with hive-common-1.2.1.jar so that it can access HDFS (NameNode HA)?
The error is as follows:
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: cluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
My code is:
hiveConf.setVar(HiveConf.ConfVars.HADOOPBIN, "/opt/modules/hadoop/bin");
hiveConf.setVar(HiveConf.ConfVars.HADOOPFS, "hdfs://cluster");
hiveConf.setVar(HiveConf.ConfVars.LOCALSCRATCHDIR, "/opt/modules/hive/temp");
hiveConf.setVar(HiveConf.ConfVars.DOWNLOADED_RESOURCES_DIR, "/opt/modules/hive/temp");
hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);
hiveConf.setVar(HiveConf.ConfVars.METASTOREWAREHOUSE, "/warehouse");
hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://127.0.0.1:9083");
hiveConf.setVar(HiveConf.ConfVars.METASTORE_CONNECTION_DRIVER, "com.mysql.jdbc.Driver");
hiveConf.setVar(HiveConf.ConfVars.METASTORECONNECTURLKEY, "jdbc:mysql://192.168.5.29:3306/hive?createDatabaseIfNotExist=true");
hiveConf.setVar(HiveConf.ConfVars.METASTORE_CONNECTION_USER_NAME, "hive");
hiveConf.setVar(HiveConf.ConfVars.METASTOREPWD, "123456");
hiveConf.setVar(HiveConf.ConfVars.HIVEHISTORYFILELOC, "/opt/modules/hive/temp");
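For context on the error: `UnknownHostException: cluster` means the HDFS client is treating the logical nameservice name `cluster` as a hostname, because it never received the HA definitions that hdfs-site.xml normally provides. A minimal sketch of those definitions for a nameservice named `cluster` looks roughly like this (the NameNode hostnames and ports are placeholders, not taken from the question):

```xml
<!-- Sketch of the HA section of hdfs-site.xml for nameservice "cluster".
     Hostnames nn1.example.com / nn2.example.com are placeholders. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.nn1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.nn2</name>
    <value>nn2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

Without these properties, `hdfs://cluster` cannot be resolved to an actual NameNode address.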
1 Answer
OK, I solved the problem. The HiveConf class in the hive-common jar loads hdfs-site.xml from the Hadoop configuration by default, so it is enough to put the folder containing hdfs-site.xml on the classpath at runtime.
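A sketch of what that launch command could look like, assuming the Hadoop configuration directory is `/opt/modules/hadoop/etc/hadoop` (consistent with the HADOOPBIN path in the question) and a hypothetical application jar and main class:

```shell
# Put the Hadoop conf directory (the folder containing hdfs-site.xml and
# core-site.xml) first on the classpath, so HiveConf can load the HA
# nameservice definitions at runtime.
# myapp.jar and com.example.MyHiveApp are placeholders for your own build.
java -cp /opt/modules/hadoop/etc/hadoop:myapp.jar:lib/* com.example.MyHiveApp
```

The key point is that the conf *directory* (not the XML file itself) goes on the classpath, since Hadoop's Configuration resolves `hdfs-site.xml` as a classpath resource.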