Spark 1.6 and Hive 0.14 integration problem

wj8zmpe1 · posted 2021-06-26 · in Hive

I have been trying to integrate the latest Spark 1.6 with Hive 0.14.0. All I want is to get the thrift server running. I noticed that if I do not override the following configs when invoking the start-thriftserver.sh Spark script (--conf spark.sql.hive.metastore.version=0.14.0 --conf spark.sql.hive.metastore.jars=maven), then any CREATE TABLE query fails in Spark, due to incompatibilities between Hive 1.2.1, which Spark 1.6 uses by default, and the Hive version I am running in prod. However, when I do override those two configs, the thrift server on startup does not connect to my Hive metastore URIs as specified in hive-site.xml; instead it tries to connect to a Derby database, and then the thrift server does not start up properly. Am I missing some additional override?
See the thrift server log output below:

Loaded from file:/usr/lib/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar 
java.vendor=Oracle Corporation 
java.runtime.version=1.7.0_79-b15 
user.dir=/ 
os.name=Linux 
os.arch=amd64 
os.version=2.6.32-504.23.4.el6.x86_64 
derby.system.home=null 
Database Class Loader started - derby.database.classpath='' 
16/01/26 16:35:20 INFO YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (10.15.150.38:51475) with ID 20 
16/01/26 16:35:20 INFO BlockManagerMasterEndpoint: Registering block manager 10.15.150.38:52107 with 9.9 GB RAM, BlockManagerId(20, 10.15.150.38, 52107) 
16/01/26 16:35:20 INFO YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (10.15.150.38:51479) with ID 48 
16/01/26 16:35:20 INFO BlockManagerMasterEndpoint: Registering block manager 10.15.150.38:47973 with 9.9 GB RAM, BlockManagerId(48, 10.15.150.38, 47973) 
16/01/26 16:35:20 WARN Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@3cf4a477:an attempt to override final parameter: mapreduce.reduce.speculative;  Ignoring. 
16/01/26 16:35:20 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 
16/01/26 16:35:21 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
16/01/26 16:35:21 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
16/01/26 16:35:22 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
16/01/26 16:35:22 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
16/01/26 16:35:22 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY 
16/01/26 16:35:22 INFO ObjectStore: Initialized ObjectStore 
16/01/26 16:35:22 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 
16/01/26 16:35:22 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException 
16/01/26 16:35:22 INFO HiveMetaStore: Added admin role in metastore 
16/01/26 16:35:22 INFO HiveMetaStore: Added public role in metastore 
16/01/26 16:35:22 INFO HiveMetaStore: No user is added in admin role, since config is empty 
16/01/26 16:35:22 INFO HiveMetaStore: 0: get_all_databases 
16/01/26 16:35:22 INFO audit: ugi=hive  ip=unknown-ip-addr      cmd=get_all_databases 
16/01/26 16:35:22 INFO HiveMetaStore: 0: get_functions: db=default pat=* 
16/01/26 16:35:22 INFO audit: ugi=hive  ip=unknown-ip-addr      cmd=get_functions: db=default pat=* 
16/01/26 16:35:22 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. 
16/01/26 16:35:22 INFO SessionState: Created local directory: /tmp/06895c7e-e26c-42b7-b100-4222d0356b6b_resources 
16/01/26 16:35:22 INFO SessionState: Created HDFS directory: /tmp/hive/hive/06895c7e-e26c-42b7-b100-4222d0356b6b 
16/01/26 16:35:22 INFO SessionState: Created local directory: /tmp/hive/06895c7e-e26c-42b7-b100-4222d0356b6b 
16/01/26 16:35:23 INFO SessionState: Created HDFS directory: /tmp/hive/hive/06895c7e-e26c-42b7-b100-4222d0356b6b/_tmp_space.db 
16/01/26 16:35:23 WARN Configuration: org.apache.hadoop.hive.conf.LoopingByteArrayInputStream@37f031a:an attempt to override final parameter: mapreduce.reduce.speculative;  Ignoring. 
16/01/26 16:35:23 INFO HiveContext: default warehouse location is /user/hive/warehouse 
16/01/26 16:35:23 INFO HiveContext: Initializing HiveMetastoreConnection version 0.14.0 using maven. 
Ivy Default Cache set to: /home/hive/.ivy2/cache 
The jars for the packages stored in: /home/hive/.ivy2/jars 
http://www.datanucleus.org/downloads/maven2 added as a remote repository with the name: repo-1 
:: loading settings :: url = jar:file:/usr/lib/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/apache/ivy/core/settings/ivysettings.xml 
org.apache.calcite#calcite-core added as a dependency 
org.apache.calcite#calcite-avatica added as a dependency 
org.apache.hive#hive-metastore added as a dependency 
org.apache.hive#hive-exec added as a dependency 
org.apache.hive#hive-common added as a dependency 
org.apache.hive#hive-serde added as a dependency 
com.google.guava#guava added as a dependency 
org.apache.hadoop#hadoop-client added as a dependency 
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0 
        confs: [default]
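For reference, a minimal sketch of the launch command the question describes. The metastore host and port are placeholders, and passing the URI explicitly via --hiveconf (so the Maven-resolved metastore client picks it up even if hive-site.xml is not on the classpath it sees) is an assumption, not a confirmed fix:

```shell
# Sketch only: "metastore-host:9083" is a placeholder for the prod metastore URI.
# Assumes hive-site.xml also lives in Spark's conf directory.
./sbin/start-thriftserver.sh \
  --conf spark.sql.hive.metastore.version=0.14.0 \
  --conf spark.sql.hive.metastore.jars=maven \
  --hiveconf hive.metastore.uris=thrift://metastore-host:9083
```

The log's "Using direct SQL, underlying DB is DERBY" line is the symptom that hive.metastore.uris was empty at startup, so the metastore client fell back to an embedded Derby store instead of the remote thrift metastore.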
