I am trying to execute select * from db. in Hive; this Hive table was loaded using Spark. It does not work and shows the following error:
Error: java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 (state=,code=0)
I am able to query the table from Hive when I set the following properties:
set hive.mapred.mode=nonstrict;
set hive.optimize.ppd=true;
set hive.optimize.index.filter=true;
set hive.tez.bucket.pruning=true;
set hive.explain.user=false;
set hive.fetch.task.conversion=none;
Now, when I try to read the same Hive table db. using Spark, I get an error like the following:
Only clients with the following capabilities can access this table: CONNECTORREAD, HIVEFULLACIDREAD, HIVEFULLACIDWRITE, HIVEMANAGESTATS, HIVECACHEINVALIDATE, CONNECTORWRITE. This table may be a Hive-managed ACID table, or require some other capability that Spark currently does not implement;
    at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfNoAccess(ExternalCatalogUtils.scala:280)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:105)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:85)
    at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:83)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:124)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:118)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:103)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    ... 49 elided
Do I need to add any properties in spark-submit or in the shell? Or is there another way to read a Hive table like this with Spark?
Sample format of the Hive table:
CREATE TABLE `hive`(
  `c_id` decimal(11,0), etc. ...
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
WITH SERDEPROPERTIES (
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://gjuyada/bbts/scl/raw'
TBLPROPERTIES (
  'bucketing_version'='2',
  'spark.sql.create.version'='2.3.2.3.1.0.0-78',
  'spark.sql.sources.provider'='orc',
  'spark.sql.sources.schema.numParts'='1',
  'spark.sql.sources.schema.part.0'='{"type":"struct","fields":
    [{"name":"Czz_ID","type":"decimal(11,0)","nullable":true,"metadata":{}},
     {"name":"DzzzC_CD","type":"string","nullable":true,"metadata":{}},
     {"name":"C0000_S_N","type":"decimal(11,0)","nullable":true,"metadata":{}},
     {"name":"P_ _NB","type":"decimal(11,0)","nullable":true,"metadata":{}},
     {"name":"C_YYYY","type":"string","nullable":true,"metadata":{}},
     {"name":"Cv_ID","type":"string","nullable":true,"metadata":{}}, ...]',
  'transactional'='true',
  'transient_lastDdlTime'='1574817059')
1 Answer
The issue is that you are trying to read a transactional table (transactional = true) into Spark.

Officially, Spark does not yet support Hive ACID tables. One option is to take a full/incremental dump of the ACID table into a regular Hive ORC/Parquet partitioned table and then read that data with Spark. There is an open JIRA, SPARK-15348, to add support for reading Hive ACID tables.

If you run a major compaction on the ACID table (from Hive), Spark is able to read the base_XXX directories, but not the delta directories; SPARK-16996 addresses this. There are also some workarounds for reading ACID tables using Spark-LLAP, as mentioned in this article. A sketch of reading the compacted base files directly is shown below.
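For illustration only, here is a minimal spark-shell sketch of that idea. It assumes a major compaction has already finished in Hive (for example via ALTER TABLE ... COMPACT 'major'), so the table location contains base_XXXXXXX directories; the path reuses the one from the question, and the nested `row` struct is an assumption about the full-ACID ORC file layout, so check printSchema() before relying on it.

// Hypothetical sketch: read only the compacted base_* ORC files and skip delta_* directories.
// In full-ACID ORC files the user columns are typically nested under a `row` struct.
val base = spark.read
  .format("orc")
  .load("hdfs://gjuyada/bbts/scl/raw/base_*")

base.printSchema()
base.select("row.*").show(10, false)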
I think that starting from HDP-3.x, the HiveWarehouseConnector is able to support reading Hive ACID tables; a minimal spark-shell sketch follows.
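A minimal sketch, assuming an HDP 3.x cluster where the Hive Warehouse Connector assembly jar is on the Spark classpath and a HiveServer2 JDBC URL is configured for the session (typically passed to spark-shell/spark-submit via --jars and --conf spark.sql.hive.hiveserver2.jdbc.url). The database and table names below are placeholders, not taken from the question.

import com.hortonworks.hwc.HiveWarehouseSession

// Build an HWC session; it goes through HiveServer2/LLAP, which understands ACID tables.
val hive = HiveWarehouseSession.session(spark).build()

hive.setDatabase("db")                                              // placeholder database
val df = hive.executeQuery("SELECT * FROM <transactional_table>")   // placeholder table name
df.show(10, false)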
You can also create a snapshot of the transactional table as a non-transactional table and then read the data from that table:
create table <non_trans> stored as orc as select * from <transactional_table>
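Once that CTAS has been run in Hive, the snapshot is an ordinary (non-transactional) ORC table, so Spark can read it through its normal catalog APIs. A small sketch with placeholder names:

// The snapshot is plain ORC, so no special connector is needed.
val snap = spark.table("db.<non_trans>")        // or: spark.sql("SELECT * FROM db.<non_trans>")
snap.show(10, false)

Note that the snapshot is a copy: it has to be rebuilt (or incrementally refreshed) whenever the transactional table changes.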
UPDATE:
1. Create an external Hive table.
2. Then overwrite that external table with the data from the existing transactional table (a sketch of both steps follows).
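A hedged sketch of those two steps, with placeholder table names and a placeholder HDFS location. The statements are plain HiveQL, so they can just as well be run from beeline; here they are sent through the HWC session (`hive`) built in the earlier sketch:

// 1. Create a regular external ORC table with the same columns as the transactional table.
hive.executeUpdate(
  """CREATE EXTERNAL TABLE db.<ext_table> (c_id decimal(11,0) /* ..., remaining columns */)
    |STORED AS ORC
    |LOCATION 'hdfs://<path>/<ext_table>'""".stripMargin)

// 2. Overwrite it with the current contents of the transactional table. The SELECT runs
//    inside Hive, which can read the ACID table; re-run this to refresh the copy.
hive.executeUpdate(
  "INSERT OVERWRITE TABLE db.<ext_table> SELECT * FROM db.<transactional_table>")

// Spark can then read the external table directly, e.g. by path:
spark.read.format("orc").load("hdfs://<path>/<ext_table>").show(10)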