Unable to write data to a Hive internal table

v6ylcynt · published 2021-07-09 in Spark

I am trying to write data into a Hive internal table with a Spark DataFrame, on Spark 2.3. The table was created with the following DDL:

  CREATE TABLE `g_interimc.grpxm31`(
    `gr98p_cf` bigint,
    `gr98p_cp` decimal(11,0),
    `grp98mmmb` string,
    `grp98oob` string,
    `srccd` string,
    `gp_n` string)
  ROW FORMAT SERDE
    'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
  STORED AS INPUTFORMAT
    'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
  OUTPUTFORMAT
    'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
  LOCATION
    'hdfs://gwhdnha/mnoo1/raw/cat/eilkls/g_interimc/grpxm31'
  TBLPROPERTIES (
    'bucketing_version'='2',
    'transactional'='true',
    'transactional_properties'='default',
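
The lines that matter here are 'transactional'='true' and 'transactional_properties'='default' in TBLPROPERTIES: this is a full-ACID Hive-managed (internal) table, not a plain ORC table. A minimal sketch to double-check this from Spark (table name taken from the post; note that on HDP 3.x even this metadata read may be rejected with the same capability error, in which case run SHOW TBLPROPERTIES in beeline instead):

  // Sketch: inspect the table properties to confirm the table is
  // full ACID ('transactional'='true'). Assumes `spark` is the SparkSession.
  spark.sql("SHOW TBLPROPERTIES g_interimc.grpxm31").show(100, false)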

  dataframe.write.mode("overwrite").insertInto("g_interimc.grpxm31")
Exception in thread "main" org.apache.spark.sql.AnalysisException:
Spark has no access to table `g_interimc`.`grpxm31`. Clients can access this table only if
they have the following capabilities: CONNECTORREAD,HIVEFULLACIDREAD,HIVEFULLACIDWRITE,HIVEMANAGESTATS,HIVECACHEINVALIDATE,CONNECTORWRITE.

This table may be a Hive-managed ACID table, or require some other capability that Spark currently does not implement;
        at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfNoAccess(ExternalCatalogUtils.scala:280)
        at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfRO(ExternalCatalogUtils.scala:297)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:93)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:85)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:85)
        at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:83)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
        at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)

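For context, the check that throws here (HiveTranslationLayerCheck) belongs to the HDP 3.x Spark–Hive integration: plain Spark 2.3 can neither read nor write Hive full-ACID managed tables, and the capability list in the message (CONNECTORREAD, HIVEFULLACIDWRITE, ...) is how HDP states that. The documented route on HDP 3.x is the Hive Warehouse Connector (HWC). A minimal sketch, not a verified fix, assuming the HWC jar is on the classpath and spark.sql.hive.hiveserver2.jdbc.url (plus the other spark.datasource.hive.warehouse.* settings) is configured for the cluster; `spark` and `dataframe` are as in the post:

  // Sketch: write through the Hive Warehouse Connector instead of insertInto.
  import com.hortonworks.hwc.HiveWarehouseSession

  val hive = HiveWarehouseSession.session(spark).build()

  dataframe.write
    .format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR) // "com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector"
    .mode("append")                                        // overwrite support differs across HWC versions
    .option("table", "g_interimc.grpxm31")
    .save()
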
To recap: I am trying to write to a Hive internal table with a Spark DataFrame, on Spark 2.3 against Hive version 3.1.0.3…
At the end of the Spark job I need to delete the data already in the Hive table and replace it (hence the overwrite mode), not append to it.
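
Given that requirement, one option (with the same HWC assumptions as the sketch above) is to let Hive delete the existing rows first, since DELETE on a full-ACID table has to go through Hive anyway, and then append through the connector:

  // Sketch: emulate "overwrite" on the ACID table. Assumes HWC is configured.
  import com.hortonworks.hwc.HiveWarehouseSession

  val hive = HiveWarehouseSession.session(spark).build()

  // Delete existing rows via HiveServer2 (ACID DELETE is Hive-side work)...
  hive.executeUpdate("DELETE FROM g_interimc.grpxm31")

  // ...then append the new data through the connector.
  dataframe.write
    .format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR)
    .mode("append")
    .option("table", "g_interimc.grpxm31")
    .save()

An alternative, if HWC is not available, is to have Spark write to a plain (non-transactional) external staging table and then run INSERT OVERWRITE from that staging table into the ACID table in Hive itself.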
