Spark interpreter cannot create a Hive table with a CTAS (CREATE TABLE AS SELECT ...) statement

Asked by yduiuuwa on 2021-06-26 in Hive

I am using Zeppelin and trying to create a Hive table from another Hive table with a CTAS statement, but the query always ends in an error, so the table is never created. I found several posts saying to change the Zeppelin configuration, but I cannot change any configuration because I do not have permission to do so.
The query I ran and the error I got are as follows:

%sql
create table student as select * from student_score

org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. Invalid method name: 'alter_table_with_cascade'
	at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:500)
	at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:484)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1668)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_14.loadTable(HiveShim.scala:716)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply$mcV$sp(HiveClientImpl.scala:672)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:672)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:672)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:283)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:230)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:229)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
	at org.apache.spark.sql.hive.client.HiveClientImpl.loadTable(HiveClientImpl.scala:671)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply$mcV$sp(HiveExternalCatalog.scala:741)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:739)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:739)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:95)
	at org.apache.spark.sql.hive.HiveExternalCatalog.loadTable(HiveExternalCatalog.scala:739)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:323)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:170)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:347)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand.run(CreateHiveTableAsSelectCommand.scala:92)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
	... 47 elided
Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'alter_table_with_cascade'
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_cascade(ThriftHiveMetastore.java:1374)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_cascade(ThriftHiveMetastore.java:1358)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:340)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:251)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
	at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:496)
	... 93 more
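
For context, a Thrift-level `Invalid method name: 'alter_table_with_cascade'` error typically means the Hive client libraries bundled with Spark are newer than the Hive metastore service they are talking to, so the client is calling an RPC the older metastore does not implement. A common remedy, sketched below on the assumption that someone with admin rights can edit the Spark interpreter properties in Zeppelin, is to point Spark at client jars matching the deployed metastore; the version `1.2.1` is only a placeholder that must be replaced with the cluster's actual metastore version:

```properties
# Hypothetical Zeppelin Spark interpreter properties; the version value
# must match the Hive metastore actually running in the cluster.
spark.sql.hive.metastore.version=1.2.1
# Fetch matching Hive client jars from Maven instead of using Spark's built-in ones.
spark.sql.hive.metastore.jars=maven
```

Since the question notes the user cannot change any configuration, this would have to be applied by whoever administers the Zeppelin instance.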

No answers yet.