I'm trying to run a select query on a Hive table through spark-shell. Here is my code:
scala> import org.apache.spark.sql.hive.HiveContext
scala> val sqlContext = new HiveContext(sc)
scala> val df = sqlContext.sql("select count(*) from timeserie")
scala> df.head
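(Note: the stack trace below is from Spark 2.x, where HiveContext is deprecated; spark-shell there already exposes a Hive-enabled SparkSession bound to spark, assuming Hive support is on, which is the default on HDP. A minimal equivalent sketch:)

scala> val df = spark.sql("select count(*) from timeserie")
scala> df.head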
But I get an error whenever I execute a read command (df.head, df.count, df.show). Here is the error:
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange SinglePartition
+- *(1) HashAggregate(keys=[], functions=[partial_count(1)], output=[count#13L])
   +- HiveTableScan HiveTableRelation `default`.`timeserie`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [data#0, temperature#1, hum#2]
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
... 49 elided
Caused by: java.io.IOException: Not a file: hdfs://sandbox-hdp.hortonworks.com:8020/warehouse/tablespace/managed/hive/timeserie/delta_0000001_0000001_0000
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:337)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
... 73 more
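The root cause points at delta_0000001_0000001_0000, a delta directory of the kind Hive writes for ACID (transactional) managed tables; Spark's built-in HiveTableScan expects plain data files, hence "Not a file". On HDP 3.x the documented route to such tables is the Hive Warehouse Connector. A minimal sketch, assuming the HWC jar is on the spark-shell classpath and the usual HWC settings (e.g. spark.sql.hive.hiveserver2.jdbc.url) are configured:

scala> import com.hortonworks.hwc.HiveWarehouseSession
scala> val hive = HiveWarehouseSession.session(spark).build()
scala> val df = hive.executeQuery("select count(*) from timeserie")
scala> df.show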
PS: When I run a show tables query I get results with no errors. show create table timeserie also succeeds, as do hdfs dfs -ls ../../warehouse/tablespace/managed/hive/bdp.db/timeserie and hdfs dfs -ls -R ../../warehouse/tablespace/managed/hive/bdp.db/serie/ (their outputs were posted as screenshots).
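(A possibly relevant check: if the show create table output lists 'transactional'='true' under TBLPROPERTIES, the table is a Hive ACID table, which would match the delta_ directory in the error above. The same property can be inspected from spark-shell:)

scala> spark.sql("show tblproperties timeserie").show(false)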