Zeppelin exception with the Spark Basic Features notebook

l0oc07j2 · asked 2021-05-17 · in Spark

I teach a course on Scala and Spark. I have been demoing Zeppelin for five years (and using it for longer than that).
For the past couple of years, whenever I demo Zeppelin from the out-of-the-box distribution, I can only show the Spark Basic Features notebook once. On that first run, all the paragraphs render exactly as they should. But if I try to change the age in the "age" field, or simply re-run any paragraph, I get an exception.
Let me stress: this is out of the box. I downloaded the 0.9.0-preview2 release, started the daemon, and opened the bundled notebook. Any ideas? I am on a MacBook Pro running macOS 10.15.7, and I also have spark-3.0.1-bin-hadoop2.7 installed.
Here is the error I get:

    java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.zeppelin.spark.SparkSqlInterpreter.internalInterpret(SparkSqlInterpreter.java:105)
        at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
        at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
        at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.spark.sql.AnalysisException: Table or view not found: bank; line 2 pos 5;
    'Sort ['age ASC NULLS FIRST], true
    +- 'Aggregate ['age], ['age, count(1) AS value#4L]
       +- 'Filter ('age < 30)
          +- 'UnresolvedRelation [bank]
        at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:106)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1$adapted(CheckAnalysis.scala:92)
        at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:177)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:176)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:176)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:176)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:176)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:176)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:176)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:176)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:176)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:176)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:92)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:89)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:130)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:156)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:153)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:68)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:133)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:133)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:68)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:66)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:58)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
        ... 15 more
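For context: the `'UnresolvedRelation [bank]` line in the plan above shows that the failing `%sql` paragraph (the `age < 30` aggregation) references a temp view named `bank`, which an earlier `%spark` paragraph in the tutorial notebook is responsible for registering. A minimal sketch of that registration step, modeled on the stock Spark Basic Features notebook (the file path, column indices, and field choices here are assumptions, not verified against 0.9.0-preview2):

```scala
// Zeppelin %spark paragraph sketch: build the "bank" temp view that the
// %sql paragraphs depend on. Assumes the tutorial's semicolon-delimited
// bank CSV; "bank-full.csv" is a hypothetical local path. `sc` and the
// .toDF() implicits are injected by Zeppelin's Spark interpreter.
case class Bank(age: Int, job: String, marital: String,
                education: String, balance: Int)

val bank = sc.textFile("bank-full.csv")
  .map(_.split(";"))
  .filter(cols => cols(0) != "\"age\"")            // drop the header row
  .map(cols => Bank(
    cols(0).toInt,
    cols(1).replaceAll("\"", ""),
    cols(2).replaceAll("\"", ""),
    cols(3).replaceAll("\"", ""),
    cols(5).replaceAll("\"", "").toInt))
  .toDF()

// Without this registration (e.g. after the interpreter restarts, so the
// session no longer holds the view), any %sql paragraph that references
// "bank" fails with "Table or view not found: bank".
bank.createOrReplaceTempView("bank")
```

If the `%sql` paragraph is re-run in a session where this registration paragraph has not run (or the Spark interpreter has restarted since it did), the temp view no longer exists, which would produce exactly the `AnalysisException` shown above.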

No answers yet.
