I am trying to query a file stored in HDFS using the SQLContext provided by Apache Spark with the code below, but I get a NoSuchMethodError.
package SQL

import org.apache.spark.SparkContext
import org.apache.spark.sql._
import org.apache.spark.sql.types.{StructField, StructType, StringType}

object SparSQLCSV {
  def main(args: Array[String]) {
    val sc = new SparkContext("local[*]", "home")
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val people = sc.textFile("/home/devan/Documents/dataset/peoplesTest.csv")
    val delimiter = ","
    val schemaString = "a,b".split(delimiter) // csv header
    // Automated schema creation
    val schema = StructType(schemaString.map(fieldName => StructField(fieldName, StringType, true)))
    val peopleLines = people.flatMap(x => x.split("\n"))
    val rowRDD = peopleLines.map(p => Row.fromSeq(p.split(delimiter)))
    val peopleSchemaRDD = sqlContext.applySchema(rowRDD, schema)
    peopleSchemaRDD.registerTempTable("people")
    sqlContext.sql("SELECT b FROM people").foreach(println)
  }
}
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.SQLContext.applySchema(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/sql/types/StructType;)Lorg/apache/spark/sql/DataFrame;
    at Mainobject$.main(Mainobject.scala:34)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I have also tried the same approach using the command line shell that ships with Spark, and it works there, but when I create a Scala project and try to run it I get the error above. What am I doing wrong?
1 Answer
A NoSuchMethodError usually means there is an incompatibility between libraries: the code was compiled against one version of a library but runs against another. In this particular case, you may be compiling against Spark 1.3 (where applySchema returns a DataFrame and StructType lives in org.apache.spark.sql.types) while running against an older version of Spark.
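If that is the diagnosis, the fix is to make sure every Spark artifact in the build comes from the same release line. A minimal build.sbt sketch follows; the exact version numbers here are assumptions for illustration, so substitute the ones that match your cluster:

```scala
// build.sbt -- illustrative version pinning; the version numbers are assumptions
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // spark-core and spark-sql must come from the same Spark release,
  // and that release must match the Spark installation you submit to
  "org.apache.spark" %% "spark-core" % "1.3.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "1.3.0" % "provided"
)
```

Note also that in Spark 1.3 SQLContext.applySchema was deprecated in favor of createDataFrame, so after upgrading the call in your code becomes sqlContext.createDataFrame(rowRDD, schema).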