Unable to load the Spark SQL data source for HBase

lawou6xi · posted 2021-06-09 in HBase

I want to use Spark SQL to fetch data from an HBase table, but I get a ClassNotFoundException when creating the DataFrame. This is the exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/types/NativeType
    at org.apache.hadoop.hbase.spark.DefaultSource$$anonfun$generateSchemaMappingMap$1.apply(DefaultSource.scala:127)
    at org.apache.hadoop.hbase.spark.DefaultSource$$anonfun$generateSchemaMappingMap$1.apply(DefaultSource.scala:116)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    at org.apache.hadoop.hbase.spark.DefaultSource.generateSchemaMappingMap(DefaultSource.scala:116)
    at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:97)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at com.apache.spark.gettingStarted.SparkSQLOnHBaseTable.createTableAndPutData(SparkSQLOnHBaseTable.java:146)
    at com.apache.spark.gettingStarted.SparkSQLOnHBaseTable.main(SparkSQLOnHBaseTable.java:154)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.types.NativeType
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 14 more

Have any of you run into this problem? How did you solve it?
Here is my code:

// initializing spark context
    SparkConf sconf = new SparkConf().setMaster("local").setAppName("Test");
    // SparkContext sc = new SparkContext("local", "test", sconf);
    Configuration conf = HBaseConfiguration.create();
    JavaSparkContext jsc = new JavaSparkContext(sconf);
    try {
        HBaseAdmin.checkHBaseAvailable(conf);
        System.out.println("HBase is running");
    } catch (ServiceException e) {
        System.out.println("HBase is not running");
        e.printStackTrace();
    }
    SQLContext sqlContext = new SQLContext(jsc);

    // hbase.columns.mapping: comma-separated "<sql column> <type> <family:qualifier>"
    // entries; ":key" maps the HBase row key
    String sqlMapping = "KEY_FIELD STRING :key" + "," + "sql_city STRING personal:city" + ","
            + "sql_name STRING personal:name" + "," + "sql_designation STRING professional:designation" + ","
            + "sql_salary STRING professional:salary";

    HashMap<String, String> colMap = new HashMap<String, String>();
    colMap.put("hbase.columns.mapping", sqlMapping);
    colMap.put("hbase.table", "emp");

    // DataFrame dfJail =
    DataFrame df = sqlContext.read().format("org.apache.hadoop.hbase.spark").options(colMap).load();
    //DataFrame df = sqlContext.load("org.apache.hadoop.hbase.spark", colMap);

    // This is useful when issuing SQL text queries directly against the
    // sqlContext object.
    df.registerTempTable("temp_emp");

    DataFrame result = sqlContext.sql("SELECT count(*) from temp_emp");
    System.out.println("df  " + df);
    System.out.println("result " + result);

Here are the pom.xml dependencies:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.1.3</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-spark</artifactId>
        <version>2.0.0-SNAPSHOT</version>
    </dependency>
</dependencies>

j8yoct9x 1#

NativeType does not exist any more :( (and neither does dataTypes.scala).
The class is no longer available in that package.
It did exist in Spark 1.3.1, in dataTypes.scala.
You can see here that NativeType was made protected:
the commit that made NativeType protected
You are probably working from an old example.
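
In other words, the hbase-spark jar on your classpath was compiled against a Spark API that the Spark 1.6.x artifacts in your pom no longer provide. As a sketch of one way to line the versions up (assuming the hbase-spark SNAPSHOT jar you have was built against Spark 1.3.x, where NativeType was still public; check the jar you actually use), the Spark dependencies could be pinned to a matching 1.3.x release:

    <!-- Sketch only: 1.3.1 is the last release line that still ships
         org.apache.spark.sql.types.NativeType, which the connector expects -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.3.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.3.1</version>
    </dependency>

Note that Spark 1.3.x does not have the DataFrameReader API yet, so the commented-out sqlContext.load("org.apache.hadoop.hbase.spark", colMap) form would have to be used instead of sqlContext.read()...load(). The other direction is to stay on Spark 1.6 and use an hbase-spark build that was compiled against Spark 1.6.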
