I am trying to read data stored in Kudu using PySpark 2.1.0:
>>> from os.path import expanduser, join, abspath
>>> from pyspark.sql import SparkSession
>>> from pyspark.sql import Row
>>> spark = SparkSession.builder \
...     .master("local") \
...     .appName("HivePyspark") \
...     .config("hive.metastore.warehouse.dir", "hdfs:///user/hive/warehouse") \
...     .enableHiveSupport() \
...     .getOrCreate()
>>> spark.sql("select count(*) from mySchema.myTable").show()
I have Kudu 1.2.0 installed on the cluster. The tables are Hive/Impala tables.
When the last line executes, I get the following error:
...
: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Error in loading storage handler.com.cloudera.kudu.hive.KuduStorageHandler
...
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error in loading storage handler.com.cloudera.kudu.hive.KuduStorageHandler
at org.apache.hadoop.hive.ql.metadata.HiveUtils.getStorageHandler(HiveUtils.java:315)
at org.apache.hadoop.hive.ql.metadata.Table.getStorageHandler(Table.java:284)
... 61 more
Caused by: java.lang.ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler
I consulted the following resources:
https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
https://issues.apache.org/jira/browse/kudu-1603
https://github.com/bkvarda/iot_demo/blob/master/total_data_count.py
https://kudu.apache.org/docs/developing.html#_kudu_python_client
I would like to know how to include the Kudu-related dependencies in my PySpark program so that I can get past this error.
2 Answers
kyks70gy1#
I solved this by passing the matching kudu-spark jar to the pyspark2 shell or to the spark2-submit command.
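As a hedged sketch, the same thing can also be done from inside the program by setting `spark.jars.packages` before the session is created. The Maven coordinate below is an assumption (a Spark 2.x / Scala 2.11 kudu-spark artifact); substitute the version that matches your Spark and Kudu installation.

```python
# Assumed Maven coordinate -- pick the kudu-spark artifact that matches
# your Spark/Scala/Kudu versions.
KUDU_SPARK_PACKAGE = "org.apache.kudu:kudu-spark2_2.11:1.5.0"

def builder_with_kudu(builder, package=KUDU_SPARK_PACKAGE):
    """Attach the kudu-spark package to a SparkSession builder.

    Setting spark.jars.packages here is equivalent to passing
    --packages on the pyspark2 / spark2-submit command line.
    """
    return builder.config("spark.jars.packages", package)

# Typical use (requires a running cluster, so shown as a comment):
#   from pyspark.sql import SparkSession
#   spark = (builder_with_kudu(SparkSession.builder.appName("HivePyspark"))
#            .enableHiveSupport()
#            .getOrCreate())
```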
whlutmcx2#
Apache Spark 2.3
The following code is for your reference.
Read a Kudu table from PySpark as follows:
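The original answer's code block is not reproduced here, so this is a minimal sketch of a read via the kudu-spark DataSource. The master address and table name are placeholders; tables created through Impala are registered in Kudu with an `impala::` prefix.

```python
def kudu_read_options(master, table):
    """Option map for the kudu-spark DataSource.

    master -- placeholder Kudu master address, e.g. "kudu-master:7051"
    table  -- Kudu table name; Impala-created tables carry the
              "impala::" prefix, e.g. "impala::mySchema.myTable"
    """
    return {"kudu.master": master, "kudu.table": table}

# Typical use (requires a live Kudu master, so shown as a comment):
#   df = (spark.read
#         .format("org.apache.kudu.spark.kudu")
#         .options(**kudu_read_options("kudu-master:7051",
#                                      "impala::mySchema.myTable"))
#         .load())
#   df.show()
```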
Write to a Kudu table as follows:
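Again the answer's code block is missing, so here is a hedged sketch of a write, assuming the kudu-spark DataSource is on the classpath; the master and table values are placeholders, and the write mode is chosen on the DataFrameWriter itself.

```python
def kudu_write_options(master, table):
    """Option map for writing through the kudu-spark DataSource.

    Both values are placeholders: master is the Kudu master address
    and table is the Kudu table name ("impala::" prefix for tables
    created via Impala).
    """
    return {"kudu.master": master, "kudu.table": table}

# Typical use (requires a live Kudu master, so shown as a comment):
#   (df.write
#      .format("org.apache.kudu.spark.kudu")
#      .options(**kudu_write_options("kudu-master:7051",
#                                    "impala::mySchema.myTable"))
#      .mode("append")
#      .save())
```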
Reference link: https://medium.com/@sciencecommitter/how-to-read-and-write-to-kudu-tables-in-pyspark-via-impala-c4334b98cf05
If you want to use Scala instead, here is the reference link:
https://kudu.apache.org/docs/developing.html