Load an RDD into Hive

2mbi3lxu · asked on 2021-06-28 · in Hive

Using PySpark on Spark 1.6.x, I want to load an RDD (k=table_name, v=content) into a Hive table partitioned by (year, month, day), following the logic of this SQL:

ALTER TABLE db_schema.%FILENAME_WITHOUT_EXTENSION% DROP IF EXISTS PARTITION (year=%YEAR%, month=%MONTH%, day=%DAY%);
LOAD DATA INTO TABLE db_schema.%FILENAME_WITHOUT_EXTENSION% PARTITION (year=%YEAR%, month=%MONTH%, day=%DAY%);

Can anyone advise?
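One way to drive the templated statements above from PySpark is to render them with plain Python string formatting and pass each one to the Hive SQL entry point (`HiveContext.sql` on Spark 1.6). A minimal sketch of the rendering step; the table name and partition values here are hypothetical:

```python
# Template for the drop-partition statement, with Python format fields
# in place of the %...% placeholders from the question.
DROP_TMPL = (
    "ALTER TABLE db_schema.{table} DROP IF EXISTS "
    "PARTITION (year={year}, month={month}, day={day})"
)

def render_drop(table, year, month, day):
    """Render one executable Hive statement from the template."""
    return DROP_TMPL.format(table=table, year=year, month=month, day=day)

stmt = render_drop("my_table", 2021, 6, 28)
# In a real Spark 1.6 job this string would be executed with:
#   hive_context.sql(stmt)
print(stmt)
```

The same approach applies to the LOAD DATA statement; each rendered string is executed separately, since `sql()` takes one statement at a time.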

sirbozc5 1#

from pyspark.sql import SparkSession

# Hive-enabled session (SparkSession is Spark 2.x+; on Spark 1.6 use HiveContext instead)
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Build an RDD, then convert it to a DataFrame with an explicit schema
rdd = spark.sparkContext.parallelize([(1, 'cat', '2016-12-20'), (2, 'dog', '2016-12-21')])
df = spark.createDataFrame(rdd, schema=['id', 'val', 'dt'])

# Write to a Hive table, partitioned by the 'dt' column
df.write.saveAsTable(name='default.test', format='orc', mode='overwrite', partitionBy='dt')

Use enableHiveSupport() together with df.write.saveAsTable().
