Loading a local file into HDFS: hdfs put vs. Spark

sxpgvts3 · published 2022-12-09 in HDFS

The use case is to load a local file into HDFS. Below are two approaches to do the same; please suggest which one is more efficient.
Approach 1: using the hdfs put command

hadoop fs -put /local/filepath/file.parquet /user/table_nm/

Approach 2: using Spark.

// Note: on a cluster, a bare path resolves against HDFS; reading a local
// file requires the file:// scheme (and the file must be visible to the driver/executors).
spark.read.parquet("file:///local/filepath/file.parquet").createOrReplaceTempView("temp")
spark.sql("insert into table table_nm select * from temp")

Note:

  1. The source file can be in any format.
  2. No transformations are needed for the file load.
  3. table_nm is a Hive external table pointing to /user/table_nm/.

ffx8fchx1#

Assuming these are already locally built .parquet files, using -put will be faster, since there is no overhead of starting a Spark application.
If there are many files, there is simply less work to do with put.
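As a minimal sketch of the many-files case (assuming the files sit under /local/filepath/ and the hadoop CLI is on the PATH), a single -put invocation can copy a whole glob in one go, with no Spark application launch:

```shell
# Copy all parquet files with one put; -f overwrites files that already
# exist in the target directory (assumption: overwriting is acceptable).
hadoop fs -put -f /local/filepath/*.parquet /user/table_nm/

# Optional sanity check: list what landed in the external table's location.
hadoop fs -ls /user/table_nm/
```

Because table_nm is an external table pointing at /user/table_nm/, the new files are visible to Hive queries as soon as the copy finishes; no insert statement is needed.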
