PySpark loading CSV — AttributeError: 'RDD' object has no attribute '_get_object_id'

x3naxklr asked on 2021-05-29 in Hadoop

I am trying to load a CSV file into a Spark DataFrame. This is what I have done so far:

from pyspark import SparkConf, SparkContext
from pyspark import sql

# sc is a SparkContext.
appName = "testSpark"
master = "local"
conf = SparkConf().setAppName(appName).setMaster(master)
sc = SparkContext(conf=conf)
sqlContext = sql.SQLContext(sc)
# csv path
text_file = sc.textFile("hdfs:///path/to/sensordata20171008223515.csv")
df = sqlContext.load(source="com.databricks.spark.csv", header='true', path=text_file)
print df.schema()

Here is the traceback:

Traceback (most recent call last):
  File "/home/centos/main.py", line 16, in <module>
    df = sc.textFile(text_file).map(lambda line: (line.split(';')[0], line.split(';')[1])).collect()
  File "/usr/hdp/2.5.6.0-40/spark/python/lib/pyspark.zip/pyspark/context.py", line 474, in textFile
  File "/usr/hdp/2.5.6.0-40/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 804, in __call__
  File "/usr/hdp/2.5.6.0-40/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 278, in get_command_part
AttributeError: 'RDD' object has no attribute '_get_object_id'

I am new to this, so it would be very helpful if someone could point out what I am doing wrong.


cig3rfwq 1#

You cannot pass an RDD to the CSV reader. You should use the path directly:

df = sqlContext.load(source="com.databricks.spark.csv",
                     header='true',
                     path="hdfs:///path/to/sensordata20171008223515.csv")

Only a few formats (notably JSON) support an RDD as the input argument.
