I have the following simple Python code:
from __future__ import print_function
import sys
from operator import add
from pyspark import SparkContext

if __name__ == "__main__":
    print(len(sys.argv))
    if len(sys.argv) < 2:
        print("Usage: wordcount <file>", file=sys.stderr)
        exit(-1)
    sc = SparkContext(appName="PythonWordCount")
    lines = sc.textFile(sys.argv[2], 1)
    counts = lines.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(add)
    output = counts.collect()
    for (word, count) in output:
        print("%s: %i" % (word, count))
    sc.stop()
Then I tried to run it on my local cluster with:
spark-submit --master spark://rws-lnx-sprk01:7077 /home/edamameQ/wordcount.py wordcount /home/edamameQ/wordTest.txt
wordTest.txt is definitely there:
edamameQ@spark-cluster:~$ ls
data jars myJob.txt wordTest.txt wordcount.py
But I keep getting the following error:
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
:
:
Caused by: java.io.FileNotFoundException: File file:/home/edamameQ/wordTest.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:520)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:137)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
The same code worked on AWS with an input file from an S3 location. Is there anything I need to adjust to run it on a local cluster? Thanks!
1 Answer
The file you are trying to read must be accessible from all the workers. If it is a local file, the only option is to keep a copy of it on every worker machine.
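A minimal sketch of the two usual ways to do that, reusing the paths from the question (the hdfs:// host and port below are placeholders I made up, not something from the question):

from pyspark import SparkContext

sc = SparkContext(appName="PythonWordCount")

# Option 1: copy wordTest.txt to the same absolute path on every worker
# machine (for example with scp), then use an explicit file:// URI so Spark
# reads it from the local filesystem of each node:
lines = sc.textFile("file:///home/edamameQ/wordTest.txt")

# Option 2: put the file on storage that every node can reach, such as HDFS,
# just like the S3 input that already worked on AWS
# (the namenode host/port here are placeholders):
# lines = sc.textFile("hdfs://namenode:9000/user/edamameQ/wordTest.txt")

print(lines.count())
sc.stop()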