I'm trying to access S3 (via the s3a protocol) from PySpark (version 2.2.0), but I'm running into some difficulty.
I'm using the hadoop-aws and aws-java-sdk packages:
pyspark --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2
Here is my code:
sc._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ACCESS_KEY_ID)
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_SECRET_ACCESS_KEY)
rdd = sc.textFile('s3a://spark-test-project/large-file.csv')
print(rdd.first())  # first() already returns a plain string, so no .show() here
I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1361, in first
rs = self.take(1)
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1313, in take
totalParts = self.getNumPartitions()
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 385, in getNumPartitions
return self._jrdd.partitions().size()
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/sql/utils.py", line 63, in deco
return f(*a,**kw)
File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o34.partitions.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 32750D3DED4067BD, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: jAhO0tWTblPEUehF1Bul9WZj/9G7woaHFVxb8gzsOpekam82V/Rm9zLgdLDNsGZ6mPizGZmo6xI=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Is this a bug in the AWS Java SDK? I'm new to Spark, so I don't know whether there's some way to get better diagnostic information out of AWS than AWS Error Code: null.
2 Answers
vddsk6oq1#
For what it's worth, I have this line in my spark-defaults.conf file on AWS:
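A plausible form of that line, reusing the same package coordinates as in the question (the exact artifacts and versions here are an assumption, not necessarily the answerer's original line):

spark.jars.packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2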
I also made sure that the security group I used when setting up EC2 had access to S3.
After those two things, I had no problem reading files from S3:
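A minimal sketch of such a read (the bucket and key below are placeholders, not the answerer's actual paths):

rdd = sc.textFile('s3a://my-bucket/path/to/file.csv')  # hypothetical bucket/key; substitute your own
print(rdd.first())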
Alternatively, if you're using AWS EMR, you should be able to access S3 directly:
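For example (again with a placeholder bucket; EMR's built-in EMRFS connector serves the plain s3:// scheme, so no extra packages or hand-set keys are needed):

rdd = sc.textFile('s3://my-bucket/path/to/file.csv')  # s3:// works out of the box on EMR
print(rdd.first())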
qojgxg4l2#
"Bad Request" is the message from S3 that means "this didn't work, and we won't tell you why".
There is a whole section on troubleshooting S3A in the documentation.
If your bucket is hosted in one of the regions that only supports the S3 "V4" authentication protocol (Frankfurt, London, Seoul), then you need to set the fs.s3a.endpoint field to the region-specific endpoint... the docs have the details.
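For example, assuming the bucket lives in Frankfurt (eu-central-1); substitute your own region's endpoint:

sc._jsc.hadoopConfiguration().set('fs.s3a.endpoint', 's3.eu-central-1.amazonaws.com')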
Otherwise, try using
s3a://landsat-pds/scene_list.gz
as a source. It's a public CSV file that requires no authentication. If you can't read that, then you have a much bigger problem.
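As a quick smoke test in the same PySpark session (textFile decompresses the .gz transparently):

rdd = sc.textFile('s3a://landsat-pds/scene_list.gz')
print(rdd.first())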