PySpark S3 error: java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException

guykilcj · posted 2021-05-18 in Spark

I am failing to set up a Spark cluster that can read files from AWS S3. The software I am using is as follows:
hadoop-aws-3.2.0.jar
aws-java-sdk-1.11.887.jar
spark-3.0.1-bin-hadoop3.2.tgz
Python version: Python 3.8.6

    from pyspark.sql import SparkSession, SQLContext
    from pyspark.sql.types import *
    from pyspark.sql.functions import *
    import sys

    spark = (SparkSession.builder
        .appName("AuthorsAges")
        .appName('SparkCassandraApp')
        .getOrCreate())
    spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
    spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
    spark._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
    spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
    spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")
    input_file = 's3a://spark-test-data/Fire_Department_Calls_for_Service.csv'
    file_schema = StructType([StructField("Call_Number", StringType(), True),
        StructField("Unit_ID", StringType(), True),
        StructField("Incident_Number", StringType(), True),
        ...
        ...
    # Read file into a Spark DataFrame
    input_df = (spark.read.format("csv")
        .option("header", "true")
        .schema(file_schema)
        .load(input_file))

The code fails as soon as it starts executing spark.read.format. It appears the class cannot be found: java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException

    File "<stdin>", line 1, in <module>
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/readwriter.py", line 178, in load
      return self._df(self._jreader.load(path))
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/utils.py", line 128, in deco
      return f(*a,**kw)
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o51.load.
    : java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException
      at java.lang.Class.forName0(Native Method)
      at java.lang.Class.forName(Class.java:348)
      at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2532)
      at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2497)
      at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2593)
      at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3269)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
      at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
      at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
      at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
      at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
      at scala.Option.getOrElse(Option.scala:189)
      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
      at py4j.Gateway.invoke(Gateway.java:282)
      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
      at py4j.commands.CallCommand.execute(CallCommand.java:79)
      at py4j.GatewayConnection.run(GatewayConnection.java:238)
      at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.s3.model.MultiObjectDeleteException
      at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
      at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:351)

I have been trying to find a working combination of the jars above and Python, but have not found one. I kept running into various NoClassDefFoundError exceptions, so I decided to use the latest versions of all the jars listed above and of Python, still without success.
I would like to know which versions of the jars and Python you used to successfully set up a cluster that can access S3 via s3a through PySpark. Thanks in advance for your replies/help.


bksxznpy1#

Hadoop 3.2 was built against AWS SDK 1.11.563; put the full shaded SDK of that specific version, "aws-java-sdk-bundle", on the classpath and all should be well.
The SDK has been "fussy" in the past... an upgrade always brings surprises. There is an open issue for an AWS SDK update; it may be time for someone to do that again.
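The core point here is that the hadoop-aws jar and the AWS SDK jar must be the exact pair the Hadoop release was compiled against. A minimal sketch of that pairing as a lookup table (the version pairs are the ones reported in this thread; the helper name is hypothetical):

```python
# Known-good pairings of a hadoop-aws version with the AWS SDK artifact it
# was built against, as reported in this thread. Other versions may work,
# but these were confirmed; check the hadoop-aws pom for anything else.
COMPATIBLE_AWS_SDK = {
    "3.2.0": ("aws-java-sdk-bundle", "1.11.563"),
    "2.7.4": ("aws-java-sdk", "1.7.4.2"),
}

def aws_sdk_for(hadoop_aws_version):
    """Return the (artifact, version) AWS SDK pair for a hadoop-aws jar."""
    try:
        return COMPATIBLE_AWS_SDK[hadoop_aws_version]
    except KeyError:
        raise ValueError(
            f"No known SDK pairing for hadoop-aws {hadoop_aws_version}; "
            "check the hadoop-aws pom on Maven Central")

print(aws_sdk_for("3.2.0"))
```

Mixing, say, hadoop-aws-3.2.0.jar with aws-java-sdk-1.11.887.jar (as in the question) is exactly the kind of mismatch that surfaces as NoClassDefFoundError at read time.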


qyswt5oh2#

So I cleaned everything out and reinstalled the following jar versions, and it worked: hadoop-aws-2.7.4.jar and aws-java-sdk-1.7.4.2.jar. Spark installation: spark-2.4.7-bin-hadoop2.7. Python version: Python 3.6.
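When trying combinations like this, it helps to check which hadoop-aws and AWS SDK jars actually ended up in Spark's jars/ directory before launching anything. A hypothetical diagnostic sketch (the function name and filename regex are assumptions, not from this answer):

```python
import re
from pathlib import Path

# Hypothetical sketch: report the hadoop-aws and aws-java-sdk(-bundle)
# versions found in a Spark jars/ directory, so a mismatched pair is
# visible before it surfaces as NoClassDefFoundError at runtime.
JAR_RE = re.compile(r"^(hadoop-aws|aws-java-sdk(?:-bundle)?)-(\d[\d.]*)\.jar$")

def find_s3a_jars(jars_dir):
    """Map artifact name -> version for the S3A-related jars in jars_dir."""
    found = {}
    for jar in sorted(Path(jars_dir).glob("*.jar")):
        m = JAR_RE.match(jar.name)
        if m:
            found[m.group(1)] = m.group(2)
    return found
```

For the setup in this answer you would expect it to report hadoop-aws 2.7.4 alongside aws-java-sdk 1.7.4.2, and nothing else S3A-related.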


pbgvytdp3#

I solved this on Spark 3.0 / Hadoop 3.2. I have also documented my answer here: AWS EKS Spark 3.0, Hadoop 3.2 error - NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException
The problem can be solved by using the following aws-java-sdk package:
aws-java-sdk-bundle-1.11.874.jar (https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle/1.11.874)
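Rather than copying jars by hand, a matched pair can also be resolved from Maven Central at session startup via the standard `spark.jars.packages` setting. A sketch under the assumptions of this answer (Spark 3.x with hadoop-aws 3.2.0 and the bundle version above; requires network access to Maven Central, and the config must be set before the first SparkSession is created):

```python
from pyspark.sql import SparkSession

# Sketch: let Spark fetch a matched hadoop-aws / aws-java-sdk-bundle pair
# from Maven Central instead of placing jars in jars/ manually.
# The versions below are the ones reported working in this answer.
spark = (SparkSession.builder
         .appName("S3AReadExample")
         .config("spark.jars.packages",
                 "org.apache.hadoop:hadoop-aws:3.2.0,"
                 "com.amazonaws:aws-java-sdk-bundle:1.11.874")
         .getOrCreate())

# With the matched jars on the classpath, the s3a read from the question
# should no longer throw NoClassDefFoundError.
df = (spark.read
      .option("header", "true")
      .csv("s3a://spark-test-data/Fire_Department_Calls_for_Service.csv"))
```

Because the bundle is the shaded, all-in-one SDK, this also avoids pulling in a standalone aws-java-sdk jar of a different version.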
