I just started working with EMR, Hadoop/Spark, etc. I'm trying to run Scala code in spark-shell to upload a file to an EMRFS S3 location, but I get the errors below.
If I run:
val bucketName = "bucket"
val outputPath = "test.txt"
scala> val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
<console>:27: error: not found: value PutObjectRequest
val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
^
Once I add the import for PutObjectRequest, I still get a different error:
scala> import com.amazonaws.services.s3.model.PutObjectRequest
import com.amazonaws.services.s3.model.PutObjectRequest
scala> val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
<console>:28: error: value builder is not a member of object com.amazonaws.services.s3.model.PutObjectRequest
val putRequest = PutObjectRequest.builder.bucket(bucketName).key(outputPath).build()
^
I'm not sure what I'm missing. Any help would be appreciated!
Note: the Spark version is 2.4.5.
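For context, the builder-style call attempted above matches the AWS SDK for Java v2, where `PutObjectRequest` lives in the `software.amazon.awssdk.services.s3.model` package, not the `com.amazonaws.services.s3.model` package that was imported (which is SDK v1). A minimal sketch of what the v2 version of this code would look like, assuming the v2 jars are on the spark-shell classpath and a local file `/tmp/test.txt` exists (both are assumptions, not from the question):

```scala
// AWS SDK for Java v2: PutObjectRequest really does have a builder here.
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.PutObjectRequest
import software.amazon.awssdk.core.sync.RequestBody
import java.nio.file.Paths

val bucketName = "bucket"
val outputPath = "test.txt"

// Builder pattern is the v2 idiom.
val putRequest = PutObjectRequest.builder().bucket(bucketName).key(outputPath).build()

// Uses the default credential/region provider chain (picked up automatically on EMR).
val s3 = S3Client.create()
s3.putObject(putRequest, RequestBody.fromFile(Paths.get("/tmp/test.txt")))
```

So the `value builder is not a member` error is the compiler correctly reporting that the v1 class imported above has no `builder` method.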
1 Answer
uidvcgyl1#
Instead of the builder, create the PutObjectRequest object through a suitable constructor. Also, use AmazonS3ClientBuilder to create the connection to S3.
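A minimal sketch of the constructor-based approach the answer describes, using the AWS SDK for Java v1 (the `com.amazonaws.*` package already imported in the question). The local path `/tmp/test.txt` is an assumed example file, not something from the question:

```scala
import java.io.File
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.PutObjectRequest

val bucketName = "bucket"
val outputPath = "test.txt"

// In SDK v1 the request is built with a constructor, not a builder:
// PutObjectRequest(bucketName, key, file)
val putRequest = new PutObjectRequest(bucketName, outputPath, new File("/tmp/test.txt"))

// AmazonS3ClientBuilder builds the client; on EMR the default
// credential/region chain is resolved automatically.
val s3Client = AmazonS3ClientBuilder.standard().build()
s3Client.putObject(putRequest)
```

The key point is that the builder pattern belongs to the client (`AmazonS3ClientBuilder`) in SDK v1, while request objects like `PutObjectRequest` use plain constructors; only in SDK v2 do request objects get builders.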