When I try to run my job through pyspark, it fails with the following error:

pyspark.sql.utils.AnalysisException: u'org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.io.IOException: Error accessing Bucket xyz)
In addition, I have set the following parameters:
.config("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") \
.config("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS") \
.config("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")\
.config("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")\
.config("fs.gs.working.dir", "/")\
.config("fs.gs.path.encoding", "uri-path")\
.config("fs.gs.reported.permissions", "777")\
.config("google.cloud.auth.service.account.enable", "true")\
.config("google.cloud.auth.service.account.json.keyfile", JSON_KEY_FILE)
At the same time, with the help of the JSON service account, I am able to write to my GCP bucket.
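For reference, a minimal sketch of the kind of write that does succeed with the same credentials (gs://xyz/tmp/write-test is a placeholder path):

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
# A direct write through the GCS connector, bypassing the Hive metastore.
df.write.mode("overwrite").parquet("gs://xyz/tmp/write-test")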