AccessDenied when calling the CreateMultipartUpload operation with django-storages and boto3 in Django

fv2wmkja · asked 2022-12-20

I want to use django-storages to store my model files in Amazon S3, but I get an Access Denied error. I have granted the user nearly every S3 permission (PutObject, ListBucketMultipartUploads, ListMultipartUploadParts, AbortMultipartUpload, and more) on all resources, but that did not fix it.

    • settings.py
...
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_S3_REGION_NAME = 'eu-west-1'
AWS_S3_CUSTOM_DOMAIN = 'www.xyz.com'
AWS_DEFAULT_ACL = None
AWS_STORAGE_BUCKET_NAME = 'www.xyz.com'
...

Using the Django shell, I tried exercising the storage backend as shown below.

Python 3.6.6 (default, Sep 12 2018, 18:26:19)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import os
>>> AWS_ACCESS_KEY_ID = os.environ.get( 'AWS_ACCESS_KEY_ID', 'anything' )
>>> AWS_SECRET_ACCESS_KEY = os.environ.get( 'AWS_SECRET_ACCESS_KEY', 'anything' )
>>> AWS_DEFAULT_ACL = 'public-read'
>>> from django.core.files.storage import default_storage
>>> file = default_storage.open('test', 'w')
...
>>> file.write('storage contents')
2018-09-27 16:41:42,596 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function validate_ascii_metadata at 0x7fdb5e848d08>
2018-09-27 16:41:42,596 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function sse_md5 at 0x7fdb5e848158>
2018-09-27 16:41:42,597 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function validate_bucket_name at 0x7fdb5e8480d0>
2018-09-27 16:41:42,597 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7fdb5c5d1128>>
2018-09-27 16:41:42,597 botocore.hooks [DEBUG] Event before-parameter-build.s3.CreateMultipartUpload: calling handler <function generate_idempotent_uuid at 0x7fdb5e846c80>
2018-09-27 16:41:42,598 botocore.hooks [DEBUG] Event before-call.s3.CreateMultipartUpload: calling handler <function add_expect_header at 0x7fdb5e848598>
2018-09-27 16:41:42,598 botocore.hooks [DEBUG] Event before-call.s3.CreateMultipartUpload: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7fdb5c5d1128>>
2018-09-27 16:41:42,598 botocore.endpoint [DEBUG] Making request for OperationModel(name=CreateMultipartUpload) with params: {'url_path': '/www.xyz.com/test?uploads', 'query_string': {}, 'method': 'POST', 'headers': {'Content-Type': 'application/octet-stream', 'User-Agent': 'Boto3/1.7.80 Python/3.6.6 Linux/4.14.67-66.56.amzn1.x86_64 Botocore/1.11.1 Resource'}, 'body': b'', 'url': 'https://s3.eu-west-1.amazonaws.com/www.xyz.com/test?uploads', 'context': {'client_region': 'eu-west-1', 'client_config': <botocore.config.Config object at 0x7fdb5c8e80b8>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': 'www.xyz.com'}}}
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event request-created.s3.CreateMultipartUpload: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fdb5c8db780>>
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event choose-signer.s3.CreateMultipartUpload: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7fdb5cabff98>>
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event choose-signer.s3.CreateMultipartUpload: calling handler <function set_operation_specific_signer at 0x7fdb5e846b70>
2018-09-27 16:41:42,599 botocore.hooks [DEBUG] Event before-sign.s3.CreateMultipartUpload: calling handler <function fix_s3_host at 0x7fdb5e983048>
2018-09-27 16:41:42,600 botocore.utils [DEBUG] Checking for DNS compatible bucket for: https://s3.eu-west-1.amazonaws.com/www.xyz.com/test?uploads
2018-09-27 16:41:42,600 botocore.utils [DEBUG] Not changing URI, bucket is not DNS compatible: www.xyz.com
2018-09-27 16:41:42,601 botocore.auth [DEBUG] Calculating signature using v4 auth.
2018-09-27 16:41:42,601 botocore.auth [DEBUG] CanonicalRequest:
POST
/www.xyz.com/test
uploads=
content-type:application/octet-stream
host:s3.eu-west-1.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf343ddd27ae41e4649b934ca495991b7852b855
x-amz-date:20180927T164142Z

content-type;host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afb65gdfg33441e4649b934ca495991b7852b855
2018-09-27 16:41:42,601 botocore.auth [DEBUG] StringToSign:
AWS4-HMAC-SHA256
20180927T164142Z
20180927/eu-west-1/s3/aws4_request
8649ef591fb64412e923359a4sfvvffdd6d00915b9756d1611b38e346ae
2018-09-27 16:41:42,602 botocore.auth [DEBUG] Signature:
61db9afe5f87730a75692af5a95ggffdssd6f4e8e712d85c414edb14f
2018-09-27 16:41:42,602 botocore.endpoint [DEBUG] Sending http request: <AWSPreparedRequest stream_output=False, method=POST, url=https://s3.eu-west-1.amazonaws.com/www.xyz.com/test?uploads, headers={'Content-Type': b'application/octet-stream', 'User-Agent': b'Boto3/1.7.80 Python/3.6.6 Linux/4.14.67-66.56.amzn1.x86_64 Botocore/1.11.1 Resource', 'X-Amz-Date': b'20180927T164142Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fbdsdsffdss649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=X1234567890/20180927/eu-west-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=61db9afe5f87730a7sdfsdfs20b7137cf5d6f4e8e712d85c414edb14f', 'Content-Length': '0'}>
2018-09-27 16:41:42,638 botocore.parsers [DEBUG] Response headers: {'x-amz-request-id': '9E879E78E4883471', 'x-amz-id-2': 'ZkCfOMwLoD08Yy4Nzfxsdfdsdfds3y9wLxzqFw+o3175I+QEdtdtAi8vIEH1vi9iq9VGUC98GqlE=', 'Content-Type': 'application/xml', 'Transfer-Encoding': 'chunked', 'Date': 'Thu, 27 Sep 2018 16:41:42 GMT', 'Server': 'AmazonS3'}
2018-09-27 16:41:42,639 botocore.parsers [DEBUG] Response body:
b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9E879E78E4883471</RequestId><HostId>ZkCfOMwLoD08Yy4Nzfxo8RpzsdfsdfsxzqFw+o3175I+QEdtdtAi8vIEH1vi9iq9VGUC98GqlE=</HostId></Error>'
2018-09-27 16:41:42,639 botocore.hooks [DEBUG] Event needs-retry.s3.CreateMultipartUpload: calling handler <botocore.retryhandler.RetryHandler object at 0x7fdb5c618ac8>
2018-09-27 16:41:42,640 botocore.retryhandler [DEBUG] No retry needed.
2018-09-27 16:41:42,640 botocore.hooks [DEBUG] Event needs-retry.s3.CreateMultipartUpload: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7fdb5c5d1128>>
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/storages/backends/s3boto3.py", line 127, in write
    self._multipart = self.obj.initiate_multipart_upload(**parameters)
  File "/usr/local/lib/python3.6/dist-packages/boto3/resources/factory.py", line 520, in do_action
    response = action(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/boto3/resources/action.py", line 83, in __call__
    response = getattr(parent.meta.client, operation_name)(**params)
  File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied

These are the versions I am using.

boto3==1.7.80
botocore==1.11.1
Django==2.1
s3transfer==0.1.13
django-storages==1.7.1

Why is it raising the exception?


tyg4sfes1#

It turned out that I had to specify the object-level resource in the policy, i.e. add permission for any object under the bucket via /*.

Before

...
"Resource": [
            "arn:aws:s3:::www.xyz.com"
            ]
...

After

...
"Resource": [
            "arn:aws:s3:::www.xyz.com/*"
            ]
...
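The distinction matters because object-level operations such as CreateMultipartUpload authorize against the object ARN (`arn:aws:s3:::bucket/*`), not the bucket ARN. A minimal illustrative check, in Python, of whether a policy document contains the object-level resource (the helper is hypothetical, not part of any AWS SDK):

```python
def has_object_level_resource(policy, bucket):
    # True if any Allow statement's Resource covers objects *inside*
    # the bucket (the "/*" ARN), not just the bucket itself.
    object_arn = f"arn:aws:s3:::{bucket}/*"
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if object_arn in resources:
            return True
    return False

before = {"Statement": [{"Effect": "Allow",
                         "Resource": ["arn:aws:s3:::www.xyz.com"]}]}
after = {"Statement": [{"Effect": "Allow",
                        "Resource": ["arn:aws:s3:::www.xyz.com/*"]}]}
```

With the "Before" policy the check fails; with the "After" policy it passes, matching the fix above.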

uqjltbpv2#

I received this error too, but I had made a different mistake. django-storages was creating objects with the ACL "public-read". That is the default, which makes sense for a web framework (and it was in fact what I wanted), but I had not included the ACL-related permissions in my IAM policy:

  • PutObjectAcl
  • PutObjectVersionAcl

This policy worked for me (it is based on this one):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucketMultipartUploads",
                "s3:AbortMultipartUpload",
                "s3:PutObjectVersionAcl",
                "s3:DeleteObject",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/*",
                "arn:aws:s3:::bucketname"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bucketname"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
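Alternatively, if publicly readable objects are not required, django-storages can be told not to send an ACL at all, which avoids needing the s3:PutObjectAcl permissions in the first place. A sketch of the relevant settings.py fragment (setting it explicitly avoids relying on whatever default your django-storages version uses):

```python
# settings.py -- with AWS_DEFAULT_ACL = None, django-storages sends no
# x-amz-acl header on uploads, so s3:PutObjectAcl is not required.
AWS_DEFAULT_ACL = None
```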

xmq68pz93#

Another possible cause is that your bucket has encryption enabled. In that case you need a second statement adding kms:GenerateDataKey and kms:Decrypt. Here is my statement:

{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
    ],
    "Resource": "*"
}

Note that I am using the built-in AWS-managed key, not a CMK. See the AWS docs here for more information.
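On the client side, boto3 also lets you request SSE-KMS per upload through ExtraArgs. A minimal sketch (the helper name is my own; ServerSideEncryption and SSEKMSKeyId are upload arguments accepted by upload_file/upload_fileobj):

```python
def sse_kms_extra_args(kms_key_id=None):
    # ExtraArgs for upload_file/upload_fileobj to a bucket that enforces
    # SSE-KMS; omit kms_key_id to use the bucket's default KMS key.
    args = {"ServerSideEncryption": "aws:kms"}
    if kms_key_id is not None:
        args["SSEKMSKeyId"] = kms_key_id
    return args

# usage (sketch):
# s3.Bucket(name).upload_fileobj(blob, Key=key, ExtraArgs=sse_kms_extra_args())
```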


ivqmmu1c4#

FYI, another cause is that your destination bucket does not have the right bucket policy defined.
In my use case, I was trying to copy S3 files from a bucket in AWS account A to another bucket in AWS account B. I had created a role and policy to enable this, but I had not added a bucket policy allowing the external AWS role to write to the bucket. I was able to fix this by following this AWS documentation page: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/
(Ignore the rest if the above link works.)
If the link breaks, the site says:
Important: Objects in Amazon S3 are no longer automatically owned by the AWS account that uploads them. By default, any newly created bucket now has the bucket owner enforced setting enabled. Using the bucket owner enforced setting when changing object ownership is also a best practice. Note, however, that this option disables all bucket ACLs, as well as the ACLs on any object in the bucket.
With bucket owner enforced set in S3 Object Ownership, all objects in an Amazon S3 bucket are automatically owned by the bucket owner. The bucket owner enforced feature also disables all access control lists (ACLs), which simplifies access management for data stored in S3. However, for existing buckets, an Amazon S3 object is still owned by the AWS account that uploaded it unless ACLs are explicitly disabled. To change object ownership in an existing bucket, see How do I change the ownership of publicly owned objects in an S3 bucket?
If your existing method of sharing objects relies on using ACLs, identify the principals that use ACLs to access objects. For more information about how to review permissions before disabling any ACLs, see Prerequisites for disabling ACLs.
If you can't disable your ACLs, follow these steps to take ownership of objects until you can adjust the bucket policy:
1. In the source account, create an AWS Identity and Access Management (IAM) customer managed policy that grants an IAM identity (user or role) the proper permissions. The IAM user must have access to retrieve objects from the source bucket and put objects back into the destination bucket. You can use an IAM policy similar to the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET",
                "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET",
                "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*"
            ]
        }
    ]
}

Note: This example IAM policy includes only the minimum required permissions for listing objects and copying objects across buckets in different accounts. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, you must also grant permissions for s3:GetObjectTagging. If you experience an error, try performing these steps as an admin user.
2. In the source account, attach the customer managed policy to the IAM identity that you want to use to copy objects to the destination bucket.
3. In the destination account, set S3 Object Ownership on the destination bucket to bucket owner preferred. After you set S3 Object Ownership, new objects uploaded with the access control list (ACL) set to bucket-owner-full-control are automatically owned by the bucket's account.
4. In the destination account, modify the bucket policy of the destination bucket to grant the source account permission to upload objects. Additionally, include a condition in the bucket policy that requires uploaded objects to set the ACL to bucket-owner-full-control. You can use a statement similar to the following:
Note: Replace destination-DOC-EXAMPLE-BUCKET with the name of the destination bucket. Then, replace arn:aws:iam::222222222222:user/Jane with the Amazon Resource Name (ARN) of the IAM identity from the source account.

{
    "Version": "2012-10-17",
    "Id": "Policy1611277539797",
    "Statement": [
        {
            "Sid": "Stmt1611277535086",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::222222222222:user/Jane"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "Stmt1611277877767",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::222222222222:user/Jane"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET"
        }
    ]
}

Note: This example bucket policy includes only the minimum required permissions for uploading objects with the required ACL. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, you must also grant permissions for s3:GetObjectTagging.

5. After the IAM policy and bucket policy are configured, the IAM identity from the source account must upload objects to the destination bucket. Make sure that the ACL is set to bucket-owner-full-control. For example, the source IAM identity must run the cp AWS CLI command with the --acl option:
aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/object.txt s3://destination-DOC-EXAMPLE-BUCKET/object.txt --acl bucket-owner-full-control
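The same step can be done from boto3 with copy_object. A sketch that builds its keyword arguments (the helper is my own; the ACL value matches the s3:x-amz-acl condition in the bucket policy above):

```python
def cross_account_copy_args(src_bucket, dst_bucket, key):
    # Keyword arguments for s3_client.copy_object(**args); the ACL
    # grants the destination bucket owner full control, satisfying the
    # bucket policy's s3:x-amz-acl condition.
    return {
        "CopySource": {"Bucket": src_bucket, "Key": key},
        "Bucket": dst_bucket,
        "Key": key,
        "ACL": "bucket-owner-full-control",
    }

# usage (sketch): s3_client.copy_object(
#     **cross_account_copy_args("source-bucket", "destination-bucket", "object.txt"))
```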


hfsqlsce5#

In my case, an upload to S3 from a GitHub Actions workflow failed with a similar error: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied.
I verified the policies on the IAM user and the S3 bucket; they were fine, and an identical setup worked for a different IAM user and bucket.
Since GitHub does not let you view secrets once they are added, I rotated the security credentials for the IAM user and re-entered them in GitHub, which fixed it.


8yoxcaq76#

I received the same error (An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied) with the following Python script:

import logging
import datetime

import boto3
import boto3.s3.transfer

def create_boto3_client(s3_id, s3_secret_key):
    try:
        logging.info('####### Creating boto3 client... #######')
        s3_client = boto3.resource(
            's3',
            aws_access_key_id=s3_id,
            aws_secret_access_key=s3_secret_key,
        )
        logging.info('####### Successfully created boto3 client #######')
    except Exception:
        logging.error('####### Failed to create boto3 client #######')
        raise  # without this, the function would return an unbound name
    return s3_client

def upload_file_to_s3(s3_client, s3_bucket, aws_path, blob):
    try:
        ul_start = datetime.datetime.now()
        logging.info(f'####### Starting file upload at {ul_start} #######')
        config = boto3.s3.transfer.TransferConfig(
            multipart_threshold=1024 * 25,
            max_concurrency=10,
            multipart_chunksize=1024 * 25,
            use_threads=True,
        )
        s3_client.Bucket(s3_bucket).upload_fileobj(blob, Key=aws_path, Config=config)
        ul_end = datetime.datetime.now()
        logging.info(f'####### File uploaded to AWS S3 bucket at {ul_end} #######')
        ul_duration = ul_end - ul_start
        logging.info(f'####### Upload duration: {ul_duration} #######')
    except Exception as e:
        logging.error(f'####### Failed to upload file to AWS S3: {e} #######')
        raise  # ul_end and ul_duration would be unbound on failure
    return ul_start, ul_end, ul_duration

In my case, the aws_path (the Key passed to upload_fileobj) was incorrect: it pointed to a path that s3_client had no access to.
