python-3.x How do I get CloudWatch metric statistics using AWS Boto3?

ztigrdn8 asked 12 months ago in Python

I'm writing a Python 3 script that uses the Boto3 library to pull S3 space-utilization statistics from AWS CloudWatch.
I started with the AWS CLI and found I could get what I want with a command like this:

aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --namespace AWS/S3 --start-time 2017-03-06T00:00:00Z --end-time 2017-03-07T00:00:00Z --statistics Average --unit Bytes --region us-west-2 --dimensions Name=BucketName,Value=foo-bar Name=StorageType,Value=StandardStorage --period 86400 --output json

That returns the data I expect. Now I want to do the same thing in Python 3 / Boto3. My code so far is:

from datetime import datetime, timedelta
import boto3

seconds_in_one_day = 86400  # used for granularity

cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {
            'Name': 'BucketName',
            'Value': 'foo-bar'
        },
        {
            'Name': 'StorageType',
            'Value': 'StandardStorage'
        }
    ],
    MetricName='BucketSizeBytes',
    StartTime=datetime.now() - timedelta(days=7),
    EndTime=datetime.now(),
    Period=seconds_in_one_day,
    Statistics=[
        'Average'
    ],
    Unit='Bytes'
)

print(response)


When I run this, I get a valid response but no datapoints (the Datapoints array is empty). The two approaches look identical to me, except that the Python call doesn't seem to have a place for a region, which the command line requires.
I tried one more thing: my code computes the dates rather than hardcoding them as the command line does. I did try hardcoding the dates just to see whether I would get data, and the result was the same.
So my questions are:
Is the approach I'm using in Boto/Python equivalent to the command line? Assuming it is, what am I missing?


vh0rcniy1#

Here is a good example of fetching data from CloudWatch with boto3 in Python. It took me a few hours to get working, but it should now be easy to reference.

from datetime import datetime, timedelta

import boto3

def get_req_count(region, lb_name):
    client = boto3.client('cloudwatch', region_name=region)
    count = 0

    # Query window: yesterday through today, as ISO-format date strings
    today = datetime.utcnow().date()
    str_today = str(today)
    str_yesterday = str(today - timedelta(days=1))

    response = client.get_metric_statistics(
        Namespace="AWS/ApplicationELB",
        MetricName="RequestCount",
        Dimensions=[
            {
                "Name": "LoadBalancer",
                "Value": lb_name
            },
        ],
        StartTime=str_yesterday,
        EndTime=str_today,
        Period=86460,  # longer than the one-day window, so at most one datapoint
        Statistics=[
            "Sum",
        ]
    )

    # With a single datapoint in the window, this picks up the daily total
    for r in response['Datapoints']:
        count = r['Sum']

    return count

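Note that for an Application Load Balancer, the LoadBalancer dimension value is the ARN suffix of the form app/&lt;name&gt;/&lt;id&gt;, not the bare load balancer name. A hypothetical call might look like:

# 'app/my-load-balancer/50dc6c495c0c9188' is an illustrative ARN suffix
requests = get_req_count('us-west-2', 'app/my-load-balancer/50dc6c495c0c9188')
print(requests)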


33qvvth12#

This is what I did:

from datetime import datetime, timedelta

import boto3

client = boto3.client(service_name='cloudwatch', region_name='us-east-1')

response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    Period=300,
    StartTime=datetime.utcnow() - timedelta(seconds=600),
    EndTime=datetime.utcnow(),
    MetricName=metricVar,  # placeholder: the metric to query
    Statistics=['Average'],
    Unit='Percent',
    Dimensions=[
        {'Name': 'InstanceId', 'Value': asgName}  # placeholder: an instance id
    ])

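The two free variables above are the answerer's placeholders; a hypothetical assignment could be:

metricVar = 'CPUUtilization'       # any AWS/EC2 metric name
asgName = 'i-0123456789abcdef0'    # an EC2 instance id, despite the variable's name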


a9wyjsp73#

I can't see anything obviously wrong with your code, so the region looks like the prime suspect. You can set it when creating the client:

cloudwatch = boto3.client('cloudwatch', region_name='us-west-2')

If it isn't set there, boto first tries to take the region from the AWS_DEFAULT_REGION environment variable, and then from the ~/.aws/config file. Check those to see what default region is in effect.
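A quick way to check which region boto3 actually resolves (a minimal sketch; it prints None when no default is configured):

import boto3

# The session object exposes the region resolved from env vars and ~/.aws/config
print(boto3.session.Session().region_name)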


wixjitnu4#

I have a working workaround in case someone else needs it, though I'd still like to find a non-clunky answer if one exists. It may not. I decided to just generate the command line, run it from Python, and parse the JSON result; same net result.

import json
import subprocess
from datetime import datetime, timedelta

import boto3

s3 = boto3.resource('s3')
s3_client = boto3.client('s3')

command = ("aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes "
           "--namespace AWS/S3 --start-time {} --end-time {} --statistics Average "
           "--unit Bytes --region {} --dimensions Name=BucketName,Value={} "
           "Name=StorageType,Value=StandardStorage --period 86400 --output json")

for bucket in s3.buckets.all():
    region = s3_client.get_bucket_location(Bucket=bucket.name)
    # Buckets in us-east-1 report a LocationConstraint of None
    region_name = region['LocationConstraint'] or 'us-east-1'

    start_date = datetime.now() - timedelta(days=7)
    start_date_str = str(start_date.date()) + 'T00:00:00Z'
    end_date = datetime.now()
    end_date_str = str(end_date.date()) + 'T00:00:00Z'
    cmd = command.format(start_date_str, end_date_str, region_name, bucket.name)
    res = subprocess.check_output(cmd.split(), stderr=subprocess.STDOUT)
    bucket_stats = json.loads(res.decode('ascii'))
    if len(bucket_stats['Datapoints']) > 0:
        print(bucket_stats['Datapoints'])

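For comparison, here is a minimal sketch of the same loop done natively in boto3, without shelling out. It uses the same per-bucket region lookup (and the same us-east-1 caveat); names are illustrative:

from datetime import datetime, timedelta

import boto3

s3 = boto3.resource('s3')
s3_client = boto3.client('s3')

for bucket in s3.buckets.all():
    # Create a CloudWatch client in the bucket's own region
    loc = s3_client.get_bucket_location(Bucket=bucket.name)['LocationConstraint'] or 'us-east-1'
    cloudwatch = boto3.client('cloudwatch', region_name=loc)

    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        MetricName='BucketSizeBytes',
        Dimensions=[
            {'Name': 'BucketName', 'Value': bucket.name},
            {'Name': 'StorageType', 'Value': 'StandardStorage'},
        ],
        StartTime=datetime.now() - timedelta(days=7),
        EndTime=datetime.now(),
        Period=86400,
        Statistics=['Average'],
        Unit='Bytes',
    )
    if response['Datapoints']:
        print(bucket.name, response['Datapoints'])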


xqk2d5yq5#

I think the error is your cloudwatch = boto3.client('cloudwatch') call; the default region is us-east-1. So you could use something like this:

from datetime import datetime, timedelta
import boto3

def credentials_AWS(account):

    if account == 'account1':
        aws_access_key_id = "key id east"
        aws_secret_access_key = 'east secret_access_key'
        region_name = 'us-east-1'
    elif account == 'account2':
        aws_access_key_id = "key id west"
        aws_secret_access_key = 'west secret_access_key'
        region_name = 'us-west-2'

    return aws_access_key_id, aws_secret_access_key, region_name

def connect_service_aws(service, aws_access_key_id, aws_secret_access_key, region_name):

    aws_connected = boto3.client(service,
                                 aws_access_key_id=aws_access_key_id,
                                 aws_secret_access_key=aws_secret_access_key,
                                 region_name=region_name)

    return aws_connected

def get_metrics(account):

    seconds_in_one_day = 86400  # used for granularity

    aws_access_key_id, aws_secret_access_key, region_name = credentials_AWS(account)

    cloudwatch = connect_service_aws('cloudwatch', aws_access_key_id,
                                     aws_secret_access_key, region_name)

    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        Dimensions=[
            {
                'Name': 'BucketName',
                'Value': 'foo-bar'
            },
            {
                'Name': 'StorageType',
                'Value': 'StandardStorage'
            }
        ],
        MetricName='BucketSizeBytes',
        StartTime=datetime.now() - timedelta(days=7),
        EndTime=datetime.now(),
        Period=seconds_in_one_day,
        Statistics=[
            'Average'
        ],
        Unit='Bytes'
    )

    print(response)

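A hypothetical invocation, assuming real credentials are filled in for 'account1':

get_metrics('account1')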


lfapxunr6#

This script solves the problem. A detailed explanation follows below the script.

from datetime import datetime, timedelta
import boto3

seconds_in_one_day = 86400  # used for granularity

bucket_name = "YOUR_BUCKET_NAME"

# Valid values for StorageType, per the AWS S3 Metrics and Dimensions docs:
# https://docs.aws.amazon.com/AmazonS3/latest/userguide/metrics-dimensions.html
# StandardStorage, IntelligentTieringFAStorage, IntelligentTieringIAStorage, IntelligentTieringAAStorage,
# IntelligentTieringAIAStorage, IntelligentTieringDAAStorage, StandardIAStorage, StandardIASizeOverhead,
# StandardIAObjectOverhead, OneZoneIAStorage, OneZoneIASizeOverhead, ReducedRedundancyStorage,
# GlacierInstantRetrievalSizeOverhead, GlacierInstantRetrievalStorage, GlacierStorage, GlacierStagingStorage,
# GlacierObjectOverhead, GlacierS3ObjectOverhead, DeepArchiveStorage, DeepArchiveObjectOverhead,
# DeepArchiveS3ObjectOverhead, DeepArchiveStagingStorage, and ExpressOneZone

storage_types = [
    "StandardStorage",
    "IntelligentTieringFAStorage",
    "IntelligentTieringIAStorage",
    "IntelligentTieringAAStorage",
    "IntelligentTieringAIAStorage",
    "IntelligentTieringDAAStorage",
    "StandardIAStorage",
    "StandardIASizeOverhead",
    "StandardIAObjectOverhead",
    "OneZoneIAStorage",
    "OneZoneIASizeOverhead",
    "ReducedRedundancyStorage",
    "GlacierInstantRetrievalSizeOverhead",
    "GlacierInstantRetrievalStorage",
    "GlacierStorage",
    "GlacierStagingStorage",
    "GlacierObjectOverhead",
    "GlacierS3ObjectOverhead",
    "DeepArchiveStorage",
    "DeepArchiveObjectOverhead",
    "DeepArchiveS3ObjectOverhead",
    "DeepArchiveStagingStorage",
    "ExpressOneZone"
]

cloudwatch = boto3.client('cloudwatch')

for storage_type in storage_types:
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        Dimensions=[
            {
                'Name': 'BucketName',
                'Value': bucket_name
            },
            {
                'Name': 'StorageType',
                'Value': storage_type
            }
        ],
        MetricName='BucketSizeBytes',
        StartTime=datetime.now() - timedelta(days=7),
        EndTime=datetime.now(),
        Period=seconds_in_one_day,
        Statistics=['Average'],
        Unit='Bytes'
    )
    if "Datapoints" in response and response.get("Datapoints", []):
        print(f"Storage Type: {storage_type}, Metric Response: {response}")
    else:
        print(f"Storage Type: {storage_type}, no data")

According to the boto3 documentation, all of a metric's dimensions must be specified in the request. If a dimension is omitted, boto3 does not raise an exception; it simply returns an empty response.
An empty response is also returned if the bucket you specify is outside the CloudWatch client's region, or if you don't have access to it.
"If the metric contains multiple dimensions, you must include a value for each dimension. [...] If a specific combination of dimensions was not published, you can't retrieve statistics for it."
The valid values for each dimension are listed in the AWS documentation, e.g. AWS S3 Metrics and Dimensions and Amazon DynamoDB Metrics and Dimensions.
If no datapoints come back, it likely means the original poster's bucket holds no objects in the StandardStorage class; that is why the script above iterates over all valid storage classes and computes the metric for each.
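Instead of brute-forcing every storage class, one can also ask CloudWatch which dimension combinations were actually published for a bucket. A minimal sketch using list_metrics (the bucket name is a placeholder):

import boto3

cloudwatch = boto3.client('cloudwatch')

# Discover which StorageType dimensions exist for this bucket's BucketSizeBytes metric
metrics = cloudwatch.list_metrics(
    Namespace='AWS/S3',
    MetricName='BucketSizeBytes',
    Dimensions=[{'Name': 'BucketName', 'Value': 'YOUR_BUCKET_NAME'}]
)
for metric in metrics['Metrics']:
    print(metric['Dimensions'])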


f45qwnt87#

I was able to resolve this. You need to specify the Dimensions parameter in the boto3 call.
