I want to download all CloudWatch logs from AWS for:
- a specific log group
- a specific time range

My plan is simple:
1. Iterate over all log streams in the log group.
2. For each log stream, iterate over the events and build a list of all log events.
import boto3

def overlaps(start1, end1, start2, end2):
    return max(start1, start2) < min(end1, end2)

def load_logs(region, group, start=0, end=2672995600000):
    client = boto3.client('logs', region_name=region)
    paginator = client.get_paginator('describe_log_streams')
    response_iterator = paginator.paginate(logGroupName=group)
    events = []
    for page in response_iterator:
        for log_stream in page["logStreams"]:
            print(f"Stream: {log_stream['logStreamName']}, start: {log_stream['firstEventTimestamp']} end: {log_stream['lastEventTimestamp']}")
            if overlaps(log_stream["firstEventTimestamp"], log_stream["lastEventTimestamp"], start, end):
                print("processing")
                token = None
                while True:
                    event_args = {
                        "logGroupName": group,
                        "logStreamName": log_stream['logStreamName'],
                        "startTime": start,
                        "endTime": end
                    }
                    if token is not None:
                        event_args["nextToken"] = token
                    response = client.get_log_events(**event_args)
                    for event in response["events"]:
                        if start < event["timestamp"] < end:
                            events.append(event)
                    if response["nextBackwardToken"] == token:
                        break
                    else:
                        token = response["nextBackwardToken"]
    print(events)
I pass 0 as start and 2672995600000 (a timestamp in the future) as end. Some events are downloaded, but the events list does not contain all the log events. Am I missing some iteration? I'm particularly unsure about the get_log_events loop.
1 Answer
You can use start_query, which will return the logs from all log streams in the group.