Task timed out after 23.02 seconds when connecting AWS Lambda to Redis

6xfqseft  posted on 2021-06-10 in Redis

In my project I want to connect a Lambda function to a Redis store, but when making the connection I get a task timeout error, even though I have already connected the private subnet through a NAT gateway.
Python code:

import json
import boto3
import math
import redis

# from sklearn.model_selection import train_test_split

# use a distinct name so the client does not shadow the redis module
redis_client = redis.Redis(host='redisconnection.sxxqwc.ng.0001.use1.cache.amazonaws.com', port=6379)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # bucket = event['Records'][0]['s3']['bucket']['name']    # if dynamic allocation
    # key = event['Records'][0]['s3']['object']['key']        # if dynamic searching
    bucket = "aws-trigger1"
    key = "unigram1.csv"

    response = s3.head_object(Bucket=bucket, Key=key)
    fileSize = response['ContentLength']
    fileSize = fileSize / 1048576
    print("FileSize = " + str(fileSize) + " MB")
    # redis_client.rpush(fileSize)
    redis_client.ping()
    redis_client.set('foo', 'bar')

    obj = s3.get_object(Bucket= bucket, Key=key)
    file_content = obj["Body"].read().decode("utf-8")

    #Calculate the chunk size
    chunkSize = ''
    MAPPERNUMBER=2
    MINBLOCKSIZE= 1024
    chunkSize = int(fileSize/MAPPERNUMBER)
    numberMappers = MAPPERNUMBER
    if chunkSize < MINBLOCKSIZE:
        print("chunk size too small (" + str(chunkSize) + " bytes), changing to " + str(MINBLOCKSIZE) + " bytes")
        chunkSize = MINBLOCKSIZE
        numberMappers = int(fileSize/chunkSize)+1
    residualData = fileSize - (MAPPERNUMBER - 1)*chunkSize
    # print("numberMappers--",residualData)

    #Ensure that chunk size is smaller than lambda function memory
    MEMORY= 1536
    memoryLimit = 0.30
    secureMemorySize = int(MEMORY*memoryLimit)
    if chunkSize > secureMemorySize:
        print("chunk size too large (" + str(chunkSize) + " bytes), changing to " + str(secureMemorySize) + " bytes")
        chunkSize = secureMemorySize
        numberMappers = int(fileSize/chunkSize)+1

    # print("Using chunk size of " + str(chunkSize) + " bytes, and " + str(numberMappers) + " nodes")

    #remove 1st row from the data
    file_content=file_content.split('\n', 1)[-1]
    # print("after removing column name")
    # X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.5, randomstate=42)
    train_pct_index = int(0.5 * len(file_content))  

    X_Map1, X_Map2 = file_content[:train_pct_index], file_content[train_pct_index:]
    # print("the size is--------------",X_Map1)
    # print("the size is--------------",X_Map2)

    linelen = file_content.find('\n')
    if linelen < 0:
        print("\\n not found in mapper chunk")
        return
    extraRange = 2*(linelen+20)
    initRange = fileSize + 1
    limitRange = fileSize + extraRange

    # chunkRange = 'bytes=' + str(initRange) + '-' + str(limitRange)
    # print(chunkRange)

    # invoke mappers
    invokeLam = boto3.client("lambda", region_name="us-east-1")
    payload = X_Map1
    payload2 = X_Map2
    print(X_Map1)
    # resp = invokeLam.invoke(FunctionName = "map1", InvocationType="RequestResponse", Payload = json.dumps(payload))
    # resp2 = invokeLam.invoke(FunctionName = "map2", InvocationType="RequestResponse", Payload = json.dumps(payload2))

    return file_content
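As an aside, the chunk-sizing logic above mixes units (fileSize is converted to MB but compared against byte thresholds). A sketch of the same logic kept entirely in bytes, with illustrative names that are not part of the original code:

```python
# Sketch of the mapper chunk-size logic, all quantities in bytes.
# Function name and defaults are hypothetical; 1536 MB * 0.30 mirrors the
# MEMORY / memoryLimit values in the question.

def plan_chunks(file_size, mappers=2, min_block=1024,
                memory_budget=int(1536 * 0.30) * 1048576):
    """Return (chunk_size, number_of_mappers) for a file of file_size bytes."""
    chunk_size = file_size // mappers
    number_mappers = mappers
    if chunk_size < min_block:        # avoid chunks that are too small
        chunk_size = min_block
        number_mappers = file_size // chunk_size + 1
    if chunk_size > memory_budget:    # keep each chunk within the Lambda memory budget
        chunk_size = memory_budget
        number_mappers = file_size // chunk_size + 1
    return chunk_size, number_mappers
```

Keeping one unit throughout avoids the situation where a multi-MB file is judged "too small" because its size in MB is below a byte threshold.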

VPC connection in Lambda


ezykj2lf1#

You may be getting the timeout when trying to retrieve the object from S3. Check whether an Amazon S3 endpoint is configured in your VPC: https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html
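To tell a networking problem apart from a Redis or S3 problem, a plain TCP reachability check from inside the handler helps; if this fails for the ElastiCache endpoint, the subnet or security-group wiring is the likely cause (helper name and timeout below are illustrative, not from the question):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. inside lambda_handler, before creating the Redis client:
# print("redis reachable:", can_reach('redisconnection.sxxqwc.ng.0001.use1.cache.amazonaws.com', 6379))
```

Separately, redis-py accepts `socket_connect_timeout` and `socket_timeout` arguments, so a misconfigured network surfaces as a quick connection error instead of hanging until the Lambda timeout.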
