Flume is installed on Amazon EC2 (Amazon Linux AMI 2018.03.0.20190514 x86_64 HVM gp2). Flume version: 1.9.
Copying to a local sink with Flume works fine, but when I use S3 as the sink I hit an "Invalid hostname in URI" error. I double-checked my access key and secret key; both are correct.
I also tried s3n:// and it does not work either.
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = localhost:9092
a1.sources.r1.kafka.topics = testflume
a1.sources.r1.kafka.consumer.group.id = flumeconsumer
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = s3://AWSACCESSKEY:AWSSECRETKEY@bucket/path
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.filePrefix = event
a1.sinks.k1.hdfs.rollInterval = 10
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Error:
[ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
java.lang.IllegalArgumentException: Invalid hostname in URI s3://AWSACCESSKEY:AWSSECRETKEY@bucket/path/event.1558997927667.tmp
I expect Flume to authenticate to S3 successfully and write the files there.
1 Answer
Can you try s3a:// instead? That said, it is better practice to assign an IAM role to the EC2 instance and grant that role permissions on S3, rather than embedding the AWS access key and secret key in the URI. Once that is set up, you can set the path to
s3a://bucket_name/path/../
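
As a minimal sketch, the sink section of the original agent config might then look like the following. This assumes the EC2 instance has an IAM role with write permissions on the bucket (so no keys appear in the URI), that the hadoop-aws jar and its AWS SDK dependencies are on Flume's classpath, and that `bucket_name` and `path` are placeholders for your actual bucket and prefix:

```properties
# Hypothetical rewrite of the k1 sink from the question, assuming
# credentials come from the EC2 instance's IAM role rather than
# being embedded in the URI.
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = s3a://bucket_name/path
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.filePrefix = event
a1.sinks.k1.hdfs.rollInterval = 10
```

Keeping credentials out of the URI also avoids a common failure mode: secret keys containing characters such as `/` can break URI parsing and produce errors like the "Invalid hostname in URI" one above.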