Storm worker dies after running for 2 hours

3phpmpom · posted 2021-06-21 in Storm

I wrote a topology that reads a topic from Kafka, performs an aggregation, and stores the results in a database. The topology runs for several hours, but then the workers die, and eventually the supervisor dies as well. The problem recurs after a few hours on every run.
I am running Storm 0.9.5 on 3 nodes (1 for Nimbus, 2 for workers).
This is the error I found in one of the worker logs:

2015-08-12T04:10:38.395+0000 b.s.m.n.Client [ERROR] connection attempt 101 to Netty-Client-/10.28.18.213:6700 failed: java.lang.RuntimeException: Returned channel was actually not established
2015-08-12T04:10:38.395+0000 b.s.m.n.Client [INFO] closing Netty Client Netty-Client-/10.28.18.213:6700
2015-08-12T04:10:38.395+0000 b.s.m.n.Client [INFO] waiting up to 600000 ms to send 0 pending messages to Netty-Client-/10.28.18.213:6700
2015-08-12T04:10:38.404+0000 STDIO [ERROR] Aug 12, 2015 4:10:38 AM org.apache.storm.guava.util.concurrent.ExecutionList executeListener
SEVERE: RuntimeException while executing runnable org.apache.storm.guava.util.concurrent.Futures$4@632ef20f with executor org.apache.storm.guava.util.concurrent.MoreExecutors$SameThreadExecutorService@1f15e9a8
java.lang.RuntimeException: Failed to connect to Netty-Client-/10.28.18.213:6700
        at backtype.storm.messaging.netty.Client.connect(Client.java:308)
        at backtype.storm.messaging.netty.Client.access$1100(Client.java:78)
        at backtype.storm.messaging.netty.Client$2.reconnectAgain(Client.java:297)
        at backtype.storm.messaging.netty.Client$2.onSuccess(Client.java:283)
        at backtype.storm.messaging.netty.Client$2.onSuccess(Client.java:275)
        at org.apache.storm.guava.util.concurrent.Futures$4.run(Futures.java:1181)
        at org.apache.storm.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
        at org.apache.storm.guava.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
        at org.apache.storm.guava.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
        at org.apache.storm.guava.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:91)
        at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:380)
        at java.util.concurrent.FutureTask.set(FutureTask.java:229)
        at java.util.concurrent.FutureTask.run(FutureTask.java:270)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Giving up to connect to Netty-Client-/10.28.18.213:6700 after 102 failed attempts
        at backtype.storm.messaging.netty.Client.connect(Client.java:303)
        ... 19 more

This is my configuration on each worker node:

storm.zookeeper.servers:
- "10.28.19.230"
- "10.28.19.224"
- "10.28.19.223"
storm.zookeeper.port: 2181
nimbus.host: "10.28.18.211"
storm.local.dir: "/mnt/storm/storm-data"
storm.local.hostname: "10.28.18.213"
storm.messaging.transport: backtype.storm.messaging.netty.Context
storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880
storm.messaging.netty.max_retries: 300
storm.messaging.netty.max_wait_ms: 4000
storm.messaging.netty.min_wait_ms: 100
supervisor.slots.ports:
- 6700

supervisor.childopts: -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=12346

# worker.childopts: " -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=1%ID%"

# supervisor.childopts: " -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=12346"

worker.childopts: -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=2%ID% -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx10240m -Xms10240m -XX:MaxNewSize=6144m
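With the settings above, the Netty client will retry for quite a while before the "Giving up to connect ... after 102 failed attempts" error appears. A back-of-envelope sketch of the retry window, assuming an exponential backoff that starts at storm.messaging.netty.min_wait_ms, doubles per attempt, and is capped at storm.messaging.netty.max_wait_ms (an assumption about Storm 0.9.x behavior, not verified against its source):

```java
public class RetryWindow {
    // Sums the backoff delays: start at minWaitMs, double each attempt,
    // never exceed maxWaitMs, for maxRetries attempts in total.
    static long retryWindowMs(int maxRetries, long minWaitMs, long maxWaitMs) {
        long total = 0, wait = minWaitMs;
        for (int i = 0; i < maxRetries; i++) {
            total += wait;
            wait = Math.min(wait * 2, maxWaitMs);
        }
        return total;
    }

    public static void main(String[] args) {
        // Values from the storm.yaml above: 300 retries, 100 ms .. 4000 ms
        System.out.println(retryWindowMs(300, 100, 4000) + " ms"); // about 20 minutes
    }
}
```

So a dead peer keeps its neighbors retrying for roughly 20 minutes before the RuntimeException in the log is thrown, which is why the failure cascades slowly across workers rather than all at once.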
nwlls2ji · answer 1

I think I found the root cause in this thread: https://mail-archives.apache.org/mod_mbox/storm-user/201402.mbox/%3c20140214170209.gc55319@animetrics.com%3e
The Netty client errors are only a symptom. The root cause was that GC logging was enabled for the workers without an output file being specified. As a result, the GC output went to stdout and was not redirected to logback. Eventually the buffer filled up, the JVM would hang (thereby stopping heartbeats) and be killed. How long a worker survived depended on memory pressure and the allocated heap size (obviously, in hindsight: more GC means more GC logging, which fills the buffer faster).
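If that diagnosis applies here, the usual fix is to point the workers' GC output at a file instead of stdout via -Xloggc. A minimal sketch of the adjusted storm.yaml line, assuming a writable log directory under storm.local.dir (the path and file name are illustrative, not from the original thread; %ID% gives each worker slot its own file):

```yaml
worker.childopts: "-verbose:gc -Xloggc:/mnt/storm/storm-data/logs/gc-worker-%ID%.log -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx10240m -Xms10240m -XX:MaxNewSize=6144m"
```

The same applies to supervisor.childopts if -verbose:gc is kept there: either add an -Xloggc target or drop the GC flags.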
Hope this helps.
