otter has an error, retrying. caused by java.nio.channels.ClosedByInterruptException: null

uqdfh47h · posted 3 months ago in Java

WARN c.a.otter.canal.parse.inbound.mysql.MysqlEventParser - prepare to find start position just last position
{"identity":{"slaveId":-1,"sourceAddress":{"address":"xxxxx","port":xxxxxx}},"postion":{"included":false,"journalName":"mysql-bin.000004","position":75784970,"serverId":212,"timestamp":1513663476000}}
2017-12-19 14:08:33.488 [destination = CANAL , address = /xxxxx:xxxxxx, EventParser] ERROR c.a.otter.canal.parse.inbound.mysql.MysqlEventParser - dump address xxxxx:xxxxxx, has an error, retrying. caused by
java.nio.channels.ClosedByInterruptException: null
at com.alibaba.otter.canal.parse.driver.mysql.socket.SocketChannel.read(SocketChannel.java:49) ~[canal.parse.driver-1.0.25.jar:na]
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.DirectLogFetcher.fetch0(DirectLogFetcher.java:151) ~[canal.parse-1.0.25.jar:na]
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.DirectLogFetcher.fetch(DirectLogFetcher.java:69) ~[canal.parse-1.0.25.jar:na]
at com.alibaba.otter.canal.parse.inbound.mysql.MysqlConnection.dump(MysqlConnection.java:137) ~[canal.parse-1.0.25.jar:na]
at com.alibaba.otter.canal.parse.inbound.AbstractEventParser$3.run(AbstractEventParser.java:220) ~[canal.parse-1.0.25.jar:na]

otter is deployed for bidirectional sync. One node is fine and logs no errors, but the other node (the one that does not support DDL) logs the error above every 2 minutes. I checked the binlog position: the parser has reached the last position, and no new events are being written to the binlog. So why does the other node not report this error? Worse, after a while the channel died, and starting the channel through the manager fails!
Looking at the manager log, it reports the following error:

at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_131]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_131]
at com.alibaba.otter.shared.communication.core.impl.DefaultCommunicationClientImpl.call(DefaultCommunicationClientImpl.java:152) ~[shared.communication-4.2.16-SNAPSHOT.jar:na]
at com.alibaba.otter.manager.biz.remote.impl.ConfigRemoteServiceImpl.notifyChannel(ConfigRemoteServiceImpl.java:119) ~[manager.biz-4.2.16-SNAPSHOT.jar:na]
at com.alibaba.otter.manager.biz.remote.impl.ConfigRemoteServiceImpl$$FastClassByCGLIB$$3f77feba.invoke(<generated>) [cglib-nodep-2.2.jar:na]
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:191) [cglib-nodep-2.2.jar:na]
at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:689) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE]
at org.springframework.aop.framework.adapter.ThrowsAdviceInterceptor.invoke(ThrowsAdviceInterceptor.java:124) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE]
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:622) [spring-aop-3.1.2.RELEASE.jar:3.1.2.RELEASE]
at com.alibaba.otter.manager.biz.remote.impl.ConfigRemoteServiceImpl$$EnhancerByCGLIB$$d8e5099b.notifyChannel(<generated>) [cglib-nodep-2.2.jar:na]
at com.alibaba.otter.manager.biz.config.channel.impl.ChannelServiceImpl$3.doInTransactionWithoutResult(ChannelServiceImpl.java:439) [manager.biz-4.2.16-SNAPSHOT.jar:na]
at org.springframework.transaction.support.TransactionCallbackWithoutResult.doInTransaction(TransactionCallbackWithoutResult.java:33) [spring-tx-3.1.2.RELEASE.jar:3.1.2.RELEASE]
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:130) [spring-tx-3.1.2.RELEASE.jar:3.1.2.RELEASE]
at com.alibaba.otter.manager.biz.config.channel.impl.ChannelServiceImpl.switchChannelStatus(ChannelServiceImpl.java:378) [manager.biz-4.2.16-SNAPSHOT.jar:na]
at com.alibaba.otter.manager.biz.config.channel.impl.ChannelServiceImpl.startChannel(ChannelServiceImpl.java:462) [manager.biz-4.2.16-SNAPSHOT.jar:na]
at com.alibaba.otter.manager.biz.monitor.impl.RestartAlarmRecovery.processRecovery(RestartAlarmRecovery.java:103) [manager.biz-4.2.16-SNAPSHOT.jar:na]
at com.alibaba.otter.manager.biz.monitor.impl.RestartAlarmRecovery.access$100(RestartAlarmRecovery.java:44) [manager.biz-4.2.16-SNAPSHOT.jar:na]
at com.alibaba.otter.manager.biz.monitor.impl.RestartAlarmRecovery$1.run(RestartAlarmRecovery.java:137) [manager.biz-4.2.16-SNAPSHOT.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]

Caused by: com.alibaba.otter.shared.communication.core.exception.CommunicationException: call[10.11.4.185:20880] , Event[NotifyChannelEvent[channel=Channel[id=1,name=CHANNEL,status=START,description=,pipelines=[Pipeline[id=3,channelId=1,name=PIPLINE,description=,selectNodes=[Node[id=2,name=NODE,ip=10.11.4.185,port=30880,status=START,description=,parameters=NodeParameter[mbeanPort=30882,downloadPort=30881,zkCluster=AutoKeeperCluster[id=2,clusterName=zookeeper,serverList=[10.11.4.185:2181],description=,gmtCreate=2017-11-21 15:34:09,gmtModified=2017-11-21 ...

wf82jlnq 1#

The ClosedByInterruptException most likely means a canal processing exception occurred somewhere else, which interrupted the parse thread and caused the binlog dump connection to be closed.
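
For background, the JDK throws ClosedByInterruptException when a thread blocked in an I/O operation on an interruptible NIO channel is interrupted: the channel is closed as a side effect and the blocked read fails. Below is a minimal, self-contained sketch of that mechanism using only the standard JDK (this is not otter/canal code; the class and variable names are made up for illustration):

import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

// Reproduces the mechanism behind the dump error: interrupting a thread
// that is blocked in a read on an interruptible NIO channel closes the
// channel and raises ClosedByInterruptException in the reader.
public class InterruptedReadDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open(); // nothing is ever written, so read() blocks
        Thread reader = new Thread(() -> {
            try {
                pipe.source().read(ByteBuffer.allocate(16)); // blocks here
            } catch (ClosedByInterruptException e) {
                // the same exception the canal dump thread reports above
                System.out.println("read interrupted, channel closed: " + e);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reader.start();
        Thread.sleep(500);   // let the reader park inside read()
        reader.interrupt();  // closes the pipe and aborts the blocked read
        reader.join();
    }
}

So the exception itself is only a symptom: something interrupted canal's parse/dump thread (for example the parser restarting after another failure), and the interrupt tore down the binlog dump socket.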

2o7dmzc5 2#

2017-12-21 13:54:59.865 [Thread-5] INFO com.alibaba.otter.node.deployer.OtterLauncher - INFO ## stop the otter server
2017-12-21 13:54:59.890 [Thread-5] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 13:55:01.929 [Thread-5] INFO com.alibaba.otter.node.deployer.OtterLauncher - INFO ## otter server is down.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
2017-12-21 13:55:29.177 [main] INFO com.alibaba.otter.node.deployer.OtterLauncher - INFO ## the otter server is running now ......
2017-12-21 16:39:45.706 [DubboServerHandler-127.0.0.1:20880-thread-19] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 16:39:47.142 [DubboServerHandler-127.0.0.1:20880-thread-23] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 19:51:48.990 [DubboServerHandler-127.0.0.1:20880-thread-27] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 20:59:50.078 [DubboServerHandler-127.0.0.1:20880-thread-31] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 21:47:50.858 [DubboServerHandler-127.0.0.1:20880-thread-35] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 22:11:51.291 [DubboServerHandler-127.0.0.1:20880-thread-39] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 22:11:52.401 [DubboServerHandler-127.0.0.1:20880-thread-43] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 22:13:51.030 [DubboServerHandler-127.0.0.1:20880-thread-47] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 22:15:51.079 [DubboServerHandler-127.0.0.1:20880-thread-50] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 22:27:51.562 [DubboServerHandler-127.0.0.1:20880-thread-50] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 22:27:52.675 [DubboServerHandler-127.0.0.1:20880-thread-50] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 22:29:51.307 [DubboServerHandler-127.0.0.1:20880-thread-50] WARN c.a.o.shared.arbitrate.impl.setl.monitor.MainstemMonitor - mainstem is not run any in node
2017-12-21 23:25:12.523 [New I/O server worker #1 -4] WARN c.a.d.common.threadpool.support.AbortPolicyWithReport - [DUBBO] Thread pool is EXHAUSTED! Thread Name: DubboServerHandler-127.0.0.1:20880, Pool Size: 50 (active: 50, core: 50, max: 50, largest: 50), Task: 108 (completed: 58), Executor status:(isShutdown:false, isTerminated:false, isTerminating:false), in dubbo://127.0.0.1:20880!, dubbo version: 2.5.3, current host:
It turns out this is why the channel died: the dubbo server thread pool was exhausted, and from the code the pool size is hard-coded to 50.
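
For reference, dubbo's default fixed provider pool (as in dubbo 2.5.x) is essentially a plain ThreadPoolExecutor with core = max threads, no task queue, and an AbortPolicyWithReport rejection handler, so once all workers are busy the next invocation is rejected and the EXHAUSTED warning above is logged. A rough sketch of the failure mode using stock JDK classes (the pool size mirrors the log; this is not otter's or dubbo's actual code):

import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A fixed pool of 50 threads with no task queue: once 50 long-running
// calls are stuck (active: 50 in the log), the 51st submission is
// rejected -- the analogue of dubbo's "Thread pool is EXHAUSTED!".
public class PoolExhaustionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                50, 50, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(),              // no buffering between callers and workers
                new ThreadPoolExecutor.AbortPolicy()); // reject instead of queueing or blocking
        for (int i = 0; i < 51; i++) {
            try {
                pool.execute(() -> {
                    // simulate an RPC handler that never returns
                    try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                System.out.println("pool exhausted, task rejected");
            }
        }
        pool.shutdownNow();
    }
}

The figures in the log fit this picture: Task: 108 with completed: 58 and active: 50 means exactly 50 invocations were stuck holding every worker, so each new call from the manager (including the notifyChannel call when starting the channel) was rejected.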

toe95027 3#

@hittanic Did you ever resolve the pool exhaustion problem?

ncecgwcz 4#

Yeah, I'm running into the pool exhaustion problem as well.
