Seata startup log issues

cygmwpex posted 22 days ago in Other
  • I have searched the issues of this repository and believe that this is not a duplicate.

Ⅰ. Issue Description

I use Consul as the registry center and Apollo as the configuration center.

Looking at the startup log, I need help confirming the following issues:

  1. 10:48:07.454 ERROR --- [tyServerNIOWorker_1_13_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539976035 - discarded

How can I make this ERROR disappear? The Seata server has only just started, and no Seata client has connected yet.

  2. 10:48:04.352 INFO --- [ttyServerNIOWorker_1_8_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.146:58874 to server channel inactive.

Messages saying "server channel inactive." keep being printed to the console, and they come from different IPs and ports such as 10.132.102.156:33748 and 10.132.102.146:58868. None of these ports are configured in my application.yml. What should I change to stop these messages from being printed?
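For reference, those remote ports (33748, 58868, 58874, ...) look like ephemeral source ports chosen by whatever is opening the connections, so they would never appear in application.yml. To see which hosts the connections actually come from, something like the following could be run on the server host; a sketch assuming the ss utility is available and the default transaction port 8091 (adjust to the actual listen port):

# Show established TCP connections to the Seata transaction port; the peer
# address column reveals which host keeps connecting. The probe connections are
# short-lived, so it may need to be run a few times.
ss -tn state established '( sport = :8091 )'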

The log and configuration files are attached.

Ⅱ. Describe what happened

If there is an exception, please attach the exception trace:

log&conf.zip

Ⅲ. Describe what you expected to happen

Ⅳ. How to reproduce it (as minimally and precisely as possible)

  1. xxx
  2. xxx
  3. xxx

Ⅴ. Anything else we need to know?

Ⅵ. Environment:

  • JDK version : 1.8
  • Seata version: 1.6.1
  • OS :
  • Others:

pcrecxhr1#

Does a health check exist on port 8091?


wi3ka0sx2#

Does a health check exist on port 8091?

Because port 9091 is occupied by another application, I ran the following command to start the Seata server.
./seata-server.sh -h 10.132.102.141 -p 9092 -m db

I ran the netstat command to check port 8091. The port seems fine; it is occupied by the Seata application.
netstat -tunlp | grep 8091
tcp6 0 0 :::8091 :::* LISTEN 123042/java


qlfbtfca3#

Does a health check exist on port 8091?

In application.yml, I set the port to 8091 rather than the default value 7091. Because port 8091 is occupied by another application, I executed the following command to restart the Seata server.

./seata-server.sh -h 10.132.102.141 -p 9092 -m db


4si2a6ki4#

@slievrly, the following is the config info stored in Apollo. Is there any additional information you want to know?

=====================================================================
#Transport configuration, for client and server
transport.type = TCP
transport.server = NIO
transport.heartbeat = true
transport.enableTmClientBatchSendRequest = false
transport.enableRmClientBatchSendRequest = true
transport.enableTcServerBatchSendResponse = false
transport.rpcRmRequestTimeout = 30000
transport.rpcTmRequestTimeout = 30000
transport.rpcTcRequestTimeout = 30000
transport.threadFactory.bossThreadPrefix = NettyBoss
transport.threadFactory.workerThreadPrefix = NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix = NettyServerBizHandler
transport.threadFactory.shareBossWorker = false
transport.threadFactory.clientSelectorThreadPrefix = NettyClientSelector
transport.threadFactory.clientSelectorThreadSize = 1
transport.threadFactory.clientWorkerThreadPrefix = NettyClientWorkerThread
transport.threadFactory.bossThreadSize = 1
transport.threadFactory.workerThreadSize = default
transport.shutdown.wait = 3
transport.serialization = seata
transport.compressor = none

#Transaction rule configuration, only for the client

server.undo.logSaveDays = 7
server.undo.logDeletePeriod = 86400000

#For TCC transaction mode
tcc.fence.logTableName = tcc_fence_log
tcc.fence.cleanPeriod = 1h

#Log rule configuration, for client and server
log.exceptionRate = 100

#Transaction storage configuration, only for the server. The file, db, and redis configuration values are optional.
store.mode = db
store.lock.mode = db
store.session.mode = db
#Used for password encryption
#store.publicKey=

#These configurations are required if the store mode is db . If store.mode,store.lock.mode,store.session.mode are not equal to db , you can remove the configuration block.
store.db.datasource = druid
store.db.dbType = postgresql
store.db.driverClassName = org.postgresql.Driver
store.db.url = jdbc:postgresql://10.132.102.187:5432/paas_base_seata_db?currentSchema=public&stringtype=unspecified
store.db.user = seata_user
store.db.password = seata@123
store.db.minConn = 5
store.db.maxConn = 30
store.db.globalTable = global_table
store.db.branchTable = branch_table
store.db.distributedLockTable = distributed_lock
store.db.queryLimit = 100
store.db.lockTable = lock_table
store.db.maxWait = 5000

#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod = 1000
server.recovery.asynCommittingRetryPeriod = 1000
server.recovery.rollbackingRetryPeriod = 1000
server.recovery.timeoutRetryPeriod = 1000
server.maxCommitRetryTimeout = -1
server.maxRollbackRetryTimeout = -1
server.rollbackRetryTimeoutUnlockEnable = false
server.distributedLockExpireTime = 10000
server.xaerNotaRetryTimeout = 60000
server.session.branchAsyncQueueSize = 5000
server.session.enableBranchAsyncRemove = false
server.enableParallelRequestHandle = false

#Metrics configuration, only for the server
metrics.enabled = false
metrics.registryType = compact
metrics.exporterList = prometheus
metrics.exporterPrometheusPort = 9898

service.vgroupMapping.order-service-tx-group = default
service.vgroupMapping.account-service-tx-group = default
service.vgroupMapping.business-service-tx-group = default
service.vgroupMapping.storage-service-tx-group = default


yhxst69z5#

It appears that port 9092 is being accessed by something other than the seata-spring-boot-starter (seata-all) SDK.


dhxwm5r46#

It appears that port 9092 is being accessed by something other than the seata-spring-boot-starter (seata-all) SDK.

But I have just started the Seata server and the error already appears in the log, while the Seata clients have not been started yet.

Actually, I deploy the Seata server and the Seata clients on the same VM.

I will try to kill all the Seata clients, restart the Seata server, and check whether the ERROR log continues.


pbgvytdp7#

It appears that port 9092 is being accessed by something other than the seata-spring-boot-starter (seata-all) SDK.

@slievrly, I deploy the Seata server and client on the same VM, and I am sure nobody else knows about port 9092 of the Seata server. I killed all the Java applications, restarted the Seata server, and reproduced the error log. Could you kindly take a look at it? Thanks.

  1. Kill all the Java applications and run ps -ef to check:
    [root@jt-ecif-mkr-051 bin]# ps -ef |grep java
    root 49953 49271 0 10:52 pts/0 00:00:00 grep --color=auto java
  2. Run the following command to restart the Seata server:
    ./seata-server.sh -h 10.132.102.141 -p 9092 -m db
  3. Check that only the Seata server is running:

[root@jt-ecif-mkr-051 bin]# ps -ef | grep java
root 50311 1 99 10:54 pts/0 00:00:24 /usr/java/jdk1.8.0-121/bin/java -server -Dloader.path=/app/seata-apollo/seata/lib
-Xmx2048m -Xms2048m -Xmn1024m -Xss512k -XX:SurvivorRatio=10 -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m
-XX:MaxDirectMemorySize=1024m -XX:-OmitStackTraceInFastThrow -XX:-UseAdaptiveSizePolicy -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/app/seata-apollo/seata/logs/java_heapdump.hprof -XX:+DisableExplicitGC
-Xloggc:/app/seata-apollo/seata/logs/seata_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -XX:+UseG1GC
-Dio.netty.leakDetectionLevel=advanced -Dapp.name=seata-server -Dapp.pid=50256 -Dapp.home=/app/seata-apollo/seata
-Dbasedir=/app/seata-apollo/seata -Dspring.config.location=/app/seata-apollo/seata/conf/application.yml
-Dlogging.config=/app/seata-apollo/seata/conf/logback-spring.xml -jar /app/seata-apollo/seata/target/seata-server.jar
-h 10.132.102.141 -p 9092 -m db
root 50401 49271 0 10:54 pts/0 00:00:00 grep --color=auto java
[root@jt-ecif-mkr-051 bin]#

  4. Run vi start.out to check the log:

10:54:25.557 INFO --- [main] io.seata.server.ServerRunner: seata server started in 964 millSeconds
10:54:30.325 INFO --- [ttyServerNIOWorker_1_1_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.154:56904 to server channel inactive.
10:54:30.336 INFO --- [ttyServerNIOWorker_1_1_16] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0xd0606982, L:0.0.0.0/0.0.0.0:9092 ! R:/10.132.102.154:56904]
10:54:37.461 ERROR --- [ttyServerNIOWorker_1_2_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539976035 - discarded
10:54:38.942 ERROR --- [ttyServerNIOWorker_1_3_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539979890 - discarded
10:54:39.355 ERROR --- [ttyServerNIOWorker_1_4_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539976035 - discarded
10:54:40.266 INFO --- [ttyServerNIOWorker_1_5_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.154:57244 to server channel inactive.
10:54:40.266 INFO --- [ttyServerNIOWorker_1_5_16] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0xae644ec8, L:0.0.0.0/0.0.0.0:9092 ! R:/10.132.102.154:57244]
10:54:43.512 ERROR --- [ttyServerNIOWorker_1_6_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539979890 - discarded
10:54:43.755 ERROR --- [ttyServerNIOWorker_1_7_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539976035 - discarded
10:54:44.337 ERROR --- [ttyServerNIOWorker_1_8_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539979890 - discarded
10:54:47.451 INFO --- [ttyServerNIOWorker_1_2_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.156:33932 to server channel inactive.
10:54:47.451 INFO --- [ttyServerNIOWorker_1_2_16] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0x8287f200, L:0.0.0.0/0.0.0.0:9092 ! R:/10.132.102.156:33932]
10:54:48.939 INFO --- [ttyServerNIOWorker_1_3_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.146:38818 to server channel inactive.
10:54:48.939 INFO --- [ttyServerNIOWorker_1_3_16] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0x5f7ada79, L:0.0.0.0/0.0.0.0:9092 ! R:/10.132.102.146:38818]
10:54:49.353 INFO --- [ttyServerNIOWorker_1_4_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.146:38828 to server channel inactive.
10:54:49.353 INFO --- [ttyServerNIOWorker_1_4_16] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0x6dc3542d, L:0.0.0.0/0.0.0.0:9092 ! R:/10.132.102.146:38828]
10:54:50.266 INFO --- [ttyServerNIOWorker_1_9_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.154:57466 to server channel inactive.
10:54:50.266 INFO --- [ttyServerNIOWorker_1_9_16] i.s.c.r.n.AbstractNettyRemotingServer : remove unused channel:[id: 0xbe9a63b0, L:0.0.0.0/0.0.0.0:9092 ! R:/10.132.102.154:57466]
10:54:52.456 ERROR --- [tyServerNIOWorker_1_10_16] i.s.core.rpc.netty.v1.ProtocolV1Decoder : Decode frame error, cause: Adjusted frame length exceeds 8388608: 539976035 - discarded
10:54:53.511 INFO --- [ttyServerNIOWorker_1_6_16] i.s.c.r.n.AbstractNettyRemotingServer : 10.132.102.156:34298 to server channel inactive.
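A quick cross-check is to point a plain HTTP client at the transaction port; a sketch assuming curl is available on the VM (the path is an arbitrary example, since the port does not serve HTTP at all):

# A single HTTP GET against the transaction port should produce one
# "Decode frame error ... discarded" ERROR and, when curl times out and closes
# the connection, one "... to server channel inactive." INFO.
curl -m 2 http://10.132.102.141:9092/actuator/health

If those two lines appear with the same pattern as above, the noise is coming from HTTP-speaking traffic (for example a health check or metrics scraper) rather than from a Seata client.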


zqry0prt8#

Check if you are accessing port 9092 using something other than the Seata SDK, such as telnet, http, etc.

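The two numbers in the ERROR point the same way: they are just the four bytes the decoder found where it expected its length field, printed as a 32-bit integer. Assuming the v1 decoder reads a 4-byte length at offset 3 of the stream (the 8388608 limit in the message is its default maximum frame size), an HTTP request line such as "GET /ac..." places the bytes ' ', '/', 'a', 'c' at exactly those offsets:

# Print the reported "frame lengths" as hex; both decode to printable ASCII.
printf '%08x\n' 539976035   # 202f6163 -> 0x20 0x2f 0x61 0x63 -> " /ac" (e.g. "GET /ac...")
printf '%08x\n' 539979890   # 202f7072 -> 0x20 0x2f 0x70 0x72 -> " /pr" (e.g. "GET /pr...")

So whatever is hitting 9092 appears to be sending plain HTTP GET requests, which fits a health check or probe rather than a Seata client.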


knsnq2tg9#

Check if you are accessing port 9092 using something other than the Seata SDK, such as telnet, http, etc.


Probably not, because the whole VM was just rebooted.
I will try to capture some packets later to confirm.
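For the capture, something like the following should be enough to show the offending request in clear text; a minimal sketch assuming tcpdump is installed on the VM and the server is still listening on 9092:

# Dump the payload of traffic to the transaction port as ASCII; an HTTP probe
# would show up as a readable request line such as "GET /... HTTP/1.1".
tcpdump -i any -nn -A -c 100 'tcp port 9092'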
