Blocked compactions on the opscenter.rollup_state table

6ie5vjzr · posted 2021-06-14 in Cassandra

We just added a new second DC with 7 nodes (5 JBOD SSDs each) to our Cassandra cluster. After replicating data to the new DC, we are getting periodic compactions of the opscenter.rollup_state table that hang. When this happens, the node is seen as DOWN by the other nodes but stays alive itself; nodetool drain also blocks on the node, and only restarting the node helps. The logs below were present after the restart. Both nodes below were blocked in this state.

DEBUG [CompactionExecutor:14] 2019-09-03 17:03:44,456  CompactionTask.java:154 - Compacting (a43c8d71-ce53-11e9-972d-59ed5390f0df) [/cass-db1/data/OpsCenter/rollup_state-43e776914d2911e79ab41dbbeab1d831/mc-581-big-Data.db:level=0, /cass-db1/data/OpsCenter/rollup_state-43e776914d2911e79ab41dbbeab1d831/mc-579-big-Data.db:level=0, ]

The other node:

DEBUG [CompactionExecutor:14] 2019-09-03 20:38:22,272  CompactionTask.java:154 - Compacting (a00354f0-ce71-11e9-91a4-3731a2137ea5) [/cass-db2/data/OpsCenter/rollup_state-43e776914d2911e79ab41dbbeab1d831/mc-610-big-Data.db:level=0, /cass-db2/data/OpsCenter/rollup_state-43e776914d2911e79ab41dbbeab1d831/mc-606-big-Data.db:level=0, ]
WARN  [CompactionExecutor:14] 2019-09-03 20:38:22,273  LeveledCompactionStrategy.java:273 - Live sstable /cass-db2/data/OpsCenter/rollup_state-43e776914d2911e79ab41dbbeab1d831/mc-606-big-Data.db from level 0 is not on corresponding level in the leveled manifest. This is not a problem per se, but may indicate an orphaned sstable due to a failed compaction not cleaned up properly.
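
The WARN above comes from LeveledCompactionStrategy bookkeeping: an SSTable that exists in level 0 on disk is not tracked at that level in the leveled manifest, which, as the message itself notes, may indicate an orphaned SSTable left behind by a failed compaction. As a minimal sketch of how one might inspect the LCS level layout for this table (standard nodetool commands on Cassandra 3.x; keyspace and table names are taken from the log paths above):

    # Per-table statistics; for LCS tables this includes an
    # "SSTables in each level" line showing the level distribution
    # (older releases expose the same data via `nodetool cfstats`).
    nodetool tablestats OpsCenter.rollup_state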

What is the way to resolve this issue?
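
For reference, a hedged sketch of first-response diagnostics for a node stuck this way (standard nodetool commands; general troubleshooting steps, not a confirmed resolution for this specific issue):

    # List running compactions with their IDs and progress; a hung
    # compaction keeps the same ID with unchanging completed bytes
    # across repeated calls.
    nodetool compactionstats

    # Thread-pool view; a CompactionExecutor with growing pending
    # tasks corroborates that compaction is the bottleneck.
    nodetool tpstats

    # Abort all currently running compactions on the node without a
    # full restart; aborted compactions are rescheduled later.
    nodetool stop COMPACTION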
