Cassandra Medusa cannot restore the cluster

nom7f22z · posted 2021-06-13 in Cassandra

I am trying to restore a cluster with medusa, but I am running into the following exception:

[cassandra@cass-01 centos]$
[cassandra@cass-01 centos]$ medusa restore-cluster --backup-name=2020-06-21_1514 --seed-target cass-01.ks.aws.pfd.com --keep-auth
[2020-06-21 15:47:51,164] INFO: Monitoring provider is noop
[2020-06-21 15:47:51,165] INFO: system_auth keyspace will be overwritten with the backup on target nodes
[2020-06-21 15:47:55,205] INFO: Ensuring the backup is found and is complete
[2020-06-21 15:47:55,280] INFO: Restore will happen "In-Place", no new hardware is involved
[2020-06-21 15:47:55,334] INFO: Starting cluster restore...
[2020-06-21 15:47:55,334] INFO: Working directory for this execution: /tmp/medusa-job-b681204d-58c1-4a51-9e92-b35dfca792a4
[2020-06-21 15:47:55,334] INFO: About to restore on cass-01.ks.aws.pfd.com using {'source': ['cass-01.ks.aws.pfd.com'], 'seed': False} as backup source
[2020-06-21 15:47:55,334] INFO: About to restore on cass-04.ks.aws.pfd.com using {'source': ['cass-04.ks.aws.pfd.com'], 'seed': False} as backup source
[2020-06-21 15:47:55,334] INFO: About to restore on cass-02.ks.aws.pfd.com using {'source': ['cass-02.ks.aws.pfd.com'], 'seed': False} as backup source
[2020-06-21 15:47:55,334] INFO: About to restore on cass-05.ks.aws.pfd.com using {'source': ['cass-05.ks.aws.pfd.com'], 'seed': False} as backup source
[2020-06-21 15:47:55,334] INFO: About to restore on cass-03.ks.aws.pfd.com using {'source': ['cass-03.ks.aws.pfd.com'], 'seed': False} as backup source
[2020-06-21 15:47:55,334] INFO: This will delete all data on the target nodes and replace it with backup 2020-06-21_1514.
Are you sure you want to proceed? (Y/n)Y
[2020-06-21 15:47:59,900] INFO: target seeds : []
[2020-06-21 15:47:59,900] INFO: Stopping Cassandra on all nodes currently up
[2020-06-21 15:47:59,900] INFO: Executing "sudo service cassandra stop" on all nodes.
[2020-06-21 15:48:02,986] INFO: Job executing "sudo service cassandra stop" ran and finished Successfully on all nodes.
[2020-06-21 15:48:02,986] INFO: Restoring data on cass-01.ks.aws.pfd.com...
[2020-06-21 15:48:02,986] INFO: Restoring data on cass-04.ks.aws.pfd.com...
[2020-06-21 15:48:02,987] INFO: Restoring data on cass-02.ks.aws.pfd.com...
[2020-06-21 15:48:02,987] INFO: Restoring data on cass-05.ks.aws.pfd.com...
[2020-06-21 15:48:02,987] INFO: Restoring data on cass-03.ks.aws.pfd.com...
[2020-06-21 15:48:02,987] INFO: Executing "nohup sh -c "mkdir /tmp/medusa-job-b681204d-58c1-4a51-9e92-b35dfca792a4; cd /tmp/medusa-job-b681204d-58c1-4a51-9e92-b35dfca792a4 && medusa-wrapper sudo medusa --fqdn=%s -vvv restore-node --in-place  %s --no-verify --backup-name 2020-06-21_1514 --temp-dir /tmp   "" on all nodes.
[2020-06-21 15:52:34,616] INFO: Job executing "nohup sh -c "mkdir /tmp/medusa-job-b681204d-58c1-4a51-9e92-b35dfca792a4; cd /tmp/medusa-job-b681204d-58c1-4a51-9e92-b35dfca792a4 && medusa-wrapper sudo medusa --fqdn=%s -vvv restore-node --in-place  %s --no-verify --backup-name 2020-06-21_1514 --temp-dir /tmp   "" ran and finished with errors on following nodes: ['cass-01.ks.aws.pfd.com', 'cass-02.ks.aws.pfd.com', 'cass-03.ks.aws.pfd.com', 'cass-04.ks.aws.pfd.com', 'cass-05.ks.aws.pfd.com']
[2020-06-21 15:52:34,616] INFO: [cass-01.ks.aws.pfd.com]  nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,616] INFO: cass-01.ks.aws.pfd.com-stdout: nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,616] INFO: [cass-04.ks.aws.pfd.com]  nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,616] INFO: cass-04.ks.aws.pfd.com-stdout: nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,616] INFO: [cass-02.ks.aws.pfd.com]  nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,616] INFO: cass-02.ks.aws.pfd.com-stdout: nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,617] INFO: [cass-05.ks.aws.pfd.com]  nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,617] INFO: cass-05.ks.aws.pfd.com-stdout: nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,617] INFO: [cass-03.ks.aws.pfd.com]  nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,617] INFO: cass-03.ks.aws.pfd.com-stdout: nohup: ignoring input and appending output to 'nohup.out'
[2020-06-21 15:52:34,618] ERROR: Some nodes failed to restore. Exiting
[2020-06-21 15:52:34,618] ERROR: This error happened during the cluster restore: Some nodes failed to restore. Exiting
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/medusa/restore_cluster.py", line 72, in orchestrate
    restore.execute()
  File "/usr/local/lib/python3.6/site-packages/medusa/restore_cluster.py", line 146, in execute
    self._restore_data()
  File "/usr/local/lib/python3.6/site-packages/medusa/restore_cluster.py", line 350, in _restore_data
    raise Exception(err_msg)
Exception: Some nodes failed to restore. Exiting
[cassandra@cass-01 centos]$
[cassandra@cass-01 centos]$

The exception seems to occur because the seed node fails to run the restore command successfully on the other nodes, even though the cassandra user is configured for passwordless SSH login to the other nodes in the cluster. I have no idea what the problem could be. Thanks for any help.
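A diagnostic sketch (my assumption, not part of the original post): the log shows each node appending its output to 'nohup.out' inside the job's working directory, so the actual per-node failure reason should be in that file. Note also that the remote command is "medusa-wrapper sudo medusa ...", so the cassandra user needs passwordless sudo on every node, not just passwordless SSH. Something like the following, run from the seed node, should surface both (host names and the job directory are taken from the log above; adjust to your cluster):

for host in cass-01 cass-02 cass-03 cass-04 cass-05; do
  echo "== ${host} =="
  # Passwordless sudo check: with -n, sudo fails instead of prompting for a password.
  ssh -o BatchMode=yes "${host}.ks.aws.pfd.com" \
    'sudo -n true && echo "sudo: OK" || echo "sudo: requires a password"'
  # Per-node restore output, captured in the job directory named in the log.
  ssh -o BatchMode=yes "${host}.ks.aws.pfd.com" \
    'cat /tmp/medusa-job-b681204d-58c1-4a51-9e92-b35dfca792a4/nohup.out 2>/dev/null'
done

If sudo prompts for a password on any node, "medusa-wrapper sudo medusa" will fail there even though the SSH login itself works.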
