Ceph environment cleanup

Step 1: Run the following in the /root/ceph directory (on the first node):

ceph-deploy purge ceph01 ceph02 ceph03 ceph04
ceph-deploy purgedata ceph01 ceph02 ceph03 ceph04
ceph-deploy forgetkeys
rm -rf ceph.*
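To verify that the purge really removed the Ceph packages everywhere, a quick loop such as the following can be run from the first node (a minimal sketch; it assumes RPM-based nodes and the passwordless SSH access that ceph-deploy already requires):

# No output after each hostname means the Ceph packages are gone from that node
for node in ceph01 ceph02 ceph03 ceph04
do
    echo "${node}:"
    ssh ${node} "rpm -qa | grep ceph"
done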

Step 2: Run the following in the /root directory (on node 1, node 2, node 3, and node 4):

# Collect every LVM volume group whose name contains "ceph" and force-remove it
result_VG=$(vgdisplay | grep "ceph" | awk '{print $3}')
for vg in ${result_VG}
do
    vgremove -f ${vg}
done
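After the loop finishes, a quick check with the standard LVM listing commands should come back empty if the cleanup worked (nothing here is specific to this setup):

# No output from either command means the Ceph volume groups and logical volumes are gone
vgs --noheadings | grep ceph
lvs --noheadings | grep ceph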

Step 3: Clean up the automation framework:

Step 4: Clear the cache devices (run on every node)

The nvme tool must be installed in advance.
Link: https://pan.baidu.com/s/1icvDViQeh6iW9nfad7WxAQ
Extraction code: s3g6
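If the nodes have internet access, the tool can usually also be installed from the distribution repositories as nvme-cli instead of using the download above (shown for a yum-based system; adjust for your distribution):

# Assumes an RPM/yum-based distribution; use apt on Debian/Ubuntu
yum install -y nvme-cli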

cfdisk /dev/nvme0n1                  # interactively delete any existing partitions
cfdisk /dev/nvme1n1
nvme format -s 1 -f /dev/nvme0n1     # secure erase (-s 1 = user-data erase); -f skips the confirmation prompt
nvme format -s 1 -f /dev/nvme1n1
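Once the format commands return, nvme list (a standard nvme-cli subcommand) can be used as a quick sanity check:

nvme list    # both NVMe namespaces should still be listed, with their usage reset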

Check the partitions:

lsblk

# Unbind the existing bcache devices
Perform the unregister operation on the /sys/fs/bcache/ directory of nodes ceph01 to ceph04:

#!/bin/bash
# Unregister every bcache cache set on nodes ceph01..ceph04.
# /sys/fs/bcache/ also contains the control files "register" and
# "register_quiet", so only write to entries that expose an unregister file.
for n in {1..4}
do
    echo "ceph0$n"
    f=$(ssh ceph0$n "ls /sys/fs/bcache/")
    for i in $f
    do
        ssh ceph0$n "[ -f /sys/fs/bcache/$i/unregister ] && echo 1 > /sys/fs/bcache/$i/unregister"
    done
done
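After the script has run, listing /sys/fs/bcache/ again on each node is a quick sanity check; only the control files should remain:

# Only "register" and "register_quiet" should be left on each node
for n in {1..4}
do
    echo "ceph0$n"
    ssh ceph0$n "ls /sys/fs/bcache/"
done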

# Detach and stop bcache on disks sda through sdh, then overwrite the start of each disk with dd

# Loop over sda..sdh: detach each backing device from bcache, stop it,
# then zero the first 2 GB of the disk
for diskID in {a..h}
do
    if [ -d /sys/block/sd${diskID}/bcache ]; then
        echo 1 > /sys/block/sd${diskID}/bcache/detach
        echo 1 > /sys/block/sd${diskID}/bcache/stop
    fi
    dd if=/dev/zero of=/dev/sd${diskID} count=2048 bs=1M
done

Step 5: Reboot the system (every node)
After the NVMe disks have been wiped, the bcache partitions cannot be removed directly; they are cleared automatically after a reboot.
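After the nodes come back up, a plain lsblk check confirms that the bcache devices are really gone:

lsblk | grep bcache    # no output means all bcache devices have been cleared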

Step 6: After the reboot, run the following commands

wipefs -a /dev/sda
wipefs -a /dev/sdb
wipefs -a /dev/sdc
wipefs -a /dev/sdd
wipefs -a /dev/sde
wipefs -a /dev/sdf
wipefs -a /dev/sdg
wipefs -a /dev/sdh
wipefs -a /dev/nvme0n1
wipefs -a /dev/nvme1n1

Run them again; if there is no output, the disks are clean.
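The same check can be scripted; wipefs without -a only lists any remaining signatures instead of erasing them, so an empty result for every device means the disks are clean (a small sketch over the device names used above):

# Read-only check: prints any filesystem/RAID signatures still present
for dev in sd{a..h} nvme0n1 nvme1n1
do
    wipefs /dev/${dev}
done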

Step 7: Remove the Ceph directories and user

sudo rm -rf /var/lib/ceph/osd/*
sudo rm -rf /var/lib/ceph/mon/*
sudo rm -rf /var/lib/ceph/mds/*
sudo rm -rf /var/lib/ceph/bootstrap-mds/*
sudo rm -rf /var/lib/ceph/bootstrap-osd/*
sudo rm -rf /var/lib/ceph/bootstrap-mon/*
sudo rm -rf /var/lib/ceph/tmp/*
sudo rm -rf /etc/ceph/*
sudo rm -rf /var/run/ceph/*
rm -rf /root/ceph/*
rm -rf /var/lib/ceph/*/*
rm -rf /var/log/ceph/*
userdel -r ceph  # remove the ceph user
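Finally, a few generic commands confirm that nothing Ceph-related is left behind on a node:

ps aux | grep ceph | grep -v grep    # should print nothing
id ceph                              # should report that the user does not exist
ls /etc/ceph /var/lib/ceph           # directories should be empty or already removed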