VSM Import Cluster Functional Verification, Part 2 (Import)

3 VSM Import Cluster

3.1 Log in to the VSM web UI

Log in to the VSM web UI at https://172.16.34.51/dashboard/vsm/ and click Import Cluster under the Cluster Management menu. The page is shown below:

[Screenshot: Import Cluster page under Cluster Management]
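If the page does not load, a quick reachability check from the ceph-deploy node can rule out basic network or service problems. This is only a sketch, assuming the dashboard URL above and that curl is available:

# The VSM dashboard uses a self-signed certificate, so -k skips verification;
# a 200 or 302 status code means the service is answering
curl -k -s -o /dev/null -w "%{http_code}\n" https://172.16.34.51/dashboard/vsm/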

3.2 Start vsm-agent on the Ceph nodes

Run the following on each Ceph node:

python /usr/bin/vsm-agent --config-file /etc/vsm/vsm.conf --log-file /var/log/vsm/vsm-agent.log 2>&1 &
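To confirm the agent actually came up on each node, a quick check such as the following can be used; the log path matches the --log-file argument above:

# The bracketed pattern keeps grep from matching its own process
ps -ef | grep [v]sm-agent
tail -n 20 /var/log/vsm/vsm-agent.log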

After vsm-agent has been started on each Ceph node, the Import Cluster page looks like this:

[Screenshot: Import Cluster page after vsm-agent is started on each node]

3.3 Generate the OSD keyrings and modify ceph.conf (on the ceph-deploy node)

3.3.1 Generate the OSD keyrings

ceph auth get-or-create osd.0 | tee /home/cephcluster_yhc/keyring.osd.0
ceph auth get-or-create osd.1 | tee /home/cephcluster_yhc/keyring.osd.1
ceph auth get-or-create osd.2 | tee /home/cephcluster_yhc/keyring.osd.2
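As a sanity check, the keys just written out can be compared against the cluster's auth database; the osd.0/1/2 IDs here follow the keyring names above and should be adjusted if the actual OSD IDs differ:

# Each command prints the key and caps registered for that OSD
ceph auth get osd.0
ceph auth get osd.1
ceph auth get osd.2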

Copy them to the corresponding nodes:

cp  /home/cephcluster_yhc/keyring.osd.0 /etc/ceph/
scp /home/cephcluster_yhc/keyring.osd.1 ceph02:/etc/ceph/
scp /home/cephcluster_yhc/keyring.osd.2 ceph03:/etc/ceph/
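Assuming passwordless SSH between the nodes (which the scp commands above already rely on), the copies can be verified in one pass:

# Each node should hold the keyring for its own OSD under /etc/ceph/
ls -l /etc/ceph/keyring.osd.0
ssh ceph02 ls -l /etc/ceph/keyring.osd.1
ssh ceph03 ls -l /etc/ceph/keyring.osd.2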

3.3.2 Modify ceph.conf and push it to each Ceph node

The modified ceph.conf is as follows:

[global]
fsid = add3d8a4-f6aa-4d6b-a3ce-aa285d55ae56
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.1.35.52,192.1.35.53,192.1.35.54
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.1.35.0/24
cluster network = 192.2.35.0/24

[mon]
mon data = /var/lib/ceph/mon/$cluster-$id
mon clock drift allowed = .

[mon.ceph01]
host = ceph01
mon addr = 192.1.35.52:6789

[mon.ceph02]
host = ceph02
mon addr = 192.1.35.53:6789

[mon.ceph03]
host = ceph03
mon addr = 192.1.35.54:6789

[osd]
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
osd crush update on start = false
filestore xattr use omap = true
keyring = /etc/ceph/keyring.$name
osd data = /var/lib/ceph/osd/ceph-$id
osd heartbeat grace =
osd heartbeat interval =
osd mkfs type = xfs
osd mkfs options xfs = -f
osd journal size =

[osd.0]
osd journal = /dev/sdb2
devs = /dev/sdb1
host = ceph01
cluster addr = 192.2.35.52
public addr = 192.1.35.52

[osd.1]
osd journal = /dev/sdb2
devs = /dev/sdb1
host = ceph02
cluster addr = 192.2.35.53
public addr = 192.1.35.53

[osd.2]
osd journal = /dev/sdb2
devs = /dev/sdb1
host = ceph03
cluster addr = 192.2.35.54
public addr = 192.1.35.54
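Before pushing the file, it is worth confirming that the [osd.N] sections and monitor addresses match the running cluster; a minimal check, assuming the admin keyring is present on the ceph-deploy node:

# Monitor names/addresses and OSD IDs as the cluster actually reports them
ceph mon dump
ceph osd tree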

Push it to each Ceph node:

ceph-deploy --overwrite-conf admin ceph01 ceph02 ceph03
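A quick way to confirm the push landed everywhere is to compare checksums across the nodes; this one-liner again assumes passwordless SSH from the ceph-deploy node:

# ceph.conf should have the same checksum on every node after the push
for h in ceph01 ceph02 ceph03; do ssh $h md5sum /etc/ceph/ceph.conf; done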

Restart the Ceph daemons on each Ceph node:

service ceph restart
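After the restart, the cluster should report healthy monitors and OSDs before the import is attempted; for example:

# Overall cluster state plus a one-line OSD summary
ceph -s
ceph osd stat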

3.4 Perform the Import Cluster operation

1) Click the Import Cluster button on the Import Cluster page. The following page appears:

[Screenshot: Import Cluster dialog]

2) Next to the Crushmap field, set Monitor Host to ceph01, enter /etc/ceph/ceph.client.admin.keyring for Monitor Keyring, and click the AutoDetect button, as shown below.

[Screenshot: Monitor Host and Monitor Keyring fields filled in before AutoDetect]

3) After AutoDetect is clicked, the Crushmap field is automatically filled with the output of ceph osd crush dump. The page is shown below:

[Screenshot: Crushmap field populated by AutoDetect]
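If AutoDetect is not used, the same JSON can be generated on any monitor node and pasted into the Crushmap field by hand:

# Dump the CRUSH map as JSON, which is what AutoDetect places in the Crushmap field
ceph osd crush dump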

4) Fill in the cluster configuration in the Ceph.conf field and click the Validate button. If the configuration is correct, "Validate Cluster Successfully!" pops up in the upper-right corner and the Crushmap topology is displayed.

[Screenshot: successful validation with the Crushmap topology displayed]

5) Click Submit. If the import succeeds, a notification pops up in the upper-right corner and the browser is redirected to the Cluster Status page, shown below:

[Screenshot: Cluster Status page after a successful import]

4 Performance display on the Cluster Status page

4.1 Set the display properties

On the Settings page of the VSM Management menu, set the CPU_DIAMOND_COLLECT_INTERVAL and CEPH_DIAMOND_COLLECT_INTERVAL properties to 5.

[Screenshot: Settings page with the collection intervals set to 5]
