Ceph problem notes

While testing Ceph earlier, the cluster kept reporting this warning:

     health HEALTH_WARN
pool cephfs_metadata2 has many more objects per pg than average (too few pgs?)
pool cephfs_data2 has many more objects per pg than average (too few pgs?)

Check the PG counts:

[root@node1 ~]# ceph osd pool get cephfs_metadata2 pg_num
pg_num:
[root@node1 ~]# ceph osd pool get cephfs_metadata2 pgp_num
pgp_num:
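For sizing reference, a common rule of thumb (not from the original post) is pg_num ≈ (OSD count × 100) / replica size, rounded up to the next power of two. A minimal sketch, with hypothetical OSD and replica counts:

```shell
# Rule-of-thumb PG sizing; osds and size are assumed values,
# not the actual numbers from this cluster.
osds=3        # hypothetical number of OSDs
size=3        # hypothetical pool replica size
raw=$(( osds * 100 / size ))   # raw target before rounding
pg=1
while [ "$pg" -lt "$raw" ]; do  # round up to a power of two
  pg=$(( pg * 2 ))
done
echo "suggested pg_num: $pg"
```

With 3 OSDs and size 3 this suggests 128 PGs; plug in your own cluster's numbers.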

Then I remembered that this had only been a test install, and since pg_num can be increased but never decreased, I had just picked an arbitrary number at the time. The fix is simply to raise it back to a sensible value.

[root@node1 ~]# ceph osd pool set cephfs_metadata2 pg_num
Error E2BIG: specified pg_num is too large (creating new PGs on ~ OSDs exceeds per-OSD max of )

The command failed with the error above. According to "http://www.selinuxplus.com/?p=782", there is a limit on how many new PGs can be created in a single step. In the end I solved it the brute-force way, raising pg_num in several smaller increments:

[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num
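The repeated manual commands above can be sketched as a loop that doubles pg_num until the target is reached. This is a dry run: the starting and target values are assumptions (the post's actual numbers were lost), and `echo` stands in for actually executing the ceph command:

```shell
# Dry-run sketch of the step-wise pg_num increase.
# current/target are hypothetical values; remove the "echo" to run for real.
pool=cephfs_metadata2
current=8      # assumed starting pg_num
target=128     # assumed target pg_num
while [ "$current" -lt "$target" ]; do
  current=$(( current * 2 ))
  echo ceph osd pool set "$pool" pg_num "$current"
done
# pgp_num must be raised to match pg_num before data actually rebalances
echo ceph osd pool set "$pool" pgp_num "$current"
```

Doubling per step keeps each increase under the per-OSD creation limit that triggered the E2BIG error; if a single doubling is still too large for your cluster, use smaller increments.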

After about half an hour of rebalancing, the cluster returned to a healthy state.
