Pacemaker Usage and Administration Notes

############################ PCSD status and information commands ##################################
pcs cluster cib
pcs config
pcs status resources|groups|cluster|nodes|pcsd
pcs --version
pcs status
pcs cluster status
pcs stonith list
pcs stonith describe stonith_agent
pcs stonith show vmfence --full
pcs resource describe ocf:heartbeat:apache
pcs resource defaults
pcs resource show res_1 ---------- display the resource's meta attributes
pcs resource op defaults --------------------- display the currently configured default operation values
pcs resource show --------- list resources
pcs resource show --full -- show detailed information for all resources
pcs resource show res_1 --- show the configuration of res_1

pcs constraint list|show
pcs constraint order show
pcs constraint colocation show
pcs constraint ref resource

 

 

PCSD WEB UI

https://203.0.113.21:2224 -------- open the web UI
Username: hacluster
Password: password
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

############################ PCSD command-line interface ##################################
3.1
The pcs subcommands are as follows:
cluster-----configure cluster options and nodes
resource----create and manage cluster resources
stonith-----configure fence devices for Pacemaker
constraint--manage resource constraints
property----set Pacemaker cluster properties
status------view the current status of the cluster and resources
config------display the complete cluster configuration in human-readable form

3.3
View the raw cluster configuration
In general, do not edit the cluster configuration file directly; view it with:
pcs cluster cib
The cluster must be running for this command to work

3.4
pcs cluster cib /tmp/testfile
The cluster configuration is saved to the file /tmp/testfile


pcs -f testfile1 resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
Create a resource in the raw configuration file; the resource is not added to the currently running cluster configuration
pcs cluster cib-push testfile1
Push the contents of testfile1 into the CIB

3.5
pcs status resources|groups|cluster|nodes|pcsd

3.6
pcs config
Display the full current cluster configuration

3.7
pcs --version
Display the pcs version

3.8
pcs config backup /config_xxxxx
Back up the cluster configuration to a file
pcs config restore /config_xxx [--local]
Restore the configuration from the file to all nodes; with --local, restore only to the local node


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

 

############################ 4. Cluster creation and management ##################################

4.1
Create a cluster
4.1.1
Start pcsd
systemctl start pcsd.service
systemctl enable pcsd.service

4.1.2
Authenticate the cluster nodes
pcs cluster auth r7-node-1-hb r7-node-2-hb -u hacluster -p password
Authorization tokens are stored in the file ~/.pcs/tokens (or /var/lib/pcsd/tokens)

4.1.3
pcs cluster setup --name mycluster r7-node-1-hb r7-node-2-hb --token 10000 --join 100
Create a cluster named mycluster on the nodes; the cluster is not started after creation

pcs cluster start --all
Start the cluster on all nodes


4.2
Configure cluster timeout values
pcs cluster setup --name mycluster r7-node-1-hb r7-node-2-hb --token 10000 --join 100

If the cluster was created with the default token value, the following document describes how to change it:
https://access.redhat.com/solutions/221263?band=se&seSessionId=5118ebe5-baed-4c5f-98fe-d31f036a3c7d&seSource=Recommendation%20Aside&seResourceOriginID=

4.3
Configure an RRP (redundant ring protocol) cluster
pcs cluster setup --name my_rrp_cluster r7-node-1-hb,r7-node-1 r7-node-2-hb,r7-node-2 --token 10000 --join 100
Configuration notes:
https://access.redhat.com/solutions/61832?band=se&seSessionId=60f63f92-2372-44b9-993b-c60393019b11&seSource=Recommendation%20Aside&seResourceOriginID=

4.4
Manage cluster nodes
4.4.1
pcs cluster stop|start [--all] [node] -------- stop|start all nodes or a specific node
4.4.2
pcs cluster enable [--all] [node] ------------ enable cluster services at boot on all nodes or a specific node
pcs cluster disable [--all] [node] ----------- disable cluster services at boot on all nodes or a specific node

4.4.3
Add a node
pcs cluster node add r7-node-2-hb,r7-node-2

4.4.4
Remove a node
pcs cluster node remove r7-node-2-hb,r7-node-2

4.4.5
pcs cluster standby r7-node-2-hb|--all --------- put the node (or all nodes) into standby mode

pcs cluster unstandby r7-node-2-hb|--all ------- take the node (or all nodes) out of standby mode


4.5
Set user permissions
4.5.1

4.5.2
The local user rouser (in the haclient group) is granted read access to the cluster configuration
adduser rouser
usermod -a -G haclient rouser
pcs property set enable-acl=true --force
pcs acl role create read-only description="Read access to cluster" read xpath /cib
pcs acl user create rouser read-only
pcs acl

The local user wuser (in the haclient group) is granted write access to the cluster configuration
adduser wuser
usermod -a -G haclient wuser
pcs property set enable-acl=true --force
pcs acl role create write-access description="Write access to cluster" write xpath /cib
pcs acl user create wuser write-access
pcs acl


4.6
Delete the cluster configuration
pcs cluster destroy --all
This command permanently destroys the cluster configuration

4.7
Display cluster status
pcs status
pcs cluster status
pcs status resources


############################ 5. Configure STONITH ##################################
5.1
List available fencing agents
pcs stonith list

5.2
General notes on fencing devices
pcs property set stonith-enabled=false -------- disables fencing for the cluster; not recommended

5.3
Display fencing agent parameters
pcs stonith describe stonith_agent

5.4
Create a fencing device
pcs stonith create vmfence fence_vmware_soap pcmk_host_list="r7-node-1-hb;r7-node-2-hb" ipaddr=10.0.0.210 ssl_insecure=1 login=root passwd=password pcmk_reboot_action="reboot"


5.5
Display the fencing device
pcs stonith show vmfence --full

5.6
Update and delete fencing devices
pcs stonith update vmfence options
pcs stonith delete vmfence

5.7
pcs stonith fence r7-node-2-hb [--off] ------ with --off the node is powered off; without it the node is rebooted
pcs stonith confirm node --------------------- use when no fence device is available; confirms the node is already down


5.9
Configure fencing levels
pcs stonith level add 1 r7-node-1-hb vmfence
pcs stonith level add 2 node_1 device_2
pcs stonith level add 1 r7-node-2-hb vmfence

pcs stonith level remove 1 ----- remove fence level 1 (for all nodes)
pcs stonith level remove 1 r7-node-1-hb vmfence --- remove only the entry for this node and device

pcs stonith level clear ------ clear all fence levels
pcs stonith level clear r7-node-1|vmfence --- clear levels for the given node or stonith device

pcs stonith level verify

5.11
Disable ACPI soft-off
vi /etc/systemd/logind.conf
HandlePowerKey=ignore
systemctl restart systemd-logind.service


5.12
Test the fencing device
fence_vmware_soap -a 10.0.0.210 -l root -p password --ssl-insecure -o status -n r7-node-1-hb

fence_vmware_soap -a 10.0.0.210 -l root -p password --ssl-insecure -o reboot -n r7-node-1-hb

Simulate a crash
echo c > /proc/sysrq-trigger

 


############################ 6. Configure resources ##################################

6.1
Create a resource
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s

The ocf:heartbeat: prefix can be omitted:
pcs resource create VirtualIP IPaddr2 ip=192.168.0.120 cidr_netmask=24 meta resource-stickiness=50 op monitor interval=30s --group mygroup --disabled
This resource is created disabled (it does not start automatically); the cluster checks whether it is running every 30 seconds.

pcs resource delete VirtualIP ------ delete the resource VirtualIP

6.2
Resource properties
pcs resource list ----------- list all available resource agents
pcs resource standards ------ list available resource agent standards
pcs resource providers ------ list available resource agent providers
pcs resource list string ---- list resource agents whose name contains the given string

6.3
Resource-specific parameters
pcs resource describe ocf:heartbeat:apache ------ display detailed parameters for the resource agent

6.4
Resource meta options
pcs resource defaults resource-stickiness=100 ------- set the default resource-stickiness to 100

pcs resource defaults -------------- display the currently configured resource defaults

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 meta resource-stickiness=50 ----------- meta attributes can be set when creating a resource

pcs resource meta res_1 failure-timeout=20s ------ meta attributes can also be set on an existing resource

pcs resource show res_1 ---------- display the resource's meta attributes


6.5
Resource groups
pcs resource group add mygroup res_1 res_2 res_3 ------- create a resource group
pcs resource group remove mygroup res_1 ---------------- remove res_1 from the resource group
pcs resource group list -------------------------------- list currently configured resource groups

6.5.1
Group options
== a group inherits the priority, target-role, and is-managed options from the resources it contains

6.5.2
Group stickiness
== the sum of the resource-stickiness values of all active resources in the group (see the example below)
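
As a rough illustration of group stickiness (an assumed example; mygroup, res_1, res_2 and res_3 are hypothetical names):
pcs resource defaults resource-stickiness=100 ------- default stickiness of 100 per resource
pcs resource group add mygroup res_1 res_2 res_3 ---- if all three members are active, the group's effective stickiness is 100+100+100=300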

6.6
RESOURCE OPERATIONS

6.6.1
Configure resource operations
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s -------- configure an operation when creating the resource

pcs resource op add VirtualIP monitor interval=30s ----------- add an operation to an existing resource
pcs resource op remove VirtualIP monitor interval=30s -------- remove an operation from an existing resource

pcs resource update VirtualIP op monitor interval=40s ----------- modify an operation's values

6.6.2
Configure global operation defaults
pcs resource op defaults timeout=240s -------- set the global default operation timeout to 240s
pcs resource op defaults --------------------- display the currently configured default operation values

6.7
Display resource configuration
pcs resource show --------- list resources
pcs resource show --full -- show detailed information for all resources
pcs resource show res_1 --- show the configuration of res_1

6.8
Modify resource parameters

pcs resource update VirtualIP op monitor interval=40s ----------- modify an operation's values
pcs resource update vip meta resource-stickiness=1500

6.9
??????


6.10
pcs resource enable res_1 ------ allow the cluster to start res_1
pcs resource disable res_1 ----- prevent the cluster from starting res_1

6.11
Cluster resource cleanup
pcs resource cleanup ----------- once resources are healthy again, refresh the status and failcount of all resources
pcs resource cleanup res_1 ----- refresh only res_1

From RHEL 7.5 onward:
pcs resource refresh ----------- refresh resource status and failcount; by default this only acts on nodes where the resource state is known
pcs resource refresh --full ---- act on all nodes

 


############################ 7. Configure resource constraints ##################################

7.1
Location constraints
7.1.1
Basic location constraints

pcs constraint location vip prefers r7-node-1-hb ------- the resource vip prefers to run on r7-node-1-hb
pcs constraint location vip avoids r7-node-1-hb -------- the resource vip avoids r7-node-1-hb

7.1.2

7.1.4
Location constraint strategies
Two strategies:
Opt-In cluster ----------- resources may not run anywhere by default and only run on nodes where they are explicitly permitted
Opt-Out cluster ---------- resources may run on any node by default, and location constraints are added to keep them off specific nodes. This is the Pacemaker default

7.1.4.1
Configure an Opt-In cluster
pcs property set symmetric-cluster=false ------ do not allow resources to run on any node by default; the default value is true

pcs constraint location Webserver prefers example-1=200
pcs constraint location Webserver prefers example-3=0
pcs constraint location Database prefers example-2=200
pcs constraint location Database prefers example-3=0


7.1.4.2
Configure an Opt-Out cluster
pcs property set symmetric-cluster=true
pcs constraint location Webserver prefers example-1=200
pcs constraint location Webserver avoids example-2=INFINITY
pcs constraint location Database avoids example-1=INFINITY
pcs constraint location Database prefers example-2=200


7.2
Ordering constraints

7.2.1
Mandatory Ordering
This is the default ordering kind
If the first resource you specified was running and is stopped, the second resource will also be stopped (if it is running).
If the first resource you specified was not running and cannot be started, the second resource will be stopped (if it is running).
If the first resource you specified is (re)started while the second resource is running, the second resource will be stopped and restarted.
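
For example, a mandatory ordering constraint for two hypothetical resources res_1 and res_2 can be created like this (kind=Mandatory is the default and can be omitted):
pcs constraint order start res_1 then start res_2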

7.2.2
Advisory Ordering
pcs constraint order VirtualIP then dummy_resource kind=Optional
This constraint has an effect only when both resources are executing the specified actions (i.e., both are starting and/or stopping).

pcs constraint order stop vip then stop my_lvm kind=Optional ------- stopping vip on its own does not force my_lvm to stop

 

7.2.3
Ordered Resource Sets

pcs constraint order set
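
A minimal sketch (res_1 through res_4 are hypothetical resources): within a set marked sequential=false the members may start in any order relative to each other, but the first set as a whole still starts before the second set:
pcs constraint order set res_1 res_2 sequential=false set res_3 res_4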

7.2.4
pcs constraint order remove
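For example (res_1 is a hypothetical resource name):
pcs constraint order remove res_1 ------- remove ordering constraints that reference res_1
A specific constraint can also be removed by ID: find the ID with pcs constraint list --full, then run pcs constraint remove constraint_id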

 

7.3
COLOCATION OF RESOURCES


7.3.1.
Mandatory Placement

pcs constraint colocation add res_1 with res_2 score=INFINITY ------ res_1 runs on the same node as res_2; if res_2 cannot start, res_1 will not start either
pcs constraint colocation add res_1 with res_2 score=-INFINITY ----- res_1 must not run on the same node as res_2; if only one node is available and res_2 is already running there, res_1 will not start

7.3.3.
Colocating Sets of Resources
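
A minimal sketch that keeps three hypothetical resources res_1, res_2 and res_3 together on the same node:
pcs constraint colocation set res_1 res_2 res_3 setoptions score=INFINITY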


7.3.4.
Removing Colocation Constraints
pcs constraint colocation remove source_resource target_resource


7.4.
DISPLAYING CONSTRAINTS
pcs constraint list|show
pcs constraint order show
pcs constraint colocation show
pcs constraint ref resource

 


***************************************************************************************************************
Question:
My cluster has two nodes:
node_1
node_2

The resources are:
lvm1 (type LVM)
fs01, fs01_1, fs01_2 (type Filesystem)
fs02, fs02_1, fs02_2 (type Filesystem)

The planned start order is:
lvm1 starts ------> fs01 -----> fs01_1, fs01_2 start
                |--> fs02 -----> fs02_1, fs02_2 start


fs01 and fs02 are independent of each other; both start once lvm1 is up, and operating on either of them (stop, start, restart, delete) must not affect the other
fs01_1 and fs01_2 are independent of each other; both start once fs01 is up, and operating on either of them (stop, start, restart, delete) must not affect the other
fs02_1 and fs02_2 are independent of each other; both start once fs02 is up, and operating on either of them (stop, start, restart, delete) must not affect the other

How should the constraints be configured for this requirement?

pcs resource create vip IPaddr2 ip=10.0.0.23 cidr_netmask=24 op monitor interval=30s
pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true op monitor interval=30s
pcs resource create fs_01 Filesystem device="/dev/my_vg/lv01" directory="/fs01" fstype="xfs" op monitor interval=30s
pcs resource create fs_02 Filesystem device="/dev/my_vg/lv02" directory="/fs02" fstype="xfs" op monitor interval=30s
pcs resource create fs_01_01 Filesystem device="/dev/my_vg/lv01_01" directory="/fs01/fs01_01" fstype="xfs" op monitor interval=30s
pcs resource create fs_01_02 Filesystem device="/dev/my_vg/lv01_02" directory="/fs01/fs01_02" fstype="xfs" op monitor interval=30s
pcs resource create fs_02_01 Filesystem device="/dev/my_vg/lv02_01" directory="/fs02/fs02_01" fstype="xfs" op monitor interval=30s
pcs resource create fs_02_02 Filesystem device="/dev/my_vg/lv02_02" directory="/fs02/fs02_02" fstype="xfs" op monitor interval=30s


pcs constraint order start my_lvm then fs_01
pcs constraint order start my_lvm then fs_02
pcs constraint colocation add fs_01 with my_lvm score=INFINITY
pcs constraint colocation add fs_02 with my_lvm score=INFINITY
pcs constraint order start fs_01 then fs_01_01
pcs constraint order start fs_01 then fs_01_02
pcs constraint order start fs_02 then fs_02_01
pcs constraint order start fs_02 then fs_02_02
pcs constraint colocation add fs_01_01 with my_lvm score=INFINITY
pcs constraint colocation add fs_01_02 with my_lvm score=INFINITY
pcs constraint colocation add fs_02_01 with my_lvm score=INFINITY
pcs constraint colocation add fs_02_02 with my_lvm score=INFINITY

pcs constraint location vip prefers r7-node-2-hb=1000
pcs constraint location vip prefers r7-node-1-hb=2000
pcs constraint location my_lvm prefers r7-node-2-hb=1000
pcs constraint location my_lvm prefers r7-node-1-hb=2000


pcs resource show --full

 


############################ 8. Resource management ##################################

8.1
Manually move resources
To move all resources off a node, use standby mode
pcs node standby r7-node-1-hb -------- the node is put into standby and its resources fail over to node 2
pcs node unstandby r7-node-1-hb ------ the node leaves standby and resources move back to node 1

8.1.1
Move a resource
pcs resource move res_1 r7-node-2-hb ------- move res_1 to node 2; it is best to specify the target node
tips:
Running pcs resource move on its own adds a location constraint, so the resource will no longer run on the source node. Remove the constraint with pcs resource clear res_1 or pcs constraint delete; once the constraint is removed the resource may move back. See the sketch below.
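
A sketch of the full move-and-clear sequence, using the resource and node names from above:
pcs resource move res_1 r7-node-2-hb
pcs constraint --full -------- the move has created a cli-prefer (or cli-ban) location constraint
pcs resource clear res_1 ----- remove that constraint so the resource may move back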

8.1.2
Move resources back to their preferred node

pcs resource relocate run res_1 res_2 -------- move res_1 and res_2 back to their preferred nodes
tips:
Without resource arguments, all resources are moved back to their preferred nodes
This command ignores the resource-stickiness setting
pcs resource relocate clear --------- remove the constraints created by pcs resource relocate run
pcs resource relocate show ---------- display the resources that would be relocated


8.2
Move resources due to failure
pcs resource meta vip migration-threshold=5 ------- after the vip resource fails 5 times on a node, it is moved to another node
pcs resource defaults migration-threshold=5 ------- set the default migration-threshold for all resources

pcs resource failcount show vip ----------- show the failure count
pcs resource failcount reset vip ---------- reset the failure count


8.3
Move resources due to connectivity changes
Add a ping resource
pcs resource create ping ocf:pacemaker:ping dampen=5s multiplier=1000 host_list=10.0.0.210 clone --------- create a cloned ping resource that monitors connectivity to 10.0.0.210

pcs constraint location vip rule score=-INFINITY pingd lt 1 or not_defined pingd -------- keep vip off nodes that cannot reach the ping target


8.4.
ENABLING, DISABLING, AND BANNING CLUSTER RESOURCES

pcs resource disable resource_id [--wait[=n]]
pcs resource enable resource_id [--wait[=n]]
pcs resource ban resource_id [node] [--master] [lifetime=lifetime] [--wait[=n]]

pcs resource debug-start resource_id

 


8.6.
MANAGED RESOURCES
A resource can be set to unmanaged mode: it remains in the Pacemaker configuration but is no longer managed by the cluster
pcs resource unmanage res_1 | groupname
pcs resource manage res_1 | groupname

 

############################ 9. Advanced configuration ##################################

9.1 Resource clones
9.1.1
Create and remove cloned resources
pcs resource create vip IPaddr2 ip=10.0.0.23 cidr_netmask=24 op monitor interval=30s clone -------- create the cloned resource vip
pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true op monitor interval=30s clone
pcs resource create fs_01 Filesystem device="/dev/my_vg/lv01" directory="/fs01" fstype="xfs" op monitor interval=30s clone

pcs resource clone res_1 --------- clone an existing resource

pcs resource unclone res_1 ------- remove the clone

9.1.2

 

9.2 MULTISTATE RESOURCES: RESOURCES THAT HAVE MULTIPLE MODES

pcs resource create vip IPaddr2 ip=10.0.0.23 cidr_netmask=24 op monitor interval=30s master

pcs resource master master_slave_name res_1 ------- create a master/slave (multistate) resource from the existing resource res_1
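
A fuller (hypothetical) sketch, assuming an existing stateful resource res_1, with typical master meta options:
pcs resource master res_1-master res_1 master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true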


############################ 10. CLUSTER QUORUM ##################################

10.2
Quorum management commands
pcs quorum --------- display the quorum configuration
pcs quorum status -- display the quorum runtime status
pcs quorum expected-votes votes ----- set the expected votes for the running cluster

10.3
Modify quorum options
pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[time-in-ms]] [wait_for_all=[0|1]]
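
For example (a minimal sketch; changing these options with pcs quorum update generally requires the cluster to be stopped):
pcs cluster stop --all
pcs quorum update wait_for_all=1
pcs cluster start --all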

10.4.
THE QUORUM UNBLOCK COMMAND
pcs cluster quorum unblock


10.5
QUORUM DEVICES

10.5.1 Installing Quorum Device Packages

yum install corosync-qdevice --------- install on all cluster nodes
yum install pcs corosync-qnetd ------- install on the quorum device host
systemctl start pcsd.service --------- run on the quorum device host
systemctl enable pcsd.service -------- run on the quorum device host


10.5.2. Configuring a Quorum Device
Run on the qdevice host:
pcs qdevice setup model net --enable --start ------- create and start the qdevice and enable it to start at boot
pcs qdevice status net --full --------- check the qdevice status

Run on one node in the cluster:
pcs cluster auth qdevice
pcs quorum config
pcs quorum status
pcs quorum device add model net host=10.0.0.21 algorithm=ffsplit|lms
pcs quorum config
pcs quorum status
pcs quorum device status
pcs qdevice status net --full --------- check the status of the corosync-qnetd daemon

10.5.3 Managing the qdevice
pcs qdevice start net
pcs qdevice stop net
pcs qdevice enable net
pcs qdevice disable net
pcs qdevice kill net

pcs quorum device update model algorithm=lms
pcs quorum device remove ------- remove a quorum device configured on a cluster node
pcs qdevice destroy net -------- To disable and stop a quorum device on the quorum device host and delete all of its configuration files

 

 

LSB resources
https://access.redhat.com/solutions/753443
https://access.redhat.com/solutions/271703?band=se&seSessionId=9042c971-b8db-4675-8545-07c6c6686f76&seSource=Recommendation%20Aside&seResourceOriginID=

pcs resource list lsb
pcs resource create custom_lsb lsb:custom

 
