What are the OCR and the voting disk?
As a cluster, Oracle Clusterware needs shared storage to hold the configuration information of the entire cluster, and the OCR (Oracle Cluster Registry) is where that configuration is stored. The OCR does not need much space; in 10g, Oracle suggests that 256 MB is more than enough. The OCR must reside on a cluster file system or on raw devices. For performance reasons, I recommend placing the OCR on raw devices: performance is higher and management is not complicated (there are usually only a few OCR and voting disk devices). The OCR holds the cluster's configuration, and this information can only be maintained on one node, called the Master Node. The other nodes keep a read-only copy of the OCR in memory; all OCR updates are performed by the master node, which then notifies the other nodes.
The voting disk records the cluster's node membership and is used for heartbeat monitoring and node arbitration.
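The voting disks currently in use can be listed at any time with crsctl (the syntax appears in the crsctl help further below); the output here is shaped like the transcripts later in this article:

[root@node1 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).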
Does maintaining the OCR and voting disk require the cluster to be offline?
OCR maintenance is, in most cases, performed online, because each node has its own ocr.loc file, and operating online ensures that the ocr.loc file on every node is updated in time. However, some operations, such as repair and rebuild, must be performed offline (described in detail later). An example ocr.loc is shown below.
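For reference, this is roughly what ocr.loc looks like on a node; the device paths are illustrative, chosen to match the later examples in this article:

[root@node1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw4
local_only=false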
Voting disk maintenance, by contrast, usually has to be performed while the cluster is offline. A backup sketch follows.
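Because the voting disk is a raw block image rather than a registry, a common precaution before offline maintenance is a dd copy. This is general 10g practice rather than something shown in the transcripts below, and the paths are illustrative:

# back up the voting disk before offline maintenance (paths illustrative)
dd if=/dev/raw/raw2 of=/backup/votedisk_raw2.bak
# to restore it later, swap if= and of= while the cluster is stopped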
Which commands are used to maintain the OCR?
The commonly used OCR maintenance commands are:
ocrcheck
[root@node1 bin]# ./ocrcheck -h
Name:
        ocrcheck - Displays health of Oracle Cluster Registry.
Synopsis:
        ocrcheck
Description:
        prompt> ocrcheck
        Displays current usage, location and health of the cluster registry
Notes:
        A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrcheck_<pid>.log.
        Please ensure you have file creation privileges in the above directory before running this tool.

ocrdump (the dumped contents can be used to view what is in the OCR, but cannot be used for recovery)
[root@node1 bin]# ./ocrdump -h
Name:
        ocrdump - Dump contents of Oracle Cluster Registry to a file.
Synopsis:
        ocrdump [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>] [-xml] [-noheader]
Description:
        Default filename is OCRDUMPFILE. Examples are:
        prompt> ocrdump
        writes cluster registry contents to OCRDUMPFILE in the current directory
        prompt> ocrdump MYFILE
        writes cluster registry contents to MYFILE in the current directory
        prompt> ocrdump -stdout -keyname SYSTEM
        writes the subtree of SYSTEM in the cluster registry to stdout
        prompt> ocrdump -stdout -xml
        writes cluster registry contents to stdout in xml format
Notes:
        The header information will be retrieved based on best effort basis.
        A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrdump_<pid>.log.
        Make sure you have file creation privileges in the above directory before running this tool.

ocrconfig
[root@node1 bin]# ./ocrconfig -h
Name:
        ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
        ocrconfig [option]
        option:
                -export <filename> [-s online]      - Export cluster register contents to a file
                -import <filename>                  - Import cluster registry contents from a file
                -upgrade [<user> [<group>]]         - Upgrade cluster registry from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade cluster registry to the specified version
                -backuploc <dirname>                - Configure periodic backup location
                -showbackup                         - Show backup information
                -restore <filename>                 - Restore from physical backup
                -replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair ocr|ocrmirror <filename>    - Repair local OCR configuration
                -help                               - Print out this help information
Note:
        A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log.
        Please ensure you have file creation privileges in the above directory before running this tool.
Table D-1 The ocrconfig Command Options

Option | Purpose
---|---
-backuploc | To change an OCR backup file location. For this entry, use a full path that is accessible by all of the nodes.
-downgrade | To downgrade an OCR to an earlier version.
-export | To export the contents of an OCR into a target file.
-help | To display help for the ocrconfig utility.
-import | To import the OCR contents from a previously exported OCR file.
-overwrite | To update an OCR configuration that is recorded on the OCR with the current OCR configuration information that is found on the node from which you are running this command.
-repair | To update an OCR configuration on the node from which you are running this command with the new configuration information specified by this command.
-replace | To add, replace, or remove an OCR location.
-restore | To restore an OCR from an automatically created OCR backup file.
-showbackup | To display the location, timestamp, and the originating node name of the backup files that Oracle created in the past 4 hours, 8 hours, 12 hours, and in the last day and week. You do not have to be the root user to use this option.
-upgrade | To upgrade an OCR to a later version.
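As a quick orientation before the recovery examples, here is a minimal sketch of the two most routine ocrconfig calls, using only options from the help text above; the target file name is illustrative:

# list the automatic backups maintained by the master node
./ocrconfig -showbackup
# take a logical export of the OCR (ideally with CRS shut down)
./ocrconfig -export /backup/ocr_backup.exp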
How do you recover after the OCR is corrupted?
After the OCR is corrupted there are generally two ways to fix it: recovery and rebuild. For recovery, we can restore from a file previously created with -export, or recover from the backups automatically kept by the master node.
[root@node1 bin]# ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
httpd_vip      application    0/1    0/0    ONLINE    ONLINE    node2
httpd_web      application    0/1    0/4    ONLINE    ONLINE    node2
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2
ora.racdb.db   application    0/0    0/1    ONLINE    ONLINE    node2
ora....b1.inst application    0/5    0/0    ONLINE    ONLINE    node1
ora....b2.inst application    0/5    0/0    ONLINE    ONLINE    node2
[root@node1 bin]# ./ocrconfig -export a.ocr     (exporting is best done with CRS shut down)
[root@node1 bin]# ./crs_stop httpd_web
Attempting to stop `httpd_web` on member `node2`
Stop of `httpd_web` on member `node2` succeeded.
[root@node1 bin]# ./crs_stop httpd_vip
Attempting to stop `httpd_vip` on member `node2`
Stop of `httpd_vip` on member `node2` succeeded.
[root@node1 bin]# ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
httpd_vip      application    0/1    0/0    OFFLINE   OFFLINE
httpd_web      application    0/1    0/4    OFFLINE   OFFLINE
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2
ora.racdb.db   application    0/0    0/1    ONLINE    ONLINE    node2
ora....b1.inst application    0/5    0/0    ONLINE    ONLINE    node1
ora....b2.inst application    0/5    0/0    ONLINE    ONLINE    node2
[root@node1 bin]# ./crs_unregister httpd_web
[root@node1 bin]# ./crs_unregister httpd_vip
[root@node1 bin]# ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2
ora.racdb.db   application    0/0    0/1    ONLINE    ONLINE    node2
ora....b1.inst application    0/5    0/0    ONLINE    ONLINE    node1
ora....b2.inst application    0/5    0/0    ONLINE    ONLINE    node2
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crs_stat -t -v
root@node2's password:
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2
ora.racdb.db   application    0/0    0/1    ONLINE    ONLINE    node2
ora....b1.inst application    0/5    0/0    ONLINE    ONLINE    node1
ora....b2.inst application    0/5    0/0    ONLINE    ONLINE    node2
[root@node1 bin]# ./corconfig -import a.ocr
-bash: ./corconfig: No such file or directory
[root@node1 bin]# ./ocrconfig -import a.ocr
PROT-19: Cannot proceed while clusterware is running. Shutdown clusterware first     (the clusterware must be shut down when importing the OCR)
[root@node1 bin]# ./crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs
root@node2's password:
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ./ocrconfig -import a.ocr
[root@node1 bin]# ./crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
httpd_vip      application    0/1    0/0    ONLINE    ONLINE    node1
httpd_web      application    1/1    0/4    ONLINE    ONLINE    node1
ora....SM1.asm application    0/5    0/0    ONLINE    OFFLINE
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2
ora.racdb.db   application    0/0    0/1    ONLINE    OFFLINE
ora....b1.inst application    0/5    0/0    ONLINE    OFFLINE
ora....b2.inst application    0/5    0/0    ONLINE    OFFLINE
The following attempts to recover the OCR with -restore:
[root@node1 crs]# ll -h /u01/app/crs_home/bin/a.ocr
-rw-r--r-- 1 root root 93K Aug 1 16:06 /u01/app/crs_home/bin/a.ocr
[root@node1 crs]# ll -h
total 25M
-rw-r--r-- 1 root   root 4.4M Jul 31 12:36 35521234
-rw-r--r-- 1 oracle root 3.5M Jul 22 14:04 backup00.ocr
-rw-r--r-- 1 oracle root 3.5M Jul 10 14:04 backup01.ocr
-rw-r--r-- 1 oracle root 3.5M Jul  9 14:00 backup02.ocr
-rw-r--r-- 1 oracle root 3.5M Jul 22 14:04 day.ocr
-rw-r--r-- 1 oracle root  85K Jul 24 15:27 ocr.exp
-rw-r--r-- 1 oracle root 3.5M Jul 10 14:04 week_.ocr
-rw-r--r-- 1 oracle root 3.5M Jul  3 14:15 week.ocr
[root@node1 crs]# ocrconfig -restore /u01/app/crs_home/bin/a.ocr
PROT-22: Storage too small
[root@node1 crs]# ocrconfig -restore /u01/app/crs_home/cdata/crs/backup00.ocr
PROT-19: Cannot proceed while clusterware is running. Shutdown clusterware first

Note: a file produced by -export can only be loaded back with -import (hence the PROT-22 error above, where -restore was pointed at an export file); for -restore, the entire cluster must likewise be offline.
The detailed restore procedure is not demonstrated here; a minimal sketch follows.
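This is a sketch of the restore workflow pieced together from the errors above, assuming the default backup location $CRS_HOME/cdata/crs; run the crsctl commands on every node:

# stop the clusterware everywhere first, otherwise PROT-19 is raised
./crsctl stop crs
# restore from a physical backup created automatically by the master node
./ocrconfig -restore /u01/app/crs_home/cdata/crs/backup00.ocr
# bring the clusterware back up on all nodes
./crsctl start crs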
How do you add or remove an OCR disk in 10g?
In 10g, if you choose external redundancy when installing the clusterware, only one OCR disk can be selected, but we can still add a new disk to the OCR from the command line. If we need to replace the existing OCR disk, two OCR copies must exist at the same time: only when both the primary OCR and the mirror OCR are present can the primary OCR or the mirror OCR be replaced (this is why the first -replace ocr attempt below fails with PROT-16).
A worked example:
[root@node1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
[root@node1 bin]# ocrconfig -replace ocr /dev/raw/raw3
PROT-16: Internal Error
[root@node1 bin]# ocrconfig -replace ocrmirror /dev/raw/raw4
[root@node1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw4
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
[root@node1 bin]# ocrconfig -replace ocr /dev/raw/raw3
[root@node1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw3
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw4
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
[root@node1 bin]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/ocrcheck
Warning: Permanently added the RSA host key for IP address '192.168.100.32' to the list of known hosts.
root@node2's password:
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw3
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw4
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
[root@node1 bin]# ssh node2 cat /etc/oracle/ocr.loc
root@node2's password:
#Device/file /dev/raw/raw1 getting replaced by device /dev/raw/raw3
ocrconfig_loc=/dev/raw/raw3
ocrmirrorconfig_loc=/dev/raw/raw4
local_only=false
Adding or removing an OCR device is best done while all nodes are online; otherwise node information gets out of sync.
For example, if an OCR removal is performed on node1 while node2 is shut down, the result is as follows:
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs
root@node2's password:
Permission denied, please try again.
root@node2's password:
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
root@node2's password:
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw3
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw4
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
[root@node1 bin]# ./ocrconfig -replace ocrmirror
[root@node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw3
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
[root@node1 bin]# cat /etc/oracle/ocr.loc
#Device/file /dev/raw/raw4 being deleted
ocrconfig_loc=/dev/raw/raw3
local_only=false
[root@node1 bin]# ssh node2 cat /etc/oracle/ocr.loc
root@node2's password:
Permission denied, please try again.
root@node2's password:
#Device/file /dev/raw/raw1 getting replaced by device /dev/raw/raw3
ocrconfig_loc=/dev/raw/raw3
ocrmirrorconfig_loc=/dev/raw/raw4
local_only=false
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
root@node2's password:
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/ocrconfig -repair ocrmirror
root@node2's password:
[root@node1 bin]# ssh node2 cat /etc/oracle/ocr.loc
root@node2's password:
#Device/file /dev/raw/raw4 being deleted
ocrconfig_loc=/dev/raw/raw3
local_only=false
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
root@node2's password:
CSS appears healthy
CRS appears healthy
EVM appears healthy
Conclusion: when adding, removing, or replacing the OCR, all nodes should preferably be online at the same time; when importing, exporting, or restoring the OCR, the cluster nodes must be shut down.
How do you add or remove a voting disk?
Voting disks are added and removed with the crsctl command:
[root@node1 bin]# ./crsctl
Usage: crsctl check  crs          - checks the viability of the CRS stack
       crsctl check  cssd         - checks the viability of CSS
       crsctl check  crsd         - checks the viability of CRS
       crsctl check  evmd         - checks the viability of EVM
       crsctl set    css <parameter> <value> - sets a parameter override
       crsctl get    css <parameter> - gets the value of a CSS parameter
       crsctl unset  css <parameter> - sets CSS parameter to its default
       crsctl query  css votedisk    - lists the voting disks used by CSS
       crsctl add    css votedisk <path> - adds a new voting disk
       crsctl delete css votedisk <path> - removes a voting disk
       crsctl enable  crs    - enables startup for all CRS daemons
       crsctl disable crs    - disables startup for all CRS daemons
       crsctl start crs  - starts all CRS daemons.
       crsctl stop  crs  - stops all CRS daemons. Stops CRS resources in case of cluster.
       crsctl start resources  - starts CRS resources.
       crsctl stop resources  - stops CRS resources.
       crsctl debug statedump evm  - dumps state info for evm objects
       crsctl debug statedump crs  - dumps state info for crs objects
       crsctl debug statedump css  - dumps state info for css objects
       crsctl debug log css [module:level]{,module:level} ... - Turns on debugging for CSS
       crsctl debug log crs [module:level]{,module:level} ... - Turns on debugging for CRS
       crsctl debug log evm [module:level]{,module:level} ... - Turns on debugging for EVM
       crsctl debug log res <resname:level> turns on debugging for resources
       crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
       crsctl query crs activeversion - lists the CRS software operating version
       crsctl lsmodules css - lists the CSS modules that can be used for debugging
       crsctl lsmodules crs - lists the CRS modules that can be used for debugging
       crsctl lsmodules evm - lists the EVM modules that can be used for debugging
 If necesary any of these commands can be run with additional tracing by
 adding a "trace" argument at the very front.
 Example: crsctl trace check css
First, add one online:
[root@node1 bin]# ./crsctl add css votedisk /dev/raw/raw4
Cluster is not in a ready state for online disk addition
[root@node1 bin]# ./crsctl add css votedisk /dev/raw/raw4 -force
Now formatting voting disk: /dev/raw/raw4
successful addition of votedisk /dev/raw/raw4.
[root@node1 bin]# ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw4
located 2 votedisk(s).
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl query css votedisk
root@node2's password:
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw4
located 2 votedisk(s).
[root@node1 bin]# ./crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs
root@node2's password:
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ./crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    node2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2
ora.racdb.db   application    0/1    0/1    ONLINE    ONLINE    node1
ora....b1.inst application    0/5    0/0    ONLINE    OFFLINE
ora....b2.inst application    0/5    0/0    ONLINE    ONLINE    node2
The delete operation is not demonstrated here.
It appears that a voting disk can be added online successfully; in 10g the -force option is required. However, some sources state that it is best to stop all applications when adding or removing voting disks. A hedged sketch of the delete follows.
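Since deletion was not demonstrated above, here is a minimal sketch using the crsctl syntax from the help text earlier; as with addition, 10g generally wants -force, and doing it with the stack down is safest:

# stop the stack on all nodes (safest), then remove the voting disk
./crsctl stop crs
./crsctl delete css votedisk /dev/raw/raw4 -force
# confirm the remaining voting disks
./crsctl query css votedisk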
How do you recreate the OCR and voting disk (very useful when both are corrupted and no backup exists)?
First, stop the clusterware on all nodes:
[root@node1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3852
         Available space (kbytes) :    1121884
         ID                       :  849560479
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
[root@node1 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw4
located 2 votedisk(s).
[root@node1 ~]# crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@node1 ~]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
Run the rootdelete.sh script on every node:
[root@node1 ~]# cd $CRS_HOME/install
[root@node1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
[root@node1 install]# ssh node2 /u01/app/crs_home/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
On any one node, run the rootdeinstall.sh script (it only needs to run on a single node):
[root@node1 install]# ./rootdeinstall.sh
Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.486033 seconds, 21.6 MB/s
Run the root.sh script on all nodes:
[root@node1 crs_home]# pwd
/u01/app/crs_home
[root@node1 crs_home]# ./root.sh
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        node1
CSS is inactive on these nodes.
        node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@node1 crs_home]# ssh node2 /u01/app/crs_home/root.sh
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        node1
        node2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid interface "255.255.255.0/eth0" entered in an input argument.
[root@node1 crs_home]# oifcfg iflist
eth0  192.168.100.0
eth1  100.100.100.0
[root@node1 crs_home]# crs_stat -t -v
CRS-0202: No resources are registered.
[root@node1 crs_home]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 crs_home]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
As can be seen above, CRS is now running normally, but resources such as the VIPs, ONS, and GSD were not configured successfully (possibly because I had changed the IP addresses earlier). After manually invoking the vipca GUI:
[root@node1 ~]# crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1
ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1
ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1
ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2
ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2
ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2
Add the remaining resources, such as ASM, the database instances, and the listeners, back into the RAC.
Invoke netca to add the listeners and use srvctl to add the other services (a sketch follows); with that, the rebuild of the OCR and voting disk is complete.
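For illustration, the srvctl step might look like the sketch below. The database name racdb and the instance names come from the earlier crs_stat listings, while the ASM instance names and the Oracle home path are assumptions about this environment:

# register the ASM instances (one per node); the ORACLE_HOME path is assumed
srvctl add asm -n node1 -i +ASM1 -o /u01/app/oracle/product/10.2.0/db_1
srvctl add asm -n node2 -i +ASM2 -o /u01/app/oracle/product/10.2.0/db_1
# register the database and its two instances
srvctl add database -d racdb -o /u01/app/oracle/product/10.2.0/db_1
srvctl add instance -d racdb -i racdb1 -n node1
srvctl add instance -d racdb -i racdb2 -n node2
# start everything back up
srvctl start asm -n node1
srvctl start asm -n node2
srvctl start database -d racdb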
One thing to note: after the OCR and voting disk are rebuilt, they are reset to the initial values from when the cluster was first created. If we had made changes since then (an OCR mirror, extra voting disks), we have to re-apply them by hand; the output below shows the rebuilt state, and a sketch of re-applying such changes follows it.
[root@node1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1125736
         Used space (kbytes)      :       3760
         Available space (kbytes) :    1121976
         ID                       : 1334010282
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
[root@node1 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
[root@node1 ~]#
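For example, re-creating an OCR mirror and a second voting disk after the rebuild might look like this (the device paths are hypothetical, chosen only to echo the earlier examples):

# re-create the OCR mirror (path hypothetical)
ocrconfig -replace ocrmirror /dev/raw/raw4
# re-add a second voting disk (path hypothetical); 10g requires -force
crsctl add css votedisk /dev/raw/raw5 -force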