Today is 2014-04-01, and after a busy day I finally have time to write something. A while back I wrote about how to recover the OCR in RAC when a backup exists; today I'll cover how to rebuild the OCR and OLR when there is no backup at all.
The greatest tragedy is a broken database with no backup. It reminds me of a saying that was popular a few years ago: "the greatest tragedy is being alive when the money is gone." In short, backups matter more than anything.
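As an aside, 11g Clusterware keeps automatic OCR backups on a rolling schedule, and you can add manual ones. A minimal sketch of the routine protection I would put in place (run as root from $GRID_HOME/bin; the export path is just an illustration):

```shell
# List the automatic OCR backups Clusterware has retained
./ocrconfig -showbackup

# Take an on-demand physical backup of the OCR
./ocrconfig -manualbackup

# Take a logical export as well, e.g. before risky maintenance
./ocrconfig -export /tmp/ocr_backup.dmp
```

Had any one of these existed in the scenario below, a simple restore would have been possible instead of a full rebuild.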
Scenario:
In 11g, the ASM disk group holding the OCR was accidentally dropped by a user, every OCR backup had also been deleted by mistake, and the OLR was corrupted as well. The tragedy repeats itself.
Recovery procedure (essentially rerunning root.sh to rebuild the OCR; after the rebuild you may need to re-register resources such as listeners and database instances):
1. Clear the cluster configuration on all nodes. 11g provides the rootcrs.pl script for this, located by default in $GRID_HOME/crs/install.
Note that the script must be run as root, otherwise it reports an error:
[grid@rac-one install]$ ./rootcrs.pl
You must be logged in as root to run this script.
Log in as root and rerun this script.
2014-04-01 17:08:12: Not running as authorized user
Insufficient privileges to execute this script.
root or administrative privileges needed to run the script.
Run it with -help to see what the script can do (upgrade, downgrade, reconfigure, and so on):
[root@rac-two install]# ./rootcrs.pl -help
Usage:
  rootcrs.pl [-verbose] [-upgrade [-force] | -patch]
             [-paramfile <parameter-file>]
             [-deconfig [-deinstall] [-keepdg] [-force] [-lastnode]]
             [-downgrade -oldcrshome <old crshome path> -version <old crs version> [-force] [-lastnode]]
             [-unlock [-crshome <path to crs home>] [-nocrsstop]]
             [-init]
Options:
  -verbose     Run this script in verbose mode
  -upgrade     Oracle HA is being upgraded from previous version
  -patch       Oracle HA is being upgraded to a patch version
  -paramfile   Complete path of file specifying HA parameter values
  -lastnode    Force the node this script is executing on to be considered
               as the last node of deconfiguration or downgrade, and
               perform actions associated with deconfiguring or downgrading
               the last node
  -downgrade   Downgrade the clusterware
  -version     For use with downgrade; special handling is required if
               downgrading to 9i. This is the old crs version in the
               format A.B.C.D.E (e.g 11.1.0.6.0).
  -deconfig    Remove Oracle Clusterware to allow it to be uninstalled or reinstalled
  -force       Force the execution of steps in delete or dwongrade that
               cannot be verified to be safe
  -deinstall   Reset the permissions on CRS home during de-configuration
  -keepdg      Keep existing diskgroups during de-configuration
  -unlock      Unlock CRS home
  -crshome     Complete path of crs home. Use with unlock option
  -oldcrshome  For use with downgrade. Complete path of the old crs home
  -nocrsstop   used with unlock option to reset permissions on an inactive grid home
  -init        Reset the permissions of all files and directories under CRS home

If neither -upgrade nor -patch is supplied, a new install is performed

To see the full manpage for this program, execute:
  perldoc rootcrs.pl
[root@rac-two install]#
Clear the cluster configuration on every node as root.
Node 2:
[grid@rac-one install]$ su
Password:
[root@rac-one install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.4.0/255.255.255.0/eth0, type static
VIP exists: /rac-one-vip/192.168.4.113/192.168.4.0/255.255.255.0/eth0, hosting node rac-one
VIP exists: /rac-two-vip/192.168.4.114/192.168.4.0/255.255.255.0/eth0, hosting node rac-two
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac-one'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac-one' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac-one'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac-one'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac-one'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac-one'
CRS-2673: Attempting to stop 'ora.GIDG.dg' on 'rac-one'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac-one'
CRS-2677: Stop of 'ora.DATADG.dg' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.GIDG.dg' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac-one'
CRS-2677: Stop of 'ora.oc4j' on 'rac-one' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac-two'
CRS-2677: Stop of 'ora.asm' on 'rac-one' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac-two' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac-one' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac-one'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac-one'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac-one'
CRS-2673: Attempting to stop 'ora.asm' on 'rac-one'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac-one'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac-one'
CRS-2677: Stop of 'ora.crf' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac-one'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac-one'
CRS-2677: Stop of 'ora.cssd' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac-one'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac-one' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac-one' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac-one'
CRS-2677: Stop of 'ora.gpnpd' on 'rac-one' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac-one' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
[root@rac-one install]#
Node 1. Since my RAC has only two nodes, pass -lastnode when deconfiguring the final one:
[grid@rac-two crs]$ cd install/
[grid@rac-two install]$ su
Password:
[root@rac-two install]# ./rootcrs.pl -deconfig -force -lastnode
Using configuration parameter file: ./crsconfig_params
CRS resources for listeners are still configured
Network exists: 1/192.168.4.0/255.255.255.0/eth0, type static
VIP exists: /rac-two-vip/192.168.4.114/192.168.4.0/255.255.255.0/eth0, hosting node rac-two
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac-two'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'rac-two'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac-two'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac-two'
CRS-2673: Attempting to stop 'ora.GIDG.dg' on 'rac-two'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac-two'
CRS-2677: Stop of 'ora.DATADG.dg' on 'rac-two' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac-two' succeeded
CRS-2677: Stop of 'ora.GIDG.dg' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac-two'
CRS-2677: Stop of 'ora.asm' on 'rac-two' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac-two' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac-two'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac-two'
CRS-2673: Attempting to stop 'ora.asm' on 'rac-two'
CRS-2677: Stop of 'ora.evmd' on 'rac-two' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac-two'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac-two' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac-two'
CRS-2677: Stop of 'ora.cssd' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac-two'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac-two'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac-two'
CRS-2676: Start of 'ora.diskmon' on 'rac-two' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac-two' succeeded
CRS-4611: Successful deletion of voting disk +GIDG.
ASM de-configuration trace file location: /tmp/asmcadc_clean2014-04-01_05-14-52-PM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2014-04-01_05-14-52-PM.log for details.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac-two'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac-two'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac-two'
CRS-2673: Attempting to stop 'ora.asm' on 'rac-two'
CRS-2677: Stop of 'ora.mdnsd' on 'rac-two' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac-two' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac-two'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac-two'
CRS-2677: Stop of 'ora.cssd' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac-two'
CRS-2677: Stop of 'ora.crf' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac-two'
CRS-2677: Stop of 'ora.gipcd' on 'rac-two' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac-two'
CRS-2677: Stop of 'ora.gpnpd' on 'rac-two' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac-two' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
[root@rac-two install]#
2. Rebuild the OCR and OLR by rerunning root.sh, the same script executed during RAC installation. It lives by default in $GRID_HOME.
For example, node 1:
[root@rac-two grid]# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac-two'
CRS-2676: Start of 'ora.mdnsd' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac-two'
CRS-2676: Start of 'ora.gpnpd' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac-two'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac-two'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac-two' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac-two'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac-two'
CRS-2676: Start of 'ora.diskmon' on 'rac-two' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac-two' succeeded

ASM created and started successfully.

Disk Group GIDG created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Successful addition of voting disk 498646ba39604f86bf697c9748a67697.
Successful addition of voting disk 2e1bd16f9e6d4f36bf93550dc8268725.
Successful addition of voting disk 3fbd31a0b2634feabfa1115a504cbbe6.
Successfully replaced voting disk group with +GIDG.
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ---------
 1. ONLINE   498646ba39604f86bf697c9748a67697 (/dev/asm-diske) [GIDG]
 2. ONLINE   2e1bd16f9e6d4f36bf93550dc8268725 (/dev/asm-diskd) [GIDG]
 3. ONLINE   3fbd31a0b2634feabfa1115a504cbbe6 (/dev/asm-diskf) [GIDG]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac-two'
CRS-2676: Start of 'ora.asm' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.GIDG.dg' on 'rac-two'
CRS-2676: Start of 'ora.GIDG.dg' on 'rac-two' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac-two grid]#
Also note: if the ASM disks themselves are damaged, you must repair the disks first; the previous disk group is then re-created automatically.
At this point the OLR and OCR have been successfully re-created.
Node 2:
[root@rac-one grid]# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac-two, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac-one grid]#
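With root.sh finished on both nodes, the rebuilt OCR, OLR, and voting disks can be sanity-checked before moving on. A quick sketch (run as root from $GRID_HOME/bin; these checks are not shown in the transcripts above but are the standard utilities for the job):

```shell
# Verify OCR integrity and where it now lives (+GIDG)
./ocrcheck

# Verify the node-local OLR that root.sh just initialized
./ocrcheck -local

# Confirm the three voting disks are back in the +GIDG disk group
./crsctl query css votedisk
```

Run these on each node; ocrcheck -local in particular is per-node, since every node has its own OLR.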
3. Check the resources:
From here on we use srvctl and crsctl, the two tools used most often with Oracle RAC. There is also oifcfg, which configures network interface information and the like; I won't cover it further here.
[root@rac-one bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac-one bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.GIDG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.asm
               ONLINE  ONLINE       rac-one                  Started
               ONLINE  ONLINE       rac-two                  Started
ora.gsd
               OFFLINE OFFLINE      rac-one
               OFFLINE OFFLINE      rac-two
ora.net1.network
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.ons
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.registry.acfs
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac-one
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac-two
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac-two
ora.cvu
      1        ONLINE  ONLINE       rac-two
ora.oc4j
      1        ONLINE  ONLINE       rac-two
ora.rac-one.vip
      1        ONLINE  ONLINE       rac-one
ora.rac-two.vip
      1        ONLINE  ONLINE       rac-two
ora.scan1.vip
      1        ONLINE  ONLINE       rac-one
ora.scan2.vip
      1        ONLINE  ONLINE       rac-two
ora.scan3.vip
      1        ONLINE  ONLINE       rac-two
[root@rac-one bin]#
[root@rac-one bin]# su - grid
[grid@rac-one ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Tue Apr 1 17:40:53 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
GIDG                           MOUNTED
DATADG                         DISMOUNTED

SQL> alter diskgroup datadg mount;

Diskgroup altered.

SQL>
4. Add the missing resources
As the output shows, the local listeners and the database resource with its instances are missing, so next we re-register them in the OCR.
Register the listener (note: as the grid user):
[grid@rac-one ~]$ srvctl add listener -l listener
PRCN-2061 : Failed to add listener ora.LISTENER.lsnr
PRCN-2065 : Port(s) 1521 are not available on the nodes given
PRCN-2067 : Port 1521 is not available across node(s) "rac-two-vip"
[grid@rac-one ~]$
The errors say that port 1521 is already claimed through ora.rac-two.vip, so stop that VIP temporarily to work around it:
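Before stopping the VIP, it can be worth confirming on the remote node what is actually bound to 1521. A small diagnostic sketch (assuming standard Linux tools are available; this step is not part of the original workaround):

```shell
# On rac-two: show any process currently listening on port 1521
netstat -tlnp | grep :1521

# Or ask the running listener for its endpoints directly
lsnrctl status
```

If a stray listener process shows up here, stopping it may be an alternative to bouncing the VIP.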
[grid@rac-one ~]$ crsctl stop resource ora.rac-two.vip
CRS-2673: Attempting to stop 'ora.rac-two.vip' on 'rac-two'
CRS-2677: Stop of 'ora.rac-two.vip' on 'rac-two' succeeded
[grid@rac-one ~]$ srvctl add listener -l listener
[grid@rac-one ~]$ crsctl start resource ora.rac-two.vip
CRS-2672: Attempting to start 'ora.rac-two.vip' on 'rac-two'
CRS-2676: Start of 'ora.rac-two.vip' on 'rac-two' succeeded
[grid@rac-one ~]$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521
[grid@rac-one ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       rac-one
               OFFLINE OFFLINE      rac-two
ora.GIDG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.LISTENER.lsnr
               OFFLINE OFFLINE      rac-one
               OFFLINE OFFLINE      rac-two
ora.asm
               ONLINE  ONLINE       rac-one                  Started
               ONLINE  ONLINE       rac-two                  Started
ora.gsd
               OFFLINE OFFLINE      rac-one
               OFFLINE OFFLINE      rac-two
ora.net1.network
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.ons
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.registry.acfs
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac-one
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac-two
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac-two
ora.cvu
      1        ONLINE  ONLINE       rac-two
ora.oc4j
      1        ONLINE  ONLINE       rac-two
ora.rac-one.vip
      1        ONLINE  ONLINE       rac-one
ora.rac-two.vip
      1        ONLINE  ONLINE       rac-two
ora.scan1.vip
      1        ONLINE  ONLINE       rac-one
ora.scan2.vip
      1        ONLINE  ONLINE       rac-two
ora.scan3.vip
      1        ONLINE  ONLINE       rac-two
[grid@rac-one ~]$
The listener is now back in the OCR. Next, register the database and its instances; note that this is done as the oracle user:
[grid@rac-one ~]$ su - oracle
Password:
[oracle@rac-one ~]$ srvctl add database -h

Adds a database configuration to the Oracle Clusterware.

Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-c {RACONENODE | RAC | SINGLE} [-e <server_list>] [-i <inst_name>] [-w <timeout>]] [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"] [-j "<acfs_path_list>"]
    -d <db_unique_name>      Unique name for the database
    -o <oracle_home>         ORACLE_HOME path
    -c <type>                Type of database: RAC One Node, RAC, or Single Instance
    -e <server_list>         Candidate server list for RAC One Node database
    -i <inst_name>           Instance name prefix for administrator-managed RAC One Node database (default first 12 characters of <db_unique_name>)
    -w <timeout>             Online relocation timeout in minutes
    -x <node_name>           Node name. -x option is specified for single-instance databases
    -m <domain>              Domain for database. Must be set if database has DB_DOMAIN set.
    -p <spfile>              Server parameter file path
    -r <role>                Role of the database (primary, physical_standby, logical_standby, snapshot_standby)
    -s <start_options>       Startup options for the database. Examples of startup options are OPEN, MOUNT, or 'READ ONLY'.
    -t <stop_options>        Stop options for the database. Examples of shutdown options are NORMAL, TRANSACTIONAL, IMMEDIATE, or ABORT.
    -n <db_name>             Database name (DB_NAME), if different from the unique name given by the -d option
    -y <dbpolicy>            Management policy for the database (AUTOMATIC, MANUAL, or NORESTART)
    -g "<serverpool_list>"   Comma separated list of database server pool names
    -a "<diskgroup_list>"    Comma separated list of disk groups
    -j "<acfs_path_list>"    Comma separated list of ACFS paths where database's dependency will be set
    -h                       Print usage
[oracle@rac-one ~]$ srvctl add database -d Rac -o /u01/app/oracle/product/11.2.0/db_1/ -c RAC
[oracle@rac-one ~]$ srvctl add instance -h

Adds a database instance configuration to the Oracle Clusterware.

Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
    -d <db_unique_name>      Unique name for the database
    -i <inst>                Instance name
    -n <node_name>           Node name
    -f                       Force the add operation even though some resource(s) will be stopped
    -h                       Print usage
[oracle@rac-one ~]$ srvctl add instance -d Rac -i Rac1 -n rac-two
[oracle@rac-one ~]$ srvctl add instance -d Rac -i Rac2 -n rac-one
[oracle@rac-one ~]$ srvctl config database -d Rac
Database unique name: Rac
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/db_1/
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: Rac
Database instances: Rac2,Rac1
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@rac-one ~]$
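With the database and instances registered, their state can be queried from srvctl at any time. A short sketch (the Rac unique name matches the registration above):

```shell
# Report whether the Rac1/Rac2 instances are running, and where
srvctl status database -d Rac
```

Since the management policy is AUTOMATIC, the instances will also be started by Clusterware itself; an explicit srvctl start database -d Rac is only needed if you don't want to wait for a stack restart.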
Restart the CRS stack:
[root@rac-one bin]# ./crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac-one'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac-two'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac-one' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac-two'
CRS-2672: Attempting to start 'ora.cssd' on 'rac-one'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac-one'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac-two'
CRS-2676: Start of 'ora.diskmon' on 'rac-one' succeeded
CRS-2676: Start of 'ora.diskmon' on 'rac-two' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac-two' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac-one' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac-two'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac-one'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac-one'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac-two'
CRS-2676: Start of 'ora.ctssd' on 'rac-one' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac-one'
CRS-2676: Start of 'ora.ctssd' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac-two'
CRS-2676: Start of 'ora.evmd' on 'rac-one' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac-two' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac-one' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac-one'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac-two'
CRS-2676: Start of 'ora.asm' on 'rac-two' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac-two'
CRS-2676: Start of 'ora.crsd' on 'rac-two' succeeded
CRS-2676: Start of 'ora.asm' on 'rac-one' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac-one'
CRS-2676: Start of 'ora.crsd' on 'rac-one' succeeded
Check again (give it a couple of minutes):
[root@rac-one bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.GIDG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac-one
               ONLINE  INTERMEDIATE rac-two                  Not All Endpoints Registered
ora.asm
               ONLINE  ONLINE       rac-one                  Started
               ONLINE  ONLINE       rac-two                  Started
ora.gsd
               OFFLINE OFFLINE      rac-one
               OFFLINE OFFLINE      rac-two
ora.net1.network
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.ons
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.registry.acfs
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac-one
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac-two
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac-two
ora.cvu
      1        ONLINE  ONLINE       rac-two
ora.oc4j
      1        ONLINE  ONLINE       rac-two
ora.rac-one.vip
      1        ONLINE  ONLINE       rac-one
ora.rac-two.vip
      1        ONLINE  ONLINE       rac-two
ora.rac.db
      1        ONLINE  ONLINE       rac-two                  Open
      2        ONLINE  ONLINE       rac-one                  Open
ora.scan1.vip
      1        ONLINE  ONLINE       rac-one
ora.scan2.vip
      1        ONLINE  ONLINE       rac-two
ora.scan3.vip
      1        ONLINE  ONLINE       rac-two
Why can't the listener on rac-two register all of its endpoints?
Check the listener processes on that node:
[grid@rac-two admin]$ ps -ef | grep LISTENER
grid      6120     1  0 17:02 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid      8511     1  0 18:28 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
grid     11010     1  0 18:42 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid     11246  9191  0 18:44 pts/1    00:00:00 grep LISTENER
It turns out two LISTENER processes were started. Stop them all and restart the resource:
[grid@rac-two admin]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1100(asmadmin),1200(dba),1300(asmdba),1301(asmoper)
[grid@rac-two admin]$ lsnrctl stop

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 01-APR-2014 18:45:11
Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
The command completed successfully
[grid@rac-two admin]$ ps -ef | grep LISTENER
grid      6120     1  0 17:02 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid      8511     1  0 18:28 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
grid     11452  9191  0 18:45 pts/1    00:00:00 grep LISTENER
[grid@rac-two admin]$ exit
logout
[root@rac-two bin]# kill -9 6120
[root@rac-two bin]# ps -ef | grep LISTENER
grid      8511     1  0 18:28 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
root     11518 11280  0 18:46 pts/1    00:00:00 grep LISTENER
[root@rac-two bin]# su - grid
[grid@rac-two ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.GIDG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac-one
               OFFLINE OFFLINE      rac-two
ora.asm
               ONLINE  ONLINE       rac-one                  Started
               ONLINE  ONLINE       rac-two                  Started
ora.gsd
               OFFLINE OFFLINE      rac-one
               OFFLINE OFFLINE      rac-two
ora.net1.network
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.ons
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.registry.acfs
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac-two
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac-one
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac-one
ora.cvu
      1        ONLINE  ONLINE       rac-one
ora.oc4j
      1        ONLINE  ONLINE       rac-one
ora.rac-one.vip
      1        ONLINE  ONLINE       rac-one
ora.rac-two.vip
      1        ONLINE  ONLINE       rac-two
ora.rac.db
      1        ONLINE  ONLINE       rac-two                  Open
      2        ONLINE  ONLINE       rac-one                  Open
ora.scan1.vip
      1        ONLINE  ONLINE       rac-two
ora.scan2.vip
      1        ONLINE  ONLINE       rac-one
ora.scan3.vip
      1        ONLINE  ONLINE       rac-one
[grid@rac-two ~]$ crsctl start resource ora.LISTENER.lsnr
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac-two'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac-two' succeeded
[grid@rac-two ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.GIDG.dg
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.asm
               ONLINE  ONLINE       rac-one                  Started
               ONLINE  ONLINE       rac-two                  Started
ora.gsd
               OFFLINE OFFLINE      rac-one
               OFFLINE OFFLINE      rac-two
ora.net1.network
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.ons
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
ora.registry.acfs
               ONLINE  ONLINE       rac-one
               ONLINE  ONLINE       rac-two
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac-two
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac-one
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac-one
ora.cvu
      1        ONLINE  ONLINE       rac-one
ora.oc4j
      1        ONLINE  ONLINE       rac-one
ora.rac-one.vip
      1        ONLINE  ONLINE       rac-one
ora.rac-two.vip
      1        ONLINE  ONLINE       rac-two
ora.rac.db
      1        ONLINE  ONLINE       rac-two                  Open
      2        ONLINE  ONLINE       rac-one                  Open
ora.scan1.vip
      1        ONLINE  ONLINE       rac-two
ora.scan2.vip
      1        ONLINE  ONLINE       rac-one
ora.scan3.vip
      1        ONLINE  ONLINE       rac-one
[grid@rac-two ~]$
With that, every problem is fully resolved.