Error symptoms:
While running root.sh on one of the nodes, the following errors appeared:
ORA-15018 diskgroup cannot be created
ORA-15017 diskgroup OCR_VOTING_DG cannot be mounted
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space
Case resolution:
Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.1 and later
Information in this document applies to any platform.
Symptoms
On a multi-node cluster, when installing 11gR2 Grid Infrastructure for the first time, root.sh fails on the first node.
<GRID_HOME>/cfgtoollogs/crsconfig/rootcrs_<node1>.log shows:
2010-07-24 23:29:36: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:53:Configuration failed, see logfile for details
$ORACLE_BASE/cfgtoollogs/asmca/asmca-<date>.log shows the errors:
ORA-15017 diskgroup OCR_VOTING_DG cannot be mounted
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space
This is a new installation; the disks used by ASM are not shared with any other cluster system.
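As an additional sanity check (not part of the original note), the ASM disk headers can be inspected to confirm that the disks are not already members of another cluster's diskgroup. A minimal sketch using kfed, assuming ASMLib-labelled disks under /dev/oracleasm/disks/ as suggested by the ORCL: disk names above, and the Grid home /opt/grid from the logs:
# Run as the Grid Infrastructure owner, once per candidate disk
/opt/grid/bin/kfed read /dev/oracleasm/disks/VOTING_DISK1 | grep -E 'kfdhdb.hdrsts|kfdhdb.grpname'
# kfdhdb.hdrsts = KFDHDR_MEMBER together with an unexpected kfdhdb.grpname
# would mean the disk already belongs to a diskgroup on some other cluster.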
Changes
New installation.
Cause
The problem is caused by running root.sh simultaneously on the first node and the remaining node(s), rather than completing root.sh on the first node before running it on the remaining node(s).
On node 2, <GRID_HOME>/cfgtoollogs/crsconfig/rootcrs_<node2>.log shows entries with almost the same timestamps:
2010-07-24 23:29:39: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:55:Configuration failed, see logfile for details
The content is nearly identical; the only difference is that it started 3 seconds later than on the first node. This indicates root.sh was running simultaneously on both nodes.
root.sh on the second node also created a +ASM1 instance (because it, too, behaved as if it were the first node to run root.sh) and mounted the same diskgroup, which led to the +ASM1 instance on node 1 reporting:
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space
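Before deconfiguring, the duplicate +ASM1 instance can be confirmed directly on each node. A minimal sketch (not from the original note), run as the Grid Infrastructure owner oracle, assuming the local ASM SID is +ASM1 and the Grid home is /opt/grid:
# Check whether an ASM instance is already running on this node
ps -ef | grep asm_pmon | grep -v grep
# If +ASM1 is up, list the diskgroups it has mounted
export ORACLE_HOME=/opt/grid
export ORACLE_SID=+ASM1
/opt/grid/bin/sqlplus -S "/ as sysasm" <<'EOF'
select name, state from v$asm_diskgroup;
exit
EOF
If more than one node shows a +ASM1 instance, and one of them already has OCR_VOTING_DG mounted, the simultaneous-root.sh scenario described above is confirmed.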
Solution
1. Deconfigure the Grid Infrastructure without removing the binaries; refer to Document 942166.1 How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation. For the two-node case:
As root, run "$GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose" on node 1,
As root, run "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode" on node 2.
2. Rerun root.sh on the first node, and proceed with the remaining node(s) only after root.sh completes on the first node; see the sequence sketched below.
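Putting the two steps together for the two-node case described in this note; this is only a sketch, reusing the Grid home /opt/grid from the log excerpts above (adjust paths and node names to your environment):
Step 1 - clean up the failed configuration:
  node 1 (as root):  /opt/grid/crs/install/rootcrs.pl -deconfig -force -verbose
  node 2 (as root):  /opt/grid/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode
Step 2 - rerun root.sh strictly in sequence:
  node 1 (as root):  /opt/grid/root.sh    (wait until it completes successfully)
  node 2 (as root):  /opt/grid/root.sh    (start only after node 1 has finished)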
References
NOTE:1050908.1 - Troubleshoot Grid Infrastructure Startup Issues
NOTE:942166.1 - How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation