Adding a New Node to an Oracle 11g RAC Cluster (Test)

[Repost] https://blog.csdn.net/shiyu1157758655/article/details/60877076

Preparation:

Operating system setup
The OS version must be identical to the existing nodes; check kernel parameters, system memory, CPU, file system sizes, swap space, and so on.
Create the required users and groups
The UIDs and GIDs of the users and groups must match the other nodes, and the environment variables for these users must be set accordingly.
Network configuration
Plan the network; the public and private interface names must be the same as on the other nodes.
Shared storage configuration
The shared storage must be accessible from the new node, and the software owner must have read/write permission on it.
Create the required directories
These directories hold the GI and Oracle database software; make sure the owning user and group have the proper permissions on them.
Configure RAC user equivalence
Time synchronization (CTSS)
Configure system parameters, create users, configure storage, and so on.
Configure passwordless equivalent access for the grid and oracle users between all nodes.
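A quick way to verify the UID/GID requirement is to compare `id` output across the nodes. A minimal sketch, assuming the user and node names from this article's environment:

```shell
#!/bin/sh
# Print a user's numeric UID, or "missing" if the user does not exist.
check_uid() {
    id -u "$1" 2>/dev/null || echo "missing"
}

# Run on every node; the printed values must be identical cluster-wide.
check_uid grid      # this article's cluster uses 1100
check_uid oracle
```

The same comparison applies to `id -g` for the primary group; any mismatch must be fixed before cluvfy is run.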

Test environment:
Existing nodes: rac1 and rac2. The steps below add a new node, rac3.

First, install the cvuqdisk package on rac3:
[root@rac3 src]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
 package cvuqdisk-1.0.9-1.x86_64 is already installed
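On an already-running cluster, the rpm can usually be taken straight from an existing node's grid home rather than the install media. A sketch under that assumption (the paths follow this article's environment; `CVUQDISK_GRP` names the group that owns the package):

```shell
#!/bin/sh
# Sketch: fetch cvuqdisk from an existing node's grid home and install it.
# Run as root on rac3; node name, grid home and group are this article's.
install_cvuqdisk() {
    scp rac1:/u01/app/11.2.0/grid_1/cv/rpm/cvuqdisk-1.0.9-1.rpm /usr/local/src/ &&
    CVUQDISK_GRP=oinstall rpm -ivh /usr/local/src/cvuqdisk-1.0.9-1.rpm
}
```

Without cvuqdisk present on the new node, the shared-disk checks in the cluvfy run below report failures.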
Steps:
1. Install the GI (clusterware) software on node rac3
① Check whether rac3 meets the RAC installation requirements (run from an existing node as the grid and oracle users)
[root@rac1 ~]# su - grid
+ASM1:/home/grid@rac1>cluvfy stage -pre nodeadd -n rac3 -fixup -verbose

Performing pre-checks for node addition

Checking node reachability...

PRVF-6006 : Unable to reach any of the nodes

PRKN-1034 : Failed to retrieve IP address of host "rac3"

Pre-check for node addition was unsuccessful on all the nodes.

This error occurs because SSH mutual trust (user equivalence) has not been configured between the nodes. For the configuration steps, see: http://blog.csdn.net/shiyu1157758655/article/details/56838603
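The mutual-trust setup amounts to exchanging SSH keys for the grid (and oracle) user among all three nodes. A minimal sketch, assuming `ssh-copy-id` is available (the 11.2 grid install media also ships a `sshsetup/sshUserSetup.sh` script that automates this):

```shell
#!/bin/sh
# Sketch: passwordless SSH for the grid user across all nodes.
# Run as grid on each of rac1, rac2 and rac3 (node names are
# this article's environment).
setup_equivalence() {
    [ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
    for node in rac1 rac2 rac3; do
        ssh-copy-id "grid@$node"    # append our public key on every node
    done
}
```

Once it has been run on every node, a passwordless `ssh rac3 date` from rac1 confirms the equivalence that cluvfy checks.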

With mutual trust in place, run the check again:

+ASM1:/home/grid@rac1>cluvfy stage -pre nodeadd -n rac3 -fixup -verbose
Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node                      Reachable?             
  ------------------------------------  ------------------------
  rac3                                  yes                    
Result: Node reachability check passed from node "rac1"

Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status                 
  ------------------------------------  ------------------------
  rac3                                  passed                 
Result: User equivalence check passed for user "grid"
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac1"
The Oracle Clusterware is healthy on node "rac2"
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
"/u01/app/11.2.0/grid_1" is shared
Result: Shared resources check for node addition passed

Checking node connectivity...
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  rac1                                  passed                 
  rac2                                  passed                 
  rac3                                  passed                 
Verification of the hosts config file successful

Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.180.2   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:5F:E5 1500 
 eth0   192.168.180.4   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:5F:E5 1500 
 eth0   192.168.180.6   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:5F:E5 1500 
 eth1   10.10.10.2      10.10.10.0      0.0.0.0         192.168.180.1   00:50:56:8E:22:19 1500 
 eth1   169.254.145.157 169.254.0.0     0.0.0.0         192.168.180.1   00:50:56:8E:22:19 1500

Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.180.3   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:3A:88 1500 
 eth0   192.168.180.5   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:3A:88 1500 
 eth1   10.10.10.3      10.10.10.0      0.0.0.0         192.168.180.1   00:50:56:8E:0C:E6 1500 
 eth1   169.254.107.75  169.254.0.0     0.0.0.0         192.168.180.1   00:50:56:8E:0C:E6 1500

Interface information for node "rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.180.10  192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:6B:4B 1500 
 eth1   10.10.10.10     10.10.10.0      0.0.0.0         192.168.180.1   00:50:56:8E:2C:7F 1500

Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac1[192.168.180.2]             rac1[192.168.180.4]             yes            
  rac1[192.168.180.2]             rac1[192.168.180.6]             yes            
  rac1[192.168.180.2]             rac2[192.168.180.3]             yes            
  rac1[192.168.180.2]             rac2[192.168.180.5]             yes            
  rac1[192.168.180.2]             rac3[192.168.180.10]            yes            
  rac1[192.168.180.4]             rac1[192.168.180.6]             yes            
  rac1[192.168.180.4]             rac2[192.168.180.3]             yes            
  rac1[192.168.180.4]             rac2[192.168.180.5]             yes            
  rac1[192.168.180.4]             rac3[192.168.180.10]            yes            
  rac1[192.168.180.6]             rac2[192.168.180.3]             yes            
  rac1[192.168.180.6]             rac2[192.168.180.5]             yes            
  rac1[192.168.180.6]             rac3[192.168.180.10]            yes            
  rac2[192.168.180.3]             rac2[192.168.180.5]             yes            
  rac2[192.168.180.3]             rac3[192.168.180.10]            yes            
  rac2[192.168.180.5]             rac3[192.168.180.10]            yes            
Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "192.168.180.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.180.2              rac1:192.168.180.4              passed         
  rac1:192.168.180.2              rac1:192.168.180.6              passed         
  rac1:192.168.180.2              rac2:192.168.180.3              passed         
  rac1:192.168.180.2              rac2:192.168.180.5              passed         
  rac1:192.168.180.2              rac3:192.168.180.10             passed         
Result: TCP connectivity check passed for subnet "192.168.180.0"

Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac1[10.10.10.2]                rac2[10.10.10.3]                yes            
  rac1[10.10.10.2]                rac3[10.10.10.10]               yes            
  rac2[10.10.10.3]                rac3[10.10.10.10]               yes            
Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "10.10.10.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac1:10.10.10.2                 rac2:10.10.10.3                 passed         
  rac1:10.10.10.2                 rac3:10.10.10.10                passed         
Result: TCP connectivity check passed for subnet "10.10.10.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.180.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check: Total memory
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          1.8331GB (1922160.0KB)    1.5GB (1572864.0KB)       passed   
  rac3          1.8331GB (1922160.0KB)    1.5GB (1572864.0KB)       passed   
Result: Total memory check passed
Check: Available memory
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          679.8359MB (696152.0KB)   50MB (51200.0KB)          passed   
  rac3          1.623GB (1701816.0KB)     50MB (51200.0KB)          passed   
Result: Available memory check passed
Check: Swap space
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          3.9375GB (4128764.0KB)    2.7497GB (2883240.0KB)    passed   
  rac3          3.9375GB (4128764.0KB)    2.7497GB (2883240.0KB)    passed   
Result: Swap space check passed
Check: Free disk space for "rac1:/u01/app/11.2.0/grid_1"
  Path              Node Name     Mount point   Available     Required      Status     
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/11.2.0/grid_1  rac1          /u01          30.1475GB     5.5GB         passed     
Result: Free disk space check passed for "rac1:/u01/app/11.2.0/grid_1"
Check: Free disk space for "rac3:/u01/app/11.2.0/grid_1"
  Path              Node Name     Mount point   Available     Required      Status     
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/11.2.0/grid_1  rac3          /u01          39.0029GB     5.5GB         passed     
Result: Free disk space check passed for "rac3:/u01/app/11.2.0/grid_1"
Check: Free disk space for "rac1:/var/tmp"
  Path              Node Name     Mount point   Available     Required      Status     
  ----------------  ------------  ------------  ------------  ------------  ------------
  /var/tmp          rac1          /             30GB          1GB           passed     
Result: Free disk space check passed for "rac1:/var/tmp"
Check: Free disk space for "rac3:/var/tmp"
  Path              Node Name     Mount point   Available     Required      Status     
  ----------------  ------------  ------------  ------------  ------------  ------------
  /var/tmp          rac3          /             28.9229GB     1GB           passed     
Result: Free disk space check passed for "rac3:/var/tmp"
Check: User existence for "grid"
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  rac1          passed                    exists(1100)           
  rac3          passed                    exists(1100)           
Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "grid"
Check: Run level
  Node Name     run level                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          5                         3,5                       passed   
  rac3          5                         3,5                       passed   
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  rac1              hard          65536         65536         passed         
  rac3              hard          65536         65536         passed         
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  rac1              soft          1024          1024          passed         
  rac3              soft          1024          1024          passed         
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  rac1              hard          16384         16384         passed         
  rac3              hard          16384         16384         passed         
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  rac1              soft          2047          2047          passed         
  rac3              soft          2047          2047          passed         
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          x86_64                    x86_64                    passed   
  rac3          x86_64                    x86_64                    passed   
Result: System architecture check passed
Check: Kernel version
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          2.6.32-642.el6.x86_64     2.6.9                     passed   
  rac3          2.6.32-642.el6.x86_64     2.6.9                     passed   
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              250           250           250           passed         
  rac3              250           250           250           passed         
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              32000         32000         32000         passed         
  rac3              32000         32000         32000         passed         
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              100           100           100           passed         
  rac3              100           100           100           passed         
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              128           128           128           passed         
  rac3              128           128           128           passed         
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              4294967295    4294967295    984145920     passed         
  rac3              4294967295    4294967295    984145920     passed         
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              4096          4096          4096          passed         
  rac3              4096          4096          4096          passed         
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              2097152       2097152       2097152       passed         
  rac3              2097152       2097152       2097152       passed         
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              6815744       6815744       6815744       passed         
  rac3              6815744       6815744       6815744       passed         
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed         
  rac3              between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed         
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              262144        262144        262144        passed         
  rac3              262144        262144        262144        passed         
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              4194304       4194304       4194304       passed         
  rac3              4194304       4194304       4194304       passed         
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              262144        262144        262144        passed         
  rac3              262144        262144        262144        passed         
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1048576       1048576       1048576       passed         
  rac3              1048576       1048576       1048576       passed         
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac1              1048576       1048576       1048576       passed         
  rac3              1048576       1048576       1048576       passed         
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          make-3.81-23.el6          make-3.80                 passed   
  rac3          make-3.81-23.el6          make-3.80                 passed   
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          binutils-2.20.51.0.2-5.44.el6  binutils-2.15.92.0.2      passed   
  rac3          binutils-2.20.51.0.2-5.44.el6  binutils-2.15.92.0.2      passed   
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          gcc(x86_64)-4.4.7-17.el6  gcc(x86_64)-3.4.6         passed   
  rac3          gcc(x86_64)-4.4.7-17.el6  gcc(x86_64)-3.4.6         passed   
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.105    passed   
  rac3          libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.105    passed   
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          glibc(x86_64)-2.12-1.192.el6  glibc(x86_64)-2.3.4-2.41  passed   
  rac3          glibc(x86_64)-2.12-1.192.el6  glibc(x86_64)-2.3.4-2.41  passed   
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed   
  rac3          compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed   
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "elfutils-libelf(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          elfutils-libelf(x86_64)-0.164-2.el6  elfutils-libelf(x86_64)-0.97  passed   
  rac3          elfutils-libelf(x86_64)-0.164-2.el6  elfutils-libelf(x86_64)-0.97  passed   
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for "elfutils-libelf-devel"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          elfutils-libelf-devel-0.164-2.el6  elfutils-libelf-devel-0.97  passed   
  rac3          elfutils-libelf-devel-0.164-2.el6  elfutils-libelf-devel-0.97  passed   
Result: Package existence check passed for "elfutils-libelf-devel"
Check: Package existence for "glibc-common"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          glibc-common-2.12-1.192.el6  glibc-common-2.3.4        passed   
  rac3          glibc-common-2.12-1.192.el6  glibc-common-2.3.4        passed   
Result: Package existence check passed for "glibc-common"
Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          glibc-devel(x86_64)-2.12-1.192.el6  glibc-devel(x86_64)-2.3.4  passed   
  rac3          glibc-devel(x86_64)-2.12-1.192.el6  glibc-devel(x86_64)-2.3.4  passed   
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          glibc-headers-2.12-1.192.el6  glibc-headers-2.3.4       passed   
  rac3          glibc-headers-2.12-1.192.el6  glibc-headers-2.3.4       passed   
Result: Package existence check passed for "glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          gcc-c++(x86_64)-4.4.7-17.el6  gcc-c++(x86_64)-3.4.6     passed   
  rac3          gcc-c++(x86_64)-4.4.7-17.el6  gcc-c++(x86_64)-3.4.6     passed   
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.105  passed   
  rac3          libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.105  passed   
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          libgcc(x86_64)-4.4.7-17.el6  libgcc(x86_64)-3.4.6      passed   
  rac3          libgcc(x86_64)-4.4.7-17.el6  libgcc(x86_64)-3.4.6      passed   
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          libstdc++(x86_64)-4.4.7-17.el6  libstdc++(x86_64)-3.4.6   passed   
  rac3          libstdc++(x86_64)-4.4.7-17.el6  libstdc++(x86_64)-3.4.6   passed   
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          libstdc++-devel(x86_64)-4.4.7-17.el6  libstdc++-devel(x86_64)-3.4.6  passed   
  rac3          libstdc++-devel(x86_64)-4.4.7-17.el6  libstdc++-devel(x86_64)-3.4.6  passed   
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          sysstat-9.0.4-31.el6      sysstat-5.0.5             passed   
  rac3          sysstat-9.0.4-31.el6      sysstat-5.0.5             passed   
Result: Package existence check passed for "sysstat"
Check: Package existence for "pdksh"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          pdksh-5.2.14-37.el5_8.1   pdksh-5.2.14              passed   
  rac3          pdksh-5.2.14-37.el5_8.1   pdksh-5.2.14              passed   
Result: Package existence check passed for "pdksh"
Check: Package existence for "expat(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  rac1          expat(x86_64)-2.0.1-11.el6_2  expat(x86_64)-1.95.7      passed   
  rac3          expat(x86_64)-2.0.1-11.el6_2  expat(x86_64)-1.95.7      passed   
Result: Package existence check passed for "expat(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
  Node Name                             Status                 
  ------------------------------------  ------------------------
  rac1                                  passed                 
  rac3                                  passed                 
Check for consistency of root user's primary group passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  rac1          passed                    does not exist         
  rac3          passed                    does not exist         
Result: User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
  Node Name                             Status                 
  ------------------------------------  ------------------------
  rac1                                  failed                 
  rac3                                  failed                 
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1,rac3
File "/etc/resolv.conf" is not consistent across nodes

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was unsuccessful on all the nodes.
Note: if DNS is not used for name resolution, the resolv.conf inconsistency (and the DNS response-time failure) can be ignored, but never overlook any other failed checks.
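When the resolv.conf/DNS failure is the only outstanding item, a commonly documented 11gR2 workaround is to let addNode.sh skip its embedded pre-check via the `IGNORE_PREADDNODE_CHECKS` environment variable; use it only after confirming that every other cluvfy check passed:

```shell
# Set in the grid user's session on the existing node before addNode.sh,
# only once cluvfy shows no failures other than the ignorable
# resolv.conf / DNS response-time check.
export IGNORE_PREADDNODE_CHECKS=Y
```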
② Install the GI software on the new node
+ASM1:/home/grid@rac1>/u01/app/11.2.0/grid_1/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid_1" is shared
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.180.0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "10.10.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.180.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac1:/u01/app/11.2.0/grid_1"
Free disk space check passed for "rac3:/u01/app/11.2.0/grid_1"
Free disk space check passed for "rac1:/var/tmp"
Free disk space check passed for "rac3:/var/tmp"
Check for multiple users with UID value 1100 passed 
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "pdksh"
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1,rac3

File "/etc/resolv.conf" is not consistent across nodes

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was unsuccessful on all the nodes. 
Note: as before, if DNS is not being used for name resolution, the resolv.conf inconsistency can be ignored.
On rac1, change to the /u01/app/11.2.0/grid_1/oui/bin directory and run the following to skip the pre-add checks:

+ASM1:/u01/app/11.2.0/grid_1/oui/bin@rac1>export IGNORE_PREADDNODE_CHECKS=Y

Then run addNode.sh again:

+ASM1:/u01/app/11.2.0/grid_1/oui/bin@rac1>./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3476 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.

-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid_1
   New Nodes
Space Requirements
   New Nodes
      rac3
         /u01: Required 4.78GB : Available 36.32GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11g 11.2.0.4.0 
      Java Development Kit 1.5.0.51.10 
      Installer SDK Component 11.2.0.4.0 
      Oracle One-Off Patch Installer 11.2.0.3.4 
      Oracle Universal Installer 11.2.0.4.0 
      Oracle RAC Required Support Files-HAS 11.2.0.4.0 
      Oracle USM Deconfiguration 11.2.0.4.0 
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0 
      Enterprise Manager Common Core Files 10.2.0.4.5 
      Oracle DBCA Deconfiguration 11.2.0.4.0 
      Oracle RAC Deconfiguration 11.2.0.4.0 
      Oracle Quality of Service Management (Server) 11.2.0.4.0 
      Installation Plugin Files 11.2.0.4.0 
      Universal Storage Manager Files 11.2.0.4.0 
      Oracle Text Required Support Files 11.2.0.4.0 
      Automatic Storage Management Assistant 11.2.0.4.0 
      Oracle Database 11g Multimedia Files 11.2.0.4.0 
      Oracle Multimedia Java Advanced Imaging 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      Oracle Multimedia Locator RDBMS Files 11.2.0.4.0 
      Oracle Core Required Support Files 11.2.0.4.0 
      Bali Share 1.1.18.0.0 
      Oracle Database Deconfiguration 11.2.0.4.0 
      Oracle Quality of Service Management (Client) 11.2.0.4.0 
      Expat libraries 2.0.1.0.1 
      Oracle Containers for Java 11.2.0.4.0 
      Perl Modules 5.10.0.0.1 
      Secure Socket Layer 11.2.0.4.0 
      Oracle JDBC/OCI Instant Client 11.2.0.4.0 
      Oracle Multimedia Client Option 11.2.0.4.0 
      LDAP Required Support Files 11.2.0.4.0 
      Character Set Migration Utility 11.2.0.4.0 
      Perl Interpreter 5.10.0.0.2 
      PL/SQL Embedded Gateway 11.2.0.4.0 
      OLAP SQL Scripts 11.2.0.4.0 
      Database SQL Scripts 11.2.0.4.0 
      Oracle Extended Windowing Toolkit 3.4.47.0.0 
      SSL Required Support Files for InstantClient 11.2.0.4.0 
      SQL*Plus Files for Instant Client 11.2.0.4.0 
      Oracle Net Required Support Files 11.2.0.4.0 
      Oracle Database User Interface 2.2.13.0.0 
      RDBMS Required Support Files for Instant Client 11.2.0.4.0 
      RDBMS Required Support Files Runtime 11.2.0.4.0 
      XML Parser for Java 11.2.0.4.0 
      Oracle Security Developer Tools 11.2.0.4.0 
      Oracle Wallet Manager 11.2.0.4.0 
      Enterprise Manager plugin Common Files 11.2.0.4.0 
      Platform Required Support Files 11.2.0.4.0 
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0 
      RDBMS Required Support Files 11.2.0.4.0 
      Oracle Ice Browser 5.2.3.6.0 
      Oracle Help For Java 4.2.9.0.0 
      Enterprise Manager Common Files 10.2.0.4.5 
      Deinstallation Tool 11.2.0.4.0 
      Oracle Java Client 11.2.0.4.0 
      Cluster Verification Utility Files 11.2.0.4.0 
      Oracle Notification Service (eONS) 11.2.0.4.0 
      Oracle LDAP administration 11.2.0.4.0 
      Cluster Verification Utility Common Files 11.2.0.4.0 
      Oracle Clusterware RDBMS Files 11.2.0.4.0 
      Oracle Locale Builder 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      Buildtools Common Files 11.2.0.4.0 
      HAS Common Files 11.2.0.4.0 
      SQL*Plus Required Support Files 11.2.0.4.0 
      XDK Required Support Files 11.2.0.4.0 
      Agent Required Support Files 10.2.0.4.5 
      Parser Generator Required Support Files 11.2.0.4.0 
      Precompiler Required Support Files 11.2.0.4.0 
      Installation Common Files 11.2.0.4.0 
      Required Support Files 11.2.0.4.0 
      Oracle JDBC/THIN Interfaces 11.2.0.4.0 
      Oracle Multimedia Locator 11.2.0.4.0 
      Oracle Multimedia 11.2.0.4.0 
      Assistant Common Files 11.2.0.4.0 
      Oracle Net 11.2.0.4.0 
      PL/SQL 11.2.0.4.0 
      HAS Files for DB 11.2.0.4.0 
      Oracle Recovery Manager 11.2.0.4.0 
      Oracle Database Utilities 11.2.0.4.0 
      Oracle Notification Service 11.2.0.3.0 
      SQL*Plus 11.2.0.4.0 
      Oracle Netca Client 11.2.0.4.0 
      Oracle Advanced Security 11.2.0.4.0 
      Oracle JVM 11.2.0.4.0 
      Oracle Internet Directory Client 11.2.0.4.0 
      Oracle Net Listener 11.2.0.4.0 
      Cluster Ready Services Files 11.2.0.4.0 
      Oracle Database 11g 11.2.0.4.0 
-----------------------------------------------------------------------------

Instantiating scripts for add node (Saturday, March 4, 2017 2:02:33 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Saturday, March 4, 2017 2:02:39 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Saturday, March 4, 2017 2:05:40 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/11.2.0/grid_1/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /u01/app/11.2.0/grid_1 was successful.
Run the root script on the newly added node:

[root@rac3 ~]# /u01/app/11.2.0/grid_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Start of resource "ora.ctssd" failed
CRS-2672: Attempting to start 'ora.ctssd' on 'rac3'
CRS-2674: Start of 'ora.ctssd' on 'rac3' failed
CRS-4000: Command Start failed, or completed with errors.
Failed to start Cluster Time Synchronisation Service - CTSS at /u01/app/11.2.0/grid_1/crs/install/crsconfig_lib.pm line 1288.
/u01/app/11.2.0/grid_1/perl/bin/perl -I/u01/app/11.2.0/grid_1/perl/lib -I/u01/app/11.2.0/grid_1/crs/install /u01/app/11.2.0/grid_1/crs/install/rootcrs.pl execution failed
Note: this error occurs because the clock on rac3 is out of sync with rac1 and rac2. Synchronizing rac3's time with the other nodes resolves it; the fix is as follows:

[root@rac3 src]# date -s '2017-03-04 14:48:34'
Sat Mar  4 14:48:34 CST 2017
[root@rac3 src]# hwclock -w
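The manual fix above can be wrapped so the reference time is always read from a surviving node rather than typed by hand. A hypothetical sketch (the `sync_clock` name is mine); it assumes passwordless root ssh between the nodes, and in production the clocks should be kept aligned by NTP or CTSS rather than one-off resets:

```shell
#!/bin/sh
# Hypothetical helper: copy the clock of a reference node to a target node,
# then write it to the hardware clock, mirroring the manual date/hwclock fix.
# Assumes passwordless root ssh between the nodes; not a substitute for NTP.
sync_clock() {
  ref_node=$1 target=$2
  now=$(ssh "$ref_node" date '+%Y-%m-%d %H:%M:%S') || return 1
  ssh "$target" "date -s '$now' && hwclock -w"
}

# Usage (run as root):
#   sync_clock rac1 rac3
```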
First, deconfigure the record of the previous failed run:

[root@rac3 ~]# /u01/app/11.2.0/grid_1/crs/install/roothas.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle Restart stack

Then run root.sh again:
[root@rac3 ~]# /u01/app/11.2.0/grid_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid_1/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
③ Verify that the clusterware was added successfully

+ASM1:/home/grid@rac1>cluvfy stage -post nodeadd -n rac3 -verbose

Performing post-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  rac3                                  yes                     
Result: Node reachability check passed from node "rac1"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  rac3                                  passed                  
Result: User equivalence check passed for user "grid"

Checking cluster integrity...

Node Name                           
  ------------------------------------
  rac1                                
  rac2                                
  rac3

Cluster integrity check passed

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac2"
The Oracle Clusterware is healthy on node "rac1"
The Oracle Clusterware is healthy on node "rac3"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid_1" is not shared
Result: Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  rac2                                  passed                  
  rac1                                  passed                  
  rac3                                  passed

Verification of the hosts config file successful

Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.180.3   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:3A:88 1500  
 eth0   192.168.180.5   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:3A:88 1500  
 eth1   10.10.10.3      10.10.10.0      0.0.0.0         192.168.180.1   00:50:56:8E:0C:E6 1500  
 eth1   169.254.107.75  169.254.0.0     0.0.0.0         192.168.180.1   00:50:56:8E:0C:E6 1500

Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.180.2   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:5F:E5 1500  
 eth0   192.168.180.4   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:5F:E5 1500  
 eth0   192.168.180.6   192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:5F:E5 1500  
 eth1   10.10.10.2      10.10.10.0      0.0.0.0         192.168.180.1   00:50:56:8E:22:19 1500  
 eth1   169.254.145.157 169.254.0.0     0.0.0.0         192.168.180.1   00:50:56:8E:22:19 1500

Interface information for node "rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.180.10  192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:6B:4B 1500  
 eth0   192.168.180.11  192.168.180.0   0.0.0.0         192.168.180.1   00:50:56:8E:6B:4B 1500  
 eth1   10.10.10.10     10.10.10.0      0.0.0.0         192.168.180.1   00:50:56:8E:2C:7F 1500  
 eth1   169.254.166.147 169.254.0.0     0.0.0.0         192.168.180.1   00:50:56:8E:2C:7F 1500

Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  rac2[192.168.180.3]             rac2[192.168.180.5]             yes             
  rac2[192.168.180.3]             rac1[192.168.180.2]             yes             
  rac2[192.168.180.3]             rac1[192.168.180.4]             yes             
  rac2[192.168.180.3]             rac1[192.168.180.6]             yes             
  rac2[192.168.180.3]             rac3[192.168.180.10]            yes             
  rac2[192.168.180.3]             rac3[192.168.180.11]            yes             
  rac2[192.168.180.5]             rac1[192.168.180.2]             yes             
  rac2[192.168.180.5]             rac1[192.168.180.4]             yes             
  rac2[192.168.180.5]             rac1[192.168.180.6]             yes             
  rac2[192.168.180.5]             rac3[192.168.180.10]            yes             
  rac2[192.168.180.5]             rac3[192.168.180.11]            yes             
  rac1[192.168.180.2]             rac1[192.168.180.4]             yes             
  rac1[192.168.180.2]             rac1[192.168.180.6]             yes             
  rac1[192.168.180.2]             rac3[192.168.180.10]            yes             
  rac1[192.168.180.2]             rac3[192.168.180.11]            yes             
  rac1[192.168.180.4]             rac1[192.168.180.6]             yes             
  rac1[192.168.180.4]             rac3[192.168.180.10]            yes             
  rac1[192.168.180.4]             rac3[192.168.180.11]            yes             
  rac1[192.168.180.6]             rac3[192.168.180.10]            yes             
  rac1[192.168.180.6]             rac3[192.168.180.11]            yes             
  rac3[192.168.180.10]            rac3[192.168.180.11]            yes             
Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "192.168.180.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  rac1:192.168.180.2              rac2:192.168.180.3              passed          
  rac1:192.168.180.2              rac2:192.168.180.5              passed          
  rac1:192.168.180.2              rac1:192.168.180.4              passed          
  rac1:192.168.180.2              rac1:192.168.180.6              passed          
  rac1:192.168.180.2              rac3:192.168.180.10             passed          
  rac1:192.168.180.2              rac3:192.168.180.11             passed          
Result: TCP connectivity check passed for subnet "192.168.180.0"

Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  rac2[10.10.10.3]                rac1[10.10.10.2]                yes             
  rac2[10.10.10.3]                rac3[10.10.10.10]               yes             
  rac1[10.10.10.2]                rac3[10.10.10.10]               yes             
Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "10.10.10.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  rac1:10.10.10.2                 rac2:10.10.10.3                 passed          
  rac1:10.10.10.2                 rac3:10.10.10.10                passed          
Result: TCP connectivity check passed for subnet "10.10.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.180.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       passed    
  rac1          yes                       yes                       passed    
  rac3          yes                       yes                       passed    
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  rac2          yes                       yes                       passed    
  rac1          yes                       yes                       passed    
  rac3          yes                       yes                       passed    
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  rac2          no                        no                        exists    
  rac1          no                        no                        exists    
  rac3          no                        no                        exists    
GSD node application is offline on nodes "rac2,rac1,rac3"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  rac2          no                        yes                       passed    
  rac1          no                        yes                       passed    
  rac3          no                        yes                       passed    
ONS node application check passed

Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac-scan          rac1          true          LISTENER_SCAN1  1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?       
  ------------  ------------------------  ------------------------
  rac1          LISTENER_SCAN1            yes                     
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "rac-scan"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

ERROR: 
PRVG-1101 : SCAN name "rac-scan" failed to resolve
  SCAN Name     IP Address                Status                    Comment   
  ------------  ------------------------  ------------------------  ----------
  rac-scan      192.168.180.6             failed                    NIS Entry

ERROR: 
PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 192.168.180.6) failed

ERROR: 
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan"

Verification of SCAN VIP and Listener setup failed

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  rac3          passed                    does not exist          
Result: User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status                  
  ------------------------------------  ------------------------
  rac3                                  passed                  
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State                   
  ------------------------------------  ------------------------
  rac3                                  Active                  
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status                  
  ------------  ------------------------  ------------------------
  rac3          0.0                       passed

Time offset is within the specified limits on the following set of nodes: 
"[rac3]" 
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was unsuccessful on all the nodes.

Note: the SCAN-related errors above can be ignored as long as the CRS stack on rac3 is healthy; they are ignored here.
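A quick way to confirm that the stack on rac3 really is healthy before dismissing those errors is to query the stock 11gR2 status tools. A sketch, where the `check_node` wrapper is hypothetical but `crsctl check crs` and `olsnodes -n -s` are the standard Grid Infrastructure commands; it assumes the grid user's ssh equivalence configured earlier:

```shell
#!/bin/sh
# Hypothetical wrapper around the stock clusterware status commands;
# relies on the passwordless ssh set up during the prerequisites.
GRID_HOME=/u01/app/11.2.0/grid_1

check_node() {
  node=$1
  ssh "$node" "$GRID_HOME/bin/crsctl check crs"   # CRS, CSS, EVM online?
  ssh "$node" "$GRID_HOME/bin/olsnodes -n -s"     # every node listed Active?
}

# Usage: check_node rac3
```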

2. Add the database to the new node
① Install the database software on the new node (run as the oracle user from an existing node)

[root@rac1 ~]# su - oracle         
rac1:/home/oracle@rac1>cd /u01/app/oracle/product/11.2.0/db_1/oui/bin/
rac1:/u01/app/oracle/product/11.2.0/db_1/oui/bin@rac1>./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "oracle"

WARNING: 
Node "rac3" already appears to be part of cluster

Pre-check for node addition was successful. 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3467 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.

..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/product/11.2.0/db_1
   New Nodes
Space Requirements
   New Nodes
      rac3
         /u01: Required 4.29GB : Available 32.46GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.4.0 
      Java Development Kit 1.5.0.51.10 
      Installer SDK Component 11.2.0.4.0 
      Oracle One-Off Patch Installer 11.2.0.3.4 
      Oracle Universal Installer 11.2.0.4.0 
      Oracle USM Deconfiguration 11.2.0.4.0 
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0 
      Oracle DBCA Deconfiguration 11.2.0.4.0 
      Oracle RAC Deconfiguration 11.2.0.4.0 
      Oracle Database Deconfiguration 11.2.0.4.0 
      Oracle Configuration Manager Client 10.3.2.1.0 
      Oracle Configuration Manager 10.3.8.1.0 
      Oracle ODBC Driverfor Instant Client 11.2.0.4.0 
      LDAP Required Support Files 11.2.0.4.0 
      SSL Required Support Files for InstantClient 11.2.0.4.0 
      Bali Share 1.1.18.0.0 
      Oracle Extended Windowing Toolkit 3.4.47.0.0 
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0 
      Oracle Real Application Testing 11.2.0.4.0 
      Oracle Database Vault J2EE Application 11.2.0.4.0 
      Oracle Label Security 11.2.0.4.0 
      Oracle Data Mining RDBMS Files 11.2.0.4.0 
      Oracle OLAP RDBMS Files 11.2.0.4.0 
      Oracle OLAP API 11.2.0.4.0 
      Platform Required Support Files 11.2.0.4.0 
      Oracle Database Vault option 11.2.0.4.0 
      Oracle RAC Required Support Files-HAS 11.2.0.4.0 
      SQL*Plus Required Support Files 11.2.0.4.0 
      Oracle Display Fonts 9.0.2.0.0 
      Oracle Ice Browser 5.2.3.6.0 
      Oracle JDBC Server Support Package 11.2.0.4.0 
      Oracle SQL Developer 11.2.0.4.0 
      Oracle Application Express 11.2.0.4.0 
      XDK Required Support Files 11.2.0.4.0 
      RDBMS Required Support Files for Instant Client 11.2.0.4.0 
      SQLJ Runtime 11.2.0.4.0 
      Database Workspace Manager 11.2.0.4.0 
      RDBMS Required Support Files Runtime 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      Exadata Storage Server 11.2.0.1.0 
      Provisioning Advisor Framework 10.2.0.4.3 
      Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0 
      Enterprise Manager Repository Core Files 10.2.0.4.5 
      Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0 
      Enterprise Manager Grid Control Core Files 10.2.0.4.5 
      Enterprise Manager Common Core Files 10.2.0.4.5 
      Enterprise Manager Agent Core Files 10.2.0.4.5 
      RDBMS Required Support Files 11.2.0.4.0 
      regexp 2.1.9.0.0 
      Agent Required Support Files 10.2.0.4.5 
      Oracle 11g Warehouse Builder Required Files 11.2.0.4.0 
      Oracle Notification Service (eONS) 11.2.0.4.0 
      Oracle Text Required Support Files 11.2.0.4.0 
      Parser Generator Required Support Files 11.2.0.4.0 
      Oracle Database 11g Multimedia Files 11.2.0.4.0 
      Oracle Multimedia Java Advanced Imaging 11.2.0.4.0 
      Oracle Multimedia Annotator 11.2.0.4.0 
      Oracle JDBC/OCI Instant Client 11.2.0.4.0 
      Oracle Multimedia Locator RDBMS Files 11.2.0.4.0 
      Precompiler Required Support Files 11.2.0.4.0 
      Oracle Core Required Support Files 11.2.0.4.0 
      Sample Schema Data 11.2.0.4.0 
      Oracle Starter Database 11.2.0.4.0 
      Oracle Message Gateway Common Files 11.2.0.4.0 
      Oracle XML Query 11.2.0.4.0 
      XML Parser for Oracle JVM 11.2.0.4.0 
      Oracle Help For Java 4.2.9.0.0 
      Installation Plugin Files 11.2.0.4.0 
      Enterprise Manager Common Files 10.2.0.4.5 
      Expat libraries 2.0.1.0.1 
      Deinstallation Tool 11.2.0.4.0 
      Oracle Quality of Service Management (Client) 11.2.0.4.0 
      Perl Modules 5.10.0.0.1 
      JAccelerator (COMPANION) 11.2.0.4.0 
      Oracle Containers for Java 11.2.0.4.0 
      Perl Interpreter 5.10.0.0.2 
      Oracle Net Required Support Files 11.2.0.4.0 
      Secure Socket Layer 11.2.0.4.0 
      Oracle Universal Connection Pool 11.2.0.4.0 
      Oracle JDBC/THIN Interfaces 11.2.0.4.0 
      Oracle Multimedia Client Option 11.2.0.4.0 
      Oracle Java Client 11.2.0.4.0 
      Character Set Migration Utility 11.2.0.4.0 
      Oracle Code Editor 1.2.1.0.0I 
      PL/SQL Embedded Gateway 11.2.0.4.0 
      OLAP SQL Scripts 11.2.0.4.0 
      Database SQL Scripts 11.2.0.4.0 
      Oracle Locale Builder 11.2.0.4.0 
      Oracle Globalization Support 11.2.0.4.0 
      SQL*Plus Files for Instant Client 11.2.0.4.0 
      Required Support Files 11.2.0.4.0 
      Oracle Database User Interface 2.2.13.0.0 
      Oracle ODBC Driver 11.2.0.4.0 
      Oracle Notification Service 11.2.0.3.0 
      XML Parser for Java 11.2.0.4.0 
      Oracle Security Developer Tools 11.2.0.4.0 
      Oracle Wallet Manager 11.2.0.4.0 
      Cluster Verification Utility Common Files 11.2.0.4.0 
      Oracle Clusterware RDBMS Files 11.2.0.4.0 
      Oracle UIX 2.2.24.6.0 
      Enterprise Manager plugin Common Files 11.2.0.4.0 
      HAS Common Files 11.2.0.4.0 
      Precompiler Common Files 11.2.0.4.0 
      Installation Common Files 11.2.0.4.0 
      Oracle Help for the Web 2.0.14.0.0 
      Oracle LDAP administration 11.2.0.4.0 
      Buildtools Common Files 11.2.0.4.0 
      Assistant Common Files 11.2.0.4.0 
      Oracle Recovery Manager 11.2.0.4.0 
      PL/SQL 11.2.0.4.0 
      Generic Connectivity Common Files 11.2.0.4.0 
      Oracle Database Gateway for ODBC 11.2.0.4.0 
      Oracle Programmer 11.2.0.4.0 
      Oracle Database Utilities 11.2.0.4.0 
      Enterprise Manager Agent 10.2.0.4.5 
      SQL*Plus 11.2.0.4.0 
      Oracle Netca Client 11.2.0.4.0 
      Oracle Multimedia Locator 11.2.0.4.0 
      Oracle Call Interface (OCI) 11.2.0.4.0 
      Oracle Multimedia 11.2.0.4.0 
      Oracle Net 11.2.0.4.0 
      Oracle XML Development Kit 11.2.0.4.0 
      Oracle Internet Directory Client 11.2.0.4.0 
      Database Configuration and Upgrade Assistants 11.2.0.4.0 
      Oracle JVM 11.2.0.4.0 
      Oracle Advanced Security 11.2.0.4.0 
      Oracle Net Listener 11.2.0.4.0 
      Oracle Enterprise Manager Console DB 11.2.0.4.0 
      HAS Files for DB 11.2.0.4.0 
      Oracle Text 11.2.0.4.0 
      Oracle Net Services 11.2.0.4.0 
      Oracle Database 11g 11.2.0.4.0 
      Oracle OLAP 11.2.0.4.0 
      Oracle Spatial 11.2.0.4.0 
      Oracle Partitioning 11.2.0.4.0 
      Enterprise Edition Options 11.2.0.4.0 
-----------------------------------------------------------------------------

Instantiating scripts for add node (Saturday, March 4, 2017 3:03:12 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Saturday, March 4, 2017 3:03:23 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Saturday, March 4, 2017 3:09:21 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
On the new node rac3, run /u01/app/oracle/product/11.2.0/db_1/root.sh as the root user:

[root@rac3 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
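Before moving on to the instance, it is worth confirming that rac3 is an active member of the cluster. A minimal sketch, assuming the usual `olsnodes -n -s` output format; the captured text below is illustrative, not taken from this cluster:

```shell
# On any node, as the grid user, list cluster members and their status:
#   olsnodes -n -s
# Illustrative sample of that output (node name, node number, status):
olsnodes_output="rac1    1       Active
rac2    2       Active
rac3    3       Active"

# Exit non-zero unless rac3 appears with status Active.
if echo "$olsnodes_output" | awk '$1 == "rac3" && $3 == "Active" {found=1} END {exit !found}'; then
  echo "rac3 is an active cluster member"
else
  echo "rac3 is NOT active - investigate before adding the instance" >&2
fi
```

On a healthy cluster the awk filter finds the rac3/Active row and the first message is printed.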

② Add the database instance
[oracle@rac1 ~]$ dbca
Or add the instance directly from the command line (run as the oracle user on an existing node):
rac1:/home/oracle@rac1>dbca -silent -addInstance -nodeList rac3 -gdbName rac -instanceName rac3 -sysDBAUserName sys -sysDBAPassword oracle
Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/rac/rac.log" for further details.
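Besides the dbca log, `srvctl` gives a quick view of the instance layout (run `srvctl status database -d rac` as the oracle user on any node). A small sketch over a sample of that output; the three lines below are illustrative, not captured from this cluster:

```shell
# srvctl status database -d rac   typically prints one line per instance:
status_output="Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
Instance rac3 is running on node rac3"

# After the add there should be three running instances, including rac3.
running=$(printf '%s\n' "$status_output" | grep -c "is running on node")
echo "running instances: $running"
printf '%s\n' "$status_output" | grep -q "Instance rac3 is running" \
  && echo "rac3 instance is up"
```

With the sample above this prints "running instances: 3" followed by "rac3 instance is up".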
3. Verify the added instance
rac1:/home/oracle@rac1>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Mar 5 08:20:36 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select thread#,status,instance from gv$thread;

   THREAD# STATUS INSTANCE
---------- ------ ----------------------------------------
         1 OPEN   rac1
         2 OPEN   rac2
         3 OPEN   rac3
         1 OPEN   rac1
         2 OPEN   rac2
         3 OPEN   rac3
         1 OPEN   rac1
         2 OPEN   rac2
         3 OPEN   rac3

9 rows selected.

(gv$thread returns one row per redo thread from each instance, so with three instances and three threads the query returns 9 rows; thread 3 confirms the new rac3 instance is open.)

SQL> 
Check the cluster resource state as the grid user:
+ASM1:/home/grid@rac1>crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
               ONLINE  ONLINE       rac3                                         
ora.FRA_ARC.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
               ONLINE  ONLINE       rac3                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
               ONLINE  ONLINE       rac3                                         
ora.OCR_VOTING.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
               ONLINE  ONLINE       rac3                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
               ONLINE  ONLINE       rac3                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
               OFFLINE OFFLINE      rac3                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
               ONLINE  ONLINE       rac3                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
               ONLINE  ONLINE       rac3                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
      3        ONLINE  ONLINE       rac3                     Open                
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.rac3.vip
      1        ONLINE  ONLINE       rac3                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                                         
+ASM1:/home/grid@rac1>
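Oracle also ships a formal post-add check: as the grid user, run `cluvfy stage -post nodeadd -n rac3 -verbose`. For a quick scripted sanity check over saved `crsctl stat res -t` output, something like the sketch below can flag unexpected OFFLINE resources on the new node. The excerpt uses a flattened one-line-per-resource view, not the literal crsctl layout, and ora.gsd is expected to be OFFLINE on 11g:

```shell
# Save the state first, e.g.:  crsctl stat res -t > /tmp/crs_state.txt
# Illustrative flattened excerpt of the per-node state for rac3:
crs_sample="ora.DATA.dg ONLINE ONLINE rac3
ora.LISTENER.lsnr ONLINE ONLINE rac3
ora.gsd OFFLINE OFFLINE rac3"

# List resources that are OFFLINE on rac3, ignoring ora.gsd (expected OFFLINE).
echo "$crs_sample" | awk '$4 == "rac3" && $3 == "OFFLINE" && $1 != "ora.gsd" {print $1}' \
  | grep -q . && echo "unexpected OFFLINE resources on rac3" || echo "rac3 resources look healthy"
```

With the sample data only ora.gsd is OFFLINE, so the healthy message is printed.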

This completes adding the new node to the Oracle RAC cluster.
---------------------
Author: 逍遥浪子-雨
Source: CSDN
Original: https://blog.csdn.net/shiyu1157758655/article/details/60877076
Copyright notice: this is the blogger's original article; please include a link to the original when reposting.
