This series documents the process of adding a third node to an existing two-node RAC, with no impact on the running cluster throughout. Every node runs CentOS 4.8 x86_64, and the database version on every node is 10.2.0.1.
Part 1: Preparation
Step 1: Bring the new node's operating system into line with the existing nodes, including the shared storage, required packages, kernel parameters, user environment variables, and so on.
[root@rac3 ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.41 rac1.yang.com rac1
192.168.122.41 rac1-priv.yang.com rac1-priv
192.168.1.141 rac1-vip.yang.com rac1-vip
192.168.1.42 rac2.yang.com rac2
192.168.122.42 rac2-priv.yang.com rac2-priv
192.168.1.142 rac2-vip.yang.com rac2-vip
192.168.1.43 rac3.yang.com rac3
192.168.122.43 rac3-priv.yang.com rac3-priv
192.168.1.143 rac3-vip.yang.com rac3-vip
[root@rac3 ~]# getenforce
Disabled
[root@rac3 ~]# groupadd oinstall
[root@rac3 ~]# groupadd dba
[root@rac3 ~]# useradd -g oinstall -G dba oracle
[root@rac3 ~]# echo 'oracle' |passwd --stdin oracle
Changing password for user oracle.
passwd: all authentication tokens updated successfully.
[root@rac3 ~]# tail /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
[root@rac3 ~]# sysctl -p
[root@rac3 ~]# tail -4 /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
[root@rac3 ~]# tail -1 /etc/pam.d/login
session required pam_limits.so
[root@rac3 ~]# tail -1 /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[root@rac3 ~]# modprobe -v hangcheck-timer
insmod /lib/modules/2.6.9-89.EL/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180
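Note that a manual modprobe does not survive a reboot; a common approach (an assumption here, adapt to your own init setup) is to append the load to /etc/rc.local:
[root@rac3 ~]# echo '/sbin/modprobe hangcheck-timer' >> /etc/rc.local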
Step 2: Configure the shared storage on the new node, create the required directories, and set the oracle user's environment variables (a raw-device sketch follows the directory creation below).
[root@rac3 ~]# chown -R oracle.oinstall /u01/
[root@rac3 ~]# su - oracle
[oracle@rac3 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/root/bin
export EDITOR=vim
export ORACLE_SID=racdb3
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export LD_LIBRARY_PATH=/lib
alias sqlplus='/usr/local/rlwrap/bin/rlwrap sqlplus'
alias rman='/usr/local/rlwrap/bin/rlwrap rman'
export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
export NLS_LANG=american_america.UTF8
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
umask 022
[oracle@rac3 ~]$ mkdir -p $ORACLE_BASE/admin
[oracle@rac3 ~]$ mkdir -p $ORACLE_HOME
[oracle@rac3 ~]$ mkdir -p $ORA_CRS_HOME
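The shared-storage piece of this step is site-specific and not shown above. A minimal sketch for the CentOS 4 raw-device bindings, assuming rac3 sees the same LUNs under the same device names as the existing nodes: copy the mapping from a running node rather than writing it by hand, and replicate the ownership used there (the OCR devices /dev/raw/raw5 and /dev/raw/raw6 that appear later belong to root; ASM and voting devices to oracle).
[root@rac3 ~]# scp rac1:/etc/sysconfig/rawdevices /etc/sysconfig/rawdevices
[root@rac3 ~]# service rawdevices restart
[root@rac3 ~]# ssh rac1 ls -l /dev/raw/    # compare and match ownership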
Step 3: Configure SSH user equivalence across all nodes
[oracle@rac3 ~]$ ssh-keygen -t dsa
[oracle@rac3 ~]$ ssh-keygen -t rsa
Use ssh-copy-id to append rac3's public keys to the /home/oracle/.ssh/authorized_keys file on rac1 and rac2 (the individual steps are not repeated here).
The end result must be that the oracle user can ssh between any pair of the three nodes without being prompted for a password.
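A quick verification loop (run it from each of the three nodes in turn; the first pass also caches the host keys, so later OUI sessions are not interrupted by prompts):
[oracle@rac3 ~]$ for h in rac1 rac1-priv rac2 rac2-priv rac3 rac3-priv; do ssh $h date; done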
Part 2: Configure CRS on the new node
First confirm the current cluster state from an existing node:
[oracle@rac1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.gsd application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.ons application 0/3 0/0 ONLINE ONLINE rac1
ora.rac1.vip application 0/0 0/0 ONLINE ONLINE rac1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.gsd application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2
ora.rac2.vip application 0/0 0/0 ONLINE ONLINE rac2
ora.racdb.db application 0/1 0/1 ONLINE ONLINE rac1
ora....b1.inst application 0/5 0/0 ONLINE ONLINE rac1
ora....b2.inst application 0/5 0/0 ONLINE ONLINE rac2
[oracle@rac1 ~]$ cd $ORA_CRS_HOME/oui/bin
[oracle@rac1 bin]$ ./addNode.sh
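addNode.sh brings up the OUI Add Node wizard; select rac3 together with its private and VIP names when prompted. If no X display is available, the same run can be driven silently; a sketch using the 10.2 addNode parameters:
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" \
"CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"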
Check the log output:
[oracle@rac1 ~]$ tail -f /u01/app/oracle/oraInventory/logs/addNodeActions2011-11-29_09-42-48PM.log
INFO: /u01/app/oracle/product/10.2.0/crs_1/oui/bin/../bin/runInstaller -paramFile /u01/app/oracle/product/10.2.0/crs_1/oui/bin/../clusterparam.ini -silent
-ignoreSysPrereqs -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10.2.0/crs_1 CLUSTER_NODES=rac1,rac2,rac3 CRS=true
"INVENTORY_LOCATION=/u01/app/oracle/oraInventory" LOCAL_NODE=rac3 -remoteInvocation -invokingNodeName rac1 -logFilePath "/u01/app/oracle/oraInventory/logs"
-timestamp 2011-11-29_09-42-48PM
INFO: OUI-10234:Failed to copy the root script, /u01/app/oracle/oraInventory/orainstRoot.sh to the cluster nodes rac3.
Please copy them manually to these nodes and execute the script
Copy the script over to rac3 manually, as the log instructs:
[oracle@rac1 ~]$ scp /u01/app/oracle/oraInventory/orainstRoot.sh rac3:/u01/app/oracle/oraInventory/
Run the root scripts (orainstRoot.sh on rac3, rootaddnode.sh on rac1, and finally root.sh on rac3):
[root@rac3 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: rac3 rac3-priv rac3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/app/oracle/product/10.2.0/crs_1/bin/srvctl add nodeapps -n rac3 -A rac3-vip/255.255.255.0/eth0:eth1 -o /u01/app/oracle/product/10.2.0/crs_1
[root@rac3 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS = /dev/raw/raw5,/dev/raw/raw6
OCR backup directory '/u01/app/oracle/product/10.2.0/crs_1/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
rac3
CSS is active on all nodes.
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
Because the silent vipca run rejected eth0 as a public interface, finish the nodeapps configuration by running $ORA_CRS_HOME/bin/vipca manually as root on the rac3 node.
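Once vipca completes, the new nodeapps can be checked from any node:
[oracle@rac3 ~]$ srvctl status nodeapps -n rac3
[oracle@rac3 ~]$ srvctl config nodeapps -n rac3 -a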
With CRS configured successfully, crs_stat and olsnodes now return the same output on every node, and node 3's gsd, ons, and vip resources are ONLINE:
[oracle@rac2 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.gsd application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.ons application 0/3 0/0 ONLINE ONLINE rac1
ora.rac1.vip application 0/0 0/0 ONLINE ONLINE rac1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.gsd application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2
ora.rac2.vip application 0/0 0/0 ONLINE ONLINE rac2
ora.rac3.gsd application 0/5 0/0 ONLINE ONLINE rac3
ora.rac3.ons application 0/3 0/0 ONLINE ONLINE rac3
ora.rac3.vip application 0/0 0/0 ONLINE ONLINE rac3
ora.racdb.db application 0/1 0/1 ONLINE ONLINE rac1
ora....b1.inst application 0/5 0/0 ONLINE ONLINE rac1
ora....b2.inst application 0/5 0/0 ONLINE ONLINE rac2
[oracle@rac2 ~]$ olsnodes -n
rac1 1
rac2 2
rac3 3
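Optionally, cluvfy (shipped in $ORA_CRS_HOME/bin) can run a post-install check across the enlarged cluster:
[oracle@rac1 ~]$ cluvfy stage -post crsinst -n rac1,rac2,rac3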
Part 3: Deploy the Oracle database software on the new node
Run addNode.sh again from an existing node, this time from the database home:
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./addNode.sh
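This is the same Add Node wizard as before, only launched from the database home; in silent mode only the node list is needed here (a sketch):
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"
When the file copy finishes, OUI prompts for root.sh on the new node: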
[root@rac3 ~]# /u01/app/oracle/product/10.2.0/db_1/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/10.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
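A quick sanity check that the software landed on rac3 and that the central inventory now lists all three nodes (assuming the default inventory location used above):
[oracle@rac3 ~]$ ls -l $ORACLE_HOME/bin/oracle
[oracle@rac3 ~]$ grep 'NODE NAME' /u01/app/oracle/oraInventory/ContentsXML/inventory.xml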
At this point, both CRS and the database software are deployed on the new node rac3. To keep this article to a reasonable length, configuring the listener, the ASM instance, and the database instance on rac3 is covered in the next part of this series.