The Four Scripts Run During a RAC Installation and What They Do

Reposted from: https://blog.csdn.net/tianlesoftware/article/details/5317034

The four scripts run during a RAC installation:
1) $ORACLE_BASE/oraInventory/orainstRoot.sh (run at the end of the Clusterware installation)
2) $CRS_HOME/root.sh (run at the end of the Clusterware installation)
3) $CRS_HOME/bin/vipca.sh (invoked automatically when $CRS_HOME/root.sh runs on the second node)
4) $ORACLE_HOME/root.sh (run after the database software is installed)
  1. The orainstRoot.sh script
1.1 Running orainstRoot.sh
root@node2 #/oracle/oraInventory/orainstRoot.sh 
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
  1.2 Contents of orainstRoot.sh
root@node1 # more /oracle/oraInventory/orainstRoot.sh
#!/bin/sh
if [ ! -d "/var/opt/oracle" ]; then
mkdir -p /var/opt/oracle;
fi
if [ -d "/var/opt/oracle" ]; then
chmod 755 /var/opt/oracle;
fi
if [ -f "/oracle/oraInventory/oraInst.loc" ]; then
cp /oracle/oraInventory/oraInst.loc /var/opt/oracle/oraInst.loc;
chmod 644 /var/opt/oracle/oraInst.loc;
else
INVPTR=/var/opt/oracle/oraInst.loc
INVLOC=/oracle/oraInventory
GRP=oinstall
PTRDIR="`dirname $INVPTR`";
# Create the software inventory location pointer file
if [ ! -d "$PTRDIR" ]; then
 mkdir -p $PTRDIR;
fi
echo "Creating the Oracle inventory pointer file ($INVPTR)";
echo   inventory_loc=$INVLOC > $INVPTR
echo   inst_group=$GRP >> $INVPTR
chmod 644 $INVPTR
# Create the inventory directory if it doesn't exist
if [ ! -d "$INVLOC" ];then
 echo "Creating the Oracle inventory directory ($INVLOC)";
 mkdir -p $INVLOC;
fi
fi
echo "Changing permissions of /oracle/oraInventory to 770.";
chmod -R 770 /oracle/oraInventory;
if [ $? != 0 ]; then
 echo "OUI-35086:WARNING: chmod of /oracle/oraInventory to 770 failed!";
fi
echo "Changing groupname of /oracle/oraInventory to oinstall.";
chgrp oinstall /oracle/oraInventory;
if [ $? != 0 ]; then
 echo "OUI-10057:WARNING: chgrp of /oracle/oraInventory to oinstall failed!";
fi
echo "The execution of the script is complete"
  As the script shows, it mainly creates the /var/opt/oracle directory (if it does not already exist), writes the oraInst.loc file there (which records the location of the oraInventory and its owning group), and then changes the permissions and group ownership of the oraInventory directory.
  root@node2 # ls -rlt /var/opt/oracle/
total 2
-rw-r--r--  1 root    root         55 Apr 2 14:42 oraInst.loc
root@node2 # more oraInst.loc
inventory_loc=/oracle/oraInventory
inst_group=oinstall
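Oracle's tools later locate the inventory by reading this pointer file. A minimal sketch of that lookup, parsing a throwaway copy of the file (on a real node you would point this at /var/opt/oracle/oraInst.loc):

```shell
# Illustrative only: recreate a sample oraInst.loc and extract the
# inventory_loc key, the same way the installer tooling resolves it.
loc_file=$(mktemp)
cat > "$loc_file" <<'EOF'
inventory_loc=/oracle/oraInventory
inst_group=oinstall
EOF
# Take everything after the '=' on the inventory_loc line.
inv_loc=$(grep '^inventory_loc=' "$loc_file" | cut -d= -f2)
echo "inventory location: $inv_loc"
rm -f "$loc_file"
```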
  Run the script on the other node:
root@node1 #/oracle/oraInventory/orainstRoot.sh 
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
  2. The root.sh script
  2.1 Running root.sh
root@node2 #/oracle/crs/root.sh
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up
  Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 0: node2 node2-priv node2
node 1: node1 node1-priv node1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /oracle/ocrcfg1
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
       node2
CSS is inactive on these nodes.
       node1
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
     The output shows that this script performs the CRS configuration: it formats the OCR disk, updates the /etc/inittab file, starts the CSS daemon, and creates the ocr.loc file plus the scls_scr and oprocd directories under /var/opt/oracle/.
  2.2 The changes on the node can be seen in the CRS processes and the /etc/inittab file.
root@node2 # ps -ef|grep crs|grep -v grep
 oracle 18212 18211  0 14:47:28 ?     0:00 /oracle/crs/bin/ocssd.bin
 oracle 18191 18180  0 14:47:28 ?     0:00 /oracle/crs/bin/oclsmon.bin
 oracle 17886    1  0 14:47:27 ?     0:00 /oracle/crs/bin/evmd.bin
 oracle 18180 18092  0 14:47:28 ?     0:00 /bin/sh -c cd /oracle/crs/log/node2/cssd/oclsmon; ulimit -c unlimited; /ora
 root 17889    1  0 14:47:27 ?       0:00 /oracle/crs/bin/crsd.bin reboot
 oracle 18211 18093  0 14:47:28 ?      0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node2/cssd; /oracle/crs
  root@node2 # ls -rlt /var/opt/oracle/
total 8
-rw-r--r--  1 root    root         55 Apr 2 14:42 oraInst.loc
drwxrwxr-x  5 root    root        512 Apr 2 14:47 oprocd
drwxr-xr-x  3 root    root        512 Apr 2 14:47 scls_scr
-rw-r--r--  1 root    oinstall     48 Apr 2 14:47 ocr.loc
  Note: ocr.loc, scls_scr, and oprocd are newly created, but /var/opt/oracle/oratab is not created at this point.
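The ocr.loc file records where the OCR lives, and cluster tools resolve the device from it. A sketch of that lookup against a sample file (the key names match 10g's format, but the device path here is an assumption for illustration):

```shell
# Illustrative only: build a sample ocr.loc and pull out the OCR device
# path from the ocrconfig_loc key, as clusterware utilities would.
loc=$(mktemp)
cat > "$loc" <<'EOF'
ocrconfig_loc=/oracle/ocrcfg1
local_only=FALSE
EOF
# Print only the value of the ocrconfig_loc line.
ocr_dev=$(sed -n 's/^ocrconfig_loc=//p' "$loc")
echo "OCR device: $ocr_dev"
rm -f "$loc"
```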
  root@node1 # more /etc/inittab
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# The /etc/inittab file controls the configuration of init(1M); for more
# information refer to init(1M) and inittab(4). It is no longer
# necessary to edit inittab(4) directly; administrators should use the
# Solaris Service Management Facility (SMF) to define services instead.
# Refer to smf(5) and the System Administration Guide for more
# information on SMF.
#
# For modifying parameters passed to ttymon, use svccfg(1m) to modify
# the SMF repository. For example:
#
#      # svccfg
#      svc:> select system/console-login
#      svc:/system/console-login> setprop ttymon/terminal_type = "xterm"
#      svc:/system/console-login> exit
#
#ident "@(#)inittab   1.41   04/12/14 SMI"
ap::sysinit:/sbin/autopush -f /etc/iu.ap
sp::sysinit:/sbin/soconfig -f /etc/sock2path
smf::sysinit:/lib/svc/bin/svc.startd   >/dev/msglog 2<>/dev/msglog </dev/console
p3:s1234:powerfail:/usr/sbin/shutdown -y -i5 -g0 >/dev/msglog 2<>/dev/msglog
h1:3:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:3:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:3:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
  root@node1 # ls -rlt /etc/inittab*
-rw-r--r--  1 root    root       1072 Nov 2 12:39 inittab.cssd
-rw-r--r--  1 root    root       1206 Mar 21 17:15 inittab.pre10203
-rw-r--r--  1 root    root       1006 Mar 21 17:15 inittab.nocrs10203
-rw-r--r--  1 root    root       1040 Apr 2 14:50 inittab.orig
-rw-r--r--  1 root    root       1040 Apr 2 14:50 inittab.no_crs
-rw-r--r--  1 root    root       1240 Apr 2 14:50 inittab
-rw-r--r--  1 root    root       1240 Apr 2 14:50 inittab.crs
  The script copies the original inittab to inittab.no_crs, and saves another copy of the modified inittab as inittab.crs.
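Diffing the two copies makes root.sh's change obvious: the only difference is the three respawn entries for the CRS daemons. A minimal reproduction with trimmed sample files (the single pre-existing line is an assumption; only the appended entries match the listing above):

```shell
# Illustrative only: rebuild a "no_crs" and a "crs" inittab variant and
# count the lines root.sh adds, as a diff of the real backups would show.
no_crs=$(mktemp); with_crs=$(mktemp)
printf 'smf::sysinit:/lib/svc/bin/svc.startd\n' > "$no_crs"
cp "$no_crs" "$with_crs"
cat >> "$with_crs" <<'EOF'
h1:3:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:3:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:3:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
EOF
# diff marks added lines with '>'; root.sh adds exactly three.
added=$(diff "$no_crs" "$with_crs" | grep -c '^>')
echo "lines added by root.sh: $added"
rm -f "$no_crs" "$with_crs"
```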
    2.3 Running $CRS_HOME/root.sh on the other node
root@node1 #/oracle/crs/root.sh
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up
  Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 0: node2 node2-priv node2
node 1: node1 node1-priv node1
clscfg: Arguments check out successfully.
  NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
       node2
       node1
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
  Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
  Done.
    3. When run on the second node, root.sh performs one more task than on the first node
    ------- it runs $CRS_HOME/bin/vipca.sh

vipca.sh mainly configures the VIPs and starts the default CRS resources (six by default before any database is created), which adds three more background processes.
root@node1 # ps -ef|grep crs|grep -v grep
 oracle 18347 17447  0 14:51:06 ?          0:00 /oracle/crs/bin/evmlogger.bin -o /oracle/crs/evm/log/evmlogger.info -l /oracle/
oracle 17447    1  0 14:50:47 ?          0:00 /oracle/crs/bin/evmd.bin
oracle 17763 17756  0 14:50:48 ?          0:00 /oracle/crs/bin/ocssd.bin
oracle 17756 17643  0 14:50:48 ?          0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node1/cssd; /oracle/crs
oracle 21216    1  0 14:52:28 ?          0:00 /oracle/crs/opmn/bin/ons -d
oracle 21217 21216  0 14:52:28 ?          0:00 /oracle/crs/opmn/bin/ons -d
oracle 17771 17642  0 14:50:48 ?          0:00 /bin/sh -c cd /oracle/crs/log/node1/cssd/oclsmon; ulimit -c unlimited; /ora
oracle 17773 17771  0 14:50:48 ?          0:00 /oracle/crs/bin/oclsmon.bin
root 17449    1  0 14:50:47 ?          0:01 /oracle/crs/bin/crsd.bin reboot
  root@node2 # ps -ef|grep crs|grep -v grep
oracle 18212 18211  0 14:47:28 ?          0:00 /oracle/crs/bin/ocssd.bin
oracle 27467 27466  0 14:52:25 ?          0:00 /oracle/crs/opmn/bin/ons -d
oracle 25252 17886  0 14:51:16 ?          0:00 /oracle/crs/bin/evmlogger.bin -o /oracle/crs/evm/log/evmlogger.info -l /oracle/
oracle 27466    1  0 14:52:25 ?          0:00 /oracle/crs/opmn/bin/ons -d
oracle 18191 18180  0 14:47:28 ?          0:00 /oracle/crs/bin/oclsmon.bin
oracle 17886    1  0 14:47:27 ?          0:00 /oracle/crs/bin/evmd.bin
oracle 18180 18092  0 14:47:28 ?       0:00 /bin/sh -c cd /oracle/crs/log/node2/cssd/oclsmon; ulimit -c unlimited; /ora
root 17889    1  0 14:47:27 ?          0:00 /oracle/crs/bin/crsd.bin reboot
 oracle 18211 18093  0 14:47:28 ?          0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node2/cssd; /oracle/crs
As the process list on node2 now shows, three additional background processes appear once vipca.sh has run.
  root@node1 # crs_stat -t
Name          Type          Target   State    Host       
------------------------------------------------------------
ora....c03.gsd application   ONLINE   ONLINE   node1  
ora....c03.ons application   ONLINE   ONLINE   node1  
ora....c03.vip application   ONLINE   ONLINE   node1  
ora....c04.gsd application   ONLINE   ONLINE   node2  
ora....c04.ons application   ONLINE   ONLINE   node2  
ora....c04.vip application   ONLINE   ONLINE   node1
    4. The final step of installing the database software (binaries): run $ORACLE_HOME/root.sh
  root@node2 #$ORACLE_HOME/root.sh
Running Oracle10 root.sh script...
  The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME= /oracle/10g
  Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
  Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
  Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
  Copying coraenv to /usr/local/bin ...
  Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
  This script creates dbhome, oraenv, and coraenv in the specified directory (/usr/local/bin by default), and creates the oratab file in /var/opt/oracle/.
  root@node2# ls -rlt /usr/local/bin
total 18
-rwxr-xr-x  1 oracle  root       2428 Apr 2 15:07 dbhome
-rwxr-xr-x  1 oracle  root       2560 Apr 2 15:07 oraenv
-rwxr-xr-x  1 oracle  root       2857 Apr 2 15:07 coraenv
  root@node2 # ls -rlt /var/opt/oracle/
total 10
-rw-r--r--  1 root    root         55 Apr 2 14:42 oraInst.loc
drwxrwxr-x  5 root    root        512 Apr 2 14:47 oprocd
drwxr-xr-x  3 root    root        512 Apr 2 14:47 scls_scr
-rw-r--r--  1 root    oinstall     48 Apr 2 14:47 ocr.loc
-rw-rw-r--  1 oracle  root        678 Apr 2 15:07 oratab
  root@node1 # /oracle/10g/root.sh
Running Oracle10 root.sh script...
  The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME= /oracle/10g
  Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
  Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
  Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
  Copying coraenv to /usr/local/bin ...
  Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.