Step-by-Step Oracle 11gR2 RAC + DG: Grid Installation (Part 4)

  1. Step by step: installing grid for Oracle 11gR2 RAC + DG on RHEL 6.5 + VMware Workstation 10 (Part 4)


 

This step is important: it installs Grid Infrastructure and ASM. If the shared disks from the previous step were not prepared correctly, the root scripts may fail, but don't worry: those errors can be resolved.

 

 


 

  1. Grid installation process

Download the software, upload it to the server, and unzip it:

[root@rac1 share]# ll

total 3398288

-rwxrwxrwx 1 root root 1358454646 Dec 14 2011 p10404530_112030_Linux-x86-64_1of7.zip

-rwxrwxrwx 1 root root 1142195302 May 25 2012 p10404530_112030_Linux-x86-64_2of7.zip

-rwxrwxrwx 1 root root 979195792 May 26 2012 p10404530_112030_Linux-x86-64_3of7.zip

[root@rac1 share]# unzip p10404530_112030_Linux-x86-64_1of7.zip -d /tmp/ && unzip p10404530_112030_Linux-x86-64_2of7.zip -d /tmp/

 

[root@rac1 share]# unzip p10404530_112030_Linux-x86-64_3of7.zip -d /tmp/
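The three unzip commands above can be combined into one loop. A minimal sketch, assuming the archives sit in the current directory; for this 11.2.0.3 media set, the third archive contains the grid software, which unzips to /tmp/grid:

```shell
# Extract all three archives into /tmp; stop early if one is missing or corrupt.
for z in p10404530_112030_Linux-x86-64_1of7.zip \
         p10404530_112030_Linux-x86-64_2of7.zip \
         p10404530_112030_Linux-x86-64_3of7.zip; do
  unzip -q "$z" -d /tmp/ || { echo "unzip failed for $z"; break; }
done
ls /tmp/grid    # grid media from the 3of7 archive
```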

 


 

 

 

  1. Install the cvuqdisk package

Install and verify the cvuqdisk package; it must be installed on both nodes:

 

Install the operating-system package cvuqdisk on both Oracle RAC nodes. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and when it runs (manually, or automatically at the end of the Oracle Grid Infrastructure installation) you will receive the error "Package cvuqdisk not installed". Use the cvuqdisk RPM that matches your hardware architecture (for example, x86_64 or i386).

The cvuqdisk RPM is included in the rpm directory on the Oracle Grid Infrastructure installation media.

Set the environment variable CVUQDISK_GRP to the group that owns cvuqdisk (oinstall in this article):

export CVUQDISK_GRP=oinstall

Use the CVU to verify that the Oracle Clusterware requirements are met.

Remember to run it as the grid user on the node where the Oracle installation will be performed (rac1 in this setup). SSH connectivity via user equivalence must also be configured for the grid user.


 

[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm

Preparing... ########################################### [100%]

Using default group oinstall to install package

1:cvuqdisk ########################################### [100%]

[root@rac1 Packages]# rpm -q cvuqdisk

cvuqdisk-1.0.7-1.x86_64

 

Copy it to the second node:

[root@rac1 rpm]# scp cvuqdisk-1.0.7-1.rpm root@192.168.59.136:/tmp
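The whole cvuqdisk step on both nodes can be sketched as follows. This is a hedged outline: the GRID_MEDIA path and the rac2 hostname are assumptions, and you should use whatever cvuqdisk version ships in your media's rpm directory:

```shell
export CVUQDISK_GRP=oinstall          # group that owns the Oracle inventory
GRID_MEDIA=/tmp/grid/rpm              # rpm directory inside the unzipped grid media (assumed path)
RPM=$(ls "$GRID_MEDIA"/cvuqdisk-*.rpm | head -1)

rpm -ivh "$RPM"                       # install on the local node (rac1)
scp "$RPM" root@rac2:/tmp/            # copy to the second node
ssh root@rac2 "CVUQDISK_GRP=oinstall rpm -ivh /tmp/$(basename "$RPM")"

# verify on both nodes
rpm -q cvuqdisk
ssh root@rac2 rpm -q cvuqdisk
```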

 

 

 

  1. Cluster hardware check: pre-installation verification

This step is somewhat slow; be patient.

 

It only needs to be run on one of the nodes.

 

Before installing Grid Infrastructure, it is recommended to use the CVU (Cluster Verification Utility) to check the CRS pre-installation environment.

① Use the CVU to check the CRS pre-installation environment

 

Run the following commands from the unzipped grid software directory to verify the hardware and operating-system setup with the CVU:

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose

cluvfy stage -pre crsinst -n node1,node2,node3 -fixup -verbose
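Since the full report is long, it helps to save it and filter for problems. A small sketch; the /tmp/grid path is an assumption carried over from the unzip step:

```shell
cd /tmp/grid
# Save the complete report, then list only the checks that need attention.
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose > /tmp/cluvfy_pre.log 2>&1
grep -iE 'failed|warning' /tmp/cluvfy_pre.log
```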

 

Items that do not pass are shown as failed; fix them as appropriate:

[grid@rac1 grid]$ ll

total 15

drwxrwxrwx 1 root root 4096 Aug 16 2009 doc

drwxrwxrwx 1 root root 4096 Aug 15 2009 install

drwxrwxrwx 1 root root 0 Aug 15 2009 response

drwxrwxrwx 1 root root 0 Aug 15 2009 rpm

-rwxrwxrwx 1 root root 3795 Jan 28 2009 runcluvfy.sh

-rwxrwxrwx 1 root root 3227 Aug 15 2009 runInstaller

drwxrwxrwx 1 root root 0 Aug 15 2009 sshsetup

drwxrwxrwx 1 root root 8192 Aug 15 2009 stage

-rwxrwxrwx 1 root root 4228 Aug 17 2009 welcome.html

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

 

Performing pre-checks for cluster services setup

 

Checking node reachability...

 

Check: Node reachability from node "rac1"

Destination Node Reachable?

------------------------------------ ------------------------

rac2 yes

rac1 yes

Result: Node reachability check passed from node "rac1"

 

 

Checking user equivalence...

 

Check: User equivalence for user "grid"

Node Name Comment

------------------------------------ ------------------------

rac2 passed

rac1 passed

Result: User equivalence check passed for user "grid"

 

Checking node connectivity...

 

Checking hosts config file...

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 passed

rac1 passed

 

Verification of the hosts config file successful

 

 

Interface information for node "rac2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.128.152 192.168.128.0 0.0.0.0 192.168.128.2 00:0C:29:EC:A0:64 1500

eth1 10.10.10.152 10.0.0.0 0.0.0.0 192.168.128.2 00:0C:29:EC:A0:6E 1500

 

 

Interface information for node "rac1"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.128.151 192.168.128.0 0.0.0.0 192.168.128.2 00:0C:29:2F:A8:C3 1500

eth1 10.10.10.151 10.0.0.0 0.0.0.0 192.168.128.2 00:0C:29:2F:A8:CD 1500

 

 

Check: Node connectivity of subnet "192.168.128.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

rac2:eth0 rac1:eth0 yes

Result: Node connectivity passed for subnet "192.168.128.0" with node(s) rac2,rac1

 

 

Check: TCP connectivity of subnet "192.168.128.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

rac1:192.168.128.151 rac2:192.168.128.152 passed

Result: TCP connectivity check passed for subnet "192.168.128.0"

 

 

Check: Node connectivity of subnet "10.0.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

rac2:eth1 rac1:eth1 yes

Result: Node connectivity passed for subnet "10.0.0.0" with node(s) rac2,rac1

 

 

Check: TCP connectivity of subnet "10.0.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

rac1:10.10.10.151 rac2:10.10.10.152 passed

Result: TCP connectivity check passed for subnet "10.0.0.0"

 

 

Interfaces found on subnet "192.168.128.0" that are likely candidates for VIP are:

rac2 eth0:192.168.128.152

rac1 eth0:192.168.128.151

 

Interfaces found on subnet "10.0.0.0" that are likely candidates for a private interconnect are:

rac2 eth1:10.10.10.152

rac1 eth1:10.10.10.151

 

Result: Node connectivity check passed

 

 

Check: Total memory

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 999.85MB (1023844.0KB) 1.5GB (1572864.0KB) failed

rac1 999.85MB (1023844.0KB) 1.5GB (1572864.0KB) failed

Result: Total memory check failed

 

Check: Available memory

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 878.98MB (900076.0KB) 50MB (51200.0KB) passed

rac1 717.45MB (734672.0KB) 50MB (51200.0KB) passed

Result: Available memory check passed

 

Check: Swap space

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

Result: Swap space check failed
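If the swap check fails, adding a swap file on each node is one way to satisfy it. A sketch, with /swapfile as an assumed location; Oracle's 11gR2 guideline is swap roughly equal to RAM for systems with 1-2 GB of memory:

```shell
# Create and enable a 1.5 GB swap file (run as root on each node).
dd if=/dev/zero of=/swapfile bs=1M count=1536
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab   # persist across reboots
free -m    # the Swap line should now reflect the new space
```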

 

Check: Free disk space for "rac2:/tmp"

Path Node Name Mount point Available Required Comment

---------------- ------------ ------------ ------------ ------------ ------------

/tmp rac2 / 14.5GB 1GB passed

Result: Free disk space check passed for "rac2:/tmp"

 

Check: Free disk space for "rac1:/tmp"

Path Node Name Mount point Available Required Comment

---------------- ------------ ------------ ------------ ------------ ------------

/tmp rac1 / 14.06GB 1GB passed

Result: Free disk space check passed for "rac1:/tmp"

 

Check: User existence for "grid"

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 exists passed

rac1 exists passed

Result: User existence check passed for "grid"

 

Check: Group existence for "oinstall"

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 exists passed

rac1 exists passed

Result: Group existence check passed for "oinstall"

 

Check: Group existence for "dba"

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 exists passed

rac1 exists passed

Result: Group existence check passed for "dba"

 

Check: Membership of user "grid" in group "oinstall" [as Primary]

Node Name User Exists Group Exists User in Group Primary Comment

---------------- ------------ ------------ ------------ ------------ ------------

rac2 yes yes yes yes passed

rac1 yes yes yes yes passed

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

 

Check: Membership of user "grid" in group "dba"

Node Name User Exists Group Exists User in Group Comment

---------------- ------------ ------------ ------------ ----------------

rac2 yes yes yes passed

rac1 yes yes yes passed

Result: Membership check for user "grid" in group "dba" passed

 

Check: Run level

Node Name run level Required Comment

------------ ------------------------ ------------------------ ----------

rac2 5 3,5 passed

rac1 5 3,5 passed

Result: Run level check passed

 

Check: Hard limits for "maximum open file descriptors"

Node Name Type Available Required Comment

---------------- ------------ ------------ ------------ ----------------

rac2 hard 65536 65536 passed

rac1 hard 65536 65536 passed

Result: Hard limits check passed for "maximum open file descriptors"

 

Check: Soft limits for "maximum open file descriptors"

Node Name Type Available Required Comment

---------------- ------------ ------------ ------------ ----------------

rac2 soft 1024 1024 passed

rac1 soft 1024 1024 passed

Result: Soft limits check passed for "maximum open file descriptors"

 

Check: Hard limits for "maximum user processes"

Node Name Type Available Required Comment

---------------- ------------ ------------ ------------ ----------------

rac2 hard 16384 16384 passed

rac1 hard 16384 16384 passed

Result: Hard limits check passed for "maximum user processes"

 

Check: Soft limits for "maximum user processes"

Node Name Type Available Required Comment

---------------- ------------ ------------ ------------ ----------------

rac2 soft 2047 2047 passed

rac1 soft 2047 2047 passed

Result: Soft limits check passed for "maximum user processes"

 

Check: System architecture

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 x86_64 x86_64 passed

rac1 x86_64 x86_64 passed

Result: System architecture check passed

 

Check: Kernel version

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 2.6.18-348.el5 2.6.18 passed

rac1 2.6.18-348.el5 2.6.18 passed

Result: Kernel version check passed

 

Check: Kernel parameter for "semmsl"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 250 250 passed

rac1 250 250 passed

Result: Kernel parameter check passed for "semmsl"

 

Check: Kernel parameter for "semmns"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 32000 32000 passed

rac1 32000 32000 passed

Result: Kernel parameter check passed for "semmns"

 

Check: Kernel parameter for "semopm"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 100 100 passed

rac1 100 100 passed

Result: Kernel parameter check passed for "semopm"

 

Check: Kernel parameter for "semmni"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 128 128 passed

rac1 128 128 passed

Result: Kernel parameter check passed for "semmni"

 

Check: Kernel parameter for "shmmax"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 68719476736 536870912 passed

rac1 68719476736 536870912 passed

Result: Kernel parameter check passed for "shmmax"

 

Check: Kernel parameter for "shmmni"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 4096 4096 passed

rac1 4096 4096 passed

Result: Kernel parameter check passed for "shmmni"

 

Check: Kernel parameter for "shmall"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 2147483648 2097152 passed

rac1 2147483648 2097152 passed

Result: Kernel parameter check passed for "shmall"

 

Check: Kernel parameter for "file-max"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 6815744 6815744 passed

rac1 6815744 6815744 passed

Result: Kernel parameter check passed for "file-max"

 

Check: Kernel parameter for "ip_local_port_range"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 between 9000 & 65500 between 9000 & 65500 passed

rac1 between 9000 & 65500 between 9000 & 65500 passed

Result: Kernel parameter check passed for "ip_local_port_range"

 

Check: Kernel parameter for "rmem_default"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 262144 262144 passed

rac1 262144 262144 passed

Result: Kernel parameter check passed for "rmem_default"

 

Check: Kernel parameter for "rmem_max"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 4194304 4194304 passed

rac1 4194304 4194304 passed

Result: Kernel parameter check passed for "rmem_max"

 

Check: Kernel parameter for "wmem_default"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 262144 262144 passed

rac1 262144 262144 passed

Result: Kernel parameter check passed for "wmem_default"

 

Check: Kernel parameter for "wmem_max"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 1048586 1048576 passed

rac1 1048586 1048576 passed

Result: Kernel parameter check passed for "wmem_max"

 

Check: Kernel parameter for "aio-max-nr"

Node Name Configured Required Comment

------------ ------------------------ ------------------------ ----------

rac2 1048576 1048576 passed

rac1 1048576 1048576 passed

Result: Kernel parameter check passed for "aio-max-nr"

 

Check: Package existence for "make-3.81"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 make-3.81-3.el5 make-3.81 passed

rac1 make-3.81-3.el5 make-3.81 passed

Result: Package existence check passed for "make-3.81"

 

Check: Package existence for "binutils-2.17.50.0.6"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 binutils-2.17.50.0.6-20.el5_8.3 binutils-2.17.50.0.6 passed

rac1 binutils-2.17.50.0.6-20.el5_8.3 binutils-2.17.50.0.6 passed

Result: Package existence check passed for "binutils-2.17.50.0.6"

 

Check: Package existence for "gcc-4.1"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 gcc-4.1.2-54.el5 gcc-4.1 passed

rac1 gcc-4.1.2-54.el5 gcc-4.1 passed

Result: Package existence check passed for "gcc-4.1"

 

Check: Package existence for "libaio-0.3.106 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libaio-0.3.106-5 (i386) libaio-0.3.106 (i386) passed

rac1 libaio-0.3.106-5 (i386) libaio-0.3.106 (i386) passed

Result: Package existence check passed for "libaio-0.3.106 (i386)"

 

Check: Package existence for "libaio-0.3.106 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libaio-0.3.106-5 (x86_64) libaio-0.3.106 (x86_64) passed

rac1 libaio-0.3.106-5 (x86_64) libaio-0.3.106 (x86_64) passed

Result: Package existence check passed for "libaio-0.3.106 (x86_64)"

 

Check: Package existence for "glibc-2.5-24 (i686)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 glibc-2.5-107 (i686) glibc-2.5-24 (i686) passed

rac1 glibc-2.5-107 (i686) glibc-2.5-24 (i686) passed

Result: Package existence check passed for "glibc-2.5-24 (i686)"

 

Check: Package existence for "glibc-2.5-24 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 glibc-2.5-107 (x86_64) glibc-2.5-24 (x86_64) passed

rac1 glibc-2.5-107 (x86_64) glibc-2.5-24 (x86_64) passed

Result: Package existence check passed for "glibc-2.5-24 (x86_64)"

 

Check: Package existence for "compat-libstdc++-33-3.2.3 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 missing compat-libstdc++-33-3.2.3 (i386) failed

rac1 missing compat-libstdc++-33-3.2.3 (i386) failed

Result: Package existence check failed for "compat-libstdc++-33-3.2.3 (i386)"

 

Check: Package existence for "compat-libstdc++-33-3.2.3 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 compat-libstdc++-33-3.2.3-61 (x86_64) compat-libstdc++-33-3.2.3 (x86_64) passed

rac1 compat-libstdc++-33-3.2.3-61 (x86_64) compat-libstdc++-33-3.2.3 (x86_64) passed

Result: Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"

 

Check: Package existence for "elfutils-libelf-0.125 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 elfutils-libelf-0.137-3.el5 (x86_64) elfutils-libelf-0.125 (x86_64) passed

rac1 elfutils-libelf-0.137-3.el5 (x86_64) elfutils-libelf-0.125 (x86_64) passed

Result: Package existence check passed for "elfutils-libelf-0.125 (x86_64)"

 

Check: Package existence for "elfutils-libelf-devel-0.125"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed

rac1 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed

 

WARNING:

PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac2: elfutils-libelf-devel-0.137-3.el5 (i386),elfutils-libelf-devel-0.137-3.el5 (x86_64)

 

WARNING:

PRVF-7584 : Multiple versions of package "elfutils-libelf-devel" found on node rac1: elfutils-libelf-devel-0.137-3.el5 (i386),elfutils-libelf-devel-0.137-3.el5 (x86_64)

Result: Package existence check passed for "elfutils-libelf-devel-0.125"

 

Check: Package existence for "glibc-common-2.5"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 glibc-common-2.5-107 glibc-common-2.5 passed

rac1 glibc-common-2.5-107 glibc-common-2.5 passed

Result: Package existence check passed for "glibc-common-2.5"

 

Check: Package existence for "glibc-devel-2.5 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 missing glibc-devel-2.5 (i386) failed

rac1 missing glibc-devel-2.5 (i386) failed

Result: Package existence check failed for "glibc-devel-2.5 (i386)"

 

Check: Package existence for "glibc-devel-2.5 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 glibc-devel-2.5-107 (x86_64) glibc-devel-2.5 (x86_64) passed

rac1 glibc-devel-2.5-107 (x86_64) glibc-devel-2.5 (x86_64) passed

Result: Package existence check passed for "glibc-devel-2.5 (x86_64)"

 

Check: Package existence for "glibc-headers-2.5"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 glibc-headers-2.5-107 glibc-headers-2.5 passed

rac1 glibc-headers-2.5-107 glibc-headers-2.5 passed

Result: Package existence check passed for "glibc-headers-2.5"

 

Check: Package existence for "gcc-c++-4.1.2"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 gcc-c++-4.1.2-54.el5 gcc-c++-4.1.2 passed

rac1 gcc-c++-4.1.2-54.el5 gcc-c++-4.1.2 passed

Result: Package existence check passed for "gcc-c++-4.1.2"

 

Check: Package existence for "libaio-devel-0.3.106 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 missing libaio-devel-0.3.106 (i386) failed

rac1 missing libaio-devel-0.3.106 (i386) failed

Result: Package existence check failed for "libaio-devel-0.3.106 (i386)"

 

Check: Package existence for "libaio-devel-0.3.106 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libaio-devel-0.3.106-5 (x86_64) libaio-devel-0.3.106 (x86_64) passed

rac1 libaio-devel-0.3.106-5 (x86_64) libaio-devel-0.3.106 (x86_64) passed

Result: Package existence check passed for "libaio-devel-0.3.106 (x86_64)"

 

Check: Package existence for "libgcc-4.1.2 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libgcc-4.1.2-54.el5 (i386) libgcc-4.1.2 (i386) passed

rac1 libgcc-4.1.2-54.el5 (i386) libgcc-4.1.2 (i386) passed

Result: Package existence check passed for "libgcc-4.1.2 (i386)"

 

Check: Package existence for "libgcc-4.1.2 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libgcc-4.1.2-54.el5 (x86_64) libgcc-4.1.2 (x86_64) passed

rac1 libgcc-4.1.2-54.el5 (x86_64) libgcc-4.1.2 (x86_64) passed

Result: Package existence check passed for "libgcc-4.1.2 (x86_64)"

 

Check: Package existence for "libstdc++-4.1.2 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libstdc++-4.1.2-54.el5 (i386) libstdc++-4.1.2 (i386) passed

rac1 libstdc++-4.1.2-54.el5 (i386) libstdc++-4.1.2 (i386) passed

Result: Package existence check passed for "libstdc++-4.1.2 (i386)"

 

Check: Package existence for "libstdc++-4.1.2 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libstdc++-4.1.2-54.el5 (x86_64) libstdc++-4.1.2 (x86_64) passed

rac1 libstdc++-4.1.2-54.el5 (x86_64) libstdc++-4.1.2 (x86_64) passed

Result: Package existence check passed for "libstdc++-4.1.2 (x86_64)"

 

Check: Package existence for "libstdc++-devel-4.1.2 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 libstdc++-devel-4.1.2-54.el5 (x86_64) libstdc++-devel-4.1.2 (x86_64) passed

rac1 libstdc++-devel-4.1.2-54.el5 (x86_64) libstdc++-devel-4.1.2 (x86_64) passed

Result: Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"

 

Check: Package existence for "sysstat-7.0.2"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 sysstat-7.0.2-12.el5 sysstat-7.0.2 passed

rac1 sysstat-7.0.2-12.el5 sysstat-7.0.2 passed

Result: Package existence check passed for "sysstat-7.0.2"

 

Check: Package existence for "unixODBC-2.2.11 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 unixODBC-2.2.11-10.el5 (i386) unixODBC-2.2.11 (i386) passed

rac1 unixODBC-2.2.11-10.el5 (i386) unixODBC-2.2.11 (i386) passed

Result: Package existence check passed for "unixODBC-2.2.11 (i386)"

 

Check: Package existence for "unixODBC-2.2.11 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 unixODBC-2.2.11-10.el5 (x86_64) unixODBC-2.2.11 (x86_64) passed

rac1 unixODBC-2.2.11-10.el5 (x86_64) unixODBC-2.2.11 (x86_64) passed

Result: Package existence check passed for "unixODBC-2.2.11 (x86_64)"

 

Check: Package existence for "unixODBC-devel-2.2.11 (i386)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 unixODBC-devel-2.2.11-10.el5 (i386) unixODBC-devel-2.2.11 (i386) passed

rac1 unixODBC-devel-2.2.11-10.el5 (i386) unixODBC-devel-2.2.11 (i386) passed

Result: Package existence check passed for "unixODBC-devel-2.2.11 (i386)"

 

Check: Package existence for "unixODBC-devel-2.2.11 (x86_64)"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 unixODBC-devel-2.2.11-10.el5 (x86_64) unixODBC-devel-2.2.11 (x86_64) passed

rac1 unixODBC-devel-2.2.11-10.el5 (x86_64) unixODBC-devel-2.2.11 (x86_64) passed

Result: Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"

 

Check: Package existence for "ksh-20060214"

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 ksh-20100621-12.el5 ksh-20060214 passed

rac1 ksh-20100621-12.el5 ksh-20060214 passed

Result: Package existence check passed for "ksh-20060214"

 

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

 

Check: Current group ID

Result: Current group ID check passed

Checking Core file name pattern consistency...

Core file name pattern consistency check passed.

 

Checking to make sure user "grid" is not in "root" group

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 does not exist passed

rac1 does not exist passed

Result: User "grid" is not part of "root" group. Check passed

 

Check default user file creation mask

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 0022 0022 passed

rac1 0022 0022 passed

Result: Default user file creation mask check passed

 

Starting Clock synchronization checks using Network Time Protocol(NTP)...

 

NTP Configuration file check started...

Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes

 

Result: Clock synchronization check using Network Time Protocol(NTP) passed
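On this setup no NTP configuration exists, so the check passes and Oracle CTSS will handle time synchronization in active mode. If your nodes do have NTP configured but you prefer CTSS, it can be removed like this (a sketch for RHEL 5/6 style init scripts; run on every node):

```shell
service ntpd stop                      # stop the running daemon
chkconfig ntpd off                     # keep it from starting at boot
mv /etc/ntp.conf /etc/ntp.conf.bak     # CVU and CTSS check for this file's presence
```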

 

 

Pre-check for cluster services setup was unsuccessful on all the nodes.

[grid@rac1 grid]$

 

 

After fixing all the failed items, re-run the check until every problem is resolved.
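For the missing packages flagged above, a configured yum repository (for example a mounted install DVD) makes the fix quick. A sketch: the package names come from the failed checks, and the .i386 suffix selects the 32-bit variants on this EL5 kernel:

```shell
# Install the 32-bit packages the CVU reported as missing (run on both nodes).
yum install -y compat-libstdc++-33.i386 glibc-devel.i386 libaio-devel.i386
# Then re-run runcluvfy.sh and repeat until every check passes.
```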

  1. Start the installation

 

 

Installation log: /u01/app/oraInventory/logs/installActions2014-06-05_06-12-27AM.log

First start Xmanager - Passive, then set up the Xshell session as follows:

[grid@rhel_linux_asm grid]$ clear

[grid@rhel_linux_asm grid]$ export DISPLAY=192.168.1.100:0.0 --- this is the IP address of your local workstation (check with ipconfig)

[grid@rhel_linux_asm grid]$ xhost +

access control disabled, clients can connect from any host

[grid@rhel_linux_asm grid]$ ls

doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html

[grid@rhel_linux_asm grid]$ ./runInstaller

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 120 MB. Actual 31642 MB Passed

Checking swap space: must be greater than 150 MB. Actual 383 MB Passed

Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-04-29_10-53-18PM. Please wait ...[grid@rhel_linux_asm grid]$
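Before launching runInstaller it is worth confirming that X forwarding actually works. A quick sketch; 192.168.1.100 is the workstation running Xmanager in this article, so substitute your own address:

```shell
export DISPLAY=192.168.1.100:0.0   # the X server on your workstation
xhost +                            # allow any client (lab use only; insecure)
xclock &                           # a clock window should pop up on the workstation
```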

 

The installer startup screen:

(screenshot)

 

 

Install the Grid Infrastructure software through the following graphical screens:

 

(screenshots)

The SCAN Name must be the SCAN name configured earlier in /etc/hosts; otherwise an error is raised:
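For reference, the /etc/hosts entries from the earlier networking step look roughly like this; the SCAN name typed into the installer must match the scan entry exactly. The VIP and SCAN addresses below are illustrative assumptions, since only the public and private addresses appear in the CVU output above:

```shell
# Append the cluster name resolution to /etc/hosts on both nodes.
# VIP and SCAN addresses here are examples -- use the ones you planned.
cat >> /etc/hosts <<'EOF'
192.168.128.151  rac1
192.168.128.152  rac2
10.10.10.151     rac1-priv
10.10.10.152     rac2-priv
192.168.128.153  rac1-vip
192.168.128.154  rac2-vip
192.168.128.155  rac-scan
EOF
```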

(screenshot)

The correct configuration is:

 

(screenshots)

  1. Create the ASM disk group

(screenshots)

 

If a disk shows the state MEMBER, as below, it has been used before and must be reinitialized, i.e. repartitioned:

(screenshot)

 

Run fdisk /dev/sdb on each such disk, then resynchronize the partition tables on both nodes with partprobe:
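The reinitialization can be sketched as below. Treat the device name as an assumption and double-check it first, since this destroys whatever is on the disk:

```shell
# Wipe the old ASM header so the disk shows up as CANDIDATE again.
dd if=/dev/zero of=/dev/sdb bs=1M count=10   # zero the first 10 MB
fdisk /dev/sdb                               # recreate the partition interactively
partprobe /dev/sdb                           # re-read the partition table; run on BOTH nodes
```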

(screenshots)

All of the checks here should ultimately pass; for memory, at least 1.5 GB per node is recommended.

(screenshots)

 

This step takes a while:

(screenshot)

You can also track the installation progress from the directory size on the rac2 node; after the copy finishes it is about 2.9 GB:

Path: /u01/app/11.2.0

(screenshot)

 

 

 

 

  1. Run the root scripts

At about 76%, the installer prompts for the root scripts, as shown below:

Run the scripts on the local node first; only after they complete successfully, run them on the other nodes.

(screenshots)

 

If running the script fails at this step, deconfigure and run it again:

  1. /u01/app/grid/11.2.0/crs/install/roothas.pl -deconfig -force -verbose (for an Oracle Restart / standalone setup)
  2. /u01/app/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig -force (for a cluster node)
  3. /u01/app/grid/11.2.0/root.sh

     

    Log directory: /u01/app/grid/11.2.0/cfgtoollogs/crsconfig/

    Deconfiguration log file: hadelete.log

    root.sh script log: rootcrs_rac2.log
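Putting the recovery steps together, a failed root.sh on a cluster node can be retried roughly like this (a sketch using the grid home from this article):

```shell
GRID_HOME=/u01/app/grid/11.2.0
# Deconfigure the failed attempt on this node, then run root.sh again as root.
$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
$GRID_HOME/root.sh
# Watch the node-specific log while it runs (rootcrs_rac2.log on rac2, etc.).
tail -f $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_rac2.log
```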

     

    On the rac1 node:

    [root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh

    Changing permissions of /u01/app/oraInventory.

    Adding read,write permissions for group.

    Removing read,write,execute permissions for world.

     

    Changing groupname of /u01/app/oraInventory to oinstall.

    The execution of the script is complete.

    [root@rac1 ~]# /u01/app/grid/11.2.0/root.sh

    Running Oracle 11g root.sh script...

     

    The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /u01/app/grid/11.2.0

     

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    Copying dbhome to /usr/local/bin ...

    Copying oraenv to /usr/local/bin ...

    Copying coraenv to /usr/local/bin ...

     

     

    Creating /etc/oratab file...

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.

    Now product-specific root actions will be performed.

    2014-06-04 12:03:15: Parsing the host name

    2014-06-04 12:03:15: Checking for super user privileges

    2014-06-04 12:03:15: User has super user privileges

    Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params

    Creating trace directory

    LOCAL ADD MODE

    Creating OCR keys for user 'root', privgrp 'root'..

    Operation successful.

    root wallet

    root wallet cert

    root cert export

    peer wallet

    profile reader wallet

    pa wallet

    peer wallet keys

    pa wallet keys

    peer cert request

    pa cert request

    peer cert

    pa cert

    peer root cert TP

    profile reader root cert TP

    pa root cert TP

    peer pa cert TP

    pa peer cert TP

    profile reader pa cert TP

    profile reader peer cert TP

    peer user cert

    pa user cert

    Adding daemon to inittab

    CRS-4123: Oracle High Availability Services has been started.

    ohasd is starting

    CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

    CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

    CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

    CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

    CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

    CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

    CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

    CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'

    CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

     

    ASM created and started successfully.

     

    DiskGroup CRS created successfully.

     

    clscfg: -install mode specified

    Successfully accumulated necessary OCR keys.

    Creating OCR keys for user 'root', privgrp 'root'..

    Operation successful.

    CRS-2672: Attempting to start 'ora.crsd' on 'rac1'

    CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

    CRS-4256: Updating the profile

    Successful addition of voting disk 271f9e0c141e4f06bf2cf3938f95d2b8.

    Successfully replaced voting disk group with +CRS.

    CRS-4256: Updating the profile

    CRS-4266: Voting file(s) successfully replaced

    ## STATE File Universal Id File Name Disk group

    -- ----- ----------------- --------- ---------

    1. ONLINE 271f9e0c141e4f06bf2cf3938f95d2b8 (/dev/ocrb) [CRS]

    Located 1 voting disk(s).

    CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

    CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

    CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

    CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

    CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

    CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'

    CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded

    CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

    CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

    CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

    CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

    CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

    CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

    CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

    CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

    CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

    CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

    CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

    CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

    CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

    CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

    CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'

    CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.asm' on 'rac1'

    CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.crsd' on 'rac1'

    CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.evmd' on 'rac1'

    CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.asm' on 'rac1'

    CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.CRS.dg' on 'rac1'

    CRS-2676: Start of 'ora.CRS.dg' on 'rac1' succeeded

    CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'

    CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded

     

    rac1 2014/06/04 12:11:19 /u01/app/grid/11.2.0/cdata/rac1/backup_20140604_121119.olr

    Configure Oracle Grid Infrastructure for a Cluster ... succeeded

    Updating inventory properties for clusterware

    Starting Oracle Universal Installer...

     

    Checking swap space: must be greater than 500 MB. Actual 1795 MB Passed

    The inventory pointer is located at /etc/oraInst.loc

    The inventory is located at /u01/app/oraInventory

    'UpdateNodeList' was successful.

     

    [root@rac1 ~]#

     

On node rac2:

[root@rac2 soft]# /oracle/app/grid/product/11.2.0/root.sh

     

    Running Oracle 11g root.sh script...

    The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /oracle/app/grid/product/11.2.0

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying dbhome to /usr/local/bin ...

    The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying coraenv to /usr/local/bin ...

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.

    Now product-specific root actions will be performed.

    2010-08-02 14:32:28: Parsing the host name

    2010-08-02 14:32:28: Checking for super user privileges

    2010-08-02 14:32:28: User has super user privileges

    Using configuration parameter file:

    /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params

    Creating trace directory

    LOCAL ADD MODE

    Creating OCR keys for user 'root', privgrp 'root'..

    Operation successful.

    Adding daemon to inittab

    CRS-4123: Oracle High Availability Services has been started.

    ohasd is starting

    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on

    node rac1, number 1, and is terminating

    An active cluster was found during exclusive startup, restarting to join the cluster

    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'

    CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'

    CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'

    CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'

    CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.cssd' on 'rac2'

    CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'

    CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded

    CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'

    CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'

    CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.asm' on 'rac2'

    CRS-2676: Start of 'ora.asm' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.crsd' on 'rac2'

    CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded

    CRS-2672: Attempting to start 'ora.evmd' on 'rac2'

    CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded

    rac2 2010/08/02 14:37:51

    /oracle/app/grid/product/11.2.0/cdata/rac2/backup_20100802_143751.olr

    Configure Oracle Grid Infrastructure for a Cluster ... succeeded

    Updating inventory properties for clusterware

    Starting Oracle Universal Installer...

    Checking swap space: must be greater than 500 MB. Actual 1202MB Passed

    The inventory pointer is located at /etc/oraInst.loc

    The inventory is located at /oracle/app/oraInventory

    'UpdateNodeList' was successful.
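On both nodes, a successful root.sh run ends with the same marker line seen in the logs above. The following is a minimal sketch of a post-run check, assuming the root.sh output was captured to a log file; the /tmp path, the sample log content, and the helper name are illustrative, not part of the Oracle tooling:

```shell
#!/bin/sh
# Check that a captured root.sh log contains the success marker shown above.
check_rootsh_log() {
  # $1 = log file, $2 = node name (used only in the message)
  if grep -q 'Configure Oracle Grid Infrastructure for a Cluster ... succeeded' "$1"; then
    echo "root.sh on $2: OK"
  else
    echo "root.sh on $2: FAILED - review the log" >&2
    return 1
  fi
}

# Illustrative sample; in practice capture with: ./root.sh 2>&1 | tee /tmp/rac2_rootsh.log
cat > /tmp/rac2_rootsh.log <<'EOF'
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
'UpdateNodeList' was successful.
EOF

check_rootsh_log /tmp/rac2_rootsh.log rac2
```

If the marker is missing, the usual cause in this setup is the shared-disk preparation from the previous chapter; fix it and rerun root.sh.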

     

1. Run log for version 11.2.0.3:

At this point the clusterware services are up, and an ASM instance has started on each node. You can now check the cluster with crs_stat -t; the output contains 20 resources:

     

    [grid@rac1 ~]$ crs_stat -t

    Name Type Target State Host

    ------------------------------------------------------------

    ora....ER.lsnr ora....er.type ONLINE ONLINE rac1

    ora....N1.lsnr ora....er.type ONLINE ONLINE rac1

    ora.OCR.dg ora....up.type ONLINE ONLINE rac1

    ora.asm ora.asm.type ONLINE ONLINE rac1

    ora.cvu ora.cvu.type ONLINE ONLINE rac1

    ora.gsd ora.gsd.type ONLINE OFFLINE

    ora....network ora....rk.type ONLINE ONLINE rac1

    ora.oc4j ora.oc4j.type ONLINE ONLINE rac1

    ora.ons ora.ons.type ONLINE ONLINE rac1

    ora....SM1.asm application ONLINE ONLINE rac1

    ora....C1.lsnr application ONLINE ONLINE rac1

    ora.rac1.gsd application ONLINE OFFLINE

    ora.rac1.ons application ONLINE ONLINE rac1

    ora.rac1.vip ora....t1.type ONLINE ONLINE rac1

    ora....SM2.asm application ONLINE ONLINE rac2

    ora....C2.lsnr application ONLINE ONLINE rac2

    ora.rac2.gsd application ONLINE OFFLINE

    ora.rac2.ons application ONLINE ONLINE rac2

    ora.rac2.vip ora....t1.type ONLINE ONLINE rac2

    ora.scan1.vip ora....ip.type ONLINE ONLINE rac1

    [grid@rac1 ~]$
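The "20 resources" claim can be checked mechanically: every resource line in crs_stat -t output starts with "ora.". A sketch that counts them from a saved copy of the listing above (the /tmp sample file is illustrative; on a live cluster pipe crs_stat -t in directly):

```shell
#!/bin/sh
# Count resources in crs_stat -t output: every resource line starts with "ora.".
cat > /tmp/crs_stat_sample.txt <<'EOF'
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.OCR.dg ora....up.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type ONLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
EOF

n=$(grep -c '^ora\.' /tmp/crs_stat_sample.txt)
echo "resources: $n"   # prints "resources: 20"
```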

     

     

     


     

1. Verification

Confirm that the Grid installation succeeded.

CRS status:

[grid@rac01 ~]$ crs_stat -t   (or: crsctl stat res -t)

    Name Type Target State Host

    ------------------------------------------------------------

    ora.CRSDG.dg ora....up.type ONLINE ONLINE rac01

    ora....ER.lsnr ora....er.type ONLINE ONLINE rac01

    ora....N1.lsnr ora....er.type ONLINE ONLINE rac01

    ora.asm ora.asm.type ONLINE ONLINE rac01

    ora.eons ora.eons.type ONLINE ONLINE rac01

    ora.gsd ora.gsd.type OFFLINE OFFLINE

    ora....network ora....rk.type ONLINE ONLINE rac01

    ora.oc4j ora.oc4j.type OFFLINE OFFLINE

    ora.ons ora.ons.type ONLINE ONLINE rac01

    ora....SM1.asm application ONLINE ONLINE rac01

    ora....01.lsnr application ONLINE ONLINE rac01

    ora.rac01.gsd application OFFLINE OFFLINE

    ora.rac01.ons application ONLINE ONLINE rac01

    ora.rac01.vip ora....t1.type ONLINE ONLINE rac01

    ora....SM2.asm application ONLINE ONLINE rac02

    ora....02.lsnr application ONLINE ONLINE rac02

    ora.rac02.gsd application OFFLINE OFFLINE

    ora.rac02.ons application ONLINE ONLINE rac02

    ora.rac02.vip ora....t1.type ONLINE ONLINE rac02

    ora.scan1.vip ora....ip.type ONLINE ONLINE rac01

The four services shown as OFFLINE (ora.gsd, ora.oc4j, ora.rac01.gsd, ora.rac02.gsd) are optional in 11g and are OFFLINE by default; they can safely be ignored.

Voting disk status:

    [grid@rac01 ~]$ crsctl query css votedisk

    ## STATE File Universal Id File Name Disk group

    -- ----- ----------------- --------- ---------

    1. ONLINE 7b8903f49cc84fa8bf06d199bdf5dfe3 (ORCL:DISK01) [CRSDG]

     

     

OCR status:

    [grid@rac01 ~]$ ocrcheck

    Status of Oracle Cluster Registry is as follows :

    Version : 3

    Total space (kbytes) : 262120

    Used space (kbytes) : 2264

    Available space (kbytes) : 259856

    ID : 1510360228

    Device/File Name : +CRSDG

    Device/File integrity check succeeded

    Device/File not configured

    Device/File not configured

    Device/File not configured

    Device/File not configured

    Cluster registry integrity check succeeded

    Logical corruption check bypassed due to non-privileged user

Test the GI installation

node1

     

[root@node1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:79:33:95
inet addr:192.168.1.51 Bcast:192.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::20c:29ff:fe79:3395/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:977978 errors:0 dropped:1345 overruns:0 frame:0
TX packets:2525875 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:106995897 (102.0 MiB) TX bytes:3573509233 (3.3 GiB)

eth0:1 Link encap:Ethernet HWaddr 00:0C:29:79:33:95
inet addr:192.168.1.151 Bcast:192.168.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:3 Link encap:Ethernet HWaddr 00:0C:29:79:33:95
inet addr:192.168.1.58 Bcast:192.168.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:4 Link encap:Ethernet HWaddr 00:0C:29:79:33:95
inet addr:192.168.1.59 Bcast:192.168.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:0C:29:79:33:9F
inet addr:172.168.1.51 Bcast:172.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::20c:29ff:fe79:339f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:728960 errors:0 dropped:1345 overruns:0 frame:0
TX packets:13833 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:54104908 (51.5 MiB) TX bytes:7561084 (7.2 MiB)

eth1:1 Link encap:Ethernet HWaddr 00:0C:29:79:33:9F
inet addr:169.254.201.146 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:13162 errors:0 dropped:0 overruns:0 frame:0
TX packets:13162 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:7783412 (7.4 MiB) TX bytes:7783412 (7.4 MiB)

[root@node1 ~]# ps -ef|egrep -i "asm|listener"
grid 24390 1 0 10:03 ? 00:00:00 asm_pmon_+ASM1
grid 24392 1 0 10:03 ? 00:00:00 asm_psp0_+ASM1
grid 24394 1 1 10:03 ? 00:00:18 asm_vktm_+ASM1
grid 24398 1 0 10:03 ? 00:00:00 asm_gen0_+ASM1
grid 24400 1 0 10:03 ? 00:00:00 asm_diag_+ASM1
grid 24402 1 0 10:03 ? 00:00:00 asm_ping_+ASM1
grid 24404 1 0 10:03 ? 00:00:02 asm_dia0_+ASM1
grid 24406 1 0 10:03 ? 00:00:02 asm_lmon_+ASM1
grid 24408 1 0 10:03 ? 00:00:01 asm_lmd0_+ASM1
grid 24410 1 0 10:03 ? 00:00:02 asm_lms0_+ASM1
grid 24414 1 0 10:03 ? 00:00:00 asm_lmhb_+ASM1
grid 24416 1 0 10:03 ? 00:00:00 asm_mman_+ASM1
grid 24418 1 0 10:03 ? 00:00:00 asm_dbw0_+ASM1
grid 24420 1 0 10:03 ? 00:00:00 asm_lgwr_+ASM1
grid 24422 1 0 10:03 ? 00:00:00 asm_ckpt_+ASM1
grid 24424 1 0 10:03 ? 00:00:00 asm_smon_+ASM1
grid 24426 1 0 10:03 ? 00:00:00 asm_rbal_+ASM1
grid 24428 1 0 10:03 ? 00:00:00 asm_gmon_+ASM1
grid 24430 1 0 10:03 ? 00:00:00 asm_mmon_+ASM1
grid 24432 1 0 10:03 ? 00:00:00 asm_mmnl_+ASM1
grid 24434 1 0 10:03 ? 00:00:00 asm_lck0_+ASM1
grid 24436 1 0 10:03 ? 00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 24466 1 0 10:03 ? 00:00:01 oracle+ASM1_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 24471 1 0 10:03 ? 00:00:00 asm_asmb_+ASM1
grid 24473 1 0 10:03 ? 00:00:00 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 24876 1 0 10:04 ? 00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 25269 1 0 10:05 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
grid 25283 1 0 10:05 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
grid 26105 1 0 10:15 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid 28183 28182 0 10:21 ? 00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
root 28263 2146 0 10:26 pts/2 00:00:00 egrep -i asm|listener

    node2

[root@node2 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:5C:FC:76
inet addr:192.168.1.52 Bcast:192.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::20c:29ff:fe5c:fc76/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3068626 errors:0 dropped:1348 overruns:0 frame:0
TX packets:185731 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3505670277 (3.2 GiB) TX bytes:39520990 (37.6 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:0C:29:5C:FC:76
inet addr:192.168.1.57 Bcast:192.168.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:2 Link encap:Ethernet HWaddr 00:0C:29:5C:FC:76
inet addr:192.168.1.152 Bcast:192.168.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:0C:29:5C:FC:80
inet addr:172.168.1.52 Bcast:172.168.255.255 Mask:255.255.0.0
inet6 addr: fe80::20c:29ff:fe5c:fc80/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:729233 errors:0 dropped:1348 overruns:0 frame:0
TX packets:15630 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:53620798 (51.1 MiB) TX bytes:8883597 (8.4 MiB)

eth1:1 Link encap:Ethernet HWaddr 00:0C:29:5C:FC:80
inet addr:169.254.30.23 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6049 errors:0 dropped:0 overruns:0 frame:0
TX packets:6049 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2377782 (2.2 MiB) TX bytes:2377782 (2.2 MiB)

[root@node2 ~]# ps -ef|egrep -i "asm|listener"
grid 21049 1 0 10:09 ? 00:00:00 asm_pmon_+ASM2
grid 21051 1 0 10:09 ? 00:00:00 asm_psp0_+ASM2
grid 21053 1 1 10:09 ? 00:00:14 asm_vktm_+ASM2
grid 21057 1 0 10:09 ? 00:00:00 asm_gen0_+ASM2
grid 21059 1 0 10:09 ? 00:00:00 asm_diag_+ASM2
grid 21061 1 0 10:09 ? 00:00:00 asm_ping_+ASM2
grid 21063 1 0 10:09 ? 00:00:01 asm_dia0_+ASM2
grid 21065 1 0 10:09 ? 00:00:01 asm_lmon_+ASM2
grid 21067 1 0 10:09 ? 00:00:00 asm_lmd0_+ASM2
grid 21069 1 0 10:09 ? 00:00:02 asm_lms0_+ASM2
grid 21073 1 0 10:09 ? 00:00:00 asm_lmhb_+ASM2
grid 21075 1 0 10:09 ? 00:00:00 asm_mman_+ASM2
grid 21077 1 0 10:09 ? 00:00:00 asm_dbw0_+ASM2
grid 21079 1 0 10:09 ? 00:00:00 asm_lgwr_+ASM2
grid 21081 1 0 10:09 ? 00:00:00 asm_ckpt_+ASM2
grid 21083 1 0 10:09 ? 00:00:00 asm_smon_+ASM2
grid 21085 1 0 10:09 ? 00:00:00 asm_rbal_+ASM2
grid 21087 1 0 10:09 ? 00:00:00 asm_gmon_+ASM2
grid 21089 1 0 10:09 ? 00:00:00 asm_mmon_+ASM2
grid 21091 1 0 10:09 ? 00:00:00 asm_mmnl_+ASM2
grid 21093 1 0 10:09 ? 00:00:00 asm_lck0_+ASM2
grid 21095 1 0 10:09 ? 00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 21128 1 0 10:09 ? 00:00:00 oracle+ASM2_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 21130 1 0 10:09 ? 00:00:00 asm_asmb_+ASM2
grid 21132 1 0 10:09 ? 00:00:00 oracle+ASM2_asmb_+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 21271 1 0 10:09 ? 00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 21326 1 0 10:09 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
grid 22068 1 0 10:15 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
root 23551 1979 0 10:26 pts/2 00:00:00 egrep -i asm|listener
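A quick scripted sanity check on the ps listings above: the core ASM background processes (pmon, smon, lgwr, dbw0, ckpt) should all be present. The sketch below runs against a trimmed sample of the node2 output; the /tmp path is illustrative, and on a live node you would pipe `ps -ef` output in directly:

```shell
#!/bin/sh
# Trimmed sample of the ps output shown above (illustrative file path).
cat > /tmp/ps_asm_sample.txt <<'EOF'
grid 21049 1 0 10:09 ? 00:00:00 asm_pmon_+ASM2
grid 21077 1 0 10:09 ? 00:00:00 asm_dbw0_+ASM2
grid 21079 1 0 10:09 ? 00:00:00 asm_lgwr_+ASM2
grid 21081 1 0 10:09 ? 00:00:00 asm_ckpt_+ASM2
grid 21083 1 0 10:09 ? 00:00:00 asm_smon_+ASM2
EOF

missing=0
for p in pmon smon lgwr dbw0 ckpt; do
  if grep -q "asm_${p}_" /tmp/ps_asm_sample.txt; then
    echo "$p: present"
  else
    echo "$p: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing processes: $missing"
```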

Check the CRS status

     

[grid@node2 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
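All four components must report online before proceeding. A small sketch that turns this check into a pass/fail result; the sample file mirrors the output above, and on a live node you would capture `crsctl check crs` output instead of using the illustrative /tmp file:

```shell
#!/bin/sh
# Sample of the crsctl check crs output shown above.
cat > /tmp/crsctl_check_sample.txt <<'EOF'
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
EOF

online=$(grep -c 'is online$' /tmp/crsctl_check_sample.txt)
if [ "$online" -eq 4 ]; then
  echo "CRS stack healthy (4/4 online)"
else
  echo "CRS stack degraded ($online/4 online)"
fi
```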

     

Check the clusterware resources. Note that crs_stat is deprecated in 11gR2; crsctl stat res -t is recommended instead.

[grid@node2 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CRS.dg ora....up.type ONLINE ONLINE node1
ora....ER.lsnr ora....er.type ONLINE ONLINE node1
ora....N1.lsnr ora....er.type ONLINE ONLINE node2
ora....N2.lsnr ora....er.type ONLINE ONLINE node1
ora....N3.lsnr ora....er.type ONLINE ONLINE node1
ora.asm ora.asm.type ONLINE ONLINE node1
ora.cvu ora.cvu.type ONLINE ONLINE node1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application OFFLINE OFFLINE
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip ora....t1.type ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application OFFLINE OFFLINE
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip ora....t1.type ONLINE ONLINE node2
ora.oc4j ora.oc4j.type ONLINE ONLINE node1
ora.ons ora.ons.type ONLINE ONLINE node1
ora.scan1.vip ora....ip.type ONLINE ONLINE node2
ora.scan2.vip ora....ip.type ONLINE ONLINE node1
ora.scan3.vip ora....ip.type ONLINE ONLINE node1
[grid@node2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node1
ora.cvu
1 ONLINE ONLINE node1
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.oc4j
1 ONLINE ONLINE node1
ora.scan1.vip
1 ONLINE ONLINE node2
ora.scan2.vip
1 ONLINE ONLINE node1
ora.scan3.vip
1 ONLINE ONLINE node1

Check the cluster nodes

[grid@node2 ~]$ olsnodes -n
node1 1
node2 2

Check the CRS version

[grid@node2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

Check the Oracle Cluster Registry (OCR)

[grid@node2 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2588
Available space (kbytes) : 259532
ID : 1606856820
Device/File Name : +CRS
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

Check the voting disk

[grid@node2 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4b4ef03676d84facbf55c02b8c058a07 (/dev/asm-diskc) [CRS]
Located 1 voting disk(s).

Check ASM

[grid@node2 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
[grid@node2 ~]$ srvctl status asm
ASM is running on node2,node1
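The one-line srvctl status is easy to consume in scripts. A sketch that splits the node list out of the message (the status string is hard-coded here from the output above; in practice you would capture `srvctl status asm` output):

```shell
#!/bin/sh
# Parse "ASM is running on node2,node1" into individual node names.
status="ASM is running on node2,node1"
nodes=$(echo "${status#ASM is running on }" | tr ',' ' ')
count=0
for n in $nodes; do
  echo "ASM instance up on: $n"
  count=$((count + 1))
done
echo "node count: $count"   # prints "node count: 2"
```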

     

[grid@node2 ~]$ uname -p
x86_64
[grid@node2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 10:45:13 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set linesize 100
SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ---------------------- ------------------------------
spfile string +CRS/cluster-scan/asmparameter
file/registry.253.803296901
SQL> select path from v$asm_disk;

PATH
----------------------------------------------------------------------------------------------------
/dev/asm-diskg
/dev/asm-diskf
/dev/asm-diske
/dev/asm-diskb
/dev/asm-diskc
/dev/asm-diskd

6 rows selected.

     

     

ASM disk group configuration

Check the listener status:

    [grid@rac01 ~]$ lsnrctl status

    LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 16-MAR-2011 16:24:36

     

    Copyright (c) 1991, 2009, Oracle. All rights reserved.

     

    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))

    STATUS of the LISTENER

    ------------------------

    Alias LISTENER

    Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production

    Start Date 16-MAR-2011 14:27:14

    Uptime 0 days 1 hr. 57 min. 27 sec

    Trace Level off

    Security ON: Local OS Authentication

    SNMP OFF

    Listener Parameter File /u01/app/grid/11.2/network/admin/listener.ora

    Listener Log File /u01/app/oracle/diag/tnslsnr/rac01/listener/alert/log.xml

    Listening Endpoints Summary...

    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))

    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.211)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.213)(PORT=1521)))

    Services Summary...

    Service "+ASM" has 1 instance(s).

    Instance "+ASM1", status READY, has 1 handler(s) for this service...

    The command completed successfully

    [grid@rac01 ~]$

     

If the +ASM service is not listed, the listener must be reconfigured.

     

1. Create the remaining disk groups

As the grid user, run the asmca command.

Use asmca to create two disk groups, DATA and FLASHDG.


DATA created successfully; next, create FLASHDG:


FLASHDG created successfully; exit ASMCA.

Verify that both disk groups are mounted, for example with asmcmd lsdg as the grid user or by querying v$asm_diskgroup in SQL*Plus.
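As a scripted alternative to checking screenshots, the mounted state can be confirmed from saved `asmcmd lsdg` output. The sketch below uses a simplified sample listing (the /tmp path and trimmed columns are illustrative; real lsdg output has many more columns):

```shell
#!/bin/sh
# Simplified sample of "asmcmd lsdg" output (illustrative; real output has more columns).
cat > /tmp/lsdg_sample.txt <<'EOF'
State    Type    Rebal  Name
MOUNTED  EXTERN  N      CRS/
MOUNTED  EXTERN  N      DATA/
MOUNTED  EXTERN  N      FLASHDG/
EOF

for dg in DATA FLASHDG; do
  if grep -q "MOUNTED.* ${dg}/" /tmp/lsdg_sample.txt; then
    echo "$dg: MOUNTED"
  else
    echo "$dg: not found - recheck the ASMCA step"
  fi
done
```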
