Oracle 12cR1 RAC cluster installation series:
Oracle 12cR1 RAC Installation (1): Environment Preparation
Oracle 12cR1 RAC Installation (2): Installing with the Graphical Installer
Oracle 12cR1 RAC Installation (3): Silent Installation
------------------------------------------------------------------------------------------------------------
Basic environment
Operating system | RedHat 6.7 |
Database version | 12.1.0.2 |
Database name | testdb |
Database instances | testdb1, testdb2 |
(I) Server Hardware Requirements
Item | Requirement |
NIC | At least 2 NICs per server: --Public NIC: at least 1 Gbps --Private NIC: at least 1 Gbps, 10 Gbps recommended; used for internal communication between cluster nodes. Note: the NIC interface names must be identical on all nodes. For example, if node 1 uses eth0 as its public NIC, then node 2 must also use eth0 as its public NIC. |
Memory | Depends on whether GI is installed: --At least 1 GB for a single-node database only --At least 4 GB when installing GI |
Temporary disk space | At least 1 GB in /tmp |
Local disk space | Disk space requirements: --At least 8 GB for the Grid home; Oracle recommends 100 GB to leave room for future patching --At least 12 GB for the Grid base, which mainly holds Oracle Clusterware and Oracle ASM log files --An extra 10 GB under the GI base for TFA data --Another 6.4 GB if the Oracle database software will also be installed. Recommendation: if disk space allows, give GI and Oracle 100 GB each |
Swap space | Swap requirements scale with RAM: --1 GB to 2 GB of RAM: 1.5 times RAM --2 GB to 16 GB of RAM: equal to RAM --More than 16 GB of RAM: 16 GB |
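The swap sizing guideline can be captured in a small helper; a minimal sketch using integer arithmetic (thresholds as described above):

```shell
# Recommended swap size (GB) for a given amount of RAM (GB):
# 1-2 GB RAM -> 1.5x RAM (rounded down), 2-16 GB -> same as RAM,
# above 16 GB -> capped at 16 GB.
swap_gb() {
  ram=$1
  if [ "$ram" -le 2 ]; then
    echo $(( ram * 3 / 2 ))
  elif [ "$ram" -le 16 ]; then
    echo "$ram"
  else
    echo 16
  fi
}

swap_gb 8    # an 8 GB host needs 8 GB of swap
swap_gb 32   # capped at 16 GB
```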
(II) Server IP Address Plan
Server | Public IP | Virtual IP (VIP) | SCAN IP | Private IP |
node1 | eth0: 192.168.10.11 | 192.168.10.13 | 192.168.10.10 | eth1: 10.10.10.11 |
node2 | eth0: 192.168.10.12 | 192.168.10.14 | 192.168.10.10 (same) | eth1: 10.10.10.12 |
Notes:
1. Two subnets are involved: the public, virtual, and SCAN addresses must all be on one subnet (192.168.10.0/24), and the private addresses on another (10.10.10.0/24).
2. Host names must not contain an underscore "_": host_node1, for example, is invalid. A hyphen "-" is allowed.
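A quick pre-flight check can catch an invalid host name before installation; a minimal sketch that only enforces the underscore/hyphen rule described above:

```shell
# Returns 0 if the host name is acceptable for RAC (letters, digits,
# and hyphens only -- no underscores), non-zero otherwise.
valid_hostname() {
  case "$1" in
    *_*) return 1 ;;   # underscores are not allowed
    *)   expr "x$1" : 'x[A-Za-z0-9-]\{1,\}$' >/dev/null ;;
  esac
}

valid_hostname host-node1 && echo "host-node1 ok"
valid_hostname host_node1 || echo "host_node1 rejected"
```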
(III) Configure Host Networking
(1) Set the host name
Using node 1 as an example; reboot the host for the change to take effect.
[root@template ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node1
(2) Set the IP addresses
The public and private IPs on every server must be configured (the VIPs are managed by Clusterware and are not configured on the NICs). Using node 1's public IP as an example, with eth0 as the public NIC, edit the eth0 configuration as follows:
# enter the NIC configuration directory
cd /etc/sysconfig/network-scripts/
# edit the eth0 configuration
vim ifcfg-eth0
DEVICE=eth0
HWADDR=:0c::f8::bb
TYPE=Ethernet
ONBOOT=yes
IPADDR=192.168.10.11
NETMASK=255.255.255.0
Configure the other NICs the same way, then restart the network service:
[root@node1 ~]# service network restart
The final NIC configuration on both servers follows the IP plan above (node1: 192.168.10.11 / 10.10.10.11; node2: 192.168.10.12 / 10.10.10.12).
(3) Edit the /etc/hosts file; make the same change on both nodes
[root@node1 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.11 node1
192.168.10.12 node2
192.168.10.13 node1-vip
192.168.10.14 node2-vip
192.168.10.10 node-scan
10.10.10.11 node1-priv
10.10.10.12 node2-priv
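The address plan is regular enough that the entries can be generated rather than typed by hand; a sketch using the IP ranges of this document:

```shell
# Emit the /etc/hosts entries for the two-node plan above:
# public IPs end in 11/12, VIPs in 13/14, private IPs in 11/12
# on 10.10.10.0/24, plus the single shared SCAN address.
gen_rac_hosts() {
  for i in 1 2; do
    printf '192.168.10.1%s node%s\n'     "$i" "$i"
    printf '192.168.10.1%s node%s-vip\n' "$((i + 2))" "$i"
    printf '10.10.10.1%s node%s-priv\n'  "$i" "$i"
  done
  printf '192.168.10.10 node-scan\n'
}

gen_rac_hosts   # append the output to /etc/hosts on both nodes
```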
(4) Disable the firewall
# temporarily; the previous state returns after a reboot
service iptables stop
# permanently; takes effect after a reboot
chkconfig iptables off
(5) Disable SELinux
Change SELINUX=enforcing to SELINUX=disabled:
[root@node1 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Reboot the server for the change to take effect.
(IV) Create Users and Groups, Create Installation Directories, Configure Environment Variables
(1) Create the oracle and grid users and the related groups. The numeric IDs below follow the examples in Oracle's documentation; any values work as long as they are identical on both nodes.
/usr/sbin/groupadd -g 54321 oinstall
/usr/sbin/groupadd -g 54329 asmadmin
/usr/sbin/groupadd -g 54327 asmdba
/usr/sbin/groupadd -g 54328 asmoper
/usr/sbin/groupadd -g 54322 dba
/usr/sbin/groupadd -g 54323 oper
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 54321 -g oinstall -G dba,asmdba,oper oracle
(2) Create the GI and Oracle installation directories and set ownership and permissions
mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
The resulting layout: /u01/app/12.1.0/grid (Grid home), /u01/app/grid (Grid base), /u01/app/oracle (Oracle base).
(3) Configure the grid user's environment variables
[grid@node1 ~]$ vi .bash_profile
# append the following at the end of the file
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.1.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
Run "source .bash_profile" to make the variables take effect.
Note: on node 2, set ORACLE_SID=+ASM2 instead.
(4) Configure the oracle user's environment variables
[oracle@node1 ~]$ vim .bash_profile
# append the following at the end of the file
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=testdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
Run "source .bash_profile" to make the variables take effect.
Note: on node 2, set ORACLE_SID=testdb2 instead.
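Since the per-node SIDs follow directly from the node number, the profiles can be made node-agnostic; a minimal sketch (it assumes host names end in the node number, as in this document):

```shell
# Derive the instance SIDs from the trailing digits of the host name:
# node1 -> +ASM1 / testdb1, node2 -> +ASM2 / testdb2.
node_num() { echo "$1" | sed 's/.*[^0-9]//'; }

asm_sid() { echo "+ASM$(node_num "$1")"; }
db_sid()  { echo "testdb$(node_num "$1")"; }

asm_sid "$(hostname -s)"   # on node1 this prints +ASM1
```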
(V) Configure Kernel Parameters and Resource Limits
(1) Configure the operating-system kernel parameters
Append the following to the end of /etc/sysctl.conf. The values follow Oracle's 12c minimum recommendations (shmmax/shmall should be raised on hosts with more RAM); the net.ipv4.tcp_rmem/tcp_wmem lines are commonly used RAC tunings.
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_wmem = 4096 262144 4194304
net.ipv4.tcp_rmem = 4096 262144 4194304
kernel.panic_on_oops = 1
Run sysctl -p to apply.
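Two of these values depend on physical RAM: Oracle recommends kernel.shmmax of roughly half of RAM in bytes, and kernel.shmall of shmmax divided by the page size. A sketch of the arithmetic, assuming a 4 KB page size:

```shell
# Compute suggested shmmax/shmall for a host with the given RAM (GB).
# shmmax = half of RAM in bytes; shmall = shmmax / page size (in pages).
shm_params() {
  ram_gb=$1
  page=${2:-4096}
  shmmax=$(( ram_gb * 1024 * 1024 * 1024 / 2 ))
  shmall=$(( shmmax / page ))
  echo "kernel.shmmax = $shmmax"
  echo "kernel.shmall = $shmall"
}

shm_params 8   # 8 GB host: shmmax = 4294967296, shmall = 1048576
```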
(2) Configure resource limits for the oracle and grid users
Append the following to /etc/security/limits.conf (the values are Oracle's documented minimums):
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
(3) Edit /etc/pam.d/login and append the following line:
session required pam_limits.so
(VI) Install Required Packages
Use yum to install any missing packages. Oracle 12c requires the following (versions are minimums):
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (i686)
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (i686)
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (i686)
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6 (i686)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6 (i686)
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6 (i686)
libXext-1.1 (x86_64)
libXext-1.1 (i686)
libXtst-1.0.99.2 (x86_64)
libXtst-1.0.99.2 (i686)
libX11-1.3 (x86_64)
libX11-1.3 (i686)
libXau-1.0.5 (x86_64)
libXau-1.0.5 (i686)
libxcb-1.5 (x86_64)
libxcb-1.5 (i686)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
Here x86_64 denotes the 64-bit package and i686 the 32-bit package; on a 64-bit system, install both architectures where both are listed, since Oracle requires the 32-bit variants of several libraries.
Install them with:
yum install -y binutils*
yum install -y compat-libcap1*
yum install -y compat-libstdc++*
yum install -y gcc*
yum install -y gcc-c++*
yum install -y glibc*
yum install -y glibc-devel*
yum install -y ksh
yum install -y libgcc*
yum install -y libstdc++*
yum install -y libstdc++-devel*
yum install -y libaio*
yum install -y libaio-devel*
yum install -y libXext*
yum install -y libXtst*
yum install -y libX11*
yum install -y libXau*
yum install -y libxcb*
yum install -y libXi*
yum install -y make*
yum install -y sysstat*
(VII) Configure Shared Disks
Oracle's size requirements for the disk group holding OCR are:
File types stored | Number of volumes (disks) | Volume size |
Voting files with external redundancy | 1 | At least 300 MB per voting file volume |
Oracle Cluster Registry (OCR) with external redundancy plus the Grid Infrastructure Management Repository | 1 | At least 5.9 GB for the OCR volume containing the Grid Infrastructure Management Repository (5.2 GB + 300 MB voting files + 400 MB OCR), plus 500 MB per node for clusters with more than four nodes. For example, the allocation for a six-node cluster should be 6.9 GB. |
Oracle Clusterware files (OCR and voting files) and Grid Infrastructure Management Repository, with redundancy provided by Oracle software | 3 | At least 400 MB per OCR volume; for example, 14.1 GB for a six-node cluster. |
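The external-redundancy sizing rule in the table reduces to simple arithmetic; a sketch in MB (5.2 GB treated as 5200 MB):

```shell
# Minimum OCR volume size (MB) with external redundancy and the GI
# Management Repository: 5200 (GIMR) + 300 (voting) + 400 (OCR),
# plus 500 MB per node beyond four nodes.
ocr_volume_mb() {
  nodes=$1
  base=$(( 5200 + 300 + 400 ))
  if [ "$nodes" -gt 4 ]; then
    base=$(( base + (nodes - 4) * 500 ))
  fi
  echo "$base"
}

ocr_volume_mb 2   # four nodes or fewer: 5900 MB (5.9 GB)
ocr_volume_mb 6   # six-node cluster: 6900 MB (6.9 GB)
```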
The disks for this installation are planned as follows:
Disk group | Disks | Size per disk | Purpose |
OCR | 3 | 10 GB | OCR and the GI management repository |
DATA | 2 | 10 GB | Database data |
ARCH | 1 | 10 GB | Archived logs |
(1) Ways to configure the shared disks
There are two udev-based approaches: the first partitions the disks with fdisk, takes the resulting /dev/sd*1 partitions, and binds them to raw devices; the second binds devices by their WWIDs. The second is the one most commonly used in production.
(2) Method 1: raw devices
(2.1) Partition the disks; run on one node only
# partition on node 1, using /dev/sdb as an example:
[root@node1 ~]# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (default 1): <Enter>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (default): <Enter>
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
(2.2) Configure the raw devices on both nodes
[root@node1 ~]# vi /etc/udev/rules.d/60-raw.rules
# append the following
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
KERNEL=="raw1", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw2", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw3", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw4", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw5", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw6", MODE="0660", OWNER="grid", GROUP="asmadmin"
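The rule lines above are repetitive, so they can be generated with a loop instead of typed; a sketch for device letters b-g as used in this document:

```shell
# Print the 60-raw.rules contents: /dev/sdb1../dev/sdg1 bound to
# /dev/raw/raw1../dev/raw/raw6, owned by grid:asmadmin, mode 0660.
gen_raw_rules() {
  i=1
  for l in b c d e f g; do
    printf 'ACTION=="add", KERNEL=="sd%s1", RUN+="/bin/raw /dev/raw/raw%s %%N"\n' "$l" "$i"
    i=$(( i + 1 ))
  done
  for i in 1 2 3 4 5 6; do
    printf 'KERNEL=="raw%s", MODE="0660", OWNER="grid", GROUP="asmadmin"\n' "$i"
  done
}

gen_raw_rules   # append the output to /etc/udev/rules.d/60-raw.rules on both nodes
```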
(2.3) Start udev; run on both nodes
[root@node1 ~]# start_udev
(2.4) Check the raw devices on both nodes; if a node cannot see the devices below, reboot it
[root@node1 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
/dev/raw/raw6: bound to major 8, minor 97
(3) Method 2: bind devices by WWID
(3.1) Edit the /etc/scsi_id.config file on both nodes
[root@node1 ~]# echo "options=--whitelisted --replace-whitespace" >> /etc/scsi_id.config
(3.2) Write the disk WWID information into 99-oracle-asmdevices.rules on both nodes
[root@node1 ~]# for i in b c d e f g ;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
> done
(3.3) Inspect the 99-oracle-asmdevices.rules file on both nodes
[root@node1 ~]# cd /etc/udev/rules.d/
[root@node1 rules.d]# more 99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c293f718a0dcf1f7b410fb9fd1d9", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c296f46877bf6cff9febd7700fb9", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2902c030ca8a0b0a4a32ab547c7", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2982ad4757618bd0d06d54d04b8", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29872b79e70266a992e788836b6", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29b1260d00b8faeb3786092143a", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
(3.4) Start udev; run on both nodes
[root@node1 rules.d]# start_udev
Starting udev: [ OK ]
(3.5) Confirm the disks were created successfully
[root@node1 rules.d]# cd /dev
[root@node1 dev]# ls -l asm*
brw-rw---- grid asmadmin , Aug : asm-diskb
brw-rw---- grid asmadmin , Aug : asm-diskc
brw-rw---- grid asmadmin , Aug : asm-diskd
brw-rw---- grid asmadmin , Aug : asm-diske
brw-rw---- grid asmadmin , Aug : asm-diskf
brw-rw---- grid asmadmin , Aug : asm-diskg
(VIII) Configure User Equivalence
The Oracle installation media includes a tool for setting up SSH trust between nodes for the grid and oracle users, which makes configuration very convenient.
(1) Configure equivalence for the grid user
(1.1) Unzip the GI installation media; run on node 1
[grid@node1 ~]$ ls
linuxamd64_12102_grid_1of2.zip linuxamd64_12102_grid_2of2.zip
[grid@node1 ~]$ unzip -q linuxamd64_12102_grid_1of2.zip
[grid@node1 ~]$ unzip -q linuxamd64_12102_grid_2of2.zip
[grid@node1 ~]$ ls
grid linuxamd64_12102_grid_1of2.zip linuxamd64_12102_grid_2of2.zip
(1.2) Configure node-to-node trust
[grid@node1 sshsetup]$ pwd
/home/grid/grid/sshsetup
[grid@node1 sshsetup]$ ls
sshUserSetup.sh
[grid@node1 sshsetup]$ ./sshUserSetup.sh -hosts "node1 node2" -user grid -advanced
The session transcript follows:
The output of this script is also logged into /tmp/sshUserSetup_2019-----.log
Hosts are node1 node2
user is grid
Platform:- Linux
Checking if the remote hosts are reachable
PING node1 (192.168.10.11) () bytes of data.
bytes from node1 (192.168.10.11): icmp_seq= ttl= time=0.014 ms
bytes from node1 (192.168.10.11): icmp_seq= ttl= time=0.043 ms
bytes from node1 (192.168.10.11): icmp_seq= ttl= time=0.032 ms
bytes from node1 (192.168.10.11): icmp_seq= ttl= time=0.040 ms
bytes from node1 (192.168.10.11): icmp_seq= ttl= time=0.042 ms
--- node1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.014/0.034/0.043/0.011 ms
PING node2 (192.168.10.12) () bytes of data.
bytes from node2 (192.168.10.12): icmp_seq= ttl= time=3.05 ms
bytes from node2 (192.168.10.12): icmp_seq= ttl= time=0.716 ms
bytes from node2 (192.168.10.12): icmp_seq= ttl= time=0.807 ms
bytes from node2 (192.168.10.12): icmp_seq= ttl= time=1.37 ms
bytes from node2 (192.168.10.12): icmp_seq= ttl= time=0.704 ms
--- node2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 0.704/1.331/3.053/0.896 ms
Remote host reachability check succeeded.
The following hosts are reachable: node1 node2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost node1
numhosts
The script will setup SSH connectivity from the host node1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host node1
and the remote hosts without being prompted for passwords or confirmations.
NOTE :
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
NOTE :
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories. Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
The user chose yes
Please specify if you want to specify a passphrase for the private key this script will create for the local host. Passphrase is used to encrypt the private key and makes SSH much more secure. Type 'yes' or 'no' and then press enter. In case you press 'yes', you would need to enter the passphrase whenever the script executes ssh or scp.
The estimated number of times the user would be prompted for a passphrase is . In addition, if the private-public files are also newly created, the user would have to specify the passphrase on one additional occasion.
Enter 'yes' or 'no'.
yes
The user chose yes
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host
Enter passphrase (empty for no passphrase): (note: press Enter)
Enter same passphrase again: (note: press Enter)
Generating public/private rsa key pair.
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
a0::eb:ab:7d::0d:cf:d2:::cd:f0:a0::9d grid@node1
The key's randomart image is:
+--[ RSA ]----+
| |
| |
| .o .. |
| ..oE. . |
| oo.* S |
| .. o + |
| .. O . |
| . .B * |
|..o. + |
+-----------------+
Creating .ssh directory and setting permissions on remote host node1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host node1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host node1.
Warning: Permanently added 'node1,192.168.10.11' (RSA) to the list of known hosts.
grid@node1's password: (note: enter the grid password for node 1)
Done with creating .ssh directory and setting permissions on remote host node1.
Creating .ssh directory and setting permissions on remote host node2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host node2. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host node2.
Warning: Permanently added 'node2,192.168.10.12' (RSA) to the list of known hosts.
grid@node2's password: (note: enter the grid password for node 2)
Done with creating .ssh directory and setting permissions on remote host node2.
Copying local host public key to the remote host node1
The user may be prompted for a password or passphrase here since the script would be using SCP for host node1.
grid@node1's password: (note: enter the grid password for node 1)
Done copying local host public key to the remote host node1
Copying local host public key to the remote host node2
The user may be prompted for a password or passphrase here since the script would be using SCP for host node2.
grid@node2's password: (note: enter the grid password for node 2)
Done copying local host public key to the remote host node2
Creating keys on remote host node1 if they do not exist already. This is required to setup SSH on host node1.
Creating keys on remote host node2 if they do not exist already. This is required to setup SSH on host node2.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
3e::4b:9f::a3:6c:1f:4c:ca:aa:d2:d1::: grid@node2
The key's randomart image is:
+--[ RSA ]----+
| |
| |
| o |
| oEo . |
| S..o..B |
| ...+o+* o |
| .o. +* o |
| . .. o . . |
| .... . |
+-----------------+
Updating authorized_keys file on remote host node1
Updating known_hosts file on remote host node1
The script will run SSH on the remote machine node1. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Updating authorized_keys file on remote host node2
Updating known_hosts file on remote host node2
The script will run SSH on the remote machine node2. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--node1:--
Running /usr/bin/ssh -x -l grid node1 date to verify SSH connectivity has been setup from local host to node1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
The script will run SSH on the remote machine node1. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Tue Aug :: HKT
------------------------------------------------------------------------
--node2:--
Running /usr/bin/ssh -x -l grid node2 date to verify SSH connectivity has been setup from local host to node2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
The script will run SSH on the remote machine node2. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Tue Aug :: HKT
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from node1 to node1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Tue Aug :: HKT
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from node1 to node2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Tue Aug :: HKT
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
(2) Configure equivalence for the oracle user
Node trust for the oracle user is configured the same way as for grid; the sshUserSetup.sh tool is under the Oracle database installation media.
[oracle@node1 sshsetup]$ ./sshUserSetup.sh -hosts "node1 node2" -user oracle -advanced
(IX) Pre-Installation Checks
(1) Install the cvuqdisk package; run on both nodes
[root@node1 grid]# cd grid/rpm/
[root@node1 rpm]# ls
cvuqdisk-1.0.9-1.rpm
[root@node1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk ########################################### [100%]
Copy the cvuqdisk package from node 1 to node 2 with scp and install it there in the same way.
(2) Run the pre-installation check; node 1 only
[root@node1 grid]# su - grid
[grid@node1 ~]$ cd grid/
[grid@node1 grid]$ ls
install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose>check.log
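The -verbose report is long, so a quick filter helps pull out anything that did not pass; a minimal sketch (the pattern matches the "failed"/"error" wording cluvfy uses in its Result lines):

```shell
# List only the lines of a cluvfy report that indicate a problem.
failed_checks() {
  grep -iE 'failed|error' "$1" || echo "all checks passed"
}

# usage on node 1 after running runcluvfy.sh as above:
# failed_checks check.log
```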
Make sure every item passes. The check results are as follows.
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
node1 passed
node2 passed
Verification of the hosts config file successful
Interface information for node "node1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.10.11 192.168.10.0 0.0.0.0 192.168.0.1 :0C::F8::BB
eth1 10.10.10.11 10.10.10.0 0.0.0.0 192.168.0.1 :0C::F8::C5
eth2 192.168.0.109 192.168.0.0 0.0.0.0 192.168.0.1 :0C::F8::CF
Interface information for node "node2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.10.12 192.168.10.0 0.0.0.0 192.168.0.1 :0C:::0B:FC
eth1 10.10.10.12 10.10.10.0 0.0.0.0 192.168.0.1 :0C:::0B:
eth2 192.168.0.108 192.168.0.0 0.0.0.0 192.168.0.1 :0C:::0B:
Check: Node connectivity of subnet "192.168.10.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1[192.168.10.11] node2[192.168.10.12] yes
Result: Node connectivity passed for subnet "192.168.10.0" with node(s) node1,node2
Check: TCP connectivity of subnet "192.168.10.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1 : 192.168.10.11 node1 : 192.168.10.11 passed
node2 : 192.168.10.12 node1 : 192.168.10.11 passed
node1 : 192.168.10.11 node2 : 192.168.10.12 passed
node2 : 192.168.10.12 node2 : 192.168.10.12 passed
Result: TCP connectivity check passed for subnet "192.168.10.0"
Check: Node connectivity of subnet "10.10.10.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1[10.10.10.11] node2[10.10.10.12] yes
Result: Node connectivity passed for subnet "10.10.10.0" with node(s) node1,node2
Check: TCP connectivity of subnet "10.10.10.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1 : 10.10.10.11 node1 : 10.10.10.11 passed
node2 : 10.10.10.12 node1 : 10.10.10.11 passed
node1 : 10.10.10.11 node2 : 10.10.10.12 passed
node2 : 10.10.10.12 node2 : 10.10.10.12 passed
Result: TCP connectivity check passed for subnet "10.10.10.0"
Check: Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1[192.168.0.109] node2[192.168.0.108] yes
Result: Node connectivity passed for subnet "192.168.0.0" with node(s) node1,node2
Check: TCP connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1 : 192.168.0.109 node1 : 192.168.0.109 passed
node2 : 192.168.0.108 node1 : 192.168.0.109 passed
node1 : 192.168.0.109 node2 : 192.168.0.108 passed
node2 : 192.168.0.108 node2 : 192.168.0.108 passed
Result: TCP connectivity check passed for subnet "192.168.0.0"
Interfaces found on subnet "192.168.0.0" that are likely candidates for VIP are:
node1 eth2:192.168.0.109
node2 eth2:192.168.0.108
Interfaces found on subnet "192.168.10.0" that are likely candidates for a private interconnect are:
node1 eth0:192.168.10.11
node2 eth0:192.168.10.12
Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
node1 eth1:10.10.10.11
node2 eth1:10.10.10.12
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 .729GB (.0KB) 4GB (.0KB) failed
node1 .729GB (.0KB) 4GB (.0KB) failed
Result: Total memory check failed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 .4397GB (.0KB) 50MB (.0KB) passed
node1 .1282GB (.0KB) 50MB (.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 .8594GB (.0KB) .729GB (.0KB) passed
node1 .8594GB (.0KB) .729GB (.0KB) passed
Result: Swap space check passed
Check: Free disk space for "node2:/usr,node2:/var,node2:/etc,node2:/sbin"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr node2 / .1611GB 65MB passed
/var node2 / .1611GB 65MB passed
/etc node2 / .1611GB 65MB passed
/sbin node2 / .1611GB 65MB passed
Result: Free disk space check passed for "node2:/usr,node2:/var,node2:/etc,node2:/sbin"
Check: Free disk space for "node1:/usr,node1:/var,node1:/etc,node1:/sbin"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr node1 / .4079GB 65MB passed
/var node1 / .4079GB 65MB passed
/etc node1 / .4079GB 65MB passed
/sbin node1 / .4079GB 65MB passed
Result: Free disk space check passed for "node1:/usr,node1:/var,node1:/etc,node1:/sbin"
Check: Free disk space for "node2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp node2 /tmp .1611GB 1GB passed
Result: Free disk space check passed for "node2:/tmp"
Check: Free disk space for "node1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp node1 /tmp .4062GB 1GB passed
Result: Free disk space check passed for "node1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed exists()
node1 passed exists()
Checking for multiple users with UID value
Result: Check for multiple users with UID value passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed exists
node1 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed exists
node1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
node2 yes yes yes yes passed
node1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
node2 yes yes yes passed
node1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
node2 , passed
node1 , passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 hard passed
node1 hard passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 soft passed
node1 soft passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 hard passed
node1 hard passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 soft passed
node1 soft passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 x86_64 x86_64 passed
node1 x86_64 x86_64 passed
Result: System architecture check passed

Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 2.6.-.el6.x86_64 2.6. passed
node1 2.6.-.el6.x86_64 2.6. passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 between & between & between & passed
node2 between & between & between & passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Kernel parameter for "panic_on_oops"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node1 passed
node2 passed
Result: Kernel parameter check passed for "panic_on_oops"

Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 binutils-2.20.51.0.-5.43.el6 binutils-2.20.51.0. passed
node1 binutils-2.20.51.0.-5.43.el6 binutils-2.20.51.0. passed
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 compat-libcap1-1.10- compat-libcap1-1.10 passed
node1 compat-libcap1-1.10- compat-libcap1-1.10 passed
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 compat-libstdc++-(x86_64)-3.2.-.el6 compat-libstdc++-(x86_64)-3.2. passed
node1 compat-libstdc++-(x86_64)-3.2.-.el6 compat-libstdc++-(x86_64)-3.2. passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libgcc(x86_64)-4.4.-.el6 libgcc(x86_64)-4.4. passed
node1 libgcc(x86_64)-4.4.-.el6 libgcc(x86_64)-4.4. passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libstdc++(x86_64)-4.4.-.el6 libstdc++(x86_64)-4.4. passed
node1 libstdc++(x86_64)-4.4.-.el6 libstdc++(x86_64)-4.4. passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libstdc++-devel(x86_64)-4.4.-.el6 libstdc++-devel(x86_64)-4.4. passed
node1 libstdc++-devel(x86_64)-4.4.-.el6 libstdc++-devel(x86_64)-4.4. passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 sysstat-9.0.-.el6 sysstat-9.0. passed
node1 sysstat-9.0.-.el6 sysstat-9.0. passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 gcc-4.4.-.el6 gcc-4.4. passed
node1 gcc-4.4.-.el6 gcc-4.4. passed
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 gcc-c++-4.4.-.el6 gcc-c++-4.4. passed
node1 gcc-c++-4.4.-.el6 gcc-c++-4.4. passed
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 ksh ksh passed
node1 ksh ksh passed
Result: Package existence check passed for "ksh"

Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 make-3.81-.el6 make-3.81 passed
node1 make-3.81-.el6 make-3.81 passed
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 glibc(x86_64)-2.12-1.212.el6_10. glibc(x86_64)-2.12 passed
node1 glibc(x86_64)-2.12-1.212.el6_10. glibc(x86_64)-2.12 passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 glibc-devel(x86_64)-2.12-1.212.el6_10. glibc-devel(x86_64)-2.12 passed
node1 glibc-devel(x86_64)-2.12-1.212.el6_10. glibc-devel(x86_64)-2.12 passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libaio(x86_64)-0.3.-.el6 libaio(x86_64)-0.3. passed
node1 libaio(x86_64)-0.3.-.el6 libaio(x86_64)-0.3. passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libaio-devel(x86_64)-0.3.-.el6 libaio-devel(x86_64)-0.3. passed
node1 libaio-devel(x86_64)-0.3.-.el6 libaio-devel(x86_64)-0.3. passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "nfs-utils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 nfs-utils-1.2.-.el6 nfs-utils-1.2.- passed
node1 nfs-utils-1.2.-.el6 nfs-utils-1.2.- passed
Result: Package existence check passed for "nfs-utils"

Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Node Name Port Number Protocol Available Status
---------------- ------------ ------------ ------------ ----------------
node2 TCP yes successful
node1 TCP yes successful
node2 TCP yes successful
node1 TCP yes successful
Result: Port availability check passed for ports "6200,6100"

Checking availability of ports "" required for component "Oracle Cluster Synchronization Services (CSSD)"
Node Name Port Number Protocol Available Status
---------------- ------------ ------------ ------------ ----------------
node2 TCP yes successful
node1 TCP yes successful
Result: Port availability check passed for ports ""

Checking for multiple users with UID value
Result: Check for multiple users with UID value passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
node2 passed
node1                                passed

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
Node Name File exists?
------------------------------------ ------------------------
node2 yes
node1 yes
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed
No NTP Daemons or Services were found to be running
PRVF- : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
node2,node1
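This PRVF error means /etc/ntp.conf exists on both nodes but no ntpd is running, so the clock synchronization check below fails. Two common ways to clear it are to remove the NTP configuration entirely (Oracle CTSS then runs in active mode and handles time sync), or to run ntpd with the `-x` slew option that cluvfy expects. A hedged sketch of the second option follows; the real edit targets /etc/sysconfig/ntpd and needs root, so it is demonstrated here on a temporary copy:

```shell
# Option 1 (CTSS active mode), to be run as root on every node:
#   service ntpd stop && chkconfig ntpd off
#   mv /etc/ntp.conf /etc/ntp.conf.bak
#
# Option 2: keep ntpd but add the -x (slew) flag to its startup options.
# Shown on a temporary copy of /etc/sysconfig/ntpd (sample RHEL 6 content):
cfg=$(mktemp)
echo 'OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"' > "$cfg"
# Insert -x at the front of OPTIONS if it is not already present
grep -q '\-x' "$cfg" || sed -i 's/^OPTIONS="/OPTIONS="-x /' "$cfg"
grep '^OPTIONS' "$cfg"
```

After editing the real file, restart ntpd (`service ntpd restart`) on both nodes and rerun cluvfy; whichever option is chosen must be applied consistently on every node.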
Result: Clock synchronization check using Network Time Protocol(NTP) failed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed does not exist
node1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 passed
node1 passed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of 'domain' and 'search' entries is defined
WARNING:
PRVF- : Both 'search' and 'domain' entries are present in file "/etc/resolv.conf" on the following nodes: node1,node2

Check for integrity of file "/etc/resolv.conf" passed

Check: Time zone consistency
Result: Time zone consistency check passed

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Checking daemon "avahi-daemon" is not configured and running

Check: Daemon "avahi-daemon" not configured
Node Name Configured Status
------------ ------------------------ ------------------------
node2 no passed
node1 no passed
Daemon not configured check passed for process "avahi-daemon"

Check: Daemon "avahi-daemon" not running
Node Name Running? Status
------------ ------------------------ ------------------------
node2 no passed
node1 no passed
Daemon not running check passed for process "avahi-daemon"

Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed

Starting check for /boot mount ...
Check for /boot mount passed

Starting check for zeroconf check ...
ERROR:
PRVE- : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "node2"
PRVE- : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "node1"

Check for zeroconf check failed

Pre-check for cluster services setup was unsuccessful on all the nodes.

******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************

--------------                 ---------------        ----------------
Check failed. Failed on nodes Reboot required?
-------------- --------------- ----------------
zeroconf check                 node2,node1            no

Execute "/tmp/CVU_12.1.0.2.0_grid/runfixup.sh" as root user on nodes "node1,node2" to perform the fix up operations manually

Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.2.0_grid/runfixup.sh" has completed on nodes "node1,node2"

Fix: zeroconf check
Node Name Status
------------------------------------ ------------------------
node2 failed
node1                                failed

ERROR:
PRVG- : Manual fix up command "/tmp/CVU_12.1.0.2.0_grid/runfixup.sh" was not issued by root user on node "node2"
PRVG- : Manual fix up command "/tmp/CVU_12.1.0.2.0_grid/runfixup.sh" was not issued by root user on node "node1"

Result: "zeroconf check" could not be fixed on nodes "node2,node1"
Fix up operations for selected fixable prerequisites were unsuccessful on nodes "node2,node1"
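The fixup above failed only because runfixup.sh was not executed as root; rerunning `/tmp/CVU_12.1.0.2.0_grid/runfixup.sh` as the root user on both nodes clears the zeroconf check. Equivalently, the fix can be done by hand: add `NOZEROCONF=yes` to /etc/sysconfig/network on each node and restart the network service. A minimal sketch of the manual edit, demonstrated on a temporary copy since the real file needs root:

```shell
# Manual equivalent of the cluvfy fixup for the zeroconf check.
# On the real system (as root, on node1 and node2):
#   echo 'NOZEROCONF=yes' >> /etc/sysconfig/network
#   service network restart
# Shown here on a temporary copy with sample content from this guide:
net=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=node1\n' > "$net"
# Append NOZEROCONF=yes only if no NOZEROCONF line exists yet
grep -q '^NOZEROCONF=' "$net" || echo 'NOZEROCONF=yes' >> "$net"
cat "$net"
```

After applying the change on both nodes, rerun the cluvfy pre-check to confirm the zeroconf check now passes.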
Next steps: installing Grid Infrastructure and the RDBMS, and creating the database, are covered in the follow-up documents: