Step by Step: Oracle 11gR2 RAC + DG on RHEL 6.5 + VMware Workstation 10, Part 3: Preparing the Shared Disks
Note: this is one of the most important steps in configuring RAC. Many people fail repeatedly at RAC installation precisely because of mistakes in the shared-disk configuration. I (xiaomaimiao, 小麦苗) went through several failed installs myself before learning this, so read this part with your eyes wide open, and feel free to contact me if anything is unclear.
Outline screenshot for this section:
- Configure shared storage
This is the key part and also the easiest place to get wrong; my own early installs kept failing right here, so read carefully!
-
Add the shared disks
-
Step 1
-
In cmd, change into the VMware Workstation installation directory and run the following commands to create the disks:
cd C:\Program Files (x86)\VMware\VMware Workstation
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 "E:\My Virtual Machines\rac\sharedisk\ocr_vote.vmdk"
VixDiskLib: Invalid configuration file parameter. Failed to read configuration file.
Creating disk 'E:\My Virtual Machines\rac\sharedisk\ocr_vote.vmdk'
Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 "E:\My Virtual Machines\rac\sharedisk\data.vmdk"
VixDiskLib: Invalid configuration file parameter. Failed to read configuration file.
Creating disk 'E:\My Virtual Machines\rac\sharedisk\data.vmdk'
Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 "E:\My Virtual Machines\rac\sharedisk\data.vmdk"
VixDiskLib: Invalid configuration file parameter. Failed to read configuration file.
Creating disk 'E:\My Virtual Machines\rac\sharedisk\data.vmdk'
Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 "E:\My Virtual Machines\rac\sharedisk\fra.vmdk"
VixDiskLib: Invalid configuration file parameter. Failed to read configuration file.
Creating disk 'E:\My Virtual Machines\rac\sharedisk\fra.vmdk'
Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>
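The four create commands above can also be generated by a small script. Below is a minimal dry-run sketch (plain shell, using the paths from this article and the final sizes, with data.vmdk at 5 GB); it only prints each vmware-vdiskmanager command for review, nothing is executed:

```shell
#!/bin/sh
# Dry-run helper: print the vmware-vdiskmanager command for each shared disk.
# Paths are the ones used in this article; adjust to your environment.
VDM='vmware-vdiskmanager.exe'
DEST='E:\My Virtual Machines\rac\sharedisk'

gen_vdisk_cmds() {
    # name:size pairs for the shared disks (ocr_vote 2 GB, data and fra 5 GB)
    for spec in ocr_vote:2g data:5g fra:5g; do
        name=${spec%%:*}
        size=${spec##*:}
        # -c create, -s size, -a lsilogic adapter, -t 2 preallocated vmdk
        printf '%s -c -s %s -a lsilogic -t 2 "%s\\%s.vmdk"\n' \
            "$VDM" "$size" "$DEST" "$name"
    done
}

gen_vdisk_cmds
```

Reviewing the printed commands before pasting them into cmd avoids recreating a disk with the wrong size, as happened with data.vmdk above.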
Alternatively, the disks can be created through the GUI, as follows:
Pay special attention to the selection in this step; in my tests, only this choice worked:
Click Next, enter a name, click Finish, and repeat for each disk you need.
-
Step 2
Shut down both virtual machines and open the VM configuration file, <VM name>.vmx, in a text editor; this must be done on both nodes. For example: D:\rhela\rhela.vmx
Add the following lines, replacing the fileName paths with the paths of your own shared virtual disk files. If you created the disks through the GUI in the previous step, just add whichever of the lines below are missing:
#shared disks configure
disk.EnableUUID="TRUE"
disk.locking = "FALSE"
scsi1.shared = "TRUE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize= "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "E:\share\ocr_vote.vmdk"
scsi1:0.deviceType = "disk"
scsi1:0.redo = ""
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "E:\share\data.vmdk"
scsi1:1.deviceType = "disk"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "E:\share\fra.vmdk"
scsi1:2.deviceType = "disk"
scsi1:2.redo = ""
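Typos in the .vmx edit are easy to make, for example writing "scsil" (letter l) instead of "scsi1" (digit one) in the sharedBus key. A minimal sketch of a sanity check that reports any missing key; the usage path is only an example:

```shell
#!/bin/sh
# Sanity check: report any shared-disk setting missing from a .vmx file.
# Catches typos such as "scsil" (letter l) in place of "scsi1" (digit one),
# since the misspelled key will not match and is reported as missing.
check_vmx() {
    for key in disk.EnableUUID disk.locking scsi1.shared scsi1.sharedBus \
               scsi1.present scsi1:0.fileName scsi1:1.fileName scsi1:2.fileName
    do
        grep -q "^$key" "$1" || echo "missing: $key"
    done
}

# Usage (path is an example):
# check_vmx /path/to/rhela.vmx
```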
On the second node, the shared disks can also be added through the GUI:
-
Step 3
Close VMware Workstation and reopen it.
If the shared disks are now loaded correctly, the configuration is correct.
-
Set up the shared disks
The shared disks can be managed either with ASMLib or with udev. In my case ASMLib kept failing with errors during installation, so I switched to udev, and that is what I recommend; ASMLib is only supported up to RHEL 5.9, so this article only demonstrates udev. Raw devices would also work; contact me if you are interested.
-
Shared storage planning with udev
-
Obtain the scsi_id values used by the udev bindings
Note the following points:
First switch to the root user:
5.1. The location of the scsi_id command differs between operating systems.
[root@localhost ~]# cat /etc/issue
Oracle Linux Server release 6.5
Kernel \r on an \m
Note: from RHEL 6 onward, scsi_id only supports the --whitelisted --replace-whitespace options; the older -g -u -s options are no longer supported.
[root@localhost ~]# which scsi_id
/sbin/scsi_id
[root@localhost ~]#
5.2. Edit the /etc/scsi_id.config file; if it does not exist, create it and add the following line:
[root@localhost ~]# vi /etc/scsi_id.config
options=--whitelisted --replace-whitespace
[root@localhost ~]#
5.3. If you are using a VMware virtual machine, running the scsi_id command directly may return no ID, in which case a VMware configuration parameter must be added. If you already did this when adding the disks, skip this step and go straight to obtaining the UUIDs. The file to modify is the VM's .vmx file, for example:
[root@localhost ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
[root@localhost ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
D:\VMs\Oracle Database 11gR2\Oracle Database 11gR2.vmx
Open this file in a text editor and append the following line at the end:
disk.EnableUUID="TRUE"
Save the file and restart the virtual machine. Be sure to edit the file only while the VM is powered off. (Before RHEL 6 you could also run scsi_id -g -u /dev/sdc to get the UUID, but the -g -u options are no longer used from RHEL 6 onward.)
[root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
36000c29fbe57659626ee89b4fba07616
[root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
36000c29384cde894e087e5f0fcaa80f4
[root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
36000c29022aee23728231ed9b1f9743d
[root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sde
36000c2938f431664218d1d2632ff1352
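Instead of running scsi_id once per device by hand, the four queries above can be wrapped in a loop. A minimal sketch, assuming the shared disks appear as /dev/sdb through /dev/sde; the SCSI_ID variable is only there so the loop can be exercised without real devices:

```shell
#!/bin/sh
# Query the UUID of every shared disk in one pass (RHEL 6 option syntax).
# SCSI_ID is overridable so the loop can be tried without real devices.
SCSI_ID=${SCSI_ID:-/sbin/scsi_id}

list_disk_uuids() {
    for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
        # Print "device uuid" on one line per disk
        printf '%s %s\n' "$dev" \
            "$("$SCSI_ID" --whitelisted --replace-whitespace --device="$dev")"
    done
}

# On a real cluster node, run as root:
# list_disk_uuids
```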
-
Create and configure the udev rules file
[root@localhost ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29fe0fc917d7e9982742a28ce7c", NAME="asm-diskb", OWNER="grid",GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c293ffc0900fd932348de4b6baf8", NAME="asm-diskc", OWNER="grid",GROUP="oinstall", MODE="0660"
Change the RESULT values to the IDs obtained in step 5.
Note: each KERNEL rule must stay on a single line; no line breaks are allowed within a rule. I made exactly that mistake before.
Adding all four disks:
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29346c1344ffb26f0e5603d519e", NAME="asm-diskb", OWNER="grid",GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29d08ee059a345571054517cd03", NAME="asm-diskc", OWNER="grid",GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c295037a910bfb765af8f400aa07", NAME="asm-diskd", OWNER="grid",GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c2982bda048f642acd3c429ec983", NAME="asm-diske", OWNER="grid",GROUP="asmadmin", MODE="0660"
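Because each rule must be one unbroken line, it is easy to introduce wrapping errors when editing the file by hand. One way to avoid that is to generate the file from a script. A minimal sketch, using the example UUIDs from this article (replace them with your own scsi_id output):

```shell
#!/bin/sh
# Generate the udev rules file from a list of UUIDs, so that every rule is
# guaranteed to be a single unbroken line.
emit_rule() {
    # $1 = disk letter (b, c, ...), $2 = UUID reported by scsi_id
    printf 'KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-disk%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$2" "$1"
}

gen_rules() {
    # Example UUIDs from this article; substitute your own values.
    emit_rule b 36000c29346c1344ffb26f0e5603d519e
    emit_rule c 36000c29d08ee059a345571054517cd03
    emit_rule d 36000c295037a910bfb765af8f400aa07
    emit_rule e 36000c2982bda048f642acd3c429ec983
}

gen_rules                      # preview on stdout
# gen_rules > /etc/udev/rules.d/99-oracle-asmdevices.rules
```

Preview the output first, then redirect it into the rules file as root.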
-
Copy the rules file to rac2:
scp /etc/udev/rules.d/99-oracle-asmdevices.rules rac2:/etc/udev/rules.d
-
After the rules are in place, restart udev; the command differs between Linux distributions. On RHEL 6 it is start_udev (on newer, systemd-based systems the equivalent is udevadm control --reload-rules followed by udevadm trigger).
This step is a little slow, roughly 30 seconds or so, so be patient.
[root@localhost ~]# start_udev
Starting udev: [ OK ]
[root@localhost ~]#
-
Check the bound ASM disks. If the asm disks are still not visible at this point, reboot the operating system and check again.
[root@localhost ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 17 Oct 17 14:26 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 33 Oct 17 14:26 /dev/asm-diskc
-
Partition the disks
[root@localhost share]# fdisk -l | grep "Disk /dev/sd"
The following operations are performed on node 1:
[root@rac01 ~]# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):
Using default value 1044
Command (m for help): p
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 1044 8385898+ 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
...
fdisk /dev/sdc
fdisk /dev/sdd
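The same partitioning can be driven non-interactively by feeding fdisk the answers from the session above (n, p, 1, two defaults, w). A cautious sketch that defaults to a dry run; clear DRY_RUN only after double-checking the device names, since partitioning the wrong disk is destructive:

```shell
#!/bin/sh
# Feed fdisk the same answers as the interactive session above:
# n (new), p (primary), 1 (partition number), two empty lines (accept the
# default first and last cylinder), w (write). Defaults to a dry run.
DRY_RUN=${DRY_RUN:-1}

partition_disk() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: fdisk $1"
    else
        printf 'n\np\n1\n\n\nw\n' | fdisk "$1"
    fi
}

for d in /dev/sdb /dev/sdc /dev/sdd; do
    partition_disk "$d"
done
```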
-
Errors caused by incorrect shared-disk configuration
Errors when running the root scripts:
Type 1:
DiskGroup CRS creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15080: synchronous I/O operation to a disk failed
Type 2:
Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/grid/11.2.0/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
Type 3:
2014-06-05 06:39:01: Did not succssfully configure and start ASM
2014-06-05 06:39:01: Exiting exclusive mode
2014-06-05 06:39:01: Command return code of 1 (256) from command: /u01/app/grid/11.2.0/bin/crsctl stop resource ora.crsd -init
2014-06-05 06:39:01: Stop of resource "ora.crsd -init" failed
2014-06-05 06:39:01: Failed to stop CRSD
2014-06-05 06:39:32: Initial cluster configuration failed. See /u01/app/grid/11.2.0/cfgtoollogs/crsconfig/rootcrs_rac1.log for details
2013-01-21 11:19:25.396: [ CRSOCR][1] OCR context init failure. Error: PROC-26: Error while accessing the physical storage ASM error [SLOS: cat=8, opn=kgfoOpenFile01, dep=15056, loc=kgfokge
ORA-17503: ksfdopnGOpenFile05 Failed to open file +OCR.255.4294967295
ORA-17503: ksfdopn:2 Failed to open file +OCR.255.4294967295
ORA-15001: diskgroup "OCR"
] [8]
2013-01-21 11:19:25.396: [ CRSD][1][PANIC] CRSD exiting: Could not init OCR, code: 26
2013-01-21 11:19:25.396: [ CRSD][1] Done.
2014-06-06 23:20:23.442: [ OCRRAW][2849145568]propriogid:1_2: INVALID FORMAT
2014-06-06 23:20:23.442: [ OCRRAW][2849145568]proprioini: all disks are not OCR/OLR formatted
2014-06-06 23:20:23.442: [ OCRRAW][2849145568]proprinit: Could not open raw device
2014-06-06 23:20:23.445: [ OCRAPI][2849145568]a_init:16!: Backend init unsuccessful : [26]
2014-06-06 23:20:23.445: [ CRSOCR][2849145568] OCR context init failure. Error: PROC-26: Error while accessing the physical storage
2014-06-06 23:20:23.445: [ CRSD][2849145568][PANIC] CRSD exiting: Could not init OCR, code: 26
2014-06-06 23:20:23.446: [ CRSD][2849145568] Done.