Oracle 12c New Feature: ASMFD (ASM Filter Driver)


Starting with Oracle 12.1.0.2, ASMFD can replace udev-rule-based binding of ASM disk devices, and it also filters out invalid I/O operations. ASMFD is a replacement for ASMLIB and udev; in fact, ASMFD itself still uses udev.

ASMFD can be configured while installing GRID, or after the Grid installation is complete.

To check whether the operating-system version supports ASMFD, run:


acfsdriverstate -orahome $ORACLE_HOME supported

 

ASMFD was introduced in 12.1. It removes the need to configure ASM disks manually and, more importantly, it protects disks from being overwritten by non-Oracle operations such as the dd and echo commands.

 

 

 

 

The environment below is based on RHEL 7.4 and tests migrating UDEV-managed storage to AFD disk paths.

 

I. AFD configuration adjustments

1. Set the grid environment variables as root

 

[root@rac1 ~]# export ORACLE_HOME=/u01/app/12.2.0/grid

[root@rac1 ~]# export ORACLE_BASE=/tmp

 

 

2. Get the current ASM disk-group discovery path

 

[root@rac1 ~]# $ORACLE_HOME/bin/asmcmd dsget

parameter:/dev/asm*

profile:/dev/asm*

 

 

3. Add the AFD discovery path

 

[root@rac1 ~]# asmcmd dsset '/dev/asm*','AFD:*'

 

[root@rac1 ~]# $ORACLE_HOME/bin/asmcmd dsget   

parameter:/dev/asm*, AFD:*

profile:/dev/asm*,AFD:*
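While both discovery-string entries are in place, ASM can see a disk either through its udev path or through its AFD label. As a rough illustration only (an assumption of this post, not how ASM matches internally), shell glob matching behaves the same way:

```shell
# Illustrative sketch: emulate discovery-string matching with shell globs.
# matches_discovery PATH ENTRY...  -> returns 0 if PATH matches any entry.
matches_discovery() {
  path=$1; shift
  for entry in "$@"; do
    case $path in
      $entry) return 0 ;;   # entry is deliberately unquoted: used as a glob
    esac
  done
  return 1
}

matches_discovery /dev/asm_crs '/dev/asm*' 'AFD:*' && echo "udev path matched"
matches_discovery AFD:ASMCRS   '/dev/asm*' 'AFD:*' && echo "AFD label matched"
```

This is why both entries are kept during the migration window: disks found under either naming scheme remain discoverable.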

 

 

4. Check the node information

 

[root@rac1 ~]# olsnodes -a

rac1    Hub

rac2    Hub

 

 

The following steps must be run on all nodes.

5. Stop CRS

 

[root@rac1 ~]# crsctl stop crs


6. Install Oracle AFD

Run on both node 1 and node 2.

Load and configure AFD:

 

[root@rac1 yum.repos.d]# asmcmd afd_configure


Note: on RedHat or CentOS 7.4 and later, kmod must be upgraded before AFD can be enabled; this was covered in an earlier post:

Fixing the AFD (Oracle ASMFD) feature being unusable on RHEL/CentOS 7.4 and later

https://blog.csdn.net/kiral07/article/details/87629679

The AFD loading output:

 

AFD-627: AFD distribution files found.

AFD-634: Removing previous AFD installation.

AFD-635: Previous AFD components successfully removed.

AFD-636: Installing requested AFD software.

AFD-637: Loading installed AFD drivers.

AFD-9321: Creating udev for AFD.

AFD-9323: Creating module dependencies - this may take some time.

AFD-9154: Loading 'oracleafd.ko' driver.

AFD-649: Verifying AFD devices.

AFD-9156: Detecting control device '/dev/oracleafd/admin'.

AFD-638: AFD installation correctness verified.

Modifying resource dependencies - this may take some time.

 

 

Query the AFD state:

 

[root@rac1 yum.repos.d]# asmcmd afd_state

ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac1'

 

 

7. Start CRS

 

[root@rac1 yum.repos.d]# crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

 

 

8. Check the current storage devices

 

[root@rac1 yum.repos.d]# ll /dev/mapper/mpath*

lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathc -> ../dm-1

lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathd -> ../dm-0

 

The multipath devices mpathc and mpathd are used here.

[root@rac2 ~]# multipath -ll

mpathd (14f504e46494c45526147693538302d577037452d39596459) dm-1 OPNFILER,VIRTUAL-DISK    

size=30G features='0' hwhandler='0' wp=rw

|-+- policy='service-time 0' prio=1 status=active

| `- 34:0:0:1 sdc             8:32  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 35:0:0:1 sde             8:64  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 36:0:0:1 sdg             8:96  active ready running

`-+- policy='service-time 0' prio=1 status=enabled

  `- 37:0:0:1 sdi             8:128 active ready running

mpathc (14f504e46494c45524f444c7844412d717a557a2d6b7a6752) dm-0 OPNFILER,VIRTUAL-DISK    

size=40G features='0' hwhandler='0' wp=rw

|-+- policy='service-time 0' prio=1 status=active

| `- 34:0:0:0 sdb             8:16  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 35:0:0:0 sdd             8:48  active ready running

|-+- policy='service-time 0' prio=1 status=enabled

| `- 36:0:0:0 sdf             8:80  active ready running

`-+- policy='service-time 0' prio=1 status=enabled

  `- 37:0:0:0 sdh             8:112 active ready running

 

 

9. Add the AFD discovery path

 

Switch to the grid user:

[root@rac2 ~]# su - grid

 

Set the storage path with afd_dsset:

[grid@rac2:/home/grid]$asmcmd afd_dsset '/dev/mapper/mpath*'

 

[grid@rac2:/home/grid]$asmcmd afd_dsget

AFD discovery string: /dev/mapper/mpath*

 

No AFD labels have been added yet, so the list is empty:

[grid@rac2:/home/grid]$asmcmd afd_lsdsk

There are no labelled devices.

 

 

At this point, the commands from step 5 onward have been run on all RAC nodes.
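Since steps 5 through 9 have to be repeated on every Hub node, a small helper can print the rollout plan per node. This is a dry-run sketch: the node names and the choice of driving it over ssh are assumptions, and the commands are only printed, never executed.

```shell
# Print the per-node command sequence for the AFD rollout (dry run only).
plan_afd_rollout() {
  for node in "$@"; do
    for cmd in \
        'crsctl stop crs' \
        'asmcmd afd_configure' \
        'asmcmd afd_state' \
        'crsctl start crs'; do
      printf 'ssh root@%s %s\n' "$node" "$cmd"
    done
  done
}

plan_afd_rollout rac1 rac2
```

Review the printed plan, then run each line by hand (or pipe it to sh once you trust it).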

 

II. Migrating UDEV devices to AFD paths

1. Check the current OCR and voting-file disk group

 

[root@rac1 ~]# ocrcheck -config

Oracle Cluster Registry configuration is:

         Device/File Name         :       +CRS

[root@rac1 ~]# crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   4e20265767f54f49bf12bd72f367217f (/dev/asm_crs) [CRS]

Located 1 voting disk(s).

 

 

2. Check the udev storage path behind the CRS disk group

 

[root@rac1 ~]# su - grid

 

[grid@rac1:/home/grid]$asmcmd lsdsk -G crs

Path

/dev/asm_crs

 

 

3. Stop the RAC cluster

 

[root@rac1 ~]# crsctl stop cluster -all


4. Migrate the udev devices to AFD

Label the disk behind the CRS disk group, migrating its udev-managed path to AFD:

[grid@rac1:/home/grid]$asmcmd afd_label asmcrs /dev/mapper/mpathc --migrate       

 

The CRS disk group's disk is now labelled:

[grid@rac1:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

 

Note: because this disk is already in use by ASM, the --migrate option is required for the transfer.
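With several disks to move, the label/migrate calls can be generated from a list. The following is a dry-run sketch using the label/device pairs from this article; it prints the asmcmd invocations instead of running them.

```shell
# Emit `asmcmd afd_label <label> <device> --migrate` for LABEL=DEVICE pairs.
label_migrate_plan() {
  for pair in "$@"; do
    printf 'asmcmd afd_label %s %s --migrate\n' "${pair%%=*}" "${pair#*=}"
  done
}

label_migrate_plan asmcrs=/dev/mapper/mpathc asmdata=/dev/mapper/mpathd
```

Run the printed commands as the grid user, with the cluster stopped, exactly as in the steps here.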

 

Label the other disk, which backs the DATA disk group:

[grid@rac1:/home/grid]$asmcmd afd_label asmdata /dev/mapper/mpathd --migrate    

 

Check the AFD disks again; both are now labelled:

[grid@rac1:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd

 

 

5. Scan for AFD devices on the remaining nodes

 

[grid@rac2:/home/grid]$asmcmd afd_scan

 

[grid@rac2:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd

 

 

6. Start the RAC cluster

 

[root@rac1 ~]# crsctl start cluster -all


7. Query the ASM disk information from the ASM instance

The original udev ASM storage paths are still visible:

 

[root@rac1 ~]# asmcmd lsdsk

Path

AFD:ASMCRS

AFD:ASMDATA

 

SQL> col name for a10

SQL> col label for a10

SQL> col path for a15

SQL> select NAME,LABEL,PATH from V$ASM_DISK;

 

NAME       LABEL      PATH

---------- ---------- ---------------

           ASMDATA    /dev/asm_data  ----> previous udev ASM path

           ASMCRS     /dev/asm_crs   ----> previous udev ASM path

CRS_0000   ASMCRS     AFD:ASMCRS

DATA_0000  ASMDATA    AFD:ASMDATA

 

 

8. Modify the discovery path

 

[grid@rac1:/home/grid]$asmcmd dsget

parameter:/dev/asm*, AFD:*

profile:/dev/asm*,AFD:*

 

Keep only the AFD path:

[grid@rac1:/home/grid]$asmcmd dsset 'AFD:*'

[grid@rac1:/home/grid]$asmcmd dsget

parameter:AFD:*

profile:AFD:*

 

Querying again, the devices under the udev paths are gone:

SQL> select NAME,LABEL,PATH from V$ASM_DISK;

 

NAME       LABEL      PATH

---------- ---------- ---------------

CRS_0000   ASMCRS     AFD:ASMCRS

DATA_0000  ASMDATA    AFD:ASMDATA

 

 

9. Remove the UDEV rules file

 

[root@rac1 ~]# ll -hrt /etc/udev/rules.d/

total 12K

-rw-r--r-- 1 root root 297 Nov  3 17:04 99-oracle-asmdevices.rules.old

-rw-r--r-- 1 root root 224 Feb 15 17:13 53-afd.rules

-rw-r--r-- 1 root root 957 Feb 18 08:55 70-persistent-ipoib.rules

 

After 99-oracle-asmdevices.rules is renamed, the udev-bound disks can no longer be found:

[root@rac1 ~]# ll /dev/asm*

ls: cannot access /dev/asm*: No such file or directory

 

The disk groups using the AFD feature are unaffected:

[grid@rac2:/home/grid]$asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd

 

 

This completes the configuration and loading of AFD.

 

III. dd formatting test on an ASM disk group

Oracle's AFD feature filters out "non-conforming" I/O: any I/O operation that is not issued by Oracle is filtered. The test below uses dd to overwrite the entire disk behind an ASM disk group.


1. Add a disk group "asmtest" for the dd formatting experiment

 

[root@rac1 ~]# asmcmd afd_label asmtest /dev/mapper/mpathe

[root@rac1 ~]# asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                      ENABLED   /dev/mapper/mpathc

ASMDATA                     ENABLED   /dev/mapper/mpathd

ASMTEST                     ENABLED   /dev/mapper/mpathe

 

 

[root@rac1 ~]# su - grid

 

[grid@rac1:/home/grid]$sqlplus / as sysasm

 

Create the asmtest disk group:

SQL> create diskgroup asmtest external redundancy disk 'AFD:asmtest';

 

Diskgroup created.

 

 

2. Create a test tablespace asmtest and a test table

 

SQL>  create tablespace asmtest datafile '+asmtest' size 100m;

SQL>  create table afd (id number) tablespace asmtest;

 

SQL>  insert into afd values (1);

 

1 row created.

 

SQL> commit;

 

Commit complete.

 

SQL> select * from afd; 

 

        ID

----------

         1

         

 

 

 

3. Formatting with dd

Overwrite the entire disk behind the "asmtest" disk group (/dev/mapper/mpathe):

 

[root@rac1 ~]# dd if=/dev/zero of=/dev/mapper/mpathe

dd: writing to '/dev/mapper/mpathe': No space left on device

2097153+0 records in

2097152+0 records out

1073741824 bytes (1.1 GB) copied, 70.6 s, 15.2 MB/s

 

 

4. Repeat the tablespace-creation operations

 

SQL>  create tablespace asmtest2 datafile '+asmtest' size 100m;

SQL>  create table afd2 (id number) tablespace asmtest2;

 

SQL>  insert into afd2 values (2);

 

1 row created.

 

SQL> commit;

 

Commit complete.

 

SQL>  select * from afd2;

 

        ID

----------

         2

SQL>  ALTER system checkpoint;

 

System altered.

 

No errors occurred even after the checkpoint.

 

 

5. Disable the AFD filter

 

[root@rac1 ~]# asmcmd afd_filter -d

Note: -d disables filtering, -e enables it.

[root@rac1 ~]# asmcmd afd_lsdsk

--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

ASMCRS                     DISABLED   /dev/mapper/mpathc

ASMDATA                    DISABLED   /dev/mapper/mpathd

ASMTEST                    DISABLED   /dev/mapper/mpathe
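Filtering state is worth checking before any destructive test. The awk one-liner below parses saved afd_lsdsk output (here the listing above, fed in as a here-doc; an offline sketch, not an Oracle tool) and prints the labels whose Filtering column reads DISABLED:

```shell
# Print labels whose Filtering column is DISABLED in afd_lsdsk output.
afd_unfiltered() {
  awk '$2 == "DISABLED" { print $1 }'
}

afd_unfiltered <<'EOF'
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMCRS                     DISABLED   /dev/mapper/mpathc
ASMDATA                    DISABLED   /dev/mapper/mpathd
ASMTEST                    DISABLED   /dev/mapper/mpathe
EOF
```

Any label printed here is no longer protected against non-Oracle writes, which is exactly the state the next step demonstrates.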

 

 

6. Format with dd again

 

[root@rac1 ~]# dd if=/dev/zero of=/dev/mapper/mpathe

dd: writing to '/dev/mapper/mpathe': No space left on device

2097153+0 records in

2097152+0 records out

1073741824 bytes (1.1 GB) copied, 89.2804 s, 12.0 MB/s

 

 

7. Test from the database

 

SQL> insert into afd values (3);

 

1 row created.

 

SQL> commit;

 

Commit complete.

 

SQL> alter system checkpoint;

alter system checkpoint

*

ERROR at line 1:

ORA-03113: end-of-file on communication channel

Process ID: 21718

Session ID: 35 Serial number: 63358

 

The database instance has crashed.

 

 

After the crash, the database can no longer be started:

 

[oracle@rac1:/home/oracle]$sqlplus / as sysdba

 

SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 18 14:55:41 2019

 

Copyright © 1982, 2016, Oracle.  All rights reserved.

 

Connected to an idle instance.

 

SQL> startup 

ORA-39510: CRS error performing start on instance 'orcl1' on 'orcl'

CRS-2672: Attempting to start 'ora.AFDTEST.dg' on 'rac1'

CRS-2672: Attempting to start 'ora.AFDTEST.dg' on 'rac2'

CRS-2674: Start of 'ora.AFDTEST.dg' on 'rac1' failed

CRS-2674: Start of 'ora.AFDTEST.dg' on 'rac2' failed

CRS-0215: Could not start resource 'ora.orcl.db 1 1'.

clsr_start_resource:260 status:215

clsrapi_start_db:start_asmdbs status:215

 

 

IV. Further investigation

After AFD is configured, the labelled disks appear under the /dev path:

 

[root@rac1 ~]# ll /dev/oracleafd/disks/

total 8

-rwxrwx--- 1 grid oinstall 19 Feb 18 13:40 ASMCRS

-rwxrwx--- 1 grid oinstall 19 Feb 18 13:40 ASMDATA

 

The contents of these device files turn out to be the corresponding multipath device paths:

[root@rac1 ~]# cat /dev/oracleafd/disks/ASMCRS 

/dev/mapper/mpathc

[root@rac1 ~]# cat /dev/oracleafd/disks/ASMDATA 

/dev/mapper/mpathd
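That label-to-device mapping can be collected in one pass by reading every file in the disks directory. A sketch, exercised here against a temporary directory so it can run anywhere; on a real node you would pass /dev/oracleafd/disks instead.

```shell
# List "<LABEL> -> <backing device>" for each file in an AFD disks directory.
list_afd_backing() {
  dir=$1
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    # strip the trailing newline seen in the cat output above
    printf '%s -> %s\n' "${f##*/}" "$(tr -d '\n' < "$f")"
  done
}

demo=$(mktemp -d)                      # stand-in for /dev/oracleafd/disks
printf '/dev/mapper/mpathc\n' > "$demo/ASMCRS"
printf '/dev/mapper/mpathd\n' > "$demo/ASMDATA"
list_afd_backing "$demo"
rm -r "$demo"
```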

 

 

An AFD rules file is present under the udev rules directory:

 

[grid@rac1:/home/grid]$ll -hrt /etc/udev/rules.d/

total 12K

-rw-r--r-- 1 root root 297 Nov  3 17:04 99-oracle-asmdevices.rules.old

-rw-r--r-- 1 root root 224 Feb 15 17:13 53-afd.rules

-rw-r--r-- 1 root root 957 Feb 18 08:55 70-persistent-ipoib.rules

 

 

[grid@rac1:/home/grid]$cat /etc/udev/rules.d/53-afd.rules 

#

# AFD devices

KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"

KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"

KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"

 

V. Summary

Oracle's AFD feature can redirect "dangerous" I/O operations. The exact mechanism is not documented, but at bottom it still relies on udev rules in the kernel to present the disk devices.

 

 

 

 

 

Oracle 12c R2 New Feature: Automatic ASMFD Configuration

1  Overview

ASMFD was introduced in 12.1. It removes the need to configure ASM disks manually and, more importantly, it protects disks from being overwritten by non-Oracle operations such as the dd and echo commands.

A more detailed introduction is in the official documentation:

-- 12.1 new feature: ASMFD

https://docs.oracle.com/database/121/OSTMG/GUID-2F5E344F-AFC2-4768-8C00-6F3C56302123.htm#OSTMG95729

 

In 12.2, ASMFD was enhanced so that it can configure disks automatically: a single command makes a disk usable by ASM, which is very convenient.

 

2  Worked example

2.1   Create the directories

[root@rac1 software]# mkdir -p /u01/app/12.2.0/grid
[root@rac1 software]# chown grid:oinstall /u01/app/12.2.0/grid

2.2   Unzip the GRID software

Unzip as the grid user

[grid@rac1 software]# cd /u01/app/12.2.0/grid
[grid@rac1 grid]# unzip -q /software/grid_home_image.zip

2.3   Configure shared disks for ASMFD

2.3.1  Log in as root and set ORACLE_HOME and ORACLE_BASE

[root@rac1 grid]# su - root
[root@rac1 grid]# export ORACLE_HOME=/u01/app/12.2.0/grid
[root@rac1 grid]# export ORACLE_BASE=/tmp
[root@rac1 grid]# echo $ORACLE_BASE
/tmp
[root@rac1 grid]# echo $ORACLE_HOME
/u01/app/12.2.0/grid

2.3.2  Use ASMCMD to provision disks for ASMFD

Initialize the disks as shown below; there is then no need to bind the disks and assign permissions via udev, ASMLIB, or similar mechanisms.

 

[root@rac1 grid]# /u01/app/12.2.0/grid/bin/asmcmd afd_label DATA1 /dev/sdb --init
[root@rac1 grid]#

2.3.3  Verify that the disk is labelled for ASMFD use

[root@rac1 grid]# /u01/app/12.2.0/grid/bin/asmcmd afd_lslbl /dev/sdb
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                /dev/sdb

The disk /dev/sdb can then be used directly during the GRID installation below.

Note: the name /dev/sdb may change after a reboot, which is why Oracle binds the disk with a label.
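Because the labels, not the raw /dev names, are what survive a reboot, labelling a batch of fresh disks can be planned the same way. A dry-run sketch; the DATA1..DATA3 names and /dev/sdb../dev/sdd devices are assumptions for illustration, and the commands are printed, not executed.

```shell
# Emit `asmcmd afd_label DATAn /dev/sdX --init` for each device letter given.
afd_init_plan() {
  i=1
  for d in "$@"; do
    printf 'asmcmd afd_label DATA%d /dev/sd%s --init\n' "$i" "$d"
    i=$((i + 1))
  done
}

afd_init_plan b c d
```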

2.3.4  Reset ORACLE_BASE

unset ORACLE_BASE

2.4   Start the GRID installation

./gridSetup.sh

For more information, see the official documentation:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/tdprc/installing-oracle-grid.html#GUID-72D1994F-E838-415A-8E7D-30EA780D74A8

 

 

Operating-system versions supported by ASMFD (ASM Filter Driver)

 

ASMFD 12.1.0.2  Supported Platforms

Vendor | Version | Update/Kernel | Architecture | Bug or PSU
Oracle Linux – RedHat Compatible Kernel | 5 | Update 3 and later, 2.6.18 kernel series (RedHat Compatible Kernel) | X86_64 | 12.1.0.2.3 DB PSU Patch
Oracle Linux – Unbreakable Enterprise Kernel | 5 | Update 3 and later, 2.6.39-100 and later UEK 2.6.39 kernels | X86_64 | 12.1.0.2.3 DB PSU Patch (See Note 1)
Oracle Linux – RedHat Compatible Kernel | 6 | All Updates, 2.6.32-71 and later 2.6.32 RedHat Compatible kernels | X86_64 | 12.1.0.2.3 DB PSU Patch
Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 2.6.39-100 and later UEK 2.6.39 kernels | X86_64 | 12.1.0.2.3 DB PSU Patch (See Note 1)
Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 3.8.13-13 and later UEK 3.8.13 kernels | X86_64 | 12.1.0.2.3 DB PSU Patch (See Note 1)
Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 4.1.12 and later UEK 4.1.12 kernels | X86_64 | 12.1.0.2.160719 (Base Bug 22810422)
Oracle Linux – RedHat Compatible Kernel | 7 | Update 0, RedHat Compatible Kernel 3.10.0-123 | X86_64 | 12.1.0.2.3 (Base Bug 18321597)
Oracle Linux – RedHat Compatible Kernel | 7 | Update 1 and later, 3.10.0-229 and later RedHat Compatible kernels | X86_64 | 12.1.0.2.160119 (Base Bug 21233961)
Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 3.8.13-35 and later UEK 3.8.13 kernels | X86_64 | 12.1.0.2.3 (Base Bug 18321597); Base (See Note 1)
Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 4.1.12 and later UEK 4.1.12 kernels | X86_64 | 12.1.0.2.160719 (Base Bug 22810422)
RedHat Linux | 5 | Update 3 and later, 2.6.18 kernel series | X86_64 | 12.1.0.2.3 DB PSU Patch
RedHat Linux | 6 | All Updates, 2.6.32-279 and later RedHat kernels | X86_64 | 12.1.0.2.3 DB PSU Patch
RedHat Linux | 7 | Update 0, 3.10.0-123 kernel | X86_64 | 12.1.0.2.3 (Base Bug 18321597)
RedHat Linux | 7 | Update 1 and later, 3.10.0-229 and later RedHat kernels | X86_64 | 12.1.0.2.160119 (Base Bug 21233961)
RedHat Linux | 7 | Update 4 | X86_64 | 12.1.0.2.170718 ACFS PSU + Patch 26247490
Novell SLES | 11 | SP2 | X86_64 | Base
Novell SLES | 11 | SP3 | X86_64 | Base
Novell SLES | 11 | SP4 | X86_64 | 12.1.0.2.160419 (Base Bug 21231953)
Novell SLES | 12 | SP1 | X86_64 | 12.1.0.2.170117 ACFS PSU (Base Bug 23321114)

 

12c New Feature: ASMFD

Date: 2016-05-17 22:32:08 | Author: ohsdba

Starting with 12.1.0.2, Oracle introduced ASMFD (ASM Filter Driver), which is available only on the Linux platform. After installing Grid Infrastructure, you can decide whether to configure it. If ASMLIB (which, roughly, labels devices to identify disks) or udev (which manages devices dynamically) was used before, then after migrating to ASMFD you must uninstall ASMLIB or disable the udev rules. The filter driver screens out invalid requests, preventing accidental overwrites by non-Oracle I/O and so keeping the system safe and stable.

 

The official documentation describes ASMFD as follows:

This feature is available on Linux systems starting with Oracle Database 12c Release 1 (12.1.0.2). 

Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks. After installation of Oracle Grid Infrastructure, you can optionally configure Oracle ASMFD for your system. If ASMLIB is configured for an existing Oracle ASM installation, then you must explicitly migrate the existing ASMLIB configuration to Oracle ASMFD.

The Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.

 

The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.

ASMFD rejects all invalid I/O requests. This behaviour prevents accidental overwrites from corrupting the ASM disks or the files in a disk group; for example, it filters out all non-Oracle I/O requests that could cause an overwrite.

This article demonstrates installing and configuring ASMFD in an Oracle Restart environment (tested on 12.1.0.2.0): first install GI (Install Software Only), then configure ASMFD, label the ASMFD disks, create the ASM instance, create an ASM disk group (on ASMFD), and create an spfile and migrate it into the ASM disk group. Finally, test with the filter enabled and with it disabled.
For details, see: http://docs.oracle.com/database/121/OSTMG/GUID-06B3337C-07A3-4B3F-B6CD-04F2916C11F6.htm

Configure Oracle Restart (SIHA)
[root@db1 ~]# /orgrid/oracle/product/121/root.sh  
Performing root user operation. 
The following environment variables are set as: 
    ORACLE_OWNER= orgrid 
    ORACLE_HOME=  /orgrid/oracle/product/121 

Enter the full pathname of the local bin directory: [/usr/local/bin]:  
The contents of "dbhome" have not changed. No need to overwrite. 
The contents of "oraenv" have not changed. No need to overwrite. 
The contents of "coraenv" have not changed. No need to overwrite. 

Entries will be added to the /etc/oratab file as needed by 
Database Configuration Assistant when a database is created 
Finished running generic part of root script. 
Now product-specific root actions will be performed. 

To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user: 
/orgrid/oracle/product/121/perl/bin/perl -I/orgrid/oracle/product/121/perl/lib -I/orgrid/oracle/product/121/crs/install /orgrid/oracle/product/121/crs/install/roothas.pl   ----> run this script to configure HAS; it does not need to be run in a GUI

To configure Grid Infrastructure for a Cluster execute the following command as orgrid user: 

/orgrid/oracle/product/121/crs/config/config.sh  

Install GI with the software-only option. To configure RAC instead, you need to run the config.sh script (it must run in GUI mode); it will prompt for the cluster and SCAN information. Try it if you are interested.

This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media. 

[root@db1 ~]#  
[root@db1 ~]# /orgrid/oracle/product/121/perl/bin/perl -I/orgrid/oracle/product/121/perl/lib -I/orgrid/oracle/product/121/crs/install /orgrid/oracle/product/121/crs/install/roothas.pl 
Using configuration parameter file: /orgrid/oracle/product/121/crs/install/crsconfig_params 
LOCAL ADD MODE  
Creating OCR keys for user 'orgrid', privgrp 'asmadmin'.. 
Operation successful. 
LOCAL ONLY MODE  
Successfully accumulated necessary OCR keys. 
Creating OCR keys for user 'root', privgrp 'root'.. 
Operation successful. 
CRS-4664: Node db1 successfully pinned. 
2016/05/16 22:10:54 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf' 

db1     2016/05/16 22:11:11     /orgrid/oracle/product/121/cdata/db1/backup_20160516_221111.olr     0      
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db1' 
CRS-2673: Attempting to stop 'ora.evmd' on 'db1' 
CRS-2677: Stop of 'ora.evmd' on 'db1' succeeded 
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db1' has completed 
CRS-4133: Oracle High Availability Services has been stopped. 
CRS-4123: Oracle High Availability Services has been started. 
2016/05/16 22:12:19 CLSRSC-327: Successfully configured Oracle Restart for a standalone server 

Bind the disks with udev, load the rules, and verify
[root@db1 ~]# cd /etc/udev/rules.d/ 
[root@db1 rules.d]# cat 99-oracle-asmdevices.rules  

KERNEL=="sdb1",NAME="asmdisk1",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb2",NAME="asmdisk2",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb3",NAME="asmdisk3",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb4",NAME="asmdisk4",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc1",NAME="asmdisk5",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc2",NAME="asmdisk6",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc3",NAME="asmdisk7",OWNER="orgrid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdc4",NAME="asmdisk8",OWNER="orgrid",GROUP="asmadmin",MODE="0660"  
[root@db1 rules.d]# 
[root@db1 rules.d]# udevadm control --reload-rules
[root@db1 rules.d]# udevadm trigger
[root@db1 rules.d]# ls -l /dev/asmdisk*
brw-rw---- 1 orgrid asmadmin 8, 17 May 16 23:03 /dev/asmdisk1
brw-rw---- 1 orgrid asmadmin 8, 18 May 16 23:03 /dev/asmdisk2
brw-rw---- 1 orgrid asmadmin 8, 19 May 16 23:03 /dev/asmdisk3
brw-rw---- 1 orgrid asmadmin 8, 20 May 16 23:03 /dev/asmdisk4
brw-rw---- 1 orgrid asmadmin 8, 33 May 16 23:03 /dev/asmdisk5
brw-rw---- 1 orgrid asmadmin 8, 34 May 16 23:03 /dev/asmdisk6
brw-rw---- 1 orgrid asmadmin 8, 35 May 16 23:03 /dev/asmdisk7
brw-rw---- 1 orgrid asmadmin 8, 36 May 16 23:03 /dev/asmdisk8
[root@db1 rules.d]#

For more on udev, see http://www.ibm.com/developerworks/cn/linux/l-cn-udev/

 

Check whether ASMFD is installed
[root@db1 ~]# export ORACLE_HOME=/orgrid/oracle/product/121 
[root@db1 ~]# export ORACLE_SID=+ASM 
[root@db1 ~]# export PATH=$ORACLE_HOME/bin:$PATH 

[root@db1 ~]#  $ORACLE_HOME/bin/asmcmd afd_state

Connected to an idle instance.

ASMCMD-9526: The AFD state is  'NOT INSTALLED'  and filtering is 'DEFAULT' on host 'db1' 
[root@db1 ~]#  

Install ASMFD (the CRS (RAC) / HAS (SIHA) stack must be stopped first)

[root@db1 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
Connected to an idle instance.
ASMCMD-9523: command cannot be used when Oracle Clusterware stack is up
[root@db1 ~]# crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db1'
CRS-2673: Attempting to stop 'ora.evmd' on 'db1'
CRS-2677: Stop of 'ora.evmd' on 'db1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@db1 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
[root@db1 ~]#


View the ASMFD details
[orgrid@db1 ~]$ $ORACLE_HOME/bin/asmcmd afd_state 
Connected to an idle instance. 
ASMCMD-9526: The AFD state is ' LOADED ' and filtering is 'DEFAULT' on host 'db1' 

[root@db1 ~]# /orgrid/oracle/product/121/bin/crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@db1 ~]#
[orgrid@db1 ~]$
[orgrid@db1 bin]$ pwd
/orgrid/oracle/product/121/bin
[orgrid@db1 bin]$ ls -ltr afd*
-rwxr-x--- 1 orgrid asmadmin     1000 May 23  2014 afdroot
-rwxr-xr-x 1 orgrid asmadmin 72836515 Jul  1  2014 afdboot
-rwxr-xr-x 1 orgrid asmadmin   184403 Jul  1  2014 afdtool.bin
-rwxr-x--- 1 orgrid asmadmin      766 May 16 23:29 afdload
-rwxr-x--- 1 orgrid asmadmin     1254 May 16 23:29 afddriverstate
-rwxr-xr-x 1 orgrid asmadmin     2829 May 16 23:29 afdtool

[root@db1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[root@db1 ~]#

After a successful installation, you can see some afd* files, as well as the ora.driver.afd resource.

Label the disks with afd_label
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK1 /dev/asmdisk1 

Connected to an idle instance.
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK2 /dev/asmdisk2
Connected to an idle instance.
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
ASMDISK2                    ENABLED   /dev/asmdisk2
[orgrid@db1 bin]$ asmcmd
Connected to an idle instance.
ASMCMD> afd_label ASMDISK3 /dev/asmdisk3
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
ASMDISK2                    ENABLED   /dev/asmdisk2
ASMDISK3                    ENABLED   /dev/asmdisk3
ASMCMD>

 

[root@db1 rules.d]# ls -ltr|tail -5
-rw-r--r--. 1 root root  789 Mar 10 05:18 70-persistent-cd.rules
-rw-r--r--. 1 root root  341 Mar 10 05:25 99-vmware-scsi-udev.rules
-rw-r--r--  1 root root  190 May 16 22:11 55-usm.rules
-rw-r--r--  1 root root  600 May 16 23:03 99-oracle-asmdevices.rules
-rw-r--r--  1 root root  230 May 17 00:31 53-afd.rules
[root@db1 rules.d]#
[orgrid@db1 rules.d]$ pwd
/etc/udev/rules.d
[root@db1 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="orgrid", GROUP="asmadmin", MODE="0770"
KERNEL=="oracleafd/*", OWNER="orgrid", GROUP="asmadmin", MODE="0770"
KERNEL=="oracleafd/disks/*", OWNER="orgrid", GROUP="asmadmin", MODE="0660"
[root@db1 rules.d]# cat 55-usm.rules
#
# ADVM devices
KERNEL=="asm/*",      GROUP="asmadmin", MODE="0770"
KERNEL=="asm/.*",     GROUP="asmadmin", MODE="0770"
#
# ACFS devices
KERNEL=="ofsctl",     GROUP="asmadmin", MODE="0664"
[root@db1 rules.d]#

After installation, some new files appear under the udev rules directory; ASMFD in fact still uses udev.

Create the ASM instance (it can also be created via asmca)

[orgrid@db1 dbs]$ srvctl add asm
[orgrid@db1 dbs]$ ps -ef|grep pmon
orgrid    42414  36911  0 14:26 pts/2    00:00:00 grep pmon
[orgrid@db1 dbs]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               OFFLINE OFFLINE      db1                      STABLE
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[orgrid@db1 dbs]$
[orgrid@db1 ~]$ cat $ORACLE_HOME/dbs/init*.ora
*.asm_power_limit=1
*.diagnostic_dest='/orgrid/grid_base'
*.instance_type='asm'
*.large_pool_size=12M
*.memory_target=1024M
*.remote_login_passwordfile='EXCLUSIVE'
[orgrid@db1 ~]$
[orgrid@db1 ~]$ ps -ef|grep pmon
orgrid    42724  42694  0 14:30 pts/2    00:00:00 grep pmon
[orgrid@db1 ~]$ srvctl start asm
[orgrid@db1 ~]$ ps -ef|grep pmon
orgrid    42807      1  0 14:30 ?        00:00:00 asm_pmon_+ASM
orgrid    42888  42694  0 14:31 pts/2    00:00:00 grep pmon
[orgrid@db1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               ONLINE  ONLINE       db1                      Started,STABLE
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       db1                      STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[orgrid@db1 ~]$


Create the disk group via asmca
[orgrid@db1 ~]$ asmca -silent  -sysAsmPassword oracle -asmsnmpPassword oracle -createDiskGroup -diskString 'AFD:*' -diskGroupName DATA_AFD -disk 'AFD:ASMDISK1' -disk 'AFD:ASMDISK2' -redundancy Normal -au_size 4  -compatible.asm 12.1 -compatible.rdbms 12.1 
Disk Group DATA_AFD created successfully. 
[orgrid@db1 ~]$  

Create an spfile and migrate it into the disk group

[orgrid@db1 ~]$ asmcmd spget
[orgrid@db1 ~]$
[orgrid@db1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Tue May 17 15:09:26 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
SQL> show parameter spf
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
SQL> create spfile='+DATA_AFD' from pfile;
File created.
SQL> show parameter spf
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
[orgrid@db1 ~]$ asmcmd spget
+DATA_AFD/ASM/ASMPARAMETERFILE/registry.253.912092995
[orgrid@db1 ~]$

 

Back up and remove the udev rules file 99-oracle-asmdevices.rules

Rename 99-oracle-asmdevices.rules to 99-oracle-asmdevices.rules.bak. If the file is not moved, the disks previously labeled by ASMFD will not be visible after the next reboot.
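The rename can be scripted. A minimal sketch, assuming the rules file lives in the usual /etc/udev/rules.d; it is demonstrated here against a temporary directory so it is safe to run anywhere:

```shell
#!/bin/sh
# Demo in a temp dir standing in for /etc/udev/rules.d, so nothing real is touched.
RULES_DIR=$(mktemp -d)
RULES_FILE="$RULES_DIR/99-oracle-asmdevices.rules"
: > "$RULES_FILE"                      # stand-in for the real rules file

# Rename rather than delete, so the udev binding can be restored if needed.
mv "$RULES_FILE" "$RULES_FILE.bak"

ls "$RULES_DIR"
```

On a real system the same `mv` against /etc/udev/rules.d (as root) is all that is needed; the rule stops applying after the next reboot or udev reload.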

[orgrid@db1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.
[root@db1 ~]# ls -l /dev/oracleafd/disks
total 0
[root@db1 ~]# ls -l /dev/oracleafd/
admin  disks/


Set the disk discovery strings

ASMCMD> afd_dsget
AFD discovery string:
ASMCMD> afd_dsset '/dev/sd*'       --set the ASMFD discovery string to the original physical disk paths
ASMCMD> afd_dsget
AFD discovery string: '/dev/sd*'
ASMCMD>
[orgrid@db1 ~]$ asmcmd afd_dsget
AFD discovery string: '/dev/sd*'
[orgrid@db1 ~]$ asmcmd dsget       --the ASM disk group discovery string is set to AFD:*
parameter:AFD:*
profile:AFD:*
[orgrid@db1 ~]$


Reboot the server and verify

[root@db1 ~]# ls -l /dev/oracleafd/disks/
total 12
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK1
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK2
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK3
[root@db1 ~]#
ASMCMD> lsdsk  --candidate   
Path
AFD:ASMDISK2
AFD:ASMDISK3
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD>
[orgrid@db1 ~]$ ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 10 May 17 00:30 ASMDISK1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 17 00:30 ASMDISK2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 May 17 00:31 ASMDISK3 -> ../../sdb3
[orgrid@db1 ~]$

After the reboot, the disks used by ASMFD are now owned by root.

Enable the filtering feature

ASMCMD> help afd_filter
afd_filter
        Sets the AFD filtering mode on a given disk path.
        If the command is executed without specifying a disk path then
        filtering is set at node level.
ASMCMD>
ASMCMD> afd_filter -e /dev/sdb2
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD> afd_filter -e
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/sdb1
ASMDISK2                    ENABLED   /dev/sdb2
ASMDISK3                    ENABLED   /dev/sdb3
ASMCMD>


Create a new disk group DATA_PGOLD
SQL> create diskgroup DATA_PGOLD external redundancy disk 'AFD:ASMDISK3'; 
Diskgroup created. 

SQL>  
[orgrid@db1 ~]$ kfed read AFD:ASMDISK3 

kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                   771071217 ; 0x00c: 0x2df59cf1
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK3 ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    843797321 ; 0x00c: 0x324b5349
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                ASMDISK3 ; 0x028: length=8
kfdhdb.grpname:              DATA_PGOLD ; 0x048: length=10
kfdhdb.fgname:                 ASMDISK2 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33035808 ; 0x0a8: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.crestmp.lo:           3231790080 ; 0x0ac: USEC=0x0 MSEC=0x4d SECS=0xa MINS=0x30
kfdhdb.mntstmp.hi:             33035808 ; 0x0b0: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.mntstmp.lo:           3239631872 ; 0x0b4: USEC=0x0 MSEC=0x237 SECS=0x11 MINS=0x30
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    2055 ; 0x0c4: 0x00000807
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33035808 ; 0x0e4: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.grpstmp.lo:           3231717376 ; 0x0e8: USEC=0x0 MSEC=0x6 SECS=0xa MINS=0x30


Test with dd while the filtering feature is enabled

[root@db1 log]# dd if=/dev/zero of=/dev/sdb3
dd: writing to `/dev/sdb3': No space left on device
4209031+0 records in
4209030+0 records out
2155023360 bytes (2.2 GB) copied, 235.599 s, 9.1 MB/s
[root@db1 log]#

 

[root@db1 ~]# strings -a /dev/sdb3
ORCLDISKASMDISK3
ASMDISK3
DATA_PGOLD
ASMDISK3
0        
... (part of the output omitted)
ORCLDISKASMDISK3
ASMDISK3
DATA_PGOLD
ASMDISK3
[root@db1 ~]#
[root@db1 ~]#

Viewing /dev/sdb3 with strings shows that its contents were not wiped.

 

Dismounting and remounting the disk group works normally

[orgrid@db1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD> umount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
ASMCMD> mount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD>


Error messages shown in /var/log/messages
[root@db1 log]# tail -3 messages 

May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173]  afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18)  not supported i=2 start=8418038 seccnt=2  pstart=4209030  pend=8418060
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173]  afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18)  not supported i=2 start=8418040 seccnt=2  pstart=4209030  pend=8418060
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173]  afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18)  not supported i=2 s
[root@db1 log]#


Test with the filtering feature disabled

ASMCMD> afd_filter -d
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD> exit
[orgrid@db1 ~]$


Back up the first 1024 bytes of the disk and then wipe them; an ordinary user has no permission to read the device

[orgrid@db1 ~]$ dd if=/dev/sdb3 of=block1 bs=1024 count=1
dd: opening `/dev/sdb3': Permission denied
[orgrid@db1 ~]$ exit
logout
[root@db1 ~]# dd if=/dev/sdb3 of=block1 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00236493 s, 433 kB/s
[root@db1 ~]# dd if=/dev/zero of=/dev/sdb3 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000458225 s, 2.2 MB/s
[root@db1 ~]# su - orgrid


Dismount and mount disk group DATA_PGOLD
[orgrid@db1 ~]$ asmcmd 
ASMCMD> lsdg 
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name 
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/ 
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/ 
ASMCMD> umount data_pgold 
ASMCMD> mount data_pgold 
ORA-15032: not all alterations performed 
ORA-15017: diskgroup "DATA_PGOLD" cannot be mounted 
ORA-15040: diskgroup is incomplete (DBD ERROR: OCIStmtExecute) 
ASMCMD>  
As shown, once the filtering feature is disabled, the protection is lost.

Repair with kfed
[root@db1 ~]# /orgrid/oracle/product/121/bin/kfed repair /dev/sdb3 
[root@db1 ~]# su - orgrid 
[orgrid@db1 ~]$ asmcmd 
ASMCMD> lsdg 
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name 
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/ 
ASMCMD> mount data_pgold 
ASMCMD> lsdg 
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name 
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/ 
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/ 
ASMCMD> 

Repair with the block previously backed up by dd
[root@db1 ~]# dd if=block1 of=/dev/sdb3 bs=1024 count=1 conv=notrunc 
1+0 records in 
1+0 records out 
1024 bytes (1.0 kB) copied, 0.000467297 s, 2.2 MB/s 
[root@db1 ~]# su - orgrid 
[orgrid@db1 ~]$ asmcmd 
ASMCMD> mount data_pgold 
ASMCMD> lsdg 
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name 
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/ 
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/ 
ASMCMD> exit 
[orgrid@db1 ~]$ 

Add an AFD disk; an ordinary user has no permission to label disks, root must be used
ASMCMD> help afd_label 
afd_label 
        To set the given label to the specified disk 
ASMCMD> 

[orgrid@db1 ~]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK4 /dev/sdb4
ORA-15227: could not perform label set/clear operation
ORA-15031: disk specification '/dev/sdb4' matches no disks (DBD ERROR: OCIStmtExecute)
ASMCMD-9513: ASM disk label set operation failed.
[root@db1 ~]# /orgrid/oracle/product/121/bin/asmcmd afd_label ASMDISK4 /dev/sdb4
Connected to an idle instance.
[root@db1 ~]#


FAQ

Q: Running afd_configure fails with "ASMCMD-9524: AFD configuration failed 'ERROR: OHASD start failed'"
A: If this error occurs during installation, apply p19035573_121020_Generic.zip; the patch is essentially a single asmcmdsys.pm file.

Q: When is afd_label --migrate needed?
A: When migrating disks of an existing disk group to ASMFD, add the --migrate option; otherwise it is not needed.

 

Reference

Configure ASMFD

http://docs.oracle.com/database/121/OSTMG/GUID-2F5E344F-AFC2-4768-8C00-6F3C56302123.htm#OSTMG95729

http://docs.oracle.com/database/121/OSTMG/GUID-BB2B3A64-4B83-4A6D-816C-6472FAF9B27A.htm#OSTMG95909 
Configure in Restart 

http://docs.oracle.com/database/121/OSTMG/GUID-06B3337C-07A3-4B3F-B6CD-04F2916C11F6.htm

http://www.ibm.com/developerworks/cn/linux/l-cn-udev/ 

https://wiki.archlinux.org/index.php/Udev#Setting_static_device_names

 

 

ASMFD 12.2.0.1  Supported Platforms

| Vendor | Version | Update/Kernel | Architecture | Bug or PSU |
|--------|---------|---------------|--------------|------------|
| Oracle Linux – RedHat Compatible Kernel | 6 | All Updates, 2.6.32-71 and later 2.6.32 RedHat Compatible kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 2.6.39-100 and later UEK 2.6.39 kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 3.8.13-13 and later UEK 3.8.13 kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 4.1 and later UEK 4.1 kernels | X86_64 | Base |
| Oracle Linux – RedHat Compatible Kernel | 7 | GA release, 3.10.0-123 through 3.10.0-513 | X86_64 | Base |
| Oracle Linux – RedHat Compatible Kernel | 7 | Update 3, 3.10.0-514 and later | X86_64 | Base + Patch 25078431 |
| Oracle Linux – RedHat Compatible Kernel | 7 | Update 4, 3.10.0-663 and later RedHat Compatible Kernels | X86_64 | 12.2.0.1.180116 (Base Bug 26247490) |
| Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 3.8.13-35 and later UEK 3.8.13 kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 4.1 and later UEK 4.1 kernels | X86_64 | Base |
| RedHat Linux | 6 | All Updates, 2.6.32-279 and later RedHat kernels | X86_64 | Base |
| RedHat Linux | 7 | GA release, 3.10.0-123 through 3.10.0-513 | X86_64 | Base |
| RedHat Linux | 7 | Update 4, 3.10.0-663 and later RedHat Compatible Kernels | X86_64 | 12.2.0.1.180116 (Base Bug 26247490) |
| Novell SLES | 12 | GA, SP1 | X86_64 | Base |
| Solaris | 10 | Update 10 or later | X86_64 and SPARC64 | Base |
| Solaris | 11 | Update 10 and later | X86_64 and SPARC64 | Base |

 

Resolving the inability to use the AFD (Oracle ASMFD) feature on RHEL/CentOS 7.4 and later

On RedHat or CentOS 7.4 and later, AFD configuration fails with "AFD is not 'supported'". According to MOS, kernels on RedHat 7.4 and later require an upgraded kmod package.
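As a quick pre-check, the installed kmod version can be compared against 20-21, the version this walkthrough ends up upgrading to before AFD would configure. A sketch using GNU `sort -V`; the version strings are in the format `rpm -qa` prints:

```shell
#!/bin/sh
# Prints "yes" if the given kmod version (e.g. "20-15") is at least 20-21,
# the version that made AFD configurable in this walkthrough.
meets_min() {
    [ "$(printf '%s\n20-21\n' "$1" | sort -V | head -n1)" = "20-21" ] \
        && echo yes || echo no
}
meets_min 20-15   # older than 20-21: needs the upgrade
meets_min 20-21
```

Feed it the version extracted from `rpm -q --qf '%{VERSION}-%{RELEASE}\n' kmod` (release suffixes such as `.el7` are still ordered correctly by version sort).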

 

1. Load AFD

 

[root@rac1 ~]# asmcmd afd_configure

ASMCMD-9520: AFD is not 'supported' ----- only reports that AFD is unsupported, with no further detail

 

2. Check which kernel versions AFD supports

 

[root@rac1 ~]# afdroot install

AFD-620: AFD is not supported on this operating system version: '3.10.0-693.el7.x86_64'

 

[root@rac1 ~]#  acfsdriverstate -orahome $ORACLE_HOME supported

ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-693.el7.x86_64'

ACFS-9201: Not Supported 

 

 

[root@rac1 ~]# uname -a       

Linux rac1 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux

---- the current kernel version is indeed unsupported

 

[root@rac1 ~]# cat /etc/redhat-release 

Red Hat Enterprise Linux Server release 7.4 (Maipo)

 

3. Check the kmod version

 

[root@rac2 ~]# rpm -qa|grep kmod

kmod-libs-20-15.el7.x86_64

kmod-20-15.el7.x86_64   ---- version 20-15

 

4. Upgrade kmod

 

[root@rac1 yum.repos.d]# yum install kmod

Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Local                                                                                           | 3.6 kB  00:00:00     

(1/2): Local/group_gz                                                                           | 166 kB  00:00:00     

(2/2): Local/primary_db                                                                         | 3.1 MB  00:00:00     

Resolving Dependencies

--> Running transaction check

---> Package kmod.x86_64 0:20-15.el7 will be updated

---> Package kmod.x86_64 0:20-21.el7 will be an update

--> Finished Dependency Resolution

 

Dependencies Resolved

 

=======================================================================================================================

 Package                   Arch                        Version                        Repository                  Size

=======================================================================================================================

Updating:

 kmod                      x86_64                      20-21.el7                      Local                      121 k

 

Transaction Summary

=======================================================================================================================

Upgrade  1 Package

 

Total download size: 121 k

Is this ok [y/d/N]: y

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Warning: RPMDB altered outside of yum.

  Updating   : kmod-20-21.el7.x86_64                                                                               1/2 

  Cleanup    : kmod-20-15.el7.x86_64                                                                               2/2 

  Verifying  : kmod-20-21.el7.x86_64                                                                               1/2 

  Verifying  : kmod-20-15.el7.x86_64                                                                               2/2 

 

Updated:

  kmod.x86_64 0:20-21.el7                                                                                              

 

Complete!

 

5. Check kmod again

 

[grid@rac1:/home/grid]$rpm -qa|grep kmod

kmod-libs-20-15.el7.x86_64

kmod-20-21.el7.x86_64  ---> upgraded to version 20-21

 

6. Check the AFD driver support

 

[root@rac1 yum.repos.d]#  acfsdriverstate -orahome $ORACLE_HOME supported

ACFS-9200: Supported    ---- after upgrading kmod, the AFD driver is supported

 

7. Reinstall AFD

Load and configure AFD

 

[root@rac1 yum.repos.d]# asmcmd afd_configure

AFD loading output:

 

AFD-627: AFD distribution files found.

AFD-634: Removing previous AFD installation.

AFD-635: Previous AFD components successfully removed.

AFD-636: Installing requested AFD software.

AFD-637: Loading installed AFD drivers.

AFD-9321: Creating udev for AFD.

AFD-9323: Creating module dependencies - this may take some time.

AFD-9154: Loading 'oracleafd.ko' driver.

AFD-649: Verifying AFD devices.

AFD-9156: Detecting control device '/dev/oracleafd/admin'.

AFD-638: AFD installation correctness verified.

Modifying resource dependencies - this may take some time.

 

Query the AFD state

 

[root@rac1 yum.repos.d]# asmcmd afd_state

ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac1'

 

No errors; the configuration succeeded.

 

MOS Doc ID: 2303388.1

ACFS and AFD report "Not Supported" after installing appropriate Oracle Grid Infrastructure Patches on RedHat (Doc ID 2303388.1)

 

About kmod

https://www.linux.org/docs/man8/kmod.html

The page above is the Linux man page for kmod. In short, kmod is the tool that manages Linux kernel modules; users do not normally invoke kmod directly, it is called by other system commands.

In the /sbin directory, kmod is the target of symlinks; many module commands redirect to it:

 

[grid@rac1:/sbin]$pwd

/sbin

[grid@rac1:/sbin]$ls -alt | grep kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 insmod -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 lsmod -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 modinfo -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 modprobe -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 rmmod -> ../bin/kmod

lrwxrwxrwx   1 root root           11 Feb 15 17:11 depmod -> ../bin/kmod
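The dispatch-on-name trick behind those symlinks can be reproduced in a few lines. A toy sketch (not kmod itself) showing how one binary behaves differently depending on the name it is invoked through:

```shell
#!/bin/sh
# One script, several names: behavior is chosen from $0, as kmod does.
DIR=$(mktemp -d)
cat > "$DIR/kmod-demo" <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    lsmod)   echo "acting as lsmod" ;;
    modinfo) echo "acting as modinfo" ;;
    *)       echo "acting as kmod" ;;
esac
EOF
chmod +x "$DIR/kmod-demo"
ln -s kmod-demo "$DIR/lsmod"      # same file, different entry name
ln -s kmod-demo "$DIR/modinfo"
"$DIR/lsmod"
"$DIR/modinfo"
```

This is why `ls -alt | grep kmod` in /sbin shows insmod, lsmod, modinfo, modprobe, rmmod, and depmod all pointing at the same ../bin/kmod binary.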

 

Oracle ASM Filter Driver (ASMFD) – New Features for Oracle ASM 12.1.0.2

Author: 张乐奕

Published December 2014

What is the Oracle ASM Filter Driver (ASMFD)?

In short, it is a new feature that replaces both ASMLIB and udev configuration, and it additionally provides the I/O Filter capability reflected in its name. ASMFD is currently available only on Linux and requires the latest Oracle ASM 12.1.0.2. Previously, because Linux does not guarantee the discovery order of block devices, the disk that was /dev/sda before a reboot was not guaranteed to still be sda afterwards, so raw device names could not be used directly as ASM disk paths. ASMLIB solved this by giving each device a fixed name via a label, which ASM then used to create its disks. Later, downloading ASMLIB came to require a ULN account, so everyone switched to udev, and I wrote several articles on how to set udev rules on Linux, for example:

How to use udev for Oracle ASM in Oracle Linux 6

Oracle Datafiles & Block Device & Parted & Udev

Oracle now offers ASMFD, which replaces both ASMLIB and the tedium of hand-written udev rules files in one step, and, most importantly, adds the I/O Filter capability.
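For comparison, this is the kind of hand-maintained rule that ASMFD makes unnecessary. A sketch only: the ID_SERIAL value, symlink name, and owner/group below are invented examples, not taken from this environment:

```shell
#!/bin/sh
# Print a sample udev rule of the sort that used to bind an ASM disk to a
# stable name; with ASMFD, the AFD label replaces this file entirely.
RULE='KERNEL=="sd?1", SUBSYSTEM=="block", ENV{ID_SERIAL}=="36000c29fexample0001", SYMLINK+="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"'
echo "$RULE"
```

One such line per disk had to be written and kept in sync by hand in /etc/udev/rules.d; with AFD the label travels with the disk header instead.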

What is the I/O Filter capability?

The documentation describes it as follows:

The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.

In other words, the feature rejects all invalid I/O requests. Its main purpose is to prevent accidental overwrites of the devices underlying ASM disks; the tests below show that even an operation as extreme as a full-disk dd zeroing by the root user is filtered out.

So how is this feature enabled?

Typically an existing ASM installation uses devices bound by ASMLIB or udev, so the following describes how to relabel those original device names as new AFD device names.

--Check the current state of the ASMFD module (AFD below): not yet installed.
[grid@dbserver1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'NOT INSTALLED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'
 
--Get the current ASM disk discovery path; here it uses udev-bound names.
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*
profile:/dev/asm*
 
--Set the ASM disk string to include the new Disk String.
--When this parameter is set, ASM checks the disks already mounted; if any falls outside the discovery path, an error is raised.
[grid@dbserver1 ~]$ asmcmd dsset AFD:*
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path '/dev/asm-disk7' is not in the discovery set (DBD ERROR: OCIStmtExecute)
 
--So the new path must be appended after the original one, giving multiple paths. The operation runs for a while, depending on how many ASM disks there are.
[grid@dbserver1 ~]$ asmcmd dsset '/dev/asm*','AFD:*'
 
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*, AFD:*
profile:/dev/asm*,AFD:*
 
--Check the nodes in the current GI environment.
[grid@dbserver1 ~]$ olsnodes -a
dbserver1       Hub
dbserver2       Hub
 
--The following commands must be run on all Hub nodes; a rolling approach may be used. Some of them require the root user and some the grid user; note that the # and $ prompts indicate different users.
--First stop CRS
[root@dbserver1 ~]# crsctl stop crs
 
--Run AFD configure; this actually unpacks the package, installs it, and loads the driver, which takes some time
[root@dbserver1 ~]# asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
 
--Afterwards, check the AFD state again; it now shows LOADED.
[root@dbserver1 ~]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'
 
--Next, set AFD's own disk discovery path; this is very much like the old ASMLIB workflow.
--CRS must be started before setting the AFD discovery string, or the errors below occur. This also shows the string is stored in each node's own OLR, so it must be set on every node.
[root@dbserver1 ~]# asmcmd afd_dsget
Connected to an idle instance.
ASMCMD-9511: failed to obtain required AFD disk string from Oracle Local Repository
[root@dbserver1 ~]#
[root@dbserver1 ~]# asmcmd afd_dsset '/dev/sd*'
Connected to an idle instance.
ASMCMD-9512: failed to update AFD disk string in Oracle Local Repository.
 
--Start CRS
[root@dbserver1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
 
--At this point the ASM alert log shows the disks still mounted via the original paths, but also that libafd has loaded successfully.
2014-11-20 17:01:04.545000 +08:00
NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libafd12.so
ORACLE_BASE from environment = /u03/app/grid
SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:9:3} */
NOTE: Diskgroup used for Voting files is:
  CRSDG
Diskgroup with spfile:CRSDG
NOTE: Diskgroup used for OCR is:CRSDG
NOTE: Diskgroups listed in ASM_DISKGROUP are
  DATADG
NOTE: cache registered group CRSDG 1/0xB8E8EA0B
NOTE: cache began mount (first) of group CRSDG 1/0xB8E8EA0B
NOTE: cache registered group DATADG 2/0xB8F8EA0C
NOTE: cache began mount (first) of group DATADG 2/0xB8F8EA0C
NOTE: Assigning number (1,2) to disk (/dev/asm-disk3)
NOTE: Assigning number (1,1) to disk (/dev/asm-disk2)
NOTE: Assigning number (1,0) to disk (/dev/asm-disk1)
NOTE: Assigning number (1,5) to disk (/dev/asm-disk10)
NOTE: Assigning number (1,3) to disk (/dev/asm-disk8)
NOTE: Assigning number (1,4) to disk (/dev/asm-disk9)
NOTE: Assigning number (2,3) to disk (/dev/asm-disk7)
NOTE: Assigning number (2,2) to disk (/dev/asm-disk6)
NOTE: Assigning number (2,1) to disk (/dev/asm-disk5)
NOTE: Assigning number (2,5) to disk (/dev/asm-disk12)
NOTE: Assigning number (2,0) to disk (/dev/asm-disk4)
NOTE: Assigning number (2,6) to disk (/dev/asm-disk13)
NOTE: Assigning number (2,4) to disk (/dev/asm-disk11)
 
--Set afd_ds to the underlying device names of the ASM disks, so udev rules no longer need to be configured by hand.
[grid@dbserver1 ~]$ asmcmd afd_dsset '/dev/sd*'
 
[grid@dbserver1 ~]$ asmcmd afd_dsget
AFD discovery string: /dev/sd*
 
--During my test I made a mistake above: I set the path to "dev/sd*", missing the leading slash, so no disks were discovered here. If this step already discovers disks in your test, please let me know.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.
 
--A reminder: every command up to this point must be executed on all nodes of the cluster.
--Next, switch the original ASM disk paths from udev to AFD.
--First check the current disk paths
[root@dbserver1 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :     +CRSDG
 
[root@dbserver1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4838a0ee7bfa4fbebf8ff9f58642c965 (/dev/asm-disk1) [CRSDG]
 2. ONLINE   72057097a36e4f02bfc7b5e23672e4cc (/dev/asm-disk2) [CRSDG]
 3. ONLINE   7906e2fb24d24faebf9b82bba6564be3 (/dev/asm-disk3) [CRSDG]
Located 3 voting disk(s).
 
[root@dbserver1 ~]# su - grid
[grid@dbserver1 ~]$ asmcmd lsdsk -G CRSDG
Path
/dev/asm-disk1
/dev/asm-disk10
/dev/asm-disk2
/dev/asm-disk3
/dev/asm-disk8
/dev/asm-disk9
 
--Because the disks holding the OCR are being changed, stop the Cluster first.
[root@dbserver1 ~]# crsctl stop cluster -all
 
--Labeling directly fails, because /dev/asm-disk1 is already provisioned for ASM.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1
Connected to an idle instance.
ASMCMD-9513: ASM disk label set operation failed.
disk /dev/asm-disk1 is already provisioned for ASM
 
--The migrate keyword must be added for the operation to succeed.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
Connected to an idle instance.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/asm-disk1
 
--My test ASM has 13 disks in total, so each is relabeled in turn.
asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
asmcmd afd_label asmdisk02 /dev/asm-disk2 --migrate
asmcmd afd_label asmdisk03 /dev/asm-disk3 --migrate
asmcmd afd_label asmdisk04 /dev/asm-disk4 --migrate
asmcmd afd_label asmdisk05 /dev/asm-disk5 --migrate
asmcmd afd_label asmdisk06 /dev/asm-disk6 --migrate
asmcmd afd_label asmdisk07 /dev/asm-disk7 --migrate
asmcmd afd_label asmdisk08 /dev/asm-disk8 --migrate
asmcmd afd_label asmdisk09 /dev/asm-disk9 --migrate
asmcmd afd_label asmdisk10 /dev/asm-disk10 --migrate
asmcmd afd_label asmdisk11 /dev/asm-disk11 --migrate
asmcmd afd_label asmdisk12 /dev/asm-disk12 --migrate
asmcmd afd_label asmdisk13 /dev/asm-disk13 --migrate
 
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/asm-disk1
ASMDISK02                   ENABLED   /dev/asm-disk2
ASMDISK03                   ENABLED   /dev/asm-disk3
ASMDISK04                   ENABLED   /dev/asm-disk4
ASMDISK05                   ENABLED   /dev/asm-disk5
ASMDISK06                   ENABLED   /dev/asm-disk6
ASMDISK07                   ENABLED   /dev/asm-disk7
ASMDISK08                   ENABLED   /dev/asm-disk8
ASMDISK09                   ENABLED   /dev/asm-disk9
ASMDISK10                   ENABLED   /dev/asm-disk10
ASMDISK11                   ENABLED   /dev/asm-disk11
ASMDISK12                   ENABLED   /dev/asm-disk12
ASMDISK13                   ENABLED   /dev/asm-disk13
 
--On the other nodes no labeling is needed; a scan is enough, very much like the ASMLIB workflow.
[grid@dbserver2 ~]$ asmcmd afd_scan
Connected to an idle instance.
[grid@dbserver2 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK12                   ENABLED   /dev/asm-disk12
ASMDISK09                   ENABLED   /dev/asm-disk9
ASMDISK08                   ENABLED   /dev/asm-disk8
ASMDISK11                   ENABLED   /dev/asm-disk11
ASMDISK10                   ENABLED   /dev/asm-disk10
ASMDISK13                   ENABLED   /dev/asm-disk13
ASMDISK01                   ENABLED   /dev/asm-disk1
ASMDISK04                   ENABLED   /dev/asm-disk4
ASMDISK06                   ENABLED   /dev/asm-disk6
ASMDISK07                   ENABLED   /dev/asm-disk7
ASMDISK05                   ENABLED   /dev/asm-disk5
ASMDISK03                   ENABLED   /dev/asm-disk3
ASMDISK02                   ENABLED   /dev/asm-disk2
 
--Restart the Cluster
[root@dbserver1 ~]# crsctl start cluster -all
 
--The ASM alert log now shows the new names in use. The WARNING means that AFD does not yet support Advanced Format disks: a normal disk has 512-byte sectors, while Advanced Format uses 4K sectors.
2014-11-20 17:46:16.695000 +08:00
* allocate domain 1, invalid = TRUE
* instance 2 validates domain 1
NOTE: cache began mount (not first) of group CRSDG 1/0x508D0B98
NOTE: cache registered group DATADG 2/0x509D0B99
* allocate domain 2, invalid = TRUE
* instance 2 validates domain 2
NOTE: cache began mount (not first) of group DATADG 2/0x509D0B99
WARNING: Library 'AFD Library - Generic , version 3 (KABI_V3)' does not support advanced format disks
NOTE: Assigning number (1,0) to disk (AFD:ASMDISK01)
NOTE: Assigning number (1,1) to disk (AFD:ASMDISK02)
NOTE: Assigning number (1,2) to disk (AFD:ASMDISK03)
NOTE: Assigning number (1,3) to disk (AFD:ASMDISK08)
NOTE: Assigning number (1,4) to disk (AFD:ASMDISK09)
NOTE: Assigning number (1,5) to disk (AFD:ASMDISK10)
NOTE: Assigning number (2,0) to disk (AFD:ASMDISK04)
NOTE: Assigning number (2,1) to disk (AFD:ASMDISK05)
NOTE: Assigning number (2,2) to disk (AFD:ASMDISK06)
NOTE: Assigning number (2,3) to disk (AFD:ASMDISK07)
NOTE: Assigning number (2,4) to disk (AFD:ASMDISK11)
NOTE: Assigning number (2,5) to disk (AFD:ASMDISK12)
NOTE: Assigning number (2,6) to disk (AFD:ASMDISK13)
 
--Check the disk paths; everything is now in the AFD style.
[grid@dbserver1 ~]$ asmcmd lsdsk
Path
AFD:ASMDISK01
AFD:ASMDISK02
AFD:ASMDISK03
AFD:ASMDISK04
AFD:ASMDISK05
AFD:ASMDISK06
AFD:ASMDISK07
AFD:ASMDISK08
AFD:ASMDISK09
AFD:ASMDISK10
AFD:ASMDISK11
AFD:ASMDISK12
AFD:ASMDISK13
 
--But the data dictionary still shows the old disk paths.
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
 
NAME                 LABEL                           PATH
-------------------- ------------------------------- ---------------------------------------------
                                                     /dev/asm-disk7
                                                     /dev/asm-disk6
                                                     /dev/asm-disk13
                                                     /dev/asm-disk12
                                                     /dev/asm-disk11
                                                     /dev/asm-disk4
                                                     /dev/asm-disk2
                                                     /dev/asm-disk9
                                                     /dev/asm-disk3
                                                     /dev/asm-disk5
                                                     /dev/asm-disk10
                                                     /dev/asm-disk8
                                                     /dev/asm-disk1
CRSDG_0000           ASMDISK01                       AFD:ASMDISK01
CRSDG_0001           ASMDISK02                       AFD:ASMDISK02
CRSDG_0002           ASMDISK03                       AFD:ASMDISK03
DATADG_0000          ASMDISK04                       AFD:ASMDISK04
DATADG_0001          ASMDISK05                       AFD:ASMDISK05
DATADG_0002          ASMDISK06                       AFD:ASMDISK06
DATADG_0003          ASMDISK07                       AFD:ASMDISK07
CRSDG_0003           ASMDISK08                       AFD:ASMDISK08
CRSDG_0004           ASMDISK09                       AFD:ASMDISK09
CRSDG_0005           ASMDISK10                       AFD:ASMDISK10
DATADG_0004          ASMDISK11                       AFD:ASMDISK11
DATADG_0005          ASMDISK12                       AFD:ASMDISK12
DATADG_0006          ASMDISK13                       AFD:ASMDISK13
 
26 rows selected.
 
--需要将 ASM 磁盘发现路径(注意,这跟设置 AFD 磁盘发现路径不是一个命令)中原先的路径去除,只保留 AFD 路径。
[grid@dbserver1 ~]$ asmcmd dsset 'AFD:*'
[grid@dbserver1 ~]$ asmcmd dsget
parameter:AFD:*
profile:AFD:*
 
--再次重启 ASM,一切正常了。
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
 
NAME                 LABEL                           PATH
-------------------- ------------------------------- -------------------------------------------------------
CRSDG_0000           ASMDISK01                       AFD:ASMDISK01
CRSDG_0001           ASMDISK02                       AFD:ASMDISK02
CRSDG_0002           ASMDISK03                       AFD:ASMDISK03
DATADG_0000          ASMDISK04                       AFD:ASMDISK04
DATADG_0001          ASMDISK05                       AFD:ASMDISK05
DATADG_0002          ASMDISK06                       AFD:ASMDISK06
DATADG_0003          ASMDISK07                       AFD:ASMDISK07
CRSDG_0003           ASMDISK08                       AFD:ASMDISK08
CRSDG_0004           ASMDISK09                       AFD:ASMDISK09
CRSDG_0005           ASMDISK10                       AFD:ASMDISK10
DATADG_0004          ASMDISK11                       AFD:ASMDISK11
DATADG_0005          ASMDISK12                       AFD:ASMDISK12
DATADG_0006          ASMDISK13                       AFD:ASMDISK13
 
13 rows selected.
 
--收尾工作,将原先的 udev rules 文件移除。当然,这要在所有节点中都运行。以后如果服务器再次重启,AFD 就会完全接管了。
[root@dbserver1 ~]# mv /etc/udev/rules.d/99-oracle-asmdevices.rules ~oracle/

还有什么发现?

其实,AFD 也在使用 udev。囧。

[grid@dbserver1 ~]$ cat /etc/udev/rules.d/53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmdba", MODE="0660"

Label 过后的磁盘在 /dev/oracleafd/disks 目录中可以找到。

[root@dbserver2 disks]# ls -l /dev/oracleafd/disks
total 52
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK01
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK02
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK03
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK04
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK05
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK06
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK07
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK08
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK09
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK10
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK11
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK12
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK13

这里有一个很大的不同:所有磁盘的属主变成了 root,并且只有 root 才有写入权限。很多文章认为这就是 AFD 的 Filter 功能的体现,因为现在用 oracle 或者 grid 用户都没有办法直接对 ASM 磁盘进行写入操作,自然就获得了一层保护。比如以下命令会直接报权限不足。

[oracle@dbserver1 disks]$ echo "do some evil" > ASMDISK99
-bash: ASMDISK99: Permission denied

但是如果你认为这就是 AFD 的全部保护功能,那就太小看 Oracle 了,仅仅是这样也对不起名字中的 Filter 字样。且看后面分解。

操作系统中也可以看到 AFD 磁盘和底层磁盘的对应关系。

[grid@dbserver1 /]$ ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK01 -> ../../sdc
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK02 -> ../../sdd
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK03 -> ../../sde
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK04 -> ../../sdf
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK05 -> ../../sdg
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK06 -> ../../sdh
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK07 -> ../../sdi
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK08 -> ../../sdj
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK09 -> ../../sdk
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK10 -> ../../sdl
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK11 -> ../../sdm
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK12 -> ../../sdn
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK13 -> ../../sdo

再次重启服务器以后,afd_lsdsk 的结果中显示的路径都已经变为底层磁盘,但是 Filtering 却变成了 DISABLED。不要在意这里的 Label 和 Path 的对应关系与上面不一样:有些是在节点 1 中执行的结果,有些是在节点 2 中执行的结果。这本身也是 AFD 功能的展示,不管两台机器发现块设备的顺序是否一致,只要绑定了 AFD 的 Label,就没有问题。

ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                  DISABLED   /dev/sdd
ASMDISK02                  DISABLED   /dev/sde
ASMDISK03                  DISABLED   /dev/sdf
ASMDISK04                  DISABLED   /dev/sdg
ASMDISK05                  DISABLED   /dev/sdh
ASMDISK06                  DISABLED   /dev/sdi
ASMDISK07                  DISABLED   /dev/sdj
ASMDISK08                  DISABLED   /dev/sdk
ASMDISK09                  DISABLED   /dev/sdl
ASMDISK10                  DISABLED   /dev/sdm
ASMDISK11                  DISABLED   /dev/sdn
ASMDISK12                  DISABLED   /dev/sdo
ASMDISK13                  DISABLED   /dev/sdp
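顺着上一段的思路,可以把 afd_lsdsk 的输出解析成字典,在两个节点上分别运行后对比 Label 到设备的映射(示意代码,假设输出保持上述三列格式):

```python
def parse_afd_lsdsk(output):
    """解析 asmcmd afd_lsdsk 的文本输出,返回 {Label: (Filtering, Path)}。

    跳过表头与分隔线,只保留第三列以 /dev/ 开头的数据行。
    """
    disks = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2].startswith("/dev/"):
            label, filtering, path = parts
            disks[label] = (filtering, path)
    return disks
```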

最后,该来测试一下 I/O Filter 功能了吧,等好久了!

对,这才是重点。

先看一下如何启用或者禁用 Filter 功能。在我的测试中,单独设置某块盘启用还是禁用是不生效的,只能全局启用或者禁用。

[grid@dbserver1 ~]$ asmcmd help afd_filter
afd_filter
        Sets the AFD filtering mode on a given disk path.
        If the command is executed without specifying a disk path then
        filtering is set at node level.
 
Synopsis
        afd_filter {-e | -d } [<disk-path>]
 
Description
        The options for afd_filter are described below
 
        -e      -  enable  AFD filtering mode
        -d      -  disable AFD filtering mode
 
Examples
        The following example uses afd_filter to enable AFD filtering
        on a given diskpath.
 
        ASMCMD [+] >afd_filter -e /dev/sdq
 
See Also
       afd_lsdsk afd_state

启用 Filter 功能。

[grid@dbserver1 ~]$ asmcmd afd_filter -e
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/sdb
ASMDISK02                   ENABLED   /dev/sdc
ASMDISK03                   ENABLED   /dev/sdd
ASMDISK04                   ENABLED   /dev/sde
ASMDISK05                   ENABLED   /dev/sdf
ASMDISK06                   ENABLED   /dev/sdg
ASMDISK07                   ENABLED   /dev/sdh
ASMDISK08                   ENABLED   /dev/sdi
ASMDISK09                   ENABLED   /dev/sdj
ASMDISK10                   ENABLED   /dev/sdk
ASMDISK11                   ENABLED   /dev/sdl
ASMDISK12                   ENABLED   /dev/sdm
ASMDISK13                   ENABLED   /dev/sdn

以防万一,为了不破坏自己的实验环境,这里增加了一块磁盘来作测试。

[root@dbserver1 ~]# asmcmd afd_label asmdisk99 /dev/sdo
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/sdb
ASMDISK02                   ENABLED   /dev/sdc
ASMDISK03                   ENABLED   /dev/sdd
ASMDISK04                   ENABLED   /dev/sde
ASMDISK05                   ENABLED   /dev/sdf
ASMDISK06                   ENABLED   /dev/sdg
ASMDISK07                   ENABLED   /dev/sdh
ASMDISK08                   ENABLED   /dev/sdi
ASMDISK09                   ENABLED   /dev/sdj
ASMDISK10                   ENABLED   /dev/sdk
ASMDISK11                   ENABLED   /dev/sdl
ASMDISK12                   ENABLED   /dev/sdm
ASMDISK13                   ENABLED   /dev/sdn
ASMDISK99                   ENABLED   /dev/sdo

创建一个新的磁盘组。

[grid@dbserver1 ~]$ sqlplus / as sysasm
SQL> create diskgroup DGTEST external redundancy disk 'AFD:ASMDISK99';
 
Diskgroup created.

先用 kfed 读取一下磁盘头,验证一下确实无误。

[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:ORCLDISKASMDISK99 ; 0x000: length=17
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    961237833 ; 0x00c: 0x394b5349
kfdhdb.driver.reserved[2]:           57 ; 0x010: 0x00000039
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:               ASMDISK99 ; 0x028: length=9
kfdhdb.grpname:                  DGTEST ; 0x048: length=6
kfdhdb.fgname:                ASMDISK99 ; 0x068: length=9

直接使用 dd 尝试将整个磁盘清零。除了写满整个设备时必然出现的 No space left on device 之外,dd 命令本身没有任何错误返回。

[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 19.9602 s, 10.5 MB/s

之后重新 mount 磁盘组。如果磁盘真被清零,重新 mount 的时候一定会出现错误,而现在可以正常挂载。

SQL> alter diskgroup DGTEST dismount;

Diskgroup altered.

SQL> alter diskgroup DGTEST mount;

Diskgroup altered.

觉得不过瘾?那再创建一个表空间,插入一些数据,做一次 checkpoint,仍然一切正常。

SQL> create tablespace test datafile '+DGTEST' size 100M;

Tablespace created.

SQL> create table t_afd (n number) tablespace test;

Table created.

SQL> insert into t_afd values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> alter system checkpoint;

System altered.

SQL> select count(*) from t_afd;

  COUNT(*)
----------
         1

但是诡异的是,这时候在操作系统级别直接去读取 /dev/sdo 的内容,会显示全部都已经被清空为 0 了。

[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0000400

使用 strings 命令也完全看不到任何有意义的字符。

[root@dbserver1 disks]# strings /dev/sdo
[root@dbserver1 disks]#

但是,千万不要被这样的假象迷惑,以为磁盘真的被清空了。在 dd 的时候,/var/log/messages 会产生大量日志,明确表示这些对 ASM 管理设备的写 IO 操作不被支持,这才是 Filter 真正起作用的地方。

afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=224)  not supported
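如果想从日志里批量找出这类被拦截的 IO,可以用一段简单的 Python 做匹配(示意代码,日志格式以上面观察到的这一行为准):

```python
import re

# 摘自上面 /var/log/messages 中 AFD 拒绝 IO 时的典型日志行
AFD_DENY = re.compile(
    r"afd_mkrequest_fn: (\w+) IO on ASM managed device "
    r"\(major=(\d+)/minor=(\d+)\)\s+not supported")

def find_afd_denials(log_text):
    """从日志文本中提取被 AFD 过滤掉的 IO,返回 (操作, major, minor) 列表。"""
    return [(m.group(1), int(m.group(2)), int(m.group(3)))
            for m in AFD_DENY.finditer(log_text)]
```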

使用 kfed,仍然可以读取到正常的信息。

[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:ORCLDISKASMDISK99 ; 0x000: length=17
......

直到重新启动服务器,所有的数据才又回来了(仅重新启动 ASM 或重新启动 Cluster 时,操作系统层面看到的仍然是清零后的数据)。目前还不确定 Oracle 使用了怎样的重定向技术实现这样的神奇效果。

[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000 001 202 001 001  \0  \0  \0  \0  \0  \0  \0 200   u 177   D   I
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000040   O   R   C   L   D   I   S   K   A   S   M   D   I   S   K   9
0000060   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000100  \0  \0 020  \n  \0  \0 001 003   A   S   M   D   I   S   K   9
0000120   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000140  \0  \0  \0  \0  \0  \0  \0  \0   D   G   T   E   S   T  \0  \0
0000160  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000200  \0  \0  \0  \0  \0  \0  \0  \0   A   S   M   D   I   S   K   9
0000220   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000240  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0000300  \0  \0  \0  \0  \0  \0  \0  \0 022 257 367 001  \0   X  \0 247
0000320 022 257 367 001  \0   h 036 344  \0 002  \0 020  \0  \0 020  \0
0000340 200 274 001  \0 310  \0  \0  \0 002  \0  \0  \0 001  \0  \0  \0
0000360 002  \0  \0  \0 002  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000400
[root@dbserver1 ~]# 
[root@dbserver1 ~]# strings /dev/sdo | grep ASM
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMPARAMETERFILE
ASMPARAMETERBAKFILE
ASM_STALE

最后将 Filter 禁用之后再测试。

[root@dbserver1 ~]# asmcmd afd_filter -d
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                  DISABLED   /dev/sdb
ASMDISK02                  DISABLED   /dev/sdc
ASMDISK03                  DISABLED   /dev/sdd
ASMDISK04                  DISABLED   /dev/sde
ASMDISK05                  DISABLED   /dev/sdf
ASMDISK06                  DISABLED   /dev/sdg
ASMDISK07                  DISABLED   /dev/sdh
ASMDISK08                  DISABLED   /dev/sdi
ASMDISK09                  DISABLED   /dev/sdj
ASMDISK10                  DISABLED   /dev/sdk
ASMDISK11                  DISABLED   /dev/sdl
ASMDISK12                  DISABLED   /dev/sdm
ASMDISK13                  DISABLED   /dev/sdn
ASMDISK99                  DISABLED   /dev/sdo

同样使用 dd 命令清零整个磁盘。

[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 4.46444 s, 47.0 MB/s

重新 mount 磁盘组,如期报错,磁盘组无法加载。

SQL> alter diskgroup DGTEST dismount;
 
Diskgroup altered.
 
SQL> alter diskgroup DGTEST mount;
alter diskgroup DGTEST mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DGTEST" cannot be mounted
ORA-15040: diskgroup is incomplete

重新启动数据库,也会发现由于表空间的数据文件无法访问,数据库无法正常 Open。

SQL> startup
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size                  2929936 bytes
Variable Size             385878768 bytes
Database Buffers          226492416 bytes
Redo Buffers                5455872 bytes
In-Memory Area            218103808 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 15 - see DBWR trace file
ORA-01110: data file 15: '+DGTEST/CDB12C/DATAFILE/test.256.864163075'

 

 
