Oracle 19c RAC Installation: Essential Configuration Items

Hello again, friends. Today we'll walk through the essential configuration items for an Oracle 19c RAC installation on the Linux 7 platform.


Environment:

Operating system: Red Hat 7.6

Database version: 19.7

RAC: Yes


[Firewall and SELinux]


Why disable the firewall?

In a past installation, the firewall caused root.sh to fail on the second RAC node with the message "Oracle CRS stack final check failed". A firewall can also cause other performance and stability problems, such as spurious node evictions and IPC send/receive timeouts, so Oracle strongly recommends disabling it on database servers.


Disable the firewall:

-- Check the firewall status
systemctl status firewalld

-- Check whether the firewall service starts at boot
systemctl is-enabled firewalld

-- Stop the firewall
systemctl stop firewalld
systemctl status firewalld

-- Disable the firewall (so the service does not start at boot)
systemctl disable firewalld
systemctl is-enabled firewalld


A full definition of SELinux is beyond the scope of this post; its main purpose is to minimize the resources a service process can access (the principle of least privilege). Because it can trigger database bugs (for example Bug 9746474 on 11gR2), disabling SELinux is recommended.


How to disable SELinux:

Edit /etc/selinux/config and set SELINUX to disabled.
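A minimal sketch of the change (the sed command is just one way to apply it; setenforce 0 only switches the running system to permissive mode, and the disabled setting takes full effect at the next reboot):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0      # permissive immediately; 'disabled' applies after reboot
getenforce        # confirm the current mode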


[Linux 7 systemd Services]


To prevent the Linux 7 systemd tmpfiles cleanup from deleting /var/tmp/.oracle and thereby breaking CRS, add the following entries to /usr/lib/tmpfiles.d/tmp.conf (skip any that already exist); Doc ID 28650460.8 describes this in detail:

x /tmp/.oracle*
x /var/tmp/.oracle*
x /usr/tmp/.oracle*
x /var/tmp/.oracle
x /tmp/.oracle
x /usr/tmp/.oracle
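To confirm the entries are honored, one possible check after Grid Infrastructure is up is to trigger a cleanup pass manually and verify that the socket directories survive:

systemd-tmpfiles --clean                  # run the same cleanup the systemd timer performs
ls -ld /var/tmp/.oracle /tmp/.oracle      # the directories should still exist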


[UDEV]


When ASMLib or DM-Multipath alone cannot meet the requirements, udev can be used to identify the ASM disks. Referencing devices directly as /dev/sdd can lead to unstable device names and permissions, so after configuring the rules the host must be rebooted once.


The rules file is:

/etc/udev/rules.d/99-oracle-asmdevices.rules

Contents (matching on the SCSI ID; obtain each disk's ID first, for example):

/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="369835ed100b92182002322d90000000e", SYMLINK+="asmdiskdata40", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="369835ed100b9218200251cdc0000002c", SYMLINK+="asmdiskarch01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="369835ed100b9218200252c800000002d", SYMLINK+="asmdiskarch02", OWNER="grid", GROUP="asmadmin", MODE="0660"

Reload and apply the rules (start_udev no longer exists on RHEL 7; use udevadm instead):

udevadm control --reload-rules
udevadm trigger --type=devices --action=change
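A quick way to check that the rules take effect (sdd and the asmdisk* names come from the example rules above):

udevadm test /sys/block/sdd 2>&1 | grep -i asmdisk   # dry-run the rules against one disk
ls -lL /dev/asmdisk*                                  # -L follows the symlinks; targets should be grid:asmadmin, mode 0660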


Using fixed device names:

The rule must be pinned to a specific device such as /dev/sdd; a wildcard (*) cannot be used here. (Note that on RHEL 7's systemd-based udev, NAME= is ignored for block devices, since only network interfaces can be renamed, so the SYMLINK+= form above is generally preferred.)

KERNEL=="sdd", SUBSYSTEM=="block", NAME="asmdsk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sde", SUBSYSTEM=="block", NAME="asmdsk2", OWNER="grid", GROUP="asmadmin", MODE="0660"

udevadm control --reload-rules
udevadm trigger --type=devices --action=change


Using multipath:

multipath -ll
disk04 (360060e8005bf7b000000bf7b00004209) dm-5 HP,OPEN-V
size=123G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:4  sdg  8:96   active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 4:0:1:4  sdt  65:48  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 5:0:0:4  sdag 66:0   active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:1:4  sdat 66:208 active ready running

vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="dm-*", PROGRAM=="/usr/lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="360060e8005bf7b000000bf7b00004206", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", PROGRAM=="/usr/lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="360060e8005bf7b000000bf7b00004207", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", PROGRAM=="/usr/lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="360060e8005bf7b000000bf7b00004208", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

Confirm that the WWID corresponding to /dev/mapper/mpathb is the same on every node.
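One possible way to compare (mpathb here is just a placeholder alias; run on every node and compare the values):

multipath -ll mpathb              # the WWID is shown in parentheses on the first line
cat /etc/multipath/bindings       # alias-to-WWID map when user_friendly_names is enabled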

service multipathd restart
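After the restart, a quick ownership check (dm-5 corresponds to the disk04 map in the multipath -ll output above):

ls -l /dev/dm-*                   # devices matched by the rules should show grid:asmadmin and mode 0660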


[HugePage]


HugePages were introduced in the Linux 2.6 kernel and offer a choice between the standard 4 KB page and much larger pages. When memory is accessed, the kernel first consults the page table and then uses that mapping to reach physical memory (RAM plus swap). To speed this up, the CPU keeps a fixed-size buffer called the TLB that caches part of the page table, following the principle of keeping data as close to the CPU as possible; hugepages are tracked there through hugetlb entries. The allocated hugepages are exposed to processes through the hugetlbfs in-memory filesystem (similar to tmpfs).


Hugepages are allocated and reserved at system startup and are never paged in or out unless the configuration is changed manually. The hugepage size depends on the kernel version and hardware architecture, and the number of pages to reserve should be derived from the database SGA size. Because each page is much larger, the pressure on the TLB and the page tables is greatly reduced. HugePages are recommended on servers with large memory, but AMM must be disabled and, on RHEL 7 / SLES 11, Transparent HugePages must be disabled as well. As a rough illustration, an 8 GB SGA with the default 2 MB hugepage size needs at least 8 * 1024 / 2 = 4096 pages plus a small margin; the script below derives the exact value from the running shared memory segments.


Calculate and set the hugepage parameter.

The hugepages_settings.sh script is as follows:





#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
# on Oracle Linux
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
#
# This script is provided by Doc ID 401749.1 from My Oracle Support
# http://support.oracle.com

# Welcome text
echo "
This script is provided by Doc ID 401749.1 from My Oracle Support
(http://support.oracle.com) where it is intended to compute values for
the recommended HugePages/HugeTLB configuration for the current shared
memory segments on Oracle Linux. Before proceeding with the execution please note following:
 * For ASM instance, it needs to configure ASMM instead of AMM.
 * The 'pga_aggregate_target' is outside the SGA and
   you should accommodate this while calculating the overall size.
 * In case you changes the DB SGA size,
   as the new SGA will not fit in the previous HugePages configuration,
   it had better disable the whole HugePages,
   start the DB with new SGA size and run the script again.
And make sure that:
 * Oracle Database instance(s) are up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not setup
   (See Doc ID 749851.1)
 * The shared memory segments can be listed by command:
     # ipcs -m

Press Enter to proceed..."

read

# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`

# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
if [ -z "$HPG_SZ" ]; then
    echo "The hugepages may not be supported in the system where the script is being executed."
    exit 1
fi

# Initialize the counter
NUM_PG=0

# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | cut -c44-300 | awk '{print $1}' | grep "[0-9][0-9]*"`
do
    MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
    if [ $MIN_PG -gt 0 ]; then
        NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
    fi
done

RES_BYTES=`echo "$NUM_PG * $HPG_SZ * 1024" | bc -q`

# An SGA less than 100MB does not make sense
# Bail out if that is the case
if [ $RES_BYTES -lt 100000000 ]; then
    echo "***********"
    echo "** ERROR **"
    echo "***********"
    echo "Sorry! There are not enough total of shared memory segments allocated for
HugePages configuration. HugePages can only be used for shared memory segments
that you can list by command:
    # ipcs -m
of a size that can match an Oracle Database SGA. Please make sure that:
 * Oracle Database instance is up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not configured"
    exit 1
fi

# Finish with results
case $KERN in
    '2.4')  HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
            echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
    '2.6')  echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    '3.8')  echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    '3.10') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    '4.1')  echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    '4.14') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *) echo "Kernel version $KERN is not supported by this script (yet). Exiting." ;;
esac

# End








The script comes from MOS note 401749.1. All database instances must be up when it is run, and AMM must be disabled, i.e. memory_target = 0.
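A hedged sketch of checking this on each instance before running the script (the commented-out statements are only illustrative; sizing must match your system):

sqlplus -s / as sysdba <<'EOF'
show parameter memory_target
show parameter sga_target
-- If memory_target is non-zero, switch to ASMM first, for example:
-- alter system set memory_target=0 scope=spfile sid='*';
-- alter system set sga_target=8g scope=spfile sid='*';
EOF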


Run the script to calculate:

./hugepages_settings.sh
...
Recommended setting: vm.nr_hugepages = 1496


Set the parameter:

vi /etc/sysctl.conf
vm.nr_hugepages = 1496
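The value can also be loaded without a reboot, although contiguous memory may not be available on a long-running system, so the reboot described below remains the safer route:

sysctl -p                            # apply /etc/sysctl.conf
grep HugePages_Total /proc/meminfo   # confirm how many pages were actually reserved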


Reboot and verify

Adjust the database parameter:

use_large_pages=only
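A hedged sketch of setting it on all RAC instances (assuming an spfile is in use; the parameter is static and takes effect at the next instance startup):

sqlplus -s / as sysdba <<'EOF'
alter system set use_large_pages='ONLY' scope=spfile sid='*';
EOF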

Reboot the host with shutdown -r now (so that the disabled-THP setting and the nr_hugepages parameter take effect).


Start the database and verify that hugepages are in use:

# grep HugePages /proc/meminfo

AnonHugePages:     0 kB

HugePages_Total:   1496

HugePages_Free:     485

HugePages_Rsvd:     446

HugePages_Surp:       0


[Disabling Transparent HugePages (THP)]


Transparent HugePages were introduced and enabled by default starting with RHEL 6, OL 6, SLES 11 and the UEK2 kernel as an attempt to improve memory management. They are similar to the HugePages available in earlier Linux releases; the main difference is that Transparent HugePages are set up dynamically at run time by the khugepaged kernel thread, whereas regular HugePages must be preallocated at boot.


However, because Transparent HugePages can cause unexpected node reboots and RAC performance problems, Oracle strongly recommends disabling them. Even in single-instance environments they can cause unexpected performance problems or latency, so Oracle recommends disabling Transparent HugePages on all servers running Oracle databases.


Taking RHEL 7 as an example:

Edit /etc/default/grub and append transparent_hugepage=never to the end of the GRUB_CMDLINE_LINUX line.

vi /etc/default/grub

GRUB_TIMEOUT=5

GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"

GRUB_DEFAULT=saved

GRUB_DISABLE_SUBMENU=true

GRUB_TERMINAL_OUTPUT="console"

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_sys/lv_root rd.lvm.lv=vg_sys/lv_swap rhgb quiet transparent_hugepage=never"

GRUB_DISABLE_RECOVERY="true"


Run: grub2-mkconfig -o /boot/grub2/grub.cfg


Verify after the reboot (the reboot can also be deferred until other system parameters have been adjusted):

cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]    -- [never] being selected means THP is disabled
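If the reboot cannot happen immediately, THP can also be switched off at run time (this does not persist across reboots, so the grub change above is still required):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag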


That wraps up this round of 19c configuration notes. See you next time.

