[Troubleshooting] Queue Waits: the enq: IV - contention Case


1.1  Blog Document Structure Diagram

[Figure: blog document structure diagram]

1.2  Preface

1.2.1  Reading Guide and Notes

Dear technical enthusiasts: after reading this article you will have mastered the following skill, and you may also pick up some other knowledge along the way, ~O(∩_∩)O~

① Queue wait: the enq: IV - contention case (the focus of this article)

Tips

① This article is published simultaneously on ITPub (http://blog.itpub.net/26736162), CNBlogs (http://www.cnblogs.com/lhrbest), and the WeChat public account (xiaomaimiaolhr).

② All code, related software, and reference material used in this article, as well as its PDF version, can be downloaded from Xiaomaimiao's cloud drive; the address is listed at: http://blog.itpub.net/26736162/viewspace-1624453/

③ If the code formatting in the web version is garbled, download the PDF version to read instead.

④ In this blog, code output is generally placed in a one-row, one-column table.

If there are any errors or omissions in this article, please do point them out, either in the ITPub comments or via QQ; your corrections are my greatest motivation for writing.

 

 

1.3  Fault Analysis and Resolution

 

1.3.1  Database Environment

 

Item                    Source DB
----------------------  ----------------------------------------------
DB type                 RAC
DB version              12.1.0.2.0
DB storage              ASM
OS version and kernel   SuSE Linux Enterprise Server (SLES 11), 64-bit

 

1.3.2  AWR Analysis

[Figure: AWR report summary, including the Up Time(hrs) column for each instance]

Let us briefly look at Up Time(hrs); the other metrics in this table should already be familiar. "Up Time(hrs)" is the number of hours from instance startup to the end of the snapshot. For example, in this AWR report instance 1 started at 2016-08-11 20:51 and the snapshot ended at 2016-12-14 21:00, so the uptime is about 125.006 days, which converts to about 3000.14 hours, as shown below:

SYS@lhrdb> SELECT trunc(UP_TIME_D,3),  trunc(trunc(UP_TIME_D,3)*24,2) UP_TIME_HRS FROM (SELECT (TO_DATE('2016-12-14 21:00', 'YYYY-MM-DD HH24:MI') - TO_DATE('2016-08-11 20:51', 'YYYY-MM-DD HH24:MI'))  UP_TIME_D FROM DUAL);

 

TRUNC(UP_TIME_D,3) UP_TIME_HRS

------------------ -----------

           125.006     3000.14
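Equivalently, the current uptime can be read straight from V$INSTANCE instead of hard-coding the two timestamps; a minimal sketch (STARTUP_TIME is a DATE, so subtracting it from SYSDATE yields days):

SELECT STARTUP_TIME,
       TRUNC(SYSDATE - STARTUP_TIME, 3) UP_TIME_D,
       TRUNC((SYSDATE - STARTUP_TIME) * 24, 2) UP_TIME_HRS
  FROM V$INSTANCE;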

 

The load on node 1 is clearly heavier, and ADDM shows quite a few unusual wait events. Next, look at the wait-event section:

[Figure: AWR Top 10 Foreground Events, led by enq: IV - contention and DFS lock handle]

The enq: IV - contention and DFS lock handle waits are the most severe. One caveat about the name "enq: IV - contention": in the AWR report there is a single space on each side of the "-", whereas in the database the event name is recorded with two spaces after the "-", as shown below:

[Figure: the event name as recorded in the database, with two spaces after the "-"]

So take care when searching for the event by name.

[Figure: search results for the event, illustrating the spacing difference]
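To avoid missing rows because of that single-space/double-space difference, the search can be made whitespace-tolerant; a small sketch using REGEXP_LIKE (the pattern allows any number of spaces between "-" and "contention"):

SELECT NAME, WAIT_CLASS
  FROM V$EVENT_NAME
 WHERE REGEXP_LIKE(NAME, 'enq: IV - *contention');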

The lock type and mode can be decoded from the P1 value recorded in ASH:

SYS@lhrdb> SELECT CHR(BITAND(1230372867, -16777216) / 16777215) ||

  2         CHR(BITAND(1230372867, 16711680) / 65535) "LOCK",

  3         BITAND(1230372867, 65535) "MODE"

  4    FROM DUAL;

 

LO       MODE

-- ----------

IV          3
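The decoding above relies on how enqueue waits pack P1: the two high bytes hold the ASCII codes of the two-character lock type ("I" and "V" here) and the low 16 bits hold the requested mode. This is easy to double-check by printing P1 in hex; a quick sketch:

SELECT TO_CHAR(1230372867, 'FMXXXXXXXX') P1_HEX FROM DUAL;
-- 49560003: CHR(0x49) = 'I', CHR(0x56) = 'V', and 0x0003 = mode 3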

 

 

1.3.3  Resolving enq: IV -  contention

    SELECT *

      FROM V$EVENT_NAME A

     WHERE A.NAME LIKE '%enq: IV -  contention%';

[Figure: V$EVENT_NAME row for enq: IV -  contention]
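With the exact event name (note the two spaces) in hand, a quick way to gauge how many sessions are currently waiting on it per instance is to query GV$SESSION; a sketch:

SELECT INST_ID, COUNT(*) SESSIONS_WAITING
  FROM GV$SESSION
 WHERE EVENT = 'enq: IV -  contention'
 GROUP BY INST_ID;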

This wait event is specific to 12c; compared with 11g, 12c added roughly 500 new wait events. The issue is covered in the MOS note "12c RAC DDL Performance Issue: High "enq: IV - contention" etc if CPU Count is Different (Doc ID 2028503.1)":

[Figure: excerpt from MOS Doc ID 2028503.1]

The fix will be included in future PSU, patch exists for certain platform/version.

The workaround is to set the following parameter to the highest value in the cluster and restart:

_ges_server_processes

To find out the highest value, run the following grep on each node:

ps -ef| grep lmd
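For illustration only, a sketch of applying this workaround from inside the database; the LMD count per instance can be read from GV$PROCESS (equivalent to the ps/grep check), and the value 3 below is an assumed example taken from whichever node reports the most LMD processes. Hidden parameters should only be changed under Oracle Support's guidance:

-- Count LMD processes on each instance (same information as ps -ef | grep lmd).
SELECT INST_ID, COUNT(*) LMD_COUNT
  FROM GV$PROCESS
 WHERE PNAME LIKE 'LMD%'
 GROUP BY INST_ID;

-- Set the parameter to the highest count seen (3 is an assumed example), then restart.
ALTER SYSTEM SET "_ges_server_processes"=3 SCOPE=SPFILE SID='*';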

 

The root cause of this wait event is that the cpu_count values on the two nodes of the 12c RAC are inconsistent.

The AWR report shows that node 1 has 48 CPUs while node 2 has 96:

[Figure: AWR report showing 48 CPUs on node 1 and 96 CPUs on node 2]

dba_hist_parameter中可以看到CPU_COUNT这个参数的变化历史:

 

SQL> SHOW PARAMETER CPU  

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

cpu_count                            integer     96

parallel_threads_per_cpu             integer     2

resource_manager_cpu_allocation      integer     96

 

 

SQL>  select snap_id, INSTANCE_NUMBER,PARAMETER_NAME,VALUE from dba_hist_parameter where PARAMETER_NAME='cpu_count' order by snap_id;

 

   SNAP_ID INSTANCE_NUMBER PARAMETER_NAME                                                   VALUE

---------- --------------- ---------------------------------------------------------------- ------

...

      3368               1 cpu_count                                                        48

      3369               1 cpu_count                                                        48

      3369               2 cpu_count                                                        48

      3370               1 cpu_count                                                        48

      3371               1 cpu_count                                                        48

      3372               1 cpu_count                                                        48

      3373               1 cpu_count                                                        48

      3374               1 cpu_count                                                        48

      3375               2 cpu_count                                                        96

      3375               1 cpu_count                                                        48

      3376               1 cpu_count                                                        48

      3376               2 cpu_count                                                        96

      3377               1 cpu_count                                                        48

      3377               2 cpu_count                                                        96

      3378               2 cpu_count                                                        96

      3378               1 cpu_count                                                        48

      3379               1 cpu_count                                                        48

      3379               2 cpu_count                                                        96

...
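To pin down when the value changed in wall-clock time, the parameter history can be joined to DBA_HIST_SNAPSHOT; a sketch:

SELECT P.SNAP_ID, P.INSTANCE_NUMBER, S.END_INTERVAL_TIME, P.VALUE
  FROM DBA_HIST_PARAMETER P
  JOIN DBA_HIST_SNAPSHOT S
    ON  S.SNAP_ID = P.SNAP_ID
    AND S.DBID = P.DBID
    AND S.INSTANCE_NUMBER = P.INSTANCE_NUMBER
 WHERE P.PARAMETER_NAME = 'cpu_count'
 ORDER BY P.SNAP_ID, P.INSTANCE_NUMBER;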

 

 

 

The CPU count changes can also be spotted by searching the alert log: more alert*|grep -i Cpu

Asking the customer revealed that they had reallocated the system's CPU resources, which caused CPU_COUNT on node 2 to change automatically and triggered the enq: IV -  contention waits.

If the host's CPU count changes, the database's cpu_count parameter changes with it after the host restarts; cpu_count is an operating-system-dependent parameter.

After the host CPU counts were adjusted back to match, the wait event disappeared.
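If rebalancing the physical CPUs were not an option, an alternative (not what was done in this case, just a sketch) would be to pin CPU_COUNT explicitly to the same value on every node, so that host-level changes no longer propagate into the instances:

-- 48 is an assumed example value; use one identical value on all nodes.
ALTER SYSTEM SET CPU_COUNT=48 SCOPE=BOTH SID='*';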

 

1.4  MOS

1.4.1  12c RAC DDL Performance Issue: High "enq: IV - contention" etc if CPU Count is Different (Doc ID 2028503.1)



In this Document


Symptoms

Cause

Solution

References



APPLIES TO:

Information in this document applies to any platform.

SYMPTOMS

12c RAC database seeing high "enq: IV - contention":


From the AWR report:

Top 10 Foreground Events by Total Wait Time
===========================================

Event                  Waits   Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
---------------------  ------  ---------------------  ------------  ---------  -----------
enq: IV - contention   52,914                 1688.4         31.91       42.8  Other
row cache lock         44,865                  896.8         19.99       22.7  Concurrency

 

A tkprof of a 10046 trace of the SQL statement shows the same event at the top:

Event waited on                           Times Waited  Max. Wait  Total Waited
----------------------------------------  ------------  ---------  ------------
enq: IV - contention                              6017       0.32        287.68
row cache lock                                     957       0.20          7.48
library cache lock                                 631       0.13         15.10
library cache pin                                  616       0.11         14.54

 

 

CAUSE

Cluster nodes have different CPU counts, resulting in different numbers of LMD processes: one node has two while the other has three.

The issue is due to the following bug:

BUG 21293056 - PERFORMANCE DEGRADATION OF GRANT STATEMENT AFTER 12C UPGRADE

Which is closed as duplicate of:

BUG 21395269 - ASYMMETRICAL LMDS CONFIGURATION IN A CLUSTER LEADS TO POOR MESSAGE TRANSFER

 

 

SOLUTION

The fix will be included in future PSU, patch exists for certain platform/version.

The workaround is to set the following parameter to the highest value in the cluster and restart:

_ges_server_processes

To find out the highest value, run the following grep on each node:

ps -ef| grep lmd

 






This article is reproduced from lhrbest's 51CTO blog. Original link: http://blog.51cto.com/lhrbest/1885815. For reprint permission, please contact the original author.


