Subpools in Oracle Memory Allocation -- ORA-04031
In Oracle 9i and later, the shared pool can be divided into subpools. Each subpool is a miniature shared pool with its own free lists, memory structure entries, and LRU list. This is a scalability change for the shared pool and large pool: each subpool is now protected by its own child latch, which increases the throughput of these pools. This means there is no longer contention on the single shared pool and large pool latches of earlier releases. The shared pool reserved area is also divided evenly across the subpools.
When you hit an ORA-04031 error, the trace shows which subpool the error occurred in.
For example:
ORA-04031: unable to allocate 4192 bytes of shared memory ("shared pool","SELECT /*+ FIRST_ROWS */ * F...","sql area (6,0)","kafco : qkacol"):4031:375:2008:ocicon.c
In this example, the error occurred in subpool 6 (indicated by the "(6,0)" in "sql area (6,0)").
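Once the trace names a subpool, it is possible to look at what that subpool is holding. The following is only a hedged sketch: it reuses the x$ksmss view (columns ksmdsidx, ksmssnam, ksmsslen) that the case study later in this note queries, and requires SYS access.
-- Illustrative only: break down the memory of the subpool named in the trace (6 here)
SELECT ksmssnam, SUM(ksmsslen) AS bytes
  FROM x$ksmss
 WHERE ksmdsidx = 6 AND ksmsslen > 0
 GROUP BY ksmssnam
 ORDER BY bytes DESC;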
The drawback of using subpools is that in some cases an individual subpool can become over-utilized. Once a subpool has been selected, the search for a memory chunk can fail even though another subpool has suitable free memory. Starting with 10g there is functionality that allows a request to "switch" to another subpool when it cannot be satisfied in the chosen one, but this does not work for every memory structure and element.
Note: a small number of features use memory chunks across subpools; in other words, their memory is striped across multiple subpools.
This is rarely documented. In general, a memory request finds the chunk it needs in a "random" subpool, chosen in a round-robin fashion.
Unbalanced use of subpools can lead to ORA-04031. A common case is an allocation failure for the "session param values" memory structure. In 9i and later, memory must be allocated for every configured process to store dynamic parameters. At startup, one subpool is chosen to hold all of the "session param values" entries. If the PROCESSES parameter is set very high without a correspondingly high number of concurrent connections, this can lead to unnecessary permanent memory allocations in that subpool and may trigger ORA-04031. In general, though, the performance gain from having multiple shared pool latches outweighs the problems that over-utilizing a subpool may cause.
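A hedged way to check whether this structure has piled up in a single subpool is to sum its allocation per subpool in x$ksmss (the exact component name 'session param values' is an assumption here; use whatever name appears in your v$sgastat):
-- Illustrative check: "session param values" memory per subpool
SELECT ksmdsidx AS subpool, ROUND(SUM(ksmsslen)/1048576, 2) AS mb
  FROM x$ksmss
 WHERE ksmssnam = 'session param values'
 GROUP BY ksmdsidx
 ORDER BY ksmdsidx;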
Subpools are invisible to end users; they are hidden beneath the shared pool and large pool. Note: if the shared pool uses subpools, subpools are automatically created in the large pool as well, as long as LARGE_POOL_SIZE > 0.
References (this issue appears to be involved in quite a few bugs):
Bug 4184298
- Subpool imbalance for "session parameters"
How many subpools do I have by default?
There is a simple algorithm for calculating the number of subpools. First, a subpool must be at least 128MB in 9i and at least 256MB in 10g. Second, one subpool can be created for every 4 CPUs in the system, up to a maximum of 7 subpools. The number of subpools can be controlled explicitly with the initialization parameter _kghdsidx_count. There is no parameter that explicitly controls the size of each subpool.
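As a rough sketch of this algorithm (assuming a manually set SHARED_POOL_SIZE and the 9i minimum of 128MB per subpool; substitute 256MB or 512MB for later releases), the default subpool count can be estimated from CPU_COUNT and SHARED_POOL_SIZE:
-- Estimated default subpool count: the lower of "one per 4 CPUs, capped at 7"
-- and "shared pool size divided by the per-subpool minimum", but at least 1.
SELECT LEAST(7,
             FLOOR((TO_NUMBER(cpu.value) - 1) / 4) + 1,
             GREATEST(1, FLOOR(TO_NUMBER(sp.value) / (128 * 1024 * 1024)))) AS estimated_subpools
  FROM v$parameter cpu, v$parameter sp
 WHERE cpu.name = 'cpu_count'
   AND sp.name = 'shared_pool_size';
The worked examples below follow the same arithmetic.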
If someone configures a 12-CPU system on 9i with a 300MB shared pool, Oracle creates 2 subpools of 150MB each. If the shared pool is increased to 500MB, Oracle creates 3 subpools of roughly 166MB each.
Because a 128MB subpool (or even a 256MB one on 10g) can be too small for many applications, the memory in each subpool tends to need to grow. There is no parameter to change the minimum subpool size. The only options are to reduce the number of subpools while keeping the shared pool size fixed, or to increase the shared pool so that the subpools grow. Note that increasing the shared pool does not necessarily increase the subpool size, because on a system with many CPUs the number of subpools may grow as well.
Change in 10.2.0.3 and above: per bug 4994956, on 10g and later the algorithm grows each subpool to 512MB.
How should I control the number of subpools used?
The parameter _kghdsidx_count controls the number of subpools in use. Setting it to 1 makes the shared pool behave as it did in 8.1.7, i.e. as a single pool.
SQL> alter system set "_kghdsidx_count"=1 scope=spfile;
Or add this line to the pfile:
"_kghdsidx_count"=1
Note: subpools are created when the SGA is built during instance startup. In both of the examples above, the database must be restarted for the change in the number of subpools to take effect. Changing _kghdsidx_count also changes the number of subpools in the large pool.
Warning: reducing the number of subpools can have a significant performance impact, especially in RAC configurations, highly concurrent systems, or instances with very large pools. Changing this parameter affects the shared pool, the shared pool reserved area, and the large pool.
When setting the number of subpools manually via _kghdsidx_count, the recommended approach is to change it gradually and monitor the performance impact, so as to avoid any drastic effects.
Conversely, increasing the number of subpools without increasing the overall pool size can cause space problems, because each subpool becomes smaller.
As we know, starting with Oracle 9i the shared pool can be split into multiple subpools (SubPools) to improve concurrency and reduce contention.
Each SubPool of the shared pool can be seen as a mini shared pool with its own free lists, memory structures, and LRU list. Oracle also provides multiple latches to manage the individual subpools, avoiding contention on a single latch (the Shared Pool Reserved Area is split and managed in the same way). There can be at most 7 SubPools, and the shared pool latches grow correspondingly from one to up to 7. If the system has 4 or more CPUs and SHARED_POOL_SIZE is larger than 250MB, Oracle can split the shared pool into multiple SubPools; in Oracle 9i each SubPool is at least 128MB.
If you have seen messages like the following, you have probably run into a SubPool problem:
Tue Dec 11 17:14:49 2007
Errors in file /oracle/app/admin/ctais2/udump/ctais2_ora_778732.trc:
ORA-04031: unable to allocate 4216 bytes of shared memory
("shared pool","IDX_DJ_NSRXX_P_NSRMCCTAIS2","sga heap(2,0)","library cache")
ORA-04031: unable to allocate 4216 bytes of shared memory
("shared pool","IDX_DJ_NSRXX_P_NSRMCCTAIS2","sga heap(2,0)","library cache")
Tue Dec 11 17:14:51 2007
Errors in file /oracle/app/admin/ctais2/bdump/ctais2_pmon_393248.trc:
ORA-04031: unable to allocate 4216 bytes of shared memory
("shared pool","unknown object","sga heap(2,0)","library cache")
The structure of multiple subpools in Oracle 9i is shown in the figure.
The number of subpools is influenced by a newly introduced hidden parameter, _KGHDSIDX_COUNT. You can adjust this parameter manually (for research in a test environment only) and observe how shared pool management changes. Dump the shared pool in its default configuration and again after the change, then compare the results, using steps such as:
alter session set events 'immediate trace name heapdump level 2';
alter system set "_kghdsidx_count"=2 scope=spfile;
startup force;
alter session set events 'immediate trace name heapdump level 2';
Below is a summary of the output. Note that in the first trace file, sga heap(1,0) indicates that the shared pool contains only one subpool, whereas the second shows two subpools, sga heap(1,0) and sga heap(2,0):
[oracle@jumper udump]$ grep "sga heap" eygle_ora_25766.trc
HEAP DUMP heap name="sga heap" desc=0x5000002c
HEAP DUMP heap name="sga heap(1,0)" desc=0x5001ef0c
[oracle@jumper udump]$ grep "sga heap" eygle_ora_25786.trc
HEAP DUMP heap name="sga heap" desc=0x5000002c
HEAP DUMP heap name="sga heap(1,0)" desc=0x5001ef0c
HEAP DUMP heap name="sga heap(2,0)" desc=0x50023c04
The algorithm for allocating subpools is simple:
- each subpool must satisfy a minimum memory constraint;
- one subpool can be allocated for every 4 CPUs, with a maximum of 7 subpools.
In Oracle 9i each SubPool must be at least 128MB, while in Oracle 10g the minimum is 256MB. As mentioned earlier, the number of SubPools can be controlled with the _kghdsidx_count parameter, but there is no parameter that explicitly controls the SubPool size.
Based on these rules, on a 12-CPU system with a 300MB shared pool, Oracle 9i creates two SubPools of roughly 150MB each; if the shared pool grows to 500MB, Oracle creates 3 SubPools of roughly 166MB each.
Whether the minimum is 128MB in Oracle 9i or 256MB in Oracle 10g, SubPools of that size can be too small for many complex systems, and in such cases the SubPools may need to be enlarged. The SubPool size can be changed by controlling the shared pool size and the number of SubPools, as sketched below. Some bugs and internal testing suggested that 500MB SubPools give better performance, so starting with Oracle 11g each SubPool is at least 512MB.
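The two knobs mentioned above translate into a sketch like the following (the sizes are illustrative only, and both changes require an instance restart):
-- Illustrative only: grow the shared pool and pin the subpool count so that
-- each subpool ends up comfortably large.
alter system set shared_pool_size=1536M scope=spfile;
alter system set "_kghdsidx_count"=3 scope=spfile;
-- restart the instance for both settings to take effect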
Besides size control, Oracle 10g further refined shared pool management by subdividing each subpool. By default, Oracle 10g splits a single subpool into 4 sub-partitions (probably because one SubPool is normally allocated per 4 CPUs). Testing on Oracle 10gR2 with the same method as above:
alter session set events 'immediate trace name heapdump level 2';
alter system set "_kghdsidx_count"=2 scope=spfile;
startup force;
alter session set events 'immediate trace name heapdump level 2';
Analysing the resulting trace files: when there is only one subpool, the shared pool is divided into 4 sub-partitions, sga heap(1,0) through sga heap(1,3):
[oracle@eygle udump]$ grep "sga heap" eygle_ora_13577.trc
HEAP DUMP heap name="sga heap" desc=0x2000002c
HEAP DUMP heap name="sga heap(1,0)" desc=0x2001b550
HEAP DUMP heap name="sga heap(1,1)" desc=0x2001c188
HEAP DUMP heap name="sga heap(1,2)" desc=0x2001cdc0
HEAP DUMP heap name="sga heap(1,3)" desc=0x2001d9f8
When two subpools are used, the shared pool is divided into 8 sub-partitions, as follows:
[oracle@eygle udump]$ grep "sga heap" eygle_ora_13618.trc
HEAP DUMP heap name="sga heap" desc=0x2000002c
HEAP DUMP heap name="sga heap(1,0)" desc=0x2001b550
HEAP DUMP heap name="sga heap(1,1)" desc=0x2001c188
HEAP DUMP heap name="sga heap(1,2)" desc=0x2001cdc0
HEAP DUMP heap name="sga heap(1,3)" desc=0x2001d9f8
HEAP DUMP heap name="sga heap(2,0)" desc=0x20020640
HEAP DUMP heap name="sga heap(2,1)" desc=0x20021278
HEAP DUMP heap name="sga heap(2,2)" desc=0x20021eb0
HEAP DUMP heap name="sga heap(2,3)" desc=0x20022ae8
The structure of multiple subpools in Oracle 10g is shown in the figure.
The allocation of these subpools can be queried through an internal table, X$KGHLU ([K]ernel [G]eneric memory [H]eap manager State of [L]R[U] Of Unpinned Recreatable chunks); KGHLUIDX corresponds to the subpool index and KGHLUDUR to the duration:
SQL> select addr,indx,kghluidx,kghludur,kghluops,kghlurcr from x$kghlu;
ADDR INDX KGHLUIDX KGHLUDUR KGHLUOPS KGHLURCR
-------- ---- ---------- ---------- ---------- ----------
B5F4C5B4 0 2 3 12773 257
B5F4C1AC 1 2 2 43675 1042
B5F4D9C8 2 2 1 18831 1518
B5F4D5C0 3 2 0 0 0
B5F4D1B8 4 1 3 144697 327
B5F4E9E4 5 1 2 483428 1462
B5F4E5DC 6 1 1 6558 982
B5F4E1D4 7 1 0 0 0
8 rows selected.
Through this series of algorithmic improvements, shared pool management in Oracle has been steadily strengthened, and the performance problems of large shared pools have largely been solved; the latch contention and other issues that an over-sized shared pool could cause in Oracle 8i have, to some extent, been addressed. Starting with Oracle 10g, Oracle provides Automatic Shared Memory Management: with this feature users no longer need to set the shared memory parameters explicitly, as Oracle allocates and adjusts them automatically. Although this is a great convenience, understanding the mechanics behind the automation is still very important for understanding how Oracle works.
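As a minimal sketch of ASMM (the 4GB value is illustrative and assumes SGA_MAX_SIZE allows it), setting SGA_TARGET lets Oracle size the shared pool component automatically, and the result can be observed in v$sga_dynamic_components:
alter system set sga_target=4G scope=both;   -- illustrative size
SELECT component, ROUND(current_size/1048576) AS mb
  FROM v$sga_dynamic_components;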
Although the multiple-subpool design allows Oracle to manage a larger shared pool, splitting it into SubPools can also cause coordination problems between the partitions, and memory scattered across them can even lead to ORA-04031 errors. The most common problem is that a particular SubPool becomes over-used: when new processes keep being assigned to that SubPool, their memory requests may fail even though other SubPools may still have plenty of free space.
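A quick, hedged way to spot such an imbalance (using the same x$ksmss view as the case study below) is to compare free memory across the subpools:
-- Free memory per subpool; a large spread between subpools suggests imbalance
SELECT ksmdsidx AS subpool, ROUND(SUM(ksmsslen)/1048576, 2) AS free_mb
  FROM x$ksmss
 WHERE ksmssnam LIKE '%free memory%'
 GROUP BY ksmdsidx
 ORDER BY ksmdsidx;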
Because of these SubPool problems, starting with Oracle 10g a memory request is allowed to switch between SubPools, which increases the chance that it succeeds (but the switching obviously cannot be unlimited, so the problem can still occur).
The following is a real case from a customer system. On an Oracle 9i system that frequently hit ORA-04031 errors, the main configuration was as follows:
SQL> select * from v$version where rownum <2;
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
SQL> show parameter cpu_count
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 48
SQL> select * from v$sga;
NAME VALUE
------------------------------ ----------------
Fixed Size 762240
Variable Size 2600468480
Database Buffers 18975031296
Redo Buffers 6578176
Checking the parameter settings, the default of 7 subpools was in use:
SQL> select a.ksppinm, b.ksppstvl from x$ksppi a, x$ksppsv b
where a.indx = b.indx and a.ksppinm = '_kghdsidx_count';
KSPPINM KSPPSTVL
---------------------------------------------------------------- --------------------
_kghdsidx_count 7
All 7 subpools were in use; their latch activity was as follows:
SQL> select child#, gets from v$latch_children
where name = 'shared pool' order by child#;
CHILD# GETS
---------- ----------
1 333403016
2 355720323
3 273944301
4 197980497
5 282347697
6 354398593
7 468809111
Looking at the actual subpool usage and memory, note that each shared pool subpool was allocated roughly 320MB, about 2256MB of shared pool in total:
SELECT 'shared pool (' || NVL (DECODE (TO_CHAR (ksmdsidx), '0', '0 - Unused', ksmdsidx),'Total') || '):' subpool,
SUM (ksmsslen) BYTES, ROUND (SUM (ksmsslen) / 1048576, 2) mb
FROM x$ksmss WHERE ksmsslen > 0
GROUP BY ROLLUP (ksmdsidx) ORDER BY subpool ASC
/
SUBPOOL BYTES MB
------------------------------ ---------- ----------
shared pool (1): 352321536 336
shared pool (2): 335544320 320
shared pool (3): 335544320 320
shared pool (4): 335544320 320
shared pool (5): 335544320 320
shared pool (6): 335544320 320
shared pool (7): 335544320 320
shared pool (Total): 2365587456 2256
8 rows selected.
Going further, we can query the remaining free memory in each subpool. Note that each subpool has only about 7MB to 17MB free, and that free memory may itself consist of scattered fragments:
SELECT subpool, NAME, SUM (BYTES), ROUND (SUM (BYTES) / 1048576, 2) mb
FROM (SELECT 'shared pool (' || DECODE (TO_CHAR (ksmdsidx), '0', '0 - Unused', ksmdsidx)
|| '):' subpool, ksmssnam NAME, ksmsslen BYTES
FROM x$ksmss WHERE ksmsslen > 0 AND LOWER (ksmssnam) LIKE LOWER ('%free memory%'))
GROUP BY subpool, NAME ORDER BY subpool ASC, SUM (BYTES) DESC
/
SUBPOOL NAME SUM(BYTES) MB
------------------------------ ------------------------------ ---------- ----------
shared pool (1): free memory 8158640 7.78
shared pool (2): free memory 7414472 7.07
shared pool (3): free memory 7831608 7.47
shared pool (4): free memory 10690992 10.2
shared pool (5): free memory 17201856 16.4
shared pool (6): free memory 8239920 7.86
shared pool (7): free memory 13925416 13.28
The following query lists the free chunks in each subpool in detail. The output shows that every subpool has very few chunks larger than 10KB, which means that ORA-04031 errors can occur whenever a large chunk of shared memory is requested (note: R-free denotes free space in the reserved pool):
SQL> SELECT ksmchidx "SubPool", 'sga heap(' || ksmchidx || ',0)' sga_heap,
2 ksmchcom chunkcomment,
3 DECODE (ROUND (ksmchsiz / 1000),
4 0, '0-1K', 1, '1-2K',2, '2-3K',3, '3-4K',4, '4-5K', 5, '5-6k',
5 6, '6-7k', 7, '7-8k',8, '8-9k',9, '9-10k', '> 10K'
6 ) "size",
7 COUNT (*), ksmchcls status, SUM (ksmchsiz) BYTES
8 FROM x$ksmsp WHERE ksmchcom = 'free memory'
9 GROUP BY ksmchidx, ksmchcls, 'sga heap(' || ksmchidx || ',0)',ksmchcom, ksmchcls,
10 DECODE (ROUND (ksmchsiz / 1000),
11 0, '0-1K', 1, '1-2K',2, '2-3K',3, '3-4K',4, '4-5K', 5, '5-6k',
12 6, '6-7k', 7, '7-8k',8, '8-9k',9, '9-10k', '> 10K' );
SUBPOOL SGA_HEAP CHUNKCOMMENT size COUNT(*) STATUS BYTES
------- ------------------- ---------------- ----- ---------- -------- ----------
1 sga heap(1,0) free memory 0-1K 5173 free 922568
1 sga heap(1,0) free memory 1-2K 5422 free 5274920
.........
1 sga heap(1,0) free memory 6-7k 2 R-free 11968
1 sga heap(1,0) free memory 7-8k 9 R-free 62096
1 sga heap(1,0) free memory 8-9k 12 R-free 95480
1 sga heap(1,0) free memory 9-10k 11 R-free 99192
1 sga heap(1,0) free memory > 10K 25 R-free 434272
2 sga heap(2,0) free memory 0-1K 4919 free 848864
.......
2 sga heap(2,0) free memory 9-10k 5 R-free 46056
2 sga heap(2,0) free memory > 10K 43 R-free 769144
3 sga heap(3,0) free memory 0-1K 6921 free 1058264
......
3 sga heap(3,0) free memory 9-10k 9 R-free 81344
3 sga heap(3,0) free memory > 10K 64 R-free 1212424
4 sga heap(4,0) free memory 0-1K 6430 free 928688
.......
4 sga heap(4,0) free memory 9-10k 9 R-free 80464
4 sga heap(4,0) free memory > 10K 34 R-free 689640
5 sga heap(5,0) free memory 0-1K 4416 free 779096
......
5 sga heap(5,0) free memory 9-10k 4 R-free 36344
5 sga heap(5,0) free memory > 10K 40 R-free 1669384
6 sga heap(6,0) free memory 0-1K 6203 free 863104
......
6 sga heap(6,0) free memory 9-10k 11 R-free 99464
6 sga heap(6,0) free memory > 10K 56 R-free 1758912
7 sga heap(7,0) free memory 0-1K 3814 free 607616
......
7 sga heap(7,0) free memory 9-10k 6 R-free 54432
7 sga heap(7,0) free memory > 10K 52 R-free 2816480
120 rows selected.
In this situation we can reduce the number of shared pool subpools so that each one has enough free memory available. In this customer environment, _kghdsidx_count was first changed to 3, after which the ORA-04031 errors did not recur. After the adjustment each subpool grew to roughly 750MB:
SUBPOOL BYTES MB
------------------------------ ---------- ----------
shared pool (1): 788529152 752
shared pool (2): 788529192 752
shared pool (3): 771751936 736
shared pool (Total): 2348810280 2240
Each subpool now has roughly 20MB to 65MB of free memory:
SUBPOOL NAME SUM(BYTES) MB
------------------------------ -------------------------- ---------- ----------
shared pool (1): free memory 56014080 53.42
shared pool (2): free memory 20292704 19.35
shared pool (3): free memory 67884912 64.74
The detailed memory usage after the change is shown below. Note that the number of large free chunks in the reserved pool (R-free) has increased substantially, so when a large chunk of memory is requested it is much easier to obtain the shared memory resources:
SUBPOOL SGA_HEAP CHUNKCOMMENT size COUNT(*) STATUS BYTES
------- ---------------- ---------------- ----- ---------- -------- ----------
......
1 sga heap(1,0) free memory 8-9k 6 free 48016
1 sga heap(1,0) free memory > 10K 4 free 45448
......
1 sga heap(1,0) free memory 9-10k 22 R-free 197536
1 sga heap(1,0) free memory > 10K 144 R-free 2606992
......
2 sga heap(2,0) free memory 9-10k 8 free 72784
2 sga heap(2,0) free memory > 10K 15 free 172616
......
2 sga heap(2,0) free memory 9-10k 22 R-free 195280
2 sga heap(2,0) free memory > 10K 155 R-free 2839248
......
3 sga heap(3,0) free memory 8-9k 14 free 111736
3 sga heap(3,0) free memory 9-10k 1 free 8808
......
3 sga heap(3,0) free memory 9-10k 29 R-free 261272
3 sga heap(3,0) free memory > 10K 186 R-free 3434512
The customer's system is a two-node RAC environment in which the application was configured to connect to only one of the nodes. The shared pool usage on the other, idle node is listed below for reference:
SUBPOOL SGA_HEAP CHUNKCOMMENT size COUNT(*) STATUS BYTES
------- --------------- ---------------- ----- ---------- -------- ----------
1 sga heap(1,0) free memory 0-1K 373 free 41144
1 sga heap(1,0) free memory 1-2K 1 free 1488
1 sga heap(1,0) free memory 2-3K 1 free 1936
1 sga heap(1,0) free memory 3-4K 1 free 2704
1 sga heap(1,0) free memory 4-5K 1 free 3776
1 sga heap(1,0) free memory 9-10k 4 free 34864
1 sga heap(1,0) free memory > 10K 157 free 460271664
1 sga heap(1,0) free memory > 10K 38 R-free 25520800
2 sga heap(2,0) free memory 0-1K 357 free 37376
2 sga heap(2,0) free memory 3-4K 2 free 6152
2 sga heap(2,0) free memory 4-5K 1 free 3776
2 sga heap(2,0) free memory > 10K 130 free 454592888
2 sga heap(2,0) free memory > 10K 38 R-free 25520800
3 sga heap(3,0) free memory 0-1K 425 free 51280
3 sga heap(3,0) free memory 3-4K 1 free 2704
3 sga heap(3,0) free memory 7-8k 1 free 6664
3 sga heap(3,0) free memory > 10K 44 free 467930312
3 sga heap(3,0) free memory > 10K 38 R-free 25520800
When ORA-04031 occurs, the shared pool may have no suitable free memory while the shared pool reserved area (shared_pool_reserved_size) still has some memory free, so we can make better use of the reserved pool by lowering the size threshold at which allocations are served from it. In this case, lowering the _shared_pool_reserved_min_alloc parameter also helped the database make better use of the reserved memory.
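As a hedged sketch of these two steps (the 4000-byte value is purely illustrative, and the default of this hidden parameter is release dependent), one can watch the reserved pool and lower the size threshold at which allocations are served from it:
-- How much of the reserved pool is free, and whether requests have failed there
SELECT free_space, request_misses, request_failures, last_failure_size
  FROM v$shared_pool_reserved;
-- Illustrative only: allow smaller allocations to be served from the reserved pool
alter system set "_shared_pool_reserved_min_alloc"='4000' scope=spfile;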
Why are there four sub-partitions within a single subpool, such as:
sga heap(1,0) sga heap(1,1) sga heap(1,2) sga heap(1,3)
This is not because of the number of CPUs, nor because of _kghdsidx_count, but because Auto SGA in 10g introduced the concept of shared pool durations.
Durations fall into four classes:
- Session duration
- Instance duration (never freed)
- Execution duration (freed fastest)
- Free memory
The purpose of introducing shared pool durations is as follows:
In 10gR1 the shrink operation on the shared pool had some defects. One cause was that in that release the buffer cache was not yet able to share a granule: the tail of a buffer cache granule is made up of the granule header and metadata (buffer headers, or Lock Elements in RAC) packed together, and no hole was allowed at the tail. Another cause was that the shared pool at that time allowed chunks of different durations (see above) to live in the same granule, so the shared pool could never completely free a granule. In 10gR2 the buffer cache granule structure was changed to allow gaps between the granule header, the buffers and the metadata (buffer headers or LEs), and chunks of different durations in the shared pool no longer share the same granule. With these improvements, exchanging memory between the buffer cache and the shared pool became practical. In addition, in 10gR2 the streams pool also began to support memory exchange (in practice with restrictions based on the streams pool durations).
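The granule is the unit being discussed here; as a small check, the granule size of a running instance can be read from v$sgainfo (available from 10g onwards):
-- Granule size used by this instance (commonly 4MB, or 16MB for larger SGAs)
SELECT name, bytes FROM v$sgainfo WHERE name = 'Granule Size';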
reference : http://www.oracledatabase12g.com/archives/understanding-automatic-sga-memory-management.html
How To Determine The Default Number Of Subpools Allocated During Startup (Doc ID 455179.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 9.2.0.1 to 11.2.0.1 [Release 9.2 to 11.2]
Information in this document applies to any platform.
GOAL
This document explains the algorithm for determining the default number of subpools allocated during database startup.
SOLUTION
The calculation for the # of subpools of which the shared pool is comprised at instance startup is based on 3 items:
- the # of CPUs available to Oracle (value A)
- the shared pool size used by the instance (value B)
- whether a hidden instance parameter forces the use of a fixed # of subpools (value C)
The reference to the values A, B, and C will be used in the section "Number of subpools" when determining the actual # of subpools which will be allocated at instance startup.
The # of CPUs
For every 4 CPUs a subpool will be allocated, with a maximum of 7 subpools. The formula used is:
((CPU_COUNT - 1) >> 2) + 1
which leads to:
1 - 4 CPUs = 1 subpool
5 - 8 CPUs = 2 subpools
9 - 12 CPUs = 3 subpools
etc.
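A tiny illustration of this formula (pure arithmetic; the right shift by 2 is an integer division by 4):
-- ((CPU_COUNT - 1) >> 2) + 1 evaluated for a few CPU counts
SELECT cpus, FLOOR((cpus - 1) / 4) + 1 AS subpools
  FROM (SELECT 4 AS cpus FROM dual UNION ALL
        SELECT 8 FROM dual UNION ALL
        SELECT 12 FROM dual UNION ALL
        SELECT 16 FROM dual);
-- returns 1, 2, 3 and 4 subpools respectively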
The shared pool size
The shared pool size is dependent on whether Automatic Shared Memory Management (ASMM)/Automatic Memory Management (AMM) is used or not.
Manual Memory Management
In this case, the following instance parameters are not set, or set explicitly to 0:
- SGA_TARGET
- MEMORY_TARGET
When shared memory is managed manually, subpools are configured as follows based on the value of SHARED_POOL_SIZE instance parameter:
Oracle version        Minimum subpool size
-------------------   --------------------
9i                    128MB
10g < 10.2.0.3        256MB
10.2.0.3 and higher   512MB
So if in Oracle11g a shared pool size of 1.4GB is configured, the calculation will result in 2 subpools of 700MB each (as a subpool must be at least 512MB in Oracle11g). On Oracle10g 10.2.0.2 the same shared pool would have resulted in 5 subpools, each of 286MB.
Automatic Shared Memory Management & Automatic Memory Management
In this setup, either SGA_TARGET or MEMORY_TARGET is set to a non-0 value.
If SHARED_POOL_SIZE is explicitly set, then the same calculation is done as documented in the section "Manual Memory Management".
If SHARED_POOL_SIZE is not set explicitly, then with the size of SGA_TARGET known (either set explicitly, or calculated by the instance as outlined in the aforementioned document), we assume that the maximum initial value set for the SHARED_POOL_SIZE will be 50% of the SGA target size.
Number of subpools
The formula used to calculate the # of subpools at startup is:
if C set
then
subpool_count = C
else
subpool_count = min(A, B)
So if the hidden parameter _kghdsidx_count is set (value C), then this is chosen to be the # of pools to be used at instance startup. If the parameter is not set, then the minimum of A and B will be used.
Note: as the # of subpools is determined at instance startup, changing the SHARED_POOL_SIZE and CPU_COUNT instance parameters dynamically will have no effect on the # of subpools being used.
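One hedged way to confirm how many subpools the running instance actually uses (without restarting it) is to count the distinct subpool indexes in the x$kghlu view shown earlier in this note (SYS access required):
SELECT COUNT(DISTINCT kghluidx) AS subpool_count FROM x$kghlu;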
Examples
The following examples describe some cases how to calculate the # of subpools allocated for the shared pool based on given information such as Oracle release, CPU count, and instance memory parameters being set.
Example 1:
The following setup is used:
- Oracle9i is installed
- the system has 12 CPUs
- SHARED_POOL_SIZE is set to 300MB
The 12 CPUs lead to a possible allocation of 3 subpools (value A ). The 300MB shared pool size leads to a possible allocation of 2 subpools (value B ), each of 150MB in size. The actual # of subpools being allocated at instance startup is min(A, B), so min(3, 2) = 2.
Example 2:
The following setup is used:
- Oracle11gR2 is installed
- the system has 16 CPUs
- MEMORY_TARGET is set to 4.2GB
- no SGA_TARGET nor SHARED_POOL_SIZE is explicitly set
The 16 CPUs lead to a possible allocation of 4 subpools (value A ). The 4.2GB MEMORY_TARGET implies a SGA_TARGET of 2.52GB (60% of MEMORY_TARGET, as outlined in Document:1392549.1 ), out of which is assumed that the shared pool will be at most 50%: 1.26GB. This leads to a possible allocation of 2 subpools (value B ), each of 630MB. So the actual # of subpools allocated at instance startup is min(A, B), so min(4, 2) = 2.
Example 3:
The following setup is used:
- Oracle11gR2 is installed
- the system has 16 CPUs
- MEMORY_TARGET is set to 96GB
- SGA_TARGET is set to 50GB
The 16 CPUs could lead to an allocation of 4 subpools (value A ). As SGA_TARGET is set, and SHARED_POOL_SIZE is not explicitly set, the assumption is made that the shared pool size will be at most 25GB (50% of SGA_TARGET). This leads to a possible allocation of 7 subpools (value B ) as at most 7 subpools are allocated at any time and the calculation would lead to 50. So the actual # of subpools allocated at instance startup will be min(A, B), so min(4, 7) = 4.
Example 4:
The following setup is used:
- Oracle11gR2 is installed
- the system has 16 CPUs
- MEMORY_TARGET is set to 60GB
- SGA_TARGET is set to 30GB
- _kghdsidx_count is set to 4
This automatically implies that 4 subpools will be allocated, as the _kghdsidx_count instance parameter takes precedence over everything else.
REFERENCES
BUG:4994956 - NUMBER OF CONFIGURED SHARED POOL SUBPOOLS IS NOT CORRECT