A student of mine asked me about this situation: he needed to create an index to optimize one SQL statement, but once the index existed, other SQL statements started picking it up too. Those other statements became dead slow, while the SQL he actually wanted to optimize ran fast. How do you deal with a problem like that?
Honestly, this kind of problem does not come up very often, and there is more than one way to handle it. Let me walk you through one simple approach.

Easiest to just follow my test case step by step.
First, create a test table and an index in the SCOTT schema:

SQL> create table test as select * from dba_objects;

Table created.

SQL> create index idx_test on test(object_id);

Index created.
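If you want a before/after comparison later on, it can be worth noting down the real index statistics right after the index is created. This check is my own addition, not part of the original session; user_indexes is the standard dictionary view, and the actual values will of course depend on your data and database version:

-- Record the real index statistics so they can be compared with the
-- manually set values later on.
select num_rows,
       leaf_blocks,
       distinct_keys,
       avg_leaf_blocks_per_key,
       avg_data_blocks_per_key,
       clustering_factor
  from user_indexes
 where index_name = 'IDX_TEST';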
SQL> set lines 200 pages 200
SQL> set autot trace
SQL> select * from test where object_id=10;

Execution Plan
----------------------------------------------------------
Plan hash value: 2473784974

----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |     1 |   207 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST     |     1 |   207 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_TEST |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=10)

Note
-----
   - dynamic sampling used for this statement (level=6)

Statistics
----------------------------------------------------------
         44  recursive calls
          0  db block gets
        136  consistent gets
          4  physical reads
          0  redo size
       1404  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Now manually set the index statistics, making the clustering factor huge (really huge):

SQL> begin
  2    dbms_stats.set_index_stats(ownname  => 'SCOTT',
  3                               indname  => 'IDX_TEST',
  4                               numrows  => 100000000000,
  5                               numlblks => 100000,
  6                               numdist  => 100000,
  7                               avglblk  => 100000,
  8                               avgdblk  => 100000,
  9                               clstfct  => 100000000000);
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> select * from test where object_id=10;

Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |   207 |   290   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |   207 |   290   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"=10)

Note
-----
   - dynamic sampling used for this statement (level=6)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1039  consistent gets
          0  physical reads
          0  redo size
       1404  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

At this point, no SQL statement will use this index any more. If you want a particular SQL to keep using it, just add a hint to force it:

SQL> select /*+ index(test idx_test) */ * from test where object_id=10;

Execution Plan
----------------------------------------------------------
Plan hash value: 2473784974

----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |     1 |   207 |  1446K  (1)| 04:49:20 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST     |     1 |   207 |  1446K  (1)| 04:49:20 |
|*  2 |   INDEX RANGE SCAN          | IDX_TEST |     1 |       |    15  (94)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=10)

Note
-----
   - dynamic sampling used for this statement (level=6)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
       1404  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
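If you would rather not trust the plan change alone, you can confirm that the fake numbers really landed on the index by querying the dictionary again. This is my own verification step, not part of the original transcript; after set_index_stats these columns should simply report the values that were passed in:

-- After set_index_stats the dictionary reflects the manually set values,
-- e.g. clustering_factor = 100000000000 and num_rows = 100000000000.
select num_rows,
       leaf_blocks,
       distinct_keys,
       clustering_factor
  from user_indexes
 where index_name = 'IDX_TEST';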
That is still not quite the end of it, though: if anyone gathers statistics on the table, the gathered values will overwrite the ones we set. So whenever statistics are re-gathered, just re-run the set_index_stats call above and you are done, as in the sketch below.
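As a concrete sketch of that "re-apply after every gather" step (my own wrapper, not from the original post): how and when you gather is an assumption you would adapt to your environment, and the literal values simply repeat the ones set earlier.

begin
  -- Re-gather the real table/column statistics however you normally do it;
  -- cascade => true also refreshes the index statistics, wiping our fake values.
  dbms_stats.gather_table_stats(ownname => 'SCOTT',
                                tabname => 'TEST',
                                cascade => true);

  -- Immediately re-apply the fake index statistics so the index stays
  -- unattractive to the optimizer for un-hinted SQL.
  dbms_stats.set_index_stats(ownname  => 'SCOTT',
                             indname  => 'IDX_TEST',
                             numrows  => 100000000000,
                             numlblks => 100000,
                             numdist  => 100000,
                             avglblk  => 100000,
                             avgdblk  => 100000,
                             clstfct  => 100000000000);
end;
/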