The examples below walk through, step by step, how the data access methods and data processing (join) methods in an execution plan evolve.
1. Execution plan: accessing data through a full table scan (TABLE ACCESS FULL)
LEO1@LEO1> create table leo1 as select * from dba_objects; create the test table leo1 as a copy of dba_objects
Table created.
LEO1@LEO1> select count(*) from leo1; the table holds 71955 rows
COUNT(*)
----------
71955
LEO1@LEO1> set autotrace trace exp; turn on execution plan display
LEO1@LEO1> select * from leo1;
Execution Plan
----------------------------------------------------------
Plan hash value: 2716644435
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 83162 | 16M| 287 (1)| 00:00:04 |
| 1 | TABLE ACCESS FULL| LEO1 | 83162 | 16M| 287 (1)| 00:00:04 |
--------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Data access method: a full table scan. No index exists on leo1 yet, so an index path is simply not available. For a selective query like this, a full table scan is an inefficient access method, and it is one you try to avoid in practice.
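The Note about dynamic sampling appears because leo1 was just created and has no optimizer statistics yet. A minimal sketch of gathering statistics so the optimizer works from stored statistics instead (DBMS_STATS with its defaults; this call is an illustration, not part of the original session):
exec dbms_stats.gather_table_stats(user, 'LEO1')     -- gather table, column and index statistics for LEO1
select * from leo1;                                  -- with autotrace still on, the same TABLE ACCESS FULL plan should now appear without the dynamic sampling note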
2. Execution plan: accessing table data in parallel (PARALLEL)
LEO1@LEO1> select /*+ parallel */ count(*) from leo1; let the optimizer compute the degree of parallelism automatically
Execution Plan
----------------------------------------------------------
Plan hash value: 452265093
--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 159 (0)| 00:00:02 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR| | 71955 | 159 (0)| 00:00:02 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL| LEO1 | 71955 | 159 (0)| 00:00:02 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------
Note
-----
- automatic DOP: Computed Degree of Parallelism is 2
If no degree of parallelism is given in the hint, the optimizer computes one automatically; here it comes out as 2 because my laptop has only two CPU cores, so the computed DOP cannot go any higher.
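Which value automatic DOP arrives at depends on the instance's CPU-related settings. A quick way to see what the optimizer is working with (a sketch, not taken from the original session; SHOW PARAMETER is a SQL*Plus command):
show parameter cpu_count                 -- number of CPUs the instance believes it has
show parameter parallel_threads_per_cpu  -- multiplier used when sizing parallel execution
show parameter parallel_degree_policy    -- MANUAL / LIMITED / AUTO controls whether automatic DOP is considered instance-wide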
LEO1@LEO1> select /*+ parallel(leo1 4) */ count(*) from leo1; request a degree of parallelism of 4
Execution Plan
----------------------------------------------------------
Plan hash value: 452265093
--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 80 (2)| 00:00:01 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM)| :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR| | 71955 | 80 (2)| 00:00:01 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL| LEO1 | 71955 | 80 (2)| 00:00:01 | Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------
Data access method: this time the query runs in parallel, and the difference from a serial plan can be dramatic. With a DOP of 4, four parallel slave processes split the table among themselves, each handling roughly 1/4 of the data, which in theory gives close to a 4x speedup. (The DOP should match the number of CPUs; my laptop has only 2 cores, so a DOP of 4 brings no real benefit here. Also, once a DOP is specified in the hint, the optimizer no longer computes one automatically.) Reading the execution order of the plan: full table scan of LEO1 -> PX BLOCK ITERATOR hands out block ranges to the slaves -> SORT AGGREGATE builds each slave's partial count -> PX SEND QC (RANDOM) sends each slave's result to the query coordinator (the P->S, parallel-to-serial, step) -> PX COORDINATOR merges the slave results -> the top-level SORT AGGREGATE produces the final count -> the result is returned to the user.
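To check how many parallel slaves actually served a query in this session, v$pq_sesstat can be queried right after running it for real (a sketch; note that autotrace traceonly explain only parses the statement, so the query has to be executed with autotrace off before these counters move):
select statistic, last_query
from   v$pq_sesstat
where  statistic in ('Queries Parallelized', 'Allocation Height');   -- Allocation Height shows the DOP actually allocated for the last query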
3. Execution plan: accessing data through an index unique scan (INDEX UNIQUE SCAN)
LEO1@LEO1> alter table leo1 add constraint pk_leo1 primary key (object_id); add a primary key on leo1(object_id)
Table altered.
LEO1@LEO1> set linesize 300
LEO1@LEO1> select * from leo1 where object_id=100; check the access path when object_id=100
Execution Plan
----------------------------------------------------------
Plan hash value: 2711624550
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 207 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID | LEO1 | 1 | 207 | 2 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | PK_LEO1 | 1 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
-------------------------------------------------------------------
2 - access("OBJECT_ID"=100) the access predicate object_id=100, i.e. the condition the plan is built around
Data access method: this plan is easy to read. INDEX UNIQUE SCAN runs first: the predicate is an equality on the primary key column, so the optimizer can locate the single matching index entry directly. Because the query asks for every column (*) of the row with object_id=100, TABLE ACCESS BY INDEX ROWID then takes the ROWID stored with that index key and visits the data block it points to in order to fetch the whole row. This is a fast access path: few blocks are scanned and little resource is used, which makes it a recommended way to access data.
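The index key -> ROWID -> table block path can be imitated by hand, which makes the two plan steps concrete (purely illustrative; the optimizer does this internally in a single pass):
select rowid, object_id from leo1 where object_id=100;                            -- what the index lookup yields: the key plus the ROWID stored with it
select * from leo1 where rowid = (select rowid from leo1 where object_id=100);    -- fetching by that ROWID reads exactly the one block holding the row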
LEO1@LEO1> select object_id from leo1 where object_id=100;
Execution Plan
----------------------------------------------------------
Plan hash value: 1889847647
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 1 (0)| 00:00:01 |
|* 1 |INDEX UNIQUE SCAN| PK_LEO1 | 1 | 13 | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("OBJECT_ID"=100)
Note: for select object_id from leo1 where object_id=100; only the index key needs to be scanned to answer the query, and there is no need to visit the table block at all (the index entry itself holds the object_id=100 value), so this form is faster still.
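Conversely, as soon as the select list asks for a column that is not stored in the index, the table visit comes back. A quick variation to try (the expected plan is an assumption, not output from the original session):
select object_id, object_name from leo1 where object_id=100;   -- object_name is not in PK_LEO1, so expect INDEX UNIQUE SCAN plus TABLE ACCESS BY INDEX ROWID again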
4. Execution plan: accessing data through an index range scan (INDEX RANGE SCAN)
LEO1@LEO1> select * from leo1 where object_id>10 and object_id<100;
Execution Plan
----------------------------------------------------------
Plan hash value: 2612250437
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 89 | 18423 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID | LEO1 | 89 | 18423 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | PK_LEO1 | 89 | | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OBJECT_ID">10 AND "OBJECT_ID"<100) the access predicate object_id>10 and object_id<100
Note
-----
- dynamic sampling used for this statement (level=2)
Data access method: the where clause object_id>10 and object_id<100 is a range, and index entries are stored and scanned in key order, so the optimizer chose an INDEX RANGE SCAN: it walks the qualifying index entries, takes the ROWID from each matching key, and visits the corresponding data block to fetch the row. This is a little slower than an index unique scan but far better than a full table scan, and it is also a recommended access method.
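The range scan relies on the predicate being applied to the bare indexed column. Wrapping the column in an expression normally takes the index out of play (a sketch to try; the expected plan is an assumption):
select * from leo1 where object_id+0 > 10 and object_id+0 < 100;   -- the expression object_id+0 cannot use PK_LEO1, so expect the plan to fall back to TABLE ACCESS FULL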
5. Execution plan: accessing data through an index fast full scan (INDEX FAST FULL SCAN)
How it works: the index is split into ranges of blocks that are read with multiblock (and, if requested, parallel) reads rather than in key order, which makes it more efficient than an INDEX FULL SCAN when the result does not need to come back sorted.
LEO1@LEO1> select count(*) from leo1;
Execution Plan
----------------------------------------------------------
Plan hash value: 173418543
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 46 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN | PK_LEO1 | 83162 | 46 (0)| 00:00:01 |
-------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Data access method: the goal is simply to count the rows in leo1. The table now has a primary key index, and every row has exactly one entry in that index (a primary key column can never be NULL), so counting index entries gives the same result as counting table rows. Scanning the index segment reads far fewer blocks than scanning the table, and the fast full scan reads them with efficient multiblock I/O (it can also run in parallel, although this particular plan is serial), so the count is obtained much more cheaply. SORT AGGREGATE then rolls the scanned entries up into the final count.
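If the optimizer does not pick the fast full scan on its own, it can be suggested with the INDEX_FFS hint (a sketch; it works here because the primary key guarantees every row has an index entry, which is what makes counting via the index valid):
select /*+ index_ffs(leo1 pk_leo1) */ count(*) from leo1;   -- explicitly ask for INDEX FAST FULL SCAN on PK_LEO1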
6. Execution plan: accessing data through an index full scan (INDEX FULL SCAN)
LEO1@LEO1> select object_id from leo1 order by object_id;
Execution Plan
----------------------------------------------------------
Plan hash value: 1595913726
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71955 | 351K| 150 (0)| 00:00:02 |
| 1 | INDEX FULL SCAN| PK_LEO1 | 71955 | 351K| 150 (0)| 00:00:02 |
----------------------------------------------------------------------------
Data access method: the query wants object_id back in sorted order. Index entries are stored, and scanned, in key order, so an INDEX FULL SCAN walks the whole index from the first key to the last and the rows come back already sorted; no separate sort step and no visit to the table blocks are needed. Because the scan must follow key order it is a serial, block-by-block walk, which is why the unordered, parallel-friendly INDEX FAST FULL SCAN does not fit here.
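For comparison, ordering by a column that has no index cannot take this shortcut; the data has to be fetched and then sorted (a sketch to try; the expected plan is an assumption):
select object_name from leo1 order by object_name;   -- no index on object_name, so expect TABLE ACCESS FULL followed by SORT ORDER BY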
7. Execution plan: accessing data through an index skip scan (INDEX SKIP SCAN)
Explanation: an index skip scan skips the leading column of a composite index when scanning. If a table has a composite index, the query filters only on a column of that index other than the first (leading) column, and the optimizer is the CBO, the execution plan may use an INDEX SKIP SCAN.
LEO1@LEO1> create table leo3 (x number,y varchar2(30),z varchar2(30)); create a table with three columns
Table created.
LEO1@LEO1> create index compound_idx_leo3 on leo3(x,y); create a composite index on (x,y)
Index created.
LEO1@LEO1> begin insert 100,000 rows: x cycles through 0..29, y and z hold the row number as text
  2    for i in 1..100000 loop
  3      insert into leo3 values(mod(i,30),to_char(i),to_char(i));
  4    end loop;
  5    commit;
  6  end;
  7  /
PL/SQL procedure successfully completed.
LEO1@LEO1> analyze table leo3 compute statistics; gather statistics on the table so the CBO knows the data distribution
Table analyzed.
LEO1@LEO1> set autotrace trace explain;
LEO1@LEO1> select * from leo3 where y='1000';
Execution Plan
----------------------------------------------------------
Plan hash value: 1334303583
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 32 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| LEO3 | 1 | 12 | 32 (0)| 00:00:01 |
|* 2 | INDEX SKIP SCAN | COMPOUND_IDX_LEO3 | 1 | | 31 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id): the predicate skips the leading column (x), which is what allows the skip scan
---------------------------------------------------
2 - access("Y"='1000')
    filter("Y"='1000')
Data access method: for an index skip scan to be chosen, a few conditions need to hold:
1. The query skips the leading column, i.e. it filters only on a non-leading column of the composite index
2. The optimizer is the CBO
3. The table has been analyzed, so the CBO knows how the data is distributed
4. The leading column has very few distinct values and the statistics on the table are accurate
When these conditions are met, the optimizer may choose INDEX SKIP SCAN for the access; it can also be suggested explicitly, as in the sketch below.
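When the conditions above are borderline, the skip scan can also be suggested with the INDEX_SS hint (a sketch; the hint takes the table and the composite index name):
select /*+ index_ss(leo3 compound_idx_leo3) */ * from leo3 where y='1000';   -- asks the optimizer to consider INDEX SKIP SCAN on COMPOUND_IDX_LEO3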
8. Execution plan: data processing with a hash join (HASH JOIN)
HASH JOIN characteristics: a hash join tends to be the better choice when the join columns have no usable indexes; building and probing the hash table takes some computation, so it consumes a certain amount of CPU.
LEO1@LEO1> create table leo2 as select * from dba_objects where rownum<20000; create table leo2
Table created.
LEO1@LEO1> set autotrace off turn autotrace off
LEO1@LEO1> select count(*) from leo2; check how many rows leo2 holds
COUNT(*)
----------------
20000
LEO1@LEO1> set autotrace trace exp; turn execution plan display back on
LEO1@LEO1> select leo1.* from leo1,leo2 where leo1.object_id=leo2.object_id; HASH JOIN
Execution Plan
----------------------------------------------------------
Plan hash value: 2290691545
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21772 | 2338K| 367 (1)| 00:00:05 |
|* 1 |HASH JOIN | | 21772 | 2338K| 367 (1)| 00:00:05 |
| 2 | TABLE ACCESS FULL| LEO2 | 21772 | 276K| 79 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| LEO1 | 71955 | 6816K| 287 (1)| 00:00:04 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("LEO1"."OBJECT_ID"="LEO2"."OBJECT_ID") join predicate: rows whose object_id matches in both tables
Note
-----
- dynamic sampling used for this statement (level=2)
Data processing method: the query returns the rows whose object_id matches in both tables. A hash join first builds a hash table in memory from the smaller row source and then probes it with the larger one for exact matches. For select leo1.* from leo2,leo1 where leo1.object_id=leo2.object_id; the order of the tables in the FROM clause does not matter: the optimizer scans the smaller table LEO2 (fewer rows) first and builds the hash table from it, then scans the larger table LEO1 (more rows) and probes the hash table row by row to find the matches.
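When experimenting, the join method and the build table can be stated explicitly with hints (a sketch; LEADING fixes the join order and USE_HASH requests the hash join, so leo2 becomes the build table and leo1 the probe table):
select /*+ leading(leo2) use_hash(leo1) */ leo1.*
from   leo1, leo2
where  leo1.object_id = leo2.object_id;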
An aside: how to read the execution order of a plan.
Start at the top and follow the indentation to the right until you reach the most deeply indented operations. Among operations indented to the same level under the same parent, the one listed higher executes first; between operations at different levels, the more deeply indented (further right) one executes first.
9. Execution plan: data processing with a nested loops join (NESTED LOOP JOIN)
NESTED LOOP JOIN characteristics: the joined tables, or at least the inner (probed) one, should have an index on the join column, because matching through index lookups is what makes this method efficient.
LEO1@LEO1> alter table leo2 add constraint pk_leo2 primary key (object_id); add a primary key on leo2(object_id)
Table altered.
LEO1@LEO1> select leo1.* from leo1,leo2 where leo1.object_id=leo2.object_id; join the two tables again, now with an index on leo2
Execution Plan
----------------------------------------------------------
Plan hash value: 1603444237
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21772 | 2338K| 291 (2)| 00:00:04 |
| 1 | NESTED LOOPS | | 21772 | 2338K| 291 (2)| 00:00:04 |
| 2 | TABLE ACCESS FULL | LEO1 | 71955 | 6816K| 287 (1)| 00:00:04 |
|* 3 | INDEX UNIQUE SCAN | PK_LEO2 | 1 | 13 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("LEO1"."OBJECT_ID"="LEO2"."OBJECT_ID") equality predicate used to probe PK_LEO2
Note
-----
- dynamic sampling used for this statement (level=2)
Data processing method: for each row taken from leo1, the plan probes leo2 for a matching row (through the index, of course): now that leo2 has a primary key, every probe is an INDEX UNIQUE SCAN on PK_LEO2 rather than a pass over the whole table. The probe is repeated once for every row that comes out of leo1, so the more driving rows there are, the more lookups are done, which is why nested loops only pays off when the probed table can be accessed through an index.
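The same pairing can be requested with hints when the optimizer would otherwise prefer another method (a sketch; LEADING makes leo1 the driving table and USE_NL asks for nested loops into leo2):
select /*+ leading(leo1) use_nl(leo2) */ leo1.*
from   leo1, leo2
where  leo1.object_id = leo2.object_id;   -- leo1 drives the loop, each row probes PK_LEO2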
10. Execution plan: data processing with a merge join (MERGE JOIN)
LEO1@LEO1> alter table leo1 drop constraint pk_leo1;
Table altered.
LEO1@LEO1> alter table leo2 drop constraint pk_leo2;
Table altered.
Drop the primary keys on leo1 and leo2. I tested this: if the primary keys are left in place, the optimizer sticks with the NESTED LOOP JOIN plan instead.
LEO1@LEO1> select l1.* from (select * from leo1 order by object_id) l1,(select * from leo2 order by object_id) l2 where l1.object_id=l2.object_id;
Execution Plan
----------------------------------------------------------
Plan hash value: 463394885
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21772 | 4507K| | 2067 (1)| 00:00:25 |
| 1 |MERGE JOIN | | 21772 | 4507K| | 2067 (1)| 00:00:25 |
| 2 | VIEW | | 71955 | 13M| | 1882 (1)| 00:00:23 |
| 3 | SORT ORDER BY | | 71955 | 6816K| 9448K| 1882 (1)| 00:00:23 |
| 4 | TABLE ACCESS FULL | LEO1 | 71955 | 6816K| | 287 (1)| 00:00:04 |
|* 5 | SORT JOIN | | 21772 | 276K| 872K| 186 (2)| 00:00:03 |
| 6 | TABLE ACCESS FULL | LEO2 | 21772 | 276K| | 79 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access("L1"."OBJECT_ID"="LEO2"."OBJECT_ID")
filter("L1"."OBJECT_ID"="LEO2"."OBJECT_ID")
Note
-----
- dynamic sampling used for this statement (level=2)
Data processing method: a MERGE JOIN first sorts both leo1 and leo2 on the join key and then merges the two sorted sets, matching rows one by one. It is usually not the most efficient method because of the sorting that has to happen first. Execution order: full table scan of LEO1 -> SORT ORDER BY sorts it -> VIEW presents the sorted result set -> full table scan of LEO2 -> SORT JOIN sorts it on the join key -> MERGE JOIN merges the two sorted inputs.
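Instead of dropping the primary keys, a sort-merge plan can usually be requested on the original query with a hint (a sketch; USE_MERGE asks the optimizer to join the listed tables with a merge join):
select /*+ use_merge(leo1 leo2) */ leo1.*
from   leo1, leo2
where  leo1.object_id = leo2.object_id;   -- both row sources are sorted on object_id and then merged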