When a table has too many columns, or some of those columns are very wide, performance on joins and full table scans can still be poor even if you query only a handful of columns, because many data blocks must be scanned. Oracle scans and reads data block by block, so when this cannot be fixed at the SQL level, consider splitting the data vertically. The experiment below demonstrates the difference.
-- Build the test data without vertical splitting
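First, a minimal sketch of what such a vertical split might look like in practice; the table and column names here are hypothetical and are not part of the experiment that follows:
-- A hypothetical wide table split into a narrow "hot" table (small,
-- frequently queried columns) and a "cold" table (wide, rarely read
-- columns), linked 1:1 by the primary key.
create table orders_hot (
  id     number primary key,
  status varchar2(20)
);
create table orders_cold (
  id      number primary key,
  remarks varchar2(4000),
  payload varchar2(4000),
  constraint fk_orders_cold foreign key (id) references orders_hot (id)
);
-- Day-to-day queries scan only the narrow table:
select id, status from orders_hot where status = 'NEW';
-- The wide columns are joined back only when they are really needed:
select h.id, h.status, c.remarks
  from orders_hot h
  join orders_cold c on c.id = h.id
 where h.id = 42;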
create table test(
a number,
b varchar2(4000),
c varchar2(4000),
d varchar2(4000),
e varchar2(4000),
f varchar2(4000),
g varchar2(4000),
h varchar2(4000)
);
INSERT INTO test
SELECT ROWNUM,
       rpad('*', 4000, 1),
       rpad('*', 4000, 1),
       rpad('*', 4000, 1),
       rpad('*', 4000, 1),
       rpad('*', 4000, 1),
       rpad('*', 4000, 1),
       rpad('*', 4000, 1)
FROM DUAL
CONNECT BY ROWNUM <= 100000;
commit;
create table test1 as select * from test;
-- Build the test data with vertical splitting (a narrow table holding only the join column)
create table test_cuizhi(
a number
);
INSERT INTO test_cuizhi
SELECT ROWNUM
FROM DUAL
CONNECT BY ROWNUM <= 100000;
commit;
create table test_cuizhi1 as select * from test_cuizhi;
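No optimizer statistics are gathered here, which is why every plan below reports "dynamic sampling used for this statement". If desired, statistics could be collected with DBMS_STATS; this step was not part of the test run, so it is shown only as an optional sketch:
-- Optional: gather statistics on the four test tables so the optimizer
-- has accurate row counts instead of falling back to dynamic sampling.
begin
  dbms_stats.gather_table_stats(user, 'TEST');
  dbms_stats.gather_table_stats(user, 'TEST1');
  dbms_stats.gather_table_stats(user, 'TEST_CUIZHI');
  dbms_stats.gather_table_stats(user, 'TEST_CUIZHI1');
end;
/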
-- Start the test: select only the two smallest columns
SQL> set timing on
SQL> set autotrace traceonly
SQL> select t.a,t1.a from test t, test1 t1 where t.a=t1.a;
100000 rows selected.
Elapsed: 00:00:53.17
Execution Plan
----------------------------------------------------------
Plan hash value: 2400077556
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 44504 | 1129K| 173K (1)| 00:34:38 |
|* 1 | HASH JOIN | | 44504 | 1129K| 173K (1)| 00:34:38 |
| 2 | TABLE ACCESS FULL| TEST | 44504 | 564K| 87801 (1)| 00:17:34 |
| 3 | TABLE ACCESS FULL| TEST1 | 117K| 1490K| 85344 (1)| 00:17:05 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T"."A"="T1"."A")
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
52 recursive calls
0 db block gets
795627 consistent gets
534917 physical reads
0 redo size
1664840 bytes sent via SQL*Net to client
73664 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
100000 rows processed
SQL> /
100000 rows selected.
Elapsed: 00:00:33.36
Execution Plan
----------------------------------------------------------
Plan hash value: 2400077556
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 44504 | 1129K| 173K (1)| 00:34:38 |
|* 1 | HASH JOIN | | 44504 | 1129K| 173K (1)| 00:34:38 |
| 2 | TABLE ACCESS FULL| TEST | 44504 | 564K| 87801 (1)| 00:17:34 |
| 3 | TABLE ACCESS FULL| TEST1 | 117K| 1490K| 85344 (1)| 00:17:05 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T"."A"="T1"."A")
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
795446 consistent gets
552087 physical reads
0 redo size
1664840 bytes sent via SQL*Net to client
73664 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
100000 rows processed
SQL> select t.a,t1.a from test_cuizhi t, test_cuizhi1 t1 where t.a=t1.a;
100000 rows selected.
Elapsed: 00:00:06.17
Execution Plan
----------------------------------------------------------
Plan hash value: 2501302817
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 88629 | 2250K| | 310 (2)| 00:00:04 |
|* 1 | HASH JOIN | | 88629 | 2250K| 2168K| 310 (2)| 00:00:04 |
| 2 | TABLE ACCESS FULL| TEST_CUIZHI | 88629 | 1125K| | 42 (3)| 00:00:01 |
| 3 | TABLE ACCESS FULL| TEST_CUIZHI1 | 101K| 1288K| | 39 (3)| 00:00:01 |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T"."A"="T1"."A")
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
52 recursive calls
0 db block gets
7139 consistent gets
153 physical reads
0 redo size
1664840 bytes sent via SQL*Net to client
73664 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
100000 rows processed
SQL> /
100000 rows selected.
Elapsed: 00:00:06.06
Execution Plan
----------------------------------------------------------
Plan hash value: 2501302817
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 88629 | 2250K| | 310 (2)| 00:00:04 |
|* 1 | HASH JOIN | | 88629 | 2250K| 2168K| 310 (2)| 00:00:04 |
| 2 | TABLE ACCESS FULL| TEST_CUIZHI | 88629 | 1125K| | 42 (3)| 00:00:01 |
| 3 | TABLE ACCESS FULL| TEST_CUIZHI1 | 101K| 1288K| | 39 (3)| 00:00:01 |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T"."A"="T1"."A")
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
7008 consistent gets
0 physical reads
0 redo size
1664840 bytes sent via SQL*Net to client
73664 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
100000 rows processed
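Both designs return the same 100,000 rows, but the wide tables need roughly 795,000 consistent gets and over 500,000 physical reads per run (about 53 and 33 seconds elapsed), while the single-column tables need only about 7,000 consistent gets and finish in about 6 seconds. The gap comes from the number of blocks each table occupies; one way to confirm this (a sketch, assuming the segments exist in the current schema) is to query USER_SEGMENTS:
-- Compare how many blocks each table occupies; the wide tables should span
-- hundreds of thousands of blocks, the single-column tables only a few hundred.
select segment_name, blocks, round(bytes / 1024 / 1024) mb
  from user_segments
 where segment_name in ('TEST', 'TEST1', 'TEST_CUIZHI', 'TEST_CUIZHI1')
 order by segment_name;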