Using BULK COLLECT INTO in Oracle

Reducing loop-processing overhead with BULK COLLECT

BULK COLLECT loads a query result into collections in a single operation, instead of processing rows one at a time through a cursor. It can be used in SELECT INTO, FETCH INTO, and RETURNING INTO statements. Note that when BULK COLLECT is used, all of the INTO variables must be collections.
 
 
A few simple examples:
-- Using BULK COLLECT in a SELECT INTO statement
DECLARE
  TYPE SalList IS TABLE OF emp.sal%TYPE;
  sals SalList;
BEGIN
  -- Limit the number of rows to 100.
  SELECT sal BULK COLLECT INTO sals FROM emp
   WHERE ROWNUM <= 100;
  -- Retrieve 10% (approximately) of the rows in the table.
  SELECT sal BULK COLLECT INTO sals FROM emp SAMPLE (10);
END;
/
-- Using BULK COLLECT in a FETCH INTO statement
DECLARE
  TYPE DeptRecTab IS TABLE OF dept%ROWTYPE;
  dept_recs DeptRecTab;
  CURSOR c1 IS
    SELECT deptno, dname, loc FROM dept WHERE deptno > 10;
BEGIN
  OPEN c1;
  FETCH c1 BULK COLLECT INTO dept_recs;
  CLOSE c1;
END;
/ 
-- Using BULK COLLECT in a RETURNING INTO clause
CREATE TABLE emp2 AS SELECT * FROM employees; 
DECLARE
  TYPE NumList IS TABLE OF employees.employee_id%TYPE;
  enums NumList;
  TYPE NameList IS TABLE OF employees.last_name%TYPE;
  names NameList;
BEGIN
  DELETE FROM emp2 WHERE department_id = 30
  RETURNING employee_id, last_name BULK COLLECT INTO enums, names;
  dbms_output.put_line('Deleted ' || SQL%ROWCOUNT || ' rows:');
  FOR i IN enums.FIRST .. enums.LAST
  LOOP
    dbms_output.put_line('Employee #' || enums(i) || ': ' || names(i));
  END LOOP;
END;
/ 
DROP TABLE emp2;

For a large-volume DML transaction in a low-concurrency database (an OLAP or reporting system, for example), and especially when the database runs in forced archive-log mode, using BULK COLLECT and FORALL is noticeably faster.

-- cur_COLUMN_USER, the l_ARY_* collections and g_batch_size_n are assumed
-- to be declared earlier; this is only the processing loop.
open cur_COLUMN_USER;
loop
  fetch cur_COLUMN_USER bulk collect
    into l_ARY_statedate,
         l_ARY_form,
         l_ARY_columnid,
         l_ARY_usernumber,
         l_ARY_new_user,
         l_ARY_exit_use
    limit g_batch_size_n;

  -- Stop once the last (empty) batch has been fetched.
  exit when l_ARY_statedate.count = 0;

  forall i in 1 .. l_ARY_statedate.count
    insert into content_lst_day
    (......)
    values (l_ary_statedate(i), ....);

  commit;
end loop;
close cur_COLUMN_USER;
This is much faster than fetching the rows out one at a time with an ordinary cursor loop and processing them individually; a row-by-row version is sketched below for comparison.
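For reference, a minimal sketch of the row-by-row equivalent, assuming the same cursor and a set of scalar l_* variables (those names are hypothetical, and the elided column list is kept as in the fragment above). Every iteration crosses the PL/SQL-to-SQL engine boundary twice, once for the FETCH and once for the INSERT:

-- Row-by-row version, for comparison only (hypothetical scalar variables).
open cur_COLUMN_USER;
loop
  fetch cur_COLUMN_USER
    into l_statedate, l_form, l_columnid,
         l_usernumber, l_new_user, l_exit_use;
  exit when cur_COLUMN_USER%notfound;

  insert into content_lst_day
  (......)
  values (l_statedate, ....);
end loop;
commit;
close cur_COLUMN_USER;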

Reason 1: BULK COLLECT INTO an array pulls the data out in one shot, which cuts down the time spent switching between the PL/SQL and SQL engines inside the loop.

Reason 2: FORALL i IN ... likewise hands the whole batch to the SQL engine in one shot, again cutting down the engine-switching time inside the loop.

Note 1: If the data volume is large, set a sensible LIMIT; otherwise the data fetched into the arrays can blow out the session's PGA memory.

Reason 3: The bulk operations optimize the underlying INSERT/DELETE work by applying it in batches, which greatly reduces redo and undo usage (a quick way to measure this is sketched after this list).

Reason 4 (not verified here): a single INSERT of a very large data volume can be extremely slow, and inserting in batches can take much less total time than doing it all at once.
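A rough way to check the redo claim in Reason 3 (a sketch, not from the original post): read the session's 'redo size' statistic before and after one version of the load, then repeat with the other version and compare. Querying V$MYSTAT/V$STATNAME from PL/SQL needs a direct SELECT grant on those views.

DECLARE
  l_redo_before NUMBER;
  l_redo_after  NUMBER;
BEGIN
  SELECT s.value INTO l_redo_before
    FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
   WHERE n.name = 'redo size';

  -- ... run the FORALL batch load (or the row-by-row loop) here ...

  SELECT s.value INTO l_redo_after
    FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
   WHERE n.name = 'redo size';

  dbms_output.put_line('redo generated: '
    || (l_redo_after - l_redo_before) || ' bytes');
END;
/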

An ORA-22813 when using BULK COLLECT is typically expected behavior indicating that you have exceeded the amount of free memory in the PGA. As collections are processed by PL/SQL, they use the PGA to store their memory structures. Depending on the LIMIT size of the BULK COLLECT and any additional processing of the collected data, you may exceed the free memory of the PGA. While intuitively you might think that increasing the PGA memory and increasing the LIMIT size will always increase performance, testing shows that this is not necessarily true. The goal is to strike a balance between a reasonable LIMIT size and the size of the PGA while maintaining a high level of performance with BULK COLLECT.
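To see how much PGA a given LIMIT actually costs (a minimal sketch, not the example referred to in the support note): run the batch loop with a candidate LIMIT and then check the session's PGA statistics; 'session pga memory max' shows the peak for the current session.

-- Current and peak PGA usage of the current session, in MB.
-- Needs SELECT privilege on V$MYSTAT and V$STATNAME.
SELECT n.name, ROUND(s.value / 1024 / 1024, 1) AS mb
  FROM v$mystat   s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name IN ('session pga memory', 'session pga memory max');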

 
