7 XTTS Post-Migration Checks
7.1 Set Users' Default Tablespace
Change each user's default tablespace so that the default tablespace settings match the source database:
@default_tablespace.sql
Run on the source to generate the script:
spool default_tablespace.sql
select 'alter user '||username||' default tablespace '||default_tablespace||';' from dba_users where default_tablespace in ('DATATBS');
spool off
Grant tablespace quota:
@unlimited_tablespace.sql
On the source:
select 'alter user '||username||' quota unlimited on '||default_tablespace||';' from dba_users where default_tablespace in ('DATATBS');
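If the two generator queries are spooled as-is, the spool files will also contain headings and feedback lines. A minimal SQL*Plus sketch (same queries, with the formatting settings that would typically be added so that the spooled scripts are directly executable):

set heading off feedback off echo off verify off pagesize 0 linesize 200 trimspool on
spool default_tablespace.sql
select 'alter user '||username||' default tablespace '||default_tablespace||';'
  from dba_users where default_tablespace in ('DATATBS');
spool off
spool unlimited_tablespace.sql
select 'alter user '||username||' quota unlimited on '||default_tablespace||';'
  from dba_users where default_tablespace in ('DATATBS');
spool off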
7.2 Recompile Database Objects in Parallel
exec utl_recomp.recomp_parallel(32);
For any objects that remain invalid, generate per-object compile statements and run them:
set echo off feedback off timing off verify off heading off
set pagesize 0 linesize 999 trimspool on trimout on
spool compile.sql
select 'alter '||
decode(object_type,'SYNONYM',decode(owner,'PUBLIC','PUBLIC SYNONYM '||object_name,
'SYNONYM '||OWNER||'.'||OBJECT_NAME)||' compile;',
decode(OBJECT_TYPE ,'PACKAGE BODY','PACKAGE',OBJECT_TYPE)||
' '||owner||'.'||object_name||' compile '||
decode(OBJECT_TYPE ,'PACKAGE BODY','BODY;',' ;'))
from dba_objects where status<>'VALID'
order by owner,OBJECT_NAME;
spool off
@compile.sql
In the production environment there were no invalid objects.
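As a quick verification (an assumption about how this was confirmed), the following query should return no rows once compile.sql has been run:

select owner, object_type, count(*)
  from dba_objects
 where status <> 'VALID'
 group by owner, object_type;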
7.3 Compare Database Objects and Data
Run the comparison scripts: create a database link and then run the object-comparison scripts over it. Here we mainly compared stored procedures, functions, triggers, views, indexes, tables, and so on.
Create a DB link to the production environment:
CREATE DATABASE LINK TEST_COMPARE CONNECT TO SYSTEM IDENTIFIED BY xxx USING 'xxxx:1521/xxxx';
Use the following script to compare the number of objects in the two databases:
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE
FROM DBA_OBJECTS@TEST_COMPARE
WHERE OBJECT_NAME NOT LIKE 'BIN%'
AND OBJECT_NAME NOT LIKE 'SYS_%'
AND OWNER IN ('LUOKLE')
MINUS
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE
FROM DBA_OBJECTS
WHERE OBJECT_NAME NOT LIKE 'BIN%'
AND OBJECT_NAME NOT LIKE 'SYS_%'
AND OWNER IN ('LUOKLE');
Or:
On the source:
select object_type, count(*) from dba_objects
 where owner in (<list of migrated users>) group by object_type;
On the target:
select object_type, count(*) from dba_objects
 where owner in (<list of migrated users>) group by object_type;
If indexes are missing, it is likely because they were not stored in the transported tablespaces, so they need to be recreated; missing tables may be global temporary tables, which must be created by hand.
Create them with scripts such as the following:
CREATE INDEX "LUOKLE"."IDX_XXX" ON "LUOKLE"."BI_XXXX" TABLESPACEDATATBS parallel 8;
Alter index "LUOKLE"."IDX_XX" noparallel;
CREATE GLOBAL TEMPORARY TABLE "LUOKLE"."TEMP_PAY_BATCH_CREATE_INSTR"
( "BATCH_ID" NUMBER,
"STATUS" CHAR(1)
) ON COMMIT PRESERVE ROWS ;
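To pinpoint exactly which indexes are missing, the same MINUS pattern can be applied to dba_indexes over the TEST_COMPARE link (a sketch, reusing the schema list from the object comparison above):

select owner, index_name, table_name
  from dba_indexes@TEST_COMPARE
 where owner in ('LUOKLE')
minus
select owner, index_name, table_name
  from dba_indexes
 where owner in ('LUOKLE');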
Data comparison using a hash function
Create a table on each side to store the hash values:
create table system.get_has_value (dbname varchar2(20),owner varchar2(30),table_name varchar2(100),value varchar2(100),error varchar2(2000));
Create the list of tables to be verified (the cycling sequence assigns each table a group ID from 1 to 10, used later for parallel runs):
create sequence system.sequence_checkout_table start with 1 increment by 1 order cycle maxvalue 10 nocache;
CREATE TABLE SYSTEM.checkout_table as select sys_context('USERENV', 'INSTANCE_NAME') dbnme,owner,table_name, system.sequence_checkout_table.NEXTVAL groupid from dba_tables where owner='LUOKLE';
The resulting distribution:
14:05:21 SQL> SELECT owner, groupid, COUNT (*)
14:05:31 2 FROM SYSTEM.checkout_table
14:05:32 3 GROUP BY owner, groupid,dbnme Order by owner,groupid;
OWNER GROUPID COUNT(*)
------------------------------ ---------- ----------
LUOKLE 1 32
LUOKLE 2 31
LUOKLE 3 31
LUOKLE 4 31
LUOKLE 5 31
LUOKLE 6 31
LUOKLE 7 31
LUOKLE 8 31
LUOKLE 9 31
LUOKLE 10 31
Create the hash procedure:
grant select on sys.dba_tab_columns to system;
-- Builds a query that concatenates every supported column of the given table with '|',
-- sums dbms_utility.get_hash_value over all rows, and records the result in system.get_has_value.
CREATE OR REPLACE PROCEDURE SYSTEM.get_hv_of_data (
    avc_owner VARCHAR2,
    avc_table VARCHAR2)
AS
lvc_sql_text VARCHAR2 (30000);
ln_hash_value NUMBER;
lvc_error VARCHAR2 (100);
BEGIN
SELECT 'select /*+parallel(a,25)*/sum(dbms_utility.get_hash_value('
|| column_name_path
|| ',0,power(2,30)) ) from '
|| owner
|| '.'
|| table_name
|| ' a '
INTO LVC_SQL_TEXT
FROM (SELECT owner,
table_name,
column_name_path,
ROW_NUMBER ()
OVER (PARTITION BY table_name
ORDER BY table_name, curr_level DESC)
column_name_path_rank
FROM ( SELECT owner,
table_name,
column_name,
RANK,
LEVEL AS curr_level,
LTRIM (
SYS_CONNECT_BY_PATH (column_name, '||''|''||'),
'||''|''||')
column_name_path
FROM ( SELECT owner,
table_name,
'"' || column_name || '"' column_name,
ROW_NUMBER ()
OVER (PARTITION BY table_name
ORDER BY table_name, column_name)
RANK
FROM dba_tab_columns
WHERE owner = UPPER (avc_owner)
AND table_name = UPPER (avc_table)
AND DATA_TYPE IN ('TIMESTAMP(3)',
'INTERVAL DAY(3) TO SECOND(0)',
'TIMESTAMP(6)',
'NVARCHAR2',
'CHAR',
'BINARY_DOUBLE',
'NCHAR',
'DATE',
'RAW',
'TIMESTAMP(6)',
'VARCHAR2',
'NUMBER')
ORDER BY table_name, column_name)
CONNECT BY table_name = PRIOR table_name
AND RANK - 1 = PRIOR RANK))
WHERE column_name_path_rank = 1;
EXECUTE IMMEDIATE lvc_sql_text INTO ln_hash_value;
lvc_sql_text :=
'insert into system.get_has_value(owner,table_name,value) values(:x1,:x2,:x3)';
EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, ln_hash_value;
commit;
DBMS_OUTPUT.put_line (
avc_owner || '.' || avc_table || ' ' || ln_hash_value);
EXCEPTION
WHEN NO_DATA_FOUND
THEN
lvc_error := 'NO DATA FOUND';
lvc_sql_text :=
'insert into system.get_has_value(owner,table_name,error) values(:x1,:x2,:x3)';
EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, lvc_error;
commit;
WHEN OTHERS
THEN
lvc_sql_text :=
'insert into system.get_has_value(owner,table_name,value) values(:x1,:x2,:x3)';
EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, SQLERRM;
commit;
END;
/
sqlplus system/oracle<<EOF
set heading off linesize 170 pagesize 0 feedback off echo off trimout on trimspool on termout off verify off
exit;
EOF
nohup ./check_source.sh LUOKLE 1 >./source_LUOKLE_cd_1.log 2>&1 &
nohup ./check_source.sh LUOKLE 2 >./source_LUOKLE_cd_2.log 2>&1 &
nohup ./check_source.sh LUOKLE 3 >./source_LUOKLE_cd_3.log 2>&1 &
nohup ./check_source.sh LUOKLE 4 >./source_LUOKLE_cd_4.log 2>&1 &
nohup ./check_source.sh LUOKLE 5 >./source_LUOKLE_cd_5.log 2>&1 &
nohup ./check_source.sh LUOKLE 6 >./source_LUOKLE_cd_6.log 2>&1 &
nohup ./check_source.sh LUOKLE 7 >./source_LUOKLE_cd_7.log 2>&1 &
nohup ./check_source.sh LUOKLE 8 >./source_LUOKLE_cd_8.log 2>&1 &
nohup ./check_source.sh LUOKLE 9 >./source_LUOKLE_cd_9.log 2>&1 &
nohup ./check_source.sh LUOKLE 10 >./source_LUOKLE_cd_10.log 2>&1 &
Contents of the check script, checkdata_source.sh:
date
sqlplus system/oracle<<EOF
set heading off linesize 170 pagesize 0 feedback off
spool source_check_$1_$2.sql
SELECT 'exec system.get_hv_of_data('''
|| owner
|| ''','''
|| table_name
|| ''')'
FROM system.checkout_table
WHERE owner = UPPER ('$1') and groupid=$2
AND table_name NOT IN (SELECT table_name
FROM dba_tables
WHERE owner = UPPER ('$1')
AND iot_type IS NOT NULL)
AND table_name IN (SELECT table_name
FROM ( SELECT table_name, COUNT (*)
FROM dba_tab_columns
WHERE owner = UPPER ('$1')
AND DATA_TYPE IN ('TIMESTAMP(3)',
'INTERVAL DAY(3) TO SECOND(0)',
'TIMESTAMP(6)',
'NVARCHAR2',
'CHAR',
'BINARY_DOUBLE',
'NCHAR',
'DATE',
'RAW',
'VARCHAR2',
'NUMBER')
GROUP BY table_name
HAVING COUNT (*) > 0))
ORDER BY table_name;
spool off
set serveroutput on
@source_check_$1_$2.sql
exit;
EOF
date
Running the hash-calculation script on Linux against all tables under LUOKLE took 30 minutes. Of the 311 tables in total, 52 produced no hash value; analysis showed these were empty tables, for example:
SQL> select count(*) from LUOKLE.XXXX;
COUNT(*)
----------
0
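The original does not show the final comparison of the two hash tables; a sketch, assuming the source-side system.get_has_value table is reachable through the TEST_COMPARE DB link created in 7.3 (any row returned is a table whose hash differs or is missing on one side):

select nvl(s.table_name, t.table_name) table_name,
       s.value source_hash,
       t.value target_hash
  from system.get_has_value@TEST_COMPARE s
  full outer join system.get_has_value t
    on s.owner = t.owner and s.table_name = t.table_name
 where nvl(s.value, '#') <> nvl(t.value, '#');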
7.4 Compare Object-Level Privileges
Compare object-level privileges between the source and target databases. If they differ, it is recommended to run the grant_tab_privs.log generated on the source against the target.
Re-verify that SELECT and DML privileges on the objects are granted to the users:
@grant_tab_privs.sql
On the source:
select 'grant ' || privilege || ' on ' || owner || '.' || table_name || ' to ' || grantee || ';'
  from dba_tab_privs
 where (grantee in (select username from dba_users where default_tablespace in ('DATATBS'))
    or owner in (select username from dba_users where default_tablespace in ('DATATBS')))
   and privilege in ('SELECT','DELETE','UPDATE','INSERT') and grantable = 'NO'
union
select 'grant ' || privilege || ' on ' || owner || '.' || table_name || ' to ' || grantee || ' with grant option;'
  from dba_tab_privs
 where (grantee in (select username from dba_users where default_tablespace in ('DATATBS'))
    or owner in (select username from dba_users where default_tablespace in ('DATATBS')))
   and privilege in ('SELECT','DELETE','UPDATE','INSERT') and grantable = 'YES';
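To see the privilege differences before applying the grants, the object privileges can also be compared directly over the TEST_COMPARE link from 7.3 (a sketch; rows returned are grants that exist on the source but not on the target):

select grantee, owner, table_name, privilege, grantable
  from dba_tab_privs@TEST_COMPARE
 where owner in ('LUOKLE')
   and privilege in ('SELECT','DELETE','UPDATE','INSERT')
minus
select grantee, owner, table_name, privilege, grantable
  from dba_tab_privs
 where owner in ('LUOKLE')
   and privilege in ('SELECT','DELETE','UPDATE','INSERT');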
7.5 Gather Statistics
To avoid the automatic statistics-gathering job running at the same time and consuming system resources, it is recommended to disable the background auto-stats task in advance.
exec DBMS_AUTO_TASK_ADMIN.DISABLE(client_name => 'auto optimizer stats collection',operation => NULL,window_name => NULL);
Check histogram information:
select count(*),owner,table_name,column_name from dba_tab_histograms
group by owner,table_name,column_name
having count(*) > 2;
Run the gathering manually:
exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');  -- enable concurrent statistics gathering
begin
  dbms_stats.gather_database_stats(
    estimate_percent => dbms_stats.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',   -- alternatively: 'FOR ALL COLUMNS SIZE REPEAT'
    options          => 'GATHER',
    degree           => 8,
    granularity      => 'all',
    cascade          => TRUE);
end;
/
select * from dba_scheduler_jobs where schedule_type = 'IMMEDIATE' and state = 'RUNNING';
Gather data dictionary statistics:
exec DBMS_STATS.GATHER_DICTIONARY_STATS(degree=>16);
Gather fixed-object statistics:
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
Re-enable the default automatic collection:
exec DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'auto optimizer stats collection',operation => NULL,window_name => NULL);
exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','false');
The test run is shown below:
13:23:41 SQL> select count(*),owner,table_name,column_name from dba_tab_histograms
13:23:45 2 where owner='LUOKLE'
13:23:46 3 group by owner,table_name,column_name
13:23:46 4 having count(*) > 2;
no rows selected
Elapsed: 00:00:00.10
13:28:06 SQL> exec dbms_stats.gather_database_stats(estimate_percent =>dbms_stats.AUTO_SAMPLE_SIZE,METHOD_OPT=>'FOR ALL COLUMNS SIZE 1',options=> 'GATHER',degree=>8, granularity =>'all',cascade=> TRUE);
PL/SQL procedure successfully completed.
Elapsed: 00:26:51.34
13:55:05 SQL>
Gathering statistics for the entire database took 26 minutes.
7.6 Adjust the Job Parameter
show parameter job_queue_processes;
alter system set job_queue_processes=100 scope=both;
8 XTTS Migration Test Timings (20 TB)
9 XTTS Migration Test Issues
The metadata export was started at 15:06:
expdp \'/ as sysdba\' directory=xtts dumpfile=expdp_LUOKLE_meta0822.dmp logfile=expdp_LUOKLE_meta0822.log CONTENT=metadata_only SCHEMAS=LUOKLE
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW00" prematurely terminated
ORA-31671: Worker process DW00 had an unhandled exception.
ORA-04030: out of process memory when trying to allocate 3704 bytes (kkoutlCreatePh,kkotbi : kkotbal)
ORA-06512: at "SYS.KUPW$WORKER", line 1887
ORA-06512: at line 2
During the metadata export, a large number of ORA-04030 errors appeared in the background. Analysis showed this was an AMM issue; it was resolved by disabling AMM and managing memory manually.
On 10g, the equivalent is to set sga_target to 0.
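A sketch of disabling AMM on 11g and falling back to manual SGA/PGA sizing (the sizes are placeholders and an instance restart is required; run as SYSDBA):

alter system set memory_target=0 scope=spfile;
alter system reset memory_max_target scope=spfile sid='*';
alter system set sga_target=8G scope=spfile;
alter system set pga_aggregate_target=4G scope=spfile;
shutdown immediate
startup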
Errors in file /oracle/app/oracle/diag/rdbms/LUOKLE/orcl1/trace/orcl1_ora_13107324.trc (incident=28001):
ORA-04030: out of process memory when trying to allocate 32808 bytes (TCHK^cadd45dc,kggec.c.kggfa)
Analysis showed this was caused by the AIX stack limit being set too low; it was resolved by raising the limit.
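A sketch of checking and raising the stack limit for the oracle user on AIX (the user name is an assumption from context; a new login session is needed for the change to take effect):

ulimit -s                              # current soft stack limit, in KB
lsuser -a stack stack_hard oracle      # limits recorded in /etc/security/limits
chuser stack=-1 stack_hard=-1 oracle   # -1 = unlimited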
ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19505: failed to identify file "/aix_xtts/oradata2/f8rdl6vi_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19600: input file is backup piece (/aix_xtts/oradata2/f8rdl6vi_1_1)
ORA-19601: output file is backup piece (/aix_xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_)
CONVERTED BACKUP PIECE /aix_xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_
PL/SQL procedure successfully completed.
ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19505: failed to identify file "/aix_xtts/oradata2/f9rdl70m_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19600: input file is backup piece (/aix_xtts/oradata2/f9rdl70m_1_1)
ORA-19601: output file is backup piece (/aix_xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_)
CONVERTED BACKUP PIECE /aix_xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_
Analysis showed the cause was that the incremental backup pieces had not been placed in the expected directory.
failed to create file "/xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 13: Permission denied
Additional information: 1
ORA-19600: input file is backup piece (/xtts/oradata2/f9rdl70m_1_1)
ORA-19601: output file is backup piece (/xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_)
CONVERTED BACKUP PIECE /xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_
PL/SQL procedure successfully completed.
ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19504: failed to create file "/xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 13: Permission denied
Additional information: 1
ORA-19600: input file is backup piece (/xtts/oradata2/f8rdl6vi_1_1)
ORA-19601: output file is backup piece (/xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_)
CONVERTED BACKUP PIECE /xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_
An NFS directory permission problem prevented the files from being read and written; fixed by correcting the permissions.
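A sketch of the permission fix, assuming the conversion runs as the oracle user in the oinstall group (the path is taken from the error messages above):

chown -R oracle:oinstall /xtts/incr
chmod -R 775 /xtts/incr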
NFS mount issue:
mount: 1831-008 giving up on:
192.168.1.100:/xtts
vmount: Operation not permitted.
# nfso -p -o nfs_use_reserved_ports=1
Setting nfs_use_reserved_ports to 1
Setting nfs_use_reserved_ports to 1 in nextboot file
# mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.20.28.21:/xtts /aix_xtts
10 Summary
XTTS supports cross-platform and cross-version migration and is fairly straightforward to operate. Because the required downtime is short, the migration can be completed with relatively little effort, so for large-volume cross-platform, cross-version migrations it is recommended as the first-choice approach.
When doing an XTTS migration, it is advisable to keep the number of incremental rounds small. The more rounds you run, the less data each incremental backup carries and the shorter the final downtime, but the more rounds there are, the easier it is to make a mistake. Usually one incremental backup followed by the final cutover is enough, or even the final cutover immediately after the initial copy.
11 Appendix: xttdriver.pl Script Usage
See: 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
Description of Perl Script xttdriver.pl Options: the options available for the main supporting script xttdriver.pl are described in the MOS note above.
Originally published: 2018-04-22
Author: 罗贵林 (Luo Guilin)