How to recover data after imp/impdp overwrites a schema

A customer recently imported test-schema data on top of a production schema via impdp with table_exists_action=replace. The result is easy to imagine. Fortunately the database had archived logs; unfortunately force logging was not enabled, and no supplemental logging was configured either. It is not hard to see that mining the archives with LogMiner may therefore lose some data.

The overall approach and a simplified workflow follow (these operations are risky, especially the data-dictionary reconstruction; if you run into this problem, please contact us):

++++ First attempt: mine directly using the online catalog as the dictionary
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10132_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog+dbms_logmnr.committed_data_only);
PL/SQL procedure successfully completed.
SQL> create table test tablespace users  as select * from v$logmnr_contents ;
SQL> l
1* select table_name,count(*) from test group by table_name  order by 2
SQL> /
TABLE_NAME                            COUNT(*)
----------------------------------- ----------
SEQ$                                         2
LOB$                                         4
WRI$_SEGADV_OBJLIST                          6
WRI$_ADV_REC_ACTIONS                         6
DEFERRED_STG$                                6
WRI$_ADV_FINDINGS                            6
WRI$_ADV_RECOMMENDATIONS                    12
WRI$_ADV_ACTIONS                            12
WRI$_ADV_OBJECTS                            12
DBMS_TABCOMP_TEMP_CMP                       12
SDO_GEOR_DDL__TABLE$$                       12
WRI$_ADV_MESSAGE_GROUPS                     12
DBMS_TABCOMP_TEMP_UNCMP                     12
OBJ$                                        35
CDEF$                                       44
CCOL$                                       44
MON_MODS_ALL$                               46
WRI$_OPTSTAT_TAB_HISTORY                    47
CON$                                        66
WRI$_OPTSTAT_IND_HISTORY                    68
IND$                                        72
TAB$                                       139
STATS_TARGET$                              140
HIST_HEAD$                                1221
COL$                                      1224
WRI$_OPTSTAT_HISTHEAD_HISTORY             2442
SEG$                                      2587
6457
WRI$_OPTSTAT_HISTGRM_HISTORY              7143
HISTGRM$                                 14371
OBJ# 292119                              20385
OBJ# 292120                              20385
OBJ# 292116                              26929
OBJ# 292115                              26929
OBJ# 292118                              55421
OBJ# 292117                              55421
OBJ# 292109                              71354
OBJ# 292124                              85876
OBJ# 292123                              85876
OBJ# 292121                             112601
OBJ# 292122                             112601
OBJ# 292112                             143545
At this point the sql_redo output is completely wrong: neither object names nor column names can be resolved, which shows that the impdp replace fully rebuilt the objects (new obj#):
insert into "UNKNOWN"."OBJ# 292120"("COL 1","COL 2","COL 3","COL 4","COL 5","COL
6","COL 7","COL 8","COL 9","COL 10","COL 11","COL 12","COL 13","COL 14","COL 15
","COL 16","COL 17","COL 18","COL 19","COL 20","COL 21","COL 22","COL 23","COL 2
4","COL 25","COL 26","COL 27","COL 28","COL 29","COL 30","COL 31","COL 32","COL
33","COL 34","COL 35","COL 36","COL 37","COL 38","COL 39","COL 40","COL 41","COL
42","COL 43","COL 44","COL 45","COL 46","COL 47","COL 48","COL 49","COL 50","CO
L 51","COL 52","COL 53","COL 54","COL 55","COL 56","COL 57","COL 58","COL 59","C
OL 60","COL 61","COL 62","COL 63","COL 64","COL 65","COL 66","COL 67","COL 68","
COL 69","COL 70","COL 71","COL 72","COL 73","COL 74","COL 75","COL 76","COL 77",
"COL 78","COL 79","COL 80","COL 81","COL 82") values (HEXTORAW('d3a6b8b6c6b1bedd
bde1cbe3b7bdcabdb6d4d3a6d2f8d0d0d5cbbba7'),HEXTORAW('303035474756'),NULL,NULL,NU
LL,HEXTORAW('c102'),HEXTORAW('80'),HEXTORAW('c102'),HEXTORAW('c102'),......
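Because the dictionary cannot resolve these objects, every value is rendered as HEXTORAW over the raw column bytes. Character columns can still be decoded by hand once the database character set is known. A minimal sketch (the ASCII and GBK encodings below are assumptions based on the visible data):

```python
# Decode the hex payload of a HEXTORAW('...') literal from sql_redo.
# Character set is an assumption: plain ASCII for code columns, GBK for
# the Chinese text columns in this excerpt.
def decode_hextoraw(hex_str: str, encoding: str = "ascii") -> str:
    return bytes.fromhex(hex_str).decode(encoding)

print(decode_hextoraw("303035474756"))              # -> 005GGV
print(decode_hextoraw("d3a6b8b6c6b1bedd", "gbk"))   # start of the Chinese column
# NUMBER columns such as HEXTORAW('c102') use Oracle's internal NUMBER
# encoding (0xC1 0x02 is the value 1) and need a separate decoder.
```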
++++ First, determine the redo logs to mine for the dictionary changes (the changes made to the dictionary, e.g. obj$, before the import)
Because an impdp replace imports the table definitions first and the data afterwards, it recreates the objects and generates new obj# values; for obj$ the old rows show up as a series of delete operations.
So not many archives are needed here; mining the archives covering roughly the 10 minutes before the impdp started is enough:
SQL> select name,first_time,NEXT_TIME  from v$archived_log where name like '%1_1015%';
NAME                                                   FIRST_TIME          NEXT_TIME
------------------------------------------------------ ------------------- -------------------
/u02/xxxxxx/1_10150_1012773265.dbf                    2020-05-12 08:46:39 2020-05-12 09:29:20
/u02/xxxxxx/1_10151_1012773265.dbf                    2020-05-12 09:29:20 2020-05-12 09:57:51
/u02/xxxxxx/1_10152_1012773265.dbf                    2020-05-12 09:57:51 2020-05-12 10:31:40
/u02/xxxxxx/1_10153_1012773265.dbf                    2020-05-12 10:31:40 2020-05-12 10:45:34
/u02/xxxxxx/1_10154_1012773265.dbf                    2020-05-12 10:45:34 2020-05-12 10:47:41
/u02/xxxxxx/1_10155_1012773265.dbf                    2020-05-12 10:47:41 2020-05-12 10:49:35
/u02/xxxxxx/1_10156_1012773265.dbf                    2020-05-12 10:49:35 2020-05-12 10:51:16
/u02/xxxxxx/1_10157_1012773265.dbf                    2020-05-12 10:51:16 2020-05-12 10:51:30
/u02/xxxxxx/1_10158_1012773265.dbf                    2020-05-12 10:51:30 2020-05-12 10:51:42
/u02/xxxxxx/1_10159_1012773265.dbf                    2020-05-12 10:51:42 2020-05-12 10:51:52
++++ Reconstruct the data dictionary
Running create table test_dict1 tablespace users as select * from v$logmnr_contents; failed with an error like this:
SQL> select count(*) from V$LOGMNR_CONTENTS ;
select count(*) from V$LOGMNR_CONTENTS
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "SYS.LOGMNR_DICT_CACHE", line 2120
ORA-06512: at "SYS.LOGMNR_GTLO3", line 50
ORA-06512: at line 1
A 10046 trace pinpointed the problematic recursive statement:
PARSING IN CURSOR #3 len=141 dep=2 uid=0 oct=3 lid=0 tim=1589394650978271 hv=2061154710 ad='4fd5fdf78' sqlid='80agnrtxdpfcq'
SELECT O.SPARE2, O.TYPE#, O.OWNER#, U.NAME, O.NAME, O.SUBNAME, O.FLAGS FROM SYS.OBJ$ O, SYS.USER$ U WHERE O.OBJ# = :B1 AND U.USER# = O.OWNER#
END OF STMT
BINDS #3:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=7f1b9d4ac7f8  bln=22  avl=04  flg=05
value=116927
obj$ contained duplicate rows; the type# = 1 entries (indexes) were deleted:
SQL> select obj# ,count(*) from obj$ group by obj# having count(*)>1;
OBJ#   COUNT(*)
---------- ----------
110154          2
110156          2
215351          2
110152          2
110157          2
SQL> select name,owner#,type#,dataobj#,obj#  from obj$ where obj#=110154;
NAME                               OWNER#      TYPE#   DATAOBJ#       OBJ#
------------------------------ ---------- ---------- ---------- ----------
IUFO_DIS_DISCHEME                      97          1     312864     110154
IUFO_DIS_DISCHEME                      97          2     306881     110154
SQL> delete from  obj$ where obj#=110154 and DATAOBJ#=312864;
1 row deleted.
SQL> delete from  obj$ where obj#=110156 and DATAOBJ#=312866;
1 row deleted.
SQL> delete from  obj$ where obj#=215351 and DATAOBJ#=316906;
1 row deleted.
SQL> delete from  obj$ where obj#=110152 and DATAOBJ#=312862;
1 row deleted.
SQL> delete from  obj$ where obj#=110157 and DATAOBJ#=312867;
1 row deleted.
SQL> commit;
Commit complete.
SQL> select obj# ,count(*) from obj$ group by obj# having count(*)>1;
no rows selected
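The cleanup above boils down to: for every obj# that appears twice, keep the type# = 2 (table) row and delete the type# = 1 (index) row. A toy sketch of that selection (the rows are illustrative, modeled on the 110154 case; the dataobj# values for 110156 are hypothetical):

```python
from collections import defaultdict

# (obj#, type#, dataobj#) -- illustrative rows modeled on the output above.
rows = [
    (110154, 1, 312864),   # index row sharing the obj# -- to delete
    (110154, 2, 306881),   # table row -- keep
    (110156, 1, 312866),
    (110156, 2, 306883),
]

by_obj = defaultdict(list)
for r in rows:
    by_obj[r[0]].append(r)

# Among duplicated obj# values, the type# = 1 rows are the ones to remove.
deletes = [r for dup in by_obj.values() if len(dup) > 1
           for r in dup if r[1] == 1]
for obj_no, _type, dataobj_no in deletes:
    print(f"delete from obj$ where obj#={obj_no} and DATAOBJ#={dataobj_no};")
```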
++++ Mine the dictionary window again
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10153_1012773265.dbf',sys.dbms_logmnr.new);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10154_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10155_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10156_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10157_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10158_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10159_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10160_1012773265.dbf',sys.dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec sys.dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog+dbms_logmnr.committed_data_only);
PL/SQL procedure successfully completed.
SQL> create table test_dict1 tablespace users  as select * from v$logmnr_contents;
Table created.
SQL> execute dbms_logmnr.end_logmnr;
PL/SQL procedure successfully completed.
Because the cloud host had little memory, ORA-04030 was easy to hit, so echo 3 > /proc/sys/vm/drop_caches was run repeatedly:
SQL> create table test_dict tablespace users  as select * from v$logmnr_contents
*
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 344 bytes (Logminer
LCR c,krvtadc)
++++ With the dictionary changes mined, reconstruct the dictionary state from before the impdp replace
select SQL_UNDO from test_dict1
where table_name='OBJ$'
and sql_redo like 'delete from%'
and sql_redo like '%'||'"TYPE#" = ''2'''||'%'
and sql_redo not like '%'||'"DATAOBJ#" IS NULL'||'%'
and sql_redo like '%'||'"OWNER#" = ''92'''||'%' ;
create table t as select b.obj# obj_old,b.name,a.obj# obj_new,a.owner# from obj$ a,obj b where a.name=b.name and a.owner#=97;
update obj$ a set obj#=(select obj_old from t where t.obj_new=a.obj#) where obj# in (select obj_new from t);
update col$ a set obj#=(select obj_old from t where t.obj_new=a.obj#) where obj# in (select obj_new from t);
update lob$ a set obj#=(select obj_old from t where t.obj_new=a.obj#) where obj# in (select obj_new from t);
update tab$ a set obj#=(select obj_old from t where t.obj_new=a.obj#) where obj# in (select obj_new from t);
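The four UPDATE statements above all apply the same remap: table t pairs each newly generated obj# (obj_new) with the pre-import obj# (obj_old) by joining on object name, and the dictionary tables are then rewritten from obj_new back to obj_old. A small sketch of that remap with hypothetical obj# values:

```python
# obj_new -> obj_old mapping, as built by table t (values are hypothetical).
mapping = {292120: 110154, 292121: 110156}

def remap(rows, mapping):
    """Rewrite the obj# of each (obj#, payload) row via the mapping,
    leaving unmapped rows untouched."""
    return [(mapping.get(obj, obj), payload) for obj, payload in rows]

col_rows = [(292120, "COL_A"), (292121, "COL_B"), (300000, "UNTOUCHED")]
print(remap(col_rows, mapping))
```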
++++ Determine the range of logs to analyze
SQL> alter session set nls_Date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> select name,first_time,NEXT_TIME  from v$archived_log where name like '%1_1009%';
NAME                                               FIRST_TIME          NEXT_TIME
-------------------------------------------------- ------------------- -------------------
/u02/xxxxxx/1_10090_1012773265.dbf                2020-05-08 10:35:09 2020-05-08 10:38:09
/u02/xxxxxx/1_10091_1012773265.dbf                2020-05-08 10:38:09 2020-05-08 10:39:21
/u02/xxxxxx/1_10092_1012773265.dbf                2020-05-08 10:39:21 2020-05-08 11:07:16
/u02/xxxxxx/1_10093_1012773265.dbf                2020-05-08 11:07:16 2020-05-08 11:17:37
/u02/xxxxxx/1_10094_1012773265.dbf                2020-05-08 11:17:37 2020-05-08 11:37:14
/u02/xxxxxx/1_10095_1012773265.dbf                2020-05-08 11:37:14 2020-05-08 11:51:32
/u02/xxxxxx/1_10096_1012773265.dbf                2020-05-08 11:51:32 2020-05-08 11:57:36
/u02/xxxxxx/1_10097_1012773265.dbf                2020-05-08 11:57:36 2020-05-08 13:41:44
/u02/xxxxxx/1_10098_1012773265.dbf                2020-05-08 13:41:44 2020-05-08 14:22:30
/u02/xxxxxx/1_10099_1012773265.dbf                2020-05-08 14:22:30 2020-05-08 14:59:59
10 rows selected.
SQL> select name,first_time,NEXT_TIME  from v$archived_log where name like '%1_1015%';
NAME                                               FIRST_TIME          NEXT_TIME
-------------------------------------------------- ------------------- -------------------
/u02/xxxxxx/1_10150_1012773265.dbf                2020-05-12 08:46:39 2020-05-12 09:29:20
/u02/xxxxxx/1_10151_1012773265.dbf                2020-05-12 09:29:20 2020-05-12 09:57:51
/u02/xxxxxx/1_10152_1012773265.dbf                2020-05-12 09:57:51 2020-05-12 10:31:40
/u02/xxxxxx/1_10153_1012773265.dbf                2020-05-12 10:31:40 2020-05-12 10:45:34
/u02/xxxxxx/1_10154_1012773265.dbf                2020-05-12 10:45:34 2020-05-12 10:47:41
/u02/xxxxxx/1_10155_1012773265.dbf                2020-05-12 10:47:41 2020-05-12 10:49:35
/u02/xxxxxx/1_10156_1012773265.dbf                2020-05-12 10:49:35 2020-05-12 10:51:16
/u02/xxxxxx/1_10157_1012773265.dbf                2020-05-12 10:51:16 2020-05-12 10:51:30
/u02/xxxxxx/1_10158_1012773265.dbf                2020-05-12 10:51:30 2020-05-12 10:51:42
/u02/xxxxxx/1_10159_1012773265.dbf                2020-05-12 10:51:42 2020-05-12 10:51:52
10 rows selected.
The customer had a dmp backup from the 8th, and the impdp replace was run at about 10:45 on the morning of the 12th, so the archives need to be mined up through sequence 10154.
++++ Mine the archives covering the window from the 8th to the replace on the 12th
[oracle@iZwz9a5f061i5x3yv35k04Z ~]$ cat an1.sh
sqlplus "/as sysdba" << EOF
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10097_1012773265.dbf',sys.dbms_logmnr.new);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10098_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10099_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10100_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10101_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10102_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10103_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10104_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10105_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10106_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10107_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10108_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10109_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10110_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10111_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10112_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10113_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10114_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10115_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10116_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10117_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10118_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10119_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10120_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10121_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10122_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10123_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10124_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10125_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10126_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10127_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10128_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10129_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10130_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog+dbms_logmnr.committed_data_only);
create table enmo_logmnr tablespace users  as select * from v\$logmnr_contents;
execute dbms_logmnr.end_logmnr;
quit;
EOF
[oracle@iZwz9a5f061i5x3yv35k04Z ~]$ cat an2.sh
sqlplus "/as sysdba" << EOF
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10131_1012773265.dbf',sys.dbms_logmnr.new);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10132_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10133_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10134_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10135_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10136_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10137_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10138_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10139_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10140_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10141_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10142_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10143_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10144_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10145_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10146_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10147_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10148_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10149_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10150_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10151_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10152_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10153_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.add_logfile('/u02/xxxxxx/1_10154_1012773265.dbf',sys.dbms_logmnr.addfile);
exec sys.dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog+dbms_logmnr.committed_data_only);
create table enmo_logmnr2 tablespace users as select * from v\$logmnr_contents;
execute dbms_logmnr.end_logmnr;
quit;
EOF
nohup sh an1.sh > an1.log &
nohup sh an2.sh > an2.log &
++++ Merge the two LogMiner result tables
[oracle@iZwz9a5f061i5x3yv35k04Z ~]$ cat a.sh
sqlplus / as sysdba <<EOF
insert into  LOGMNR_NEW  SELECT /*+full(t1) parallel(t1 16)*/COMMIT_TIMESTAMP,TIMESTAMP,regexp_substr(replace(sql_redo,'and ROWID','?'),'[^?]+',1,1) sql_redo FROM  enmo_logmnr2 t1
where data_obj# in (select t.obj_old from t) and OPERATION  in ('INSERT','UPDATE','DELETE','LOB_WRITE') and TIMESTAMP>to_date('20200508 15:29:00','yyyymmdd hh24:mi:ss');
commit;
EOF
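The `regexp_substr(replace(sql_redo,'and ROWID','?'),'[^?]+',1,1)` expression above truncates each mined statement at its rowid predicate: the literal `and ROWID` is replaced with `?` and only the text before the first `?` is kept (the rowids are useless after the re-import, since the rows now live in new blocks). Equivalent string handling in Python, on a made-up statement:

```python
def strip_rowid_predicate(sql_redo: str) -> str:
    """Drop everything from the 'and ROWID = ...' predicate onwards,
    mirroring the replace/regexp_substr trick in the merge SQL."""
    return sql_redo.replace("and ROWID", "?").split("?", 1)[0]

# Hypothetical mined statement for illustration.
stmt = ('update "YSYY"."T1" set "AMT" = \'2\' where "AMT" = \'1\' '
        "and ROWID = 'AAASJ9AAEAAAACXAAA';")
print(strip_rowid_predicate(stmt))
```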
++++ Re-import the dmp backup taken before the 8th; dropping the user hit an ORA error
SQL> drop user ysyy cascade;
drop user ysyy cascade
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01418: specified index does not exist
SQL> delete obj$ where owner#=97;
6414 rows deleted.
SQL> commit;
Commit complete.
SQL>
Import the data .....
++++ Replay the data changes made between the 8th and the 12th into the production schema
nohup sh recover.sh &
[oracle@iZwz9a5f061i5x3yv35k04Z ~]$ cat recover.sh
sqlplus / as sysdba <<EOF
set serverout on
DECLARE
c_limit   CONSTANT PLS_INTEGER DEFAULT 1000;
v_sql varchar2(32767);
CURSOR c1 IS select /*+parallel(t 16)*/ rtrim(sql_redo,';') from logmnr_new t order by COMMIT_TIMESTAMP,TIMESTAMP;
TYPE typ1 IS TABLE OF VARCHAR2(32767) INDEX BY BINARY_INTEGER;
redo         typ1;
BEGIN
OPEN c1;
LOOP
FETCH c1 BULK COLLECT INTO redo LIMIT c_limit;
EXIT WHEN redo.COUNT = 0;
FOR indx IN 1 .. redo.COUNT
LOOP
begin
v_sql:=redo(indx);
dbms_output.put_line(v_sql);
execute immediate v_sql;
exception
when others then
dbms_output.put_line('ERROR:'||v_sql);
end;
END LOOP;
commit;
END LOOP;
CLOSE c1;
END;
/
EOF

One point worth adding: logical recovery like this is very painful. Many tables have no primary key, and the application issues DML without key predicates. In fact, our own tests show that even when a table does have a primary key, if supplemental logging on the primary key is not enabled, every LogMiner statement still carries a rowid predicate. For update operations some data may be lost, because row uniqueness cannot be established; blindly extracting and executing the generated statements can scramble the data, especially in a financial system like this customer's (or an EBS environment) where the business tables are tightly interrelated.
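The ambiguity is easy to demonstrate. Once the rowid predicate has been stripped (the rowids are meaningless after the re-import anyway), the remaining WHERE clause may match more than one physical row, so a replayed UPDATE or DELETE can touch rows the original statement never did. A toy illustration with an in-memory "table" (the rows are made up):

```python
# A table without a primary key can legitimately hold identical rows.
table = [
    {"acct": "005GGV", "amt": 100},
    {"acct": "005GGV", "amt": 100},   # exact duplicate of the first row
]

# The original UPDATE changed exactly one physical row (located by rowid).
# The mined statement, minus its rowid predicate, can only say:
#   where acct = '005GGV' and amt = 100
matched = [r for r in table if r["acct"] == "005GGV" and r["amt"] == 100]
print(len(matched))   # -> 2: the replayed statement would hit both rows
```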

In summary, only a best-effort recovery is possible at the database level; afterwards the business side must help with verification, because doing it purely from the database is very hard, and the difficulty also depends on the nature of the application. If the data volume is small, reconciliation is relatively easy; at millions of rows or more it is very difficult.
