Supplemental Logging LOG DATA (ALL) COLUMNS

The more columns that are added to supplemental logging, the more columns appear in the WHERE-clause predicates of an ordinary UPDATE as reconstructed by LogMiner:

update "SCOTT"."EMP" set "ENAME" = 'ALLKEY' where "EMPNO" = '7566' and "ENAME" = 'JONES' and "JOB" = 'MANAGER' and "MGR" = '7839' and "HIREDATE" = TO_DATE('02-APR-81', 'DD-MON-RR') and "SAL" = '2975' and "COMM" IS NULL and "DEPTNO" = '20' and ROWID = 'AAASZHAAEAAAACXAAD';

GOAL

NOTE: In the images and/or the document content below, the user information and environment data used represent fictitious data from the Oracle sample or built-in schema(s), Public Documentation delivered with an Oracle database product, or other training material. Any similarity to actual environments or actual persons, living or dead, is purely coincidental and not intended in any manner.


 

Supplemental Logging:

Redo log files are generally used for instance recovery and media recovery. The data needed for such operations is automatically recorded in the redo log files. However, a redo-based application may require that additional columns be logged in the redo log files. The process of logging these additional columns is called supplemental logging. Other operations may also depend on supplemental logging; for example, Data Pump import (impdp) chooses between "Direct_Path" and "External_Table" loading modes depending on supplemental logging (see Note 552424.1).

Please refer to the Utilities documentation (Note 727633.1), which makes two important points regarding Supplemental Logging.

1) You must enable supplemental logging prior to generating log files that will be analyzed by LogMiner.

When you enable supplemental logging, additional information is recorded in the redo stream that is needed to make the information in the redo log files useful to you. Therefore, at the very least, you must enable minimal supplemental logging, as the following SQL statement shows:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

To determine whether supplemental logging is enabled, query the V$DATABASE view, as the following SQL statement shows:

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

If the query returns a value of YES or IMPLICIT, minimal supplemental logging is enabled.

You can turn off supplemental logging with the following command:

SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
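
Identification-key logging can also be enabled at the database level rather than per table. As a minimal sketch for illustration (this form is not required for the examples in this note):

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;  /* sets SUPPLEMENTAL_LOG_DATA_PK in V$DATABASE to YES */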

2) By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable.


Let us take an example of this, which is using Online Catalog option for logminer dictionary.

Part A : Shows result from V$LOGMNR_CONTENTS when supplemental logging is turned OFF (default option)

Part B : Shows result from V$LOGMNR_CONTENTS when supplemental logging is turned ON.

SOLUTION

Part A :

Shows result from V$LOGMNR_CONTENTS when supplemental logging is turned OFF (default option)

-----------------------------------------------------------------------------------------------------------------
Example assumes that database is in ARCHIVE LOG MODE

SQL> connect / as sysdba
Connected.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 8
Next log sequence to archive 10
Current log sequence 10
SQL> alter system switch logfile;

System altered.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
2 WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
--------------------------------------------------------------------------------
<ORACLE_HOME>\FLASH_RECOVERY_AREA\<SID>\ARCHIVELOG\<DIR>\<FILENAME1>.ARC


SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME
--------
NO

SQL> connect scott/tiger
Connected.


SQL> create table test ( n number );
Table created.

SQL> insert into test values ( 1);
1 row created.

SQL> insert into test values (2);
1 row created.

SQL> commit;
Commit complete.

SQL> connect / as sysdba
Connected.

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
-----------------------------------------------------------------------------------------------------------------------------------------
<ORACLE_HOME>\FLASH_RECOVERY_AREA\<SID>\ARCHIVELOG\<DIR>\<FILENAME2>.ARC

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
> LOGFILENAME =>'<ORACLE_HOME>\FLASH_RECOVERY_AREA\<SID>\ARCHIVELOG\<DIR>\<FILENAME2>.ARC' , -
> OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
> OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

SQL> SELECT SQL_REDO , SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username='SCOTT';

no rows selected
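
The USERNAME filter matches nothing here because, without minimal supplemental logging, the special redo record that carries the username is not generated (see the Note at the end of this SOLUTION section). As an illustrative query that is not part of the original transcript, filtering on the segment owner instead, while the session is still open, should show the captured operations:

SQL> SELECT SEG_OWNER, OPERATION, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'SCOTT';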

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

PL/SQL procedure successfully completed.

-----------------------------------------------------------------------------------------------------------

Part B :

Shows result from V$LOGMNR_CONTENTS when supplemental logging is turned ON.

The same example as above but with Supplemental Logging turned ON.


SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME
--------
YES

SQL> alter system switch logfile;
System altered.

SQL> connect scott/tiger
Connected.

SQL> create table test2 ( n2 number );
Table created.

SQL> insert into test2 values ( 2211);
1 row created.

SQL> insert into test2 values ( 2222);
1 row created.

SQL> commit;
Commit complete.

SQL> connect / as sysdba
Connected.
SQL> alter system switch logfile;

System altered.

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
------------------------------------------------------------------------------------------------------------------------------------------------------
<ORACLE_HOME>\FLASH_RECOVERY_AREA\<SID>\ARCHIVELOG\<DIR>\<FILENAME3>.ARC

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
> LOGFILENAME => '<ORACLE_HOME>\FLASH_RECOVERY_AREA\<SID>\ARCHIVELOG\<DIR>\<FILENAME3>.ARC' , -
> OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
> OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.


SQL> SELECT SQL_REDO , SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username='SCOTT';

SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------
commit;

set transaction read write;

insert into "SCOTT"."TEST2"("N2") values ('2211');
delete from "SCOTT"."TEST2" where "N2" = '2211' and ROWID = 'AAAM7oAAEAAAAGtAAA';

insert into "SCOTT"."TEST2"("N2") values ('2222');
delete from "SCOTT"."TEST2" where "N2" = '2222' and ROWID = 'AAAM7oAAEAAAAGtAAB';

commit;

23 rows selected.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

PL/SQL procedure successfully completed.

Please note that V$LOGMNR_CONTENTS also shows data dictionary redo (such as inserts into the database's data dictionary); those rows have been removed from this example so that only the relevant data is shown.

Note: 

In order for LogMiner to properly report username and session information, the redo must be generated with at least minimal supplemental database logging turned on. Even when supplemental logging is turned on, it is possible to prevent session information and username data from being stored in the redo by setting the TRANSACTION_AUDITING=FALSE (or _TRANSACTION_AUDITING=FALSE in 10.1 and higher releases) instance parameter. When this parameter is set to TRUE (which happens implicitly when supplemental database logging is turned on), Oracle generates a special redo record that contains the user logon name, username, the session ID, some operating system information, and client information. For each successive transaction, Oracle generates a record that contains only the session ID; these subsequent records link back to the first record, which also contains the session ID.

When the parameter is set to FALSE, Oracle prevents this special record from being written to the redo, causing all columns in V$LOGMNR_CONTENTS that require data from this record to report NULL values.
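
A quick way to observe this is to query the session-related columns during a LogMiner session. A sketch, assuming a session started as in Part B (not part of the original transcript):

SQL> SELECT USERNAME, SESSION#, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'SCOTT';  /* USERNAME and SESSION# report NULL when the special record is suppressed */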

PURPOSE

The Oracle documentation for the ALTER TABLE ... ADD SUPPLEMENTAL LOG ...  states the following:

supplemental_id_key_clause

Use this clause to specify that all or a combination of the primary key, unique key, and foreign key columns should be supplementally logged.

and:

  • If you specify ALL COLUMNS, then the database includes in the redo log all the fixed-length maximum size columns of that row. Such a redo log is a system-generated unconditional log group.

where it has the following syntax:

DATA
( { ALL | PRIMARY KEY | UNIQUE | FOREIGN KEY }
    [, { ALL | PRIMARY KEY | UNIQUE | FOREIGN KEY } ]...
)
COLUMNS
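
The key types can also be combined in a single statement. For illustration only (this combined form is not used in the tests below):

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;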

However, the DATA (ALL) COLUMNS option may be confusing. Because it is included in the "...key_clause", it seems to apply to KEY columns only, whereas it actually means all the columns in the table.

So, the purpose of this document is to show the effects of the Supplemental Logging DATA (ALL) COLUMNS option, including the Trail file records extracted by GoldenGate.

TROUBLESHOOTING STEPS

PREAMBLE

This demonstration uses the Oracle database SCOTT schema, and the SCOTT.EMP table.
Please note that;

  • The Primary Key for SCOTT.EMP is column EMPNO,
  • Only column ENAME is updated.

Three progressive levels of Supplemental Logging are used;

  1. Database level only
  2. Primary Keys columns
  3. All Columns

Then, the LogMiner and GoldenGate LogDump utilities are used to show the contents of the Archived log and Trail files, respectively. 

SUMMARY OF RESULTS

The following is primarily the SQL executed and the output from the LogMiner and LogDump utilities.
For more details, please see the TEST CASE DETAILS section.

Supplemental logging at the Oracle database level only

Oracle database command
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

GoldenGate TRANDATA equivalent
None; it is outside the scope of the Capture role. 
As per the GoldenGate Oracle Installation and Setup Guide, the following command must be issued on the Oracle database: ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Then, either ADD TRANDATA or ADD SCHEMATRANDATA commands must be issued for GoldenGate to be able to capture table data (columns).
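
As a sketch of those GGSCI steps (user names and passwords are placeholders, not values from the original transcript):

GGSCI> DBLOGIN USERID <USERNAME>, PASSWORD <PWD>
GGSCI> ADD TRANDATA SCOTT.EMP
GGSCI> INFO TRANDATA SCOTT.EMP  /* confirms whether supplemental logging is enabled for the table */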

SQL Executed
update SCOTT.EMP set ENAME='DBSUPLOG' where EMPNO=7934;

LogMiner V$LOGMNR_CONTENTS
SQL_REDO
--------------------------------------------------------------------------------
update "SCOTT"."EMP" set "ENAME" = 'DBSUPLOG' where "ENAME" = 'MILLER' and ROWID = 'AAASZHAAEAAAACXAAN';

GoldenGate LogDump

2015/10/15 13:02:47.000.000 FieldComp            Len    30 RBA 1025
Name: SCOTT.EMP
After  Image:                                             Partition 4   G  s
 0000 000a ffff 0000 0000 0000 0000 0001 000c 0000 | ....................
 0008 4442 5355 504c 4f47                          | ..DBSUPLOG
Column     0 (x0000), Len    10 (x000a)
 ffff 0000 0000 0000 0000                          | ..........
Column     1 (x0001), Len    12 (x000c)
 0000 0008 4442 5355 504c 4f47                     | ....DBSUPLOG

NOTE: The Primary Key value 7934 is not included in the Trail file record, i.e. Column 0 is empty. This is because neither the ADD TRANDATA nor the ADD SCHEMATRANDATA command was issued, which can then cause problems on the REPLICAT side. Column 0 EMPNO value DEC 7934 = HEX 1EFE, and it is not in the Trail record.
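
The decimal-to-hexadecimal conversions quoted in these notes can be verified directly in SQL*Plus, for example:

SQL> SELECT TO_CHAR(7934, 'XXXX') AS HEX FROM DUAL;  /* returns 1EFE, matching the LogDump notation */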

With Primary Key columns

Oracle database command
alter table SCOTT.EMP add supplemental log data (primary key) columns;

GoldenGate TRANDATA equivalent
add trandata scott.emp

SQL executed
update SCOTT.EMP set ENAME='PRIKEY' where EMPNO=7369;

LogMiner V$LOGMNR_CONTENTS
SQL_REDO
--------------------------------------------------------------------------------
update "SCOTT"."EMP" set "ENAME" = 'PRIKEY' where "EMPNO" = '7369' and "ENAME" = 'SMITH' and ROWID = 'AAASZHAAEAAAACXAAA';

GoldenGate LogDump

2015/10/15 13:09:28.000.000 FieldComp            Len    28 RBA 1167
Name: SCOTT.EMP
After  Image:                                             Partition 4   G  s
 0000 000a 0000 0000 0000 0000 1cc9 0001 000a 0000 | ....................
 0006 5052 494b 4559                               | ..PRIKEY
Column     0 (x0000), Len    10 (x000a)
 0000 0000 0000 0000 1cc9                          | ..........
Column     1 (x0001), Len    10 (x000a)
 0000 0006 5052 494b 4559                          | ....PRIKEY

NOTE: The Primary Key value 7369 is now part of the Trail file record i.e. Column 0 EMPNO is HEX 1cc9 = DEC 7369.

With All Columns

Oracle database command (and for GoldenGate version 11)
alter table SCOTT.EMP add supplemental log data (all) columns;

GoldenGate version 12 TRANDATA equivalent
add trandata scott.emp allcols

SQL executed
update SCOTT.EMP set ENAME='ALLKEY' where EMPNO=7566; 

LogMiner V$LOGMNR_CONTENTS
SQL_REDO
--------------------------------------------------------------------------------
update "SCOTT"."EMP" set "ENAME" = 'ALLKEY' where "EMPNO" = '7566' and "ENAME" = 'JONES' and "JOB" = 'MANAGER' and "MGR" = '7839' and "HIREDATE" = TO_DATE('02-APR-81', 'DD-MON-RR') and "SAL" = '2975' and "COMM" IS NULL and "DEPTNO" = '20' and ROWID = 'AAASZHAAEAAAACXAAD';

GoldenGate LogDump

2015/10/15 13:16:40.000.000 FieldComp            Len   124 RBA 1307
Name: SCOTT.EMP
After  Image:                                             Partition 4   G  s
 0000 000a 0000 0000 0000 0000 1d8e 0001 000a 0000 | ....................
 0006 414c 4c4b 4559 0002 000b 0000 0007 4d41 4e41 | ..ALLKEY........MANA
 4745 5200 0300 0a00 0000 0000 0000 001e 9f00 0400 | GER.................
 1500 0031 3938 312d 3034 2d30 323a 3030 3a30 303a | ...1981-04-02:00:00:
 3030 0005 000a 0000 0000 0000 0004 8a1c 0006 000a | 00..................
 ffff 0000 0000 0000 0000 0007 000a 0000 0000 0000 | ....................
 0000 0014                                         | ....
Column     0 (x0000), Len    10 (x000a)
 0000 0000 0000 0000 1d8e                          | ..........
Column     1 (x0001), Len    10 (x000a)
 0000 0006 414c 4c4b 4559                          | ....ALLKEY
Column     2 (x0002), Len    11 (x000b)
 0000 0007 4d41 4e41 4745 52                       | ....MANAGER
Column     3 (x0003), Len    10 (x000a)
 0000 0000 0000 0000 1e9f                          | ..........
Column     4 (x0004), Len    21 (x0015)
 0000 3139 3831 2d30 342d 3032 3a30 303a 3030 3a30 | ..1981-04-02:00:00:0
 30                                                | 0
Column     5 (x0005), Len    10 (x000a)
 0000 0000 0000 0004 8a1c                          | ..........
Column     6 (x0006), Len    10 (x000a)
 ffff 0000 0000 0000 0000                          | ..........
Column     7 (x0007), Len    10 (x000a)
 0000 0000 0000 0000 0014                          | .......... 

NOTE: Now, all 8 columns of the SCOTT.EMP table and their values have been included in the Trail file, where Column 0 EMPNO is HEX 1d8e = DEC 7566.

TEST CASE DETAILS

The following test case details are provided to demonstrate how the results have been obtained.
It is assumed that the user issuing the test case statements has the DBA Role i.e. connect / as SYSDBA.

SCOTT.EMP create table statement

This is the default SCOTT.EMP create statement, that includes the Primary Key on the EMPNO column, with no Supplemental Logging enabled.

SQL> select sys.dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;

SYS.DBMS_METADATA.GET_DDL('TABLE','EMP','SCOTT')
--------------------------------------------------------------------------------
CREATE TABLE "SCOTT"."EMP"
 (    "EMPNO" NUMBER(4,0),
      "ENAME" VARCHAR2(10),
      "JOB" VARCHAR2(9),
      "MGR" NUMBER(4,0),
      "HIREDATE" DATE,
      "SAL" NUMBER(7,2),
      "COMM" NUMBER(7,2),
      "DEPTNO" NUMBER(2,0),
       CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
 USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS"  ENABLE,
       CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
        REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
 ) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS"

SCOTT.EMP Rows

The following rows are loaded into the SCOTT.EMP table when a database is created with the demonstration SCOTT schema.

SQL> select empno, ename, deptno from scott.emp;

     EMPNO ENAME          DEPTNO
---------- ---------- ----------
      7369 SMITH              20
      7499 ALLEN              30
      7521 WARD               30
      7566 JONES              20
      7654 MARTIN             30
      7698 BLAKE              30
      7782 CLARK              10
      7788 SCOTT              20
      7839 KING               10
      7844 TURNER             30
      7876 ADAMS              20
      7900 JAMES              30
      7902 FORD               20
      7934 MILLER             10

LogMiner commands

The following SQL statements are to be used to load the new Archived log for mining.
However, do not run them until indicated.

Copy the Archived log filename obtained from the V$ARCHIVED_LOG query, and provide it when prompted by the ADD_LOGFILE command.

The V$LOGMNR_CONTENTS.SQL_REDO column is queried to confirm what statements have actually been captured in the Redo/Archived log.

alter system switch logfile;  /* Switch the current Redo log, and in the process, an Archived log is created */

SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);  /* Identify the log */
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'&Name' , OPTIONS => DBMS_LOGMNR.NEW);  /* Add the log for mining */
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);  /* Start the LogMiner session */
SELECT SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER='SCOTT' and SEG_NAME='EMP';  /* Obtain the SQL statements that were issued */
EXECUTE DBMS_LOGMNR.END_LOGMNR();  /* Stop the LogMiner session */
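
If more than one archived log needs to be mined in the same session, subsequent files can be added with the ADDFILE option instead of NEW. A sketch, where '&Name2' is a placeholder for the next filename:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '&Name2', OPTIONS => DBMS_LOGMNR.ADDFILE);  /* Append a further log to the mining list */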

Preparing the Oracle database

To prepare the Oracle database for this test, we need to make sure that the database is actually in Archived log mode, and then switch to a new Redo log, so that there are no other SQL statements therein.
Before the actual switch, query the Supplemental Logging DBA tables to confirm what is currently in effect. 
This is for comparison later on, when we add and change the Supplemental Logging.

connect / as sysdba
archive log list; 
select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, FORCE_LOGGING from V$DATABASE;
SUPPLEME SUP SUP FOR
-------- --- --- ---
NO       NO  NO  NO
select LOG_GROUP_NAME, TABLE_NAME, ALWAYS from DBA_LOG_GROUPS where OWNER='SCOTT';
no rows selected

select LOG_GROUP_NAME, COLUMN_NAME, POSITION from DBA_LOG_GROUP_COLUMNS where OWNER='SCOTT' and TABLE_NAME = 'EMP';
no rows selected

alter system switch logfile;

The test is now ready to proceed.

Test #1: Oracle database level Supplemental logging

For this test, we need to first turn on Supplemental logging at the database level.
Then, we'll update the SCOTT.EMP table, and examine the Redo generated.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
/* Check what Supplemental Logging is in effect */

select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, FORCE_LOGGING from V$DATABASE;

SUPPLEME SUP SUP FOR
-------- --- --- ---
YES      NO  NO  NO

select LOG_GROUP_NAME, TABLE_NAME, ALWAYS from DBA_LOG_GROUPS where OWNER='SCOTT';

no rows selected

select LOG_GROUP_NAME, COLUMN_NAME, POSITION from DBA_LOG_GROUP_COLUMNS where OWNER='SCOTT' and TABLE_NAME = 'EMP';

no rows selected
/* Update the SCOTT.EMP table */

update SCOTT.EMP set ENAME='DBSUPLOG' where EMPNO=7934;
commit;

Mine the log (as per the LogMiner commands above).
The expected results are as follows;

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER='SCOTT' and SEG_NAME='EMP';

SQL_REDO
--------------------------------------------------------------------------------
update "SCOTT"."EMP" set "ENAME" = 'DBSUPLOG' where "ENAME" = 'MILLER' and ROWID = 'AAASZHAAEAAAACXAAN';

Here we can see that the statement has been optimized down to just the ENAME column, plus its old and new values.
This occurs because there is only one "MILLER" in a table that fits in a single database block (the unit of on-disk storage).

Test #2: Primary Key column

For this test, in addition to the database-level Supplemental Logging, we'll add Primary Key logging for the SCOTT.EMP table.
Remember, the Primary Key for the EMP table is column EMPNO.

alter table SCOTT.EMP add supplemental log data (primary key) columns;
/* Check what Supplemental Logging is in effect */
select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, FORCE_LOGGING from V$DATABASE;
SUPPLEME SUP SUP FOR
-------- --- --- ---
YES      NO  NO  NO
select LOG_GROUP_NAME, TABLE_NAME, LOG_GROUP_TYPE, ALWAYS from DBA_LOG_GROUPS where OWNER='SCOTT';
LOG_GROUP_NAME                 TABLE_NAME                     LOG_GROUP_TYPE               ALWAYS
------------------------------ ------------------------------ ---------------------------- -----------
SYS_C0017767                   EMP                            PRIMARY KEY LOGGING          ALWAYS
select LOG_GROUP_NAME, COLUMN_NAME, POSITION from DBA_LOG_GROUP_COLUMNS where OWNER='SCOTT' and TABLE_NAME = 'EMP';
no rows selected 
/* Update the SCOTT.EMP table */
update SCOTT.EMP set ENAME='PRIKEY' where EMPNO=7369;
commit;

Here we see a system-generated Log Group for PRIMARY KEY LOGGING has been created, but no actual columns are (individually) logged.

Mine the log (as per the LogMiner commands above).
The expected results are as follows;

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER='SCOTT' and SEG_NAME='EMP';
SQL_REDO
--------------------------------------------------------------------------------
alter table SCOTT.EMP add supplemental log data (primary key) columns;
update "SCOTT"."EMP" set "ENAME" = 'PRIKEY' where "EMPNO" = '7369' and "ENAME" = 'SMITH' and ROWID = 'AAASZHAAEAAAACXAAA';

This time, we see that the Primary Key column EMPNO and its value of 7369 have been included in the SQL Redo statement.
The actual ALTER TABLE statement is also captured, because it too is recorded with SEG_OWNER='SCOTT' and SEG_NAME='EMP'.
Drop the Primary Key logging and switch to a new logfile.

alter table SCOTT.EMP drop supplemental log data (primary key) columns;
alter system switch logfile;

We are now ready for the next test.

Test #3: All Columns

Finally, we'll use the DATA (ALL) COLUMNS clause, to confirm its effect on Supplemental Logging.

alter table SCOTT.EMP add supplemental log data (all) columns;
/* Check what Supplemental Logging is in effect */

select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, FORCE_LOGGING from V$DATABASE;

SUPPLEME SUP SUP FOR
-------- --- --- ---
YES      NO  NO  NO

select LOG_GROUP_NAME, TABLE_NAME, LOG_GROUP_TYPE, ALWAYS from DBA_LOG_GROUPS where OWNER='SCOTT';

LOG_GROUP_NAME                 TABLE_NAME                     LOG_GROUP_TYPE               ALWAYS
------------------------------ ------------------------------ ---------------------------- -----------
SYS_C0017769                   EMP                            ALL COLUMN LOGGING           ALWAYS
select LOG_GROUP_NAME, COLUMN_NAME, POSITION from DBA_LOG_GROUP_COLUMNS where OWNER='SCOTT' and TABLE_NAME = 'EMP';

no rows selected 
update SCOTT.EMP set ENAME='ALLKEY' where EMPNO=7566;
commit;

This time, we have a System generated Log Group for ALL COLUMN LOGGING.

Mine the log (as per the LogMiner commands above).
The expected results are as follows;

SQL_REDO
--------------------------------------------------------------------------------
alter table SCOTT.EMP add supplemental log data (all) columns;
update "SCOTT"."EMP" set "ENAME" = 'ALLKEY' where "EMPNO" = '7566' and "ENAME" = 'JONES' and "JOB" = 'MANAGER' and "MGR" = '7839' and "HIREDATE" = TO_DATE('02-APR-81', 'DD-MON-RR') and "SAL" = '2975' and "COMM" IS NULL and "DEPTNO" = '20' and ROWID = 'AAASZHAAEAAAACXAAD';

Now, we see all 8 columns of the EMP table have been included in the Redo.
This confirms that with the DATA (ALL) COLUMNS option we get not just all the key columns, but all the columns in the table, even though the option is documented in the "...key_clause".

Drop the All (column) logging and switch to a new logfile.

alter table SCOTT.EMP drop supplemental log data (all) columns;
alter system switch logfile; 

We have now concluded our test.

Restore the database

Use the following commands to return the database and the SCOTT.EMP table to their previous settings/values.

ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
update SCOTT.EMP set ENAME='MILLER' where EMPNO=7934;
update SCOTT.EMP set ENAME='SMITH' where EMPNO=7369;
update SCOTT.EMP set ENAME='JONES' where EMPNO=7566;
commit;

GoldenGate GGSCI commands

The following are examples of the commands used to create a GoldenGate EXTRACT, for the purposes of creating a Trail file.
It is assumed that the GGUSER has already been created, and granted the DBA role.

EXTRACT parameters
EXTRACT <EXTNAME>
EXTTRAIL ./dirdat/eo
USERID <USERNAME>, PASSWORD <PWD>
TABLE SCOTT.*;

GGSCI commands
ADD EXTRACT <EXTNAME>, TRANLOG, BEGIN 2015-10-14:08:00 *
ADD EXTTRAIL ./dirdat/eo, EXTRACT <EXTNAME>, MEGABYTES 50

* Modify the date to when the Archived logs were actually created.
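
Once added, the Extract can be started and verified with the standard GGSCI commands; for example:

START EXTRACT <EXTNAME>
INFO EXTRACT <EXTNAME>, DETAIL  /* confirms checkpoints and that the ./dirdat/eo trail is being written */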

LogDump commands
open .\dirdat\eo000000
ghdr on
ggstoken detail
detail on
detail data
next

NOTE: Use these options to get the most detail from the dump of the Trail record.
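
To narrow the dump to the demonstration table only, Logdump's record filter can also be used; for example:

filter include filename SCOTT.EMP
next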

REFERENCES

NOTE:750198.1 - Effect of Supplemental Logging on LogMiner with Example
