Oracle Notes

Sharing Oracle technology knowledge and experiences.

Archive for the ‘Golden Gate’ Category

Oracle 11g GoldenGate and Compressed Tables

Posted by sdefilip on January 6, 2011

Oracle GoldenGate and Compressed Tables

OGG does not support compressed tables or partitions, nor, through OGG v10.4, does it handle them gracefully with proper error messages. The abends may or may not produce an error message, and the messages that do appear are sometimes wrong.

Starting with V11.1.1.0.0, Oracle enhanced the error handling (BugDB 9425542) so that a meaningful error message is produced for the compressed record before the Extract abends. It lists the table name, rowid, etc.

Example :
ERROR OGG-01028 Record on table QATEST1.TAB1 with rowid AAM4EkAAEAACBguAAA from transaction 5.24.270123 (0x0005.018.00041f2b) is compressed. Compression is not supported.

However, due to bug 10063108, the error messages on compressed tables are sometimes not entirely correct. This problem has been fixed in 11.1.1.0.3 and above.

A table created as compressed will cause all of its DML to go into compressed blocks on disk. If the user issues "ALTER TABLE <table_name> NOCOMPRESS", every DML operation that goes into the table AFTER that point in time will be uncompressed. The query for compression will now return "nocompress", but the simple "alter" does not change the already existing compressed blocks on disk that were created before it. So, to capture the records from a table that was compressed, we need to do the following:

SQL> alter table <table_name> move nocompress;

This will touch every single block on disk and will uncompress everything, thereby causing OGG to work properly and not abend.
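To check whether a table is still flagged as compressed before and after the move, the data dictionary can be queried; a quick sketch (the table name is taken from the example error above):

SQL> SELECT table_name, compression FROM user_tables WHERE table_name = 'TAB1';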

If even a single partition in a partitioned table is compressed, it will cause an abend. Partition compression can be verified by getting the full DDL for the table with the DBMS_METADATA.GET_DDL package. For table partitions that are compressed, run the query below to get the partition names and tablespace names.

SQL> SELECT partition_name, subpartition_name, tablespace_name, high_value FROM user_tab_subpartitions WHERE table_name = 'table_name';

Alter statement to move a partition to nocompress:

SQL> ALTER TABLE <table_name> MOVE PARTITION <partition_name> NOCOMPRESS TABLESPACE <tablespace_name>;

Ensure that you have enough disk space within your tablespaces before running the ALTER statement.

Support for compressed tables will come in future releases of OGG; in the current V10.4 and V11.1.1.x, however, the only option, if a "move nocompress" is not possible, is to comment out the compressed tables or exclude them from the Extract.

Here are some scenarios in which we have seen similar compression-related errors:

1) The Extract abends with the following error:

GGS ERROR ZZ-0QY Failed to validate table <SCHEMA>.<TABLE NAME>. The table is compressed and extract will not be able to extract data from Oracle logs.

This bug occurs when DDL is enabled; the check was not done correctly when the table resides in a bigfile tablespace.

The issue is fixed in V10.4.0.93 and V11.1.1.0.4.

Reference bugdb – 10063075

2) When running an OGG version earlier than V11.1.1.0.10, Extract might abend with the error message below without reporting the name or object ID of the compressed table:

Source Context :

SourceModule : [er.redo.ora.sr]
SourceID : [/mnt/ecloud/workspace/Build_OpenSys_r11.1.1.0.9_004_[40043]/perforce/src/app/er/redo/oracle/redoorasr.c]
SourceFunction : [get_subrec(int32_t, mempool_t *, unsigned char *, unsigned short, subrec_info_t *, redo_thread_t *, BOOL, log_context_t *)]
SourceLine : [5434]

2010-11-10 11:50:26 ERROR OGG-01028 compressed tables are not supported.

This happens when a temporary table is created as compressed and is followed directly by a direct-load operation.

Try excluding the tables DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP, which are created by the Compression Advisor.
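If you go the exclusion route, this can be done in the Extract parameter file; a minimal sketch, assuming wildcarded TABLE statements and reusing the schema from the earlier example (adjust the owner to whichever schema ran the Compression Advisor):

TABLEEXCLUDE QATEST1.DBMS_TABCOMP_TEMP_UNCMP
TABLEEXCLUDE QATEST1.DBMS_TABCOMP_TEMP_CMP
TABLE QATEST1.*;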

From OGG v11.1.1.0.10 onwards, when the Extract abends because of direct-load inserts on tables with compressed partitions, the Extract gives the table name in the error.

Reference bugdb – 10279456

However, if the table was dropped immediately after creation, the Extract will only give the object ID of the "compressed" table. To find out the name of the compressed table, you can query the GGS_DDL_HIST table if DDL replication is enabled. A sample query:

SQL> select OBJECTID, OBJECTNAME, METADATA_TEXT FROM GGS_DDL_HIST WHERE OBJECTID = xxxxx;

 

 

Posted in Golden Gate, Oracle 11g, Oracle 11g Database Administration, Oracle 11g New Features, Oracle Fusion Middleware, Oracle GoldenGate, Oracle Troubleshooting, Tips | 5 Comments »

Using Oracle GoldenGate’s Logdump Utility

Posted by sdefilip on August 5, 2010

USING ORACLE GOLDENGATE’S LOGDUMP UTILITY

A recent client had a problem which required the use of the OGG Logdump utility to verify whether a particular table row had been extracted and placed in the Replicat trail file for replication to the target.  In this post, I will outline the steps that were used to find the answer.

1.   Begin logdump session:

            $ logdump

          GoldenGate Log File Dump Utility

          Version 10.4.0.12 Build 001

          Copyright (C) GoldenGate Software, Inc. 1995-2009.

          The GoldenGate software may include other code written by third parties;

          details regarding such third party code, including applicable copyright,

          legal and licensing notices are available at http://support.goldengate.com.

                   

2.    Open Replicat Trail File

 

Logdump 1>open ./dirdat/ad000053

Current LogTrail is /shared/oradb/GoldenGate/pnet01d/dirdat/ad000053

Logdump 2 >ghdr on          <-- shows record header info about the transaction

Logdump 3 >detail data      <-- adds hex and ASCII data values to the column information

Logdump 4 >next             <-- moves to the first record; advances by one record

 

2010/08/04 21:22:18.134.541 FileHeader           Len   928 RBA 0

Name: *FileHeader*

 3000 01ad 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0…0…GG..TL..1…

 0002 3200 0004 ffff fffd 3300 0008 02f1 b33c d650        | ..2…….3……<.P

 840d 3400 0031 002f 7572 693a 7473 7332 6130 3137 | ..4..1./uri:tss2a017

 6e33 3a3a 7368 6172 6564 3a6f 7261 6462 3a47 6f6c  |::shared:oradb:Gol

 6465 6e47 6174 653a 706e 6574 3031 6436 0000 1300 |denGate:anondb02

 112e 2f64 6972 6461 742f 6164 3030 3030 3533 3700 |…/dirdat/ad0000537

 0001 0138 0000 0400 0000 3539 0000 0800 0000 001d | …8……59……..

(above is header data in hex and char)

2010/08/04 22:36:23.000.000 Insert               Len   456 RBA 6608   <-- this is an insert at this RBA

Name: FIN_AUDIT.FIN_GL_ATTR      <-- source object

 

Logdump 5 >next      <-- shows the next record, which contains continuing header info; 'next' or 'n' also scans through records in sequential order

___________________________________________________________________

Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)

UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)

RecLength  :     0  (x0000)   IO Time    : 2010/08/04 21:22:17.611.797

IOType     :   151  (x97)     OrigNode   :     0  (x00)

TransInd   :     .  (x03)     FormatType :     R  (x52)

SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)

AuditRBA   :          0       AuditPos   : 0

Continued  :     N  (x00)     RecCount   :     0  (x00)

 

2010/08/04 21:22:17.611.797 RestartOK            Len     0 RBA 936

Name:

After  Image:                                             Partition 0   G  s

 

3.    Search for Timestamp in Trail File

 

Logdump 6 >sfts 2010-08-04       <-- searches for the timestamp '2010-08-04'

Scan for timestamp >= 2010/08/04 04:00:00.000.000 GMT    <-- timestamp found

Column    41 (x0029), Len    31 (x001f)       <-- column info with data

 0000 3230 3130 2d30 382d 3034 3a32 323a 3336 3a30 | ..2010-08-04:22:36:0
 372e 3839 3333 3139 3030 30                       | 7.893319000

Since we were looking for any records inserted on 2010/08/04, we have found a valid entry in the Replicat trail file, which verifies that the record was placed in the file on the target side by the OGG Collector component.

A few of my other favorite, very useful Logdump commands are:

 >filter include filename FIN_AUDIT.FIN_GL_ATTR   

     Which will filter out all records that do not contain the table name

     specified and narrow down the set of records that subsequent commands

     will operate on until a ">filter clear" is issued

>pos FIRST

      Which will go to the first record in the file

 >pos <rba>

       Which will go to a specific rba in the trail file

 >log to <filename>.txt

        Which will log your Logdump session
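Putting several of these together, a typical session to hunt for one table's records might look like the following sketch (the log file name is illustrative; the trail and table are the ones from this example):

Logdump> log to find_fin_gl_attr.txt
Logdump> open ./dirdat/ad000053
Logdump> ghdr on
Logdump> detail data
Logdump> filter include filename FIN_AUDIT.FIN_GL_ATTR
Logdump> next
Logdump> filter clear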

There are several other useful commands for examining trail files or redo logs.  See the Oracle GoldenGate Troubleshooting Guide  for a complete list.

 

 

Posted in Data Integration, Data Warehouse, Golden Gate, Oracle GoldenGate, Oracle Troubleshooting, Replication, Tips | 1 Comment »

Oracle GoldenGate – Positioning a Read of Extract/Replicat Trail File or Oracle Redo Log

Posted by sdefilip on July 26, 2010

Positioning in Extract / Replicat Trail and Log

In the event that there is ever a need to position an extract in the
Transaction Log (aka redo log), extract trail file or replicat trail file,
the following options can be used:

INFO EXTRACT <groupname>,  DETAIL

This will name your current redo log along with the RBA and sequence number, and your Extract trail name along with its RBA. The RBA is the relative byte address of the record in the trail file at which the checkpoint was made.

INFO EXTRACT <groupname>, SHOWCH

Will show you your read checkpoint in the data source and write
checkpoint in the trail files.

Log Read Checkpoint File /orarac/oradata/racq/redo01.log   <-- Oracle redo info
2006-06-09 14:16:45 Thread 1, Seqno 47, RBA 68748800
Log Read Checkpoint File /orarac/oradata/racq/redo04.log
2006-06-09 14:16:19 Thread 2, Seqno 24, RBA 65657408
Current Checkpoint Detail:
Read Checkpoint #1
Oracle RAC Redo Log
Startup Checkpoint (starting position in data source):
Sequence #: 47
RBA: 68548112     <-- RBA offset of entry in redo log
Timestamp: 2006-06-09 13:37:51.000000
SCN: 0.8439720
Redo File: /orarac/oradata/racq/redo01.log
Recovery Checkpoint (position of oldest unprocessed transaction in
data source):
Sequence #: 47
RBA: 68748304
Timestamp: 2006-06-09 14:16:45.000000
SCN: 0.8440969
Redo File: /orarac/oradata/racq/redo01.log
Current Checkpoint (position of last record read in
the data source)

Write Checkpoint #1
GGS Log Trail       <-- start of GG Trail information
Current Checkpoint (current write position):
Sequence #: 2
RBA: 2142224       <-- RBA offset of entry in trail file
Timestamp: 2006-06-09 14:16:50.567638
Extract Trail: ./dirdat/eh
Header:
Version = 2
Record Source = A
Type = 6
# Input Checkpoints = 2
# Output Checkpoints = 1

Once you have your checkpoint RBAs, you can use a few ggsci
commands to help you debug:

ADD EXTRACT <groupname>

Allows creation of an extract from a specific position in a trail file or
transaction log.
Some useful options are:

EXTTRAILSOURCE <trail name>
Specifies a trail as the data source. For <trail name>, specify the
fully qualified path name  of the trail, for example c:\ggs\dirdat\aa.

BEGIN <begin spec>
Specifies a timestamp in the data source at which to begin
processing.
<begin spec>  value is either:
□  NOW    -  the time at which the ADD EXTRACT command
is issued.
□   A date and time in the format of:
yyyy-mm-dd [hh:mi:[ss[.cccccc]]]

*** If you have a 4-node RAC cluster environment, use the
"THREADS 4" option with any command to which it applies.

EXTRBA <relative byte address>
Specifies an RBA at which to start extracting.  This can be used to
skip over a bad entry in a trail file.
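As a sketch (group names are hypothetical, and the trail and RBA are borrowed from the SHOWCH output above; EXTSEQNO identifies the trail file that the RBA belongs to):

GGSCI> ADD EXTRACT dpump1, EXTTRAILSOURCE ./dirdat/eh, BEGIN NOW
GGSCI> ADD EXTRACT dpump2, EXTTRAILSOURCE ./dirdat/eh, EXTSEQNO 2, EXTRBA 2142224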

ALTER EXTRACT <group name>

Allows changing the attributes of an extract file created by the ADD EXTRACT
command and allows the incrementing of an extract to the  next file in the sequence.

*** Always "STOP EXTRACT <group name>" before using this command.

You can use this command to make any changes using any of the options to the ADD EXTRACT command (above).  So, for example, you can ALTER EXTRACT an extract file to begin at a specific RBA for skipping over an entry in the extract file.
Ex.:   ALTER EXTRACT finance, EXTSEQNO 26, EXTRBA 338

You can change any of the attributes specified with the ADD EXTRACT command, except for the following:
□ Altering an Extract specified with the EXTTRAILSOURCE
option.
□ Altering the number of RAC threads specified with the THREADS
option.

SEND EXTRACT

Allows you to send commands to a running extract process.
Some useful options are:

GETLAG shows lag time between the extract and the data source

LOGEND shows whether or not Extract has processed all records in
the data source

ROLLOVER makes extract increment to the next file in the trail upon
startup

SHOWTRANS shows information about current transactions:
□   Process Checkpoint
□   Transaction ID
□    Extract Group Name
□    Redo Thread Number
□    Timestamp of first transaction of extract
□     System Change Number  (SCN)
□     RBA and Redo Log Number
□     STATUS ("Pending Commit" after a commit has been forced by
FORCETRANS, or "running")
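A few illustrative invocations (the group name is the hypothetical one used earlier):

GGSCI> SEND EXTRACT finance, GETLAG
GGSCI> SEND EXTRACT finance, LOGEND
GGSCI> SEND EXTRACT finance, SHOWTRANS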

Posted in CDC, Data Integration, Data Warehouse, Golden Gate, Oracle GoldenGate, Replication, Tips, Uncategorized | 2 Comments »

Oracle GoldenGate Best Practices and Tips

Posted by sdefilip on July 3, 2010

Lately I’ve been working, once again, with GoldenGate (now Oracle GoldenGate) data integration software. GoldenGate offers tremendously useful capabilities, which include CDC (Change Data Capture), Data Warehouse ETL, efficient/low-impact data replication from diverse database management systems, real-time standby database maintenance (for high availability, upgrades, and patches), feeding Oracle Data Integrator (ODI), and data distribution. So, I thought I’d offer some GoldenGate Best Practices and Tips that I’ve learned largely by making mistakes:

I. Best Practices

PARALLEL PROCESSING

Ensure the system has enough memory. GoldenGate Extract and Replicat run as operating system processes on the database server, and each one requires upwards of 25-50 MB of system memory. This means less memory for the Oracle DBMS, especially the SGA.

Use parallel Replicat groups on the target system to reduce latency through parallelism. Consider parallel Extract groups for tables that are fetch-intensive (e.g., those that trigger SQL procedures).

Group tables that have referential integrity relationships with each other in the same Extract-Replicat pair.

Pair each Replicat with its own trail and corresponding Extract process.

When using parallel Replicats, configure each one to process a different portion of the overall data.
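One common way to split the workload is the @RANGE filter in each Replicat's MAP statements; a sketch assuming three parallel Replicat groups (schema and table names are illustrative):

-- rep1 parameter file
MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (1, 3));
-- rep2 parameter file
MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (2, 3));
-- rep3 parameter file
MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (3, 3));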

PASSTHRU PARAMETER

Consider using this parameter if no filtering, conversion, or mapping is required and you are using a data pump.

In pass-through mode, the Extract process does not look up table definitions, either from the database or from a data definitions file. Pass-through mode increases the throughput of the data pump, because all of the functionality that looks up object definitions is bypassed.

This saves database fetches to improve performance.
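A minimal data pump parameter file using PASSTHRU might look like this sketch (host, port, trail, and schema names are all illustrative):

EXTRACT dpext1
RMTHOST target01, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE hr.*;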

INSERTAPPEND

A new GoldenGate 10.4 feature.

Use it for large transactions.

Puts records at the end of the table rather than doing a more costly insert into other areas of the table.
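As a sketch, it can be enabled in the Replicat parameter file ahead of the MAP statements (group, login, and table names are illustrative; NOINSERTAPPEND turns it back off for subsequent MAP statements):

REPLICAT rep1
ASSUMETARGETDEFS
USERID ggs_admin, PASSWORD ggs
INSERTAPPEND
MAP sales.big_table, TARGET sales.big_table;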

DATAPUMP (not the Oracle DB utility)

1. The primary Extract group writes to a trail on the source system.

2. The data pump reads this trail and sends the data across the network to a remote trail on the target.

3. It adds storage flexibility and also serves to isolate the primary Extract process from TCP/IP activity.

4. It can be configured for online or batch processing.

5. It can perform data filtering, mapping, and conversion, or it can be configured in pass-through mode, where data is passively transferred as-is, without manipulation.

6. Use it to perform filtering, thereby removing that processing overhead from the primary Extract group.

7. Use one or more pumps for each source and each target for parallelism.

In most business cases, it is best practice to use a data pump. Some reasons for using a data pump include the following:

● Protection against network and target failures:

In a basic GoldenGate configuration, with only a trail on the target system, there is nowhere on the source system to store data that the Extract process continuously extracts into memory. If the network or the target system becomes unavailable, the primary Extract could run out of memory and abend. However, with a trail and data pump on the source system, captured data can be moved to disk, preventing the abend. When connectivity is restored, the data pump extracts the data from the source trail and sends it to the target system(s).

● You are implementing several phases of data filtering or transformation. When using complex filtering or data transformation configurations, you can configure a data pump to perform the first transformation either on the source system or on the target system, and then use another data pump or the Replicat group to perform the second transformation.

● Consolidating data from many sources to a central target. When synchronizing multiple source databases with a central target database, you can store extracted data on each source system and use data pumps on each of those systems to send the data to a trail on the target system. Dividing the storage load between the source and target systems reduces the need for massive amounts of space on the target system to accommodate data arriving from multiple sources.

● Synchronizing one source with multiple targets. When sending data to multiple target systems, you can configure data pumps on the source system for each target. If network connectivity to any of the targets fails, data can still be sent to the other targets.

————————————————————————————————-

STEP BY STEP Datapump Configuration

ON THE SOURCE SYSTEM

To configure the Manager process (Reference: Oracle GoldenGate Administration Guide, Version 10.4):

1. On the source, configure the Manager process according to the instructions in Chapter 2.

2. In the Manager parameter file, use the PURGEOLDEXTRACTS parameter to control the purging of files from the local trail.

To configure the primary Extract group:

3. On the source, use the ADD EXTRACT command to create a primary Extract group. For documentation purposes, this group is called ext.

ADD EXTRACT <ext group name>, TRANLOG, BEGIN <time> [, THREADS <n>]

❍ Use TRANLOG as the data source option.

4. On the source, use the ADD EXTTRAIL command to create a local trail. The primary Extract writes to this trail, and the data-pump Extract reads it.

ADD EXTTRAIL <local trail name>, EXTRACT <ext group name>

❍ Use the EXTRACT argument to link this trail to the primary Extract group. The primary Extract group writes to this trail, and the data pump group reads it.

5. On the source, use the EDIT PARAMS command to create a parameter file for the primary Extract group. Include the following parameters plus any others that apply to your database environment:

– Identify the Extract group:

EXTRACT <ext group name>

– Specify database login information as needed for the database:

[SOURCEDB <dsn>,] [USERID <user> [, PASSWORD <password>]]

– Specify the local trail that this Extract writes to:

EXTTRAIL <local trail name>

– Specify tables to be captured:

TABLE <owner>.<table>;

To configure the data pump Extract group:

6. On the source, use the ADD EXTRACT command to create a data pump group. For documentation purposes, this group is called pump. In reality, I suggest naming it to match the Extract name, e.g., dpext1.

ADD EXTRACT <pump group name>, EXTTRAILSOURCE <local trail name>, BEGIN <time>

❍ Use EXTTRAILSOURCE as the data source option, and specify the name of the local trail.

7. On the source, use the ADD RMTTRAIL command to specify a remote trail that will be created on the target system.

ADD RMTTRAIL <remote trail name>, EXTRACT <pump group name>

❍ Use the EXTRACT argument to link the remote trail to the data pump group. The linked data pump writes to this trail.

8. On the source, use the EDIT PARAMS command to create a parameter file for the data pump. Include the following parameters plus any others that apply to your database environment.

————————————————————————————————

– Identify the data pump group:

EXTRACT <pump group name>

– Specify database login information as needed for the database:

[SOURCEDB <dsn>,] [USERID <user> [, PASSWORD <password>]]

– Specify the name or IP address of the target system:

RMTHOST <target host>, MGRPORT <port number>

– Specify the remote trail on the target system:

RMTTRAIL <remote trail name>

– Allow mapping, filtering, and conversion, or pass the data through as-is:

[PASSTHRU | NOPASSTHRU]

– Specify tables to be captured:

TABLE <owner>.<table>;

NOTE – To use PASSTHRU mode, the names of the source and target objects must be identical. No column mapping, filtering, SQLEXEC functions, transformation, or other functions that require data manipulation can be specified in the parameter file. You can combine normal processing with pass-through processing by pairing PASSTHRU and NOPASSTHRU with different TABLE statements.
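Pulling the source-side GGSCI commands together, a hypothetical end-to-end sequence might look like this (group names, trail prefixes, and times are illustrative):

GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
GGSCI> EDIT PARAMS ext1
GGSCI> ADD EXTRACT dpext1, EXTTRAILSOURCE ./dirdat/aa, BEGIN NOW
GGSCI> ADD RMTTRAIL ./dirdat/ab, EXTRACT dpext1
GGSCI> EDIT PARAMS dpext1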

ON THE TARGET SYSTEM

To configure the Manager process:

9. On the target, configure the Manager process according to the instructions in the Administration Guide, Chapter 2.

10. In the Manager parameter file, use the PURGEOLDEXTRACTS parameter to control the purging of files from the local trail.

To configure the Replicat group:

11. On the target, create a Replicat checkpoint table. This is a best practice. For instructions, see "Creating a checkpoint table" on page 121 of the Administration Guide.

12. On the target, use the ADD REPLICAT command to create a Replicat group. For documentation purposes, this group is called rep.

ADD REPLICAT <rep group name>, EXTTRAIL <remote trail name>, BEGIN <time>

❍ Use the EXTTRAIL argument to link the Replicat group to the remote trail.

13. On the target, use the EDIT PARAMS command to create a parameter file for the Replicat group. Include the following parameters plus any others that apply to your database environment.

– Identify the Replicat group:

REPLICAT <rep group name>

– State whether or not source and target definitions are identical:

SOURCEDEFS <full pathname of definitions file> | ASSUMETARGETDEFS

– Specify database login information as needed for the database:

[TARGETDB <dsn>,] [USERID <user> [, PASSWORD <password>]]

– Specify error handling rules:

REPERROR (<error>, <response>)

– Specify tables for delivery:

MAP <owner>.<table>, TARGET <owner>.<table> [, DEF <template name>];

(Reference: Oracle GoldenGate Administration Guide, Version 10.4)
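A matching target-side sketch (names again illustrative; this assumes a default checkpoint table has already been registered in the GLOBALS file, per step 11 and the next section):

GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/ab, BEGIN NOW
GGSCI> EDIT PARAMS rep1

-- rep1.prm
REPLICAT rep1
ASSUMETARGETDEFS
USERID ggs_admin, PASSWORD ggs
MAP hr.*, TARGET hr.*;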

REPLICAT CHECKPOINT TABLE

Create on target.

Can be used by all Replicat groups.

Prevents loss of data in case Replicat needs to be restarted.
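A sketch of creating and registering one (the owner and table name are illustrative):

GGSCI> DBLOGIN USERID ggs_admin, PASSWORD ggs
GGSCI> ADD CHECKPOINTTABLE ggs_admin.ggs_checkpoint

-- in the GLOBALS file, so all Replicats pick it up by default
CHECKPOINTTABLE ggs_admin.ggs_checkpoint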

PRIMARY KEYS

Each table must have a unique, non-null identifier, even if it is not the primary key. One option is to use the KEYCOLS option of the MAP and TABLE parameters to create a substitute key. Otherwise, GG uses multiple columns from the table, which will usually degrade performance. For data integrity, the key used must contain the same values in corresponding rows of all subject databases and contain the same columns in each database where the key resides.
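A sketch of defining a substitute key on both sides with KEYCOLS (table and column names are illustrative):

-- Extract parameter file
TABLE hr.emp_history, KEYCOLS (emp_id, effective_date);

-- Replicat parameter file
MAP hr.emp_history, TARGET hr.emp_history, KEYCOLS (emp_id, effective_date);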

ADD TRACETABLE

Use the default name of "GGS_TRACE".

FAULT TOLERANCE

In a data distribution configuration, include a primary Extract group and a data-pump Extract group in the source configuration, one data pump for each target.

If network connectivity to one target fails, data can still be sent to the other targets.

————————————————————————————————-

II. PERFORMANCE

This section covers anything performance-related that was not already covered in section "I. Best Practices".

A. MONITORING

Run Status of Particular Process

To find the run status of a particular process:

GGSCI (development) 23> status manager

Manager is running (IP port development.7809).

GGSCI (development) 24> status extract ext1

EXTRACT EXT1: RUNNING

Detailed information of a particular process:

GGSCI (development) 6> info extract ext1, detail

EXTRACT EXT1 Last Started 2010-01-23 11:19 Status RUNNING

Checkpoint Lag 00:00:00 (updated 00:00:02 ago)

Log Read Checkpoint Oracle Redo Logs

2010-01-23 10:45:18 Seqno 786, RBA 44710400

Target Extract Trails:

Remote Trail Name                              Seqno      RBA   Max MB

/u01/oracle/software/goldengate/dirdat/lt          2    55644       10

Extract Source Begin End

/u02/oradata/acme/redo03.log 2010-01-23 11:13 2010-01-23 10:45

/u02/oradata/acme/redo02.log 2010-01-23 11:04 2010-01-22 11:13

/u02/oradata/acme/redo02.log 2010-01-23 10:42 2010-01-21 11:04

Not Available * Initialized * 2010-01-20 10:42

Current directory /u01/oracle/software/goldengate

Report file /u01/oracle/software/goldengate/dirrpt/EXT1.rpt

Parameter file /u01/oracle/software/goldengate/dirprm/ext1.prm

Checkpoint file /u01/oracle/software/goldengate/dirchk/EXT1.cpe

Process file /u01/oracle/software/goldengate/dirpcs/EXT1.pce

Stdout file /u01/oracle/software/goldengate/dirout/EXT1.out

Error log /u01/oracle/software/goldengate/ggserr.log

Detailed information of a particular process:

GGSCI (devu007) 6> info extract ext1, detail

EXTRACT EXT1 Last Started 2010-02-19 11:19 Status RUNNING

Checkpoint Lag 00:00:00 (updated 00:00:02 ago)

Log Read Checkpoint Oracle Redo Logs

2010-02-19 10:45:18 Seqno 786, RBA 44710400

Target Extract Trails:

Remote Trail Name                              Seqno      RBA   Max MB

/u01/oracle/software/goldengate/dirdat/lt          2    55644       10

Extract Source Begin End

/u02/oradata/acme/redo03.log 2010-02-19 11:13 2010-02-26 10:45

/u02/oradata/acme/redo02.log 2010-02-19 11:04 2010-02-19 11:13

/u02/oradata/acme/redo02.log 2010-02-18 10:42 2010-02-19 11:04

Not Available * Initialized * 2010-02-18 10:42

Current directory /u01/oracle/software/goldengate

Report file /u01/oracle/software/goldengate/dirrpt/EXT1.rpt

Parameter file /u01/oracle/software/goldengate/dirprm/ext1.prm

Checkpoint file /u01/oracle/software/goldengate/dirchk/EXT1.cpe

Process file /u01/oracle/software/goldengate/dirpcs/EXT1.pce

Stdout file /u01/oracle/software/goldengate/dirout/EXT1.out

Error log /u01/oracle/software/goldengate/ggserr.log

View the processing rate ('hr', 'min', or 'sec' can be used as the reporting unit):

GGSCI (devu007) 37> stats extract ext2 reportrate hr

Sending STATS request to EXTRACT EXT2 …

Start of Statistics at 2010-02-19 10:04:46.

Output to /u01/oracle/ggs/dirdat/cc:

Extracting from SH.CUSTOMERS to SH.CUSTOMERS:

*** Total statistics since 2010-02-19 09:29:48 ***

Total inserts/hour: 0.00

Total updates/hour: 95258.62

Total deletes/hour: 0.00

Total discards/hour: 0.00

Total operations/hour: 95258.62

*** Daily statistics since 2010-02-19 09:29:48 ***

Total inserts/hour: 0.00

Total updates/hour: 95258.62

Total deletes/hour: 0.00

Total discards/hour: 0.00

Total operations/hour: 95258.62

*** Hourly statistics since 2010-02-19 10:00:00 ***

No database operations have been performed.

*** Latest statistics since 2010-02-19 09:29:48 ***

Total inserts/hour: 0.00

Total updates/hour: 95258.62

Total deletes/hour: 0.00

Total discards/hour: 0.00

Total operations/hour: 95258.62

End of Statistics.

View the latency between the records processed by GoldenGate and the timestamp in the data source:

GGSCI (development) 13> send extract ext2, getlag

Sending GETLAG request to EXTRACT EXT2 …

Last record lag: 3 seconds.

At EOF, no more records to process.

GGSCI (development) 15> lag extract ext*

Sending GETLAG request to EXTRACT EXT1 …

Last record lag: 1 seconds.

At EOF, no more records to process.

Sending GETLAG request to EXTRACT EXT2 …

Last record lag: 1 seconds.

At EOF, no more records to process.

View the GoldenGate error log as well as the history of commands executed and other events:

For UNIX:

vi ggserr.log

Or

GGSCI command: VIEW GGSEVT

View the process report:

GGSCI (development) 2> view report ext1

GGSCI (development) 2> view report rep1

B. PERFORMANCE TIPS

To reduce bandwidth requirements:

Use the compression options of the RMTHOST parameter to compress data before it is sent across the network. Weigh the benefits of compression against the CPU resources that are required to perform the compression.
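For example, in the data pump's parameter file (host and port are illustrative):

RMTHOST target01, MGRPORT 7809, COMPRESS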

To increase the TCP/IP packet size:

a.  Use the TCPBUFSIZE option of the RMTHOST parameter to increase the size of the TCP socket buffer that Extract maintains. By increasing the size of the buffer, you can send larger packets to the target system. Consult with Network Support before setting TCPBUFSIZE.

b.  Use SQL Arrays

The BATCHSQL parameter will increase the performance of Replicat. BATCHSQL causes Replicat to create arrays for similar SQL statements and apply them at an accelerated rate. Normally, Replicat applies one SQL statement at a time.

• At 100 bytes of data per row change, BATCHSQL can improve Replicat's performance by up to 300 percent; actual performance benefits will vary, depending on the overall demand on system resources.

• At around 5,000 bytes of data per row change, the benefits of using BATCHSQL diminish.
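Enabling it is a one-line addition ahead of the MAP statements in the Replicat parameter file; a sketch (table names illustrative):

-- Replicat parameter file excerpt
BATCHSQL
MAP sales.*, TARGET sales.*;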

c.  Improve I/O within the system configuration:

• Place trail files on the fastest disk controller.

• Use a RAID 0+1 disk configuration, because GoldenGate performs sequential writes.

* Be careful with the following points: increasing the values of these parameters will improve performance, but it will take longer to reprocess your data in case of process failure.

• Use CHECKPOINTSECS in Extract or Replicat; if it is increased, checkpoints are less frequent, which increases the amount of data to be reprocessed if the process fails; keep transaction logs available in case of reprocessing.

• Use GROUPTRANSOPS; it increases the number of SQL operations in a Replicat transaction and reduces I/O to the checkpoint file and checkpoint table.
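As a sketch, both are simple parameter file entries (the values shown are purely illustrative; CHECKPOINTSECS applies to Extract or Replicat, GROUPTRANSOPS to Replicat, and the reprocessing trade-off described above applies):

-- parameter file excerpt
CHECKPOINTSECS 30
GROUPTRANSOPS 2000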

Data Filtering and Conversion:

Use the primary Extract for data capture only.

Use a data pump on the source to perform filtering and thereby send less data over the network.

Alternatively, use Replicat for conversion and, if the network can handle large amounts of data, also for filtering.

**(Upcoming Post: "Troubleshooting GoldenGate Extracts")

Posted in Best Practices, CDC, Data Integration, Data Warehouse, Golden Gate, Oracle GoldenGate, Replication, Tips | Leave a Comment »

 