R12: Improving Performance of General Ledger and Journal Import (Doc ID 858725.1 )

In this Document

  Purpose
  Scope
  Details
  A) Database Init.ora Parameters
  B) Concurrent Program Controls setup
  C) General Ledger Optimizer program
  C.1) To create and drop indexes for your chart of accounts segments:
  C.2) To update statistical information about your data:
  D) Useful hints
  E) Database Cost Based Optimizer (CBO)
  F) Indexes optimization
  F.1) Indexes for GL_CODE_COMBINATIONS
  F.2) Indexes for GL_INTERFACE
  F.3) Custom Indexes
  G) Multi-Table Journal Import



APPLIES TO: Oracle General Ledger - Version 12.0 and later

Information in this document applies to any platform.

EXECUTABLE: GLLEZL - Journal Import

EXECUTABLE: GLAMAS - Run MassAllocations

EXECUTABLE: GLPPOS - Posting





PURPOSE

This document describes the setup and procedures to improve R12 General Ledger performance.

SCOPE

These changes can also benefit the performance of other modules that use GL data.

Some actions are to be performed by functional users, but others may require the assistance of a DBA or System Administrator.

DETAILS

A) Database Init.ora Parameters

Verify that your init.ora parameters are set according to the recommendations for Oracle Applications.

Download the latest version of bde_chk_cbo.sql from Document 174605.1: bde_chk_cbo.sql - Reports Database Initialization Parameters related to an Apps 12 or 11i instance, and run it to get a listing of your current init.ora parameters and the recommended settings.

Document 396009.1: Database Initialization Parameters for Oracle Applications Release 12 contains more information on the database initialization parameter settings.

Document 34559.1: WAITEVENT: "db file sequential read" is useful in case a trace shows much time spent in "db file sequential read" waits.



B) Concurrent Program Controls setup

To improve Journal Import performance, review the setup of the following profile options, which apply directly to Journal Import performance (they replace the Concurrent Program Controls setup form used in previous releases); a query to review the current values is sketched after this list:

  • Set the profile option GL: Number of Records to Process at Once. The more journal lines the Journal Import and MassAllocations/MassBudgeting programs can hold in memory, the faster they run. If you do not enter a value here, your concurrent
    programs process 1000 journal lines at once. Too large a value may exceed your memory capacity, so test it starting at 25000.
  • Set the profile option GL: Archive Journal Import Data to "No". When it is "Yes", the data in GL_INTERFACE is saved to GL_INTERFACE_HISTORY at the end of each Journal Import run and Journal Import runs slower. Oracle Applications does not use
    this data.
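
As a quick check, the current values of these profile options can be queried directly from the standard FND profile tables; this is a minimal sketch (level_id 10001 is the Site level):

SELECT tl.user_profile_option_name,
       v.level_id,
       v.profile_option_value
FROM   fnd_profile_options o,
       fnd_profile_options_tl tl,
       fnd_profile_option_values v
WHERE  o.profile_option_name = tl.profile_option_name
AND    tl.language = 'US'
AND    o.profile_option_id = v.profile_option_id
AND    o.application_id = v.application_id
AND    tl.user_profile_option_name IN
       ('GL: Number of Records to Process at Once',
        'GL: Archive Journal Import Data')
ORDER BY tl.user_profile_option_name, v.level_id;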

C) General Ledger Optimizer program

There are two programs you need to run periodically to keep General Ledger performance at its best:

  • Maintain Summary Templates program: updates summary account information in the current Ledger.
  • GL Optimizer program: creates or drops indexes for those segments in your chart of accounts that you have marked
    for indexing, and/or gathers statistics. This is the most relevant for Journal Import performance.

C.1) To create and drop indexes for your chart of accounts segments:

    • Freeze your account structure.
    • Navigate to the Submit Request window.
    • Select the Optimizer program.
    • Enter Yes for Maintain Indexes.

The Optimizer creates an index on a segment if one does not yet exist, and drops an index on a segment if you no longer index
the segment. This is useful when you define a new chart of accounts for which you want to index particular segments or when you want to add or drop an index for an existing segment in your chart of accounts.

    • The General Ledger Optimizer program creates indexes for those segments in your chart of accounts that you have marked for indexing in the Segments Definition form. Marking a segment indicates that you want the database column in the GL_CODE_COMBINATIONS
      table that stores the key segment to have a single-column index.

      This should be the case for segments having many distinct values.
    • The General Ledger Optimizer program drops indexes for those segments in your chart of accounts that you have not marked for indexing in the Segments Definition form. Leaving a segment unmarked indicates that you do not want a single-column index on
      the corresponding GL_CODE_COMBINATIONS column.

      This should be the case for segments having only a few distinct values.

You can mark the segments in the General Ledger Superuser responsibility:

    • Navigate to Setup > Financial > Flexfield > Key > Segments
    • Query all your Accounting Flexfield Structures
    • Go into each segment to deselect (or select) the index box.

C.2) To update statistical information about your data:

    • Navigate to the Submit Request window.
    • Select the Optimizer program.
    • Enter Yes for Gather Statistics.

The Optimizer program gathers and updates statistical information about the size of your balances and combinations tables, the number of account combinations with a particular segment value, and the number of account balances associated with each accounting period.

This information improves the performance of your journal entry posting and financial reporting process.



To keep these statistics current, you should run the Optimizer at least once a period, or any time you add several segment values, define a new chart of accounts, or add or delete summary templates.
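
To verify that the statistics on the key General Ledger tables are reasonably current, your DBA can check the data dictionary; this is a minimal sketch (the GL owner and the table list are typical, adjust as needed):

SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'GL'
AND    table_name IN ('GL_BALANCES',
                      'GL_CODE_COMBINATIONS',
                      'GL_JE_LINES')
ORDER BY table_name;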

D) Useful hints

Follow the suggestions below to improve Journal Import performance and other areas of Oracle General Ledger as well, such
as posting and consolidations.

  • Run as few batches as possible.

    In the GL_INTERFACE table there are columns, called distinguishing attributes, which determine how the records are organized into batches, headers and lines. The critical ones that dictate how many batches are created are LEDGER_ID, JE_SOURCE_NAME,
    JE_BATCH_NAME and PERIOD_NAME (a query to preview this grouping is sketched after this list).

    For each batch Journal Import creates, it makes an additional pass through the GL_INTERFACE table which substantially increases the processing time of Journal Import. We typically recommend that you process as few batches as possible per run. It is better to
    process one batch with many journal entry lines than to process multiple batches with fewer lines per batch.

    However, when importing long batches, there will be an expected delay during the phase where the rows are deleted from the interface. This can be avoided if this is the only batch present in the interface - in this case the system performs a quick 'truncate'.

    In the current model with Sub-Ledger Accounting, there is less chance of rows being left behind in GL_INTERFACE because, when a problem occurs, the system usually rolls back all rows to the SLA tables and the interface is emptied. This may benefit
    performance.
  • Close out as many periods as possible.

    Determine how many Open or Future Enterable periods you have in ALL Ledgers, including Ledgers that you may not be entering through Journal Import; keeping this number low reduces processing time (see the period-status query sketched after this list).
  • Disable Dynamic Insertion and define any accounts used in your feeder systems that you have not yet defined in General Ledger. Journal Import runs much faster when it does not have to create new account combinations dynamically.


    You may consider running Journal Import with Dynamic Insertion first, to create the new combinations by importing the lines with zero amounts (journals will not be created but the missing combinations will be).
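
The two checks mentioned above can be sketched as follows. The first counts the pending GL_INTERFACE lines per grouping of the key distinguishing attributes, which approximates how many batches Journal Import will create; the second lists the Open and Future Enterable periods in all Ledgers (application_id 101 is General Ledger). The column names are the standard R12 ones; adjust them if your interface is populated differently.

-- Preview how the pending interface lines group into batches
SELECT ledger_id,
       user_je_source_name,
       period_name,
       COUNT(*) AS line_count
FROM   gl_interface
GROUP BY ledger_id, user_je_source_name, period_name
ORDER BY ledger_id, user_je_source_name, period_name;

-- List Open ('O') and Future Enterable ('F') periods in all Ledgers
SELECT ledger_id,
       period_name,
       closing_status
FROM   gl_period_statuses
WHERE  application_id = 101
AND    closing_status IN ('O', 'F')
ORDER BY ledger_id, effective_period_num;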

E) Database Cost Based Optimizer (CBO)

Oracle Applications Release 12 uses the cost-based optimization (CBO) approach in choosing the most efficient way to execute
SQL statements.

To use CBO effectively, you must keep your database statistics current. Oracle Applications provides a set of procedures to help you collect these statistics.
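
For example, the statistics on a single GL table can be refreshed with the standard FND_STATS procedure; this is a minimal sketch (the 10% sample size is illustrative; run it as the APPS user):

BEGIN
   -- gather statistics for GL_INTERFACE with a 10% sample
   fnd_stats.gather_table_stats('GL', 'GL_INTERFACE', 10);
END;
/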

For more information please read Document 744143.1: Tuning performance on eBusiness suite.

F) Indexes optimization

You may need your Database or System Administrator's assistance before proceeding with the following steps.

F.1) Indexes for GL_CODE_COMBINATIONS

To optimize GL_CODE_COMBINATIONS performance, add a non-unique concatenated index on just the active segments of the GL_CODE_COMBINATIONS table. Per development, this index is required no matter where the performance problem is, as it also improves performance for many other GL functionalities.

If such an index already exists, make sure that the segments with the highest number of distinct values still come first, as the spread of values may change over time.

The order of the columns in the CREATE INDEX command is decisive: the more selective an index is, the better the performance. To achieve that, the segments with more distinct values in the table must be placed first.



To find what indexes already exist, use this SQL:

SELECT aic.table_name,
       aic.index_name,
       ai.uniqueness,
       aic.column_name,
       aic.column_position
FROM   all_ind_columns aic,
       all_indexes ai
WHERE  ai.index_name = aic.index_name
AND    ai.owner = aic.index_owner
AND    ai.table_name = 'GL_CODE_COMBINATIONS'
ORDER BY aic.index_name, aic.column_position;

To determine the order of the segments in the concatenated index, execute the following PL/SQL block, which counts the number of distinct values for each segment used in the GL_CODE_COMBINATIONS table (as the index is for all Charts of Accounts, we do not distinguish between COAs):

set serveroutput on



DECLARE
   v_FreqCount NUMBER;
BEGIN
   FOR i IN 1..30 LOOP
      -- count the distinct values held in each SEGMENTn column
      EXECUTE IMMEDIATE
         'SELECT COUNT(DISTINCT SEGMENT' || TO_CHAR(i) || ')
            FROM gl_code_combinations'
         INTO v_FreqCount;
      IF v_FreqCount <> 0 THEN
         DBMS_OUTPUT.PUT_LINE(
            'SEGMENT' || TO_CHAR(i) || ' Frequency = ' || TO_CHAR(v_FreqCount));
      END IF;
   END LOOP;
END;
/

For example, consider that this block has returned the following values:

SEGMENT1 Frequency = 4

SEGMENT2 Frequency = 37

SEGMENT3 Frequency = 76

SEGMENT4 Frequency = 3221

SEGMENT5 Frequency = 21

SEGMENT6 Frequency = 22

The indexes for individual segments with a small number of distinct values should be disabled in the Segments Definition form (deselect the index column - please see paragraph C.1 above).

In general the best concatenated index would include ALL segments, ordered by decreasing number of distinct values. From the data above, the best order to make the most selective index is:

  • SEGMENT4
  • SEGMENT3
  • SEGMENT2
  • SEGMENT6
  • SEGMENT5
  • SEGMENT1

In this example we recommend deselecting the index flag on SEGMENT1, as there is no benefit in indexing a segment with a small number of distinct values in the code combinations.



As it is recommended not to have two indexes starting with the same column, you also need to uncheck the index flag on SEGMENT4 in the Key Segments form, because the concatenated index will be used instead (in this example SEGMENT4 is the first column in the concatenated index). If you do not, the single-column index on SEGMENT4 will be recreated the next time the GL Optimizer is executed and you will end up with two indexes starting with SEGMENT4.
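
A quick way to spot indexes that share the same leading column on GL_CODE_COMBINATIONS is a sketch like the following, run after the GL Optimizer and the manual index creation:

SELECT column_name,
       COUNT(*) AS index_count
FROM   all_ind_columns
WHERE  table_name = 'GL_CODE_COMBINATIONS'
AND    column_position = 1
GROUP BY column_name
HAVING COUNT(*) > 1;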



Sign on to SQL*Plus as APPS to manually create the new concatenated index GL_CODE_COMBINATIONS_CAT. In our example you should execute the following:

CREATE INDEX GL_CODE_COMBINATIONS_CAT ON gl_code_combinations
   (segment4,
    segment3,
    segment2,
    segment6,
    segment5,
    segment1)
PCTFREE 0
INITRANS 2
MAXTRANS 255
TABLESPACE user_index
STORAGE (INITIAL 1048576
         NEXT 16384
         PCTINCREASE 0
         MINEXTENTS 1
         MAXEXTENTS 20);

When this index is created, you must run the GL Optimizer with Yes for the Maintain Indexes option, to create/drop the required indexes for the segments.

Be aware that if the concatenated index uses the name of the standard segment index, it may be replaced/deleted by the GL Optimizer. When this happens the concatenated index must be recreated manually.

F.2) Indexes for GL_INTERFACE

If you still have really poor performance running Journal Import, and the log file shows gllger as the last module entered, create a similar index on the GL_INTERFACE table as well.

CREATE INDEX GL_INTERFACE_CAT ON gl_interface
   (segment4,
    segment3,
    segment2,
    segment6,
    segment5,
    segment1);

Further improvements can be achieved by adding non-unique single-column indexes for each segment column of GL_INTERFACE, matching the single-column segment indexes on GL_CODE_COMBINATIONS.

So, for the present example, also create single-column indexes on segments 3, 2, 6 and 5, as sketched below.
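
This is a minimal sketch of those single-column indexes; the index names are illustrative (not standard names), and tablespace/storage clauses can be added as appropriate for your installation:

CREATE INDEX gl_interface_seg3 ON gl_interface (segment3);
CREATE INDEX gl_interface_seg2 ON gl_interface (segment2);
CREATE INDEX gl_interface_seg6 ON gl_interface (segment6);
CREATE INDEX gl_interface_seg5 ON gl_interface (segment5);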



There are 4 standard indexes defined on the GL_INTERFACE table (GL_INTERFACE_N1 through _N4).

If Journal Import is very slow, or appears to be hanging in module gllcje or gllidx, then the problem may be related to index GL_INTERFACE_N2. It is defined on the (REQUEST_ID, JE_HEADER_ID, STATUS, CODE_COMBINATION_ID) columns and can be redefined to make it more selective by including the column JE_LINE_NUM.

Sign on to SQL*Plus as APPS and execute the following:

DROP INDEX GL_INTERFACE_N2;



CREATE INDEX gl_interface_n2 ON gl_interface
    (request_id,
     je_header_id,
     status,
     code_combination_id,
     je_line_num)
PCTFREE 0
INITRANS 2
MAXTRANS 255
TABLESPACE user_index
STORAGE (INITIAL 1048576
         NEXT 16384
         PCTINCREASE 0
         MINEXTENTS 1
         MAXEXTENTS 20);

Notes:

  • The tablespace and storage clauses may vary with your installation.
  • This is not the standard index definition, so Oracle Support may ask you to restore the standard index definition at any time in the future.

F.3) Custom Indexes

In many cases of bad performance in a GL program, the cause is related to custom indexes or triggers on GL tables.

If a GL program has unusual performance issues, please ask your DBA to check and confirm that there aren't any custom indexes or triggers on GL tables such as GL_BALANCES, GL_CODE_COMBINATIONS, GL_JE_BATCHES, GL_JE_HEADERS and GL_JE_LINES, except the ones recommended in previous paragraphs of this document or otherwise recommended by Support.
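
This is a minimal sketch of such a check against the data dictionary (run as a DBA user; the GL owner and table list match the tables named above):

-- Indexes on the key GL tables
SELECT table_name, index_name, uniqueness
FROM   dba_indexes
WHERE  table_owner = 'GL'
AND    table_name IN ('GL_BALANCES', 'GL_CODE_COMBINATIONS',
                      'GL_JE_BATCHES', 'GL_JE_HEADERS', 'GL_JE_LINES')
ORDER BY table_name, index_name;

-- Triggers on the same tables
SELECT table_name, trigger_name, status
FROM   dba_triggers
WHERE  table_owner = 'GL'
AND    table_name IN ('GL_BALANCES', 'GL_CODE_COMBINATIONS',
                      'GL_JE_BATCHES', 'GL_JE_HEADERS', 'GL_JE_LINES')
ORDER BY table_name, trigger_name;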



If custom indexes or triggers exist, it is recommended to take a note of the custom object definitions, drop them, and retest the program performance. Note that this will be required by Development if a performance bug needs to be logged.

A good way to look at the existing indexes and triggers is to use the All RDA Data Collection test for the product "SQLGL" (see Oracle General Ledger (GL) Diagnostics Catalog, Document 1416402.1).

The standard objects are described in the Oracle E-Business Suite Electronic Technical Reference Manual (eTRM).

G) Multi-Table Journal Import

Multi-Table Journal Import has been available since the old 11i.GL.D version.

Journal Import is able to run against tables other than GL_INTERFACE, as long as they contain the same columns as the GL_INTERFACE table. You can continue using the GL_INTERFACE table without any changes to custom programs which populate the GL_INTERFACE table
or submit the Journal Import program.
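
As an illustration, a custom interface table with the same structure can be created from GL_INTERFACE itself; xx_gl_interface is a hypothetical name, and the table must still be made known to Journal Import through the standard Multi-Table Journal Import setup (not covered here):

-- hypothetical custom interface table with the same columns as GL_INTERFACE
CREATE TABLE xx_gl_interface AS
   SELECT * FROM gl_interface
   WHERE  1 = 2;   -- copies the structure only, no rows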

When running Multi-Table Journal Import, one concurrent process is launched for each interface table to be processed. Using alternative tables can help you improve performance, since Journal Import processes high volumes of data more efficiently from multiple tables than from the single GL_INTERFACE table.
