OCP Blog
Super DBA
ORA-00600: internal error code, arguments: [qctcte1], [0], [], [], [], [], [], []
by Vazha Mantua Thursday, August 18, 2011 5:26 PM
In a test DB we applied the patchset from 10.2.0.2 to 10.2.0.4; after this procedure we could not compile a user-specified view, which failed with: ORA-00600: internal error code, arguments: [qctcte1], [0], [], [], [], [], [], []
On Metalink there are a lot of articles and scenarios for this error, but I will focus only on the one which helped us.
Problem
--------
SQL statements with aggregations and unions fail during parse with
ORA-600: internal error code, arguments: [qctcte1], [0], [], [], [], [], [], []


--------------------
Solution:
Disabling complex view merging makes it work fine:
alter session set "_complex_view_merging"=false;
The init parameter "_complex_view_merging" controls complex view merging.
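
To apply the workaround and recompile the failing view, here is a minimal sketch (the owner and view name are only placeholders for your own object):

-- session-level workaround, then recompile the view that failed to compile
alter session set "_complex_view_merging"=false;
alter view app_owner.problem_view compile;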


Now ocp.community.ge is on Facebook
by Vazha Mantua Wednesday, August 17, 2011 6:03 PM

My friends, now you can find us on Facebook on the page ocp.community.ge. Please like this page :)

From this moment you can be notified about new articles from the Facebook page.

 

 

Vazha Mantua


How to prevent a user with ALTER USER privileges from changing the passwords of the SYS and SYSTEM users
by Vazha Mantua Wednesday, August 17, 2011 10:26 AM

Good Day My friends,

 

Today we will discuss a small security issue concerning users who have the ALTER USER privilege. Our task is to prevent such users from changing the passwords of the system users.

On Metalink we found an article about this issue, Note ID 271077.1, which tells us to create a system trigger to prevent this case.

 

SQL> conn  / as sysdba
Connected.

SQL> CREATE or REPLACE TRIGGER prohibit_alter_SYSTEM_SYS_pass
AFTER ALTER on SCOTT.schema
BEGIN
IF ora_sysevent='ALTER' and ora_dict_obj_type = 'USER' and
(ora_dict_obj_name = 'SYSTEM' or ora_dict_obj_name = 'SYS')
THEN
RAISE_APPLICATION_ERROR(-20003,
'You are not allowed to alter SYSTEM/SYS user.');
END IF;
END;
/

Trigger created.

 

Now we can see a result:

SQL> conn scott/tiger
Connected.
SQL>alter user system identified by manager;
alter user system identified by manager
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20003: You are not allowed to alter SYSTEM/SYS user.
ORA-06512: at line 5
SQL> alter user sys identified by manager;
alter user sys identified by manager
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20003: You are not allowed to alter SYSTEM/SYS user.
ORA-06512: at line 5
SQL> alter user dbsnmp identified by dbsnmp;
User altered.

 

 

But there is one mistake, which was found by my student Mariam Kupatadze.

 

The password of user SYSTEM was actually changed: the trigger fires after the ALTER USER statement has already completed. The correct version must fire BEFORE ALTER in order to prevent the password change!

 

Finally, here is the correct version of the trigger:

SQL> CREATE or REPLACE TRIGGER prohibit_alter_SYSTEM_SYS_pass 
BEFORE ALTER on SCOTT.schema
BEGIN
IF ora_sysevent='ALTER' and ora_dict_obj_type = 'USER' and
(ora_dict_obj_name = 'SYSTEM' or ora_dict_obj_name = 'SYS')
THEN
RAISE_APPLICATION_ERROR(-20003,
'You are not allowed to alter SYSTEM/SYS user.');
END IF;
END;
/

Trigger created.
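
To confirm the difference, you can repeat the test and then try to log on with the new password; with the BEFORE ALTER version the statement fails before the password is touched, so the logon should be rejected. The output below is only illustrative:

SQL> conn scott/tiger
Connected.
SQL> alter user system identified by manager;
alter user system identified by manager
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20003: You are not allowed to alter SYSTEM/SYS user.
SQL> conn system/manager
ERROR:
ORA-01017: invalid username/password; logon denied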

 

 

 


Jinitiator for windows 7
by Vazha Mantua Friday, July 29, 2011 4:19 PM

 

Good Day all,

 

Today we will discuss a problem which occurs when you migrate to the new Windows 7 platform or update your IE version.

Oracle tells us that JInitiator is not supported for new versions of IE and that you should use Java instead of JInitiator; for Oracle Forms, please use Java.

I agree with this, because JInitiator is an out-of-date product!

 

But if you are not going to change the way you use this product, we found a workaround which fixes the issue.

 

 

Replace the jvm.dll in the JInitiator directory (C:\Program Files\Oracle\JInitiator 1.3.1.22\bin\hotspot\) with this file: http://files.getdropbox.com/u/3353/jvm.dll

In our case we used JInitiator 1.3.1.22.

It should help you.


Step-by-step procedure for creating 2-way replication between Oracle databases
by Vazha Mantua Sunday, July 24, 2011 1:49 PM

Good Day All,

 

Today we will talk about Oracle Streams.

Oracle Streams is the flow of information either within a single database or from one database to another. Oracle Streams can be set up in homogeneous (all Oracle databases) or heterogeneous (non-Oracle and Oracle databases) environments. The Streams setup uses a set of processes and database objects to share data and messages. The database changes (DDL and DML) are captured at the source; those are then staged and propagated to one or more destination databases to be applied there. Message propagation uses Advanced Queuing mechanism within the Oracle databases.

 

Let's see an example: a step-by-step procedure for creating 2-way (master to master) replication between two Oracle databases (version 10.2.0.4).

We have two identical databases, named ORCL and TEST.

For Step 1 through Step 5, use the database user SYS.

 

 

Step 1: Create stream administration user in both databases.

----------ORCL

create user streamadmin identified by streamadmin default tablespace users;

----------TEST

create user streamadmin identified by streamadmin default tablespace users;

Step 2: Required grants to the user streamadmin.

----------on both of the databases(ORCL&TEST)

grant dba,select_catalog_role to streamadmin;

The following call grants the privileges needed by a user to be a Streams administrator:

begin

dbms_streams_auth.grant_admin_privilege('streamadmin',true);

end;

 

Step 3: We will use the default HR schema and set up this replication for the table COUNTRIES.

Step 4: Check the database parameters required for setting up Streams replication.

Check the job_queue_processes parameter; it should not be 0. You can also set the global_names parameter to TRUE, but if you name the database links exactly after the Oracle SIDs, setting it to TRUE is not necessary. A quick check is sketched below.
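
A quick way to check and, if necessary, set the parameter (a sketch; the value 10 is only an example):

SQL> show parameter job_queue_processes
SQL> alter system set job_queue_processes=10;  -- only if it is currently 0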

Step 5: Enable supplemental logging on the tables of the HR user in both of the databases

----------on both of the databases(ORCL&TEST)

ALTER TABLE HR.countries ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

 

Step 6: Create database links between the Streams administrator users in both databases.

Log on as the streamadmin user…

----------ORCL

CREATE DATABASE LINK TEST CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'TEST';

----------TEST

CREATE DATABASE LINK ORCL CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'ORCL';
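
A simple sanity check of both links (sketch):

----------ORCL
select * from dual@TEST;

----------TEST
select * from dual@ORCL;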

Step 7: Create Stream Queues under the streamadmin user to apply and capture the database changes to be replicated.

This also needs to be run on both databases as streamadmin user.

----------on both of the databases(ORCL&TEST)

begin

dbms_streams_adm.set_up_queue(queue_table => 'apply_q', queue_name => 'apply_q', queue_user => 'streamadmin');

end;

begin

dbms_streams_adm.set_up_queue(queue_table => 'capture_q',queue_name => 'capture_q',queue_user => 'streamadmin');

end;
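
To verify that the queues were created, a quick sketch against the data dictionary:

select name, queue_table from dba_queues where owner = 'STREAMADMIN';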

Step 8: Set up data capture on both databases:

Log on as the streamadmin user…

----------on both of the databases(ORCL&TEST)

BEGIN

DBMS_STREAMS_ADM.add_table_rules

(table_name => 'HR.COUNTRIES'

,streams_type => 'CAPTURE'

,streams_name => 'CAPTURE_STREAM'

,queue_name => 'CAPTURE_Q'

,include_dml => TRUE

,include_ddl => TRUE

,inclusion_rule => TRUE

);

END;

Step 9: Set up data apply on both databases:

Log on as the streamadmin user…

----------TEST

BEGIN

DBMS_STREAMS_ADM.add_table_rules (

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_TYPE => 'APPLY',

STREAMS_NAME => 'APPLY_STREAM',

QUEUE_NAME => 'APPLY_Q',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'ORCL');

END;

----------on ORCL

BEGIN

DBMS_STREAMS_ADM.add_table_rules (

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_TYPE => 'APPLY',

STREAMS_NAME => 'APPLY_STREAM',

QUEUE_NAME => 'APPLY_Q',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'TEST');

END;

Step 10: Set up the propagation process on both databases:

This basically sets up the relationship between the capture process on one database and the apply process on the other database.

Log on as the streamadmin user…

----------on ORCL

BEGIN

DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_NAME => 'ORCL_TO_TEST',

SOURCE_QUEUE_NAME =>'CAPTURE_Q',

DESTINATION_QUEUE_NAME => 'APPLY_Q@TEST',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'ORCL');

END;

----------on TEST

BEGIN

DBMS_STREAMS_ADM.add_table_propagation_rules(

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_NAME => 'TEST_TO_ORCL',

SOURCE_QUEUE_NAME =>'CAPTURE_Q',

DESTINATION_QUEUE_NAME => 'APPLY_Q@ORCL',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'TEST');

END;

Step 11: Set up the instantiation SCN on the ORCL and TEST DBs.

----------on TEST

DECLARE

ISCN NUMBER;

BEGIN

ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();

DBMS_APPLY_ADM.set_table_instantiation_scn@ORCL(source_object_name => 'HR.COUNTRIES',source_database_name => 'TEST',instantiation_scn => ISCN);

END;

----------on ORCL

DECLARE

ISCN NUMBER;

BEGIN

ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();

DBMS_APPLY_ADM.set_table_instantiation_scn@TEST(source_object_name => 'HR.COUNTRIES',source_database_name => 'ORCL',instantiation_scn => ISCN);

END;

 

Step 12: Start the capture and apply processes:

Setting the disable_on_error parameter to ‘N’ allows the apply process to continue applying row LCRs even when it encounters errors. The default value is ‘Y’ which disables the apply process automatically on the first error encountered.

----------on both DBs

BEGIN

DBMS_APPLY_ADM.SET_PARAMETER (APPLY_NAME => 'APPLY_STREAM', PARAMETER => 'DISABLE_ON_ERROR', VALUE => 'N');

DBMS_APPLY_ADM.START_APPLY (APPLY_NAME => 'APPLY_STREAM');

DBMS_CAPTURE_ADM.START_CAPTURE (CAPTURE_NAME => 'CAPTURE_STREAM');

END;

------------------------------------------------------------------------

Now you can test your 2-way replication: all changes to HR.COUNTRIES on the ORCL DB are shipped to the TEST DB, and all changes on the TEST DB are shipped to the ORCL DB.
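
For example, an illustrative test (the country row below is made up):

----------ORCL
insert into hr.countries (country_id, country_name, region_id) values ('ZZ', 'Testland', 1);
commit;

----------TEST (allow a few seconds for propagation)
select * from hr.countries where country_id = 'ZZ';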

 

Useful views for monitoring the processes are listed below, followed by a sample check:

 

dba_apply,

dba_apply_error,

dba_apply_progress ,

dba_apply_enqueue,

dba_capture,

dba_capture_parameters,

dba_capture_prepared_tables
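
For example, a quick health check (sketch):

select capture_name, status from dba_capture;
select apply_name, status from dba_apply;
select apply_name, local_transaction_id, error_message from dba_apply_error;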

 

 

 

 

 


Logminer Gives Error - In Memory Undo is unsupported, What is In memory undo?
by Vazha Mantua Tuesday, July 12, 2011 1:44 PM

 

 

Hello All,

Oracle uses undo segments for these purposes.

Undo records are used to:

1. Roll back transactions when a ROLLBACK statement is issued

2. Recover the database

3. Provide read consistency

4. Analyze data as of an earlier point in time by using Oracle Flashback Query

5. Recover from logical corruptions using Oracle Flashback features

-----------------------------------------------------------------------------------------------------------------------

In Memory undo!

 

In 10g, Oracle began to use a new feature called in-memory undo, known as IMU. Instead of undo segments, it uses an in-memory structure which stores the undo information. Oracle still creates undo, because it must still provide commit, rollback, and read consistency capabilities.

This has many implications. Any time an Oracle buffer is changed, the associated change (called a redo vector) is written into the redo log buffer. Unfortunately, even when an undo segment is changed, that change must also be recorded in the redo log buffer. But since IMUs are not undo segments, their changes do not generate redo! So IMU reduces the amount of redo an instance generates.

When we use traditional undo segments, for each change the redo log buffer receives information about both the buffer cache block change and the undo block change.

Another amazing feature of IMU is that when it comes time to transform the in-memory undo into undo segment format (at commit), multiple IMUs can be consolidated into a single undo segment write.

When we use IMU, for each change the redo log buffer receives information only for the buffer cache block; later, when the IMU is flushed, the combined undo information is written to the redo log buffer.

As far as we know, Oracle uses IMU for small transactions and falls back to traditional undo segments for large transactions, but we do not know the exact rules; if we find an article about it, we will let you know!

 

 

--------------------------------------------------------------------------------------------------------------------------

Logminer Gives Error - In Memory Undo is unsupported!

 

Why does LogMiner give this error?

We see that the structure of the redo entries is different; with IMU, LogMiner cannot reconstruct the full picture of the transaction.

 

Now let's look at Metalink Note ID 428167.1, which gives us a solution to avoid the error "In Memory Undo is unsupported" in the future:

1. Logminer cannot always populate all the fields of v$logmnr_contents; this is because the redo may or may not have all the information that we need for every column. Adding supplemental logging will help more information to be logged in the generated redo, helping populate more values. This can be done by the following commands:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
SQL> ALTER SYSTEM SWITCH LOGFILE;
Additionally, it is not unusual for 10g to generate more redo than previously seen in earlier versions.

 


2: Logminer cannot always populate all the fields of v$logmnr_contents because the redo may or may not have all the information that we need for every column; you need to enable supplemental logging on the database following Note 186150.1.
Supplemental logging is required to receive reliable and consistent information from LogMiner. Without supplemental logging enabled, the redo may not contain enough information for LogMiner to construct the correct sql_redo (or any sql_redo in the case of in-memory undo, IMU).
LogMiner does not work with IMU, and turning supplemental logging on disables IMU. Please enable supplemental logging as per Note 186150.1 to eliminate these messages and check whether you still encounter the UNSUPPORTED messages.

3: Logminer may not be able to find sql_redo for transactions that were created before supplemental logging was enabled. It will only work for redo which contains the information needed to extract the correct sql_redo.
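
To check whether supplemental logging is already enabled at the database level, a quick sketch:

select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;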


DB_FILE_MULTIBLOCK_READ_COUNT parameter
by Vazha Mantua Friday, July 8, 2011 4:54 PM

DB_FILE_MULTIBLOCK_READ_COUNT
is one of the parameters you can use to minimize I/O during table scans.
It specifies the maximum number of blocks read in one I/O operation during a sequential scan.
The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count,
and whether parallel execution is being utilized for the operation.


Let's see an example.

Set db_file_multiblock_read_count to 1:

1. alter system set db_file_multiblock_read_count=1
2. alter system flush buffer_cache

Now we look at the SQL plan for the following statement: "select /*+ full(a) */ count(*) from TABLE_1 a"


SQL> explain plan for
select /*+ full(a) */count(*) from TABLE_1 a
;

Explained

SQL> select * from table (dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 635235748
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21893 (2)| 00:05:22 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| TABLE_1 | 8567K| 21893 (2)| 00:05:22 |
--------------------------------------------------------------------------

 

The execution time of the statement is 387 seconds.

--------------------------------------------------------------------------------


Now change db_file_multiblock_read_count:

1. alter system set db_file_multiblock_read_count=64
2. Run a statement to gather new system statistics:
begin
dbms_stats.gather_system_stats();
end;
3. alter system flush buffer_cache


SQL> explain plan for
2 select /*+ full(a) */count(*) from TABLE_1 a;

Explained

SQL> select * from table (dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 635235748
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 23709 (3)| 00:03:57 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| TABLE_1 | 8567K| 23709 (3)| 00:03:57 |
--------------------------------------------------------------------------


The execution time of the statement is 69 seconds.


The result shows that, in our case, increasing db_file_multiblock_read_count decreased the runtime of the statement but increased the cost.

The value of db_file_multiblock_read_count can have a significant impact on the overall database performance and it is not easy for the administrator to determine its most appropriate value.

Oracle Database 10g Release 2 automatically selects the appropriate value for this parameter depending on the operating system optimal I/O size and the size of the buffer cache.

Before 10g R2, DBAs used the db_file_multiblock_read_count initialization parameter to tell Oracle how many blocks to retrieve in a single I/O operation.

Before Oracle10g R2, the permitted values for db_file_multiblock_read_count were platform-dependent. The most common settings ranged from 4 to 64 blocks per single multi-block I/O execution.
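
To check the value that is actually in effect, and whether it was set explicitly or chosen by the database, a sketch:

select value, isdefault from v$parameter where name = 'db_file_multiblock_read_count';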


Oracle Database 10g: Real Application Clusters Administrator Certified Expert
by Vazha Mantua Tuesday, June 28, 2011 12:32 PM

Good Day All,

 

During the last 2-3 weeks I was preparing for exam 1Z0-048, and today I have successfully passed it.

 

I used the ebook:

Oracle® Database
Oracle Clusterware and Oracle Real Application Clusters
Administration and Deployment Guide
10g Release 2 (10.2)
B14197-15

 

and tests from test4actual.com.

 

The result of the exam is that I became an Oracle Database 10g: Real Application Clusters Administrator Certified Expert.

 

If you have any questions about exam 1Z0-048, don't hesitate to email me.


no_unnest, a useful hint for subqueries!
by Vazha Mantua Wednesday, May 25, 2011 12:20 PM

Good Day,

Today we discuss optimizing SQL statements with subqueries. In a SQL plan, a subquery may be either nested or unnested.

Subqueries are nested when they appear in the WHERE clause of the parent statement. When Oracle Database evaluates a statement with a nested subquery, it must evaluate the subquery portion multiple times and may overlook some efficient access paths or joins.

Subquery unnesting unnests and merges the body of the subquery into the body of the statement that contains it, allowing the optimizer to consider them together when evaluating access paths and joins. The optimizer can unnest most subqueries, with some exceptions. Those exceptions include hierarchical subqueries and subqueries that contain a ROWNUM pseudocolumn, one of the set operators, a nested aggregate function, or a correlated reference to a query block that is not the immediate outer query block of the subquery.

Assuming no restrictions exist, the optimizer automatically unnests some (but not all) of the following nested subqueries:

  • Uncorrelated IN subqueries

  • IN and EXISTS correlated subqueries, as long as they do not contain aggregate functions or a GROUP BY clause

You can enable extended subquery unnesting by instructing the optimizer to unnest additional types of subqueries:

  • You can unnest an uncorrelated NOT IN subquery by specifying the HASH_AJ or MERGE_AJ hint in the subquery.

  • You can unnest other subqueries by specifying the UNNEST hint in the subquery.

 

 

Now let's talk about an example of how we can improve performance with the no_unnest hint.

 

SELECT /*+ index(v1.table1 table1_IX1) */
v1.col1,
v1.col2,
v1.col3,
v1.col4,
v1.col5
FROM VIEW1 v1
WHERE (v1.code = :B1 And v1.ID = Nvl(null, v1.ID) And
v1.ID In
(Select
v2.sid
From VIEW2 v2
Where ('N' = 'N' And v2.Key1 = Nvl(null, Key1) And
NVL(null, Active_Flag) = Active_Flag And
NVL(null, Inform_Flag) = Inform_Flag)
Or ('Y' = 'Y' and :b2 = KEY1 and Active_Flag = 'Y')) )

The SQL plan of this statement is:

 

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 244
| 1 | HASH JOIN SEMI | | 1 | 244
| 2 | NESTED LOOPS OUTER | | 1 | 231
| 3 | TABLE ACCESS BY INDEX ROWID | TABLE1 | 1 | 110
| 4 | INDEX RANGE SCAN | TABLE1_IX1 | 2 |
| 5 | TABLE ACCESS BY INDEX ROWID | TABLE2 | 1 | 121
| 6 | INDEX UNIQUE SCAN | TABLE2_PK | 1 |
| 7 | VIEW | VW_NSO_1 | 2 | 26
| 8 | CONCATENATION | | |
| 9 | TABLE ACCESS BY INDEX ROWID | TABLE3 | 1 | 21
| 10 | NESTED LOOPS | | 1 | 49
| 11 | NESTED LOOPS | | 1 | 28
| 12 | TABLE ACCESS BY INDEX ROWID| TABLE4 | 1 | 18
| 13 | INDEX UNIQUE SCAN | TABLE4_PK | 1 |
| 14 | TABLE ACCESS BY INDEX ROWID| TABLE5 | 1 | 10
| 15 | INDEX RANGE SCAN | TABLE5_PK | 1 |
| 16 | INDEX RANGE SCAN | TABLE1_IX1 | 1 |
| 17 | TABLE ACCESS BY INDEX ROWID | TABLE5 | 1 | 10
| 18 | NESTED LOOPS | | 1 | 49
| 19 | NESTED LOOPS | | 1 | 39
| 20 | TABLE ACCESS FULL | TABLE3 | 4559 | 95739
| 21 | TABLE ACCESS BY INDEX ROWID| TABLE4 | 1 | 18
| 22 | INDEX UNIQUE SCAN | TABLE4_PK | 1 |
| 23 | INDEX RANGE SCAN | TABLE5_PK | 1 |
--------------------------------------------------------------------------------

Now write the no_unnest hint in the subquery:

 

SELECT /*+ index(v1.table1 table1_IX1) */
v1.col1,
v1.col2,
v1.col3,
v1.col4,
v1.col5
FROM VIEW1 v1
WHERE (v1.code = :B1 And v1.ID = Nvl(null, v1.ID) And
v1.ID In
(Select /*+ no_unnest */
v2.sid
From VIEW2 v2
Where ('N' = 'N' And v2.Key1 = Nvl(null, Key1) And
NVL(null, Active_Flag) = Active_Flag And
NVL(null, Inform_Flag) = Inform_Flag)
Or ('Y' = 'Y' and :b2 = KEY1 and Active_Flag = 'Y')) )

The cost of the original plan was 9192.

 

 

Let's see the new SQL plan:

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 231 |
| 1 | FILTER | | | |
| 2 | NESTED LOOPS OUTER | | 1 | 231 |
| 3 | TABLE ACCESS BY INDEX ROWID | TABLE1 | 1 | 110 |
| 4 | INDEX RANGE SCAN | TABLE1_IX1 | 2 | |
| 5 | TABLE ACCESS BY INDEX ROWID | TABLE2 | 1 | 121 |
| 6 | INDEX UNIQUE SCAN | TABLE2_PK | 1 | |
| 7 | TABLE ACCESS BY INDEX ROWID | TABLE5 | 1 | 10 |
| 8 | NESTED LOOPS | | 1 | 49 |
| 9 | NESTED LOOPS | | 1 | 39 |
| 10 | TABLE ACCESS BY INDEX ROWID| TABLE3 | 3 | 63 |
| 11 | INDEX RANGE SCAN | TABLE3_IX1 | 3 | |
| 12 | TABLE ACCESS BY INDEX ROWID| TABLE4 | 1 | 18 |
| 13 | INDEX UNIQUE SCAN | TABLE4_PK | 1 | |
| 14 | INDEX RANGE SCAN | TABLE5_PK | 1 | |

 

The cost of this plan was 16, which is 574 times less!!!


Another Solution for latch library cache, Troubleshooting this event
by Vazha Mantua Tuesday, May 3, 2011 4:36 PM

Earlier we discussed the library cache latch and gave some solutions to avoid this problem in the database.

 

Now let's discuss a troubleshooting scenario for the latch: library cache event.

 

First of all we should find the latch address for the problem statement:

select p1text,p1raw from v$session_wait a
where a.EVENT='latch: library cache'

Here the value of p1text should be "address" and p1raw is the latch address of the problem latch.

Now that we have the latch address, we should find the latch number.

 

select a.CHILD# from v$latch_children a
where a.ADDR=p1raw

 

 

We now know the child number of the possible bottleneck.
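
The two lookups above can also be combined into one query while the wait is occurring; a minimal sketch:

select c.child#, c.name
from v$session_wait w, v$latch_children c
where w.event = 'latch: library cache'
and c.addr = w.p1raw;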

----------------------------------------------------------------------------------------------------------------------------------------

Let's show that we cannot find the exact SQL statement if we know only the latch address.

We can get the definition of the v$latch_children view with this statement:

 

select * from v$fixed_view_definition a
where a.VIEW_NAME='GV$LATCH_CHILDREN'

The result of this statement tells us that the view v$latch_children is based on the x$ksllt object.

 

Now find which data dictionary views are built on x$ksllt with this statement:

 

select * from v$fixed_view_definition a
where upper(a.VIEW_DEFINITION) like '%X$KSLLT%'
or upper(a.VIEW_DEFINITION) like '%GV$LATCH%'

The results of this select are:

GV$LATCH
GV$LATCH_CHILDREN
GV$LATCH_PARENT
V$LATCH
V$LATCHHOLDER
V$LATCHNAME
V$LATCH_CHILDREN
V$LATCH_MISSES
V$LATCH_PARENT

 

In these views there are no SQL statements.

---------------------------------------------------------------------------------------------------------------------------------------

 

What can we do to find the possible problematic SQL statement if we only know the number of the child latch?

 

All SQL statements are served by a number of latches given by the parameter _kgl_latch_count, which is the total number of library cache child latches in the system; the maximum value of this parameter is 66.
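
To see the current value of this hidden parameter, a sketch (it reads X$ tables, so it must be run as SYS):

select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and a.ksppinm = '_kgl_latch_count';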

 

Back to our case.

 

To find the possible problematic statements we should run this statement:

 

select * from v$sqlarea a
where a.CHILD_LATCH=child#
order by a.EXECUTIONS desc

 

The problematic SQL statements should be the ones with the highest number of executions.

 

There is also a very interesting view named v$db_object_cache.

With this view and these SQL statements we can find the cause of the library cache latch contention:

 

 

1.

select * from v$db_object_cache a
where a.CHILD_LATCH=child#
order by a.EXECUTIONS

2.

select * from v$db_object_cache a
where a.CHILD_LATCH=child#
order by a.lock

3.

select * from v$db_object_cache a
where a.CHILD_LATCH=child#
order by a.pins

If you cannot find the cause of the library cache latch contention, you can restart the database; SQL statements will then be assigned new child latch numbers, and you can compare the statements against the new problematic child latch number.

