OCP Blog
Super DBA
Step-by-step procedure for creating 2-way replication for Oracle databases
by Vazha Mantua Sunday, July 24, 2011 1:49 PM

Good Day All,

 

Today we will talk about Oracle Streams.

Oracle Streams is the flow of information either within a single database or from one database to another. Oracle Streams can be set up in homogeneous (all Oracle databases) or heterogeneous (non-Oracle and Oracle databases) environments. The Streams setup uses a set of processes and database objects to share data and messages. The database changes (DDL and DML) are captured at the source; they are then staged and propagated to one or more destination databases to be applied there. Message propagation uses the Advanced Queuing mechanism within the Oracle databases.

 

Let's see an example: a step-by-step procedure for creating a 2-way (master-to-master) replica between two Oracle databases (version 10.2.0.4).

We have two identical databases, named ORCL and TEST.

For Steps 1-5, connect as the SYS database user.

 

 

Step 1: Create the Streams administration user in both databases.

----------ORCL

create user streamadmin identified by streamadmin default tablespace users;

----------TEST

create user streamadmin identified by streamadmin default tablespace users;

Step 2: Grant the required privileges to the user streamadmin.

----------on both of the databases(ORCL&TEST)

grant dba,select_catalog_role to streamadmin;

The following block grants the privileges needed for a user to act as a Streams administrator:

begin

dbms_streams_auth.grant_admin_privilege('streamadmin',true);

end;

 

Step 3: We will use the default HR schema and its COUNTRIES table for this replication.

Step 4: Check the database parameters required for setting up Streams replication.

Check the JOB_QUEUE_PROCESSES parameter; it should not be 0. You can also set the GLOBAL_NAMES parameter to TRUE, but if you name the database links exactly after the Oracle SIDs, setting it to TRUE is not necessary. A quick check is shown below.
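A minimal sketch of the checks (run as SYS; the value 4 below is only an illustrative setting, not a recommendation):

SELECT name, value FROM v$parameter
WHERE name IN ('job_queue_processes', 'global_names');

ALTER SYSTEM SET job_queue_processes = 4 SCOPE = BOTH;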

Step 5: Enable supplemental logging on the tables of the HR user in both of the databases

----------on both of the databases(ORCL&TEST)

ALTER TABLE HR.countries ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

 

Step 6: Create database links between the Streams administrator users in both databases.

Log on as the streamadmin user.

----------ORCL

CREATE DATABASE LINK TEST CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'TEST';

----------TEST

CREATE DATABASE LINK ORCL CONNECT TO STREAMADMIN IDENTIFIED BY streamadmin USING 'ORCL';
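Before continuing, it is worth verifying that each link resolves; a simple sanity check:

----------ORCL
SELECT sysdate FROM dual@TEST;

----------TEST
SELECT sysdate FROM dual@ORCL;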

Step 7: Create Streams queues under the streamadmin user to capture and apply the database changes to be replicated.

This also needs to be run on both databases as streamadmin user.

----------on both of the databases(ORCL&TEST)

begin

dbms_streams_adm.set_up_queue(queue_table => 'apply_q', queue_name => 'apply_q', queue_user => 'streamadmin');

end;

begin

dbms_streams_adm.set_up_queue(queue_table => 'capture_q',queue_name => 'capture_q',queue_user => 'streamadmin');

end;

Step 8: Set up data capture on both databases:

Log on as the streamadmin user.

----------on both of the databases(ORCL&TEST)

BEGIN

DBMS_STREAMS_ADM.add_table_rules

(table_name => 'HR.COUNTRIES'

,streams_type => 'CAPTURE'

,streams_name => 'CAPTURE_STREAM'

,queue_name => 'CAPTURE_Q'

,include_dml => TRUE

,include_ddl => TRUE

,inclusion_rule => TRUE

);

END;

Step 9: Set up data apply on both databases:

Log on as the streamadmin user.

----------TEST

BEGIN

DBMS_STREAMS_ADM.add_table_rules (

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_TYPE => 'APPLY',

STREAMS_NAME => 'APPLY_STREAM',

QUEUE_NAME => 'APPLY_Q',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'ORCL');

END;

----------on ORCL

BEGIN

DBMS_STREAMS_ADM.add_table_rules (

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_TYPE => 'APPLY',

STREAMS_NAME => 'APPLY_STREAM',

QUEUE_NAME => 'APPLY_Q',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'TEST');

END;

Step 10: Set up the propagation process on both databases:

It basically sets up the relationship between the capture process on one database and the apply process on the other database.

Log on as the streamadmin user.

----------on ORCL

BEGIN

DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_NAME => 'ORCL_TO_TEST',

SOURCE_QUEUE_NAME =>'CAPTURE_Q',

DESTINATION_QUEUE_NAME => 'APPLY_Q@TEST',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'ORCL');

END;

----------on TEST

BEGIN

DBMS_STREAMS_ADM.add_table_propagation_rules(

TABLE_NAME => 'HR.COUNTRIES',

STREAMS_NAME => 'TEST_TO_ORCL',

SOURCE_QUEUE_NAME =>'CAPTURE_Q',

DESTINATION_QUEUE_NAME => 'APPLY_Q@ORCL',

INCLUDE_DML => TRUE,

INCLUDE_DDL => TRUE,

SOURCE_DATABASE => 'TEST');

END;

Step 11: Set up the table instantiation SCN on the ORCL and TEST databases.

----------on TEST

DECLARE

ISCN NUMBER;

BEGIN

ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();

DBMS_APPLY_ADM.set_table_instantiation_scn@ORCL(source_object_name => 'HR.COUNTRIES',source_database_name => 'TEST',instantiation_scn => ISCN);

END;

----------on ORCL

DECLARE

ISCN NUMBER;

BEGIN

ISCN := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();

DBMS_APPLY_ADM.set_table_instantiation_scn@TEST(source_object_name => 'HR.COUNTRIES',source_database_name => 'ORCL',instantiation_scn => ISCN);

END;

 

Step 12: Start the capture and apply processes:

Setting the disable_on_error parameter to ‘N’ allows the apply process to continue applying row LCRs even when it encounters errors. The default value is ‘Y’ which disables the apply process automatically on the first error encountered.

----------on both DBs

BEGIN

DBMS_APPLY_ADM.SET_PARAMETER (APPLY_NAME => 'APPLY_STREAM', PARAMETER => 'DISABLE_ON_ERROR', VALUE => 'N');

DBMS_APPLY_ADM.START_APPLY (APPLY_NAME => 'APPLY_STREAM');

DBMS_CAPTURE_ADM.START_CAPTURE (CAPTURE_NAME => 'CAPTURE_STREAM');

END;

------------------------------------------------------------------------

Now you can test your 2-way replication: all changes to HR.COUNTRIES on the ORCL DB are shipped to the TEST DB, and all changes on the TEST DB are shipped to the ORCL DB. A quick test is sketched below.
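A minimal test sketch (the 'XX' row below is an illustrative value, not part of the standard HR data):

----------on ORCL
INSERT INTO hr.countries (country_id, country_name, region_id) VALUES ('XX', 'Test Country', 1);
COMMIT;

----------on TEST (allow a few seconds for capture, propagation, and apply)
SELECT * FROM hr.countries WHERE country_id = 'XX';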

 

Useful views for monitoring the capture and apply processes are listed below, followed by a couple of example queries:

 

dba_apply
dba_apply_error
dba_apply_progress
dba_apply_enqueue
dba_capture
dba_capture_parameters
dba_capture_prepared_tables
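For example (a sketch with shortened column lists):

-- status of the apply and capture processes
SELECT apply_name, status FROM dba_apply;
SELECT capture_name, status FROM dba_capture;

-- any apply errors
SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;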

 

 

 

 

 

Tags: ,

Logminer Gives Error - In Memory Undo is unsupported, What is In memory undo?
by Vazha Mantua Tuesday, July 12, 2011 1:44 PM

 

 

Hello All,

Oracle uses undo segments for these purposes.

Undo records are used to:

1. Roll back transactions when a ROLLBACK statement is issued

2. Recover the database

3. Provide read consistency

4. Analyze data as of an earlier point in time by using Oracle Flashback Query

5. Recover from logical corruptions using Oracle Flashback features

-----------------------------------------------------------------------------------------------------------------------

In Memory undo!

 

In 10g, Oracle began using a new feature called in-memory undo, known as IMU. Instead of undo segments, it uses in-memory structures that store undo information. Oracle still creates undo, because it must still provide commit, rollback, and read consistency capabilities.

This has many implications. Any time an Oracle buffer is changed, the associated change (called a redo vector) is written into the redo log buffer. Unfortunately, when an undo segment is changed, its change must also be recorded in the redo log buffer. But since IMUs are not undo segments, their changes do not generate redo, so IMU reduces the amount of redo an instance generates.

With traditional undo segments, each change writes information to the redo log buffer about both the changed buffer cache block and the changed undo block.

Another nice feature of IMU appears when it comes time to transform the in-memory undo into undo segment format (at commit): multiple IMUs can be consolidated into a single undo segment write. With IMU, each change writes information to the redo log buffer only for the buffer cache block; later, when the IMU is flushed, the combined undo information is written to the redo log buffer.

As far as we know, Oracle uses IMU for small transactions; for large transactions it works with traditional undo segments, though the exact threshold is not documented. If we find an article about it, we will post it here. One way to observe IMU activity is shown below.
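A simple way to see whether IMU is actually in use is to look at the IMU-related instance statistics (a sketch):

SELECT name, value
FROM v$sysstat
WHERE name LIKE 'IMU%';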

 

 

--------------------------------------------------------------------------------------------------------------------------

Logminer Gives Error - In Memory Undo is unsupported!

 

Why does LogMiner give this error?

The structure of the redo entries is different: with IMU, LogMiner cannot reconstruct the full picture of the transaction at the same point in time.

 

Now see Metalink (My Oracle Support) note 428167.1, which gives a solution for avoiding the "In Memory Undo is unsupported" error in the future:

1. LogMiner cannot always populate all the fields of v$logmnr_contents; this is because the redo may or may not contain all the information needed for every column. Adding supplemental logging causes more information to be logged in the generated redo, helping populate more values. This can be done with the following commands:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
SQL> ALTER SYSTEM SWITCH LOGFILE;
Additionally, it is not unusual for 10g to generate more redo than previously seen in earlier versions.

 


2. LogMiner cannot always populate all the fields of v$logmnr_contents because the redo may or may not contain all the information needed for every column; you need to enable supplemental logging on the database, following Note 186150.1.
Supplemental logging is required to receive reliable and consistent information from LogMiner. Without supplemental logging enabled, the redo may not contain enough information for LogMiner to construct the correct sql_redo (or any sql_redo in the case of in-memory undo, IMU).
LogMiner does not work with IMU, and turning supplemental logging on disables IMU.
Please enable supplemental logging as per Note 186150.1 to eliminate these messages, and report whether you still encounter the UNSUPPORTED messages.

3. LogMiner may not be able to find sql_redo for transactions that were created before supplemental logging was enabled. It will only work for redo that contains the information needed to extract the correct sql_redo. (A query to check the current supplemental logging settings is shown below.)
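To verify which level of supplemental logging is currently enabled, the following query can be used (a minimal check; these columns exist in the 10g v$database view):

SELECT supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
FROM v$database;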

Tags: ,

DB_FILE_MULTIBLOCK_READ_COUNT parameter
by Vazha Mantua Friday, July 8, 2011 4:54 PM

DB_FILE_MULTIBLOCK_READ_COUNT
is one of the parameters you can use to minimize I/O during table scans.
It specifies the maximum number of blocks read in one I/O operation during a sequential scan.
The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count,
and whether parallel execution is being utilized for the operation.


Let's see an example:

Set db_file_multiblock_read_count=1

1. alter system set db_file_multiblock_read_count=1
2. alter system flush buffer_cache

Now let's look at the SQL plan for the following statement: "select /*+ full(a) */ count(*) from TABLE_1 a".


SQL> explain plan for
select /*+ full(a) */count(*) from TABLE_1 a
;

Explained

SQL> select * from table (dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 635235748
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21893 (2)| 00:05:22 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| TABLE_1 | 8567K| 21893 (2)| 00:05:22 |
--------------------------------------------------------------------------

 

The execution time of the statement is 387 seconds.

--------------------------------------------------------------------------------


Now change db_file_multiblock_read_count:

1. alter system set db_file_multiblock_read_count=64
2. Run a statement to collect new system statistics:
begin
dbms_stats.gather_system_stats();
end;
3. alter system flush buffer_cache


SQL> explain plan for
2 select /*+ full(a) */count(*) from TABLE_1 a;

Explained

SQL> select * from table (dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 635235748
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 23709 (3)| 00:03:57 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| TABLE_1 | 8567K| 23709 (3)| 00:03:57 |
--------------------------------------------------------------------------


The execution time of the statement is 69 seconds.


The result shows that, in our case, increasing db_file_multiblock_read_count decreased the runtime of the statement but increased the cost.

The value of db_file_multiblock_read_count can have a significant impact on the overall database performance and it is not easy for the administrator to determine its most appropriate value.

Oracle Database 10g Release 2 automatically selects the appropriate value for this parameter depending on the operating system optimal I/O size and the size of the buffer cache.

Before 10g R2, DBAs used the db_file_multiblock_read_count initialization parameter to tell Oracle how many blocks to retrieve in a single I/O operation.

Before Oracle 10g R2, the permitted values for db_file_multiblock_read_count were platform-dependent. The most common settings ranged from 4 to 64 blocks per single multi-block I/O execution.
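If you want 10g R2 to select the value automatically, simply leave db_file_multiblock_read_count unset. A minimal sketch of checking the current value and removing an explicit setting (assuming the instance uses an spfile; the reset takes effect at the next startup):

-- in SQL*Plus
show parameter db_file_multiblock_read_count

ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';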

Tags:

Oracle Database 10g: Real Application Clusters Administrator Certified Expert
by Vazha Mantua Tuesday, June 28, 2011 12:32 PM

Good Day All,

 

During the last 2-3 weeks I was preparing for exam 1Z0-048, and today I passed it successfully.

 

I used the ebook:

Oracle® Database
Oracle Clusterware and Oracle Real Application Clusters
Administration and Deployment Guide
10g Release 2 (10.2)
B14197-15

 

and practice tests from test4actual.com.

 

As a result of the exam, I have become an Oracle Database 10g: Real Application Clusters Administrator Certified Expert.

 

If you have any questions about exam 1Z0-048, don't hesitate to email me.

Tags:

no_unnest hint, useful hint for subqueries!
by Vazha Mantua Wednesday, May 25, 2011 12:20 PM

Good Day,

Today we discuss optimizing SQL statements that contain subqueries. In a SQL plan, a subquery may be nested or unnested.

Subqueries are nested when they appear in the WHERE clause of the parent statement. When Oracle Database evaluates a statement with a nested subquery, it must evaluate the subquery portion multiple times and may overlook some efficient access paths or joins.

Subquery unnesting unnests and merges the body of the subquery into the body of the statement that contains it, allowing the optimizer to consider them together when evaluating access paths and joins. The optimizer can unnest most subqueries, with some exceptions. Those exceptions include hierarchical subqueries and subqueries that contain a ROWNUM pseudocolumn, one of the set operators, a nested aggregate function, or a correlated reference to a query block that is not the immediate outer query block of the subquery.

Assuming no restrictions exist, the optimizer automatically unnests some (but not all) of the following nested subqueries:

  • Uncorrelated IN subqueries

  • IN and EXISTS correlated subqueries, as long as they do not contain aggregate functions or a GROUP BY clause

You can enable extended subquery unnesting by instructing the optimizer to unnest additional types of subqueries:

  • You can unnest an uncorrelated NOT IN subquery by specifying the HASH_AJ or MERGE_AJ hint in the subquery.

  • You can unnest other subqueries by specifying the UNNEST hint in the subquery (hint placement is sketched below).
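A schematic illustration of hint placement (the tables and columns below are made up purely for illustration):

-- Anti-join unnesting of an uncorrelated NOT IN subquery
SELECT e.name FROM emp e
WHERE e.dept_id NOT IN (SELECT /*+ HASH_AJ */ d.dept_id FROM closed_depts d);

-- Force unnesting of a correlated EXISTS subquery
SELECT e.name FROM emp e
WHERE EXISTS (SELECT /*+ UNNEST */ 1 FROM bonus b WHERE b.emp_id = e.emp_id);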

 

 

Now let's look at an example of how we can improve performance with the NO_UNNEST hint.

 

SELECT /*+ index(v1.table1 table1_IX1) */
v1.col1,
v1.col2,
v1.col3,
v1.col4,
v1.col5
FROM VIEW1 v1
WHERE (v1.code = :B1 And v1.ID = Nvl(null, v1.ID) And
v1.ID In
(Select
v2.sid
From VIEW2 v2
Where ('N' = 'N' And v2.Key1 = Nvl(null, Key1) And
NVL(null, Active_Flag) = Active_Flag And
NVL(null, Inform_Flag) = Inform_Flag)
Or ('Y' = 'Y' and :b2 = KEY1 and Active_Flag = 'Y')) )

The SQL plan of this statement is:

 

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 244
| 1 | HASH JOIN SEMI | | 1 | 244
| 2 | NESTED LOOPS OUTER | | 1 | 231
| 3 | TABLE ACCESS BY INDEX ROWID | TABLE1 | 1 | 110
| 4 | INDEX RANGE SCAN | TABLE1_IX1 | 2 |
| 5 | TABLE ACCESS BY INDEX ROWID | TABLE2 | 1 | 121
| 6 | INDEX UNIQUE SCAN | TABLE2_PK | 1 |
| 7 | VIEW | VW_NSO_1 | 2 | 26
| 8 | CONCATENATION | | |
| 9 | TABLE ACCESS BY INDEX ROWID | TABLE3 | 1 | 21
| 10 | NESTED LOOPS | | 1 | 49
| 11 | NESTED LOOPS | | 1 | 28
| 12 | TABLE ACCESS BY INDEX ROWID| TABLE4 | 1 | 18
| 13 | INDEX UNIQUE SCAN | TABLE4_PK | 1 |
| 14 | TABLE ACCESS BY INDEX ROWID| TABLE5 | 1 | 10
| 15 | INDEX RANGE SCAN | TABLE5_PK | 1 |
| 16 | INDEX RANGE SCAN | TABLE1_IX1 | 1 |
| 17 | TABLE ACCESS BY INDEX ROWID | TABLE5 | 1 | 10
| 18 | NESTED LOOPS | | 1 | 49
| 19 | NESTED LOOPS | | 1 | 39
| 20 | TABLE ACCESS FULL | TABLE3 | 4559 | 95739
| 21 | TABLE ACCESS BY INDEX ROWID| TABLE4 | 1 | 18
| 22 | INDEX UNIQUE SCAN | TABLE4_PK | 1 |
| 23 | INDEX RANGE SCAN | TABLE5_PK | 1 |
--------------------------------------------------------------------------------

Now add the NO_UNNEST hint to the subquery:

 

SELECT /*+ index(v1.table1 table1_IX1) */
v1.col1,
v1.col2,
v1.col3,
v1.col4,
v1.col5
FROM VIEW1 v1
WHERE (v1.code = :B1 And v1.ID = Nvl(null, v1.ID) And
v1.ID In
(Select /*+ no_unnest */
v2.sid
From VIEW2 v2
Where ('N' = 'N' And v2.Key1 = Nvl(null, Key1) And
NVL(null, Active_Flag) = Active_Flag And
NVL(null, Inform_Flag) = Inform_Flag)
Or ('Y' = 'Y' and :b2 = KEY1 and Active_Flag = 'Y')) )

The cost of the original plan was 9192.

 

 

Let's see the new SQL plan:

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 231 |
| 1 | FILTER | | | |
| 2 | NESTED LOOPS OUTER | | 1 | 231 |
| 3 | TABLE ACCESS BY INDEX ROWID | TABLE1 | 1 | 110 |
| 4 | INDEX RANGE SCAN | TABLE1_IX1 | 2 | |
| 5 | TABLE ACCESS BY INDEX ROWID | TABLE2 | 1 | 121 |
| 6 | INDEX UNIQUE SCAN | TABLE2_PK | 1 | |
| 7 | TABLE ACCESS BY INDEX ROWID | TABLE5 | 1 | 10 |
| 8 | NESTED LOOPS | | 1 | 49 |
| 9 | NESTED LOOPS | | 1 | 39 |
| 10 | TABLE ACCESS BY INDEX ROWID| TABLE3 | 3 | 63 |
| 11 | INDEX RANGE SCAN | TABLE3_IX1 | 3 | |
| 12 | TABLE ACCESS BY INDEX ROWID| TABLE4 | 1 | 18 |
| 13 | INDEX UNIQUE SCAN | TABLE4_PK | 1 | |
| 14 | INDEX RANGE SCAN | TABLE5_PK | 1 | |

 

The cost of this plan is 16, which is about 574 times less!

Tags:

Another Solution for latch library cache, Troubleshooting this event
by Vazha Mantua Tuesday, May 3, 2011 4:36 PM

Earlier we discussed the library cache latch and covered some solutions for avoiding this problem in the database.

 

Now let's discuss a troubleshooting scenario for the 'latch: library cache' wait event:

 

First of all, we should find the latch address for the problem session:

select p1text,p1raw from v$session_wait a
where a.EVENT='latch: library cache'

Here the value of P1TEXT should be "address", and P1RAW is the address of the problem latch.

Now that we have the latch address, we should find the latch child number.

 

select a.CHILD# from v$latch_children a
where a.ADDR = :p1raw   -- substitute the P1RAW value returned by the previous query
 

 

Now we know the child number of the possible bottleneck (a combined query is sketched below).
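The two steps can also be combined into a single query (a sketch):

select s.sid, l.child#, l.name
from v$session_wait s, v$latch_children l
where s.event = 'latch: library cache'
and l.addr = s.p1raw;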

----------------------------------------------------------------------------------------------------------------------------------------

Let's show that we cannot find the exact SQL statement if we know only the latch address.

We can get the definition of the v$latch_children view with this statement:

 

select * from v$fixed_view_definition a
where a.VIEW_NAME='GV$LATCH_CHILDREN'

The result of this statement tells us that the v$latch_children view is based on the x$ksllt object.

 

Now let's find which data dictionary views are built on x$ksllt, with this statement:

 

select * from v$fixed_view_definition a
where upper(a.VIEW_DEFINITION) like '%X$KSLLT%'
or upper(a.VIEW_DEFINITION) like '%GV$LATCH%'

The results of this select are:

GV$LATCH
GV$LATCH_CHILDREN
GV$LATCH_PARENT
V$LATCH
V$LATCHHOLDER
V$LATCHNAME
V$LATCH_CHILDREN
V$LATCH_MISSES
V$LATCH_PARENT

 

In these views there are no SQL statements.

---------------------------------------------------------------------------------------------------------------------------------------

 

What can we do to find the possibly problematic SQL statement if we only know the child latch number?

 

All SQL statements are served by a number of library cache latches given by the hidden parameter _kgl_latch_count, which defines the total number of such latches in the system; the maximum value of this parameter is 66.

 

Back to our case.

 

To find the possibly problematic statements, we should run this statement:

 

select * from v$sqlarea a
where a.CHILD_LATCH = :child#   -- the child latch number found above
order by a.EXECUTIONS desc

 

The problematic SQL statements are likely those with the highest number of executions.

 

There is also a very interesting view named v$db_object_cache. With this view and the following SQL statements, we can find the cause of the library cache latch contention:

 

 

1.

select * from v$db_object_cache a
where a.CHILD_LATCH=child#
order by a.EXECUTIONS

2.

select * from v$db_object_cache a
where a.CHILD_LATCH=child#
order by a.lock

3.

select * from v$db_object_cache a
where a.CHILD_LATCH=child#
order by a.pins

If you cannot find the cause of the library cache latch contention, you can restart the database; the SQL statements will then be assigned new child latch numbers, and you can compare the statements that map to the newly problematic child latch number.

Tags: ,

Concatenation and filter option in SQL PLAN
by Vazha Mantua Friday, April 8, 2011 8:38 PM
Hello All,
For example we have this statement:
select count(*) from customers
where acc_nbr = nvl(:v,acc_nbr)
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=137 Card=1 Bytes=7)
   1    0   SORT (AGGREGATE)
   2    1     CONCATENATION
   3    2       FILTER
   4    3         INDEX (FAST FULL SCAN) OF 'CUST_PK' (UNIQUE) (Cost
   5    2       FILTER
   6    5         INDEX (UNIQUE SCAN) OF 'CUST_PK' (UNIQUE) (Cost=2
In the execution plan we see two FILTER operations and then a CONCATENATION. The reason for this is the NVL command and an Oracle CBO rule: there are two possible scenarios, the bind variable :v is null or it is not null.
Under the RBO, however, the SQL plan changes and the filters and concatenation do not appear.
Many examples say that in this case the CBO is better than the RBO (RULE hint), but sometimes the RBO approach is better.
How can we avoid the two filters and the concatenation? Simple: we have two solutions:
select /*+ rule */ count(*) from customers where acc_nbr = nvl(:v,acc_nbr)
or
select /*+ no_expand*/ count(*) from customers where acc_nbr = nvl(:v,acc_nbr).
Let's describe the NO_EXPAND hint:
The NO_EXPAND hint prevents the cost-based optimizer from considering OR-expansion for queries having OR conditions or IN-lists in the WHERE clause. Usually, the optimizer considers using OR expansion and uses this method if it decides that the cost is lower than not using it.

Tags: , ,

ORA-10567: Redo is inconsistent with data block (file# 25, block# 7652)
by Vazha Mantua Wednesday, March 9, 2011 10:57 AM

Good morning all,

 

One of our standby databases stopped the apply process with the error ORA-10567: Redo is inconsistent with data block (file# 25, block# 7652).

 

ORA-10567: Redo is inconsistent with data block (file# 25, block# 7652)

ORA-10564: tablespace USER_DATA

ORA-01110: data file 124: ‘/u01/oradata/ORAST/USER_DATA_01.DBF’

ORA-10561: block type ‘TRANSACTION MANAGED INDEX BLOCK’, data object# 7652

 

Cause

The redo, when shipped across the network or somewhere along the way, had some issue that caused the inconsistency; hence it is not able to sync up with the standby datafiles.

 

Solution

On the primary database:

1. ALTER TABLESPACE USER_DATA BEGIN BACKUP;

2. Copy the USER_DATA_01.DBF datafile from the primary DB to the standby DB.

On the primary database:

3. ALTER TABLESPACE USER_DATA END BACKUP;

On the standby database, start recovery:

4. recover managed standby database using current logfile disconnect;
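After managed recovery restarts, you can verify that the MRP process is applying redo again; a minimal check (a sketch, run on the standby):

SELECT process, status, sequence#
FROM v$managed_standby;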

 

Remark: Do not do this operation when the production database is under peak load; BEGIN BACKUP can decrease DB performance.

Tags: ,

Oracle 10g RAC Single Node Installation on Solaris 10
by Vazha Mantua Monday, February 21, 2011 4:28 PM

Oracle 10g RAC Single Node Installation on Solaris 10

Pre-Installation Tasks

Create oracle user and appropriate groups

Login as a root user and create required OS groups and users.

Make the following directories (app and oracle) in /u0 :

cd /u0
mkdir -p app/oracle

Create oinstall and dba groups:

groupadd oinstall
groupadd dba

Create oracle user:

useradd -s /bin/ksh -d /u0/app/oracle -g oinstall -G dba oracle

Change the owner for app folder.

chown -R oracle:oinstall app

Change password.

passwd oracle

Configuring SSH

Create RSA and DSA keys on each node. Log in as the oracle user and go to the ORACLE_BASE directory.

Create directory .ssh

mkdir .ssh

Generate RSA and DSA keys

/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa

Add the keys to an authorized key file; this file should exist in the .ssh directory. If it doesn't exist, create it. Change to the .ssh directory and run the following.

ssh charly1 cat /u0/app/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh charly1 cat /u0/app/oracle/.ssh/id_dsa.pub >> authorized_keys

Enable ssh user equivalency

/usr/bin/ssh-agent $SHELL

/usr/bin/ssh-add

To prevent Oracle clusterware installation errors caused by stty commands, add the following text in oracle user’s profile.

if [ -t 0 ]; then

stty intr ^C

fi

Network Requirements

Create second IP interface. Open or create the file /etc/hostname.ce5 and enter the following text:

Host1-priv

Add the following entries into the /etc/hosts file:

20.0.0.100 Host1 loghost

192.168.2.117 Host1-priv

20.0.0.105 Host1-vip

Where 192.168.2.117 is a private IP address and 20.0.0.105 is a virtual IP address of the server.

Connect as the root user and create the /etc/hosts.equiv file. Then open this file for editing and enter the following text:

Host1 oracle

Configuring Kernel Parameters

To add the system parameters open /etc/system file and add the following entries:

set shmsys:shminfo_shmmax = 4294967295

set shmsys:shminfo_shmmin = 1

set shmsys:shminfo_shmmni = 100

set shmsys:shminfo_shmseg = 10

set semsys:seminfo_semmni = 100

set semsys:seminfo_semmns = 1024

set semsys:seminfo_semmsl = 256

set md:mirrored_root_flag=1

set noexec_user_stack=1

Identifying Required Software Directories

Creating an Oracle Base Directory

mkdir -p /u0/app/oracle
chown -R oracle:oinstall /u0/app/oracle
chmod -R 775 /u0/app/oracle

Creating the Oracle Clusterware Home Directory

mkdir -p /u0/app/oracle/product/10.2.0/crs

chown -R root:oinstall /u0/app/oracle/product/10.2.0/crs

chmod -R 775 /u0/app/oracle/product/10.2.0/crs

Installing Oracle Clusterware

First of all, you should download Oracle Clusterware from the oracle site.

Verifying Oracle Clusterware Requirements with CVU

Extract downloaded file and run runcluvfy.sh file, which should be located in clusterware/cluvfy folder.

./runcluvfy.sh comp nodecon -n Host1 -verbose

Installing Oracle Clusterware with OUI

Initialize the devices that you want to use for the Oracle Cluster Registry (OCR) and Voting Disk(VD)

1. Voting disk: used by the Cluster Synchronization Services daemon for monitoring information across the cluster. Its size is around 20MB (use the format utility and reserve 200MB on the 5th slice).

2. Oracle Cluster Registry (OCR) file: it maintains information about the high-availability components, such as the cluster node list and CRS application profiles (virtual interconnect protocol address, services). Its size is around 200MB (use the format utility and reserve it on the 7th slice).

For voting disk

dd if=/dev/zero of=/dev/rdsk/c5t200g35A100TK170Ta00ds5 bs=125829120 count=1

chown oracle:dba /dev/rdsk/c5t200g35A100TK170Ta00ds5

chmod 660 /dev/rdsk/c5t200g35A100TK170Ta00ds5

For OCR

dd if=/dev/zero of=/dev/rdsk/c5t200g35A100TK170Ta00ds7 bs=125829120 count=1

chown root:dba /dev/rdsk/c5t200g35A100TK170Ta00ds7

chmod 660 /dev/rdsk/c5t200g35A100TK170Ta00ds7

Connect as the oracle user and run runInstaller.sh, located in CRS installation.

./runInstaller

[Screenshots: Oracle Clusterware installation wizard steps]

Run these scripts and then click OK.


Now we should run ./vipca and configure the Host1-vip virtual IP address (20.0.0.105). Then go back to the installation window, click OK, and retry installing the failed component.

Installing Oracle database files

[Screenshots: Oracle Database software installation wizard steps]

Run this script and click OK, then click Exit button.

Configure oracle listener by netca.

Configuring ASM

[Screenshots: ASM configuration steps]

Click Yes.


Run the following command to make the oracle user the owner of the disk slice:

chown oracle:oinstall /dev/rdsk/c5156D778A145a11d0s6


Click Finish.


Creating database using DBCA

[Screenshots: DBCA database creation steps]

Click Next>> and check/uncheck Configure the Database with Enterprise Manager.

[Screenshots: remaining DBCA steps]

Tags: , ,

DBMS_SHARED_POOL Package
by Vazha Mantua Tuesday, January 11, 2011 12:51 PM

Hello All,

Merry Christmas and Happy New Year.

 

I want to tell you about the DBMS_SHARED_POOL package.

DBMS_SHARED_POOL can be used to manage the effect of the LRU mechanism on the shared pool. It can be used to pin PL/SQL objects, SQL cursors, triggers, and sequences. It enables you to display the sizes of objects in the shared pool and mark them for keeping or unkeeping in order to reduce memory fragmentation.

 

Install the DBMS_SHARED_POOL package.

Run the script: $ORACLE_HOME/rdbms/admin/dbmspool.sql

 

Check for objects in the shared pool larger than X KB.

Run the procedure (with SERVEROUTPUT enabled in SQL*Plus so that the output is displayed):
begin
dbms_output.enable(null);
dbms_shared_pool.sizes(200);   -- list objects larger than 200 KB
end;

Pin something into the Shared Pool

begin

dbms_shared_pool.keep('SYS.STANDARD');
end;

Unpin something from the Shared Pool

begin

dbms_shared_pool.unkeep('SYS.STANDARD');

end;

 

Check what objects are pinned

SELECT * FROM v$db_object_cache WHERE kept = 'YES';

Tags: ,
