Long-Term RMAN Backups (Online Backup Without Keep All Logs)


Short-term backups take periodic images of active data to provide a way of recovering data that has been deleted or destroyed.
These backups are retained only for a few days or weeks. Their primary purpose is to recover recent data or, if a media failure or disaster occurs, to restore from them and bring the data back online.

Most backup systems have two different setups, one for short-term and one for long-term backups.

Short-term backups are classic backups with a short retention period, stored in an active data pool; these backups are usually always available on a tape library or on disk.

Long-term backups are archival backups with a long retention period, often stored only on tapes; these tapes are usually kept off-site or in safes.

I will try to explain the purpose of long-term backups in a few lines.

For example, you can back up the database on the first day of every year to satisfy a regulatory requirement and store the media off-site.
Years after you make the archival backup, you can restore and recover it to query the data as it appeared at the time of the backup. This is data preservation.

Data preservation is related to data protection, but it serves a different purpose.
For example, you may need to preserve a copy of a database as it existed at the end of a business quarter.
This backup is not part of the disaster recovery strategy. The media to which these backups are written are often unavailable after the backup is complete.
You may send the tape to off-site fire-safe storage or ship a portable hard drive to a testing facility.

RMAN provides a convenient way to create a backup and exempt it from your backup retention policy. This type of backup is known as an archival backup.

Why this introduction about short-term or long-term backups?

I manage several environments, and I see that some DBAs fail to understand this difference and end up with incorrect configurations for archival and classic backups.

IMHO, the main villain is RDBMS 10.2 and earlier versions, because there was no long-term backup concept for online backups.

In 10.2, to take an archival backup without keeping all archived logs we must shut down the database (in many environments this is not possible); if we take an online backup with long retention (KEEP LOGS), we need to keep all archived logs until the first backup becomes obsolete. In that version there is essentially no difference between a classic backup and an archival backup (there is a difference, but it is slight).

Because of that, many DBAs are still stuck on the concepts of that version.

In 11.1 this changed: you can create a consistent archival backup without needing to keep all archived logs.

When should this RMAN command ("BACKUP DATABASE KEEP LOGS FOREVER / UNTIL TIME 'SYSDATE + XXX'") be used on version 11.1 or above?
Only when the DBA has a requirement to keep ALL database changes forever (or until some date) and must be able to recover to ANY point in time in the database's life. If you don't need that, this setup is insane and increases the cost and management burden of the environment.
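To make the difference concrete, here is a sketch of the two command styles (the retention window and restore point name are illustrative, not taken from a real environment):

# 10.2-style archival backup: RMAN must keep ALL archived logs
# generated after the backup, until the backup becomes obsolete
RMAN> BACKUP DATABASE KEEP UNTIL TIME 'SYSDATE + 400' LOGS;

# 11.1+ archival backup: self-contained, only the logs needed to make
# the backup consistent are kept with it
RMAN> BACKUP DATABASE KEEP UNTIL TIME 'SYSDATE + 400'
      RESTORE POINT BKP_EXAMPLE;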

Let's start with some tests. I'm using RDBMS 11.2.0.4.

How to take a long-term backup in 11.1 or higher.

Best practices: three important things.

  1. Use a RESTORE POINT to ease the restore. The RESTORE POINT clause creates a normal restore point, which is a label for an SCN to which the backup must be recovered to be made consistent. The SCN is captured just after the datafile backups complete. RMAN resynchronizes restore points with the recovery catalog and maintains them as long as the backup exists.
    More detail in the documentation below.
    https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV89551
  2. Use a non-default TAG; this will make it easier to manage the backup later.
  3. Use the CHECK LOGICAL option to avoid backing up logically corrupted blocks. By default, RMAN does not check for logical corruption. If you specify CHECK LOGICAL on the BACKUP command, however, RMAN tests data and index blocks for logical corruption, such as corruption of a row piece or index entry, and logs them in the alert log located in the Automatic Diagnostic Repository (ADR). If you use RMAN with the CHECK LOGICAL option when backing up or restoring files, it detects all types of block corruption that are possible to detect. What a disappointment to try to restore a backup taken last year and find out that many blocks are logically corrupted.

P.S.: I always use CHECK LOGICAL for all my backups (short- and long-term).

 

Performing a Long-Term Backup

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Feb 18 15:56:59 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTDB (DBID=2633899005)
 connected to recovery catalog database

RMAN> run {
 2> ALLOCATE CHANNEL tape1 
    DEVICE TYPE 'SBT_TAPE' PARMS 
    'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
 3> BACKUP CHECK LOGICAL 
    FILESPERSET 10 
    FULL DATABASE 
    TAG 'MONTHLY_FULL_20160218' 
    FORMAT 'RMAN_MONTHLY_FULL_%d_%s_%p_%T_%c' 
    KEEP UNTIL TIME 'SYSDATE + 400' 
    RESTORE POINT BKP_MONTHLY_20160218_1556;
 4> RELEASE CHANNEL tape1;
 5> }
 6> exit
 allocated channel: tape1
 channel tape1: SID=241 device type=SBT_TAPE
 channel tape1: Data Protection for Oracle: version 5.5.2.0

Starting backup at Feb 18 2016 15:57:07
 current log archived

backup will be obsolete on date Mar 24 2017 15:57:15
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 input datafile file number=00001 name=+TSM/testdb/datafile/system.501.901826545
 input datafile file number=00006 name=+TSM/testdb/datafile/users.487.901826545
 input datafile file number=00002 name=+TSM/testdb/datafile/sysaux.542.901826547
 input datafile file number=00003 name=+TSM/testdb/datafile/undotbs1.442.901826547
 input datafile file number=00004 name=+TSM/testdb/datafile/users.314.901826547
 input datafile file number=00005 name=+TSM/testdb/datafile/users.510.901826547
 channel tape1: starting piece 1 at Feb 18 2016 15:57:15
 channel tape1: finished piece 1 at Feb 18 2016 15:58:40
 piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:01:25

backup will be obsolete on date Mar 24 2017 15:58:40
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 including current SPFILE in backup set
 channel tape1: starting piece 1 at Feb 18 2016 15:58:41
 channel tape1: finished piece 1 at Feb 18 2016 15:58:42
 piece handle=RMAN_MONTHLY_FULL_TESTDB_329_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01
 current log archived
 backup will be obsolete on date Mar 24 2017 15:58:46
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed archived log backup set
 channel tape1: specifying archived log(s) in backup set
 input archived log thread=1 sequence=91 RECID=830 STAMP=904147125
 channel tape1: starting piece 1 at Feb 18 2016 15:58:46
 channel tape1: finished piece 1 at Feb 18 2016 15:58:47
 piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01

backup will be obsolete on date Mar 24 2017 15:58:47
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 including current control file in backup set
 channel tape1: starting piece 1 at Feb 18 2016 15:58:49
 channel tape1: finished piece 1 at Feb 18 2016 15:58:50
 piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01
 Finished backup at Feb 18 2016 15:58:50

released channel: tape1

Recovery Manager complete.

Change Backup To UNAVAILABLE

If this backup will be taken offline from the tape library, mark it as UNAVAILABLE so that RMAN does not try to use it during normal restore operations. A normal restore should use the short-term backups, which are usually active in the tape library.

Since I used a non-default tag, it is easy to change it.

RMAN> CHANGE BACKUP TAG MONTHLY_FULL_20160218 UNAVAILABLE;

changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 RECID=311 STAMP=904147035
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_329_1_20160218_1 RECID=312 STAMP=904147121
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 RECID=313 STAMP=904147126
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 RECID=314 STAMP=904147129
 Changed 4 objects to UNAVAILABLE status

How to restore it?

RMAN> shutdown abort;

Oracle instance shut down

RMAN> startup nomount;

connected to target database (not started)
 Oracle instance started

Total System Global Area 801701888 bytes

Fixed Size 2250648 bytes
 Variable Size 251660392 bytes
 Database Buffers 536870912 bytes
 Redo Buffers 10919936 bytes

RMAN> run {
 2> SET UNTIL RESTORE POINT BKP_MONTHLY_20160218_1556;
 3> RESTORE CONTROLFILE;
 4> STARTUP MOUNT;
 5> RESTORE DATABASE;
 6> RECOVER DATABASE;
 7> ALTER  DATABASE OPEN RESETLOGS;
 8> };

executing command: SET until clause

Starting restore at 18-02-2016 16:56:32
 allocated channel: ORA_SBT_TAPE_1
 channel ORA_SBT_TAPE_1: SID=225 device type=SBT_TAPE
 channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.2.0
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=233 device type=DISK

channel ORA_SBT_TAPE_1: starting datafile backup set restore
 channel ORA_SBT_TAPE_1: restoring control file
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:02
 output file name=+TSM/testdb/controlfile/current.518.901826451
 Finished restore at 18-02-2016 16:56:42

database is already started
 database mounted
 released channel: ORA_SBT_TAPE_1
 released channel: ORA_DISK_1

Starting restore at 18-02-2016 16:56:48
 Starting implicit crosscheck backup at 18-02-2016 16:56:48
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=225 device type=DISK
 Finished implicit crosscheck backup at 18-02-2016 16:56:49

Starting implicit crosscheck copy at 18-02-2016 16:56:49
 using channel ORA_DISK_1
 Finished implicit crosscheck copy at 18-02-2016 16:56:49

searching for all files in the recovery area
 cataloging files...
 no files cataloged

allocated channel: ORA_SBT_TAPE_1
 channel ORA_SBT_TAPE_1: SID=233 device type=SBT_TAPE
 channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.2.0
 using channel ORA_DISK_1

channel ORA_SBT_TAPE_1: starting datafile backup set restore
 channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
 channel ORA_SBT_TAPE_1: restoring datafile 00001 to +TSM/testdb/datafile/system.501.901826545
 channel ORA_SBT_TAPE_1: restoring datafile 00002 to +TSM/testdb/datafile/sysaux.542.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00003 to +TSM/testdb/datafile/undotbs1.442.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00004 to +TSM/testdb/datafile/users.314.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00005 to +TSM/testdb/datafile/users.510.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00006 to +TSM/testdb/datafile/users.487.901826545
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:45
 Finished restore at 18-02-2016 16:58:43

Starting recover at 18-02-2016 16:58:44
 using channel ORA_SBT_TAPE_1
 using channel ORA_DISK_1

starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
 channel ORA_SBT_TAPE_1: restoring archived log
 archived log thread=1 sequence=91
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
 archived log file name=+TSM/testdb/archivelog/2016_02_18/thread_1_seq_91.267.904150729 thread=1 sequence=91
 channel default: deleting archived log(s)
 archived log file name=+TSM/testdb/archivelog/2016_02_18/thread_1_seq_91.267.904150729 RECID=834 STAMP=904150729
 media recovery complete, elapsed time: 00:00:02
 Finished recover at 18-02-2016 16:58:51

database opened
 new incarnation of database registered in recovery catalog
 starting full resync of recovery catalog
 full resync complete

How to migrate from the old fashion to the new fashion of long-term backup

Many environments inherit old backup configurations from previous DBAs, so it is very common to see long-term backup environments set up using "BACKUP DATABASE KEEP LOGS;".

You may ask: I don't want to keep all logs, only my backups that are not obsolete. Is that possible?

Yes. There is a workaround to stop RMAN from keeping all logs forever, if you are using 11.1 or above.

AGAIN, if you are using 10.2 or an earlier version you CAN'T do what is described below.

 

What you need to do.

First of all, re-configure your scripts so that long-term backups use a RESTORE POINT from now on.

What to do with the old backups.

We can leave the old setup in place until the last backup taken with "KEEP LOGS" retention becomes obsolete. All archived log backups will then be obsolete and RMAN will automatically delete them. (Recommended.)

P.S.: If you have a long-term backup with "KEEP FOREVER", the archived log backups will never become obsolete.
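If an old backup no longer needs to be kept forever, one option (a sketch; the tag is hypothetical) is to change its KEEP attribute so it can eventually become obsolete:

# Replace KEEP FOREVER with a finite retention
RMAN> CHANGE BACKUP TAG 'YEARLY_FULL_2014' KEEP UNTIL TIME 'SYSDATE + 30';

# Or remove the KEEP attribute completely, so the backup
# follows the normal retention policy
RMAN> CHANGE BACKUP TAG 'YEARLY_FULL_2014' NOKEEP;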

There is a workaround (or bug) to delete all archived logs and make a "self-contained" backup even when using "KEEP LOGS". This can be dangerous if not done properly: RMAN will delete all archived logs that do not satisfy the retention policy. That means that if it goes wrong, you will have a backup of the database without the archived logs needed to recover it (i.e., you will not be able to recover that backup).

Steps:

  • Find in the RMAN catalog all backups with the KEEP clause.
  • Find all archived log backups taken between 24 hours before and 24 hours after the backup with the KEEP clause.
    • E.g., if the backup finished at 02/01/2016 10:00, find all archived log backups between 01/01/2016 10:00 and 03/01/2016 10:00.
  • Change these backups (archived logs, controlfile and datafiles) to UNAVAILABLE.
  • After changing these backups to UNAVAILABLE, the remaining archived log backups will be orphaned and will become obsolete.
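The steps above can be sketched in RMAN like this (the tag and dates are hypothetical, and the date format depends on your NLS_DATE_FORMAT; check the LIST BACKUP output in your own catalog before changing anything):

# 1. Find the backups with the KEEP clause (the KEEP, KEEP_UNTIL and
#    KEEP_OPTIONS columns of V$BACKUP_SET / RC_BACKUP_SET also show this)
RMAN> LIST BACKUP SUMMARY;

# 2/3. Change the KEEP backup and the archived log backups
#      around it to UNAVAILABLE
RMAN> CHANGE BACKUP TAG 'MONTHLY_FULL_20160201' UNAVAILABLE;
RMAN> CHANGE BACKUP OF ARCHIVELOG FROM TIME '01/01/2016 10:00'
      UNTIL TIME '03/01/2016 10:00' UNAVAILABLE;

# 4. The orphaned archived log backups are now obsolete
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;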

 

E.g. (dd/mm/yyyy hh24:mi):

Backup 1 ( 01/01/2016 12:00)

Backup 2 (01/02/2016 12:00)

Backup 3 (01/03/2016 12:00)

Backup 4 (01/04/2016 12:00)

 

 

Scenario 1:

Change only Backup 1, and the archived log backups needed to restore it, to UNAVAILABLE.

  • All archived log backups without status UNAVAILABLE up to 01/02/2016 12:00 will become obsolete and will be deleted on the next "DELETE OBSOLETE".
  • All future archived log backups will be kept until Backup 2 becomes obsolete.

Scenario 2:

Change only Backup 2, and the archived log backups needed to restore it, to UNAVAILABLE.

  • All archived logs will be kept.

Scenario 3:

Change only Backups 1, 2, 3 and 4 to UNAVAILABLE.

  • All archived log backups will become obsolete; if you run "DELETE OBSOLETE" you can still restore Backup 1, 2, 3 or 4, BUT YOU WILL NOT BE ABLE TO RECOVER THEM, since there are no archived logs.

Scenario 4:

Change Backups 1, 2, 3 and 4, and the archived log backups needed to restore them, to UNAVAILABLE.

  • All archived log backups with status AVAILABLE will become obsolete, and all archived logs will follow the default retention policy.

 

It seems to me to be a bug, because Oracle simply ignores the archived log retention policy if the backup with the KEEP clause is UNAVAILABLE.

How to move/migrate OCR, Voting Disk, ASM (SPFILE and PWDFILE) and MGMTDB to a different Diskgroup on ASM 12.1.0.1

_______________________________________
Last Update (29/09/2014):
Oracle released the note: How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)

The procedure in the note (Doc ID 1589394.1) does not "MOVE" but rather RECREATES the Management Repository.
The procedure below in this post MOVES it (migrating all data) from one diskgroup to another.

_______________________________________

This post is only a recommendation on how to configure the Clusterware files and MGMTDB on ASM using Grid Infrastructure 12.1.0.1.

Migrating MGMTDB to a different diskgroup is not documented as of 25/09/2013. Oracle will release a note with this procedure soon.

When we install/configure a fresh Oracle GI for a cluster, the cluster files and MGMTDB are stored in the first diskgroup created at installation time.

To ease management and increase the availability of the cluster, it is recommended to set up a small diskgroup to store the cluster files.

ASM Spfile stuck on Diskgroup : ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]

The cluster files are:

Oracle Cluster Registry (OCR)

The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. The OCR also manages information about Oracle Clusterware resource profiles for customized applications.

The OCR is stored in a diskgroup in the same way database files are stored. The extents are spread across all disks of the diskgroup, and the redundancy (which is at the extent level) is based on the redundancy of the disk group. We can store only one OCR per diskgroup.

Voting Disk

Voting disk, aka voting files: a file that manages information about node membership.

The voting disks are placed directly on ASM disks/LUNs. Oracle Clusterware stores the voting disk on a disk within the disk group that holds the voting files, but it does not rely on ASM to access them; that means Oracle Clusterware does not need the diskgroup to be mounted in order to read and write the ASM disk.
You cannot find/list voting files using SQL*Plus (v$asm_diskgroup, v$asm_files, v$asm_alias), the ASMCMD utility or the ASMCA GUI.
You only know that a voting file exists on an ASM disk via v$asm_disk (column VOTING_FILE). So, although voting files do not depend on the diskgroup to be accessed, that does not mean we don't need the diskgroup: the diskgroup and the voting files are linked by their settings and redundancy.

We cannot place voting disks in multiple diskgroups. Only one diskgroup is supported for storing the voting disks.

ASM Spfile

This file is the parameter file of the ASM instance.

The spfile is stored in the diskgroup in the same way as the OCR and database files.

ASM Password File

A new feature of ASM 12.1 allows storing the database or ASM password file on an ASM diskgroup.

An individual password file for Oracle Database or Oracle ASM can reside on a designated Oracle ASM disk group. Having the password files reside on a single location accessible across the cluster reduces maintenance costs and situations where passwords become out of sync.

The COMPATIBLE.ASM disk group attribute must be set to at least 12.1 for the disk group where the password file is to be located.
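For example (a sketch; the diskgroup name is illustrative), you can check and, if needed, raise the attribute. Note that compatible.asm can only be advanced, never lowered:

SQL> SELECT g.name, a.value
     FROM v$asm_diskgroup g, v$asm_attribute a
     WHERE g.group_number = a.group_number
     AND a.name = 'compatible.asm';

SQL> ALTER DISKGROUP ocr_vote SET ATTRIBUTE 'compatible.asm' = '12.1';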

Database MGMTDB

The Management Repository is a new feature of GI 12.1.

The Management Repository is a single-instance database managed by Oracle Clusterware in 12c. If the option is selected during GI installation, the database is configured and managed by GI. As it is a single-instance database, it is up and running on one node in the cluster; as it is managed by GI, if the hosting node goes down, the database is automatically failed over to another node.

The Management database by default uses the same shared storage as the OCR/voting files.

There is no procedure in the 12.1 documentation or in Metalink notes to change the location of MGMTDB (as of 25/09/2013).

My recommendations:

Create two diskgroups:

+OCR_VOTE

Even if you are using RAID at the storage layer, create a small diskgroup with 3 ASM disks using normal redundancy. This diskgroup will store the voting disk, OCR, ASM spfile and ASM password file.

The reason to create this diskgroup with normal redundancy is to provide availability in case of the loss of one voting disk or one ASM disk, caused by human error, storage failure or host (controller) failure.

+MGMTDB

This diskgroup can be created using external redundancy if you are using RAID at the storage layer.

This diskgroup will store MGMTDB and the OCR mirror.

Important:  To perform the changes below all nodes in the cluster must be active. 

Let's Play:

Checking if the nodes are active.

[grid@node12c02 ~]$ olsnodes -s
node12c01        Active
node12c02        Active

Use OCRCHECK to find where the OCR file is stored.

[grid@node12c02 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1548
         Available space (kbytes) :     408020
         ID                       : 1037914462
         Device/File Name         :   +CRS_TMP
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

Use crsctl to find where the voting disk is stored.

[grid@node12c02 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0d2d3b5d705a4fefbf8824c99251ac5a (/dev/oracleasm/disks/OCR_TEMP) [CRS_TMP]
Located 1 voting disk(s).

Use asmcmd to find where the ASM spfile and ASM password file are stored.

[grid@node12c02 ~]$ asmcmd spget
+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607

[grid@node12c02 ~]$ asmcmd pwget --asm
+CRS_TMP/crs12c/orapwASM

Getting info about the voting disk on ASM.

Connect to ASM using the sysdba role.

SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A20
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
FROM V$ASM_DISKGROUP
WHERE NAME='CRS_TMP');

NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
CRS_TMP_0000             /dev/oracleasm/disks/OCR_TEMP  MEMBER               CRS_TMP_0000         REGULAR              Y

Get the name of the cluster.

[grid@node12c02 ~]$ olsnodes -c
crs12c

set linesize 100
col FILES_OF_CLUSTER for a60
select concat('+'||gname, sys_connect_by_path(aname, '/')) FILES_OF_CLUSTER
from ( select b.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex , a.system_created, a.alias_directory,
c.type file_type
from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
where a.group_number = b.group_number
and a.group_number = c.group_number(+)
and a.file_number = c.file_number(+)
and a.file_incarnation = c.incarnation(+)
) WHERE file_type in ( 'ASMPARAMETERFILE','PASSWORD','OCRFILE')
start with (mod(pindex, power(2, 24))) = 0
and rindex in
( select a.reference_index
from v$asm_alias a, v$asm_diskgroup b
where a.group_number = b.group_number
and (mod(a.parent_index, power(2, 24))) = 0
and a.name = LOWER('&CLUSTERNAME')
)
connect by prior rindex = pindex;

Enter value for clustername: crs12c
old  17:                         and a.name = LOWER('&CLUSTERNAME')
new  17:                         and a.name = LOWER('crs12c')

FILES_OF_CLUSTER
------------------------------------------------------------
+CRS_TMP/crs12c/ASMPARAMETERFILE/REGISTRY.253.826898607
+CRS_TMP/crs12c/OCRFILE/REGISTRY.255.826898619
+CRS_TMP/crs12c/orapwasm

Creating Diskgroup OCR_VOTE

When creating diskgroup OCR_VOTE, each disk must be in a different failgroup. I don't add a QUORUM failgroup because these LUNs are on the storage array. Use a QUORUM failgroup when you are placing a disk outside of your environment (e.g., using an NFS file-disk for quorum purposes), because such disks cannot contain data.

List the ASM disks
SQL>
col path for a30
col name for a20
col header_status for a20
select path,name,header_status from v$asm_disk
where path like '/dev/asm%';

 PATH                           NAME                 HEADER_STATUS
------------------------------ -------------------- --------------------
/dev/asm-ocr_vote3                                  CANDIDATE
/dev/asm-ocr_vote1                                  CANDIDATE
/dev/asm-ocr_vote2                                  CANDIDATE          
SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
     FAILGROUP controller01 DISK '/dev/asm-ocr_vote1'
     FAILGROUP controller02 DISK '/dev/asm-ocr_vote2'
     FAILGROUP controller03 DISK '/dev/asm-ocr_vote3'
     ATTRIBUTE 
	 'au_size'='1M',
	 'compatible.asm' = '12.1';

Diskgroup created.

Mount diskgroup OCR_VOTE on the other nodes

SQL>  ! srvctl start diskgroup -g ocr_vote -n node12c01

Check the status of diskgroup OCR_VOTE on all nodes.
The diskgroup OCR_VOTE must be mounted on all hub nodes.

SQL> ! srvctl status diskgroup -g ocr_vote
Disk Group ocr_vote is running on node12c01,node12c02

Moving Voting Disk to Diskgroup OCR_VOTE

As the grid user, issue:

[grid@node12c01 ~]$  crsctl replace votedisk +OCR_VOTE
Successful addition of voting disk be10b93cca734f65bfa6b89cc574d485.
Successful addition of voting disk 46a9b6154c184f89bf6a503846ea3aa3.
Successful addition of voting disk 6bc41f75098d4f5dbf97783770760445.
Successful deletion of voting disk 0d2d3b5d705a4fefbf8824c99251ac5a.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4266: Voting file(s) successfully replaced

Query the CSS votedisk on all nodes

[grid@node12c01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
 2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
 3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
Located 3 voting disk(s).

[grid@node12c02 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
 2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
 3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
Located 3 voting disk(s).

Query the ASM disks on ASM
SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
FROM V$ASM_DISKGROUP
WHERE NAME='OCR_VOTE');

NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
OCR_VOTE_0002            /dev/asm-ocr_vote3             MEMBER               CONTROLLER03         REGULAR              Y
OCR_VOTE_0001            /dev/asm-ocr_vote2             MEMBER               CONTROLLER02         REGULAR              Y
OCR_VOTE_0000            /dev/asm-ocr_vote1             MEMBER               CONTROLLER01         REGULAR              Y

The voting disk was moved successfully.

Moving OCR to Diskgroup OCR_VOTE

As the root user, issue:

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :   +CRS_TMP
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

Add an OCR location on OCR_VOTE

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +OCR_VOTE	

Now we should see two OCR locations, one in each diskgroup

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :   +CRS_TMP
                                    Device/File integrity check succeeded
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

Delete the OCR from diskgroup CRS_TMP

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -delete +CRS_TMP

Now we should see only one OCR, in diskgroup OCR_VOTE.

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

Check with OCRCHECK on the other nodes.

[root@node12c01 ~]#  /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

The OCR was moved successfully.

Moving ASM SPFILE
You will get an error saying the file is still being accessed, but the file is copied to the new location and the ASM profile is updated.

[grid@node12c01 ~]$ asmcmd spmove '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' '+OCR_VOTE/crs12c/spfileASM.ora'
ORA-15032: not all alterations performed
ORA-15028: ASM file '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)

Checking that the file was copied and the profile updated

[grid@node12c01 ~]$ asmcmd spget
+OCR_VOTE/crs12c/spfileASM.ora

Moving ASM Password File

[grid@node12c01 ~]$ asmcmd pwmove --asm +CRS_TMP/orapwASM +OCR_VOTE/crs12c/orapwASM
moving +CRS_TMP/orapwASM -> +OCR_VOTE/crs12c/orapwASM

Checking that the file was moved and the profile updated

[grid@node12c01 ~]$ asmcmd pwget --asm
+OCR_VOTE/crs12c/orapwASM

[grid@node12c01 ~]$ srvctl config asm
ASM home: /u01/app/12.1.0/grid
Password file: +OCR_VOTE/crs12c/orapwASM
ASM listener: LISTENER

After these changes, restart the cluster to release the old ASM SPFILE.
You can restart one node at a time if you wish.

You DON'T need to stop the whole cluster to release the ASM SPFILE.

Since mine is a test cluster, I will stop the cluster on all nodes.

[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'node12c01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node12c01'
CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c01'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node12c01'
CRS-2677: Stop of 'ora.scan1.vip' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'node12c02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c02'
CRS-2673: Attempting to stop 'ora.cvu' on 'node12c02'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c02'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node12c02'
CRS-2673: Attempting to stop 'ora.node12c02.vip' on 'node12c02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node12c02'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'node12c02'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node12c02'
CRS-2677: Stop of 'ora.cvu' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node12c02'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.node12c02.vip' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.node12c01.vip' on 'node12c02'
CRS-2677: Stop of 'ora.scan2.vip' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.node12c01.vip' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'node12c02'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c01'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c02'
CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c02'
CRS-2673: Attempting to stop 'ora.ons' on 'node12c01'
CRS-2677: Stop of 'ora.ons' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c01'
CRS-2677: Stop of 'ora.net1.network' on 'node12c01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c01' has completed
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node12c02'
CRS-2677: Stop of 'ora.ons' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c02'
CRS-2677: Stop of 'ora.net1.network' on 'node12c02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c02' has completed
CRS-2677: Stop of 'ora.crsd' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c02'
CRS-2673: Attempting to stop 'ora.evmd' on 'node12c02'
CRS-2673: Attempting to stop 'ora.storage' on 'node12c02'
CRS-2677: Stop of 'ora.storage' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
CRS-2677: Stop of 'ora.crsd' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c01'
CRS-2673: Attempting to stop 'ora.evmd' on 'node12c01'
CRS-2673: Attempting to stop 'ora.storage' on 'node12c01'
CRS-2677: Stop of 'ora.storage' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
CRS-2677: Stop of 'ora.evmd' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node12c01'
CRS-2677: Stop of 'ora.cssd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c02'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node12c02'
CRS-2677: Stop of 'ora.cssd' on 'node12c02' succeeded

Start the cluster on all nodes

[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all
CRS-2672: Attempting to start 'ora.evmd' on 'node12c01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c01'
CRS-2672: Attempting to start 'ora.evmd' on 'node12c02'
CRS-2676: Start of 'ora.cssdmonitor' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node12c01'
CRS-2672: Attempting to start 'ora.diskmon' on 'node12c01'
CRS-2676: Start of 'ora.diskmon' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c02'
CRS-2676: Start of 'ora.evmd' on 'node12c01' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node12c02'
CRS-2672: Attempting to start 'ora.diskmon' on 'node12c02'
CRS-2676: Start of 'ora.evmd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.diskmon' on 'node12c02' succeeded
CRS-2676: Start of 'ora.cssd' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node12c02'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c02'
CRS-2676: Start of 'ora.cssd' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node12c01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c01'
CRS-2676: Start of 'ora.ctssd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.ctssd' on 'node12c01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node12c02'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node12c01'
CRS-2676: Start of 'ora.asm' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node12c01'
CRS-2676: Start of 'ora.asm' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node12c02'
CRS-2676: Start of 'ora.storage' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node12c02'
CRS-2676: Start of 'ora.crsd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.storage' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node12c01'
CRS-2676: Start of 'ora.crsd' on 'node12c01' succeeded

Migrating MGMTDB to another Diskgroup

MGMTDB is a "common" Oracle RDBMS (dedicated to CHM), so we can perform administrative tasks on it just like on any other Oracle RDBMS.
To migrate MGMTDB to another Diskgroup I will use RMAN.

Creating Diskgroup MGMTDB

SQL> CREATE DISKGROUP MGMTDB EXTERNAL REDUNDANCY
     DISK '/dev/asm-mgmtdb1'
     ATTRIBUTE 
	 'au_size'='1M',
	 'compatible.asm' = '12.1',
	 'compatible.rdbms' = '12.1';

Diskgroup created.

Starting the MGMTDB diskgroup on the other node and checking its status

SQL> ! srvctl start diskgroup -g MGMTDB -n node12c02

SQL> ! srvctl status diskgroup -g MGMTDB
Disk Group MGMTDB is running on node12c01,node12c02

To manage MGMTDB you must use the srvctl utility:
srvctl start/stop/config/modify ... mgmtdb

Getting the current configuration of MGMTDB

[grid@node12c01 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +CRS_TMP/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management

Check where MGMTDB is running.

[grid@node12c01 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node12c02

You must run the migration from the host (node) where MGMTDB is running.

Let's start.
Stop the MGMT Listener and MGMTDB

[grid@node12c02 ~]$ srvctl stop mgmtdb

[grid@node12c02 ~]$ srvctl stop mgmtlsnr

Precaution: I will take a full backup of the MGMTDB database before starting the migration.

[grid@node12c02 ~]$ mkdir -p /u01/app/grid/backup_mgmtdb

[grid@node12c02 ~]$ export ORACLE_SID=-MGMTDB
[grid@node12c02 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:30:56 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area     521936896 bytes

Fixed Size                     2290264 bytes
Variable Size                289410472 bytes
Database Buffers             222298112 bytes
Redo Buffers                   7938048 bytes


RMAN>  backup database format '/u01/app/grid/backup_mgmtdb/rman_mgmtdb_%U' tag='bk_db_move_dg';

Starting backup at 24-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=6 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_03okm63j_1_1 tag=BK_DB_MOVE_DG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_04okm650_1_1 tag=BK_DB_MOVE_DG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-SEP-13

Migrating the SPFILE to Diskgroup MGMTDB
RMAN will restore the SPFILE and update the OCR with the new SPFILE location.
The database _MGMTDB MUST be mounted or open.

RMAN> restore spfile to "+MGMTDB";

Starting restore at 24-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=355 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring SPFILE
output file name=+MGMTDB
channel ORA_DISK_1: reading from backup piece /u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1
channel ORA_DISK_1: piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1 tag=BK_DB_MOVE_DG
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 24-SEP-13

[grid@node12c01 ~]$ srvctl config mgmtdb |grep Spfile
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969

RMAN> shutdown immediate;
database dismounted
Oracle instance shut down

Migrating the Controlfile to Diskgroup MGMTDB
P.S.: Before making this change, check which SPFILE you are updating.

[grid@node12c02 ~]$ sqlplus / as sysdba
SQL> startup nomount;
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size                  2290264 bytes
Variable Size             285216168 bytes
Database Buffers          226492416 bytes
Redo Buffers                7938048 bytes

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969


SQL> show parameter control_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953

SQL> alter system set control_files='+MGMTDB' scope=spfile ;

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> exit
[grid@node12c02 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:45:37 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area     521936896 bytes

Fixed Size                     2290264 bytes
Variable Size                289410472 bytes
Database Buffers             222298112 bytes
Redo Buffers                   7938048 bytes

RMAN> RESTORE CONTROLFILE FROM '+CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953';

Starting restore at 24-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+MGMTDB/_MGMTDB/CONTROLFILE/current.263.827005571
Finished restore at 24-SEP-13

RMAN> startup mount

database is already started
database mounted
released channel: ORA_DISK_1

The controlfile was moved successfully.

Migrating Database Files to Diskgroup MGMTDB

RMAN>  BACKUP AS COPY DEVICE TYPE DISK DATABASE FORMAT '+MGMTDB';

Starting backup at 24-SEP-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237 tag=TAG20130924T194034 RECID=1 STAMP=827005278
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
output file name=+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281 tag=TAG20130924T194034 RECID=2 STAMP=827005291
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295 tag=TAG20130924T194034 RECID=3 STAMP=827005303
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311 tag=TAG20130924T194034 RECID=4 STAMP=827005312
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
output file name=+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313 tag=TAG20130924T194034 RECID=5 STAMP=827005316
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 tag=TAG20130924T194034 RECID=6 STAMP=827005317
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=+MGMTDB/_MGMTDB/BACKUPSET/2013_09_24/nnsnf0_tag20130924t194034_0.262.827005319 tag=TAG20130924T194034 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-SEP-13

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281"
datafile 2 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295"
datafile 3 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313"
datafile 4 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311"
datafile 5 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237"

The datafiles were migrated successfully.

Checking Datafiles


RMAN> report schema;

Report of database schema for database with db_unique_name _MGMTDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +CRS_TMP/_MGMTDB/TEMPFILE/temp.266.826899969

Moving the Tempfile to Diskgroup MGMTDB

RMAN> run {
SET NEWNAME FOR TEMPFILE 1 TO '+MGMTDB';
SWITCH TEMPFILE ALL;
 }

executing command: SET NEWNAME

renamed tempfile 1 to +MGMTDB in control file

At database startup, the TEMP file will be recreated on the new Diskgroup.

Starting the database and checking.

RMAN> startup

database is already started
database opened

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name _MGMTDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +MGMTDB/_MGMTDB/TEMPFILE/temp.261.827005913

RMAN> exit
Recovery Manager complete.

Delete the old datafile copies on the old Diskgroup CRS_TMP.

RMAN> delete copy ;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK
specification does not match any archived log in the repository
List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
7       1    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845

8       2    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821

9       3    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817

10      4    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917

11      5    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881

List of Control File Copies
===========================

Key     S Completion Time Ckp SCN    Ckp Time
------- - --------------- ---------- ---------------
6       A 24-SEP-13       798766     24-SEP-13
        Name: +MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317
        Tag: TAG20130924T194034


Do you really want to delete the above objects (enter YES or NO)? yes
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845 RECID=7 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821 RECID=8 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817 RECID=9 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917 RECID=10 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881 RECID=11 STAMP=827005353
deleted control file copy
control file copy file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 RECID=6 STAMP=827005317
Deleted 6 objects

Migrating Redo Logs to Diskgroup MGMTDB
P.S.: The database must be OPEN.

[grid@node12c02 ~]$ sqlplus / as sysdba
SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   INACTIVE
         2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   CURRENT
         3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   INACTIVE

Adding New Log Members

SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 1;
Database altered.

SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 2;
Database altered.
 
SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 3;
Database altered.

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> alter system checkpoint;

Removing the Log Members from CRS_TMP

SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   ACTIVE
         1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    ACTIVE
         2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    ACTIVE
         2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   ACTIVE
         3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    CURRENT
         3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   CURRENT

6 rows selected.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955';
Database altered.
 
SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955';
Database altered.
 
SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
*
ERROR at line 1:
ORA-01609: log 3 is the current log for thread 1 - cannot drop members
ORA-00312: online log 3 thread 1:
'+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
ORA-00312: online log 3 thread 1:
'+MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479'

SQL> alter system switch logfile;

System altered.

SQL> alter system checkpoint;

System altered.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';

Database altered.

SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    CURRENT
         2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    INACTIVE
         3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    INACTIVE

The redo logs were moved successfully.

Check the configuration of resource ora.mgmtdb.
You may need to modify this resource to remove the old Diskgroup.

As grid user:

[grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
START_DEPENDENCIES=hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)
STOP_DEPENDENCIES=hard(intermediate:ora.MGMTLSNR,shutdown:ora.CRS_TMP.dg,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)

If the command above returns output, you need to update the OCR by removing the old Diskgroup from the resource definition.
If you removed the Diskgroup without updating resource ora.mgmtdb, srvctl will not be able to start resource ora.mgmtdb and will raise errors:

[grid@node12c02 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
PRCD-1012 : Failed to retrieve disk group list for database _mgmtdb.
PRCR-1035 : Failed to look up CRS resource ora.CRS_TMP.dg for _mgmtdb
PRCA-1000 : ASM Disk Group CRS_TMP does not exist
PRCR-1001 : Resource ora.CRS_TMP.dg does not exist

To fix it you need to remove the references to this resource.
Since resource ora.CRS_TMP.dg no longer exists, you must use crsctl with the force option ("-f" flag) to modify the ora.mgmtdb resource. Without it, the modification fails:

[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'"
CRS-2510: Resource 'ora.CRS_TMP.dg' used in dependency 'hard' does not exist or is not registered
CRS-2514: Dependency attribute specification 'hard' is invalid in resource 'ora.mgmtdb'
CRS-4000: Command Modify failed, or completed with errors.

Updating resource ora.mgmtdb by removing the ora.CRS_TMP.dg resource from its dependencies.


[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'" -f

[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "STOP_DEPENDENCIES='hard(intermediate:ora.MGMTLSNR,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)'" -f

[grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP

[grid@node12c02 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management

[grid@node12c02 ~]$ srvctl start mgmtdb

[grid@node12c02 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node12c02 

Adding an OCR mirror on +MGMTDB.

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +MGMTDB
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
         Device/File Name         :    +MGMTDB
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

I recommend stopping and starting your cluster by issuing (as root) "crsctl stop cluster -all" and "crsctl start cluster -all".

If all resources restart successfully, we can stop diskgroup CRS_TMP.

[grid@node12c02 ~]$ srvctl stop diskgroup -g CRS_TMP

[grid@node12c02 ~]$ srvctl status diskgroup -g CRS_TMP
Disk Group CRS_TMP is not running

Check that all resources are online. If so, you can drop this diskgroup. (The diskgroup must be mounted on the node where you issue the DROP, so start it there first.)

[grid@node12c02 ~]$ srvctl start diskgroup -g CRS_TMP -n node12c02 

[grid@node12c02 ~]$ export ORACLE_SID=+ASM1
[grid@node12c02 ~]$ sqlplus / as sysasm

SQL> DROP DISKGROUP CRS_TMP INCLUDING CONTENTS;

Diskgroup dropped.

Then restart the whole clusterware stack again and check that all resources start without issues.


What is slow: RMAN or the Media Management Library?

Backup execution is slow... where is the problem?

When we perform backups using third-party software and the database backup is slow, there is always some uncertainty about who is causing the slowness.

The Database Administrator (DBA) says it is the Media Management Library (aka MML), and the Backup Operator says it is RMAN.

To end this conflict, I will show how to identify where the problem is.

I'll use the term RMAN (as in "time RMAN spends"), but this actually means time the database spends, because RMAN is only a client. So typically it is the database that is slow, not the RMAN client.

Note: I will not diagnose what is causing the slowness; I will only help you identify whether the problem is in MML or in RMAN.

Media Management

The Oracle Media Management Layer (MML) API lets third-party vendors build a media manager, software that works with RMAN and the vendor's hardware to allow backups to sequential media devices such as tape drives. A media manager handles loading, unloading, and labeling of sequential media such as tapes.

RMAN Interaction with a Media Manager

RMAN does not issue specific commands to load, label, or unload tapes. When backing up, RMAN gives the media manager a stream of bytes and associates a unique name with this stream. When RMAN must restore the backup, it asks the media manager to retrieve the byte stream. All details of how and where that stream is stored are handled entirely by the media manager. For example, the media manager labels and keeps track of the tape and names of files on each tape, and automatically loads and unloads tapes, or signals an operator to do so.

RMAN provides a list of files requiring backup or restore to the media manager, which in turn makes all decisions regarding how and when to move the data.

Before RMAN makes a call to any of the functions in the media management API, the server posts a wait event. With these wait events it is possible to obtain the time (and the number of waits) the server has spent waiting for the call to return.

So we can calculate how much time RMAN spends waiting for the MML (e.g. writing a backup piece, or querying a backup piece filename) to process a request and return control to RMAN.

Complete list of MML wait events

Oracle 11.2 or above:

SELECT NAME
FROM   V$EVENT_NAME
WHERE  NAME LIKE '%MML%';
NAME
----------------------------------------
Backup: MML initialization
Backup: MML v1 open backup piece
Backup: MML v1 read backup piece
Backup: MML v1 write backup piece
Backup: MML v1 close backup piece
Backup: MML v1 query backup piece
Backup: MML v1 delete backup piece
Backup: MML create a backup piece
Backup: MML commit backup piece
Backup: MML command to channel
Backup: MML shutdown
Backup: MML obtain textual error
Backup: MML query backup piece
Backup: MML extended initialization
Backup: MML read backup piece
Backup: MML delete backup piece
Backup: MML restore backup piece
Backup: MML write backup piece
Backup: MML proxy initialize backup
Backup: MML proxy cancel
Backup: MML proxy commit backup piece
Backup: MML proxy session end
Backup: MML datafile proxy backup?
Backup: MML datafile proxy restore?
Backup: MML proxy initialize restore
Backup: MML proxy start data movement
Backup: MML data movement done?
Backup: MML proxy prepare to start
Backup: MML obtain a direct buffer
Backup: MML release a direct buffer
Backup: MML get base address
Backup: MML query for direct buffers

In versions prior to Oracle Database 11.2 the MML event names do not exist, because the events were renamed in 11.2 from the sbt-prefixed names to the MML names.

So, if you are using Oracle 11.1 or earlier, query V$EVENT_NAME WHERE NAME LIKE '%sbt%'.

SELECT NAME
FROM   V$EVENT_NAME
WHERE  NAME LIKE '%sbt%';

Backup: sbtinit
Backup: ssbtopen
Backup: ssbtread
Backup: ssbtwrite
Backup: ssbtbackup
.
.
.

So, let's start...
Oracle stores statistics (cumulative since the database was started) for these waits in V$SYSTEM_EVENT. I always use GV$ because administering RAC environments is very common these days.

Before starting the backup, I'll take an initial snapshot of GV$SYSTEM_EVENT by creating the table RMAN_MML_EVENT_T1.


CREATE TABLE  RMAN_MML_EVENT_T1 AS
SELECT inst_id,
  event,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT,
  TIME_WAITED_MICRO,
  sysdate as SNAPSHOT_TIME
FROM gv$system_event
WHERE event LIKE 'Backup%';

SQL> select * from RMAN_MML_EVENT_T1;

   INST_ID EVENT                                   TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO SNAPSHOT_TIME
---------- --------------------------------------- ----------- -------------- ----------- ------------ ----------------- -----------------
         1 Backup: MML initialization                      371              0       54365       146.54         543651136 08-08-12 17:11:05
         1 Backup: MML create a backup piece               450              0        4827        10.73          48270960 08-08-12 17:11:05
         1 Backup: MML commit backup piece                 450              0        7417        16.48          74172281 08-08-12 17:11:05
         1 Backup: MML shutdown                            371              0          47          .13            469267 08-08-12 17:11:05
         1 Backup: MML query backup piece                  894              0       11222        12.55         112222166 08-08-12 17:11:05
         1 Backup: MML extended initialization             371              0           0            0              3655 08-08-12 17:11:05
         1 Backup: MML delete backup piece                 444              0        5348        12.05          53480530 08-08-12 17:11:05
         1 Backup: MML write backup piece              1378078              0     3053683         2.22        3.0537E+10 08-08-12 17:11:05

8 rows selected.

I started the backup using RMAN and the MML (Tivoli Storage Manager). When the backup finishes, you can query V$RMAN_BACKUP_JOB_DETAILS to get the accurate backup time.

SELECT START_TIME,
  END_TIME,
  ROUND(INPUT_BYTES  /1024/1024/1024,2) INPUT_GBYTES ,
  ROUND(OUTPUT_BYTES /1024/1024/1024,2) OUTPUT_GBYTES,
  INPUT_TYPE,
  ELAPSED_SECONDS
FROM V$RMAN_BACKUP_JOB_DETAILS
WHERE TRUNC(START_TIME) = TRUNC(SYSDATE)
AND INPUT_TYPE LIKE 'DB%';

START_TIME        END_TIME          INPUT_GBYTES OUTPUT_GBYTES INPUT_TYPE    ELAPSED_SECONDS
----------------- ----------------- ------------ ------------- ------------- ---------------
08-08-12 17:23:44 08-08-12 17:26:38        12.85         10.06 DB FULL                   174

In my case the full backup took 174 seconds, reading 12.85 GB and writing 10.06 GB of data to the MML.

So, after the backup finishes, I take a second snapshot by creating the table RMAN_MML_EVENT_T2.


CREATE TABLE  RMAN_MML_EVENT_T2 AS
SELECT inst_id,
  event,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT,
  TIME_WAITED_MICRO,
  sysdate as SNAPSHOT_TIME
FROM gv$system_event
WHERE event LIKE 'Backup%';

SQL> select * from RMAN_MML_EVENT_T2;

   INST_ID EVENT                                   TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO SNAPSHOT_TIME
---------- --------------------------------------- ----------- -------------- ----------- ------------ ----------------- -----------------
         1 Backup: MML initialization                      373              0       54665       146.56         546652333 08-08-12 17:27:45
         1 Backup: MML create a backup piece               454              0        4860        10.71          48604759 08-08-12 17:27:45
         1 Backup: MML commit backup piece                 454              0        7482        16.48          74820999 08-08-12 17:27:45
         1 Backup: MML shutdown                            373              0          47          .13            471590 08-08-12 17:27:45
         1 Backup: MML query backup piece                  900              0       11281        12.53         112808077 08-08-12 17:27:45
         1 Backup: MML extended initialization             373              0           0            0              3665 08-08-12 17:27:45
         1 Backup: MML delete backup piece                 446              0        5373        12.05          53727006 08-08-12 17:27:45
         1 Backup: MML write backup piece              1419274              0     3067298         2.16        3.0673E+10 08-08-12 17:27:45

8 rows selected.

Now I can subtract the values in RMAN_MML_EVENT_T1 from those in RMAN_MML_EVENT_T2 to get the real time spent in the MML.
Note:
EVENT: Name of the wait event
TOTAL_WAITS: Total number of waits for the event
TOTAL_TIMEOUTS: Total number of timeouts for the event
TIME_WAITED: Total amount of time waited for the event (in hundredths of a second)
AVERAGE_WAIT: Average amount of time waited for the event (in hundredths of a second)
TIME_WAITED_MICRO: Total amount of time waited for the event (in microseconds)

SELECT t1.inst_id,
  t1.event,
  t2.total_waits       - t1.total_waits total_waits,
  t2.total_timeouts    -t1.total_timeouts total_timeouts,
  t2.time_waited       - t1.time_waited time_waited,
  t2.time_waited_micro - t1.time_waited_micro time_waited_micro
FROM RMAN_MML_EVENT_T1 T1,
  RMAN_MML_EVENT_T2 T2
WHERE t1.inst_id = t2.inst_id
AND t1.event     = t2.event;

   INST_ID EVENT                                   TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED TIME_WAITED_MICRO
---------- --------------------------------------- ----------- -------------- ----------- -----------------
         1 Backup: MML initialization                        2              0         300           3001197
         1 Backup: MML create a backup piece                 4              0          33            333799
         1 Backup: MML commit backup piece                   4              0          65            648718
         1 Backup: MML shutdown                              2              0           0              2323
         1 Backup: MML query backup piece                    6              0          59            585911
         1 Backup: MML extended initialization               2              0           0                10
         1 Backup: MML delete backup piece                   2              0          25            246476
         1 Backup: MML write backup piece                41196              0       13615         136141912

8 rows selected.

As I can see above, the MML spent most of its time writing backup pieces.

So, I'll sum the times to get the total time spent in the MML.

SELECT SUM (total_waits) total_waits,
  SUM(total_timeouts) total_timeouts ,
  SUM (time_waited)/100 time_waited_in_second,
  SUM (time_waited_micro) time_waited_micro
FROM
  (SELECT t1.inst_id,
    t1.event,
    t2.total_waits       - t1.total_waits total_waits,
    t2.total_timeouts    -t1.total_timeouts total_timeouts,
    t2.time_waited       - t1.time_waited time_waited,
    t2.time_waited_micro - t1.time_waited_micro time_waited_micro
  FROM RMAN_MML_EVENT_T1 T1,
    RMAN_MML_EVENT_T2 T2
  WHERE t1.inst_id = t2.inst_id
  AND t1.event     = t2.event
  );

TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED_IN_SECOND TIME_WAITED_MICRO
----------- -------------- --------------------- -----------------
      41218              0                140.97         140960346

Calculating the total backup window, the time spent in the MML, and the time spent by RMAN.

Note: TIME_SPEND_BY_RMAN = (ELAPSED_SECONDS_BACKUP - TIME_SPEND_BY_MML_SECOND)

ELAPSED_SECONDS_BACKUP          TIME_SPEND_BY_MML_SECOND       TIME_SPEND_BY_RMAN_SECOND
------------------------------ ------------------------------ -------------------
174                             140.97                         33.03

Summarizing:
Total backup time: 174 s
Time spent in the MML: 141 s
Time spent by RMAN: 33 s

If this backup is slow, it is because the MML takes (141*100/174) 81% of the backup window.

Additional info:
As my backup was done over the LAN:
(10.06 GB * 1024 = 10301 MB)
10301 MB / 141 s ≈ 73 MB/s

Since I'm using a 1 Gbit network interface, I can consider this normal throughput.
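The arithmetic above can be reproduced with a couple of awk one-liners (the figures are hard-coded from this particular run):

```shell
# Figures taken from the run above.
elapsed=174        # total backup window, seconds (V$RMAN_BACKUP_JOB_DETAILS)
mml_seconds=140.97 # total time spent inside the MML, seconds
output_gb=10.06    # data written to the MML, in GB

# Share of the backup window spent waiting on the MML.
mml_pct=$(awk -v m="$mml_seconds" -v e="$elapsed" 'BEGIN { printf "%.0f", m * 100 / e }')

# Average throughput while the MML was writing, in MB/s.
mb_per_sec=$(awk -v g="$output_gb" -v m="$mml_seconds" 'BEGIN { printf "%.0f", g * 1024 / m }')

echo "MML share: ${mml_pct}%  throughput: ${mb_per_sec} MB/s"
```

With these two numbers you can argue with data: if the MML share is high and the throughput is far below what your network or tape drive can sustain, the bottleneck is on the media management side.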

You can also monitor in real time where the waits are.

Just save and execute the script below:

Note: if you are using a version earlier than 11.2, change %MML% to %sbt%.

vi monitoring_mml.sh
sqlplus -s sys/<password>@<db_name> as sysdba<<EOF
set echo off
COLUMN EVENT FORMAT a40
COLUMN SECONDS_IN_WAIT FORMAT 999
COLUMN STATE FORMAT a15
COLUMN CLIENT_INFO FORMAT a30
set linesize 200

select to_char(sysdate,'dd-mm-yyyy hh24:mi:ss') actual_date from dual;

SELECT p.SPID, sw.EVENT, sw.SECONDS_IN_WAIT AS SEC_WAIT, 
       sw.STATE, CLIENT_INFO
FROM   gV\$SESSION_WAIT sw, gv\$SESSION s, gV\$PROCESS p
WHERE  sw.EVENT LIKE '%MML%'
AND    s.SID=sw.SID
AND    s.PADDR=p.ADDR;
EOF
exit

From a shell, run the loop below and you will see the MML waits in real time.

while true
do
sh monitoring_mml.sh
sleep 1
done



TIP: Some points about setting up RMAN on RAC Environment

In this post I will share some tips for setting up RMAN in a RAC environment.

I will not cover RMAN topics that apply equally to standalone environments (e.g. incremental backups, use of the FRA, etc.).

First question: is there any difference in setting up RMAN between standalone and RAC environments?

The answer is YES; not too much, but some points must be observed.

RMAN Catalog

First of all, in my view using an RMAN catalog is mandatory, because it is an HA environment and restoring a database without an RMAN catalog can take a long time.

To protect and keep backup metadata for longer retention times than can be accommodated by the control file, you can create a recovery catalog. You should create the recovery catalog schema in a dedicated standalone database. Do not locate the recovery catalog with other production data. If you use Oracle Enterprise Manager, you can create the recovery catalog schema in the Oracle Enterprise Manager repository database.

What about HA for the RMAN catalog?

I always recommend placing the host that holds the RMAN catalog on a virtual machine, because it is a machine that requires few resources and little disk space, and it has low activity.

If the RMAN catalog host fails, it is easy to move it to another physical host or to recover the whole virtual machine.

If a VM is not an option, use another cluster environment (e.g. a test cluster) if available.

The RMAN catalog database must be in ARCHIVELOG mode. Why?

It is a production environment (which is reason enough); it will generate a very small amount of archived logs; and in case of corruption or user error (e.g. a user generated a new incarnation of the production database during a backup validation test in a test environment), point-in-time recovery may be necessary.

Because it is a small, low-activity database, I see some customers not giving it any importance. That should not happen.

High availability of backup execution using RMAN:

We have some challenges:

  1.  The backup must not affect cluster availability, but it must be executed daily.
  2.  The backup cannot depend on specific nodes (i.e. it must be able to run regardless of which nodes are active).
  3.  Where should the backup scripts be stored? Where should the logs be stored?

I don't recommend using any of the cluster nodes to start or store the backup scripts, because if that node fails the backup will not be executed.
Use the host that stores the RMAN catalog to store your backup scripts as well, and start the scripts from that host; the RMAN utility works as a client only... the backup is always performed on the server side.

Doing this you centralize all backup scripts and logs for your environment, which eases backup management.
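For example, the daily job on the catalog host can be a small wrapper that keeps the RMAN command file and the logs together (all paths, connect strings, and file names here are hypothetical):

```shell
#!/bin/sh
# Hypothetical wrapper stored and scheduled (e.g. via cron) on the RMAN
# catalog host; the actual backup work still runs on the database servers.
LOGDIR=/home/oracle/rman/log
mkdir -p "$LOGDIR"
rman target sys@db11g catalog rcat@rcatdb \
     cmdfile=/home/oracle/rman/backup_full.rman \
     log="$LOGDIR/backup_db11g_$(date +%Y%m%d_%H%M%S).log"
```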

Configuring the RMAN Snapshot Control File Location in a RAC 11.2

RMAN creates a copy of the control file for read consistency; this is the snapshot controlfile. Due to the changes made to the controlfile backup mechanism in 11gR2, any instance in the cluster may write to the snapshot controlfile. Therefore, the snapshot controlfile needs to be visible to all instances.
The same happens when a backup of the controlfile is created directly from SQL*Plus: any instance in the cluster may write to the backup controlfile.
From 11gR2 onwards, the controlfile backup happens without holding the control file enqueue. For a non-RAC database, this doesn't change anything.
But for a RAC database, the snapshot controlfile location must be on a shared file system accessible from all the nodes.
The snapshot controlfile MUST be accessible by all nodes of a RAC database.

See how to do that:

https://forums.oracle.com/forums/thread.jspa?messageID=9997615

Since version 11.1: Node Affinity Awareness of Fast Connections

In some cluster database configurations, some nodes of the cluster have faster access to certain data files than to other data files. RMAN automatically detects this, which is known as node affinity awareness. When deciding which channel to use to back up a particular data file, RMAN gives preference to the nodes with faster access to the data files that you want to back up. For example, if you have a three-node cluster, and if node 1 has faster read/write access to data files 7, 8, and 9 than the other nodes, then node 1 has greater node affinity to those files than nodes 2 and 3.

Channel Connections to Cluster Instances with RMAN

Channel connections to the instances are determined using the connect string defined by channel configurations. For example, in the following configuration, three channels are allocated using dbauser/pwd@service_name. If you configure the SQL Net service name with load balancing turned on, then the channels are allocated at a node as decided by the load balancing algorithm.

However, if the service name used in the connect string is not configured for load balancing, then you can control the instance at which channels are allocated by using separate connect strings for each channel configuration. Bear in mind that your backup script will then fail if that node/instance is down.

So, my recommendation in an admin-managed database environment is to define a set of nodes to perform the backup.

E.g.: if you have 3 nodes, you should use one or two nodes to perform the backup, leaving the other node less loaded. If you are using load balancing in your connection, each new connection will be directed to the least loaded node.

Autolocation for Backup and Restore Commands

RMAN automatically performs autolocation of all files that it must back up or restore. If you use the noncluster file system local archiving scheme, then a node can only read the archived redo logs that were generated by an instance on that node. RMAN never attempts to back up archived redo logs on a channel it cannot read.

During a restore operation, RMAN automatically performs the autolocation of backups. A channel connected to a specific node only attempts to restore files that were backed up to the node. For example, assume that log sequence 1001 is backed up to the drive attached to node1, while log 1002 is backed up to the drive attached to node2. If you then allocate channels that connect to each node, then the channel connected to node1 can restore log 1001 (but not 1002), and the channel connected to node2 can restore log 1002 (but not 1001).

Configuring Channels to Use Automatic Load Balancing

To configure channels to use automatic load balancing, use the following syntax:

CONFIGURE DEVICE TYPE [disk | sbt] PARALLELISM number_of_channels;

Where number_of_channels is the number of channels that you want to use for the operation. After you complete this one-time configuration, you can issue BACKUP or RESTORE commands.

Setting up parallelism in RMAN is not enough to keep the load balanced, because if you start the backup from a remote host using the default SERVICE_NAME and you are using parallelism, RMAN can start a session on each node and the backup will be performed by all nodes at the same time. This is not a problem in itself, but it can cause performance issues in your environment due to the high load.

Even at night the backup can cause performance problems, because of database maintenance tasks (statistics gathering, verification of new SQL plans by the automatic SQL tuning sets, etc.).

The bottleneck is usually the LAN or the SAN, so using all nodes to perform the backup can be a waste. If the backup runs over the LAN you may gain by using more than one node, but the server receiving the backup data will become the bottleneck.

I really don't like using more than 50% of the RAC nodes to execute backups, because it increases the workload across the whole cluster and that can be a problem for the application or the database.

So, to prevent this, we can configure a database service to control where the backup will be performed.

Creating a Database Service to perform Backup

Before starting, I should explain some limitations of database services.

Some points about Oracle services:

When a user or application connects to a database, Oracle recommends that you use a service for the connection. Oracle Database automatically creates one database service (the default service always matches the database name) when the database is created. For more flexibility in managing the workload, Oracle Database enables you to create multiple services and specify which database instances offer them.

You can define services for both policy-managed and administrator-managed databases.

  • Policy-managed database: When you define services for a policy-managed database, you assign the service to a server pool where the database is running. You can define the service as either uniform (running on all instances in the server pool) or singleton (running on only one instance in the server pool).
  • Administrator-managed database: When you define a service for an administrator-managed database, you define which instances normally support that service. These are known as the PREFERRED instances. You can also define other instances to support a service if the preferred instance fails. These are known as AVAILABLE instances.

About Service Failover in Administrator-Managed Databases

When you specify a preferred instance for a service, the service runs on that instance during normal operation. Oracle Clusterware attempts to ensure that the service always runs on all the preferred instances that have been configured for a service. If the instance fails, then the service is relocated to an available instance. You can also manually relocate the service to an available instance.

About Service Failover in Policy-Managed Databases

When you specify that a service is UNIFORM, Oracle Clusterware attempts to ensure that the service always runs on all the available instances for the specified server pool. If the instance fails, then the service is no longer available on that instance. If the cardinality of the server pool increases and an instance is added to the database, then the service is started on the new instance. You cannot manually relocate the service to a specific instance.

When you specify that a service is SINGLETON, Oracle Clusterware attempts to ensure that the service always runs on only one of the available instances for the specified server pool. If the instance fails, then the service fails over to a different instance in the server pool. You cannot specify which instance in the server pool the service should run on.

For SINGLETON services, if a service fails over to a new instance, then the service is not moved back to its original instance when that instance becomes available again.

Summarizing the use of services

If your database is administrator-managed, we can create a service and define where the backup will be executed, and how many nodes it can use, via preferred and available instances.

If your database is policy-managed, we cannot define where the backup will be executed, but we can configure a SINGLETON service, which ensures the backup runs on only one node; if that node fails, the service is moved to another available node, but we cannot choose which node the backup will be performed on.

Note:

For connections to the target and auxiliary databases, the following rules apply:

Starting with 10gR2, these connections can use a connect string that does not bind to any particular instance, which means you can use load balancing.

Once a connection is established, however, it must be a dedicated connection that cannot migrate to any other process or instance. This means you still cannot use shared server (MTS) or TAF.

Example creating service for Administrator-Managed Database

The backup will be executed on db11g2 and db11g3, but it can run on db11g1 if db11g2 and db11g3 fail.

Set ORACLE_HOME to the same one used by the database.

$ srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P TAF_policy]

$ srvctl add service -d db11g -s srv_rman -r db11g2,db11g3 -a db11g1 -P NONE -j LONG

$ srvctl start service -d db11g -s srv_rman

Example creating service for Policy-Managed Database

Using a SINGLETON service, the backup will be executed on the node where the service was started/assigned. The service will be moved to another host only if that node fails.

Set ORACLE_HOME to the same one used by the database.

$ srvctl config database -d db11g |grep "Server pools"

Server pools: racdb11gsp

$ srvctl add service -d db11g -s srv_rman -g racdb11gsp -c SINGLETON

$ srvctl start service -d db11g -s srv_rman
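With a service in place (either example above), the backup script on the RMAN catalog host simply connects through it, so channels are only allocated on the instances backing the service. A minimal sketch (the catalog connect string is hypothetical; the service and database names are from the examples above):

```shell
# Hypothetical wrapper, run from the RMAN catalog host.
# Connecting through srv_rman means channels land only on the
# preferred/available instances of the service.
rman target sys@srv_rman catalog rcat@rcatdb <<EOF
CONFIGURE DEVICE TYPE sbt PARALLELISM 2;
BACKUP DATABASE PLUS ARCHIVELOG;
EOF
```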

If you have more than 2 nodes in the cluster (with a policy-managed database) and you want to use only 2 or more of them to perform the backup, you can choose from the options below.

Configure a UNIFORM service (the service will be available on all nodes). You can control how many instances will be used to perform the backup, but you cannot choose which nodes the backup will be performed on. In fact the service does not control anything; you simply set PARALLELISM (in RMAN) equal to the number of nodes you want to use.

E.g.: I have 4 nodes but I want to start the backup on 2 nodes, so I must choose parallelism 2. Remember that Oracle can start 2 channels on the same host; this depends on the workload of each node.

When using a policy-managed database, you should accept that you do not control which node each instance runs on; you have a pool with many nodes and Oracle manages all resources inside that pool. For this reason it is not possible to control where you place a heavier load.

This will only work if you are performing online backups or using parallel backup.

Configuring RMAN to Automatically Backup the Control File and SPFILE

If you set CONFIGURE CONTROLFILE AUTOBACKUP to ON, then RMAN automatically creates a control file and an SPFILE backup after you run the BACKUP or COPY commands. RMAN can also automatically restore an SPFILE, if this is required to start an instance to perform recovery, because the default location for the SPFILE must be available to all nodes in your Oracle RAC database.

These features are important in disaster recovery because RMAN can restore the control file even without a recovery catalog. RMAN can restore an autobackup of the control file even after the loss of both the recovery catalog and the current control file. You can change the default name that RMAN gives to this file with the CONFIGURE CONTROLFILE AUTOBACKUP FORMAT command. Note that if you specify an absolute path name in this command, then this path must exist identically on all nodes that participate in backups.

RMAN performs the control file autobackup on the first allocated channel. Therefore, when you allocate multiple channels with different parameters, especially when you allocate a channel with the CONNECT command, determine which channel will perform the control file autobackup. Always allocate the channel for this node first.

Enjoy..


Tip & Hint: DB_CREATE_FILE_DEST Behavior when using ASM

Recently I saw a database administrator having trouble during a database restore using RMAN and ASM, when the database is located in more than one diskgroup.
Situation:
The datafiles and redo logs were created in two different diskgroups.
It was necessary to perform a restore of the database, so the DBA removed all datafiles from both diskgroups and executed the restore.
During the restore, RMAN reported restoring the datafiles to the correct diskgroups, showing the original path/file at the RMAN prompt, but at the end of the restore all datafiles had been restored to a single diskgroup: they were moved/renamed to the diskgroup specified in the parameter DB_CREATE_FILE_DEST.
Question: is this normal behavior?
Yes. When using ASM and Oracle parameters such as DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n, the DBA must be careful to either use all OMF files or keep track of the files not created as OMF. There is an order in which files are created with these parameters.
By default, when you create a database with DBCA storing datafiles in ASM, the parameter DB_CREATE_FILE_DEST is set during database creation; the parameter DB_CREATE_ONLINE_LOG_DEST_n is not set.

Datafiles Behavior

The restore behavior is the same as the file creation behavior for OMF files. When OMF files are restored, they follow an order of precedence. That order is:

  1. If "SET NEWNAME" is specified, RMAN will use that name for the restore.
  2. If the original file exists, RMAN will use the original filename for the restore.
  3. If DB_CREATE_FILE_DEST is set and the original file does not exist, RMAN will use the diskgroup specified in DB_CREATE_FILE_DEST.
  4. If DB_CREATE_FILE_DEST is not set and the original file does not exist, then RMAN will create a new name for the file in the original diskgroup.

So, if you do not want surprises during or after the restore, you must unset the parameter DB_CREATE_FILE_DEST before starting the restore.

alter system set DB_CREATE_FILE_DEST='' scope=memory;

After the restore/recover (i.e. when opening the database) you can also be surprised if DB_CREATE_FILE_DEST is not set.

So, I recommend setting DB_CREATE_FILE_DEST after RECOVER DATABASE and before "OPEN RESETLOGS" (or setting DB_CREATE_ONLINE_LOG_DEST_n), because during the "open resetlogs" Oracle will create the online log files if they do not exist.
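A sketch of that ordering inside a single RMAN session (the +DATA diskgroup name is hypothetical; the doubled single quotes are RMAN's escaping for values inside the embedded SQL):

```shell
rman target / <<EOF
STARTUP MOUNT;
# Leave DB_CREATE_FILE_DEST unset so datafiles are restored to their
# original diskgroups.
SQL "alter system set db_create_file_dest='''' scope=memory";
RESTORE DATABASE;
RECOVER DATABASE;
# Set it again (or DB_CREATE_ONLINE_LOG_DEST_n) before the resetlogs,
# which is when the online logs are re-created.
SQL "alter system set db_create_file_dest=''+DATA'' scope=memory";
ALTER DATABASE OPEN RESETLOGS;
EOF
```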

Online Logs (Redo) Behavior

When OMF online log files are re-created, they follow an order of precedence. That order is:

  1. If the original file exists, Oracle will reuse the original filename.
  2. If DB_CREATE_ONLINE_LOG_DEST_n is set and the original file does not exist, Oracle will use the diskgroup specified in DB_CREATE_ONLINE_LOG_DEST_n.
  3. If DB_CREATE_ONLINE_LOG_DEST_n is not set, the original file does not exist, and both DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST (FRA) are set, Oracle will use the diskgroups specified in both (DB_CREATE_FILE_DEST & DB_RECOVERY_FILE_DEST) to multiplex the online logs.
  4. If neither DB_CREATE_ONLINE_LOG_DEST_n nor DB_RECOVERY_FILE_DEST is set, the original file does not exist, and DB_CREATE_FILE_DEST is set, Oracle will use the diskgroup specified in DB_CREATE_FILE_DEST.
  5. If neither DB_CREATE_ONLINE_LOG_DEST_n nor DB_CREATE_FILE_DEST is set, the original file does not exist, and DB_RECOVERY_FILE_DEST is set, Oracle will use the diskgroup specified in DB_RECOVERY_FILE_DEST.
  6. If none of DB_CREATE_ONLINE_LOG_DEST_n, DB_CREATE_FILE_DEST, and DB_RECOVERY_FILE_DEST is set and the original file does not exist, Oracle will raise "ORA-19801: initialization parameter DB_RECOVERY_FILE_DEST is not set" during "alter database open resetlogs".
Hope this helps.

Enjoy…


Enhancing Oracle Database Protection With EMC RecoverPoint

This paper explains the benefits of using EMC RecoverPoint local and remote replication to provide operation and disaster recovery of Oracle environments.

RecoverPoint provides crash-consistent and application-consistent recovery points that can be utilized in response to a number of possible scenarios, enhancing the native availability within an Oracle environment. Oracle supports third-party enterprise replication technologies to protect Oracle environments.

EMC RecoverPoint provides full support for data replication and disaster recovery when working with Oracle databases. RecoverPoint supports Oracle whether the databases are stored on raw disks or on a file system.

Click link below

recoverpoint_oracle_db_protect_wp

“RecoverPoint EMC”

Enjoy


Leveraging DS8000 Series Advanced Copy Services for Oracle User-Managed Backup and Recovery

This white paper documents the use of IBM® System Storage™ DS8000™ Series Advanced Copy Services for backing up and recovering Oracle Real Application Clusters 10g (RAC) databases using Oracle Automatic Storage Management (ASM). Both Oracle RAC and ASM technologies introduce some differences with regard to Oracle Backup and Recovery when using IBM FlashCopy® technologies, as compared to single-instance, filesystem-based Oracle configurations. Deploying DS8000 Advanced Copy Services can be highly effective in any production shop that runs Oracle RAC 10g databases on ASM.

This white paper shows the robustness, speed, and ease with which you can back up, restore, and recover databases, regardless of their size.
The procedures are based on an Oracle User Group white paper that is titled Backup and Recovery Generic Guidelines with Point-in-Time Copy Solutions and Automatic Storage Management (located at: http://ioug.itconvergence.com/pls/apex/CILOUG.download_my_file?p_file=403).
This white paper demonstrates the use of Advanced Copy Services FlashCopy and Metro Mirror to do the following:

• Backup and restore an Oracle RAC database with Oracle Database 10g ASM
• Clone an Oracle RAC database with Oracle ASM to a local Oracle RAC
• Clone an Oracle RAC database with Oracle ASM to a remote DS8000 and Oracle RAC
• Backup and recover the Oracle RAC cluster services information

For demonstration purposes, 4-node Oracle RAC clusters were used. The process was repeated on the Linux and AIX® operating systems to demonstrate the operating-system (OS)-independence of the procedures. On all of the test beds, Oracle Clusterware, ASM and database binaries were installed in separate local directories. On the DS8000, volumes were allocated for the Oracle Cluster Registry, voting disks and the ASM diskgroups. The volumes were created by using a consistent naming convention for understanding the relationships between DS8000 volumes, DS8000 copy relationships and the ASM diskgroups. One DS8000 volume was allocated for each Oracle RAC cluster to serve as the mountable backup disk. Consistent naming allows the user to know at a glance what portion of the database is being acted upon in the backup and recovery scenarios.

Click on link below:

Ora DB RAC ASM Bkup Recovery using DS8k Adv Copy Services

Enjoy