RHEL/OEL 7 – NTPD replaced by Chrony

Despite configuring NTP during the installation process, NTPD is not installed/configured/running after the installation completes.


On a fresh installation of RHEL/OEL 7, the default NTP client is chrony.

What is chrony?
Chrony was introduced as the new NTP client, provided in the chrony package. Chrony does not provide all of the features available in the old NTP client (ntp), so ntp is still shipped for compatibility.

During Oracle Grid Infrastructure installation, runInstaller will fail the prerequisite check because NTP is not configured:

NTP not configured. Network Time Protocol (NTP)  - Failed 
PRVF-7590 : "ntpd" is not running on node "xxx"

Oracle recommends using NTPD and disabling chronyd.

Follow the steps below to install, configure, and enable NTPD.

1. Check Chronyd service

# systemctl status chronyd.service
● chronyd.service - NTP client/server
 Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
 Active: active (running) since Mon 2016-05-30 01:15:33 BRT; 5s ago
 Process: 19464 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
 Process: 19456 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 19459 (chronyd)
 CGroup: /system.slice/chronyd.service
 └─19459 /usr/sbin/chronyd

May 30 01:15:33 node1 systemd[1]: Starting NTP client/server...
May 30 01:15:33 node1 chronyd[19459]: chronyd version 2.1.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +DEBUG +ASYNCDNS +IPV6 +SECHASH)
May 30 01:15:33 node1 chronyd[19459]: Frequency -23.731 +/- 0.023 ppm read from /var/lib/chrony/drift
May 30 01:15:33 node1 systemd[1]: Started NTP client/server.

2. Stop and disable Chronyd service

# systemctl stop chronyd.service
# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.

3. Install ntpd package

# yum install ntp -y

4. Add the "-x" option to the "/etc/sysconfig/ntpd" file. The "-x" flag makes ntpd slew (gradually adjust) the clock rather than stepping it, which is what Oracle Clusterware expects.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

5. Enable and Start NTPD

# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.


# systemctl start ntpd.service

# systemctl status ntpd.service
● ntpd.service - Network Time Service
 Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
 Active: active (running) since Mon 2016-05-30 01:23:09 BRT; 9s ago
 Process: 23048 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 23050 (ntpd)
 CGroup: /system.slice/ntpd.service
 └─23050 /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

May 30 00:27:07 node1 ntpd[2829]: Listen normally on 10 virbr0 192.168.122.1 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 11 lo ::1 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 12 eno16777984 fe80::250:56ff:fe90:21e9 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 13 eno33559296 fe80::250:56ff:fe90:d421 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 14 eno50338560 fe80::250:56ff:fe90:f203 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listening on routing socket on fd #31 for interface updates
May 30 00:27:07 node1 systemd[1]: Started Network Time Service.
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c016 06 restart
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c012 02 freq_set ntpd 0.000 PPM
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c011 01 freq_not_set

Known issues that are resolved by the procedure above:

# ntpq -p
localhost: timed out, nothing received
***Request timed out
# ntpstat 
Unable to talk to NTP daemon.  Is it running?

IBM POWER7 AIX and Oracle Database performance considerations


This post is intended to provide DBAs with advice and website links to assist with IBM Power Systems running in an Oracle environment.
Most issues that show up on POWER7 are the result of not following best-practice advice that applies to all Power Systems generations.

Metalink: IBM POWER7 AIX and Oracle Database performance considerations -- 10g & 11g (Doc ID 1507249.1)

IBM White paper: IBM POWER7 AIX and Oracle Database performance considerations


How to move/migrate OCR, Voting Disk, ASM (SPFILE and PWDFILE) and MGMTDB to a different Diskgroup on ASM 12.1.0.1

_______________________________________
Last Update (29/09/2014):
Oracle released the note: How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)

The procedure in that note (Doc ID 1589394.1) does not MOVE the Management Repository; it RECREATES it.
The procedure below in this post MOVES it (migrating all data) from one diskgroup to another.

_______________________________________

This post is a recommendation on how to configure the clusterware files and MGMTDB on ASM using Grid Infrastructure 12.1.0.1.

As of 25/09/2013, migrating MGMTDB to a different diskgroup is not documented. Oracle should release a note covering this procedure soon.

When we install/configure a fresh Oracle GI for a cluster, the cluster files and MGMTDB are stored on the first diskgroup created at installation time.

To ease management and increase the availability of the cluster, it is recommended to set up a small diskgroup dedicated to the cluster files.

Related note: ASM Spfile stuck on Diskgroup : ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]

The cluster files are:

Oracle Cluster Registry (OCR)

The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. The OCR also manages information about Oracle Clusterware resource profiles for customized applications.

The OCR is stored in a diskgroup in the same way database files are stored. The extents are spread across all disks of the diskgroup, and the redundancy (which is at the extent level) is based on the redundancy of the disk group. We can store only one OCR per diskgroup.

Voting Disk

Voting disk, aka voting files: a file that manages information about node membership.

The voting disks are placed directly on the ASM disks/LUNs. Oracle Clusterware stores the voting files on the disks within the disk group that holds them, but it does not rely on ASM to access them; that means Oracle Clusterware does not need the diskgroup to be mounted in order to read and write the voting files on the ASM disks.
You cannot find/list voting files using SQL*Plus (v$asm_diskgroup, v$asm_files, v$asm_alias), the ASMCMD utility, or the ASMCA GUI.
You only know whether a voting file exists on an ASM disk via v$asm_disk (column VOTING_FILE). So, although voting files do not depend on the diskgroup to be accessed, that does not mean we don't need the diskgroup: the diskgroup and the voting files are linked by their settings and redundancy.
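
A quick check, for example (run against the ASM instance):

SQL> SELECT PATH, VOTING_FILE
     FROM   V$ASM_DISK
     WHERE  VOTING_FILE = 'Y';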

We cannot place voting disks in multiple diskgroups; only one diskgroup can be used to store the voting disks.

ASM Spfile

This file is the parameter file of the ASM instance.

The SPFILE is stored in the diskgroup the same way the OCR and database files are.

ASM Password File

A new feature of ASM 12.1 allows storing the database or ASM password file on an ASM diskgroup.

An individual password file for Oracle Database or Oracle ASM can reside on a designated Oracle ASM disk group. Having the password files reside on a single location accessible across the cluster reduces maintenance costs and situations where passwords become out of sync.

The COMPATIBLE.ASM disk group attribute must be set to at least 12.1 for the disk group where the password file is to be located.
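
For example, to check and, if necessary, raise the attribute on an existing diskgroup (the diskgroup name here is illustrative):

SQL> SELECT g.name, a.value
     FROM   v$asm_attribute a, v$asm_diskgroup g
     WHERE  a.group_number = g.group_number
     AND    a.name = 'compatible.asm';

SQL> ALTER DISKGROUP OCR_VOTE SET ATTRIBUTE 'compatible.asm' = '12.1';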

Database MGMTDB

The Management Repository is a new feature of GI 12.1.

The Management Repository is a single-instance database managed by Oracle Clusterware in 12c. If the option is selected during GI installation, the database is configured and managed by GI. As it is a single-instance database, it runs on one node of the cluster; as it is managed by GI, if the hosting node goes down the database is automatically failed over to another node.

The management database by default uses the same shared storage as the OCR/voting files.

There is no procedure in the 12.1 documentation or in Metalink notes to change the location of MGMTDB (as of 25/09/2013).

My recommendations:

Create two diskgroups:

+OCR_VOTE

Even if you use RAID at the storage layer, create a small diskgroup with 3 ASM disks using normal redundancy. This diskgroup will store the voting disk, OCR, ASM SPFILE, and ASM password file.

The reason to create this diskgroup with normal redundancy is to provide availability in case of the loss of one voting disk or one ASM disk, caused by human error, storage failure, or host (controller) failure.

+MGMTDB

This diskgroup can be created using external redundancy if you are using RAID at the storage layer.

This diskgroup will store MGMTDB and the OCR mirror.

Important:  To perform the changes below all nodes in the cluster must be active. 

Let's Play:

Checking that all nodes are active.

[grid@node12c02 ~]$ olsnodes -s
node12c01       Active
node12c02       Active

Use OCRCHECK to find where the OCR is stored.

[grid@node12c02 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1548
         Available space (kbytes) :     408020
         ID                       : 1037914462
         Device/File Name         :   +CRS_TMP
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

Use crsctl to find where the voting disk is stored.

[grid@node12c02 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0d2d3b5d705a4fefbf8824c99251ac5a (/dev/oracleasm/disks/OCR_TEMP) [CRS_TMP]
Located 1 voting disk(s).

Use asmcmd to find where the ASM SPFILE and ASM password file are stored.

[grid@node12c02 ~]$ asmcmd spget
+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607

[grid@node12c02 ~]$ asmcmd pwget --asm
+CRS_TMP/crs12c/orapwASM

Getting info about the voting disk on ASM.

Connect to the ASM instance as SYSDBA:

SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A20
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
FROM V$ASM_DISKGROUP
WHERE NAME='CRS_TMP');

			 
NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
CRS_TMP_0000             /dev/oracleasm/disks/OCR_TEMP  MEMBER               CRS_TMP_0000         REGULAR              Y

Get the name of the cluster.

[grid@node12c02 ~]$ olsnodes -c
crs12c

List the cluster files stored in ASM (run in SQL*Plus as SYSDBA; it prompts for the cluster name):

set linesize 100
col FILES_OF_CLUSTER for a60
select concat('+'||gname, sys_connect_by_path(aname, '/')) FILES_OF_CLUSTER
from ( select b.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex , a.system_created, a.alias_directory,
c.type file_type
from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
where a.group_number = b.group_number
and a.group_number = c.group_number(+)
and a.file_number = c.file_number(+)
and a.file_incarnation = c.incarnation(+)
) WHERE file_type in ( 'ASMPARAMETERFILE','PASSWORD','OCRFILE')
start with (mod(pindex, power(2, 24))) = 0
and rindex in
( select a.reference_index
from v$asm_alias a, v$asm_diskgroup b
where a.group_number = b.group_number
and (mod(a.parent_index, power(2, 24))) = 0
and a.name = LOWER('&CLUSTERNAME')
)
connect by prior rindex = pindex;

	
Enter value for clustername: crs12c
old  17:                         and a.name = LOWER('&CLUSTERNAME')
new  17:                         and a.name = LOWER('crs12c')

FILES_OF_CLUSTER
------------------------------------------------------------
+CRS_TMP/crs12c/ASMPARAMETERFILE/REGISTRY.253.826898607
+CRS_TMP/crs12c/OCRFILE/REGISTRY.255.826898619
+CRS_TMP/crs12c/orapwasm

Creating Diskgroup OCR_VOTE

When creating diskgroup OCR_VOTE, each disk must be in a different failgroup. I don't add a QUORUM failgroup because these LUNs are on the storage array. Use a QUORUM failgroup when you are placing a disk outside your environment (e.g., an NFS file-backed disk for quorum purposes), because such disks cannot contain data.
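
For reference, a hypothetical variant using an NFS file-backed quorum disk would look like this (the quorum path is illustrative):

SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
     FAILGROUP controller01 DISK '/dev/asm-ocr_vote1'
     FAILGROUP controller02 DISK '/dev/asm-ocr_vote2'
     QUORUM FAILGROUP quorum01 DISK '/nfs/quorum/vote_disk01'
     ATTRIBUTE 'compatible.asm' = '12.1';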

List the ASM disks:
SQL>
col path for a30
col name for a20
col header_status for a20
select path,name,header_status from v$asm_disk
where path like '/dev/asm%';

	 
 PATH                           NAME                 HEADER_STATUS
------------------------------ -------------------- --------------------
/dev/asm-ocr_vote3                                  CANDIDATE
/dev/asm-ocr_vote1                                  CANDIDATE
/dev/asm-ocr_vote2                                  CANDIDATE          
	
SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
     FAILGROUP controller01 DISK '/dev/asm-ocr_vote1'
     FAILGROUP controller02 DISK '/dev/asm-ocr_vote2'
     FAILGROUP controller03 DISK '/dev/asm-ocr_vote3'
     ATTRIBUTE 
	 'au_size'='1M',
	 'compatible.asm' = '12.1';

Diskgroup created.

Mount diskgroup OCR_VOTE on the other nodes.

SQL>  ! srvctl start diskgroup -g ocr_vote -n node12c01

Check the status of diskgroup OCR_VOTE on all nodes.
The diskgroup OCR_VOTE must be mounted on all hub nodes.
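
You can also confirm from ASM itself that the diskgroup is mounted on every instance; a quick check (assuming a two-node cluster) is:

SQL> SELECT inst_id, name, state
     FROM   gv$asm_diskgroup
     WHERE  name = 'OCR_VOTE';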

	
SQL> ! srvctl status diskgroup -g ocr_vote
Disk Group ocr_vote is running on node12c01,node12c02

Moving Voting Disk to Diskgroup OCR_VOTE

As the grid user, issue:

[grid@node12c01 ~]$  crsctl replace votedisk +OCR_VOTE
Successful addition of voting disk be10b93cca734f65bfa6b89cc574d485.
Successful addition of voting disk 46a9b6154c184f89bf6a503846ea3aa3.
Successful addition of voting disk 6bc41f75098d4f5dbf97783770760445.
Successful deletion of voting disk 0d2d3b5d705a4fefbf8824c99251ac5a.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4266: Voting file(s) successfully replaced

Query the CSS votedisk on all nodes.

[grid@node12c01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
 2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
 3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
Located 3 voting disk(s).

[grid@node12c02 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
 2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
 3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
Located 3 voting disk(s).

Query the ASM disks on ASM.
SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
FROM V$ASM_DISKGROUP
WHERE NAME='OCR_VOTE');

	
NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
OCR_VOTE_0002            /dev/asm-ocr_vote3             MEMBER               CONTROLLER03         REGULAR              Y
OCR_VOTE_0001            /dev/asm-ocr_vote2             MEMBER               CONTROLLER02         REGULAR              Y
OCR_VOTE_0000            /dev/asm-ocr_vote1             MEMBER               CONTROLLER01         REGULAR              Y

The voting disk was moved successfully.

Moving OCR to Diskgroup OCR_VOTE

As the root user, issue:

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :   +CRS_TMP
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

Add an OCR location on OCR_VOTE.

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +OCR_VOTE	

Now we should see two OCR locations, one in each diskgroup.

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :   +CRS_TMP
                                    Device/File integrity check succeeded
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

Delete the OCR location from diskgroup CRS_TMP.

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -delete +CRS_TMP

Now we should see only one OCR location, on diskgroup OCR_VOTE.

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

Check with OCRCHECK on the other nodes.

[root@node12c01 ~]#  /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

The OCR was moved successfully.

Moving the ASM SPFILE
You will get an error saying the file is still in use, but the file is copied to the new diskgroup and the profile is updated.

[grid@node12c01 ~]$ asmcmd spmove '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' '+OCR_VOTE/crs12c/spfileASM.ora'
ORA-15032: not all alterations performed
ORA-15028: ASM file '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)

Checking that the file was copied and the profile updated:

[grid@node12c01 ~]$ asmcmd spget
+OCR_VOTE/crs12c/spfileASM.ora

Moving the ASM password file

[grid@node12c01 ~]$ asmcmd pwmove --asm +CRS_TMP/orapwASM +OCR_VOTE/crs12c/orapwASM
moving +CRS_TMP/orapwASM -> +OCR_VOTE/crs12c/orapwASM

Checking that the file was moved and the profile updated:

[grid@node12c01 ~]$ asmcmd pwget --asm
+OCR_VOTE/crs12c/orapwASM

[grid@node12c01 ~]$ srvctl config asm
ASM home: /u01/app/12.1.0/grid
Password file: +OCR_VOTE/crs12c/orapwASM
ASM listener: LISTENER

After these changes, restart the cluster to release the old ASM SPFILE.
You can restart one node at a time if you wish.

You DON'T need to stop the whole cluster to release the ASM SPFILE.
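
For the rolling, one-node-at-a-time alternative, a minimal sketch (run as root on each node in turn, waiting for the stack to come back before moving on) is:

[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl stop crs
[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl start crs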

As my cluster is a test cluster, I will stop the cluster on all nodes.

[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'node12c01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node12c01'
CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c01'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node12c01'
CRS-2677: Stop of 'ora.scan1.vip' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'node12c02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c02'
CRS-2673: Attempting to stop 'ora.cvu' on 'node12c02'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c02'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node12c02'
CRS-2673: Attempting to stop 'ora.node12c02.vip' on 'node12c02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node12c02'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'node12c02'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node12c02'
CRS-2677: Stop of 'ora.cvu' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node12c02'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.node12c02.vip' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.node12c01.vip' on 'node12c02'
CRS-2677: Stop of 'ora.scan2.vip' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.node12c01.vip' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'node12c02'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c01'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c02'
CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c02'
CRS-2673: Attempting to stop 'ora.ons' on 'node12c01'
CRS-2677: Stop of 'ora.ons' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c01'
CRS-2677: Stop of 'ora.net1.network' on 'node12c01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c01' has completed
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node12c02'
CRS-2677: Stop of 'ora.ons' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c02'
CRS-2677: Stop of 'ora.net1.network' on 'node12c02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c02' has completed
CRS-2677: Stop of 'ora.crsd' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c02'
CRS-2673: Attempting to stop 'ora.evmd' on 'node12c02'
CRS-2673: Attempting to stop 'ora.storage' on 'node12c02'
CRS-2677: Stop of 'ora.storage' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
CRS-2677: Stop of 'ora.crsd' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c01'
CRS-2673: Attempting to stop 'ora.evmd' on 'node12c01'
CRS-2673: Attempting to stop 'ora.storage' on 'node12c01'
CRS-2677: Stop of 'ora.storage' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
CRS-2677: Stop of 'ora.evmd' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node12c01'
CRS-2677: Stop of 'ora.cssd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c02'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node12c02'
CRS-2677: Stop of 'ora.cssd' on 'node12c02' succeeded

Start the cluster on all nodes.

[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all
CRS-2672: Attempting to start 'ora.evmd' on 'node12c01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c01'
CRS-2672: Attempting to start 'ora.evmd' on 'node12c02'
CRS-2676: Start of 'ora.cssdmonitor' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node12c01'
CRS-2672: Attempting to start 'ora.diskmon' on 'node12c01'
CRS-2676: Start of 'ora.diskmon' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c02'
CRS-2676: Start of 'ora.evmd' on 'node12c01' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node12c02'
CRS-2672: Attempting to start 'ora.diskmon' on 'node12c02'
CRS-2676: Start of 'ora.evmd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.diskmon' on 'node12c02' succeeded
CRS-2676: Start of 'ora.cssd' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node12c02'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c02'
CRS-2676: Start of 'ora.cssd' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node12c01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c01'
CRS-2676: Start of 'ora.ctssd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.ctssd' on 'node12c01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node12c02'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node12c01'
CRS-2676: Start of 'ora.asm' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node12c01'
CRS-2676: Start of 'ora.asm' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node12c02'
CRS-2676: Start of 'ora.storage' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node12c02'
CRS-2676: Start of 'ora.crsd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.storage' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node12c01'
CRS-2676: Start of 'ora.crsd' on 'node12c01' succeeded

Migrate MGMTDB to another diskgroup.

MGMTDB is a "common" Oracle RDBMS (dedicated to CHM), so we can perform administrative tasks on it just like on any other Oracle RDBMS.
To migrate MGMTDB to another diskgroup I will use RMAN.
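
In outline, the RMAN-based move below follows these steps (each is shown in detail afterwards):

# 1. stop mgmtdb and mgmtlsnr              (srvctl)
# 2. take a full backup of _MGMTDB         (rman: backup database)
# 3. restore the spfile to +MGMTDB         (rman: restore spfile to "+MGMTDB")
# 4. restore the controlfile to +MGMTDB    (set control_files, rman: restore controlfile)
# 5. copy the datafiles and switch         (rman: backup as copy + switch database to copy)
# 6. move the tempfile and redo logs       (set newname for tempfile / add and drop log members)
# 7. fix ora.mgmtdb dependencies and start (crsctl modify res / srvctl start mgmtdb)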

Creating diskgroup MGMTDB

SQL> CREATE DISKGROUP MGMTDB EXTERNAL REDUNDANCY
     DISK '/dev/asm-mgmtdb1'
     ATTRIBUTE 
	 'au_size'='1M',
	 'compatible.asm' = '12.1',
	 'compatible.rdbms' = '12.1';			 
			 
Diskgroup created.

Starting diskgroup MGMTDB on the other node and checking its status

SQL> ! srvctl start diskgroup -g MGMTDB -n node12c02

SQL> ! srvctl status diskgroup -g MGMTDB
Disk Group MGMTDB is running on node12c01,node12c02

To manage MGMTDB you must use the srvctl utility:
srvctl start/stop/config/modify... mgmtdb
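
For example, as the grid user:

srvctl status mgmtdb
srvctl stop mgmtdb
srvctl start mgmtdb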

Getting the current configuration of MGMTDB:

[grid@node12c01 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +CRS_TMP/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management

Check where MGMTDB is running.

[grid@node12c01 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node12c02

You must run the migration from the host (node) where MGMTDB is running.

Let's start.
Stop the MGMT listener and MGMTDB:

[grid@node12c02 ~]$ srvctl stop mgmtdb

[grid@node12c02 ~]$ srvctl stop mgmtlsnr

Precaution: I will take a full backup of the MGMTDB database before starting the migration.

[grid@node12c02 ~]$ mkdir -p /u01/app/grid/backup_mgmtdb

[grid@node12c02 ~]$ export ORACLE_SID=-MGMTDB
[grid@node12c02 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:30:56 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area     521936896 bytes

Fixed Size                     2290264 bytes
Variable Size                289410472 bytes
Database Buffers             222298112 bytes
Redo Buffers                   7938048 bytes


RMAN>  backup database format '/u01/app/grid/backup_mgmtdb/rman_mgmtdb_%U' tag='bk_db_move_dg';

Starting backup at 24-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=6 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_03okm63j_1_1 tag=BK_DB_MOVE_DG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_04okm650_1_1 tag=BK_DB_MOVE_DG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-SEP-13

Migrating the SPFILE to diskgroup +MGMTDB.
RMAN will restore the SPFILE and update the OCR with the new SPFILE location.
Database _MGMTDB MUST be mounted or open.

RMAN> restore spfile to "+MGMTDB";

Starting restore at 24-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=355 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring SPFILE
output file name=+MGMTDB
channel ORA_DISK_1: reading from backup piece /u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1
channel ORA_DISK_1: piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1 tag=BK_DB_MOVE_DG
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 24-SEP-13

[grid@node12c01 ~]$ srvctl config mgmtdb |grep Spfile
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969

RMAN> shutdown immediate;
database dismounted
Oracle instance shut down

Migrating the controlfile to diskgroup MGMTDB
P.S.: Before making this change, check which SPFILE you are updating.

[grid@node12c02 ~]$ sqlplus / as sysdba
SQL> startup nomount;
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size                  2290264 bytes
Variable Size             285216168 bytes
Database Buffers          226492416 bytes
Redo Buffers                7938048 bytes

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969


SQL> show parameter control_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953

SQL> alter system set control_files='+MGMTDB' scope=spfile ;

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> exit
[grid@node12c02 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:45:37 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area     521936896 bytes

Fixed Size                     2290264 bytes
Variable Size                289410472 bytes
Database Buffers             222298112 bytes
Redo Buffers                   7938048 bytes

RMAN> RESTORE CONTROLFILE FROM '+CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953';

Starting restore at 24-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+MGMTDB/_MGMTDB/CONTROLFILE/current.263.827005571
Finished restore at 24-SEP-13

RMAN> startup mount

database is already started
database mounted
released channel: ORA_DISK_1

The controlfile was moved successfully.

Migrating the database files to diskgroup MGMTDB

RMAN>  BACKUP AS COPY DEVICE TYPE DISK DATABASE FORMAT '+MGMTDB';

Starting backup at 24-SEP-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237 tag=TAG20130924T194034 RECID=1 STAMP=827005278
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
output file name=+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281 tag=TAG20130924T194034 RECID=2 STAMP=827005291
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295 tag=TAG20130924T194034 RECID=3 STAMP=827005303
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311 tag=TAG20130924T194034 RECID=4 STAMP=827005312
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
output file name=+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313 tag=TAG20130924T194034 RECID=5 STAMP=827005316
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 tag=TAG20130924T194034 RECID=6 STAMP=827005317
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=+MGMTDB/_MGMTDB/BACKUPSET/2013_09_24/nnsnf0_tag20130924t194034_0.262.827005319 tag=TAG20130924T194034 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-SEP-13

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281"
datafile 2 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295"
datafile 3 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313"
datafile 4 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311"
datafile 5 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237"

The datafiles were migrated successfully.

Checking Datafiles


RMAN> report schema;

Report of database schema for database with db_unique_name _MGMTDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +CRS_TMP/_MGMTDB/TEMPFILE/temp.266.826899969

Move the tempfile to diskgroup MGMTDB.

RMAN> run {
SET NEWNAME FOR TEMPFILE 1 TO '+MGMTDB';
SWITCH TEMPFILE ALL;
 }

executing command: SET NEWNAME

renamed tempfile 1 to +MGMTDB in control file

At database startup, the TEMP file will be recreated on the new diskgroup.

Starting the database and checking.

RMAN> startup

database is already started
database opened

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name _MGMTDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +MGMTDB/_MGMTDB/TEMPFILE/temp.261.827005913

RMAN> exit
Recovery Manager complete.

Delete the old datafile copies on the old diskgroup CRS_TMP.

RMAN> delete copy ;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK
specification does not match any archived log in the repository
List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
7       1    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845

8       2    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821

9       3    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817

10      4    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917

11      5    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881

List of Control File Copies
===========================

Key     S Completion Time Ckp SCN    Ckp Time
------- - --------------- ---------- ---------------
6       A 24-SEP-13       798766     24-SEP-13
        Name: +MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317
        Tag: TAG20130924T194034


Do you really want to delete the above objects (enter YES or NO)? yes
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845 RECID=7 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821 RECID=8 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817 RECID=9 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917 RECID=10 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881 RECID=11 STAMP=827005353
deleted control file copy
control file copy file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 RECID=6 STAMP=827005317
Deleted 6 objects

Migrating the redo logs to diskgroup MGMTDB
P.S.: The database must be OPEN.

[grid@node12c02 ~]$ sqlplus / as sysdba
SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   INACTIVE
         2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   CURRENT
         3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   INACTIVE

Adding New Log Members

SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 1;
Database altered.

SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 2;
Database altered.
 
SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 3;
Database altered.

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> alter system checkpoint;

Removing the log members from CRS_TMP

SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   ACTIVE
         1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    ACTIVE
         2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    ACTIVE
         2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   ACTIVE
         3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    CURRENT
         3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   CURRENT

6 rows selected.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955';
Database altered.
 
SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955';
Database altered.
 
Attempting to drop the member of the current log group fails:

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
*
ERROR at line 1:
ORA-01609: log 3 is the current log for thread 1 - cannot drop members
ORA-00312: online log 3 thread 1:
'+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
ORA-00312: online log 3 thread 1:
'+MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479'

SQL> alter system switch logfile;

System altered.

SQL> alter system checkpoint;

System altered.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';

Database altered.

SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    CURRENT
         2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    INACTIVE
         3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    INACTIVE

The redo logs were moved successfully.

Check the configuration of resource ora.mgmtdb.
You may need to modify this resource to remove the old diskgroup.

As grid user:

[grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
START_DEPENDENCIES=hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)
STOP_DEPENDENCIES=hard(intermediate:ora.MGMTLSNR,shutdown:ora.CRS_TMP.dg,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)

If the command above returns output, you need to update the resource profile in the OCR by removing the old diskgroup dependency.
If you removed the diskgroup without updating resource ora.mgmtdb, srvctl will not be able to start resource ora.mgmtdb and will raise an error:

[grid@node12c02 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
PRCD-1012 : Failed to retrieve disk group list for database _mgmtdb.
PRCR-1035 : Failed to look up CRS resource ora.CRS_TMP.dg for _mgmtdb
PRCA-1000 : ASM Disk Group CRS_TMP does not exist
PRCR-1001 : Resource ora.CRS_TMP.dg does not exist

To fix it, you need to remove the old diskgroup from the resource dependencies.
Since resource ora.CRS_TMP.dg no longer exists, you must use crsctl with the force option (the "-f" flag) to modify the ora.mgmtdb resource.

[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'"
CRS-2510: Resource 'ora.CRS_TMP.dg' used in dependency 'hard' does not exist or is not registered
CRS-2514: Dependency attribute specification 'hard' is invalid in resource 'ora.mgmtdb'
CRS-4000: Command Modify failed, or completed with errors.

Updating resource ora.mgmtdb by removing the ora.CRS_TMP.dg dependency:


[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'" -f

[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "STOP_DEPENDENCIES='hard(intermediate:ora.MGMTLSNR,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)'" -f

[grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP

[grid@node12c02 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management

[grid@node12c02 ~]$ srvctl start mgmtdb

[grid@node12c02 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node12c02 

Adding OCR Mirror on +MGMTDB.

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +MGMTDB
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
         Device/File Name         :    +MGMTDB
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

I recommend stopping and starting your cluster by issuing (as root) "crsctl stop cluster -all" and "crsctl start cluster -all", as shown below.
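
That is, as root on one node:

[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all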

If all resources restart successfully, we can stop diskgroup CRS_TMP.

[grid@node12c02 ~]$ srvctl stop diskgroup -g CRS_TMP

[grid@node12c02 ~]$ srvctl status diskgroup -g CRS_TMP
Disk Group CRS_TMP is not running

Check that all resources are online. If yes, you can remove the diskgroup. Note that DROP DISKGROUP requires the diskgroup to be mounted on the instance where you issue the command, which is why it is started again on node12c02 below.

[grid@node12c02 ~]$ srvctl start diskgroup -g CRS_TMP -n node12c02 

[grid@node12c02 ~]$ export ORACLE_SID=+ASM1
[grid@node12c02 ~]$ sqlplus / as sysasm

SQL> DROP DISKGROUP CRS_TMP INCLUDING CONTENTS;

Diskgroup dropped.

Then restart the whole clusterware stack again and check that all resources start without issues.


What is slow RMAN or Media Management Library?

Backup execution time is slow... where is the problem?

When we perform a backup using third-party software and the backup of the database is slow, there is always some uncertainty about what is causing the slowness.

The database administrator (DBA) says it is the Media Management Library (aka MML), and the backup operator says it is RMAN.

To end this conflict, I will show how to identify where the problem is.

I'll use the term RMAN (as in "time spent by RMAN"), but it actually means time spent by the database, because RMAN is only a client. So typically it is the database that is slow, not the RMAN client.

Note: I will not diagnose what is causing the slowness; I will only help you identify whether the problem is in the MML or in RMAN.

Media Management

The Oracle Media Management Layer (MML) API lets third-party vendors build a media manager, software that works with RMAN and the vendor's hardware to allow backups to sequential media devices such as tape drives. A media manager handles loading, unloading, and labeling of sequential media such as tapes.

RMAN Interaction with a Media Manager

RMAN does not issue specific commands to load, label, or unload tapes. When backing up, RMAN gives the media manager a stream of bytes and associates a unique name with this stream. When RMAN must restore the backup, it asks the media manager to retrieve the byte stream. All details of how and where that stream is stored are handled entirely by the media manager. For example, the media manager labels and keeps track of the tape and names of files on each tape, and automatically loads and unloads tapes, or signals an operator to do so.

RMAN provides a list of files requiring backup or restore to the media manager, which in turn makes all decisions regarding how and when to move the data.

Before RMAN makes a call to any of the functions in the media management API, the server creates a wait event. With these wait events it is possible to get the amount of time (and the number of waits) that the server has spent waiting for each call to return.

So, we can calculate it and find out how much time RMAN spent waiting for the MML (e.g., writing or querying a backup piece) to process a request and return control to RMAN.

Complete list of MML wait events.

Oracle 11.2 or above:

SELECT NAME
FROM   V$EVENT_NAME
WHERE  NAME LIKE '%MML%';
NAME
----------------------------------------
Backup: MML initialization
Backup: MML v1 open backup piece
Backup: MML v1 read backup piece
Backup: MML v1 write backup piece
Backup: MML v1 close backup piece
Backup: MML v1 query backup piece
Backup: MML v1 delete backup piece
Backup: MML create a backup piece
Backup: MML commit backup piece
Backup: MML command to channel
Backup: MML shutdown
Backup: MML obtain textual error
Backup: MML query backup piece
Backup: MML extended initialization
Backup: MML read backup piece
Backup: MML delete backup piece
Backup: MML restore backup piece
Backup: MML write backup piece
Backup: MML proxy initialize backup
Backup: MML proxy cancel
Backup: MML proxy commit backup piece
Backup: MML proxy session end
Backup: MML datafile proxy backup?
Backup: MML datafile proxy restore?
Backup: MML proxy initialize restore
Backup: MML proxy start data movement
Backup: MML data movement done?
Backup: MML proxy prepare to start
Backup: MML obtain a direct buffer
Backup: MML release a direct buffer
Backup: MML get base address
Backup: MML query for direct buffers

In versions prior to Oracle Database 11.2 the MML event names do not exist, because in 11.2 they were renamed from 'sbt'-style names to 'MML' names.

So, if you are using Oracle 11.1 or earlier, query V$EVENT_NAME where NAME like '%sbt%'.

SELECT NAME
FROM   V$EVENT_NAME
WHERE  NAME LIKE '%sbt%';

Backup: sbtinit
Backup: ssbtopen
Backup: ssbtread
Backup: ssbtwrite
Backup: ssbtbackup
.
.
.

So, let's start...
Oracle stores statistics (cumulative since the instance was started) for these waits in V$SYSTEM_EVENT. I always use GV$ because administering RAC environments is very common these days.

Before starting the backup, I'll take an initial snapshot of gv$system_event by creating a table RMAN_MML_EVENT_T1.


CREATE TABLE  RMAN_MML_EVENT_T1 AS
SELECT inst_id,
  event,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT,
  TIME_WAITED_MICRO,
  sysdate as SNAPSHOT_TIME
FROM gv$system_event
WHERE event LIKE 'Backup%';

SQL> select * from RMAN_MML_EVENT_T1;

   INST_ID EVENT                                   TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO SNAPSHOT_TIME
---------- --------------------------------------- ----------- -------------- ----------- ------------ ----------------- -----------------
         1 Backup: MML initialization                      371              0       54365       146.54         543651136 08-08-12 17:11:05
         1 Backup: MML create a backup piece               450              0        4827        10.73          48270960 08-08-12 17:11:05
         1 Backup: MML commit backup piece                 450              0        7417        16.48          74172281 08-08-12 17:11:05
         1 Backup: MML shutdown                            371              0          47          .13            469267 08-08-12 17:11:05
         1 Backup: MML query backup piece                  894              0       11222        12.55         112222166 08-08-12 17:11:05
         1 Backup: MML extended initialization             371              0           0            0              3655 08-08-12 17:11:05
         1 Backup: MML delete backup piece                 444              0        5348        12.05          53480530 08-08-12 17:11:05
         1 Backup: MML write backup piece              1378078              0     3053683         2.22        3.0537E+10 08-08-12 17:11:05

8 rows selected.

I started the backup using RMAN and the MML (Tivoli Storage Manager). When the backup finishes, you can query V$RMAN_BACKUP_JOB_DETAILS to get the accurate backup time:

SELECT START_TIME,
  END_TIME,
  ROUND(INPUT_BYTES  /1024/1024/1024,2) INPUT_GBYTES ,
  ROUND(OUTPUT_BYTES /1024/1024/1024,2) OUTPUT_GBYTES,
  INPUT_TYPE,
  ELAPSED_SECONDS
FROM V$RMAN_BACKUP_JOB_DETAILS
WHERE TRUNC(START_TIME) = TRUNC(SYSDATE)
AND INPUT_TYPE LIKE 'DB%';

START_TIME        END_TIME          INPUT_GBYTES OUTPUT_GBYTES INPUT_TYPE    ELAPSED_SECONDS
----------------- ----------------- ------------ ------------- ------------- ---------------
08-08-12 17:23:44 08-08-12 17:26:38        12.85         10.06 DB FULL                   174

In my case the full backup took 174 seconds, reading 12.85 GB of input and writing 10.06 GB to the MML.
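
If you only want the end-to-end throughput, a quick sketch (using the same V$RMAN_BACKUP_JOB_DETAILS row as above) is to divide OUTPUT_BYTES by ELAPSED_SECONDS:

SELECT ROUND(OUTPUT_BYTES/ELAPSED_SECONDS/1024/1024,2) AS OUTPUT_MBYTES_PER_SEC
FROM   V$RMAN_BACKUP_JOB_DETAILS
WHERE  TRUNC(START_TIME) = TRUNC(SYSDATE)
AND    INPUT_TYPE LIKE 'DB%';

Note that this is the average over the whole backup window; the MML-only rate is derived later from the wait-event snapshots.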

So, after the backup finished I took the 2nd snapshot by creating the table RMAN_MML_EVENT_T2.


CREATE TABLE  RMAN_MML_EVENT_T2 AS
SELECT inst_id,
  event,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT,
  TIME_WAITED_MICRO,
  sysdate as SNAPSHOT_TIME
FROM gv$system_event
WHERE event LIKE 'Backup%';

SQL> select * from RMAN_MML_EVENT_T2;

   INST_ID EVENT                                   TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT TIME_WAITED_MICRO SNAPSHOT_TIME
---------- --------------------------------------- ----------- -------------- ----------- ------------ ----------------- -----------------
         1 Backup: MML initialization                      373              0       54665       146.56         546652333 08-08-12 17:27:45
         1 Backup: MML create a backup piece               454              0        4860        10.71          48604759 08-08-12 17:27:45
         1 Backup: MML commit backup piece                 454              0        7482        16.48          74820999 08-08-12 17:27:45
         1 Backup: MML shutdown                            373              0          47          .13            471590 08-08-12 17:27:45
         1 Backup: MML query backup piece                  900              0       11281        12.53         112808077 08-08-12 17:27:45
         1 Backup: MML extended initialization             373              0           0            0              3665 08-08-12 17:27:45
         1 Backup: MML delete backup piece                 446              0        5373        12.05          53727006 08-08-12 17:27:45
         1 Backup: MML write backup piece              1419274              0     3067298         2.16        3.0673E+10 08-08-12 17:27:45

8 rows selected.

Now I can subtract the values in RMAN_MML_EVENT_T1 from those in RMAN_MML_EVENT_T2 to get the real time spent on MML.
Note:
EVENT: Name of the wait event
TOTAL_WAITS: Total number of waits for the event
TOTAL_TIMEOUTS: Total number of timeouts for the event
TIME_WAITED: Total amount of time waited for the event (in hundredths of a second)
AVERAGE_WAIT: Average amount of time waited for the event (in hundredths of a second)
TIME_WAITED_MICRO: Total amount of time waited for the event (in microseconds)

SELECT t1.inst_id,
  t1.event,
  t2.total_waits       - t1.total_waits total_waits,
  t2.total_timeouts    - t1.total_timeouts total_timeouts,
  t2.time_waited       - t1.time_waited time_waited,
  t2.time_waited_micro - t1.time_waited_micro time_waited_micro
FROM RMAN_MML_EVENT_T1 T1,
  RMAN_MML_EVENT_T2 T2
WHERE t1.inst_id = t2.inst_id
AND t1.event     = t2.event;

   INST_ID EVENT                                   TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED TIME_WAITED_MICRO
---------- --------------------------------------- ----------- -------------- ----------- -----------------
         1 Backup: MML initialization                        2              0         300           3001197
         1 Backup: MML create a backup piece                 4              0          33            333799
         1 Backup: MML commit backup piece                   4              0          65            648718
         1 Backup: MML shutdown                              2              0           0              2323
         1 Backup: MML query backup piece                    6              0          59            585911
         1 Backup: MML extended initialization               2              0           0                10
         1 Backup: MML delete backup piece                   2              0          25            246476
         1 Backup: MML write backup piece                41196              0       13615         136141912

8 rows selected.

As shown above, the MML spent most of its time writing backup pieces.

So, I'll sum the times to get the total time spent on MML.

SELECT SUM (total_waits) total_waits,
  SUM(total_timeouts) total_timeouts ,
  SUM (time_waited)/100 time_waited_in_second,
  SUM (time_waited_micro) time_waited_micro
FROM
  (SELECT t1.inst_id,
    t1.event,
    t2.total_waits       - t1.total_waits total_waits,
    t2.total_timeouts    - t1.total_timeouts total_timeouts,
    t2.time_waited       - t1.time_waited time_waited,
    t2.time_waited_micro - t1.time_waited_micro time_waited_micro
  FROM RMAN_MML_EVENT_T1 T1,
    RMAN_MML_EVENT_T2 T2
  WHERE t1.inst_id = t2.inst_id
  AND t1.event     = t2.event
  );

TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED_IN_SECOND TIME_WAITED_MICRO
----------- -------------- --------------------- -----------------
      41218              0                140.97         140960346

Calculating the total backup window, the time spent on MML, and the time spent by RMAN.

Note: TIME_SPEND_BY_RMAN_SECOND = (ELAPSED_SECONDS_BACKUP - TIME_SPEND_BY_MML_SECOND)

ELAPSED_SECONDS_BACKUP TIME_SPEND_BY_MML_SECOND TIME_SPEND_BY_RMAN_SECOND
---------------------- ------------------------ -------------------------
                   174                   140.97                     33.03

Summarizing:
Total backup time: 174
Time spent by MML: 141
Time spent by RMAN: 33

If this backup is considered slow, it is because the MML accounts for (141*100/174) = 81% of the backup window.
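
A minimal sketch that computes these numbers in one query, assuming the two snapshot tables above and today's full backup row in V$RMAN_BACKUP_JOB_DETAILS:

SELECT b.elapsed_seconds AS elapsed_seconds_backup,
  ROUND(m.mml_seconds,2) AS time_spend_by_mml_second,
  ROUND(b.elapsed_seconds - m.mml_seconds,2) AS time_spend_by_rman_second,
  ROUND(m.mml_seconds*100/b.elapsed_seconds,2) AS mml_pct_of_window
FROM
  (SELECT SUM(t2.time_waited - t1.time_waited)/100 mml_seconds
   FROM RMAN_MML_EVENT_T1 t1,
     RMAN_MML_EVENT_T2 t2
   WHERE t1.inst_id = t2.inst_id
   AND t1.event     = t2.event
  ) m,
  (SELECT elapsed_seconds
   FROM V$RMAN_BACKUP_JOB_DETAILS
   WHERE TRUNC(start_time) = TRUNC(SYSDATE)
   AND input_type LIKE 'DB%'
  ) b;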

Additional info:
As my backup was done over the LAN:
(10.06 GB * 1024 = 10301 MB)
10301 MB / 141 s ≈ 73 MBytes/second

As I'm using a 1 Gbit network interface, I can consider this a normal throughput.
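
The same arithmetic as a quick one-liner (numbers taken from the run above):

SELECT ROUND(10.06 * 1024 / 141) AS mbytes_per_second FROM dual;

MBYTES_PER_SECOND
-----------------
               73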

You can also monitor in real time where the waits are.

Just execute the script below:

Note: if you are using a version prior to 11.2, change %MML% to %sbt%.

vi monitoring_mml.sh
sqlplus -s sys/<password>@<db_name> as sysdba<<EOF
set echo off
COLUMN EVENT FORMAT a17
COLUMN SECONDS_IN_WAIT FORMAT 999
COLUMN STATE FORMAT a15
COLUMN CLIENT_INFO FORMAT a30
set linesize 200

select to_char(sysdate,'dd-mm-yyyy hh24:mi:ss') actual_date from dual;

SELECT p.SPID, sw.EVENT, sw.SECONDS_IN_WAIT AS SEC_WAIT, 
       sw.STATE, CLIENT_INFO
FROM   gV\$SESSION_WAIT sw, gv\$SESSION s, gV\$PROCESS p
WHERE  sw.EVENT LIKE '%MML%'
AND    s.SID=sw.SID
AND    s.PADDR=p.ADDR;
EOF
exit

From the shell, execute the loop below and you will see the MML waits in real time.

while true
do
sh monitoring_mml.sh
sleep 1
done
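
Alternatively, if your system has the watch utility (an assumption; it ships with most Linux distributions), the same polling can be done with:

watch -n 1 sh monitoring_mml.sh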



Oracle Real Application Clusters on IBM AIX – Best practices in memory tuning and configuring for system stability

Customers who experience Oracle Real Application Clusters (RAC) node evictions due to excessive AIX kernel paging should carefully review and implement these recommended best practices. Testing and experience have shown that memory overcommitment may cause scheduling delays for Oracle's 'oprocd' process in Oracle RAC versions prior to 11.2, which may result in node evictions. Implementing all of these recommendations will reduce scheduling delays and the corresponding oprocd-initiated evictions for Oracle RAC versions prior to 11.2. For Oracle RAC versions 11.2 and later, implementing all of these recommendations will ensure optimal performance and scalability.
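
As a flavor of what the paper covers, the AIX VMM tunables usually discussed for Oracle workloads look like the sketch below. The exact values are assumptions on my part; verify them against the PDF and your AIX level before applying anything:

# Favor computational (process) pages over file cache
# (commonly cited values; confirm against the paper first)
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
# lru_file_repage=0 applies to older AIX levels; on AIX 6.1 and later
# it is restricted/defaulted, so check your OS level before touching it
vmo -p -o lru_file_repage=0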

 

http://www.oracle.com/technetwork/products/clusterware/overview/rac-aix-system-stability-131022.pdf

If the link above is broken, click below:

rac-aix-system-stability-131022

Enjoy…

 


Oracle RDBMS Server 11gR2 Pre-Install RPM

Oracle RDBMS Server 11gR2 Pre-Install RPM for Oracle Linux 6 has been released

By Lenz Grimmer

Now that the certification of the Oracle Database 11g R2 with Oracle Linux 6 and the Unbreakable Enterprise Kernel has been announced, we are glad to announce the availability of oracle-rdbms-server-11gR2-preinstall, the Oracle RDBMS Server 11gR2 Pre-install RPM package (formerly known as oracle-validated). Designed specifically for Oracle Linux 6, this RPM aids in the installation of the Oracle Database.

In order to install Oracle Database 11g R2 on Oracle Linux 6, your system needs to meet a few prerequisites, as outlined in the Linux Installation Guides. Using the Oracle RDBMS Server 11gR2 Pre-install RPM, you can complete most of the pre-installation configuration tasks. The RPM is now available from the Unbreakable Linux Network, or via the Oracle public yum repository.
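
For example, on an Oracle Linux 6 machine registered with ULN or pointed at the public yum repository, the whole pre-installation setup is one command:

# Installs required packages, sets kernel parameters, and creates the oracle user
yum install oracle-rdbms-server-11gR2-preinstall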

https://blogs.oracle.com/linux/entry/oracle_rdbms_server_11gr2_pre

Enjoy


Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6

Oracle Announces the Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6
Redwood Shores, Calif. – March 22, 2012
News Facts
Oracle Database 11g Release 2 (R2) and Oracle Fusion Middleware 11g Release 1 (R1) are immediately available on Oracle Linux 6 with the Unbreakable Enterprise Kernel.
Oracle Database 11g R2 and Oracle Fusion Middleware 11g R1 will be available on Red Hat Enterprise Linux 6 (RHEL6) and Oracle Linux 6 with the Red Hat Compatible Kernel in 90 days.
Oracle offers direct Linux support to customers running RHEL6, Oracle Linux 6, or a combination of both.
Oracle Linux will continue to maintain compatibility with Red Hat Linux.
Effective immediately, Oracle will provide its Red Hat compatible Linux binaries, updates and errata for free at http://public-yum.oracle.com. Terms, conditions and restrictions apply.
The Unbreakable Enterprise Kernel
The Unbreakable Enterprise Kernel is Oracle’s fast, modern, reliable Linux kernel that is optimized for Oracle software and hardware.
Oracle Linux 6 with the Unbreakable Enterprise Kernel is optimized for running Oracle products.
Supporting Quote
“The Oracle Database and Oracle Fusion Middleware product availability we are announcing today is the result of stringent certification testing on Oracle Linux 6 with the Unbreakable Enterprise Kernel,” said Wim Coekaerts, senior vice president of Linux and Virtualization Engineering, Oracle. “We look forward to completing certification testing with Oracle Linux 6 with the Red Hat Compatible Kernel.”
http://www.oracle.com/us/corporate/press/1563775
Use MOS Note:
Database Client or Database Server Install on Red Hat Enterprise Linux 6 (RHEL6) or Oracle Linux 6 [ID 1350000.1]
You are attempting to install Database Server or Database Client on a Red Hat Enterprise Linux 6 (RHEL6) or Oracle Linux 6 (OL6) system, but the installation fails in the Prerequisite Checks section. The failure may indicate problems with the OS version, OS packages, OS kernel parameters, or a variety of other similar issues.
At the time this article was last updated (10-Apr-2012), Certify indicates that the 64-bit 11gR2 Database for Linux x86-64 is certified with OL6 and RHEL6. There has been no announcement for the 32-bit 11gR2 Database on Linux x86.
Enjoy …