How to move/migrate OCR, Voting Disk, ASM (SPFILE and PWDFILE) and MGMTDB to a different Diskgroup on ASM 12.1.0.1
Posted: 10/10/2013 | Author: Levi Pereira | Filed under: 12c R1 (12.1.0.1), ASM - Automatic Storage Management, Clusterware / Grid Infrastructure, Linux, New Features, Operating System, Oracle, RMAN
_______________________________________
Last Update (29/09/2014):
Oracle released the note: How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
The procedure in the note (Doc ID 1589394.1) does not "MOVE" but RECREATES the Management Repository.
The procedure below in this post MOVES it (migrating all data) from one Diskgroup to another.
_______________________________________
This post is only a recommendation about how to configure Clusterware Files and MGMTDB on ASM using Grid Infrastructure 12.1.0.1.
Migrating MGMTDB to a different Diskgroup was not documented as of 25/09/2013; Oracle is expected to release a note covering this procedure soon.
When we install/configure a fresh Oracle GI for a cluster, the cluster files and MGMTDB are stored in the first Diskgroup created at installation time.
To ease management and increase the availability of the cluster, it is recommended to set up a small diskgroup dedicated to the cluster files.
Related Oracle note: ASM Spfile stuck on Diskgroup: ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]
The cluster files are:
Oracle Cluster Registry (OCR)
The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. The OCR also manages information about Oracle Clusterware resource profiles for customized applications.
The OCR is stored in a Diskgroup in the same way as database files: the extents are spread across all disks of the diskgroup, and the redundancy (which is at the extent level) is based on the redundancy of the disk group. We can store only one OCR per Diskgroup.
Voting Disk
Voting Disk, aka Voting Files: a file that manages information about node membership.
The voting disks are placed directly on ASM disks (LUNs). Oracle Clusterware stores the voting files on the disks within a disk group, but it does not rely on ASM to access them; that means Oracle Clusterware does not need the Diskgroup to be mounted to read and write the voting files on the ASM disks.
You cannot find/list voting files using SQL*Plus (v$asm_diskgroup, v$asm_file, v$asm_alias), the ASMCMD utility, or the ASMCA GUI.
You can only tell whether a voting file exists on an ASM disk through V$ASM_DISK (column VOTING_FILE). The fact that voting files do not depend on the Diskgroup to be accessed does not mean the Diskgroup is unnecessary: the diskgroup and the voting files are linked by their settings and redundancy.
We cannot place voting disks in multiple Diskgroups; only one Diskgroup is supported to store the voting disks.
ASM Spfile
This file is the parameter file of the ASM instance.
The SPFILE is stored in the diskgroup in the same way as the OCR and database files.
ASM Password File
A new feature of ASM 12.1 allows storing a Database or ASM password file on an ASM Diskgroup.
An individual password file for Oracle Database or Oracle ASM can reside on a designated Oracle ASM disk group. Having the password files reside on a single location accessible across the cluster reduces maintenance costs and situations where passwords become out of sync.
The COMPATIBLE.ASM disk group attribute must be set to at least 12.1 for the disk group where the password file is to be located.
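A quick way to verify (and, if needed, raise) that attribute — a minimal sketch, assuming a diskgroup named OCR_VOTE:

```sql
-- Check the current ASM and RDBMS compatibility of each diskgroup:
SELECT name, compatibility, database_compatibility
  FROM v$asm_diskgroup;

-- Raise it if it is below 12.1 (note: this change cannot be reverted):
ALTER DISKGROUP OCR_VOTE SET ATTRIBUTE 'compatible.asm' = '12.1';
```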
Database MGMTDB
Management Repository is a new feature of GI 12.1.
Management Repository is a single-instance database managed by Oracle Clusterware in 12c. If the option is selected during GI installation, the database is configured and managed by GI. As a single-instance database, it runs on one node of the cluster; because it is managed by GI, if the hosting node goes down the database is automatically failed over to another node.
The Management database by default uses the same shared storage as the OCR/Voting File.
There is no procedure in the 12.1 documentation or in Metalink notes to change the location of MGMTDB. (25/09/2013)
My recommendations:
Create two diskgroups:
+OCR_VOTE
Even when using RAID at the storage layer, create a small diskgroup with 3 ASM disks using Normal Redundancy. This Diskgroup will store the Voting Disk, OCR, ASM SPFILE, and ASM password file.
The reason to create this Diskgroup with Normal Redundancy is to provide availability in case of the loss of one voting disk or one ASM disk, caused by human error, storage failure, or host (controller) failure.
+MGMTDB
This Diskgroup can be created using External Redundancy if you are using RAID at the storage layer.
This Diskgroup will store MGMTDB and the OCR mirror.
Important: To perform the changes below all nodes in the cluster must be active.
Let's Play:
Checking that all nodes are active.
[grid@node12c02 ~]$ olsnodes -s
node12c01 Active
node12c02 Active
Use OCRCHECK to find where OCR file is stored.
[grid@node12c02 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
  Version                  : 4
  Total space (kbytes)     : 409568
  Used space (kbytes)      : 1548
  Available space (kbytes) : 408020
  ID                       : 1037914462
  Device/File Name         : +CRS_TMP
    Device/File integrity check succeeded
    Device/File not configured
    Device/File not configured
    Device/File not configured
    Device/File not configured
  Cluster registry integrity check succeeded
  Logical corruption check bypassed due to non-privileged user
Use crsctl to find where Voting Disk is stored.
[grid@node12c02 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0d2d3b5d705a4fefbf8824c99251ac5a (/dev/oracleasm/disks/OCR_TEMP) [CRS_TMP]
Located 1 voting disk(s).
Use asmcmd to find where ASM SPFILE and ASM PWD is stored
[grid@node12c02 ~]$ asmcmd spget
+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607
[grid@node12c02 ~]$ asmcmd pwget --asm
+CRS_TMP/crs12c/orapwASM
Getting info about Voting Disk on ASM.
Connect to ASM as sysdba:
SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A20
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
FROM V$ASM_DISKGROUP
WHERE NAME='CRS_TMP');
NAME           PATH                           HEADER_STATUS   FAILGROUP      FAILGROUP_TYPE  VOTING_FILE
-------------- ------------------------------ --------------- -------------- --------------- -----------
CRS_TMP_0000   /dev/oracleasm/disks/OCR_TEMP  MEMBER          CRS_TMP_0000   REGULAR         Y
Get name of cluster.
[grid@node12c02 ~]$ olsnodes -c
crs12c
set linesize 100
col FILES_OF_CLUSTER for a60
select concat('+'||gname, sys_connect_by_path(aname, '/')) FILES_OF_CLUSTER
from ( select b.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex , a.system_created, a.alias_directory,
c.type file_type
from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
where a.group_number = b.group_number
and a.group_number = c.group_number(+)
and a.file_number = c.file_number(+)
and a.file_incarnation = c.incarnation(+)
) WHERE file_type in ( 'ASMPARAMETERFILE','PASSWORD','OCRFILE')
start with (mod(pindex, power(2, 24))) = 0
and rindex in
( select a.reference_index
from v$asm_alias a, v$asm_diskgroup b
where a.group_number = b.group_number
and (mod(a.parent_index, power(2, 24))) = 0
and a.name = LOWER('&CLUSTERNAME')
)
connect by prior rindex = pindex;
Enter value for clustername: crs12c
old 17:   and a.name = LOWER('&CLUSTERNAME')
new 17:   and a.name = LOWER('crs12c')

FILES_OF_CLUSTER
------------------------------------------------------------
+CRS_TMP/crs12c/ASMPARAMETERFILE/REGISTRY.253.826898607
+CRS_TMP/crs12c/OCRFILE/REGISTRY.255.826898619
+CRS_TMP/crs12c/orapwasm
Creating Diskgroup OCR_VOTE
When creating Diskgroup OCR_VOTE, each disk must be in a different failgroup. I do not add a QUORUM failgroup here because these LUNs are on the storage array. Use a QUORUM failgroup when you place a disk outside your environment (e.g. an NFS file-disk for quorum purposes), because such disks cannot contain user data.
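For contrast, a QUORUM failgroup declaration looks like this — a hypothetical sketch only, assuming an NFS file-disk at /nfs/votedisk1 (all disk paths here are illustrative, not the ones used later in this post):

```sql
-- A quorum failgroup can hold only a voting file, never user data:
CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/asm-ocr_vote1'
  FAILGROUP fg2 DISK '/dev/asm-ocr_vote2'
  QUORUM FAILGROUP fg_nfs DISK '/nfs/votedisk1'
  ATTRIBUTE 'compatible.asm' = '12.1';
```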
List ASM Disk
SQL>
col path for a30
col name for a20
col header_status for a20
select path,name,header_status from v$asm_disk
where path like '/dev/asm%';
PATH                           NAME                 HEADER_STATUS
------------------------------ -------------------- --------------------
/dev/asm-ocr_vote3                                  CANDIDATE
/dev/asm-ocr_vote1                                  CANDIDATE
/dev/asm-ocr_vote2                                  CANDIDATE
SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
     FAILGROUP controller01 DISK '/dev/asm-ocr_vote1'
     FAILGROUP controller02 DISK '/dev/asm-ocr_vote2'
     FAILGROUP controller03 DISK '/dev/asm-ocr_vote3'
     ATTRIBUTE 'au_size'='1M', 'compatible.asm' = '12.1';

Diskgroup created.
Mount Diskgroup OCR_VOTE on the other nodes
SQL> ! srvctl start diskgroup -g ocr_vote -n node12c01
Check the status of Diskgroup OCR_VOTE on all nodes.
The Diskgroup OCR_VOTE must be mounted on all hub nodes.
SQL> ! srvctl status diskgroup -g ocr_vote
Disk Group ocr_vote is running on node12c01,node12c02
Moving Voting Disk to Diskgroup OCR_VOTE
As the grid user, issue:
[grid@node12c01 ~]$ crsctl replace votedisk +OCR_VOTE
Successful addition of voting disk be10b93cca734f65bfa6b89cc574d485.
Successful addition of voting disk 46a9b6154c184f89bf6a503846ea3aa3.
Successful addition of voting disk 6bc41f75098d4f5dbf97783770760445.
Successful deletion of voting disk 0d2d3b5d705a4fefbf8824c99251ac5a.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4266: Voting file(s) successfully replaced
Query CSS Votedisk in all nodes
[grid@node12c01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
 2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
 3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
Located 3 voting disk(s).

[grid@node12c02 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
 2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
 3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
Located 3 voting disk(s).
Query ASMDISK on ASM
SQL >
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
FROM V$ASM_DISKGROUP
WHERE NAME='OCR_VOTE');
NAME           PATH                 HEADER_STATUS   FAILGROUP      FAILGROUP_TYPE  VOTING_FILE
-------------- -------------------- --------------- -------------- --------------- -----------
OCR_VOTE_0002  /dev/asm-ocr_vote3   MEMBER          CONTROLLER03   REGULAR         Y
OCR_VOTE_0001  /dev/asm-ocr_vote2   MEMBER          CONTROLLER02   REGULAR         Y
OCR_VOTE_0000  /dev/asm-ocr_vote1   MEMBER          CONTROLLER01   REGULAR         Y
The Voting Disk was moved successfully.
Moving OCR to Diskgroup OCR_VOTE
As the root user, issue:
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
  Version                  : 4
  Total space (kbytes)     : 409568
  Used space (kbytes)      : 1568
  Available space (kbytes) : 408000
  ID                       : 1037914462
  Device/File Name         : +CRS_TMP
    Device/File integrity check succeeded
    Device/File not configured
    Device/File not configured
    Device/File not configured
    Device/File not configured
  Cluster registry integrity check succeeded
  Logical corruption check succeeded
Add OCR on OCR_VOTE
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +OCR_VOTE
Now we should see two OCR locations, one in each Diskgroup:
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
  Version                  : 4
  Total space (kbytes)     : 409568
  Used space (kbytes)      : 1568
  Available space (kbytes) : 408000
  ID                       : 1037914462
  Device/File Name         : +CRS_TMP
    Device/File integrity check succeeded
  Device/File Name         : +OCR_VOTE
    Device/File integrity check succeeded
    Device/File not configured
    Device/File not configured
    Device/File not configured
  Cluster registry integrity check succeeded
  Logical corruption check succeeded
Delete OCR from Diskgroup CRS_TMP
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -delete +CRS_TMP
Now we should see only one OCR, on Diskgroup OCR_VOTE.
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
  Version                  : 4
  Total space (kbytes)     : 409568
  Used space (kbytes)      : 1568
  Available space (kbytes) : 408000
  ID                       : 1037914462
  Device/File Name         : +OCR_VOTE
    Device/File integrity check succeeded
    Device/File not configured
    Device/File not configured
    Device/File not configured
    Device/File not configured
  Cluster registry integrity check succeeded
  Logical corruption check succeeded
Check with OCRCHECK on the other nodes.
[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
  Version                  : 4
  Total space (kbytes)     : 409568
  Used space (kbytes)      : 1568
  Available space (kbytes) : 408000
  ID                       : 1037914462
  Device/File Name         : +OCR_VOTE
    Device/File integrity check succeeded
    Device/File not configured
    Device/File not configured
    Device/File not configured
    Device/File not configured
  Cluster registry integrity check succeeded
  Logical corruption check succeeded
The OCR was moved successfully.
Moving ASM SPFILE
You will get an error saying the file is still in use, but the file is copied to the new location and the profile is updated.
[grid@node12c01 ~]$ asmcmd spmove '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' '+OCR_VOTE/crs12c/spfileASM.ora'
ORA-15032: not all alterations performed
ORA-15028: ASM file '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
Checking that the file was copied and the profile updated:
[grid@node12c01 ~]$ asmcmd spget
+OCR_VOTE/crs12c/spfileASM.ora
Moving ASM Password File
[grid@node12c01 ~]$ asmcmd pwmove --asm +CRS_TMP/orapwASM +OCR_VOTE/crs12c/orapwASM
moving +CRS_TMP/orapwASM -> +OCR_VOTE/crs12c/orapwASM
Checking that the file was moved and the profile updated:
[grid@node12c01 ~]$ asmcmd pwget --asm
+OCR_VOTE/crs12c/orapwASM

[grid@node12c01 ~]$ srvctl config asm
ASM home: /u01/app/12.1.0/grid
Password file: +OCR_VOTE/crs12c/orapwASM
ASM listener: LISTENER
After these changes, restart the cluster to release the old ASM SPFILE.
You can restart one node at a time if you wish.
You DON'T need to stop the whole cluster to release the ASM SPFILE.
As my cluster is a test cluster, I will stop the cluster on all nodes.
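If you prefer the rolling approach, the per-node sequence can be sketched as follows (run as root on one node at a time; the Grid home path matches the one used in this post):

```
# Stop and restart the entire GI stack on this node only:
/u01/app/12.1.0/grid/bin/crsctl stop crs
/u01/app/12.1.0/grid/bin/crsctl start crs

# Confirm the stack is fully up before moving on to the next node:
/u01/app/12.1.0/grid/bin/crsctl check crs
```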
[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'node12c01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node12c01'
CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c01'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node12c01'
CRS-2677: Stop of 'ora.scan1.vip' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'node12c02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c02'
CRS-2673: Attempting to stop 'ora.cvu' on 'node12c02'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c02'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node12c02'
CRS-2673: Attempting to stop 'ora.node12c02.vip' on 'node12c02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node12c02'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'node12c02'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node12c02'
CRS-2677: Stop of 'ora.cvu' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node12c02'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.node12c02.vip' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.node12c01.vip' on 'node12c02'
CRS-2677: Stop of 'ora.scan2.vip' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.node12c01.vip' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'node12c02'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c01'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c02'
CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c02'
CRS-2673: Attempting to stop 'ora.ons' on 'node12c01'
CRS-2677: Stop of 'ora.ons' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c01'
CRS-2677: Stop of 'ora.net1.network' on 'node12c01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c01' has completed
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node12c02'
CRS-2677: Stop of 'ora.ons' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c02'
CRS-2677: Stop of 'ora.net1.network' on 'node12c02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c02' has completed
CRS-2677: Stop of 'ora.crsd' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c02'
CRS-2673: Attempting to stop 'ora.evmd' on 'node12c02'
CRS-2673: Attempting to stop 'ora.storage' on 'node12c02'
CRS-2677: Stop of 'ora.storage' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
CRS-2677: Stop of 'ora.crsd' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c01'
CRS-2673: Attempting to stop 'ora.evmd' on 'node12c01'
CRS-2673: Attempting to stop 'ora.storage' on 'node12c01'
CRS-2677: Stop of 'ora.storage' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
CRS-2677: Stop of 'ora.evmd' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node12c02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node12c01'
CRS-2677: Stop of 'ora.cssd' on 'node12c01' succeeded
CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c02'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node12c02'
CRS-2677: Stop of 'ora.cssd' on 'node12c02' succeeded
Start cluster on all nodes
[root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all
CRS-2672: Attempting to start 'ora.evmd' on 'node12c01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c01'
CRS-2672: Attempting to start 'ora.evmd' on 'node12c02'
CRS-2676: Start of 'ora.cssdmonitor' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node12c01'
CRS-2672: Attempting to start 'ora.diskmon' on 'node12c01'
CRS-2676: Start of 'ora.diskmon' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c02'
CRS-2676: Start of 'ora.evmd' on 'node12c01' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node12c02'
CRS-2672: Attempting to start 'ora.diskmon' on 'node12c02'
CRS-2676: Start of 'ora.evmd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.diskmon' on 'node12c02' succeeded
CRS-2676: Start of 'ora.cssd' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node12c02'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c02'
CRS-2676: Start of 'ora.cssd' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node12c01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c01'
CRS-2676: Start of 'ora.ctssd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.ctssd' on 'node12c01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node12c02'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node12c01'
CRS-2676: Start of 'ora.asm' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node12c01'
CRS-2676: Start of 'ora.asm' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node12c02'
CRS-2676: Start of 'ora.storage' on 'node12c02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node12c02'
CRS-2676: Start of 'ora.crsd' on 'node12c02' succeeded
CRS-2676: Start of 'ora.storage' on 'node12c01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node12c01'
CRS-2676: Start of 'ora.crsd' on 'node12c01' succeeded
Migrate MGMTDB to another Diskgroup.
MGMTDB is a "common" Oracle RDBMS (dedicated to CHM), so we can perform administrative tasks on it as on any other Oracle database.
So, to migrate MGMTDB to another Diskgroup I will use RMAN.
Creating Diskgroup MGMTDB
SQL> CREATE DISKGROUP MGMTDB EXTERNAL REDUNDANCY
     DISK '/dev/asm-mgmtdb1'
     ATTRIBUTE 'au_size'='1M',
     'compatible.asm' = '12.1',
     'compatible.rdbms' = '12.1';

Diskgroup created.
Starting the MGMTDB diskgroup on the other node and checking its status:

SQL> ! srvctl start diskgroup -g MGMTDB -n node12c02
SQL> ! srvctl status diskgroup -g MGMTDB
Disk Group MGMTDB is running on node12c01,node12c02
To manage MGMTDB you must use the srvctl utility:
srvctl start/stop/config/modify... mgmtdb
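For reference, these are the main mgmtdb-related srvctl calls (those shown in this post, plus start for completeness):

```
srvctl config mgmtdb     # show the current configuration (SPFILE location, instance name)
srvctl status mgmtdb     # find the node where the instance is running
srvctl stop mgmtdb       # stop the repository database before migrating
srvctl stop mgmtlsnr     # stop the management listener
srvctl start mgmtdb      # start the repository database again
```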
Getting current configuration of MGMTDB
[grid@node12c01 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +CRS_TMP/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management
Check where MGMTDB is running.
[grid@node12c01 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node12c02
You must migrate MGMTDB from the host (node) where MGMTDB is running.
Let's start.
Stop MGMT Listener and MGMTDB
[grid@node12c02 ~]$ srvctl stop mgmtdb
[grid@node12c02 ~]$ srvctl stop mgmtlsnr
Precaution: I will create a full backup of the MGMTDB database before starting the migration.
[grid@node12c02 ~]$ mkdir -p /u01/app/grid/backup_mgmtdb
[grid@node12c02 ~]$ export ORACLE_SID=-MGMTDB
[grid@node12c02 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:30:56 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted
Total System Global Area  521936896 bytes
Fixed Size                  2290264 bytes
Variable Size             289410472 bytes
Database Buffers          222298112 bytes
Redo Buffers                7938048 bytes

RMAN> backup database format '/u01/app/grid/backup_mgmtdb/rman_mgmtdb_%U' tag='bk_db_move_dg';

Starting backup at 24-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=6 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_03okm63j_1_1 tag=BK_DB_MOVE_DG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_04okm650_1_1 tag=BK_DB_MOVE_DG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-SEP-13
Migrating SPFILE to Diskgroup +MGMTDB.
RMAN will restore the SPFILE and update the OCR with the new SPFILE location.
The database _MGMTDB MUST be mounted or open.
RMAN> restore spfile to "+MGMTDB";

Starting restore at 24-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=355 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring SPFILE
output file name=+MGMTDB
channel ORA_DISK_1: reading from backup piece /u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1
channel ORA_DISK_1: piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1 tag=BK_DB_MOVE_DG
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 24-SEP-13

[grid@node12c01 ~]$ srvctl config mgmtdb | grep Spfile
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969

RMAN> shutdown immediate;

database dismounted
Oracle instance shut down
Migrating Controlfile to Diskgroup MGMTDB
P.S.: Before making this change, check which SPFILE you are updating.
[grid@node12c02 ~]$ sqlplus / as sysdba

SQL> startup nomount;
ORACLE instance started.
Total System Global Area  521936896 bytes
Fixed Size                  2290264 bytes
Variable Size             285216168 bytes
Database Buffers          226492416 bytes
Redo Buffers                7938048 bytes

SQL> show parameter spfile

NAME       TYPE        VALUE
---------- ----------- --------------------------------------------------
spfile     string      +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969

SQL> show parameter control_file

NAME            TYPE        VALUE
--------------- ----------- --------------------------------------------------
control_files   string      +CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953

SQL> alter system set control_files='+MGMTDB' scope=spfile;
System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> exit

[grid@node12c02 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:45:37 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)

RMAN> startup nomount

Oracle instance started
Total System Global Area  521936896 bytes
Fixed Size                  2290264 bytes
Variable Size             289410472 bytes
Database Buffers          222298112 bytes
Redo Buffers                7938048 bytes

RMAN> RESTORE CONTROLFILE FROM '+CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953';

Starting restore at 24-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK
channel ORA_DISK_1: copied control file copy
output file name=+MGMTDB/_MGMTDB/CONTROLFILE/current.263.827005571
Finished restore at 24-SEP-13

RMAN> startup mount

database is already started
database mounted
released channel: ORA_DISK_1
The controlfile was moved successfully.
Migrating Database Files to Diskgroup MGMTDB
RMAN> BACKUP AS COPY DEVICE TYPE DISK DATABASE FORMAT '+MGMTDB';

Starting backup at 24-SEP-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237 tag=TAG20130924T194034 RECID=1 STAMP=827005278
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
output file name=+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281 tag=TAG20130924T194034 RECID=2 STAMP=827005291
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295 tag=TAG20130924T194034 RECID=3 STAMP=827005303
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
output file name=+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311 tag=TAG20130924T194034 RECID=4 STAMP=827005312
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
output file name=+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313 tag=TAG20130924T194034 RECID=5 STAMP=827005316
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 tag=TAG20130924T194034 RECID=6 STAMP=827005317
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-SEP-13
channel ORA_DISK_1: finished piece 1 at 24-SEP-13
piece handle=+MGMTDB/_MGMTDB/BACKUPSET/2013_09_24/nnsnf0_tag20130924t194034_0.262.827005319 tag=TAG20130924T194034 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-SEP-13

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281"
datafile 2 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295"
datafile 3 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313"
datafile 4 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311"
datafile 5 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237"
The datafiles were migrated successfully.
Checking Datafiles
RMAN> report schema;

Report of database schema for database with db_unique_name _MGMTDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +CRS_TMP/_MGMTDB/TEMPFILE/temp.266.826899969
Move Tempfile to Diskgroup MGMTDB.
RMAN> run {
SET NEWNAME FOR TEMPFILE 1 TO '+MGMTDB';
SWITCH TEMPFILE ALL;
}

executing command: SET NEWNAME

renamed tempfile 1 to +MGMTDB in control file
At database startup, the TEMP file will be recreated on the new Diskgroup.
Starting Database and checking.
RMAN> startup

database is already started
database opened

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name _MGMTDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +MGMTDB/_MGMTDB/TEMPFILE/temp.261.827005913

RMAN> exit

Recovery Manager complete.
Delete the old datafile copies on the old Diskgroup CRS_TMP.
RMAN> delete copy;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK
specification does not match any archived log in the repository

List of Datafile Copies
=======================
Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
7       1    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
8       2    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
9       3    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
10      4    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
11      5    A 24-SEP-13       798766     24-SEP-13
        Name: +CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881

List of Control File Copies
===========================
Key     S Completion Time Ckp SCN    Ckp Time
------- - --------------- ---------- ---------------
6       A 24-SEP-13       798766     24-SEP-13
        Name: +MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317
        Tag: TAG20130924T194034

Do you really want to delete the above objects (enter YES or NO)? yes
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845 RECID=7 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821 RECID=8 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817 RECID=9 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917 RECID=10 STAMP=827005353
deleted datafile copy
datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881 RECID=11 STAMP=827005353
deleted control file copy
control file copy file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 RECID=6 STAMP=827005317
Deleted 6 objects
Migrating Redo Logs to Diskgroup MGMTDB
Note: the database must be OPEN.
[grid@node12c02 ~]$ sqlplus / as sysdba

SQL> select lf.group#, lf.member, lg.status
     from v$logfile lf, v$log lg
     where lf.group# = lg.group#
     order by 1;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   INACTIVE
         2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   CURRENT
         3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   INACTIVE
Adding New Log Members
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 1;
Database altered.

SQL> ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 2;
Database altered.

SQL> ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 3;
Database altered.

SQL> alter system switch logfile;
System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> /
System altered.

SQL> alter system checkpoint;
Removing Log Members from CRS_TMP
SQL> select lf.group#, lf.member, lg.status
     from v$logfile lf, v$log lg
     where lf.group# = lg.group#
     order by 1;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   ACTIVE
         1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    ACTIVE
         2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    ACTIVE
         2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   ACTIVE
         3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    CURRENT
         3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   CURRENT

6 rows selected.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955';
Database altered.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
*
ERROR at line 1:
ORA-01609: log 3 is the current log for thread 1 - cannot drop members
ORA-00312: online log 3 thread 1: '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
ORA-00312: online log 3 thread 1: '+MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479'

SQL> alter system switch logfile;
System altered.

SQL> alter system checkpoint;
System altered.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
Database altered.

SQL> select lf.group#, lf.member, lg.status
     from v$logfile lf, v$log lg
     where lf.group# = lg.group#
     order by 1;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------------
         1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    CURRENT
         2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    INACTIVE
         3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    INACTIVE
Redo Logs were moved successfully.
Check Configuration of resource ora.mgmtdb.
You may need to modify this resource by removing the old Diskgroup.
As grid user:
[grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
START_DEPENDENCIES=hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)
STOP_DEPENDENCIES=hard(intermediate:ora.MGMTLSNR,shutdown:ora.CRS_TMP.dg,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)
If the command above returns output, you need to update the resource configuration in the OCR by removing the old Diskgroup dependency.
If you removed the Diskgroup without updating the resource ora.mgmtdb, srvctl will not be able to manage the resource ora.mgmtdb and will raise an error:
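The check above can be scripted. This is a minimal sketch; the sample profile line below stands in for live `crsctl status res ora.mgmtdb -p` output, so it runs anywhere:

```shell
# Sample line standing in for `crsctl status res ora.mgmtdb -p` output
profile='START_DEPENDENCIES=hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)'

# Flag the resource if its dependencies still reference the old diskgroup
if printf '%s\n' "$profile" | grep -q 'CRS_TMP'; then
  echo "ora.mgmtdb still references CRS_TMP; update the resource"
fi
```

On a live cluster you would pipe the real crsctl output into the same grep instead of using the sample variable.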
[grid@node12c02 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
PRCD-1012 : Failed to retrieve disk group list for database _mgmtdb.
PRCR-1035 : Failed to look up CRS resource ora.CRS_TMP.dg for _mgmtdb
PRCA-1000 : ASM Disk Group CRS_TMP does not exist
PRCR-1001 : Resource ora.CRS_TMP.dg does not exist
To fix it, you need to remove the reference to this resource.
If the resource ora.CRS_TMP.dg no longer exists, you must use crsctl with the force option ("-f" flag) to modify the ora.mgmtdb resource; without it, the modify fails:
[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'"
CRS-2510: Resource 'ora.CRS_TMP.dg' used in dependency 'hard' does not exist or is not registered
CRS-2514: Dependency attribute specification 'hard' is invalid in resource 'ora.mgmtdb'
CRS-4000: Command Modify failed, or completed with errors.
Update the resource ora.mgmtdb by removing the ora.CRS_TMP.dg dependency.
[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'" -f
[grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "STOP_DEPENDENCIES='hard(intermediate:ora.MGMTLSNR,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)'" -f
[grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
[grid@node12c02 ~]$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /u01/app/12.1.0/grid
Oracle user: grid
Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management
[grid@node12c02 ~]$ srvctl start mgmtdb
[grid@node12c02 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node node12c02
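Rather than typing the new attribute value by hand, it can be derived from the current one by stripping the old diskgroup resource. This is a minimal sketch; the sample value below stands in for the live START_DEPENDENCIES line from `crsctl status res ora.mgmtdb -p`:

```shell
# Sample value standing in for the current START_DEPENDENCIES attribute
old="hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)"

# Remove every reference to the old diskgroup resource ora.CRS_TMP.dg
new=$(printf '%s' "$old" | sed 's/ora\.CRS_TMP\.dg,//g')
echo "$new"
# → hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)
```

The resulting string is what you would pass to `crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='...'" -f`.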
Adding OCR Mirror on +MGMTDB.
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +MGMTDB
[root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1568
         Available space (kbytes) :     408000
         ID                       : 1037914462
         Device/File Name         :  +OCR_VOTE
                                    Device/File integrity check succeeded
         Device/File Name         :    +MGMTDB
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
I recommend stopping and starting your cluster by issuing (as root) "crsctl stop cluster -all" and "crsctl start cluster -all".
If all resources restart successfully, then we can stop the Diskgroup CRS_TMP.
[grid@node12c02 ~]$ srvctl stop diskgroup -g CRS_TMP
[grid@node12c02 ~]$ srvctl status diskgroup -g CRS_TMP
Disk Group CRS_TMP is not running
Check that all resources are online. If so, you can remove this Diskgroup.
[grid@node12c02 ~]$ srvctl start diskgroup -g CRS_TMP -n node12c02
[grid@node12c02 ~]$ export ORACLE_SID=+ASM1
[grid@node12c02 ~]$ sqlplus / as sysasm

SQL> DROP DISKGROUP CRS_TMP INCLUDING CONTENTS;
Diskgroup dropped.
Then restart the whole clusterware stack again and check that all resources start without issues.
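The final check can be scripted by scanning the status listing for anything not ONLINE. This is a minimal sketch; the sample listing below stands in for live `crsctl status resource -t` output:

```shell
# Sample lines standing in for `crsctl status resource -t` output
status_listing='ora.MGMTDB.dg    ONLINE  ONLINE  node12c02
ora.mgmtdb       ONLINE  ONLINE  node12c02
ora.CRS_TMP.dg   OFFLINE OFFLINE node12c02'

# Count resources reported OFFLINE; anything above zero needs investigation
offline=$(printf '%s\n' "$status_listing" | grep -c 'OFFLINE')
echo "resources not ONLINE: $offline"
```

Here the dropped CRS_TMP diskgroup resource is expected to show OFFLINE until it is removed from the OCR; every other resource should be ONLINE.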
Hi Friend,
I expected to see posts on 12c RAC. Nice explanation.
Thank you,
Nassyam
Thanks NB
There will be other posts coming soon.
Thank you for this great article. Helped me change OCR/VotingDisk DG in 11.2.0.4!
Worked great; however, after upgrading to 12.1.0.2, Oracle disabled crsctl for modifying ora.* resources.
Running:
crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR, ora.PRRACB_DATA.dg) pullup(ora.MGMTLSNR, ora.PRRACB_DATA.dg) weak(global:type:ora.scan_listener.type, uniform:ora.ons)'" -f
I received error:
"CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command."
Workaround: add "-unsupported" to the modify command.
crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR, ora.PRRACB_DATA.dg) pullup(ora.MGMTLSNR, ora.PRRACB_DATA.dg) weak(global:type:ora.scan_listener.type, uniform:ora.ons)'" -f -unsupported
From Oracle Support:
Oracle doesn't support customers running crsctl on ora.* resources; this has been documented since at least 10.2. However, since it was not enforced, customers kept running it and then reported problems later, so in 12.1.0.2 it is now enforced, and Support provides the command for situations where it must be run.
I reviewed your procedure; there is a documented process for this which doesn't require crsctl:
How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) ( Doc ID 1589394.1 )
Thank you for the update.
When I created this post, Oracle had not yet released that note (Doc ID 1589394.1).
I will update the post.
Why can you not just create first DG +MGMTDB during install as ASM Normal Redundancy with three ASM Disks (LUNs)? Migrate VD, OCR, ASM spfile and ASM password file leaving MGMTDB on the original DG. Then simply add OCR back referencing MGMTDB. Is there something I am missing?
Hi Los,
You can do that. No problem with that.
As a DBA consultant, you must know that you can be asked to manage several different environments, including ones you have not installed or whose installation project you did not participate in.
Some beautiful morning, someone will ask you to change MGMTDB, and you will understand why I wrote this post.
This post covers "how to change an installation already deployed", not "what points to consider before installation". That is a subtle difference.
Cheers,
Levi Pereira
Re-read title. Clear now.
So would you recommend the initial installation be set up as mentioned?
Create initial +MGMTDB data group
DG will initially have GI MGMT Repository, Voting Disk, OCR, ASM SPfile, ASM Password File as part of installation.
2, 3 or 5 ASM Disk?
ASM Normal or High Redundancy?
Once installed, create +OCR_VOTE DG.
2, 3 or 5 ASM Disk?
ASM Normal or High Redundancy?
Move Voting Disk, OCR, ASM SPFile, ASM Password File to +OCR_VOTE DG.
Configure OCR backup in +MGMTDB DG.
Done.
As a side note, if you don't have enough disks to put +MGMTDB and +OCR_VOTE on physically separate disks, is RAID 1 (2 HDs) sufficient, with LUNs carved out to create the ASM disks needed for the above configuration?
Hi Los,
Depend on what you have.
Usually I don't use ASM mirroring, because storage mirroring (hardware mirroring) is much better and provides more performance (except when we are mirroring ASM disks between two or more storage arrays).
When using a single storage array, I usually create a single LUN for the first Diskgroup to store MGMTDB, OCR and ASM files, and 3 LUNs to store the Votedisk and OCR mirror.
At initial setup I use only one Diskgroup (CRS_FILES), which will store MGMTDB, 1 OCR, 1 Votedisk, the ASM SPFILE and the ASM password file.
After setup I suggest creating a DG (CRS_VOTE) with 3 LUNs, then moving the Votedisk to it and creating an OCR mirror on it.
The storage h/w must support common types of failures by tolerating possible controller or disk failure.
I have two Physical disks for MGMTDB, Voting Disk(s), OCR, OCR Mirror, ASM SPFILE and ASM Password File.
So I can:
RAID 1 (2 HD’s)
6 LUNs (5 GB each – total of 146GB capacity for Drive)
INITIAL INSTALLATION
CRS_FILES Disk Group
Three LUNs
ASM Normal Redundancy
Includes: MGMTDB, 1 OCR, 3 Vote Disks, ASM SPFile, ASM Password File
Move (replace) Voting Disks from CRS_FILES Disk Group to CRS_VOTE Disk Group.
Create OCR Mirror on CRS_VOTE Disk Group.
CRS_FILES Disk Group
Three LUNs
ASM Normal Redundancy
Includes: MGMTDB, OCR, ASM SPFile, ASM Password File
CRS_VOTE Disk Group
Three LUNs
Includes: Votedisk and OCR Mirror
I have only two physical disks. If you had only two physical disks for the Clusterware area (MGMTDB, Voting Disk(s), OCR, ASM SPFILE and ASM Password File), how would you configure it? Will the performance hit be significant for ASM redundancy in the CRS_FILES Disk Group using Normal Redundancy? Is there any benefit to making CRS_FILES ASM Normal Redundant? I have DATA and FRA disk groups. An OCR mirror can go to one of these Disk Groups, but maintenance flexibility will take a hit.
Hi All.
I have a question. The migration of MGMTDB to another diskgroup involves shutting down the instance. Do I have to stop the RAC database and cluster services? I mean, do I need a downtime window?
Regards.
I'm not sure right now, but I think you don't need downtime to move MGMTDB.
[…] So, as a reminder (which is one of the purposes of this blog), here’s a link to a very extensive article about how to migrate the CRS, Vote and spfile from one ASM DiskGroup to another: Migrate OCR to another DiskGroup […]
Hi Levi, I've seen that all tutorials create the DATA diskgroup and then move the OCR and Voting files to a new diskgroup (OCR_VOTE or similar).
Questions:
In a fresh install of Oracle 12c Grid Infrastructure (without repository DB) when we get to
the “Create ASM Disk Group” screen:
Why don't we create the diskgroup OCR_VOTE there, with normal redundancy using 3 LUNs? In this scenario, would we achieve the goal of having the 3 Voting files and 2 OCRs required?
And after finishing the Grid Infrastructure installation, we then create the DATA and FRA diskgroups.
Is there any reason this procedure does not work?
Thanks in advance
Cristian
Hi Cristian,
Sorry for delay.
Yes, you can do that, no problem.
The point here is not how to deploy a fresh install, but how to change it later.
This post covers how to change it no matter how it was deployed. I have seen many DBAs concerned with how to change it to reorganize their environments, and this post helps achieve that goal.
Regards,
Levi Pereira
Nice post!!! I'll be migrating the MGMTDB to a new Disk Group.
Thanks for sharing this wonderful info!!!
Excellent Post
Excellent Article, very useful !!!