ORAchk Health Checks For The Oracle Stack (ORACHK 2.2.4 and above)


Oracle Database 12c (12.1.0.*) and 11g (11.2.0.4) ship with a feature called RACcheck, although RACcheck itself existed before these releases. (In 2011 I posted about the benefits of RACcheck.)

A brief overview of RACcheck:
.....
RACcheck - The Oracle RAC Configuration Audit Tool
RACcheck is designed to audit vital configuration settings for the Oracle Database, single instance databases, as well as Oracle Real Application Clusters (Oracle RAC) databases. It also includes checks for Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM) and Oracle Grid Infrastructure.

RACcheck provides best practice recommendations considering the whole stack, including Maximum Availability Architecture (MAA) configurations, and is therefore the ideal tool for regular health checks as well as pre- and post-upgrade best practices assessments.
.....

Oracle has now renamed RACcheck to ORAchk.

ORAchk - Health Checks for the Oracle Stack

ORAchk replaces the popular RACcheck to support a wider range of products. ORAchk version 2.2.4 is now available for download and includes the following key features:

  • RACcheck renamed to ORAchk
  • ORAchk daemon auto-start mode after node reboot (init integration)
  • Merge multiple ORAchk collection reports
  • Upload of installed patches to database
  • Collection Manager for ORAchk, RACcheck and Exachk (Document 1602329.1)
  • ORAchk signature file in /tmp on all nodes to verify last ORAchk run
  • New checks and bug fixes, including:
    • 30 Oracle E-Business Suite AP module data integrity checks
    • 12 new Database checks
    • 8 new Solaris system checks

    Supported Platforms

  • Linux x86-64* (Enterprise Linux, RedHat and SuSE 9, SuSE 10 & SuSE 11)
  • Oracle Solaris SPARC (Solaris 10 and 11)
  • Oracle Solaris x86-64 (Solaris 10 and 11)
  • AIX **
  • HPUX**
  • * 32-bit platforms not supported, no planned support for Linux Itanium
    **Requires BASH Shell 3.2 or higher to be installed
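
    To verify the BASH prerequisite on AIX or HP-UX before staging the kit, you can simply check the installed version, e.g.:

    $ bash --version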

    Supported Oracle Releases

  • 10gR2
  • 11gR1
  • 11gR2
  • 12cR1
    When to Run ORAchk

  • After initial Oracle RAC deployment
  • Before planned system maintenance
  • After planned system maintenance
  • At least once every three months
    Install/Configure

    It is recommended to run the tool as the database software owner (e.g. oracle). You may also run the tool as the Grid Infrastructure software owner (e.g. grid) and it will collect the same data, but database credentials must then be supplied manually to perform the database-related audit checks. Typically, when run as oracle, OS authentication is set up for the database software owner and database login credentials are not needed.

    Download ORAchk

    Stage Location:
    It is recommended that the kit be staged and operated from a local filesystem on a single database server in order to provide the best performance possible.


    $ mkdir -p /u01/app/oracle/orachk

    [oracle@node11g01 install]$ cd /u01/app/oracle/orachk

    [oracle@node11g01 orachk]$ unzip orachk.zip
    Archive: orachk.zip
    inflating: raccheck
    inflating: rules.dat
    inflating: collections.dat
    inflating: readme.txt
    inflating: orachk
    creating: .cgrep/
    inflating: .cgrep/ogghc_12101.sql
    inflating: .cgrep/lcgrep4
    inflating: .cgrep/checkDiskFGMapping.sh
    inflating: .cgrep/ogghc_11204.sql
    inflating: .cgrep/lcgreps9
    inflating: .cgrep/ogghc_11203.sql
    inflating: .cgrep/scgrepx86
    inflating: .cgrep/acgrep
    inflating: .cgrep/oracle-upstarttmpl.conf
    inflating: .cgrep/check_reblance_free_space.sql
    inflating: .cgrep/CollectionManager_App.sql
    inflating: .cgrep/exalogic_zfs_checks.aksh
    inflating: .cgrep/hiacgrep
    inflating: .cgrep/init.tmpl
    inflating: .cgrep/lcgreps10
    inflating: .cgrep/preupgrd.sql
    inflating: .cgrep/diff_collections.pl
    inflating: .cgrep/merge_collections.pl
    inflating: .cgrep/ggdiscovery.sh
    creating: .cgrep/profiles/
    inflating: .cgrep/profiles/DA94919CD0DE0913E04312C0E50A7996.prf
    inflating: .cgrep/profiles/D49C0FBF8FBF4B1AE0431EC0E50A0F24.prf
    extracting: .cgrep/profiles/F13E11974A282AB3E04312C0E50ABCBF.prf
    inflating: .cgrep/profiles/EF6C016813C51366E04313C0E50AE11F.prf
    inflating: .cgrep/profiles/D8367AD6754763FEE04312C0E50A6FCB.prf
    inflating: .cgrep/profiles/DF65D6117CB41054E04312C0E50A69D1.prf
    inflating: .cgrep/profiles/EA5EE324E7E05128E04313C0E50A4B2A.prf
    inflating: .cgrep/profiles/E1BF012E8F210839E04313C0E50A7B68.prf
    inflating: .cgrep/profiles/D462A6F7E9C340FDE0431EC0E50ABE12.prf
    inflating: .cgrep/profiles/D49AD88F8EE75CD8E0431EC0E50A0BC3.prf
    inflating: .cgrep/profiles/E2E972DDE1E14493E04312C0E50A1AB1.prf
    inflating: .cgrep/profiles/F32F44CE0BCD662FE04312C0E50AB058.prf
    inflating: .cgrep/profiles/E8DF76E07DD82E0DE04313C0E50AA55D.prf
    inflating: .cgrep/profiles/D49B218473787400E0431EC0E50A0BB9.prf
    inflating: .cgrep/profiles/D49C0AB26A6D45A8E0431EC0E50ADE06.prf
    inflating: .cgrep/profiles/DFE9C207A8F2428CE04313C0E50A6B0A.prf
    inflating: .cgrep/profiles/D49C4F9F48735396E0431EC0E50A9A0B.prf
    inflating: .cgrep/profiles/D49BDC2EC9E624AEE0431EC0E50A3E12.prf
    inflating: .cgrep/profiles/DF65D0F7FB6F1014E04312C0E50A7808.prf
    inflating: .cgrep/scnhealthcheck.sql
    inflating: .cgrep/pxhcdr.sql
    inflating: .cgrep/lcgrep5
    inflating: .cgrep/scgrep
    inflating: .cgrep/raw_data_browser.pl
    inflating: .cgrep/profiles.dat
    inflating: .cgrep/rack_comparison.py
    inflating: .cgrep/versions.dat
    inflating: .cgrep/create_version.pl
    inflating: .cgrep/lcgreps11
    inflating: .cgrep/utluppkg.sql
    inflating: .cgrep/utlusts.sql
    inflating: .cgrep/reset_crshome.pl
    inflating: .cgrep/asrexacheck
    inflating: .cgrep/lcgrep6
    inflating: .cgrep/utlu112i.sql
    inflating: UserGuide.txt

    Running ORAchk Interactively

    [oracle@node11g01 orachk]$ ./orachk

    CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/11.2.0/grid?[y/n][y]

    Checking ssh user equivalency settings on all nodes in cluster

    Node node11g02 is configured for ssh user equivalency for oracle user

    Searching for running databases . . . . .

    . .
    List of running databases registered in OCR
    1. dborcl
    2. None of above

    Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].1
    . .

    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed ASM HOME RDBMS Installed CRS UP ASM UP RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    node11g01 Yes Yes Yes Yes Yes Yes dborcl_1
    node11g02 Yes Yes Yes Yes Yes Yes dborcl_2
    -------------------------------------------------------------------------------------------------------

    Copying plug-ins

    . . . . . . . . . . . . . . . . . . . . . . . . . . .

    . . . . . .

    17 of the included audit checks require root privileged data collection . If sudo is not configured or the root password is not available, audit checks which require root privileged data collection can be skipped.

    1. Enter 1 if you will enter root password for each host when prompted

    2. Enter 2 if you have sudo configured for oracle user to execute root_orachk.sh script

    3. Enter 3 to skip the root privileged collections

    4. Enter 4 to exit and work with the SA to configure sudo or to arrange for root access and run the tool later.

    Please indicate your selection from one of the above options for root access[1-4][1]:- 2

    *** Checking Best Practice Recommendations (PASS/WARNING/FAIL) ***

    Collections and audit checks log file is
    /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552/log/orachk.log
    .
    .
    .
    .
    ---------------------------------------------------------------------------------
    CLUSTERWIDE CHECKS
    ---------------------------------------------------------------------------------
    ---------------------------------------------------------------------------------

    Detailed report (html) - /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552/orachk_node11g01_dborcl_031814_170552.html

    UPLOAD(if required) - /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552.zip


    Below is the report generated in this test:
    Oracle RAC Assessment Report

    orachk usage options

    [oracle@node11g01 orachk]$ ./orachk -h
    Usage : ./orachk [-abvhpfmsuSo:c:t:]
    -a All (Perform best practice check and recommended patch check)
    -b Best Practice check only. No recommended patch check
    -h Show usage
    -v Show version
    -p Patch check only
    -m exclude checks for Maximum Availability Architecture (MAA) scorecards(see user guide for more details)
    -u Run orachk to check pre-upgrade or post-upgrade best practices for 11.2.0.3, 11.2.0.4 and 12.1.0.1
    -o pre or -o post is mandatory with the -u option, e.g. ./orachk -u -o pre
    -f Run offline. Checks will be performed on data already collected from the system
    -o Argument to an option. If -o is followed by v, V, Verbose or VERBOSE, it will print checks which pass on the screen;
    if the -o option is not specified, it will print only failures on screen. e.g.: orachk -a -o v

    -clusternodes
    Pass comma separated node names to run orachk only on subset of nodes.
    -dbnames
    Pass comma separated database names to run orachk only on subset of databases
    -localonly
    Run orachk only on local node.
    -debug
    Run orachk in debug mode. Debug log will be generated.
    eg:- ./orachk -debug
    -nopass
    Skip PASS'ed checks in the orachk report and in the upload to the database.

    -noscore
    Do not print healthscore in HTML report.

    -diff <report 1> <report 2> [-outfile <file>]
    Diff two orachk reports. Pass a directory name, zip file or html report file as <report 1> & <report 2>

    -exadiff <collection 1> <collection 2>
    Compare two different Exalogic racks and see if both are from the same release. Pass a directory name or zip file as <collection 1> & <collection 2> (applicable for Exalogic only)

    -c Used only under the guidance of Oracle support or development to override default components

    -initsetup : Setup auto restart. Auto restart functionality automatically brings up the orachk daemon when the node starts
    -initrmsetup : Remove auto restart functionality
    -initcheck : Check if auto restart functionality is set up or not
    -initpresetup : Sets root user equivalency for COMPUTE, STORAGE and IBSWITCHES. (root equivalency for COMPUTE nodes is mandatory for setting up auto restart functionality)

    -d
    start : Start the orachk daemon
    start_debug : Start the orachk daemon in debug mode
    stop : Stop the orachk daemon
    status : Check if the orachk daemon is running
    info : Print information about running orachk daemon
    stop_client : Stop the orachk daemon client
    nextautorun : print the next auto run time
    -daemon
    run orachk only if daemon is running
    -nodaemon
    Don't use the daemon to run orachk
    -set
    configure orachk daemon parameter like "param1=value1;param2=value2... "

    Supported parameters are:-

    AUTORUN_INTERVAL :- Automatic rerun interval in daemon mode. Set it to zero to disable automatic rerun (the default is zero).

    AUTORUN_SCHEDULE * * * * :- Automatic run at a specific time in daemon mode.
                     | | | |
                     | | | +----- day of week (0 - 6) (0 to 6 are Sunday to Saturday)
                     | | +------- month (1 - 12)
                     | +--------- day of month (1 - 31)
                     +----------- hour (0 - 23)

    example: orachk -set "AUTORUN_SCHEDULE=8,20 * * 2,5" will schedule runs on Tuesday and Friday at hours 8 and 20.

    AUTORUN_FLAGS : orachk flags to use for auto runs.

    example: orachk -set "AUTORUN_INTERVAL=12h;AUTORUN_FLAGS=-profile sysadmin" to run sysadmin profile every 12 hours

    orachk -set "AUTORUN_INTERVAL=2d;AUTORUN_FLAGS=-profile dba" to run dba profile once every 2 days.

    NOTIFICATION_EMAIL : Comma separated list of email addresses used for notifications by daemon if mail server is configured.

    PASSWORD_CHECK_INTERVAL : Interval to verify passwords in daemon mode

    COLLECTION_RETENTION : Purge orachk collection directories and zip files older than specified days.

    -unset
    unset the parameter
    example: orachk -unset "AUTORUN_SCHEDULE"

    -get
    Print the value of parameter

    -excludeprofile
    Pass a specific profile to exclude.
    The list of supported profiles is the same as for -profile.

    -merge
    Pass comma separated collection names (directories or zip files) to merge collections and prepare a single report.
    eg:- ./orachk -merge orachk_hostname1_db1_120213_163405.zip,orachk_hostname2_db2_120213_164826.zip

    -vmguest
    Pass comma separated filenames containing the Exalogic guest VM list (applicable for Exalogic only)

    -hybrid [-phy]
    phy :Pass comma separated physical compute nodes(applicable for Exalogic only)
    eg:- ./orachk -hybrid -phy phy_node1,phy_node2

    -profile Pass specific profile.
    List of supported profiles:

    asm asm Checks
    clusterware Oracle clusterware checks
    compute_node Compute Node checks (Exalogic only)
    control_VM Checks only for Control VM(ec1-vm, ovmm, db, pc1, pc2). No cross node checks
    dba dba Checks
    ebs Oracle E-Business Suite checks
    el_extensive Extensive EL checks
    el_lite Exalogic-Lite Checks(Exalogic Only)
    el_rackcompare Data Collection for Exalogic Rack Comparison Tool(Exalogic Only)
    goldengate Oracle GoldenGate checks
    maa Maximum Availability Architecture Checks
    obiee obiee Checks(Exalytics Only)
    storage Oracle Storage Server Checks
    switch Infiniband switch checks
    sysadmin sysadmin checks
    timesten timesten Checks(Exalytics Only)
    virtual_infra OVS, Control VM, NTP-related and stale VNICs check (Exalogic Only)
    zfs ZFS storage appliances checks (Exalogic Only)

    -cells
    Pass comma separated storage server names to run orachk only on selected storage servers.

    -ibswitches
    Pass comma separated infiniband switch names to run orachk only on selected infiniband switches.

    -zfsnodes
    Pass comma separated ZFS storage appliance names to run orachk only on selected storage appliances.
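
    As an illustration of the -set parameters above, a hypothetical one-liner configuring notification and retention (the email address is a placeholder):

    ./orachk -set "NOTIFICATION_EMAIL=dba@example.com;COLLECTION_RETENTION=30"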

    ORAchk Other Useful Options Not Covered Here

  • Using ORAchk Silently
    ORAchk can optionally be run in "silent" (non-interactive) mode to enable scheduling and automation; see the sketch below.
    This is only required if you do not want to use the ORAchk daemon functionality.
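
    A minimal sketch of scheduling a silent run from cron, assuming the -s flag listed in the usage output above invokes the silent mode described in the user guide:

    # hypothetical crontab entry: silent best-practice check every Sunday at 03:00
    0 3 * * 0 cd /u01/app/oracle/orachk && ./orachk -s -b > /dev/null 2>&1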

  • Using ORAchk Daemon Mode Operation
    This functionality permits non-interactive (batch/silent mode) execution at a regular interval.

    When running ORAchk in daemon mode, the most recent and next most recent (if any) collection reports are automatically compared. If a mail address is configured, a summary will be emailed along with the reports and the comparison report as attachments.
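
    A minimal sequence based on the usage output above (the email address is a placeholder):

    ./orachk -set "AUTORUN_INTERVAL=1d;NOTIFICATION_EMAIL=dba@example.com"
    ./orachk -d start         # start the orachk daemon
    ./orachk -d nextautorun   # confirm the next scheduled run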

  • Report Comparisons with ORAchk
    ORAchk has the ability to compare two ORAchk reports.
    This allows for trending of Success Factor and Best Practice changes over time, after planned maintenance, etc., within a user-friendly HTML report.
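
    For example, to compare the collection produced above with a later one (the second file name is hypothetical):

    ./orachk -diff orachk_node11g01_dborcl_031814_170552.zip orachk_node11g01_dborcl_041514_170552.zip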

  • ORAchk in Upgrade Readiness Mode
    ORAchk can be used to obtain an automated 11.2.0.3 (or above) Upgrade Readiness Assessment.
    The goal of the ORAchk Upgrade Readiness Assessment is to make upgrade planning for Oracle RAC and Oracle Clusterware target versions 11.2.0.3 and above as smooth as possible by automating many of the manual pre- and post-checks detailed in various upgrade-related documents.
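
    As shown in the usage output above, -o pre or -o post is mandatory with -u:

    ./orachk -u -o pre    # pre-upgrade best practices assessment
    ./orachk -u -o post   # post-upgrade best practices assessment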

    Refer to the following MOS note for more details:
    ORAchk - Oracle Configuration Audit Tool (Doc ID 1268927.2)

    ORAchk User's Guide
    For detailed instructions on how to run ORAchk, including troubleshooting steps, available options, etc.

    Enjoy!!!


    IBM POWER7 AIX and Oracle Database performance considerations


    This post is intended to provide DBAs with advice and website links to assist with IBM Power Systems running in an Oracle environment.
    Most issues that show up on POWER7 are the result of not following best practices advice that applies to all Power Systems generations.

    Metalink: IBM POWER7 AIX and Oracle Database performance considerations -- 10g & 11g (Doc ID 1507249.1)

    IBM White paper: IBM POWER7 AIX and Oracle Database performance considerations


    Basic Concept About Oracle Flex Cluster


    The documentation is not very clear about the concept and purpose of Oracle Flex Cluster.
    Because of that, I recommend you read the Oracle white paper below before starting with Oracle Clusterware 12.1.0.1.
    Some books about Clusterware 12c make mistakes on this new architecture.

    Read the white paper at the links below:
    Oracle Link
    WordPress Link

    Hope this helps


    Visual cleaning….


    I'm removing the visual clutter from the blog, so I have chosen a clean theme.
    I hope you enjoy it.


    How to move/migrate OCR, Voting Disk, ASM (SPFILE and PWDFILE) and MGMTDB to a different Diskgroup on ASM 12.1.0.1


    This post is only a recommendation about how to configure Clusterware Files and MGMTDB on ASM using Grid Infrastructure 12.1.0.1.

    Migrating MGMTDB to a different diskgroup is not documented as of 25/09/2013. Oracle should release a note on this procedure soon.

    When we install/configure a fresh Oracle GI for a cluster, the cluster files and MGMTDB are stored on the first diskgroup created at installation time.

    To ease management and increase the availability of the cluster, it is recommended to set up a small diskgroup to store the cluster files. See also:

    ASM Spfile stuck on Diskgroup : ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]

    The cluster files are:

    Oracle Cluster Registry (OCR)

    The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. The OCR also manages information about Oracle Clusterware resource profiles for customized applications.

    The OCR is stored in a diskgroup in the same way database files are stored: the extents are spread across all disks of the diskgroup, and the redundancy (which is at the extent level) is based on the redundancy of the disk group. We can store only one OCR per diskgroup.

    Voting Disk

    Voting Disk, aka voting files: a file that manages information about node membership.

    The voting disks are placed directly on ASM disks/LUNs. Oracle Clusterware stores the votedisk on a disk within the disk group that holds the voting files, but it does not rely on ASM to access them; that means Oracle Clusterware does not need the diskgroup to be mounted in order to read and write the ASM disk.
    You cannot find/list voting files using SQL*Plus (v$asm_diskgroup, v$asm_file, v$asm_alias), the ASMCMD utility, or the ASMCA GUI.
    You can only tell whether a voting file exists on an ASM disk via v$asm_disk (column VOTING_FILE); see the query sketch below. Although voting files do not depend on the diskgroup to be accessed, that does not mean we don't need the diskgroup: the diskgroup and the voting files are linked by their settings and redundancy.

    We cannot place voting disks in multiple diskgroups; only one diskgroup is supported to store the voting disks.
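
    A quick way to spot voting files from the ASM instance, using the same v$asm_disk column this post relies on later:

    SELECT name, path, voting_file
      FROM v$asm_disk
     WHERE voting_file = 'Y';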

    ASM Spfile

    This file is the server parameter file of the ASM instance.

    The SPFILE is stored in a diskgroup in the same way as the OCR and database files.

    ASM Password File

    A new feature of ASM 12.1 allows storing the database or ASM password file on an ASM diskgroup.

    An individual password file for Oracle Database or Oracle ASM can reside on a designated Oracle ASM disk group. Having the password files reside on a single location accessible across the cluster reduces maintenance costs and situations where passwords become out of sync.

    The COMPATIBLE.ASM disk group attribute must be set to at least 12.1 for the disk group where the password is to be located.
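
    You can verify the attribute from the ASM instance, for example:

    SELECT name, compatibility, database_compatibility
      FROM v$asm_diskgroup;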

    Database MGMTDB

    The Management Repository is a new feature of GI 12.1.

    The Management Repository is a single-instance database managed by Oracle Clusterware in 12c. If the option is selected during GI installation, the database is configured and managed by GI. As it is a single-instance database, it runs on one node of the cluster; and as it is managed by GI, if the hosting node goes down the database is automatically failed over to another node.

    Management database by default uses the same shared storage as OCR/Voting File.

    There is no procedure in the 12.1 documentation or Metalink notes to change the location of MGMTDB (as of 25/09/2013).

    My recommendations:

    Create two diskgroups:

    +OCR_VOTE

    Even if you use RAID at the storage layer, create a small diskgroup with 3 ASM disks using normal redundancy. This diskgroup will store the voting disks, OCR, ASM SPFILE and ASM password file.

    The reason to create this diskgroup with normal redundancy is to provide availability in case of the loss of one voting disk or one ASM disk, caused by human error, storage failure or host (controller) failure.

    +MGMTDB

    This diskgroup can be created using external redundancy if you are using RAID at the storage layer.

    This diskgroup will store MGMTDB and an OCR mirror.

    Extremely important: to perform the changes below, all nodes in the cluster must be active.

    NEVER TRY TO REMOVE AN ASM DISK USING ASMCA, ASMCMD or SQL*Plus IF A VOTING DISK EXISTS ON THAT ASM DISK.

    You must always move the voting disks off the ASM disk first, using 'crsctl replace votedisk', and only then remove the ASM disk.

    Let's Play:

    Checking that the nodes are active.

    	
    [grid@node12c02 ~]$ olsnodes -s
    node12c01        	Active
    node12c02 			Active
    

    Use ocrcheck to find where the OCR file is stored.

    	
    [grid@node12c02 ~]$ ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1548
             Available space (kbytes) :     408020
             ID                       : 1037914462
             Device/File Name         :   +CRS_TMP
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check bypassed due to non-privileged user
    

    Use crsctl to find where the voting disk is stored.

    	
    [grid@node12c02 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   0d2d3b5d705a4fefbf8824c99251ac5a (/dev/oracleasm/disks/OCR_TEMP) [CRS_TMP]
    Located 1 voting disk(s).
    

    Use asmcmd to find where the ASM SPFILE and ASM password file are stored.

    	
    [grid@node12c02 ~]$ asmcmd spget
    +CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607
    
    [grid@node12c02 ~]$ asmcmd pwget --asm
    +CRS_TMP/crs12c/orapwASM
    

    Getting info about the voting disk on ASM.

    Connect to the ASM instance as sysdba:

    SQL>
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A20
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
    FROM V$ASM_DISKGROUP
    WHERE NAME='CRS_TMP');

    			 
    NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    ------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
    CRS_TMP_0000             /dev/oracleasm/disks/OCR_TEMP  MEMBER               CRS_TMP_0000         REGULAR              Y
    

    Get the name of the cluster.

    	
    [grid@node12c02 ~]$ olsnodes -c
    crs12c
    

    set linesize 100
    col FILES_OF_CLUSTER for a60
    select concat('+'||gname, sys_connect_by_path(aname, '/')) FILES_OF_CLUSTER
    from ( select b.name gname, a.parent_index pindex, a.name aname,
    a.reference_index rindex , a.system_created, a.alias_directory,
    c.type file_type
    from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
    where a.group_number = b.group_number
    and a.group_number = c.group_number(+)
    and a.file_number = c.file_number(+)
    and a.file_incarnation = c.incarnation(+)
    ) WHERE file_type in ( 'ASMPARAMETERFILE','PASSWORD','OCRFILE')
    start with (mod(pindex, power(2, 24))) = 0
    and rindex in
    ( select a.reference_index
    from v$asm_alias a, v$asm_diskgroup b
    where a.group_number = b.group_number
    and (mod(a.parent_index, power(2, 24))) = 0
    and a.name = LOWER('&CLUSTERNAME')
    )
    connect by prior rindex = pindex;

    	
    Enter value for clustername: crs12c
    old  17:                         and a.name = LOWER('&CLUSTERNAME')
    new  17:                         and a.name = LOWER('crs12c')
    
    FILES_OF_CLUSTER
    ------------------------------------------------------------
    +CRS_TMP/crs12c/ASMPARAMETERFILE/REGISTRY.253.826898607
    +CRS_TMP/crs12c/OCRFILE/REGISTRY.255.826898619
    +CRS_TMP/crs12c/orapwasm
    

    Creating Diskgroup OCR_VOTE

    When creating diskgroup OCR_VOTE, each disk must be in a different failgroup. I don't add a QUORUM failgroup because these LUNs are on the storage array. Use a QUORUM failgroup when you are placing a disk outside your environment (e.g. an NFS file-disk for quorum purposes), because such disks cannot contain data; see the sketch below.
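
    For reference, a sketch of such a diskgroup (disk paths are hypothetical; the NFS-backed disk sits in the QUORUM failgroup and can hold a voting file but no data):

    CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
      FAILGROUP controller01 DISK '/dev/asm-ocr_vote1'
      FAILGROUP controller02 DISK '/dev/asm-ocr_vote2'
      QUORUM FAILGROUP nfs_quorum DISK '/nfs/quorum/vote_disk1'
      ATTRIBUTE 'compatible.asm' = '12.1';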

    List ASM Disk
    SQL>
    col path for a30
    col name for a20
    col header_status for a20
    select path,name,header_status from v$asm_disk
    where path like '/dev/asm%';

    	 
     PATH                           NAME                 HEADER_STATUS
    ------------------------------ -------------------- --------------------
    /dev/asm-ocr_vote3                                  CANDIDATE
    /dev/asm-ocr_vote1                                  CANDIDATE
    /dev/asm-ocr_vote2                                  CANDIDATE          
    
    	
    SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
         FAILGROUP controller01 DISK '/dev/asm-ocr_vote1'
         FAILGROUP controller02 DISK '/dev/asm-ocr_vote2'
         FAILGROUP controller03 DISK '/dev/asm-ocr_vote3'
         ATTRIBUTE 
    	 'au_size'='1M',
    	 'compatible.asm' = '12.1';
    
    Diskgroup created.
    

    Mount diskgroup OCR_VOTE on the other nodes

    	
    SQL>  ! srvctl start diskgroup -g ocr_vote -n node12c01
    

    Check the status of diskgroup OCR_VOTE on all nodes.
    The diskgroup OCR_VOTE must be mounted on all hub nodes.

    	
    SQL> ! srvctl status diskgroup -g ocr_vote
    Disk Group ocr_vote is running on node12c01,node12c02
    

    Moving Voting Disk to Diskgroup OCR_VOTE

    As the grid user, issue:

    	
    [grid@node12c01 ~]$  crsctl replace votedisk +OCR_VOTE
    Successful addition of voting disk be10b93cca734f65bfa6b89cc574d485.
    Successful addition of voting disk 46a9b6154c184f89bf6a503846ea3aa3.
    Successful addition of voting disk 6bc41f75098d4f5dbf97783770760445.
    Successful deletion of voting disk 0d2d3b5d705a4fefbf8824c99251ac5a.
    Successfully replaced voting disk group with +OCR_VOTE.
    CRS-4266: Voting file(s) successfully replaced
    

    Query the CSS votedisk on all nodes

    	
    [grid@node12c01 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
     2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
     3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
    Located 3 voting disk(s).
    
    [grid@node12c02 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
     2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
     3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
    Located 3 voting disk(s).
    

    Query the ASM disks on ASM
    SQL>
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A10
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
    FROM V$ASM_DISKGROUP
    WHERE NAME='OCR_VOTE');

    	
    NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    ------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
    OCR_VOTE_0002            /dev/asm-ocr_vote3             MEMBER               CONTROLLER03         REGULAR              Y
    OCR_VOTE_0001            /dev/asm-ocr_vote2             MEMBER               CONTROLLER02         REGULAR              Y
    OCR_VOTE_0000            /dev/asm-ocr_vote1             MEMBER               CONTROLLER01         REGULAR              Y
    

    The voting disk was moved successfully.

    Moving OCR to Diskgroup OCR_VOTE

    As the root user, issue:

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :   +CRS_TMP
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    Add an OCR location on diskgroup OCR_VOTE

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +OCR_VOTE	
    

    Now we should see two OCR locations, one in each diskgroup.

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :   +CRS_TMP
                                        Device/File integrity check succeeded
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    Delete OCR from Diskgroup CRS_TMP

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -delete +CRS_TMP
    

    Now we should see only one OCR, on diskgroup OCR_VOTE.

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    Check with ocrcheck on the other nodes.

    	
    [root@node12c01 ~]#  /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    The OCR was moved successfully.

    Moving the ASM SPFILE
    You will get an error saying the file is still being accessed, but the file is copied to the new location and the profile is updated.

    	
    [grid@node12c01 ~]$ asmcmd spmove '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' '+OCR_VOTE/crs12c/spfileASM.ora'
    ORA-15032: not all alterations performed
    ORA-15028: ASM file '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
    

    Checking that the file was copied and the profile updated

    	
    [grid@node12c01 ~]$ asmcmd spget
    +OCR_VOTE/crs12c/spfileASM.ora
    

    Moving ASM Password File

    	
    [grid@node12c01 ~]$ asmcmd pwmove --asm +CRS_TMP/orapwASM +OCR_VOTE/crs12c/orapwASM
    moving +CRS_TMP/orapwASM -> +OCR_VOTE/crs12c/orapwASM
    

    Checking that the file was moved and the profile updated

    	
    [grid@node12c01 ~]$ asmcmd pwget --asm
    +OCR_VOTE/crs12c/orapwASM
    
    [grid@node12c01 ~]$ srvctl config asm
    ASM home: /u01/app/12.1.0/grid
    Password file: +OCR_VOTE/crs12c/orapwASM
    ASM listener: LISTENER
    

    After these changes, restart the cluster to release the old ASM SPFILE.
    You can restart one node at a time if you wish.

    You DON'T need to stop the whole cluster to release the ASM SPFILE.

    As my cluster is a test cluster, I will stop the cluster on all nodes.

    	
    [root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
    CRS-2673: Attempting to stop 'ora.crsd' on 'node12c01'
    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c01'
    CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c01'
    CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node12c01'
    CRS-2677: Stop of 'ora.scan1.vip' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.crsd' on 'node12c02'
    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c02'
    CRS-2673: Attempting to stop 'ora.cvu' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.oc4j' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.node12c02.vip' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.mgmtdb' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node12c02'
    CRS-2677: Stop of 'ora.cvu' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node12c02'
    CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.node12c02.vip' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.node12c01.vip' on 'node12c02'
    CRS-2677: Stop of 'ora.scan2.vip' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.node12c01.vip' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.mgmtdb' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'node12c02'
    CRS-2677: Stop of 'ora.MGMTLSNR' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.proxy_advm' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.oc4j' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.proxy_advm' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c01'
    CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
    CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01'
    CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.ons' on 'node12c01'
    CRS-2677: Stop of 'ora.ons' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c01'
    CRS-2677: Stop of 'ora.net1.network' on 'node12c01' succeeded
    CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c01' has completed
    CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
    CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02'
    CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.ons' on 'node12c02'
    CRS-2677: Stop of 'ora.ons' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c02'
    CRS-2677: Stop of 'ora.net1.network' on 'node12c02' succeeded
    CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c02' has completed
    CRS-2677: Stop of 'ora.crsd' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.evmd' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.storage' on 'node12c02'
    CRS-2677: Stop of 'ora.storage' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
    CRS-2677: Stop of 'ora.crsd' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.evmd' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.storage' on 'node12c01'
    CRS-2677: Stop of 'ora.storage' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
    CRS-2677: Stop of 'ora.evmd' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.evmd' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c01'
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'node12c01'
    CRS-2677: Stop of 'ora.cssd' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c02'
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'node12c02'
    CRS-2677: Stop of 'ora.cssd' on 'node12c02' succeeded
    

    Start the cluster on all nodes

    	
    [root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all
    CRS-2672: Attempting to start 'ora.evmd' on 'node12c01'
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c01'
    CRS-2672: Attempting to start 'ora.evmd' on 'node12c02'
    CRS-2676: Start of 'ora.cssdmonitor' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'node12c01'
    CRS-2672: Attempting to start 'ora.diskmon' on 'node12c01'
    CRS-2676: Start of 'ora.diskmon' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c02'
    CRS-2676: Start of 'ora.evmd' on 'node12c01' succeeded
    CRS-2676: Start of 'ora.cssdmonitor' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'node12c02'
    CRS-2672: Attempting to start 'ora.diskmon' on 'node12c02'
    CRS-2676: Start of 'ora.evmd' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.diskmon' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.cssd' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'node12c02'
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c02'
    CRS-2676: Start of 'ora.cssd' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'node12c01'
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c01'
    CRS-2676: Start of 'ora.ctssd' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.ctssd' on 'node12c01' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'node12c02'
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'node12c01'
    CRS-2676: Start of 'ora.asm' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.storage' on 'node12c01'
    CRS-2676: Start of 'ora.asm' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.storage' on 'node12c02'
    CRS-2676: Start of 'ora.storage' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'node12c02'
    CRS-2676: Start of 'ora.crsd' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.storage' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'node12c01'
    CRS-2676: Start of 'ora.crsd' on 'node12c01' succeeded
    

    Migrate MGMTDB to another Diskgroup

    MGMTDB is a "common" Oracle RDBMS (dedicated to CHM). We can perform administrative tasks on it just like on any other Oracle RDBMS.
    So, to migrate MGMTDB to another diskgroup I will use RMAN.

    Creating Diskgroup MGMTDB

    	
    SQL> CREATE DISKGROUP MGMTDB EXTERNAL REDUNDANCY
         DISK '/dev/asm-mgmtdb1'
         ATTRIBUTE 
    	 'au_size'='1M',
    	 'compatible.asm' = '12.1',
    	 'compatible.rdbms' = '12.1';			 
    			 
    Diskgroup created.
    

    Starting the MGMTDB diskgroup on the other node and checking its status

    	
    SQL> ! srvctl start diskgroup -g MGMTDB -n node12c02
    
    SQL> ! srvctl status diskgroup -g MGMTDB
    Disk Group MGMTDB is running on node12c01,node12c02
    

    To manage MGMTDB you must use the srvctl utility:
    srvctl start/stop/config/modify... mgmtdb

    Getting the current configuration of MGMTDB

    	
    
    [grid@node12c01 ~]$ srvctl config mgmtdb
    Database unique name: _mgmtdb
    Database name:
    Oracle home: /u01/app/12.1.0/grid
    Oracle user: grid
    Spfile: +CRS_TMP/_mgmtdb/spfile-MGMTDB.ora
    Password file:
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Database instance: -MGMTDB
    Type: Management
    

    Check where MGMTDB is running.

    	
    [grid@node12c01 ~]$ srvctl status mgmtdb
    Database is enabled
    Instance -MGMTDB is running on node node12c02
    

    You must migrate MGMTDB from the host (node) where it is running.

    Let's start.
    Stop the MGMT listener and MGMTDB

    	
    [grid@node12c02 ~]$ srvctl stop mgmtdb
    
    [grid@node12c02 ~]$ srvctl stop mgmtlsnr
    
    

    Precaution: I will create a full backup of the MGMTDB database before starting the migration.

    	
    [grid@node12c02 ~]$ mkdir -p /u01/app/grid/backup_mgmtdb
    
    [grid@node12c02 ~]$ export ORACLE_SID=-MGMTDB
    [grid@node12c02 ~]$ rman target /
    
    Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:30:56 2013
    
    Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database (not started)
    
    RMAN> startup mount
    
    connected to target database (not started)
    Oracle instance started
    database mounted
    
    Total System Global Area     521936896 bytes
    
    Fixed Size                     2290264 bytes
    Variable Size                289410472 bytes
    Database Buffers             222298112 bytes
    Redo Buffers                   7938048 bytes
    
    
    RMAN>  backup database format '/u01/app/grid/backup_mgmtdb/rman_mgmtdb_%U' tag='bk_db_move_dg';
    
    Starting backup at 24-SEP-13
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=6 device type=DISK
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
    input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
    input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
    input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
    input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
    channel ORA_DISK_1: starting piece 1 at 24-SEP-13
    channel ORA_DISK_1: finished piece 1 at 24-SEP-13
    piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_03okm63j_1_1 tag=BK_DB_MOVE_DG comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 24-SEP-13
    channel ORA_DISK_1: finished piece 1 at 24-SEP-13
    piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_04okm650_1_1 tag=BK_DB_MOVE_DG comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-SEP-13
    

    Migrating the SPFILE to diskgroup +MGMTDB.
    RMAN will restore the SPFILE and update the OCR with the new SPFILE location.
    Database _MGMTDB MUST be mounted or opened.

    	
    
    RMAN> restore spfile to "+MGMTDB";
    
    Starting restore at 24-SEP-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=355 device type=DISK
    
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: restoring SPFILE
    output file name=+MGMTDB
    channel ORA_DISK_1: reading from backup piece /u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1
    channel ORA_DISK_1: piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1 tag=BK_DB_MOVE_DG
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
    Finished restore at 24-SEP-13
    
    [grid@node12c01 ~]$ srvctl config mgmtdb |grep Spfile
    Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    
    RMAN> shutdown immediate;
    database dismounted
    Oracle instance shut down
    

    Migrating the controlfile to diskgroup MGMTDB
    P.S.: Before making this change, check which SPFILE you are updating.

    	
    
    [grid@node12c02 ~]$ sqlplus / as sysdba
    SQL> startup nomount;
    ORACLE instance started.
    
    Total System Global Area  521936896 bytes
    Fixed Size                  2290264 bytes
    Variable Size             285216168 bytes
    Database Buffers          226492416 bytes
    Redo Buffers                7938048 bytes
    
    SQL> show parameter spfile
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    spfile                               string      +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    
    
    SQL> show parameter control_file
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    control_files                        string      +CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953
    
    SQL> alter system set control_files='+MGMTDB' scope=spfile ;
    
    System altered.
    
    SQL> shutdown immediate;
    ORA-01507: database not mounted
    
    
    ORACLE instance shut down.
    SQL> exit
    [grid@node12c02 ~]$ rman target /
    
    Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:45:37 2013
    
    Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database (not started)
    
    RMAN> startup nomount
    
    Oracle instance started
    
    Total System Global Area     521936896 bytes
    
    Fixed Size                     2290264 bytes
    Variable Size                289410472 bytes
    Database Buffers             222298112 bytes
    Redo Buffers                   7938048 bytes
    
    RMAN> RESTORE CONTROLFILE FROM '+CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953';
    
    Starting restore at 24-SEP-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=124 device type=DISK
    
    channel ORA_DISK_1: copied control file copy
    output file name=+MGMTDB/_MGMTDB/CONTROLFILE/current.263.827005571
    Finished restore at 24-SEP-13
    
    RMAN> startup mount
    
    database is already started
    database mounted
    released channel: ORA_DISK_1
    
    

    The controlfile was moved successfully.

    Migrating Database Files to Diskgroup MGMTDB

    	
    RMAN>  BACKUP AS COPY DEVICE TYPE DISK DATABASE FORMAT '+MGMTDB';
    
    Starting backup at 24-SEP-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
    output file name=+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237 tag=TAG20130924T194034 RECID=1 STAMP=827005278
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
    output file name=+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281 tag=TAG20130924T194034 RECID=2 STAMP=827005291
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
    output file name=+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295 tag=TAG20130924T194034 RECID=3 STAMP=827005303
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
    output file name=+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311 tag=TAG20130924T194034 RECID=4 STAMP=827005312
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
    output file name=+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313 tag=TAG20130924T194034 RECID=5 STAMP=827005316
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
    channel ORA_DISK_1: starting datafile copy
    copying current control file
    output file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 tag=TAG20130924T194034 RECID=6 STAMP=827005317
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 24-SEP-13
    channel ORA_DISK_1: finished piece 1 at 24-SEP-13
    piece handle=+MGMTDB/_MGMTDB/BACKUPSET/2013_09_24/nnsnf0_tag20130924t194034_0.262.827005319 tag=TAG20130924T194034 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-SEP-13
    
    RMAN> SWITCH DATABASE TO COPY;
    
    datafile 1 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281"
    datafile 2 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295"
    datafile 3 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313"
    datafile 4 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311"
    datafile 5 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237"
    
    

    The datafiles were migrated successfully.

    Checking Datafiles

    
    RMAN> report schema;
    
    Report of database schema for database with db_unique_name _MGMTDB
    
    List of Permanent Datafiles
    ===========================
    File Size(MB) Tablespace           RB segs Datafile Name
    ---- -------- -------------------- ------- ------------------------
    1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
    2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
    3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
    4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
    5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237
    
    List of Temporary Files
    =======================
    File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
    ---- -------- -------------------- ----------- --------------------
    1    20       TEMP                 32767       +CRS_TMP/_MGMTDB/TEMPFILE/temp.266.826899969
    

    Move Tempfile to Diskgroup MGMTDB.

    RMAN> run {
    SET NEWNAME FOR TEMPFILE 1 TO '+MGMTDB';
    SWITCH TEMPFILE ALL;
     }
    
    executing command: SET NEWNAME
    
    renamed tempfile 1 to +MGMTDB in control file
    

    At database startup, the TEMP file will be recreated on the new diskgroup.

    Starting the database and checking.

    RMAN> startup
    
    database is already started
    database opened
    
    RMAN> report schema;
    
    using target database control file instead of recovery catalog
    Report of database schema for database with db_unique_name _MGMTDB
    
    List of Permanent Datafiles
    ===========================
    File Size(MB) Tablespace           RB segs Datafile Name
    ---- -------- -------------------- ------- ------------------------
    1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
    2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
    3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
    4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
    5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237
    
    List of Temporary Files
    =======================
    File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
    ---- -------- -------------------- ----------- --------------------
    1    20       TEMP                 32767       +MGMTDB/_MGMTDB/TEMPFILE/temp.261.827005913
    
    RMAN> exit
    Recovery Manager complete.
    

    Deleting the old datafile copies on the old diskgroup CRS_TMP.

    RMAN> delete copy ;
    
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=124 device type=DISK
    specification does not match any archived log in the repository
    List of Datafile Copies
    =======================
    
    Key     File S Completion Time Ckp SCN    Ckp Time
    ------- ---- - --------------- ---------- ---------------
    7       1    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
    
    8       2    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
    
    9       3    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
    
    10      4    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
    
    11      5    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
    
    List of Control File Copies
    ===========================
    
    Key     S Completion Time Ckp SCN    Ckp Time
    ------- - --------------- ---------- ---------------
    6       A 24-SEP-13       798766     24-SEP-13
            Name: +MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317
            Tag: TAG20130924T194034
    
    
    Do you really want to delete the above objects (enter YES or NO)? yes
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845 RECID=7 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821 RECID=8 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817 RECID=9 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917 RECID=10 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881 RECID=11 STAMP=827005353
    deleted control file copy
    control file copy file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 RECID=6 STAMP=827005317
    Deleted 6 objects
    

    Migrating Redo Logs to Diskgroup MGMTDB
    Note: the database must be OPEN for the steps below.
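
    A quick way to confirm the open mode before touching the redo logs (a generic SQL*Plus check, nothing specific to the repository database):

    SQL> select name, open_mode from v$database;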

    [grid@node12c02 ~]$ sqlplus / as sysdba
    SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    
        GROUP# MEMBER                                             STATUS
    ---------- -------------------------------------------------- ----------------
             1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   INACTIVE
             2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   CURRENT
             3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   INACTIVE
    

    Adding New Log Members

    SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 1;
    Database altered.
    
    SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 2;
    Database altered.
     
    SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 3;
    Database altered.
    
    SQL> alter system switch logfile;
    
    System altered.
    
    SQL> /
    
    System altered.
    
    SQL> /
    
    System altered.
    
    SQL> /
    System altered.
    
    SQL> /
    System altered.
    
    SQL> alter system checkpoint;
    
    

    Removing the Old Log Members from CRS_TMP

    SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    
        GROUP# MEMBER                                             STATUS
    ---------- -------------------------------------------------- ----------------
             1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   ACTIVE
             1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    ACTIVE
             2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    ACTIVE
             2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   ACTIVE
             3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    CURRENT
             3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   CURRENT
    
    6 rows selected.
    
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955';
    Database altered.
     
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955';
    Database altered.
     
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
    ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
    *
    ERROR at line 1:
    ORA-01609: log 3 is the current log for thread 1 - cannot drop members
    ORA-00312: online log 3 thread 1:
    '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
    ORA-00312: online log 3 thread 1:
    '+MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479'
    
    Group 3 is still the CURRENT log, so force a log switch and a checkpoint, then retry the drop:

    SQL> alter system switch logfile;
    
    System altered.
    
    SQL> alter system checkpoint;
    
    System altered.
    
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
    
    Database altered.
    
    SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    
        GROUP# MEMBER                                             STATUS
    ---------- -------------------------------------------------- ----------------
             1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    CURRENT
             2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    INACTIVE
             3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    INACTIVE
    

    Redo logs were moved successfully.

    Check the configuration of the resource ora.mgmtdb.
    You may need to modify this resource to remove the old diskgroup from its dependencies.

    As grid user:

    [grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
    START_DEPENDENCIES=hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)
    STOP_DEPENDENCIES=hard(intermediate:ora.MGMTLSNR,shutdown:ora.CRS_TMP.dg,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)
    

    If the command above returns output, you need to update the OCR by removing the old diskgroup from the resource dependencies.
    If you remove the diskgroup without updating the resource ora.mgmtdb, srvctl will not be able to start it and will raise errors:

    [grid@node12c02 ~]$ srvctl config mgmtdb
    Database unique name: _mgmtdb
    Database name:
    Oracle home: /u01/app/12.1.0/grid
    Oracle user: grid
    Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    Password file:
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Database instance: -MGMTDB
    PRCD-1012 : Failed to retrieve disk group list for database _mgmtdb.
    PRCR-1035 : Failed to look up CRS resource ora.CRS_TMP.dg for _mgmtdb
    PRCA-1000 : ASM Disk Group CRS_TMP does not exist
    PRCR-1001 : Resource ora.CRS_TMP.dg does not exist

    To fix this, you need to remove the dependency on that resource.
    If the resource ora.CRS_TMP.dg no longer exists, you must use crsctl with the force option (the "-f" flag) to modify the ora.mgmtdb resource.

    [grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'"
    CRS-2510: Resource 'ora.CRS_TMP.dg' used in dependency 'hard' does not exist or is not registered
    CRS-2514: Dependency attribute specification 'hard' is invalid in resource 'ora.mgmtdb'
    CRS-4000: Command Modify failed, or completed with errors.

    Updating the resource ora.mgmtdb by removing the dependency on ora.CRS_TMP.dg.

    
    [grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'" -f
    
    [grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "STOP_DEPENDENCIES='hard(intermediate:ora.MGMTLSNR,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)'" -f
    
    [grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
    
    [grid@node12c02 ~]$ srvctl config mgmtdb
    Database unique name: _mgmtdb
    Database name:
    Oracle home: /u01/app/12.1.0/grid
    Oracle user: grid
    Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    Password file:
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Database instance: -MGMTDB
    Type: Management
    
    [grid@node12c02 ~]$ srvctl start mgmtdb
    
    [grid@node12c02 ~]$ srvctl status mgmtdb
    Database is enabled
    Instance -MGMTDB is running on node node12c02 
    

    Adding an OCR mirror on +MGMTDB.

    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +MGMTDB
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
             Device/File Name         :    +MGMTDB
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    I recommend stopping and starting your cluster by issuing (as root) "crsctl stop cluster -all" and "crsctl start cluster -all".
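
    For example (standard crsctl commands, run as root; the final status check just confirms everything came back online):

    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/crsctl status resource -t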

    If all resources restart successfully, we can stop the diskgroup CRS_TMP.

    [grid@node12c02 ~]$ srvctl stop diskgroup -g CRS_TMP
    
    [grid@node12c02 ~]$ srvctl status diskgroup -g CRS_TMP
    Disk Group CRS_TMP is not running
    

    Check whether all resources are online. If so, you can remove the diskgroup. A diskgroup can only be dropped while it is mounted, so start it on the node where you will issue the DROP:

    [grid@node12c02 ~]$ srvctl start diskgroup -g CRS_TMP -n node12c02 
    
    [grid@node12c02 ~]$ export ORACLE_SID=+ASM1
    [grid@node12c02 ~]$ sqlplus / as sysasm
    
    SQL> DROP DISKGROUP CRS_TMP INCLUDING CONTENTS;
    
    Diskgroup dropped.
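
    Before restarting, you can confirm the diskgroup is really gone from the same SYSASM session (a quick sanity check against v$asm_diskgroup):

    SQL> select name, state from v$asm_diskgroup;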
    

    Then restart the whole clusterware stack again and check that all resources start without issues.


    Oracle Database 12c Architecture – POSTER

    The Oracle Database 12c: Interactive Quick Reference is a multimedia tool for various terms and concepts used in the Oracle Database 12c release. Built as a multimedia web page, this diagram provides descriptions of database architectural components, as well as references to relevant documentation. Use this helpful reference as a cheat sheet for writing custom data dictionary scripts, locating views pertinent to a specific database component or category, understanding the database architecture, and more.

    This essential reference includes:

    • DBA Views: Key DBA static data dictionary views and dynamic performance views organized by product feature areas
    • Performance Views: Dynamic performance views organized by product feature areas. Click on a view and see the details for that view.
    • Architecture Views: A single diagram that illustrates the relationships between key database dictionary views. Click on a view and see the definition for that view.
    • Database Architecture: A single diagram that illustrates the relationships between key database memory structures, processes, and storage. Click on the diagram to find out detailed information.
    • Multitenant Architecture: A single diagram that illustrates the architecture for a multitenant container database. Click on the diagram to find out detailed information.
    • Background Processes: A comprehensive list that categorizes the background processes and flags the ones that are new in Oracle Database 12c

    Download Information

    You can download and unzip this file to run the poster locally. Open the poster.html file located in the OUTPUT_poster folder.

    Additional Information:

    • Please note that initial loading of the poster may take 20-30 seconds. If you experience performance problems, see the Download Information section above.
    • Firefox v22 or later is the recommended browser. Earlier versions may work as well.
    • Browser / device support varies from device to device.
    • We welcome your feedback to help us improve this tool.


    http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/poster.html


    Plugging an Unplugged Pluggable Database

    Purpose

    This tutorial covers the steps required to plug an unplugged pluggable database (PDB) from a container database (CDB) into another CDB.

    Time to Complete

    Approximately 20 minutes

    Introduction

    You can disassociate or unplug a PDB from a CDB and reassociate or plug the PDB into the same CDB or into another CDB. This capability is suitable for the following situations:

      • You have to upgrade a PDB to the latest Oracle release, but you do not want to upgrade all the PDBs in the CDB. Instead of upgrading a CDB from one release to another, you can unplug a PDB from one Oracle Database release, and then plug it into a newly created CDB from a later release.
      • You want to test the performance of the CDB without a particular PDB. You unplug the PDB, test the performance without the PDB and, if necessary, replug the PDB into the CDB.
      • You want to maintain a collection of PDB “gold images” as unplugged PDBs.
    Scenario

    In this tutorial, you perform a PDB unplugging operation from a CDB. Next, you perform a plugging operation of the same PDB into another CDB by using SQL*Plus; a command-level sketch of these steps follows the environment details below.

    Different plugging scenarios are allowed:

        • Plug the unplugged PDB by using the data files of the unplugged PDB. The unplugged PDB is disassociated from the source CDB.
          • The source data files are used with or without any copy.
          • The source data files are used after being moved to another location.
        • Plug the unplugged PDB as a clone to:
          • Allow developers and testers to rapidly and repeatedly provision a well-known starting state
          • Support self-paced learning
          • Provide a new way to deliver a brand-new application
    Prerequisites

    Before starting this tutorial, you should:

    • Install Oracle Database 12c.
    • Create two CDBs with two PDBs in the first CDB.

    The environment used in the development of this tutorial is as follows:

      • ORACLE_HOME: /u01/app/oracle/product/12.1.0
      • TNS Listener port: 1521
      • Container databases:
        • SID: cdb1
        • SID: cdb2
      • Pluggable databases (in cdb1):
        • pdb1
        • pdb2
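
    The tutorial walks through these steps in detail; as a rough outline, the core SQL*Plus statements look like the sketch below. The manifest path /u01/app/oracle/oradata/pdb1.xml is an illustrative assumption, not a value from the tutorial:

    -- In cdb1: close pdb1 and unplug it into an XML manifest
    alter pluggable database pdb1 close immediate;
    alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/pdb1.xml';
    drop pluggable database pdb1 keep datafiles;

    -- In cdb2: verify the manifest is compatible, then plug in reusing the same datafiles
    set serveroutput on
    begin
      if dbms_pdb.check_plug_compatibility(
           pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml',
           pdb_name       => 'pdb1') then
        dbms_output.put_line('Compatible');
      else
        dbms_output.put_line('Not compatible - check PDB_PLUG_IN_VIOLATIONS');
      end if;
    end;
    /
    create pluggable database pdb1 using '/u01/app/oracle/oradata/pdb1.xml' nocopy tempfile reuse;
    alter pluggable database pdb1 open;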

    http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/pdb/pdb_unplug_plug/pdb_unplug_plug.html

