PRCR-1079 CRS-5017 ORA-01017 – DBCA Fails to create a Database in Oracle Restart Environment 12c.

The Oracle Grid Infrastructure software owner (typically, grid) must be a member of the OSDBA group. Membership in the OSDBA group enables access to the files managed by Oracle ASM. If you have a separate OSDBA group for Oracle ASM, then the Oracle Restart software owner must be a member of the OSDBA group for each database and the OSDBA group for Oracle ASM.

http://docs.oracle.com/database/121/LADBI/usr_grps.htm#BABFECII

Issue:

If the grid user is not a member of the dba OS group, DBCA fails at the end of database creation and rolls the creation back (the database is not created).

The errors below will be raised:

srvctl start database -d prdcdb
PRCR-1079 : Failed to start resource ora.prdcdb.db
ORA-01017: invalid username/password; logon denied
CRS-5017: The resource action "ora.prdcdb.db start" encountered the following
error: ORA-01017: invalid username/password; logon denied
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/db-oracle/crs/
trace/ohasd_oraagent_grid.trc".

 

Solution:

To fix the issue, add the user “grid” to the OSDBA group “dba” and relink the RDBMS binaries.

# id grid
uid=205(grid) gid=202(oinstall) groups=208(asmdba),209(asmadmin),210(asmoper)
# usermod -a -G dba grid (linux)
# id grid
uid=205(grid) gid=202(oinstall) groups=203(dba),208(asmdba),209(asmadmin),
210(asmoper)
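
Before retrying DBCA it is worth confirming the membership from a script. The sketch below (a hypothetical helper; the user and group names are the ones from the example above) uses the standard id command:

```shell
# Sketch: check that a user belongs to a group before launching DBCA.
# "grid" and "dba" are the names used in this example environment.
in_group() {
  id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if in_group grid dba; then
  echo "grid is in dba - safe to run DBCA"
else
  echo "add grid to dba first (usermod -a -G dba grid)"
fi
```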


# su - oracle
oracle@db-oracle:/home/oracle> export AIXTHREAD_SCOPE=S
oracle@db-oracle:/home/oracle> export ORACLE_BASE=/u01/app/oracle
oracle@db-oracle:/home/oracle> export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/
dbhome_12102
oracle@db-oracle:/home/oracle> export PATH=$ORACLE_HOME/bin:$PATH

oracle@db-oracle:/home/oracle> relink all
writing relink log to: /u01/app/oracle/product/12.1.0/dbhome_12102/
install/relink.log


If no errors are reported in $ORACLE_HOME/install/relink.log, retry the database creation with DBCA.
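
As a sanity check, the log inspection can be scripted. The helper below is only a sketch: the default log path mirrors the ORACLE_HOME used above, and the error patterns matched are assumptions, not an official list:

```shell
# Sketch: scan a relink log for error markers before retrying DBCA.
# Default path matches the ORACLE_HOME above; pass another path as $1.
relink_clean() {
  log=${1:-/u01/app/oracle/product/12.1.0/dbhome_12102/install/relink.log}
  if [ -f "$log" ] && grep -qiE 'error|fatal' "$log"; then
    echo "relink reported problems - review $log"
    return 1
  fi
  echo "no errors found in $log"
}
```

If the function reports problems, review the log before rerunning DBCA.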


What Patch to Apply? PSU? GI PSU? Proactive Bundle Patch?

For those unfamiliar with Oracle patching, it can be confusing to decide which patch to apply when presented with a table of different patches for the same version.

I will try to clarify some common doubts.

Note: You must provide a valid My Oracle Support login name in order to access below Links.

Patch version numbering changed

In November 2015, the version numbering for new Bundle Patches, Patch Set Updates and Security Patch Updates for Oracle Database changed: the 5th digit of the bundle version now carries the release date in the form “YYMMDD”, where:

  • YY is the last 2 digits of the year
  • MM is the numeric month (2 digits)
  • DD is the numeric day of the month (2 digits)

More detail can be found here: Oracle Database, Enterprise Manager and Middleware – Change to Patch Numbering from Nov 2015 onwards (Doc ID 2061926.1)
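
As a concrete illustration, the date suffix can be decoded with plain shell string handling (the version string 12.1.0.2.160419 is a hypothetical example):

```shell
# Decode the date-based 5th digit of a patch version (YYMMDD scheme above).
# 12.1.0.2.160419 is a hypothetical example version string.
v=12.1.0.2.160419
d=${v##*.}                      # keep only the 5th field -> 160419
yy=$(echo "$d" | cut -c1-2)     # year
mm=$(echo "$d" | cut -c3-4)     # month
dd=$(echo "$d" | cut -c5-6)     # day
echo "released on 20$yy-$mm-$dd"
# prints: released on 2016-04-19
```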

 

Changes on Database Security Patching from 12.1.0.1 onwards

Starting with Oracle Database version 12.1.0.1, Oracle will only provide Patch Set Update (PSU) patches to meet the Critical Patch Update (CPU) program requirements for security patching. SPU (Security Patch Update) patches will no longer be available. Oracle has moved to this simplified model due to the popularity of the PSU patches. PSUs have been Oracle’s preferred proactive patching vehicle since their inception in 2009.

Database Security Patching from 12.1.0.1 onwards (Doc ID 1581950.1)

 

Where to find the latest patches for the Database?

Use the Patch Assistant: Assistant: Download Reference for Oracle Database PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)

 

Which patch to apply: PSU, GI PSU, Proactive Bundle Patch, or Bundle Patch (Windows 32bit & 64bit)?

When using the Patch Assistant, it shows the table below. In this case I searched for the latest patch for 12.1.0.2.

[Image: PSU / Bundle Patch table]

Understanding the Patch Nomenclature:
New Patch Nomenclature for Oracle Products (Doc ID 1430923.1)

Note: As of April 2016, the Database Patch for Engineered Systems and Database In-Memory has been renamed from “Bundle Patch (BP) ” to “Database Proactive Bundle Patch”.

Note: Windows platforms must use the “Bundle Patch (Windows 32bit & 64bit)”.

Database patch content:

  • SPU contains only the CPU program security fixes
  • PSU contains the CPU program security fixes and additional high-impact/low-risk critical bug fixes
  • Proactive Bundle Patch (PBP) includes all PSU fixes along with fixes targeted at the specific Bundle Patch environment.

They are cumulative, so if you have an Oracle Home (12.1.0.2) at the base release (i.e. with no fixes) and apply the latest PSU or PBP, it will include all fixes from the base release up to the current patch version.
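
Because PSUs/PBPs are cumulative, the version with the later YYMMDD suffix contains all fixes of the earlier one. A small sketch (with hypothetical version strings) makes this concrete:

```shell
# Sketch: pick the more recent of two date-suffixed patch versions.
# The version strings are hypothetical examples.
newer_patch() {
  a=${1##*.}; b=${2##*.}          # compare only the YYMMDD suffixes
  if [ "$a" -ge "$b" ]; then echo "$1"; else echo "$2"; fi
}

newer_patch 12.1.0.2.160119 12.1.0.2.160419
# prints: 12.1.0.2.160419
```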

Where to apply each Patch?

  • PSU – Can be applied on Database Servers, Client-Only and Instant Client installations.
  • GI PSU – Can be applied on the GI Home (Oracle Restart or Oracle Clusterware) in conjunction with RAC, RAC One Node, Single Instance home, Client-Only and Instant Client installations.
  • Proactive Bundle Patch – Can be applied on the GI Home in conjunction with RAC, RAC One Node, or Single Instance home, Client-Only and Instant Client installations.

 

An installation can only use one of the SPU, PSU or Proactive Bundle Patch patching methods.

 

How to choose between them?

The “Database Proactive Bundle Patch” requires a bit more testing than a Patch Set Update (PSU) as it delivers a larger set of fixes.

For a new, fresh installation you should apply the Database Proactive Bundle Patch.

The PSU is aimed at environments that are sensitive to change, because it requires less testing.

I have applied the “Database PSU”; how do I move to the “Database Proactive Bundle Patch”?

Moving from “Database PSU” to “Database Proactive Bundle Patch”

  • Back up your current setup
  • Fully roll back / deinstall the “Database PSU”
    • If using the OJVM PSU, that is likely to require the OJVM PSU to be rolled back too
  • Apply / install the latest “Database Proactive Bundle Patch”
  • Apply any interim patches also rolled back above (including the OJVM PSU if that was installed)

Note from Oracle: It is not generally advisable to switch from “Database PSU” to “Database SPU” method.

The note below can clarify any remaining doubts on this topic:
Oracle Database – Overview of Database Patch Delivery Methods (Doc ID 1962125.1)

 

OPLAN Support

GI PSU and Proactive Bundle Patch are supported by OPlan.

OPlan is a utility that facilitates the patch installation process by providing you with step-by-step patching instructions specific to your environment.
In contrast to the traditional patching operation, applying a patch based on the README requires you to understand the target configuration and manually identify the patching commands relevant to your environment. OPlan eliminates the requirement of identifying the patching commands by automatically collecting the configuration information for the target, then generating instructions specific to the target configuration.

Oracle Software Patching with OPLAN (Doc ID 1306814.1)

Useful Notes:

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

Frequently Asked Questions (FAQ): Patching Oracle Database Server (Doc ID 1446582.1)

12.1.0.2 Database Proactive Bundle Patches / Bundle Patches for Engineered Systems and DB In-Memory – List of Fixes in each Bundle (Doc ID 1937782.1)


RHEL/OEL 7 – NTPD replaced by Chrony

Despite configuring NTP during the installation process, NTPD is not installed/configured/running after the installation completes.

On a fresh installation of RHEL/OEL 7, the default NTP client is Chrony.

What is Chrony?
Chrony was introduced as a new NTP client, provided in the chrony package. It does not provide all the features available in the old NTP client (ntp), so ntp is still shipped for compatibility.

During Oracle Grid Infrastructure installation, runInstaller will fail the prerequisite check because NTP is not configured:

NTP not configured. Network Time Protocol (NTP)  - Failed 
PRVF-7590 : "ntpd" is not running on node "xxx"

Oracle recommends using NTPD and disabling Chronyd.

Follow the steps below to install, configure and enable NTPD.

1. Check Chronyd service

# systemctl status chronyd.service
● chronyd.service - NTP client/server
 Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
 Active: active (running) since Mon 2016-05-30 01:15:33 BRT; 5s ago
 Process: 19464 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
 Process: 19456 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 19459 (chronyd)
 CGroup: /system.slice/chronyd.service
 └─19459 /usr/sbin/chronyd

May 30 01:15:33 node1 systemd[1]: Starting NTP client/server...
May 30 01:15:33 node1 chronyd[19459]: chronyd version 2.1.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +DEBUG +ASYNCDNS +IPV6 +SECHASH)
May 30 01:15:33 node1 chronyd[19459]: Frequency -23.731 +/- 0.023 ppm read from /var/lib/chrony/drift
May 30 01:15:33 node1 systemd[1]: Started NTP client/server.

2. Stop and disable Chronyd service

# systemctl stop chronyd.service
# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.

3. Install ntpd package

# yum install ntp -y

4. Add the “-x” option to the OPTIONS line in the “/etc/sysconfig/ntpd” file:

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
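
To make the edit repeatable, the change can be scripted. The helper below is only a sketch: on a real system you would run it as root against /etc/sysconfig/ntpd, and the sed expression assumes the OPTIONS line is double-quoted as shown above:

```shell
# Sketch: idempotently prepend "-x" to the OPTIONS line of an ntpd
# sysconfig file (on a real system: /etc/sysconfig/ntpd, edited as root).
ensure_slew() {
  grep -q '^OPTIONS=.*-x' "$1" ||
    sed -i 's/^OPTIONS="/OPTIONS="-x /' "$1"
}
```

The grep guard makes a second run a no-op, so the option is never duplicated.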

5. Enable and Start NTPD

# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.


# systemctl start ntpd.service

# systemctl status ntpd.service
● ntpd.service - Network Time Service
 Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
 Active: active (running) since Mon 2016-05-30 01:23:09 BRT; 9s ago
 Process: 23048 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 23050 (ntpd)
 CGroup: /system.slice/ntpd.service
 └─23050 /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

May 30 00:27:07 node1 ntpd[2829]: Listen normally on 10 virbr0 192.168.122.1 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 11 lo ::1 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 12 eno16777984 fe80::250:56ff:fe90:21e9 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 13 eno33559296 fe80::250:56ff:fe90:d421 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 14 eno50338560 fe80::250:56ff:fe90:f203 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listening on routing socket on fd #31 for interface updates
May 30 00:27:07 node1 systemd[1]: Started Network Time Service.
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c016 06 restart
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c012 02 freq_set ntpd 0.000 PPM
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c011 01 freq_not_set

Known issues resolved by the procedure above:

# ntpq -p
localhost: timed out, nothing received
***Request timed out
# ntpstat 
Unable to talk to NTP daemon.  Is it running?

ORAchk Health Checks For The Oracle Stack (ORACHK 2.2.4 and above)


Oracle Database 12c (12.1.0.*) and 11g (11.2.0.4) come with a feature called RACcheck, although RACcheck already existed before these releases. (In 2011 I posted about the benefits of RACcheck.)

A brief description of RACcheck:
.....
RACcheck - The Oracle RAC Configuration Audit Tool
RACcheck is designed to audit vital configuration settings for the Oracle Database, single instance databases, as well as Oracle Real Application Clusters (Oracle RAC) databases. It also includes checks for Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM) and Oracle Grid Infrastructure.

RACcheck provides best practices recommendations considering the whole stack, including Maximum Availability Architecture (MAA) configurations, and is therefore the ideal tool for regular health checks as well as pre- and post-upgrade best practices assessments.
.....

Oracle has now renamed RACcheck to ORAchk.

ORAchk- Health Checks for the Oracle Stack

ORAchk replaces the popular RACcheck to support a wider range of products. ORAchk version 2.2.4 is now available for download and includes the following key features:

  • RACcheck renamed to ORAchk
  • ORAchk daemon auto-start mode after node reboot (init integration)
  • Merge multiple ORAchk collection reports
  • Upload of installed patches to database
  • Collection Manager for ORAchk, RACcheck and Exachk (Document 1602329.1)
  • ORAchk signature file in /tmp on all nodes to verify last ORAchk run
  • New checks and bug fixes, including
  • 30 Oracle Ebusiness AP module data integrity checks
  • 12 new Database checks
  • 8 new Solaris system checks
    Supported Platforms

  • Linux x86-64* (Enterprise Linux, RedHat and SuSE 9, SuSE 10 & SuSE 11)
  • Oracle Solaris SPARC (Solaris 10 and 11)
  • Oracle Solaris x86-64 (Solaris 10 and 11)
  • AIX **
  • HPUX**
  • * 32-bit platforms not supported, no planned support for Linux Itanium
    **Requires BASH Shell 3.2 or higher to be installed

    Supported Oracle Releases

  • 10gR2
  • 11gR1
  • 11gR2
  • 12cR1
    When to Run ORAchk

  • After initial Oracle RAC deployment
  • Before planned system maintenance
  • After planned system maintenance
  • At least once every three months
    Install/Configure

    It is recommended to run the tool as the database software owner (e.g. oracle). The user may run the tool as the Grid Infrastructure software owner (e.g. grid) and it will collect the same data but database credentials must manually be supplied to perform the database related audit checks. Typically when run as oracle the customer will have OS authentication set up for the oracle database software owner and the database login credentials will not be needed.

    Download ORAchk

    Stage Location:
    It is recommended that the kit be staged and operated from a local filesystem on a single database server in order to provide the best performance possible.


    $ mkdir -p /u01/app/oracle/orachk

    [oracle@node11g01 install]$ cd /u01/app/oracle/orachk

    [oracle@node11g01 orachk]$ unzip orachk.zip
    Archive: orachk.zip
    inflating: raccheck
    inflating: rules.dat
    inflating: collections.dat
    inflating: readme.txt
    inflating: orachk
    creating: .cgrep/
    inflating: .cgrep/ogghc_12101.sql
    inflating: .cgrep/lcgrep4
    inflating: .cgrep/checkDiskFGMapping.sh
    inflating: .cgrep/ogghc_11204.sql
    inflating: .cgrep/lcgreps9
    inflating: .cgrep/ogghc_11203.sql
    inflating: .cgrep/scgrepx86
    inflating: .cgrep/acgrep
    inflating: .cgrep/oracle-upstarttmpl.conf
    inflating: .cgrep/check_reblance_free_space.sql
    inflating: .cgrep/CollectionManager_App.sql
    inflating: .cgrep/exalogic_zfs_checks.aksh
    inflating: .cgrep/hiacgrep
    inflating: .cgrep/init.tmpl
    inflating: .cgrep/lcgreps10
    inflating: .cgrep/preupgrd.sql
    inflating: .cgrep/diff_collections.pl
    inflating: .cgrep/merge_collections.pl
    inflating: .cgrep/ggdiscovery.sh
    creating: .cgrep/profiles/
    inflating: .cgrep/profiles/DA94919CD0DE0913E04312C0E50A7996.prf
    inflating: .cgrep/profiles/D49C0FBF8FBF4B1AE0431EC0E50A0F24.prf
    extracting: .cgrep/profiles/F13E11974A282AB3E04312C0E50ABCBF.prf
    inflating: .cgrep/profiles/EF6C016813C51366E04313C0E50AE11F.prf
    inflating: .cgrep/profiles/D8367AD6754763FEE04312C0E50A6FCB.prf
    inflating: .cgrep/profiles/DF65D6117CB41054E04312C0E50A69D1.prf
    inflating: .cgrep/profiles/EA5EE324E7E05128E04313C0E50A4B2A.prf
    inflating: .cgrep/profiles/E1BF012E8F210839E04313C0E50A7B68.prf
    inflating: .cgrep/profiles/D462A6F7E9C340FDE0431EC0E50ABE12.prf
    inflating: .cgrep/profiles/D49AD88F8EE75CD8E0431EC0E50A0BC3.prf
    inflating: .cgrep/profiles/E2E972DDE1E14493E04312C0E50A1AB1.prf
    inflating: .cgrep/profiles/F32F44CE0BCD662FE04312C0E50AB058.prf
    inflating: .cgrep/profiles/E8DF76E07DD82E0DE04313C0E50AA55D.prf
    inflating: .cgrep/profiles/D49B218473787400E0431EC0E50A0BB9.prf
    inflating: .cgrep/profiles/D49C0AB26A6D45A8E0431EC0E50ADE06.prf
    inflating: .cgrep/profiles/DFE9C207A8F2428CE04313C0E50A6B0A.prf
    inflating: .cgrep/profiles/D49C4F9F48735396E0431EC0E50A9A0B.prf
    inflating: .cgrep/profiles/D49BDC2EC9E624AEE0431EC0E50A3E12.prf
    inflating: .cgrep/profiles/DF65D0F7FB6F1014E04312C0E50A7808.prf
    inflating: .cgrep/scnhealthcheck.sql
    inflating: .cgrep/pxhcdr.sql
    inflating: .cgrep/lcgrep5
    inflating: .cgrep/scgrep
    inflating: .cgrep/raw_data_browser.pl
    inflating: .cgrep/profiles.dat
    inflating: .cgrep/rack_comparison.py
    inflating: .cgrep/versions.dat
    inflating: .cgrep/create_version.pl
    inflating: .cgrep/lcgreps11
    inflating: .cgrep/utluppkg.sql
    inflating: .cgrep/utlusts.sql
    inflating: .cgrep/reset_crshome.pl
    inflating: .cgrep/asrexacheck
    inflating: .cgrep/lcgrep6
    inflating: .cgrep/utlu112i.sql
    inflating: UserGuide.txt

    Running ORAchk Interactively

    [oracle@node11g01 orachk]$ ./orachk

    CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/11.2.0/grid?[y/n][y]

    Checking ssh user equivalency settings on all nodes in cluster

    Node node11g02 is configured for ssh user equivalency for oracle user

    Searching for running databases . . . . .

    . .
    List of running databases registered in OCR
    1. dborcl
    2. None of above

    Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].1
    . .

    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed ASM HOME RDBMS Installed CRS UP ASM UP RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    node11g01 Yes Yes Yes Yes Yes Yes dborcl_1
    node11g02 Yes Yes Yes Yes Yes Yes dborcl_2
    -------------------------------------------------------------------------------------------------------

    Copying plug-ins

    . . . . . . . . . . . . . . . . . . . . . . . . . . .

    . . . . . .

    17 of the included audit checks require root privileged data collection . If sudo is not configured or the root password is not available, audit checks which require root privileged data collection can be skipped.

    1. Enter 1 if you will enter root password for each host when prompted

    2. Enter 2 if you have sudo configured for oracle user to execute root_orachk.sh script

    3. Enter 3 to skip the root privileged collections

    4. Enter 4 to exit and work with the SA to configure sudo or to arrange for root access and run the tool later.

    Please indicate your selection from one of the above options for root access[1-4][1]:- 2

    *** Checking Best Practice Recommendations (PASS/WARNING/FAIL) ***

    Collections and audit checks log file is
    /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552/log/orachk.log
    .
    .
    .
    .
    ---------------------------------------------------------------------------------
    CLUSTERWIDE CHECKS
    ---------------------------------------------------------------------------------
    ---------------------------------------------------------------------------------

    Detailed report (html) - /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552/orachk_node11g01_dborcl_031814_170552.html

    UPLOAD(if required) - /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552.zip


    Below is the report generated in this test.
    Oracle RAC Assessment Report

    orachk usage options

    [oracle@node11g01 orachk]$ ./orachk -h
    Usage : ./orachk [-abvhpfmsuSo:c:t:]
    -a All (Perform best practice check and recommended patch check)
    -b Best Practice check only. No recommended patch check
    -h Show usage
    -v Show version
    -p Patch check only
    -m exclude checks for Maximum Availability Architecture (MAA) scorecards(see user guide for more details)
    -u Run orachk to check pre-upgrade or post-upgrade best practices for 11.2.0.3,11.2.0.4.0 and 12.1.0.1
    -o pre or -o post is mandatory with -u option like ./orachk -u -o pre
    -f Run Offline.Checks will be performed on data already collected from the system
    -o Argument to an option. if -o is followed by v,V,Verbose,VERBOSE or Verbose, it will print checks which passs on the screen
    if -o option is not specified,it will print only failures on screen. for eg: orachk -a -o v

    -clusternodes
    Pass comma separated node names to run orachk only on subset of nodes.
    -dbnames
    Pass comma separated database names to run orachk only on subset of databases
    -localonly
    Run orachk only on local node.
    -debug
    Run orachk in debug mode. Debug log will be generated.
    eg:- ./orachk -debug
    -nopass
    Skip PASS'ed check to print in orachk report and upload to database.

    -noscore
    Do not print healthscore in HTML report.

    -diff [-outfile ]
    Diff two orachk reports. Pass directory name or zip file or html report file as &

    -exadiff
    Compare two different Exalogic rack and see if both are from the same release.Pass directory name or zip file as & (applicable for Exalogic only)

    -c Used only under the guidance of Oracle support or development to override default components

    -
    initsetup : Setup auto restart. Auto restart functionality automatically brings up orachk daemon when node starts
    initrmsetup : Remove auto restart functionality
    initcheck : Check if auto restart functionality is setup or not
    initpresetup : Sets root user equivalency for COMPUTE, STORAGE and IBSWITCHES.(root equivalency for COMPUTE nodes is mandatory for setting up auto restart functionality)

    -d
    start : Start the orachk daemon
    start_debug : Start the orachk daemon in debug mode
    stop : Stop the orachk daemon
    status : Check if the orachk daemon is running
    info : Print information about running orachk daemon
    stop_client : Stop the orachk daemon client
    nextautorun : print the next auto run time
    -daemon
    run orachk only if daemon is running
    -nodaemon
    Dont use daemon to run orachk
    -set
    configure orachk daemon parameter like "param1=value1;param2=value2... "

    Supported parameters are:-

    AUTORUN_INTERVAL :- Automatic rerun interval in daemon mode.Set it zero to disable automatic rerun which is zero.

    AUTORUN_SCHEDULE * * * * :- Automatic run at specific time in daemon mode.
    - - - -
    | | | |
    | | | +----- day of week (0 - 6) (0 to 6 are Sunday to Saturday)
    | | +---------- month (1 - 12)
    | +--------------- day of month (1 - 31)
    +-------------------- hour (0 - 23)

    example: orachk -set "AUTORUN_SCHEDULE=8,20 * * 2,5" will schedule runs on tuesday and friday at 8 and 20 hour.

    AUTORUN_FLAGS : orachk flags to use for auto runs.

    example: orachk -set "AUTORUN_INTERVAL=12h;AUTORUN_FLAGS=-profile sysadmin" to run sysadmin profile every 12 hours

    orachk -set "AUTORUN_INTERVAL=2d;AUTORUN_FLAGS=-profile dba" to run dba profile once every 2 days.

    NOTIFICATION_EMAIL : Comma separated list of email addresses used for notifications by daemon if mail server is configured.

    PASSWORD_CHECK_INTERVAL : Interval to verify passwords in daemon mode

    COLLECTION_RETENTION : Purge orachk collection directories and zip files older than specified days.

    -unset
    unset the parameter
    example: orachk -unset "AUTORUN_SCHEDULE"

    -get
    Print the value of parameter

    -excludeprofile
    Pass specific profile.
    List of supported profiles is same as for -profile.

    -merge
    Pass comma separated collection names(directory or zip files) to merge collections and prepare single report.
    eg:- ./orachk -merge orachk_hostname1_db1_120213_163405.zip,orachk_hostname2_db2_120213_164826.zip

    -vmguest
    Pass comma separated filenames containing exalogic guest VM list(applicable for Exalogic only)

    -hybrid [-phy]
    phy :Pass comma separated physical compute nodes(applicable for Exalogic only)
    eg:- ./orachk -hybrid -phy phy_node1,phy_node2

    -profile Pass specific profile.
    List of supported profiles:

    asm asm Checks
    clusterware Oracle clusterware checks
    compute_node Compute Node checks (Exalogic only)
    control_VM Checks only for Control VM(ec1-vm, ovmm, db, pc1, pc2). No cross node checks
    dba dba Checks
    ebs Oracle E-Business Suite checks
    el_extensive Extensive EL checks
    el_lite Exalogic-Lite Checks(Exalogic Only)
    el_rackcompare Data Collection for Exalogic Rack Comparison Tool(Exalogic Only)
    goldengate Oracle GoldenGate checks
    maa Maximum Availability Architecture Checks
    obiee obiee Checks(Exalytics Only)
    storage Oracle Storage Server Checks
    switch Infiniband switch checks
    sysadmin sysadmin checks
    timesten timesten Checks(Exalytics Only)
    virtual_infra OVS, Control VM, NTP-related and stale VNICs check (Exalogic Only)
    zfs ZFS storage appliances checks (Exalogic Only)

    -cells
    Pass comma separated storage server names to run orachk only on selected storage servers.

    -ibswitches
    Pass comma separated infiniband switch names to run orachk only on selected infiniband switches.

    -zfsnodes
    Pass comma separated ZFS storage appliance names to run orachk only on selected storage appliances.

    ORAchk Other Useful Options Not Covered Here

  • Using ORAchk Silently
  • ORAchk can optionally be run in “silent” or “non-interactive” mode to enable scheduling and automation.
    This is required only if you do not want to use the ORAchk daemon functionality.

  • Using ORAchk Daemon Mode Operation
  • This functionality permits non-interactive (batch or silent mode) execution at a regular interval.

    When running ORAchk in daemon mode, the most recent and next most recent (if any) collection reports are automatically compared. If the mail address is configured a summary will be emailed along with attachments for the reports and the comparison report.

  • Report Comparisons with ORAchk
  • ORAchk can perform report comparisons between two ORAchk reports.
    This allows trending of Success Factor and Best Practice changes over time, after planned maintenance, etc., within a user-friendly HTML report.

  • ORAchk in Upgrade Readiness Mode
  • ORAchk can be used to obtain an automated 11.2.0.3 (or above) Upgrade Readiness Assessment.
    The goal of the ORAchk Upgrade Readiness Assessment is to make upgrade planning for Oracle RAC and Oracle Clusterware target versions 11.2.0.3 and above as smooth as possible by automating many of the
    manual pre- and post-checks detailed in various upgrade-related documents.

    Refer MoS Notes for more details:
    ORAchk - Oracle Configuration Audit Tool (Doc ID 1268927.2)

    ORAchk Users Guide
    For detailed instructions on how to run ORAchk, including troubleshooting steps, available options, etc.

    Enjoy!!!


    Basic Concept About Oracle Flex Cluster


    The documentation is not very clear about the concept and purpose of Oracle Flex Cluster.
    Because of that, I recommend reading the Oracle white paper below before starting with Oracle Clusterware 12.1.0.1.
    Some books (about Clusterware 12c) make mistakes about this new architecture.

    Read the White Paper on link below:
    Oracle Link
    WordPress Link

    Hope this helps


    How to move/migrate OCR, Voting Disk, ASM (SPFILE and PWDFILE) and MGMTDB to a different Diskgroup on ASM 12.1.0.1

    _______________________________________
    Last Update (29/09/2014):
    Oracle released the note: How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)

    The procedure in the note (Doc ID 1589394.1) does not "MOVE" the Management Repository; it RECREATES it.
    The procedure below in this post MOVES it (migrating all data) from one diskgroup to another.

    _______________________________________

    This post is only a recommendation about how to configure Clusterware Files and MGMTDB on ASM using Grid Infrastructure 12.1.0.1.

    As of 25/09/2013, migrating MGMTDB to a different diskgroup is not documented. Oracle will release a note for this procedure soon.

    When we install/configure a fresh Oracle GI for a cluster, the cluster files and MGMTDB are stored in the first diskgroup created during installation.

    To ease management and increase cluster availability, it is recommended to set up a small diskgroup to store the cluster files.

    ASM Spfile stuck on Diskgroup : ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]

    The cluster files are:

    Oracle Cluster Registry (OCR)

    The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. The OCR also manages information about Oracle Clusterware resource profiles for customized applications.

    The OCR is stored in a diskgroup similarly to how database files are stored. The extents are spread across all disks of the diskgroup, and the redundancy (which is at the extent level) is based on the redundancy of the disk group. We can store only one OCR per diskgroup.

    Voting Disk

    Voting Disk aka Voting files: Is a file that manages information about node membership.

    The voting disks are placed directly on ASM disks/LUNs. Oracle Clusterware stores the voting files on disks within the disk group that holds them, but it does not rely on ASM to access them; this means Oracle Clusterware does not need the diskgroup to be mounted in order to read and write the ASM disk.
    You cannot find/list voting files using SQL*Plus (v$asm_diskgroup, v$asm_files, v$asm_alias), the ASMCMD utility, or the ASMCA GUI.
    You can only tell that a voting file exists on an ASM disk via v$asm_disk (column VOTING_FILE). So although voting files do not depend on the diskgroup to be accessed, this does not mean we do not need the diskgroup: the diskgroup and the voting files are linked by their settings and redundancy.

    We cannot place voting disks in multiple diskgroups; only one diskgroup is supported for storing voting disks.

    ASM Spfile

    This file is the parameter file of the ASM instance.

    The SPFILE is stored in the diskgroup in the same way as the OCR and database files.

    ASM Password File

    A new feature of ASM 12.1 allows storing a database or ASM password file in an ASM diskgroup.

    An individual password file for Oracle Database or Oracle ASM can reside on a designated Oracle ASM disk group. Having the password files reside on a single location accessible across the cluster reduces maintenance costs and situations where passwords become out of sync.

    The COMPATIBLE.ASM disk group attribute must be set to at least 12.1 for the disk group where the password file is to be located.

    Database MGMTDB

    The Management Repository is a new feature of GI 12.1.

    Management Repository is a single instance database that's managed by Oracle Clusterware in 12c. If the option is selected during GI installation, the database will be configured and managed by GI. As it's a single instance database, it will be up and running on one node in the cluster; as it's managed by GI, in case the hosting node is down, the database will be automatically failed over to other node.

    Management database by default uses the same shared storage as OCR/Voting File.

    There is no procedure in documentation 12.1 or Metalink Notes to change location of MGMTDB. (25/09/2013)

    My recommendation:

    Create two disk groups:

    +OCR_VOTE

    Even if you are using RAID at the storage layer, create a small disk group with three ASM disks using normal redundancy. This disk group will store the voting disks, the OCR, the ASM SPFILE, and the ASM password file.

    The reason to use normal redundancy for this disk group is to preserve availability if one voting disk or one ASM disk is lost through human error, storage failure, or host (controller) failure.

    +MGMTDB

    This disk group can be created with external redundancy if you are using RAID at the storage layer.

    This disk group will store MGMTDB and an OCR mirror.

    Important: to perform the changes below, all nodes in the cluster must be active.

    Let's Play:

    Check that all nodes are active.

    	
    [grid@node12c02 ~]$ olsnodes -s
    node12c01        Active
    node12c02        Active
    

    Use ocrcheck to find where the OCR is stored.

    	
    [grid@node12c02 ~]$ ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1548
             Available space (kbytes) :     408020
             ID                       : 1037914462
             Device/File Name         :   +CRS_TMP
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check bypassed due to non-privileged user
    

    Use crsctl to find where the voting disk is stored.

    	
    [grid@node12c02 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   0d2d3b5d705a4fefbf8824c99251ac5a (/dev/oracleasm/disks/OCR_TEMP) [CRS_TMP]
    Located 1 voting disk(s).
    

    Use asmcmd to find where the ASM SPFILE and the ASM password file are stored.

    	
    [grid@node12c02 ~]$ asmcmd spget
    +CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607
    
    [grid@node12c02 ~]$ asmcmd pwget --asm
    +CRS_TMP/crs12c/orapwASM
    

    Getting information about the voting disks on ASM.

    Connect to the ASM instance as SYSDBA and run:

    SQL>
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A20
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
    FROM V$ASM_DISKGROUP
    WHERE NAME='CRS_TMP');

    			 
    NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    ------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
    CRS_TMP_0000             /dev/oracleasm/disks/OCR_TEMP  MEMBER               CRS_TMP_0000         REGULAR              Y
    

    Get the name of the cluster.

    	
    [grid@node12c02 ~]$ olsnodes -c
    crs12c
    

    Now list the cluster files (ASM SPFILE, password file, and OCR) stored in ASM:

    set linesize 100
    col FILES_OF_CLUSTER for a60
    select concat('+'||gname, sys_connect_by_path(aname, '/')) FILES_OF_CLUSTER
    from ( select b.name gname, a.parent_index pindex, a.name aname,
    a.reference_index rindex , a.system_created, a.alias_directory,
    c.type file_type
    from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
    where a.group_number = b.group_number
    and a.group_number = c.group_number(+)
    and a.file_number = c.file_number(+)
    and a.file_incarnation = c.incarnation(+)
    ) WHERE file_type in ( 'ASMPARAMETERFILE','PASSWORD','OCRFILE')
    start with (mod(pindex, power(2, 24))) = 0
    and rindex in
    ( select a.reference_index
    from v$asm_alias a, v$asm_diskgroup b
    where a.group_number = b.group_number
    and (mod(a.parent_index, power(2, 24))) = 0
    and a.name = LOWER('&CLUSTERNAME')
    )
    connect by prior rindex = pindex;

    	
    Enter value for clustername: crs12c
    old  17:                         and a.name = LOWER('&CLUSTERNAME')
    new  17:                         and a.name = LOWER('crs12c')
    
    FILES_OF_CLUSTER
    ------------------------------------------------------------
    +CRS_TMP/crs12c/ASMPARAMETERFILE/REGISTRY.253.826898607
    +CRS_TMP/crs12c/OCRFILE/REGISTRY.255.826898619
    +CRS_TMP/crs12c/orapwasm
    

    Creating the disk group OCR_VOTE

    When creating the disk group OCR_VOTE, each disk must be placed in a different failgroup. I do not use a QUORUM failgroup here because these LUNs are on the storage array. Use a QUORUM failgroup when you place a disk outside your environment (e.g., an NFS file-backed disk used for quorum purposes), because such disks cannot contain user data.

    List the candidate ASM disks:
    SQL>
    col path for a30
    col name for a20
    col header_status for a20
    select path,name,header_status from v$asm_disk
    where path like '/dev/asm%';

    	 
     PATH                           NAME                 HEADER_STATUS
    ------------------------------ -------------------- --------------------
    /dev/asm-ocr_vote3                                  CANDIDATE
    /dev/asm-ocr_vote1                                  CANDIDATE
    /dev/asm-ocr_vote2                                  CANDIDATE          
    
    	
    SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
         FAILGROUP controller01 DISK '/dev/asm-ocr_vote1'
         FAILGROUP controller02 DISK '/dev/asm-ocr_vote2'
         FAILGROUP controller03 DISK '/dev/asm-ocr_vote3'
         ATTRIBUTE
         'au_size' = '1M',
         'compatible.asm' = '12.1';
    
    Diskgroup created.
    

    Mount the disk group OCR_VOTE on the other nodes

    	
    SQL>  ! srvctl start diskgroup -g ocr_vote -n node12c01
    

    Check the status of the disk group OCR_VOTE on all nodes.
    The disk group OCR_VOTE must be mounted on all hub nodes.

    	
    SQL> ! srvctl status diskgroup -g ocr_vote
    Disk Group ocr_vote is running on node12c01,node12c02
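    Alternatively, you can check the mount state from the ASM instance itself (a sketch; run as SYSDBA):

    ```sql
    -- one row per instance; STATE should be MOUNTED on every hub node
    SELECT inst_id, name, state
    FROM   gv$asm_diskgroup
    WHERE  name = 'OCR_VOTE';
    ```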
    

    Moving the voting disks to disk group OCR_VOTE

    As the grid user, issue:

    	
    [grid@node12c01 ~]$  crsctl replace votedisk +OCR_VOTE
    Successful addition of voting disk be10b93cca734f65bfa6b89cc574d485.
    Successful addition of voting disk 46a9b6154c184f89bf6a503846ea3aa3.
    Successful addition of voting disk 6bc41f75098d4f5dbf97783770760445.
    Successful deletion of voting disk 0d2d3b5d705a4fefbf8824c99251ac5a.
    Successfully replaced voting disk group with +OCR_VOTE.
    CRS-4266: Voting file(s) successfully replaced
    

    Query the CSS voting disks on all nodes

    	
    [grid@node12c01 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
     2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
     3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
    Located 3 voting disk(s).
    
    [grid@node12c02 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   be10b93cca734f65bfa6b89cc574d485 (/dev/asm-ocr_vote1) [OCR_VOTE]
     2. ONLINE   46a9b6154c184f89bf6a503846ea3aa3 (/dev/asm-ocr_vote2) [OCR_VOTE]
     3. ONLINE   6bc41f75098d4f5dbf97783770760445 (/dev/asm-ocr_vote3) [OCR_VOTE]
    Located 3 voting disk(s).
    

    Query the ASM disks on the ASM instance
    SQL >
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A10
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
    FROM V$ASM_DISKGROUP
    WHERE NAME='OCR_VOTE');

    	
    NAME                     PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    ------------------------ ------------------------------ -------------------- -------------------- -------------------- --------------------
    OCR_VOTE_0002            /dev/asm-ocr_vote3             MEMBER               CONTROLLER03         REGULAR              Y
    OCR_VOTE_0001            /dev/asm-ocr_vote2             MEMBER               CONTROLLER02         REGULAR              Y
    OCR_VOTE_0000            /dev/asm-ocr_vote1             MEMBER               CONTROLLER01         REGULAR              Y
    

    The voting disks were moved successfully.

    Moving the OCR to disk group OCR_VOTE

    As the root user, issue:

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :   +CRS_TMP
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    Add a new OCR location on disk group OCR_VOTE:

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +OCR_VOTE	
    

    Now we should see two OCR locations, one in each disk group:

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :   +CRS_TMP
                                        Device/File integrity check succeeded
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    Delete the OCR location on disk group CRS_TMP:

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -delete +CRS_TMP
    

    Now we should see only one OCR location, on disk group OCR_VOTE:

    	
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    Check with ocrcheck on the other nodes.

    	
    [root@node12c01 ~]#  /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    The OCR was moved successfully.

    Moving the ASM SPFILE
    You will get an error saying the file is still being accessed; nevertheless, the file is copied to the target location and the profile is updated.

    	
    [grid@node12c01 ~]$ asmcmd spmove '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' '+OCR_VOTE/crs12c/spfileASM.ora'
    ORA-15032: not all alterations performed
    ORA-15028: ASM file '+CRS_TMP/crs12c/ASMPARAMETERFILE/registry.253.826898607' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
    

    Check that the file was copied and the profile updated:

    	
    [grid@node12c01 ~]$ asmcmd spget
    +OCR_VOTE/crs12c/spfileASM.ora
    

    Moving ASM Password File

    	
    [grid@node12c01 ~]$ asmcmd pwmove --asm +CRS_TMP/orapwASM +OCR_VOTE/crs12c/orapwASM
    moving +CRS_TMP/orapwASM -> +OCR_VOTE/crs12c/orapwASM
    

    Check that the file was moved and the profile updated:

    	
    [grid@node12c01 ~]$ asmcmd pwget --asm
    +OCR_VOTE/crs12c/orapwASM
    
    [grid@node12c01 ~]$ srvctl config asm
    ASM home: /u01/app/12.1.0/grid
    Password file: +OCR_VOTE/crs12c/orapwASM
    ASM listener: LISTENER
    

    After these changes, restart the clusterware so that ASM releases the old SPFILE.
    You can restart one node at a time if you wish.

    You DON'T need to stop the whole cluster to release the ASM SPFILE.
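    For a rolling restart, run the following on each node in turn as root (a sketch; the Grid home path is the one used in this environment):

    ```shell
    # stop and restart the clusterware stack on the local node only
    /u01/app/12.1.0/grid/bin/crsctl stop crs
    /u01/app/12.1.0/grid/bin/crsctl start crs
    ```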

    As my cluster is a test cluster I will stop cluster on all nodes.

    	
    [root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
    CRS-2673: Attempting to stop 'ora.crsd' on 'node12c01'
    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c01'
    CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c01'
    CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node12c01'
    CRS-2677: Stop of 'ora.scan1.vip' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.crsd' on 'node12c02'
    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node12c02'
    CRS-2673: Attempting to stop 'ora.cvu' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.proxy_advm' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.oc4j' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.node12c02.vip' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.mgmtdb' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node12c02'
    CRS-2677: Stop of 'ora.cvu' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node12c02'
    CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.node12c02.vip' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.node12c01.vip' on 'node12c02'
    CRS-2677: Stop of 'ora.scan2.vip' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.node12c01.vip' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.mgmtdb' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'node12c02'
    CRS-2677: Stop of 'ora.MGMTLSNR' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.proxy_advm' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.oc4j' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.proxy_advm' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c01'
    CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
    CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01'
    CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.CRS_TMP.dg' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.ons' on 'node12c01'
    CRS-2677: Stop of 'ora.ons' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c01'
    CRS-2677: Stop of 'ora.net1.network' on 'node12c01' succeeded
    CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c01' has completed
    CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.CRS_TMP.dg' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
    CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02'
    CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.ons' on 'node12c02'
    CRS-2677: Stop of 'ora.ons' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.net1.network' on 'node12c02'
    CRS-2677: Stop of 'ora.net1.network' on 'node12c02' succeeded
    CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node12c02' has completed
    CRS-2677: Stop of 'ora.crsd' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.evmd' on 'node12c02'
    CRS-2673: Attempting to stop 'ora.storage' on 'node12c02'
    CRS-2677: Stop of 'ora.storage' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c02'
    CRS-2677: Stop of 'ora.crsd' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.ctssd' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.evmd' on 'node12c01'
    CRS-2673: Attempting to stop 'ora.storage' on 'node12c01'
    CRS-2677: Stop of 'ora.storage' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'node12c01'
    CRS-2677: Stop of 'ora.evmd' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.evmd' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'node12c02' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.asm' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c01'
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'node12c01'
    CRS-2677: Stop of 'ora.cssd' on 'node12c01' succeeded
    CRS-2677: Stop of 'ora.asm' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node12c02'
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'node12c02'
    CRS-2677: Stop of 'ora.cssd' on 'node12c02' succeeded
    

    Start the cluster on all nodes

    	
    [root@node12c01 ~]# /u01/app/12.1.0/grid/bin/crsctl start cluster -all
    CRS-2672: Attempting to start 'ora.evmd' on 'node12c01'
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c01'
    CRS-2672: Attempting to start 'ora.evmd' on 'node12c02'
    CRS-2676: Start of 'ora.cssdmonitor' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'node12c01'
    CRS-2672: Attempting to start 'ora.diskmon' on 'node12c01'
    CRS-2676: Start of 'ora.diskmon' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node12c02'
    CRS-2676: Start of 'ora.evmd' on 'node12c01' succeeded
    CRS-2676: Start of 'ora.cssdmonitor' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'node12c02'
    CRS-2672: Attempting to start 'ora.diskmon' on 'node12c02'
    CRS-2676: Start of 'ora.evmd' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.diskmon' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.cssd' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'node12c02'
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c02'
    CRS-2676: Start of 'ora.cssd' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'node12c01'
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node12c01'
    CRS-2676: Start of 'ora.ctssd' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.ctssd' on 'node12c01' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'node12c02'
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'node12c01'
    CRS-2676: Start of 'ora.asm' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.storage' on 'node12c01'
    CRS-2676: Start of 'ora.asm' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.storage' on 'node12c02'
    CRS-2676: Start of 'ora.storage' on 'node12c02' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'node12c02'
    CRS-2676: Start of 'ora.crsd' on 'node12c02' succeeded
    CRS-2676: Start of 'ora.storage' on 'node12c01' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'node12c01'
    CRS-2676: Start of 'ora.crsd' on 'node12c01' succeeded
    

    Migrating MGMTDB to another disk group

    MGMTDB is an ordinary Oracle database (dedicated to Cluster Health Monitor), so we can perform administrative tasks on it just as on any other Oracle database.
    Therefore, to migrate MGMTDB to another disk group I will use RMAN.

    Creating the disk group MGMTDB

    	
    SQL> CREATE DISKGROUP MGMTDB EXTERNAL REDUNDANCY
         DISK '/dev/asm-mgmtdb1'
         ATTRIBUTE
         'au_size' = '1M',
         'compatible.asm' = '12.1',
         'compatible.rdbms' = '12.1';
    			 
    Diskgroup created.
    

    Mount the disk group MGMTDB on the other node and check its status

    	
    SQL> ! srvctl start diskgroup -g MGMTDB -n node12c02
    
    SQL> ! srvctl status diskgroup -g MGMTDB
    Disk Group MGMTDB is running on node12c01,node12c02
    

    To manage MGMTDB you must use the srvctl utility:
    srvctl start/stop/config/modify ... mgmtdb

    Get the current configuration of MGMTDB:

    	
    
    [grid@node12c01 ~]$ srvctl config mgmtdb
    Database unique name: _mgmtdb
    Database name:
    Oracle home: /u01/app/12.1.0/grid
    Oracle user: grid
    Spfile: +CRS_TMP/_mgmtdb/spfile-MGMTDB.ora
    Password file:
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Database instance: -MGMTDB
    Type: Management
    

    Check where MGMTDB is running.

    	
    [grid@node12c01 ~]$ srvctl status mgmtdb
    Database is enabled
    Instance -MGMTDB is running on node node12c02
    

    You must perform the migration from the host (node) where MGMTDB is running.

    Let's start.
    Stop MGMTDB and the management listener:

    	
    [grid@node12c02 ~]$ srvctl stop mgmtdb
    
    [grid@node12c02 ~]$ srvctl stop mgmtlsnr
    
    

    Precaution: I will take a full backup of the MGMTDB database before starting the migration.

    	
    [grid@node12c02 ~]$ mkdir -p /u01/app/grid/backup_mgmtdb
    
    [grid@node12c02 ~]$ export ORACLE_SID=-MGMTDB
    [grid@node12c02 ~]$ rman target /
    
    Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:30:56 2013
    
    Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database (not started)
    
    RMAN> startup mount
    
    connected to target database (not started)
    Oracle instance started
    database mounted
    
    Total System Global Area     521936896 bytes
    
    Fixed Size                     2290264 bytes
    Variable Size                289410472 bytes
    Database Buffers             222298112 bytes
    Redo Buffers                   7938048 bytes
    
    
    RMAN>  backup database format '/u01/app/grid/backup_mgmtdb/rman_mgmtdb_%U' tag='bk_db_move_dg';
    
    Starting backup at 24-SEP-13
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=6 device type=DISK
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
    input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
    input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
    input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
    input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
    channel ORA_DISK_1: starting piece 1 at 24-SEP-13
    channel ORA_DISK_1: finished piece 1 at 24-SEP-13
    piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_03okm63j_1_1 tag=BK_DB_MOVE_DG comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 24-SEP-13
    channel ORA_DISK_1: finished piece 1 at 24-SEP-13
    piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_04okm650_1_1 tag=BK_DB_MOVE_DG comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-SEP-13
    

    Migrating the SPFILE to disk group +MGMTDB.
    RMAN will restore the SPFILE and update the OCR with the new SPFILE location.
    The database _MGMTDB MUST be mounted or open.

    	
    
    RMAN> restore spfile to "+MGMTDB";
    
    Starting restore at 24-SEP-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=355 device type=DISK
    
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: restoring SPFILE
    output file name=+MGMTDB
    channel ORA_DISK_1: reading from backup piece /u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1
    channel ORA_DISK_1: piece handle=/u01/app/grid/backup_mgmtdb/rman_mgmtdb_02oltbpe_1_1 tag=BK_DB_MOVE_DG
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
    Finished restore at 24-SEP-13
    
    [grid@node12c01 ~]$ srvctl config mgmtdb |grep Spfile
    Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    
    RMAN> shutdown immediate;
    database dismounted
    Oracle instance shut down
    

    Migrating the controlfile to disk group MGMTDB
    Note: before making this change, verify which SPFILE you are updating.

    	
    
    [grid@node12c02 ~]$ sqlplus / as sysdba
    SQL> startup nomount;
    ORACLE instance started.
    
    Total System Global Area  521936896 bytes
    Fixed Size                  2290264 bytes
    Variable Size             285216168 bytes
    Database Buffers          226492416 bytes
    Redo Buffers                7938048 bytes
    
    SQL> show parameter spfile
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    spfile                               string      +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    
    
    SQL> show parameter control_file
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    control_files                        string      +CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953
    
    SQL> alter system set control_files='+MGMTDB' scope=spfile ;
    
    System altered.
    
    SQL> shutdown immediate;
    ORA-01507: database not mounted
    
    
    ORACLE instance shut down.
    SQL> exit
    [grid@node12c02 ~]$ rman target /
    
    Recovery Manager: Release 12.1.0.1.0 - Production on Tue Sep 24 19:45:37 2013
    
    Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database (not started)
    
    RMAN> startup nomount
    
    Oracle instance started
    
    Total System Global Area     521936896 bytes
    
    Fixed Size                     2290264 bytes
    Variable Size                289410472 bytes
    Database Buffers             222298112 bytes
    Redo Buffers                   7938048 bytes
    
    RMAN> RESTORE CONTROLFILE FROM '+CRS_TMP/_MGMTDB/CONTROLFILE/current.262.826899953';
    
    Starting restore at 24-SEP-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=124 device type=DISK
    
    channel ORA_DISK_1: copied control file copy
    output file name=+MGMTDB/_MGMTDB/CONTROLFILE/current.263.827005571
    Finished restore at 24-SEP-13
    
    RMAN> startup mount
    
    database is already started
    database mounted
    released channel: ORA_DISK_1
    
    

    The controlfile was moved successfully.

    Migrating the datafiles to disk group MGMTDB

    	
    RMAN>  BACKUP AS COPY DEVICE TYPE DISK DATABASE FORMAT '+MGMTDB';
    
    Starting backup at 24-SEP-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00005 name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
    output file name=+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237 tag=TAG20130924T194034 RECID=1 STAMP=827005278
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00001 name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
    output file name=+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281 tag=TAG20130924T194034 RECID=2 STAMP=827005291
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00002 name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
    output file name=+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295 tag=TAG20130924T194034 RECID=3 STAMP=827005303
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00004 name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
    output file name=+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311 tag=TAG20130924T194034 RECID=4 STAMP=827005312
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
    channel ORA_DISK_1: starting datafile copy
    input datafile file number=00003 name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
    output file name=+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313 tag=TAG20130924T194034 RECID=5 STAMP=827005316
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
    channel ORA_DISK_1: starting datafile copy
    copying current control file
    output file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 tag=TAG20130924T194034 RECID=6 STAMP=827005317
    channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 24-SEP-13
    channel ORA_DISK_1: finished piece 1 at 24-SEP-13
    piece handle=+MGMTDB/_MGMTDB/BACKUPSET/2013_09_24/nnsnf0_tag20130924t194034_0.262.827005319 tag=TAG20130924T194034 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-SEP-13
    
    RMAN> SWITCH DATABASE TO COPY;
    
    datafile 1 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/system.257.827005281"
    datafile 2 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295"
    datafile 3 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313"
    datafile 4 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311"
    datafile 5 switched to datafile copy "+MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237"
    
    

    The datafiles were migrated successfully.
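The two RMAN steps above (an image-copy backup into the target diskgroup, then switching the database to those copies) can be kept in a small driver script. Here is a sketch that only prints the commands so they can be reviewed first; to execute them, pipe the output to "rman target /" (OS authentication, as used throughout this example):

```shell
#!/bin/sh
# Print the RMAN commands used above to move the datafiles into +MGMTDB.
# Intentionally printed, not executed: review, then pipe to "rman target /".
cat <<'EOF'
BACKUP AS COPY DEVICE TYPE DISK DATABASE FORMAT '+MGMTDB';
SWITCH DATABASE TO COPY;
EOF
```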

    Checking Datafiles

    
    RMAN> report schema;
    
    Report of database schema for database with db_unique_name _MGMTDB
    
    List of Permanent Datafiles
    ===========================
    File Size(MB) Tablespace           RB segs Datafile Name
    ---- -------- -------------------- ------- ------------------------
    1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
    2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
    3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
    4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
    5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237
    
    List of Temporary Files
    =======================
    File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
    ---- -------- -------------------- ----------- --------------------
    1    20       TEMP                 32767       +CRS_TMP/_MGMTDB/TEMPFILE/temp.266.826899969
    

    Move Tempfile to Diskgroup MGMTDB.

    RMAN> run {
    SET NEWNAME FOR TEMPFILE 1 TO '+MGMTDB';
    SWITCH TEMPFILE ALL;
     }
    
    executing command: SET NEWNAME
    
    renamed tempfile 1 to +MGMTDB in control file
    

    At database startup, the TEMP file will be recreated on the new diskgroup.

    Starting Database and checking.

    RMAN> startup
    
    database is already started
    database opened
    
    RMAN> report schema;
    
    using target database control file instead of recovery catalog
    Report of database schema for database with db_unique_name _MGMTDB
    
    List of Permanent Datafiles
    ===========================
    File Size(MB) Tablespace           RB segs Datafile Name
    ---- -------- -------------------- ------- ------------------------
    1    600      SYSTEM               ***     +MGMTDB/_MGMTDB/DATAFILE/system.257.827005281
    2    400      SYSAUX               ***     +MGMTDB/_MGMTDB/DATAFILE/sysaux.258.827005295
    3    50       UNDOTBS1             ***     +MGMTDB/_MGMTDB/DATAFILE/undotbs1.260.827005313
    4    100      SYSGRIDHOMEDATA      ***     +MGMTDB/_MGMTDB/DATAFILE/sysgridhomedata.259.827005311
    5    2048     SYSMGMTDATA          ***     +MGMTDB/_MGMTDB/DATAFILE/sysmgmtdata.256.827005237
    
    List of Temporary Files
    =======================
    File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
    ---- -------- -------------------- ----------- --------------------
    1    20       TEMP                 32767       +MGMTDB/_MGMTDB/TEMPFILE/temp.261.827005913
    
    RMAN> exit
    Recovery Manager complete.
    

    Delete the old datafile copies on the old diskgroup CRS_TMP.

    RMAN> delete copy ;
    
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=124 device type=DISK
    specification does not match any archived log in the repository
    List of Datafile Copies
    =======================
    
    Key     File S Completion Time Ckp SCN    Ckp Time
    ------- ---- - --------------- ---------- ---------------
    7       1    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845
    
    8       2    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821
    
    9       3    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817
    
    10      4    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917
    
    11      5    A 24-SEP-13       798766     24-SEP-13
            Name: +CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881
    
    List of Control File Copies
    ===========================
    
    Key     S Completion Time Ckp SCN    Ckp Time
    ------- - --------------- ---------- ---------------
    6       A 24-SEP-13       798766     24-SEP-13
            Name: +MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317
            Tag: TAG20130924T194034
    
    
    Do you really want to delete the above objects (enter YES or NO)? yes
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/system.259.826899845 RECID=7 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysaux.258.826899821 RECID=8 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/undotbs1.257.826899817 RECID=9 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysgridhomedata.261.826899917 RECID=10 STAMP=827005353
    deleted datafile copy
    datafile copy file name=+CRS_TMP/_MGMTDB/DATAFILE/sysmgmtdata.260.826899881 RECID=11 STAMP=827005353
    deleted control file copy
    control file copy file name=+MGMTDB/_MGMTDB/CONTROLFILE/backup.261.827005317 RECID=6 STAMP=827005317
    Deleted 6 objects
    

    Migrating Redo Logs to Diskgroup MGMTDB
    Note: the database must be OPEN.

    [grid@node12c02 ~]$ sqlplus / as sysdba
    SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    
        GROUP# MEMBER                                             STATUS
    ---------- -------------------------------------------------- ----------------
             1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   INACTIVE
             2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   CURRENT
             3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   INACTIVE
    

    Adding New Log Members

    SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 1;
    Database altered.
    
    SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 2;
    Database altered.
     
    SQL>  ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP 3;
    Database altered.
    
    SQL> alter system switch logfile;
    
    System altered.
    
    SQL> /
    
    System altered.
    
    SQL> /
    
    System altered.
    
    SQL> /
    System altered.
    
    SQL> /
    System altered.
    
    SQL> alter system checkpoint;
    
    

    Removing Log Member from CRS_TMP

    SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    
        GROUP# MEMBER                                             STATUS
    ---------- -------------------------------------------------- ----------------
             1 +CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955   ACTIVE
             1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    ACTIVE
             2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    ACTIVE
             2 +CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955   ACTIVE
             3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    CURRENT
             3 +CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957   CURRENT
    
    6 rows selected.
    
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_1.263.826899955';
    Database altered.
     
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_2.264.826899955';
    Database altered.
     
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
    ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
    *
    ERROR at line 1:
    ORA-01609: log 3 is the current log for thread 1 - cannot drop members
    ORA-00312: online log 3 thread 1:
    '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957'
    ORA-00312: online log 3 thread 1:
    '+MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479'
    
    SQL> alter system switch logfile;
    
    System altered.
    
    SQL> alter system checkpoint;
    
    System altered.
    
    SQL> ALTER DATABASE DROP LOGFILE MEMBER '+CRS_TMP/_MGMTDB/ONLINELOG/group_3.265.826899957';
    
    Database altered.
    
    SQL> select lf.group#,lf.member,lg.status from v$logfile lf, v$log lg where  lf.GROUP#=lg.GROUP#  order by 1;
    
        GROUP# MEMBER                                             STATUS
    ---------- -------------------------------------------------- ----------------
             1 +MGMTDB/_MGMTDB/ONLINELOG/group_1.264.827006475    CURRENT
             2 +MGMTDB/_MGMTDB/ONLINELOG/group_2.266.827006507    INACTIVE
             3 +MGMTDB/_MGMTDB/ONLINELOG/group_3.265.827006479    INACTIVE
    

    The redo logs were moved successfully.
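Since the same statement is issued once per log group, the add-member step above can be generated rather than typed. A small sketch for the three groups in this example (the group numbers and the +MGMTDB diskgroup name come from the transcript); it only prints the statements, which you would then review and run in SQL*Plus as SYSDBA:

```shell
#!/bin/sh
# Generate the ADD LOGFILE MEMBER statements for the three groups above.
# Review the output, then run it in SQL*Plus as SYSDBA.
for g in 1 2 3; do
  printf "ALTER DATABASE ADD LOGFILE MEMBER '+MGMTDB' TO GROUP %s;\n" "$g"
done
```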

    Check the configuration of resource ora.mgmtdb.
    You may need to modify this resource to remove the old diskgroup.

    As grid user:

    [grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
    START_DEPENDENCIES=hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)
    STOP_DEPENDENCIES=hard(intermediate:ora.MGMTLSNR,shutdown:ora.CRS_TMP.dg,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)
    

    If the command above returns output, you must update the OCR by removing the old diskgroup from the resource definition.
    If you drop the diskgroup without updating resource ora.mgmtdb, srvctl will be unable to start the resource and will raise an error:

    [grid@node12c02 ~]$ srvctl config mgmtdb
    Database unique name: _mgmtdb
    Database name:
    Oracle home: /u01/app/12.1.0/grid
    Oracle user: grid
    Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    Password file:
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Database instance: -MGMTDB
    PRCD-1012 : Failed to retrieve disk group list for database _mgmtdb.
    PRCR-1035 : Failed to look up CRS resource ora.CRS_TMP.dg for _mgmtdb
    PRCA-1000 : ASM Disk Group CRS_TMP does not exist
    PRCR-1001 : Resource ora.CRS_TMP.dg does not exist

    To fix it, you must remove the reference to this resource.
    Since resource ora.CRS_TMP.dg no longer exists, crsctl must be run with the force option (the "-f" flag) to modify the ora.mgmtdb resource.

    [grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'"
    CRS-2510: Resource 'ora.CRS_TMP.dg' used in dependency 'hard' does not exist or is not registered
    CRS-2514: Dependency attribute specification 'hard' is invalid in resource 'ora.mgmtdb'
    CRS-4000: Command Modify failed, or completed with errors.
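The fix amounts to deleting every ora.CRS_TMP.dg token from the two dependency strings. As a sanity check before retyping them by hand, the new values can be derived from the old ones with sed; a minimal sketch using the START_DEPENDENCIES value from this example (review the printed value before passing it to crsctl modify):

```shell
#!/bin/sh
# START_DEPENDENCIES value reported by "crsctl status res ora.mgmtdb -p" above.
OLD="hard(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.CRS_TMP.dg,ora.MGMTDB.dg)"
# Strip the ora.CRS_TMP.dg token together with its adjacent comma.
NEW=$(printf '%s' "$OLD" | sed -e 's/ora\.CRS_TMP\.dg,//g' -e 's/,ora\.CRS_TMP\.dg//g')
echo "$NEW"
```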

    Update resource ora.mgmtdb by removing the ora.CRS_TMP.dg dependency.

    
    [grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "START_DEPENDENCIES='hard(ora.MGMTLSNR,ora.MGMTDB.dg) weak(uniform:ora.ons) pullup(ora.MGMTLSNR,ora.MGMTDB.dg)'" -f
    
    [grid@node12c02 ~]$ crsctl modify res ora.mgmtdb -attr "STOP_DEPENDENCIES='hard(intermediate:ora.MGMTLSNR,intermediate:ora.asm,shutdown:ora.MGMTDB.dg)'" -f
    
    [grid@node12c02 ~]$ crsctl status res ora.mgmtdb -p |grep CRS_TMP
    
    [grid@node12c02 ~]$ srvctl config mgmtdb
    Database unique name: _mgmtdb
    Database name:
    Oracle home: /u01/app/12.1.0/grid
    Oracle user: grid
    Spfile: +MGMTDB/_MGMTDB/PARAMETERFILE/spfile.256.828288969
    Password file:
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Database instance: -MGMTDB
    Type: Management
    
    [grid@node12c02 ~]$ srvctl start mgmtdb
    
    [grid@node12c02 ~]$ srvctl status mgmtdb
    Database is enabled
    Instance -MGMTDB is running on node node12c02 
    

    Adding OCR Mirror on +MGMTDB.

    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +MGMTDB
    [root@node12c02 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
    Status of Oracle Cluster Registry is as follows :
             Version                  :          4
             Total space (kbytes)     :     409568
             Used space (kbytes)      :       1568
             Available space (kbytes) :     408000
             ID                       : 1037914462
             Device/File Name         :  +OCR_VOTE
                                        Device/File integrity check succeeded
             Device/File Name         :    +MGMTDB
                                        Device/File integrity check succeeded
    
                                        Device/File not configured
    
                                        Device/File not configured
    
                                        Device/File not configured
    
             Cluster registry integrity check succeeded
    
             Logical corruption check succeeded
    

    I recommend stopping and starting the cluster by issuing (as root) "crsctl stop cluster -all" and "crsctl start cluster -all".

    If all resources restart successfully, we can stop the diskgroup CRS_TMP.

    [grid@node12c02 ~]$ srvctl stop diskgroup -g CRS_TMP
    
    [grid@node12c02 ~]$ srvctl status diskgroup -g CRS_TMP
    Disk Group CRS_TMP is not running
    

    Check that all resources are online. If so, you can drop this diskgroup. Note that the diskgroup must be mounted again before it can be dropped:

    [grid@node12c02 ~]$ srvctl start diskgroup -g CRS_TMP -n node12c02 
    
    [grid@node12c02 ~]$ export ORACLE_SID=+ASM1
    [grid@node12c02 ~]$ sqlplus / as sysasm
    
    SQL> DROP DISKGROUP CRS_TMP INCLUDING CONTENTS;
    
    Diskgroup dropped.
    

    Then restart the whole clusterware stack again and check that all resources start without issue.


    Oracle Database 12c RELEASED… Download Available

    After a long wait, Oracle has finally released the commercial version of Oracle Database 12c.


    Oracle Database Products are:

  • Oracle Database 12c Release 1
  • Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.1.0)
    Contains the Grid Infrastructure software, including Oracle Clusterware, Automatic Storage Management (ASM), and ASM Cluster File System. Download and install prior to installing Oracle Real Application Clusters, Oracle Real Application Clusters One Node, or other application software in a grid environment.

  • Oracle Database 12c Release 1 Global Service Manager (GSM/GDS) (12.1.0.1.0)
  • Oracle Database Gateways 12c Release 1 (12.1.0.1.0)
    Contains the Oracle Database Gateways to non-Oracle Databases. Download if you want to set up a heterogeneous data integration environment

  • Oracle Database 12c Release 1 Examples
    Contains examples of how to use the Oracle Database. Download if you are new to Oracle and want to try some of the examples presented in the Documentation

  • Oracle Database 12c Release 1 Client (12.1.0.1.0)
    Contains the Oracle Client Libraries for Linux. Download if you want the client libraries only


    Major New Oracle Database 12c Features and Products

    • Adaptive Execution Plans
    • Application Continuity
    • Automatic Data Optimization (ADO)
    • Data Guard Far Sync
    • Data Redaction
    • Global Data Services
    • Heat Map
    • Multitenant (Pluggable Databases)
    • Pattern Matching
    • SQL Translation Framework

    http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    Download Available only to:

  • Linux x86-64
    Linux Download

  • Solaris Sparc64
    Solaris Sparc64 Download

  • Solaris (x86-64)
    Solaris x64 Download

    Plug into the Cloud with Oracle Database 12c

    Enjoy…