What Patch to Apply? PSU? GI PSU? Proactive Bundle Patch?

For those unfamiliar with Oracle patching, it can be confusing to decide which patch to apply when you are presented with a table of different patches for the same version.


I will try to clarify some of the common doubts.

Note: You must provide a valid My Oracle Support login to access the links below.

Patch version numbering changed

In November 2015, the version numbering for new Bundle Patches, Patch Set Updates and Security Patch Updates for Oracle Database changed: the 5th digit of the bundle version was replaced with a release date in the form “YYMMDD”, where:

  • YY is the last 2 digits of the year
  • MM is the numeric month (2 digits)
  • DD is the numeric day of the month (2 digits)
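As a quick illustration, the date suffix can be decoded with a few lines of shell (the version string below is a made-up example, not a specific patch):

```shell
# Decode the "YYMMDD" fifth digit of a post-November-2015 patch version.
# 12.1.0.2.161018 is a hypothetical example version.
ver="12.1.0.2.161018"
suffix="${ver##*.}"     # everything after the last dot -> 161018
echo "Release date: 20${suffix:0:2}-${suffix:2:2}-${suffix:4:2}"
# -> Release date: 2016-10-18
```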

More detail can be found here: Oracle Database, Enterprise Manager and Middleware – Change to Patch Numbering from Nov 2015 onwards (Doc ID 2061926.1)

 

Changes on Database Security Patching from 12.1.0.1 onwards

Starting with Oracle Database version 12.1.0.1, Oracle only provides Patch Set Update (PSU) patches to meet the Critical Patch Update (CPU) program requirements for security patching. SPU (Security Patch Update) patches are no longer available. Oracle moved to this simplified model due to the popularity of the PSU patches; PSUs have been Oracle’s preferred proactive patching vehicle since their inception in 2009.

Database Security Patching from 12.1.0.1 onwards (Doc ID 1581950.1)

 

Where to find last Patches for Database?

Use the Patch Assistant: Assistant: Download Reference for Oracle Database PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)

 

Which patch to apply: PSU, GI PSU, Proactive Bundle Patch, or Bundle Patch (Windows 32bit & 64bit)?

When using the Patch Assistant, it shows the table below.
In this case I searched for the latest patch for 12.1.0.2.

[Image: PSU_Bundle_Patch, table of available patches for 12.1.0.2]

Understanding the Patch Nomenclature:
New Patch Nomenclature for Oracle Products (Doc ID 1430923.1)

Note: As of April 2016, the Database Patch for Engineered Systems and Database In-Memory has been renamed from “Bundle Patch (BP)” to “Database Proactive Bundle Patch”.

Note: Windows platforms must use the “Bundle Patch (Windows 32bit & 64bit)”.

Database patch content:

  • SPU contains only the CPU program security fixes
  • PSU contains the CPU program security fixes and additional high-impact/low-risk critical bug fixes
  • Proactive Bundle Patch (PBP) includes all PSU fixes along with fixes targeted at the specific Bundle Patch environment.

These patches are cumulative, so if you have an Oracle Home (12.1.0.2) at the base release (i.e., with no fixes applied) and apply the latest PSU or PBP, it will include all fixes from the base release up to the current patch version.
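Before choosing, you can list what is already applied to an Oracle Home with OPatch (the output varies per environment, so none is shown here):

$ $ORACLE_HOME/OPatch/opatch lspatches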

Where to apply each Patch?

  • PSU – Can be applied on Database Server, Client-only and Instant Client homes.
  • GI PSU – Can be applied on the GI Home (Oracle Restart or Oracle Clusterware) in conjunction with RAC, RAC One Node or Single Instance homes, as well as Client-only and Instant Client homes.
  • Proactive Bundle Patch – Can be applied on the GI Home in conjunction with RAC, RAC One Node or Single Instance homes, as well as Client-only and Instant Client homes.

 

An installation can only use one of the SPU, PSU or Proactive Bundle Patch patching methods.

 

How to choose between them?

The “Database Proactive Bundle Patch” requires a bit more testing than a Patch Set Update (PSU) as it delivers a larger set of fixes.

If you are doing a fresh installation, you should apply the Database Proactive Bundle Patch.

The PSU is aimed at environments that are sensitive to change, because it requires less testing.

I have applied the “Database PSU”; how do I move to the “Database Proactive Bundle Patch”?

Moving from “Database PSU” to “Database Proactive Bundle Patch”

  • Back up your current setup
  • Fully roll back / deinstall the “Database PSU”
    • If you are using the OJVM PSU, that is likely to require the OJVM PSU to be rolled back too
  • Apply / install the latest “Database Proactive Bundle Patch”
  • Re-apply any interim patches rolled back above (including the OJVM PSU if it was installed)
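As a rough sketch only (the patch ID below is a placeholder; always follow the README of the actual patches), the move looks like this with OPatch:

# roll back the Database PSU from the Oracle Home
$ $ORACLE_HOME/OPatch/opatch rollback -id <DB PSU patch number>
# then apply the Proactive Bundle Patch from its unzipped directory
$ $ORACLE_HOME/OPatch/opatch apply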

Note from Oracle: It is not generally advisable to switch from the “Database PSU” to the “Database SPU” method.

The note below can clarify any remaining doubts about this post.
Oracle Database – Overview of Database Patch Delivery Methods (Doc ID 1962125.1)

 

OPLAN Support

GI PSU and Proactive Bundle Patch are supported by OPlan.

OPlan is a utility that facilitates the patch installation process by providing you with step-by-step patching instructions specific to your environment.
In contrast to the traditional patching operation, applying a patch based on the README requires you to understand the target configuration and manually identify the patching commands relevant to your environment. OPlan eliminates the requirement of identifying the patching commands by automatically collecting the configuration information for the target, then generating instructions specific to the target configuration.
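As a sketch of how it is used (the patch location is a placeholder; see the note below for the exact syntax), OPlan is invoked from the Oracle Home against a downloaded patch to generate environment-specific apply steps:

$ $ORACLE_HOME/oplan/oplan generateApplySteps <patch location>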

Oracle Software Patching with OPLAN (Doc ID 1306814.1)

Useful Notes:

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

Frequently Asked Questions (FAQ): Patching Oracle Database Server (Doc ID 1446582.1)

12.1.0.2 Database Proactive Bundle Patches / Bundle Patches for Engineered Systems and DB In-Memory – List of Fixes in each Bundle (Doc ID 1937782.1)


Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended)

Back in 2011 I saw many doubts and concerns about how to store the Voting disks, OCR and ASM SPFILE on an ASM diskgroup. In this post I’ll show you how to set up your environment by applying best practices based on my experience.

To start, I need to explain some concepts:

The Voting Disk is like a “database instance” and the OCR is like a “database”. During cluster startup, Oracle first reads and opens all Voting Disks; only after ASM is started does Oracle read and open the OCR.

So, Oracle does not need the ASM instance to be started, nor a diskgroup to be mounted, in order to read and open the Voting Disks.

Voting disks:

The Voting Disk, also known as a Voting file, is a file that manages information about node membership.

How are they stored in ASM?
Voting disks are placed directly on ASMDISKs. Oracle Clusterware stores the votedisks on the disks of the disk group designated to hold the Voting Files. Oracle Clusterware does not rely on ASM to access the Voting Files, which means Oracle Clusterware does not need the diskgroup to be mounted in order to read and write the ASMDISKs.
You cannot find or list Voting files using SQL*Plus (v$asm_diskgroup, v$asm_files, v$asm_alias), the ASMCMD utility or the ASMCA GUI.
You only know that a voting file exists on an ASMDISK through v$asm_disk (column VOTING_FILE). So, although voting files do not depend on the diskgroup to be accessed, that does not mean we don’t need the diskgroup: the diskgroup and the voting files are linked by their settings.

Oracle Clusterware takes the configuration of the diskgroup to configure its own voting files.
As the Voting Disks are placed directly on the ASMDISKs of a diskgroup, we cannot use more than one (1) diskgroup.
The redundancy of the voting files depends on the ASMDISKs, not on the diskgroup: if you lose one ASMDISK, you lose one voting file. This is different from files managed by the diskgroup.

  • When the votedisk is on an ASM diskgroup, no crsctl add option is available. The number of votedisks is determined by the diskgroup redundancy. If more copies of the votedisk are desired, you can move the votedisk to a diskgroup with higher redundancy.
  • When the votedisk is on ASM, no delete option is available; you can only replace the existing votedisk group with another ASM diskgroup.

You cannot place Voting files in different diskgroups. A quorum failgroup is required if you are using extended RAC, or if you are using more than one storage array in your cluster.
The COMPATIBLE.ASM disk group compatibility attribute must be set to 11.2 or greater to store OCR or voting disk data in a disk group.

Oracle Cluster Registry (OCR) and ASM Server Parameter File (ASM SPFILE):

OCR: The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. OCR also manages information about Oracle Clusterware resource profiles for customized applications.

The OCR is totally different from Voting Disk. Oracle Clusterware rely on ASM to access the OCR and SPFILE. The OCR and SPFILE are stored similar to how Oracle Database files are stored. The extents are spread across all the disks in the diskgroup and the redundancy (which is at the extent level) is based on the redundancy of the disk group. For this reason you can only have one OCR in a diskgroup.

So, if the diskgroup where the OCR is stored becomes unavailable, you will lose your OCR and SPFILE. Therefore we need to put an OCR mirror in another disk group to survive the failure of a disk group.

The interesting discussion is: what happens if you have the OCR mirrored and one of the copies gets corrupted? You would expect that everything will continue to work seamlessly. Well... almost. The real answer depends on when the corruption takes place.

If the corruption happens while the Oracle Clusterware stack is up and running, the corruption will be tolerated and Oracle Clusterware will continue to function without interruption, despite the corrupt copy. The DBA is advised to repair the hardware/software problem that prevents the OCR from accessing the device as soon as possible; alternatively, the DBA can replace the failed device with another healthy device using the ocrconfig utility with the -replace flag.

If, however, the corruption happens while the Oracle Clusterware stack is down, then it will not be possible to start it up until the failed device comes back online, or some administrative action is taken using the ocrconfig utility with the -overwrite flag.

Basic rules: you cannot create more than one (1) OCR or SPFILE in the same diskgroup.
The COMPATIBLE.ASM disk group compatibility attribute must be set to 11.2 or greater to store OCR or voting disk data in a disk group.

Best practice for ASM is to have 2 diskgroups to store the OCR.

Oracle recommends: with Oracle Grid Infrastructure 11g Release 2, it is recommended to put the OCR and Voting Disks in Oracle ASM, using the same disk group you use for your database data.
I don’t agree!!!
I really don’t recommend putting database files and clusterware files together. This can disrupt the management of the environment and cause downtime (e.g., you can never stop this diskgroup).
Example: the voting files are not stored in the diskgroup (+DATA); they are placed directly on the ASMDISKs. So in case of maintenance on the diskgroup, for example to increase the size of the LUNs, you cannot just remove the ASMDISK: you must first move the voting files elsewhere and then perform the maintenance on the diskgroup.

Downtime? Yes. You can move the voting files and the OCR without downtime, but to move the ASM SPFILE you need downtime. This is required for ASM to use the new SPFILE and release the old diskgroup. See: ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]

Voting files can be stored in only one diskgroup.
We can have any number of disk groups, but during maintenance operations (replicate, drop, move, resize, clone, etc.) the clusterware files are unnecessarily involved.

So I recommend always creating two small diskgroups:
+VOTE – storing the Voting files and the OCR mirror
+CRS – storing the OCR and the ASM SPFILE.

Keep in mind: you must design the LUNs for these diskgroups (+VOTE, +CRS) before the clusterware becomes available to the RAC databases (i.e., before installing RAC).

Recommended LUN design:
A Voting Disk needs 300 MB.
The OCR and ASM SPFILE need 300 MB.

Even when using hardware (storage) mirroring, I recommend creating ASM mirroring for the diskgroup that will store the voting files, because these files are configured as multiplexed copies.

Diskgroup VOTE:
Create 3 LUNs of 500 MB each. If possible, put each LUN on a different controller, array or storage.

Diskgroup CRS:
If you are using storage mirroring, or you have only one storage array, it is recommended to create one (1) LUN using external redundancy.

If you are not using storage mirroring, or you are using more than one storage array:

Using more than one storage array:
Create 1 LUN (500 MB) in each storage array, creating the diskgroup with normal redundancy.

Using one storage array, without storage mirroring:
Create 2 LUNs of 500 MB each, creating the diskgroup with normal redundancy.
Place each LUN on a different controller or array.

These LUNs are dedicated to the cluster.

Returning to the old days…

So, we return to the old days (10.2), when we created separate LUNs for the clusterware files using raw devices.

It may seem like a setback, but this process facilitates management of the environment and makes it safer, by separating the clusterware files that are extremely important for keeping the cluster highly available.

When you perform maintenance on the clusterware files, you change only those diskgroups (CRS and VOTE); when you perform maintenance on the other diskgroups (database files or ACFS), the clusterware files are not involved.

Now, let’s do some tests:

During the Grid install… what can I do to accomplish this?
We cannot achieve the desired layout during setup, but we can reconfigure it at the end of the installation. So, during the install I always create a temporary diskgroup named +CRSTMP with external redundancy, using one ASMDISK (LUN) of 1 GB.

The +CRSTMP diskgroup will hold one voting file, the OCR and the ASM SPFILE.

Checking whether the nodes are active:

$ olsnodes -s
lnxora01        Active
lnxora02        Active
lnxora03        Active

Use OCRCHECK to find out where your OCR files are stored.

$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3848
         Available space (kbytes) :     258272
         ID                       : 1997055112
         Device/File Name         :    +CRSTMP
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

Use crsctl to find out where the Voting file is stored

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a0d6ea8dfb944fe7bfb799a451195a18 (ORCL:CRSTMP01) [CRSTMP]
Located 1 voting disk(s).

Use asmcmd to find out where the ASM SPFILE is stored

$ asmcmd spget
+CRSTMP/testcluster/ASMPARAMETERFILE/REGISTRY.253.772133609

Getting info about the Voting Disk on ASM. We cannot see the voting file as an ASM file; we can only see which ASMDISK it is stored on.

SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
			 FROM V$ASM_DISKGROUP
			 WHERE NAME='CRSTMP');

NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
---------- ------------------------------ -------------------- -------------------- -------------------- --------------------
CRSTMP01   ORCL:CRSTMP01                  MEMBER               CRSTMP01             REGULAR              Y

Getting the full names of the OCR and ASM SPFILE on ASM

olsnodes -c: shows the name of the cluster

$ olsnodes -c
tstcluster

set linesize 100
col FILES_OF_CLUSTER for a60

select concat('+'||gname, sys_connect_by_path(aname, '/')) FILES_OF_CLUSTER
     from ( select b.name gname, a.parent_index pindex, a.name aname,
              a.reference_index rindex , a.system_created, a.alias_directory,
              c.type file_type
       from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
       where a.group_number = b.group_number
             and a.group_number = c.group_number(+)
             and a.file_number = c.file_number(+)
             and a.file_incarnation = c.incarnation(+)
     ) WHERE file_type in ( 'ASMPARAMETERFILE','OCRFILE')
start with (mod(pindex, power(2, 24))) = 0
            and rindex in
                ( select a.reference_index
                  from v$asm_alias a, v$asm_diskgroup b
                  where a.group_number = b.group_number
                        and (mod(a.parent_index, power(2, 24))) = 0
                        and a.name = LOWER('&CLUSTERNAME')
                )
connect by prior rindex = pindex;

Enter value for clustername: tstcluster
old  17:                         and a.name = LOWER('&CLUSTERNAME')
new  17:                         and a.name = LOWER('tstcluster')

FILES_OF_CLUSTER
---------------------------------------------------------
+CRSTMP/tstcluster/OCRFILE/REGISTRY.255.772133361
+CRSTMP/tstcluster/ASMPARAMETERFILE/REGISTRY.253.772133609

After the disks are available on all hosts, we can start.

CRS01 and CRS02 will be used for diskgroup CRS

VOTE01, VOTE02 and VOTE03 will be used for diskgroup VOTE

 col path for a30
 col name for a20
 col header_status for a20
 select path,name,header_status from v$asm_disk
 where path like '%CRS%' or path like '%VOTE%';

PATH                           NAME                 HEADER_STATUS
------------------------------ -------------------- --------------------
ORCL:CRS01                                          PROVISIONED
ORCL:CRS02                                          PROVISIONED
ORCL:VOTE01                                         PROVISIONED
ORCL:VOTE02                                         PROVISIONED
ORCL:VOTE03                                         PROVISIONED
ORCL:CRSTMP01                  CRSTMP01             MEMBER

Creating diskgroup VOTE; each disk must be in a different failgroup. I don’t add a QUORUM failgroup here because these LUNs are on the storage array. I recommend using a QUORUM failgroup when you are placing a disk outside your environment (e.g., an NFS file-backed disk for quorum purposes), because such disks cannot contain data.

SQL>
CREATE DISKGROUP VOTE NORMAL REDUNDANCY
     FAILGROUP STG1_C1 DISK 'ORCL:VOTE01'
     FAILGROUP STG1_C2 DISK 'ORCL:VOTE02'
     FAILGROUP STG1_C1_1 DISK 'ORCL:VOTE03'
     ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

Diskgroup created.

# starting the diskgroup on the other nodes
SQL> ! srvctl start diskgroup -g vote -n lnxora02,lnxora03

# checking if diskgroup is active on all nodes
SQL> ! srvctl status diskgroup -g vote
Disk Group vote is running on lnxora01,lnxora02,lnxora03

Creating DISKGROUP CRS:

SQL>
CREATE DISKGROUP CRS NORMAL REDUNDANCY
     FAILGROUP STG1_C1 DISK 'ORCL:CRS01'
     FAILGROUP STG1_C2 DISK 'ORCL:CRS02'
     ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

Diskgroup created.

# starting the diskgroup on the other nodes
SQL> ! srvctl start diskgroup -g crs -n lnxora02,lnxora03

# checking if diskgroup is active on all nodes
SQL> ! srvctl status diskgroup -g crs
Disk Group crs is running on lnxora01,lnxora02,lnxora03
SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER IN ( SELECT GROUP_NUMBER
			 FROM V$ASM_DISKGROUP
			 WHERE NAME IN ('CRS','VOTE'));

NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
---------- ------------------------------ -------------------- -------------------- -------------------- --------------------
VOTE03     ORCL:VOTE03                    MEMBER               STG1_C1_1            REGULAR              N
VOTE02     ORCL:VOTE02                    MEMBER               STG1_C2              REGULAR              N
VOTE01     ORCL:VOTE01                    MEMBER               STG1_C1              REGULAR              N
CRS01      ORCL:CRS01                     MEMBER               STG1_C1              REGULAR              N
CRS02      ORCL:CRS02                     MEMBER               STG1_C2              REGULAR              N

Moving Voting Files from +CRSTMP to +VOTE

$ crsctl replace votedisk +VOTE
Successful addition of voting disk aaa75b9e7ce24f39bfd9eecb3e3c0e38.
Successful addition of voting disk 873d51346cd34fc2bf9caa94999c4cd8.
Successful addition of voting disk acda8619b74c4fe8bf886ee6c9fe8d1a.
Successful deletion of voting disk a0d6ea8dfb944fe7bfb799a451195a18.
Successfully replaced voting disk group with +VOTE.
CRS-4266: Voting file(s) successfully replaced

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   aaa75b9e7ce24f39bfd9eecb3e3c0e38 (ORCL:VOTE01) [VOTE]
 2. ONLINE   873d51346cd34fc2bf9caa94999c4cd8 (ORCL:VOTE02) [VOTE]
 3. ONLINE   acda8619b74c4fe8bf886ee6c9fe8d1a (ORCL:VOTE03) [VOTE]
Located 3 voting disk(s).

SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
			 FROM V$ASM_DISKGROUP
			 WHERE NAME='VOTE');

NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
---------- ------------------------------ -------------------- -------------------- -------------------- --------------------
VOTE03     ORCL:VOTE03                    MEMBER               STG1_C1_1            REGULAR              Y
VOTE02     ORCL:VOTE02                    MEMBER               STG1_C2              REGULAR              Y
VOTE01     ORCL:VOTE01                    MEMBER               STG1_C1              REGULAR              Y

Moving the OCR to diskgroups +CRS and +VOTE, and removing it from diskgroup +CRSTMP

What determines whether an OCR location is the primary or a mirror is the order in which the OCR locations are added.
Therefore, we add it first on diskgroup +CRS and later on diskgroup +VOTE; then, when we remove the OCR from diskgroup +CRSTMP, the OCR on diskgroup +CRS becomes the primary.

# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3868
         Available space (kbytes) :     258252
         ID                       : 1997055112
         Device/File Name         :    +CRSTMP
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

# /u01/app/11.2.0/grid/bin/ocrconfig -add +CRS

# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3836
         Available space (kbytes) :     258284
         ID                       : 1997055112
         Device/File Name         :    +CRSTMP
                                    Device/File integrity check succeeded
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

# /u01/app/11.2.0/grid/bin/ocrconfig -add +VOTE

# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3836
         Available space (kbytes) :     258284
         ID                       : 1997055112
         Device/File Name         :    +CRSTMP
                                    Device/File integrity check succeeded
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded
         Device/File Name         :      +VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

# /u01/app/11.2.0/grid/bin/ocrconfig -delete +CRSTMP

# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3836
         Available space (kbytes) :     258284
         ID                       : 1997055112
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded
         Device/File Name         :      +VOTE
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

Moving ASM SPFILE to diskgroup +CRS

You will get an error saying the file is still being used; in fact, though, the file is copied to the new location and the ASM profile is updated.

$ asmcmd spget
+CRSTMP/tstcluster/ASMPARAMETERFILE/REGISTRY.253.772133609

$ asmcmd spmove '+CRSTMP/tstcluster/ASMPARAMETERFILE/REGISTRY.253.772133609' '+CRS/tstcluster/spfileASM.ora'
ORA-15032: not all alterations performed
ORA-15028: ASM file '+CRSTMP/tstcluster/ASMPARAMETERFILE/REGISTRY.253.772133609' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)

# checking if  file was copied and profile updated
$ asmcmd spget
+CRS/tstcluster/spfileASM.ora

Checking the cluster files on ASM

set linesize 100
col FILES_OF_CLUSTER for a60

select concat('+'||gname, sys_connect_by_path(aname, '/')) FILES_OF_CLUSTER
     from ( select b.name gname, a.parent_index pindex, a.name aname,
              a.reference_index rindex , a.system_created, a.alias_directory,
              c.type file_type
       from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
       where a.group_number = b.group_number
             and a.group_number = c.group_number(+)
             and a.file_number = c.file_number(+)
             and a.file_incarnation = c.incarnation(+)
     ) WHERE file_type in ( 'ASMPARAMETERFILE','OCRFILE')
start with (mod(pindex, power(2, 24))) = 0
            and rindex in
                ( select a.reference_index
                  from v$asm_alias a, v$asm_diskgroup b
                  where a.group_number = b.group_number
                        and (mod(a.parent_index, power(2, 24))) = 0
                        and a.name = LOWER('&CLUSTERNAME')
                )
connect by prior rindex = pindex;
Enter value for clustername: tstcluster
old  17:                         and a.name = LOWER('&CLUSTERNAME')
new  17:                         and a.name = LOWER('tstcluster')

FILES_OF_CLUSTER
------------------------------------------------------------
+CRSTMP/tstcluster/OCRFILE/REGISTRY.255.772133361
+CRSTMP/tstcluster/ASMPARAMETERFILE/REGISTRY.253.772133609
+VOTE/tstcluster/OCRFILE/REGISTRY.255.772207785
+CRS/tstcluster/OCRFILE/REGISTRY.255.772207425
+CRS/tstcluster/ASMPARAMETERFILE/REGISTRY.253.772208263
+CRS/tstcluster/spfileASM.ora

In order for ASM to use the new SPFILE and disconnect from the +CRSTMP diskgroup, we need to restart the cluster.

# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'lnxora01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lnxora01'
.
.
.
CRS-2673: Attempting to stop 'ora.crsd' on 'lnxora02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lnxora02'
.
.
.
CRS-2673: Attempting to stop 'ora.crsd' on 'lnxora03'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lnxora03'
.
.
.
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lnxora01' has completed
.
.
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lnxora02' has completed
.
.
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lnxora03' has completed

# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lnxora01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lnxora02'
.

$ asmcmd spget
+CRS/tstcluster/spfileASM.ora

Now we can drop the diskgroup +CRSTMP

SQL> ! srvctl stop diskgroup -g crstmp -n lnxora02,lnxora03

SQL> drop diskgroup crstmp including contents;

Diskgroup dropped.

SQL>
FILES_OF_CLUSTER
------------------------------------------------------------
+CRS/tstcluster/OCRFILE/REGISTRY.255.772207425
+CRS/tstcluster/ASMPARAMETERFILE/REGISTRY.253.772211229
+CRS/tstcluster/spfileASM.ora
+VOTE/tstcluster/OCRFILE/REGISTRY.255.772207785

Adding a 3rd Voting File on NFS to a Cluster using Oracle ASM

In this post I’ll show how to configure it on Linux in more detailed steps; for how to configure it on other platforms, you can use this Oracle white paper (http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf).

Based on the settings above, I’ll show you how easy it is to add a votedisk using ASM (Linux only).

Preparing the NFS server (Oracle recommends using a dedicated host for the 3rd votedisk):


# mkdir /votedisk
# vi /etc/exports
/votedisk *(rw,sync,all_squash,anonuid=54321,anongid=54325)
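After editing /etc/exports, the NFS server has to re-read its export table (assuming the NFS services are already running):

# exportfs -ra
# exportfs -v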

Setting Up NFS Clients
The configuration below must be present on all nodes of the cluster:

# cat /etc/fstab
lnxnfs:/votedisk      /voting_disk    nfs     rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600       0       0

Mount /voting_disk on all nodes of the cluster and check that it is mounted with the right options:

# mount /voting_disk

$ mount |grep voting_disk
lnxnfs:/votedisk on /voting_disk type nfs (rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,noac,addr=192.168.217.45)

Create a disk file to be used by ASM:

$ dd if=/dev/zero of=/voting_disk/asm_vote_quorum bs=10M count=58
58+0 records in
58+0 records out
608174080 bytes (608 MB) copied, 3.68873 seconds, 165 MB/s

# chmod 660 /voting_disk/asm_vote_quorum
# chown oracle.asmadmin /voting_disk/asm_vote_quorum

# ls -ltr /voting_disk/asm_vote_quorum
-rw-rw---- 1 oracle asmadmin 608174080 Jan 10 20:00 /voting_disk/asm_vote_quorum

Adding the new Diskstring on ASM

SQL> show parameter asm_diskstring
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*

SQL> ALTER SYSTEM SET asm_diskstring ='ORCL:*','/voting_disk/asm_vote_quorum' SCOPE=both SID='*';

SQL> show parameter asm_diskstring
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------------
asm_diskstring                       string      ORCL:*, /voting_disk/asm_vote_quorum

$ asmcmd dsget
parameter:ORCL:*, /voting_disk/asm_vote_quorum
profile:ORCL:*,/voting_disk/asm_vote_quorum

Checking if this new disk is available on ASM

$ kfod disks=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:        580 Mb /voting_disk/asm_vote_quorum             oracle   asmadmin
   2:        486 Mb ORCL:CRS01
   3:        486 Mb ORCL:CRS02
   .
   .
   .
  9:         580 Mb ORCL:VOTE01
  10:        580 Mb ORCL:VOTE02
  11:        580 Mb ORCL:VOTE03
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM3 /u01/app/11.2.0/grid
     +ASM1 /u01/app/11.2.0/grid
     +ASM2 /u01/app/11.2.0/grid

SQL> col path for a30
SQL>
select path,header_status
from v$asm_disk
 where path like '%vote_quorum%';

PATH                           HEADER_STATUS
------------------------------ --------------------
/voting_disk/asm_vote_quorum   CANDIDATE

SQL>  ALTER DISKGROUP VOTE
      ADD
      QUORUM FAILGROUP STG_NFS DISK '/voting_disk/asm_vote_quorum';

Diskgroup altered.

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   aaa75b9e7ce24f39bfd9eecb3e3c0e38 (ORCL:VOTE01) [VOTE]
 2. ONLINE   873d51346cd34fc2bf9caa94999c4cd8 (ORCL:VOTE02) [VOTE]
 3. ONLINE   51f29389684e4f60bfb4b1683db8bd09 (/voting_disk/asm_vote_quorum) [VOTE]
Located 3 voting disk(s).

SQL>
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
			 FROM V$ASM_DISKGROUP
			 WHERE NAME='VOTE');

NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
---------- ------------------------------ -------------------- -------------------- -------------------- --------------------
VOTE01     ORCL:VOTE01                    MEMBER               STG1_C1              REGULAR              Y
VOTE02     ORCL:VOTE02                    MEMBER               STG1_C2              REGULAR              Y
VOTE03     ORCL:VOTE03                    MEMBER               STG1_C1_1            REGULAR              N
VOTE_0003  /voting_disk/asm_vote_quorum   MEMBER               STG_NFS              QUORUM               Y

### Use the WAIT option when dropping the disk: the prompt is not released until the rebalance operation has completed, so you know when the ASM disk can be safely removed.
SQL>  ALTER DISKGROUP VOTE
     DROP DISK 'VOTE03'
     REBALANCE POWER 3 WAIT;

Diskgroup altered.

NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
---------- ------------------------------ -------------------- -------------------- -------------------- --------------------
VOTE01     ORCL:VOTE01                    MEMBER               STG1_C1              REGULAR              Y
VOTE02     ORCL:VOTE02                    MEMBER               STG1_C2              REGULAR              Y
VOTE_0003  /voting_disk/asm_vote_quorum   MEMBER               STG_NFS              QUORUM               Y

Can we have 15 voting disks on ASM?

No. Up to 15 voting files are allowed only when they are not stored on ASM. When ASM is used, the maximum number of voting files is 5, because Oracle derives the voting file placement from the diskgroup configuration.
A higher number of voting disks can be useful in a big cluster environment with, e.g., 5 storage subsystems and 20 hosts in a single cluster, where you would place a voting file on each storage subsystem. If you are using only one storage subsystem, 3 voting files are enough.

https://forums.oracle.com/forums/thread.jspa?messageID=10070225

Oracle docs: You should have at least three voting disks, unless you have a storage device, such as a disk array, that provides external redundancy. Oracle recommends that you do not use more than 5 voting disks. The maximum number of voting disks that is supported is 15.
http://docs.oracle.com/cd/E11882_01/rac.112/e16794/crsref.htm#CHEJDHFH
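The reason odd counts are recommended: CSS requires a strict majority of voting files to be online, so with n files a node survives the loss of at most (n - 1) / 2 of them. A quick sketch of that arithmetic:

```shell
# With n voting files, a node needs a strict majority (n/2 + 1) online,
# so it tolerates losing at most (n - 1) / 2 files.
for n in 1 3 5 15; do
  echo "$n voting files -> survives $(( (n - 1) / 2 )) lost"
done
```

This is why an even count adds no resilience: 4 files tolerate the same single loss as 3.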

See this example:

I configured 7 ASM disks, but Oracle used only 5 of them as voting files.

SQL> CREATE DISKGROUP DG_VOTE HIGH REDUNDANCY
     FAILGROUP STG1 DISK 'ORCL:DG_VOTE01'
     FAILGROUP STG2 DISK 'ORCL:DG_VOTE02'
     FAILGROUP STG3 DISK 'ORCL:DG_VOTE03'
     FAILGROUP STG4 DISK 'ORCL:DG_VOTE04'
     FAILGROUP STG5 DISK 'ORCL:DG_VOTE05'
     FAILGROUP STG6 DISK 'ORCL:DG_VOTE06'
     FAILGROUP STG7 DISK 'ORCL:DG_VOTE07'
   ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

Diskgroup created.

SQL> ! srvctl start diskgroup -g DG_VOTE -n lnxora02,lnxora03

$  crsctl replace votedisk +DG_VOTE
CRS-4256: Updating the profile
Successful addition of voting disk 427f38b47ff24f52bf1228978354f1b2.
Successful addition of voting disk 891c4a40caed4f05bfac445b2fef2e14.
Successful addition of voting disk 5421865636524f5abf008becb19efe0e.
Successful addition of voting disk a803232576a44f1bbff65ab626f51c9e.
Successful addition of voting disk 346142ea30574f93bf870a117bea1a39.
Successful deletion of voting disk 2166953a27a14fcbbf38dae2c4049fa2.
Successfully replaced voting disk group with +DG_VOTE.

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   427f38b47ff24f52bf1228978354f1b2 (ORCL:DG_VOTE01) [DG_VOTE]
 2. ONLINE   891c4a40caed4f05bfac445b2fef2e14 (ORCL:DG_VOTE02) [DG_VOTE]
 3. ONLINE   5421865636524f5abf008becb19efe0e (ORCL:DG_VOTE03) [DG_VOTE]
 4. ONLINE   a803232576a44f1bbff65ab626f51c9e (ORCL:DG_VOTE04) [DG_VOTE]
 5. ONLINE   346142ea30574f93bf870a117bea1a39 (ORCL:DG_VOTE05) [DG_VOTE]

SQL >
SET LINESIZE 150
COL PATH FOR A30
COL NAME FOR A10
COL HEADER_STATUS FOR A20
COL FAILGROUP FOR A20
COL FAILGROUP_TYPE FOR A20
COL VOTING_FILE FOR A20
SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
FROM V$ASM_DISK
WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
			 FROM V$ASM_DISKGROUP
			 WHERE NAME='DG_VOTE');

NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
---------- ------------------------------ -------------------- -------------------- -------------------- --------------------
DG_VOTE01  ORCL:DG_VOTE01                 MEMBER               STG1                 REGULAR              Y
DG_VOTE02  ORCL:DG_VOTE02                 MEMBER               STG2                 REGULAR              Y
DG_VOTE03  ORCL:DG_VOTE03                 MEMBER               STG3                 REGULAR              Y
DG_VOTE04  ORCL:DG_VOTE04                 MEMBER               STG4                 REGULAR              Y
DG_VOTE05  ORCL:DG_VOTE05                 MEMBER               STG5                 REGULAR              Y
DG_VOTE06  ORCL:DG_VOTE06                 MEMBER               STG6                 REGULAR              N
DG_VOTE07  ORCL:DG_VOTE07                 MEMBER               STG7                 REGULAR              N
Errors and Workarounds

ASM removed the voting file from the wrong ASM disk (failgroup)… how to fix it?

You cannot choose from which ASM disk the voting file will be removed, which can be a problem.
Fortunately it is easy to solve.

Follow these steps:


## Since you configured NFS, you can temporarily move the voting disk to NFS.

$ crsctl replace votedisk '/voting_disk/vote_temp'

#### Now you can drop the desired ASM disk and add a new ASM disk with the QUORUM option.
#### It is recommended to have 3 failgroups (one failgroup per storage subsystem), with the 3rd failgroup as a quorum on NFS.
#### After reconfiguring the ASM diskgroup VOTE you can move the voting disk from NFS back to ASM.

$ crsctl replace votedisk +VOTE

### Everything will work.

CRS is not starting after a cluster restart, how to fix it?

Problem: after restarting the cluster on all nodes, CRS does not start on some nodes following an OCR location change.
The problem node did not receive the updated OCR location, so it keeps trying to find the old diskgroup.

I changed OCR from +CRSTMP to +CRS, +VOTE

You can solve it manually.
The error in crsd.log looks like this:

2012-01-10 14:39:26.144: [ CRSMAIN][4039143920] Initializing OCR
2012-01-10 14:39:26.145: [ CRSMAIN][1089243456] Policy Engine is not initialized yet!
[   CLWAL][4039143920]clsw_Initialize: OLR initlevel [70000]
2012-01-10 14:39:32.712: [  OCRRAW][4039143920]proprioo: for disk 0 (+CRSTMP), id match (0), total id sets, (0) need recover (0), my votes (0), total votes (0), commit_lsn (0), lsn (0)
2012-01-10 14:39:32.712: [  OCRRAW][4039143920]proprioo: my id set: (723563391, 1028247821, 0, 0, 0)
2012-01-10 14:39:32.712: [  OCRRAW][4039143920]proprioo: 1st set: (0, 0, 0, 0, 0)
2012-01-10 14:39:32.712: [  OCRRAW][4039143920]proprioo: 2nd set: (0, 0, 0, 0, 0)
2012-01-10 14:39:32.838: [  OCRRAW][4039143920]utiid:problem validating header for owner db phy_addr=0
2012-01-10 14:39:32.838: [  OCRRAW][4039143920]proprinit:problem reading the bootblock or superbloc 26

2012-01-10 14:39:33.565: [  OCRAPI][4039143920]a_init:16!: Backend init unsuccessful : [26]
2012-01-10 14:39:33.570: [  CRSOCR][4039143920] OCR context init failure.  Error: PROC-26: Error while accessing the physical storage
2012-01-10 14:39:33.570: [  CRSOCR][4039143920][PANIC] OCR Context is NULL(File: caaocr.cpp, line: 145)

2012-01-10 14:39:33.570: [    CRSD][4039143920][PANIC] CRSD Exiting. OCR Failed
2012-01-10 14:39:33.571: [    CRSD][4039143920] Done.

Two lines point to the cause:

2012-01-10 14:39:32.712: [  OCRRAW][4039143920]proprioo: for disk 0 (+CRSTMP), id match (0), total id sets, (0) need recover (0), my votes (0), total votes (0), commit_lsn (0), lsn (0)
2012-01-10 14:39:33.570: [  CRSOCR][4039143920] OCR context init failure.  Error: PROC-26: Error while accessing the physical storage

To solve it:

Connect to a server where CRS is working

and check the content of the file /etc/oracle/ocr.loc:

In my case:

On node where CRS is working:

host: lnxora01
$ cat /etc/oracle/ocr.loc
#Device/file +CRSTMP getting replaced by device +CRS
ocrconfig_loc=+CRS
ocrmirrorconfig_loc=+VOTE
local_only=false

On node where CRS is not working:

host: lnxora02
$ cat /etc/oracle/ocr.loc
ocrconfig_loc=+CRSTMP
local_only=false

The file /etc/oracle/ocr.loc must be identical on all nodes, so I updated ocr.loc on the problem server and CRS started without errors.
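The fix can be scripted. This is only a sketch: it writes the corrected content to a demo path, while on the real node the file is /etc/oracle/ocr.loc, edited as root with the stack down on that node:

```shell
# Demo path; on the failing node the real file is /etc/oracle/ocr.loc.
OCRLOC=/tmp/ocr.loc.demo

# Write the same content as the working node (lnxora01 in this case).
cat > "$OCRLOC" <<'EOF'
ocrconfig_loc=+CRS
ocrmirrorconfig_loc=+VOTE
local_only=false
EOF

# The file must now match the working node exactly.
cat "$OCRLOC"
```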


Enjoy…



RACcheck – RAC Configuration Audit Tool

RACcheck is a tool developed by the RAC Assurance development team for use by customers to automate the assessment of RAC systems for known configuration problems and best practices.

RACcheck is a RAC Configuration Audit tool  designed to audit various important configuration settings within a Real Application Clusters (RAC), Oracle Clusterware (CRS), Automatic Storage Management (ASM) and Grid Infrastructure environment. The tool audits configuration settings within the following categories:

  1. OS kernel parameters
  2. OS packages
  3. Many other OS configuration settings important to RAC.
  4. CRS/Grid Infrastructure
  5. RDBMS
  6. ASM
  7. Database parameters
  8. Many other database configuration settings important to RAC.

Features
1. RACcheck is NON-INTRUSIVE and does not change anything in the environment, except as detailed below:

– SSH user equivalence for the RDBMS software owner is assumed to be configured among all the database servers being audited in order for it to execute commands on the remote database server nodes. If the tool determines that this user equivalence is not established it will offer to set it up either temporarily or permanently at the option of the user. If the user chooses to set up SSH user equivalence temporarily then the script will do so for the duration of the execution of the tool but then it will return the system to the state in which it found SSH user equivalence originally. For those wishing to configure SSH user equivalence outside the tool (if not already configured), consult My Oracle Support Note: 372795.1.

– RACcheck creates a number of small output files into which the data necessary to perform the assessment is collected

– RACcheck creates and executes some scripts dynamically in order to accomplish some of the data collection

– RACcheck cleans up any temporary files it creates that are not needed as part of the collection.

2. RACcheck interrogates the system to determine the status of the Oracle stack components (ie., Grid Infrastructure, RDBMS, RAC, etc) and whether they are installed and/or running. Depending upon the status of each component, the tool runs the appropriate collections and audit checks. If due to local environmental configuration the tool is unable to properly determine the needed environmental information please refer to the TROUBLESHOOTING section.

3. Watchdog daemon – RACcheck automatically runs a daemon in the background to monitor command execution progress. If, for any reason, one of the commands run by the tool should hang or take longer than anticipated, the monitor daemon kills the hung command after a configurable timeout so that main tool execution can progress. If that happens then the collection or command that was hung is skipped and a notation is made in the log. If the default timeout is too short please see the TROUBLESHOOTING section regarding adjustment of the RAT_TIMEOUT, and RAT_ROOT_TIMEOUT parameters.

4. If RACcheck’s driver files are older than 90 days, the driver files are considered to be “stale” and the script will notify the user of a stale driver file. A new version of the tool and its driver files (kit) must be obtained from MOS Note 1268927.1.

5. When RACcheck completes the collection and analysis it produces two reports, summary and detailed. An output .zip file is also produced by RACcheck. This output .zip file can be provided to Oracle Support for further analysis if an SR needs to be logged. The detailed report will contain Benefit/Impact, Risk and Action/Repair information. In many cases it will also reference publicly available documents with additional information about the problem and how to resolve it.

6. The results of the audit checks can be optionally uploaded into database tables for reporting purposes. See below for more details on this subject.

7. In some cases customers may want to stage RACcheck on a shared filesystem so that it can be accessed from various systems but be maintained in a single location rather than being copied to each cluster on which it may be used. The default behavior of the tool is to create a subdirectory and its output files in the location where the tool is staged. If that staging area is a read only filesystem or if the user for any reason would like the output to be created elsewhere then there is an environment variable which can be used for that purpose. The RAT_OUTPUT parameter can be set to any valid writable location and the output will be created there.
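The 90-day driver staleness check described in item 4 above can be sketched in shell. This is an illustration only (GNU date/stat assumed; the path is hypothetical), not RACcheck's actual implementation:

```shell
# Sketch of the 90-day staleness rule: compare a driver file's mtime to now.
# /tmp/raccheck_driver.demo is a stand-in for a real RACcheck driver file.
driver=/tmp/raccheck_driver.demo
touch -d '100 days ago' "$driver"

age_days=$(( ( $(date +%s) - $(stat -c %Y "$driver") ) / 86400 ))
if [ "$age_days" -gt 90 ]; then
  echo "driver file is stale ($age_days days old): download a new kit from MOS 1268927.1"
fi
```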

Applies to:
Oracle Server – Enterprise Edition – Version: 10.2.0.1 to 11.2.0.2 – Release: 10.2 to 11.2

  • Linux x86
  • IBM AIX on POWER Systems (64-bit)
  • Oracle Solaris on SPARC (64-bit)
  • Linux x86-64

To download the RACcheck tool, use this note on MOS:
RACcheck – RAC Configuration Audit Tool [ID 1268927.1]

Example of report output:

raccheck Report

Enjoy


CFGMGR fails: startacfsctl defacfsctl.bin: Dependent module libhasgen11.so could not be loaded.

Recently I faced an error when executing the cfgmgr command on AIX.

root@aix:/ > cfgmgr
Method error (/usr/lib/methods/startacfsctl):
        0514-068 Cause not known.
Could not load program /usr/lib/methods/defacfsctl.bin:
        Dependent module libhasgen11.so could not be loaded.
Could not load module libhasgen11.so.
System error: No such file or directory

At AIX startup some services (e.g. sshd and many others) were not started automatically; the cause is that the cfgmgr command returns an error and holds up the start of those services.

At first sight I thought it was an AIX problem, but defacfsctl.bin could not be loaded because one of its dependent modules, libhasgen11.so, was not loaded.

root@aix:/ >ldd /usr/lib/methods/defacfsctl.bin
/usr/lib/methods/defacfsctl.bin needs:
         /u01/app/grid/grid_has/lib/libhasgen11.so
         /u01/app/grid/grid_has/lib/libttsh11.so
         /usr/lib/libcfg.a(shr_64.o)
         /usr/lib/libodm.a(shr_64.o)
         /usr/lib/libc.a(shr_64.o)
         /usr/lib/libpthreads.a(shr_xpg5_64.o)
         /u01/app/grid/grid_has/lib/libskgxn2.so
         /u01/app/grid/grid_has/lib/libocr11.so
         /u01/app/grid/grid_has/lib/libocrutl11.so
         /usr/lib/libdl.a(shr_64.o)
         /usr/lib/libc.a(aio_64.o)
         /usr/lib/libperfstat.a(shr_64.o)
         /unix
         /usr/lib/libcrypt.a(shr_64.o)
         /u01/app/grid/grid_has/lib/libocrb11.so
         /usr/lib/liblvm.a(shr_64.o)
         /usr/lib/libcorcfg.a(shr_64.o)
         /usr/lib/libsrc.a(shr_64.o)

Since libhasgen11.so is an Oracle library, the issue involves both AIX and Oracle.

Cause:

The filesystem where Grid Infrastructure was installed is not mounted or available.
Oracle Grid Infrastructure 11.2 has a feature called ACFS (ASM Cluster File System) that has its own drivers, the Oracle Kernel Services (OKS) drivers.

The OKS drivers are installed into the native operating system; they are required for managing the ACFS filesystem with OS commands (e.g. mount).

You can find these drivers in “$GUI_HOME/usm/V/powerpc/bin/”.

Is it possible to avoid the ADVM driver install during Grid Infrastructure install?
The answer is no. It is not possible to disable it, because Clusterware components are not user configurable.
Using ACFS is not mandatory, but initially installing and configuring the ACFS drivers is.

In my case the host was a contingency (standby) server, so the filesystem where Grid Infrastructure is installed was intentionally not available.

Solution:

1°: The easy solution is to make the Grid Infrastructure filesystem available and retry the cfgmgr command.

2°: If you are not using the ACFS feature, you can remove the drivers from the installation.
To do that follow the next steps:
1) Dismount the ACFS filesystem first
2) Verify the ACFS filesystem is dismounted
3) Stop the OHAS services
4) Offload the ACFS/ADVM modules from memory (as root)
$GUI_HOME/bin/acfsload stop
5) Then remove the ACFS/ADVM modules installation (as root)
$GUI_HOME/bin/acfsroot uninstall

If after these steps you keep getting this error, or if the Oracle Home was already removed:

root@aix:/ >cfgmgr
Method error (/usr/lib/methods/startadvmctl):
        0514-068 Cause not known.
sh: /usr/lib/methods/startadvmctl:  not found

Solution:

odmdelete -q rule=/usr/lib/methods/startacfsctl -o Config_Rules
odmdelete -q rule=/usr/lib/methods/startadvmctl -o Config_Rules

Oracle ACFS Filesystem managed by OHAS on Oracle Restart

Bad news from Oracle 11.2.0.2:

Oracle ACFS and Oracle Restart


Oracle Restart does not support root-based Oracle ACFS resources for this release. Consequently, the following operations are not automatically performed:

Loading Oracle ACFS drivers
Mounting Oracle ACFS file systems listed in the Oracle ACFS mount registry
Mounting resource-based Oracle ACFS database home file systems
The Oracle ACFS resources associated with these actions are not created for Oracle Restart configurations.

While Oracle ACFS resource management is fully supported for Oracle Grid Infrastructure configurations, the Oracle ACFS resource-based management actions must be replaced with alternative, sometimes manual, operations in Oracle Restart configurations.

http://download.oracle.com/docs/cd/E11882_01/server.112/e16102/asmfs_extra.htm#CACBDGCC

Big Question!
Why use ACFS (i.e. a cluster filesystem) on a single-node installation? It makes no sense!!!
With an OS filesystem I have everything ACFS offers (performance, manageability and availability).
And using ACFS on a single node (Oracle Restart) is a poor experience: many tasks must be performed manually (loading drivers, mounting filesystems, etc.).
Personally I do not use ACFS on single nodes; I see no need for it.

I created this post just for fun and to help those who use ACFS on Oracle Restart.

BUT…
ACFS has one feature that an OS filesystem does not:

ACFS Snapshots

ACFS provides snapshotting capability for the respective filesystem. This snapshot uses the First Copy-on-Write (FCOW) methodology to enable a consistent, version-based, online view of the source filesystem.
Snapshots are immutable views of the source file system as it appeared at a specific point in time.
A snapshot is initially sparse; as files in the source filesystem change, the before-image extents of those files are copied into the snapshot directory.
Accessing the snapshot will always provide a point-in-time view of a file, thus ACFS snapshots can be very useful for file-based recovery or for filesystem backups. If file level recovery is needed (for the base filesystem), it can be performed using standard file copy or replace commands.
A possible use case scenario for snapshots can be to create a consistent recovery point set between the database ORACLE_HOME and the database.
This is useful, for example, when a recovery point needs to be established before applying a database patchset.

Important: be very careful when using an Oracle ACFS filesystem for ORACLE_HOME. When you create a database under this ORACLE_HOME, the database is automatically registered with OHAS (Oracle Restart). If you restart the server, OHAS will try to start the database automatically, but the ACFS filesystem holding ORACLE_HOME is not mounted automatically (it must be mounted manually), so every database in this ORACLE_HOME must also be started manually.

I’ll show you how to automate loading the ACFS drivers and mounting/unmounting the filesystem using OHAS.

LINUX PLATFORM

Create an ACFS filesystem.

I will create an ACFS filesystem on diskgroup “DG_ORAHOME11GR2_1”.

Create ACFS for Database Home

Configure Oracle Home location, Size (GB), user and group.

Creating ACFS…

ASMCA does not support root-based Oracle ACFS resources, so this step must be done manually.

Oracle automatically generates a script to mount this filesystem.

Volume created…and ENABLED

ACFS MOUNTED…..

If you restart your server, the ACFS drivers will not be loaded automatically, so the Volumes and ASM Cluster File Systems tabs in ASMCA will be inactive.

Loading ACFS Drivers Automatically

To configure autostart of the ACFS drivers during node reboot on Linux:

Add an acfsload start command to /etc/rc.local so the drivers are loaded automatically during node bootup.

cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

/sbin/modprobe hangcheck-timer
/u01/app/grid/product/11.2.0/grid/bin/acfsload start -s
devlabel restart

Setting up AUTO MOUNT/UMOUNT of the filesystem on OHAS

We can use OHAS to start, stop, monitor and restart applications, using the OHAS resource feature.

Let’s create a script to mount/umount ACFS Filesystem.

Using Oracle user (oracle)

cd /u01/app/grid/product/11.2.0/grid/crs/script

vi acfs_ORAHOME11GR2.sh
#!/bin/sh
case $1 in
'start')
# Check if Volume is Enabled if not enable volume

if [ ! -e /dev/asm/db112_dbh1-220 ]; then   # -e: the ADVM volume is a block device, not a regular file
export ORACLE_HOME=/u01/app/grid/product/11.2.0/grid
export ORACLE_SID=+ASM
$ORACLE_HOME/bin/asmcmd <<EOF
volenable -G DG_ORAHOME11GR2_1 db112_dbh1
EOF
fi
# Mount filesystem

/usr/bin/sudo /bin/mount -t acfs /dev/asm/db112_dbh1-220 /u01/app/oracle/product/11.2.0/dbhome_1

# Change permission of Filesystem
 if [ $? = "0" ]; then
 /usr/bin/sudo  /bin/chown oracle:oinstall /u01/app/oracle/product/11.2.0/dbhome_1
 /usr/bin/sudo  /bin/chmod 775 /u01/app/oracle/product/11.2.0/dbhome_1
  exit 0
 fi
 RET=$?
;;

# Stop Filesystem
'stop')
/usr/bin/sudo /bin/umount -t acfs /dev/asm/db112_dbh1-220
 RET=$?
;;
'clean')
/usr/bin/sudo  /bin/umount -t acfs /dev/asm/db112_dbh1-220
  RET=$?
    ;;
# Check if Filesystem is Mounted
'check')
  OUTCMD=`/bin/mount |grep '/dev/asm/db112_dbh1-220' |wc -l`
  if [ $OUTCMD = 1 ]; then
  RET=0
  else
  RET=1
  fi
;;
esac
# 0: success; 1 : error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

I used sudo to perform the root tasks, so you must allow the oracle user to run those commands as root without a password.

As root, type “visudo” to edit the sudoers file.

# visudo

Comment out the line "Defaults    requiretty"
# Defaults    requiretty

Add the oracle user below this line:
# %wheel        ALL=(ALL)       NOPASSWD: ALL
oracle          ALL=(ALL)       NOPASSWD: /bin/chown, /bin/chmod, /bin/mount, /bin/umount

Registering Resource in OHAS

Register the resource “acfs.orahome11gR2_1.fs” in OHAS, creating a dependency on diskgroup “DG_ORAHOME11GR2_1”.

START_DEPENDENCIES: Specifies a set of relationships that OHAS considers when starting a resource.
You can specify a space-separated list of dependencies on several resources and resource types on which a particular resource can depend.
hard: Specify a hard start dependency for a resource when you want the resource to start only when a particular resource or resource of a particular type starts.
Oracle recommends that resources with hard start dependencies also have pullup start dependencies.
pullup: When you specify the pullup start dependency for a resource, then this resource starts as a result of named resources starting.

So we must specify START_DEPENDENCIES referencing diskgroup DG_ORAHOME11GR2_1, which holds the ORACLE_HOME filesystem. This means that when you start the resource “acfs.orahome11gR2_1.fs” the diskgroup DG_ORAHOME11GR2_1 must already be started; if it is not, “pullup” will start DG_ORAHOME11GR2_1 before starting “acfs.orahome11gR2_1.fs”.

STOP_DEPENDENCIES: Specifies a set of relationships that OHAS considers when stopping a resource.
hard: Specify a hard stop dependency for a resource that you want to stop when named resources or resources of a particular resource type stop.

So if we try to stop diskgroup DG_ORAHOME11GR2_1 through OHAS while the ACFS (ORACLE_HOME) filesystem is still mounted (ONLINE), OHAS must raise CRS-2529:

CRS-2529: Unable to act on  because that would require stopping or relocating ,
but the force option was not specified

 

crsctl add resource acfs.orahome11gR2_1.fs \
-type local_resource \
-attr "\
ACTION_SCRIPT=/u01/app/grid/product/11.2.0/grid/crs/script/acfs_ORAHOME11GR2.sh,\
AUTO_START=always,\
START_TIMEOUT=100,\
STOP_TIMEOUT=100,\
CHECK_INTERVAL=10,\
START_DEPENDENCIES=hard(ora.DG_ORAHOME11GR2_1.dg)pullup(ora.DG_ORAHOME11GR2_1.dg),\
STOP_DEPENDENCIES='hard(ora.DG_ORAHOME11GR2_1.dg)'"

More info about the attributes used here can be found in:
http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/resatt.htm

$ crsctl status resource acfs.orahome11gR2_1.fs
NAME=acfs.orahome11gR2_1.fs
TYPE=local_resource
TARGET=OFFLINE
STATE=OFFLINE

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       54G  9.0G   42G  18% /
/dev/sda1              99M   31M   64M  33% /boot
tmpfs                 2.5G  176M  2.3G   8% /dev/shm

# Mounting Filesystem with OHAS:
$ crsctl start resource acfs.orahome11gR2_1.fs
CRS-2672: Attempting to start 'acfs.orahome11gR2_1.fs' on 'macedonia'
CRS-2676: Start of 'acfs.orahome11gR2_1.fs' on 'macedonia' succeeded

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       54G  9.0G   42G  18% /
/dev/sda1              99M   31M   64M  33% /boot
tmpfs                 2.5G  176M  2.3G   8% /dev/shm
/dev/asm/db112_dbh1-220
                       15G  10M   15G  1% /u01/app/oracle/product/11.2.0/dbhome_1

$ crsctl status resource acfs.orahome11gR2_1.fs
NAME=acfs.orahome11gR2_1.fs
TYPE=local_resource
TARGET=ONLINE
STATE=ONLINE on macedonia

Trying to stop the diskgroup without unmounting the filesystem:

$ srvctl stop diskgroup -g DG_ORAHOME11GR2_1
PRCR-1065 : Failed to stop resource ora.DG_ORAHOME11GR2_1.dg
CRS-2529: Unable to act on 'ora.DG_ORAHOME11GR2_1.dg' because that would require stopping or relocating 'acfs.orahome11gR2_1.fs', but the force option was not specified

Now you can restart your server and make sure the filesystem is mounted at OHAS startup.
If everything is OK you can install the Oracle software on ACFS.

I created a Database (db11g) using ORACLE_HOME on ACFS.

When DBCA creates a database (Oracle Restart), it does not create dependencies between the ACFS mount and the database.

So when OHAS starts the database (db11g) it will only try to start the diskgroups holding the database files, nothing else. Since ORACLE_HOME is not mounted, the database will not start automatically.

We must create a dependency between the ACFS mount and the database (as Oracle does with diskgroups).

Using the database resource name (ora.db11g.db), let’s create the dependencies.

To start the database (db11g) the ACFS filesystem must be mounted, and to stop the ACFS filesystem the database must be OFFLINE. If we try to start the database without the ACFS filesystem mounted, OHAS will start the ACFS filesystem before starting the database.

$ crsctl modify resource ora.db11g.db -attr "\
START_DEPENDENCIES='hard(acfs.orahome11gR2_1.fs)pullup(acfs.orahome11gR2_1.fs)',\
STOP_DEPENDENCIES='hard(acfs.orahome11gR2_1.fs)'"

Creating these dependencies does not affect the database configuration in OHAS.

$ srvctl config database -d db11g -a
Database unique name: db11g
Database name: db11g
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DG_DATA/db11g/spfiledb11g.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DG_DATA,DG_FRA
Services:
Database is enabled

Testing:

All Online:

crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
acfs.orahome11gR2_1.fs
               ONLINE  ONLINE       macedonia
ora.DG_DATA.dg
               ONLINE  ONLINE       macedonia
ora.DG_FRA.dg
               ONLINE  ONLINE       macedonia
ora.DG_ORAHOME11GR2_1.dg
               ONLINE  ONLINE       macedonia
ora.LISTENER.lsnr
               ONLINE  ONLINE       macedonia
ora.asm
               ONLINE  ONLINE       macedonia                Started
ora.ons
               OFFLINE OFFLINE      macedonia
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       macedonia
ora.db11g.db
      1        ONLINE  ONLINE       macedonia                Open
ora.diskmon
      1        ONLINE  ONLINE       macedonia
ora.evmd
      1        ONLINE  ONLINE       macedonia

Stopping Diskgroup DG_ORAHOME11GR2_1

$ srvctl stop database -d db11g
$ crsctl stop resource acfs.orahome11gR2_1.fs
CRS-2673: Attempting to stop 'acfs.orahome11gR2_1.fs' on 'macedonia'
CRS-2677: Stop of 'acfs.orahome11gR2_1.fs' on 'macedonia' succeeded
$ srvctl stop diskgroup -g DG_ORAHOME11GR2_1

### Database, filesystem and diskgroup are offline
$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
acfs.orahome11gR2_1.fs
               OFFLINE OFFLINE      macedonia
ora.DG_DATA.dg
               ONLINE  ONLINE       macedonia
ora.DG_FRA.dg
               ONLINE  ONLINE       macedonia
ora.DG_ORAHOME11GR2_1.dg
               OFFLINE OFFLINE      macedonia
ora.LISTENER.lsnr
               ONLINE  ONLINE       macedonia
ora.asm
               ONLINE  ONLINE       macedonia                Started
ora.ons
               OFFLINE OFFLINE      macedonia
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       macedonia
ora.db11g.db
      1        OFFLINE OFFLINE                               Instance Shutdown
ora.diskmon
      1        ONLINE  ONLINE       macedonia
ora.evmd
      1        ONLINE  ONLINE       macedonia

Starting only the database; all dependent resources should come back up automatically:


$ srvctl start database -d db11g
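
Because the ACFS filesystem and disk group are registered as dependencies, starting the database pulls them online first. When scripting this, a short poll loop can confirm the resource actually reached ONLINE before proceeding. A minimal sketch with a stubbed state check (`check_state`, the resource name, and the 30-try timeout are illustrative assumptions; against a live stack `check_state` would parse the `STATE=` line of `crsctl status resource <name>`):

```shell
# Stub for illustration: pretend the resource is already up.
check_state() { echo "ONLINE"; }

# Poll a clusterware resource until it reports ONLINE, with a timeout.
wait_online() {
  resource=$1
  tries=0
  while [ "$tries" -lt 30 ]; do
    [ "$(check_state "$resource")" = "ONLINE" ] && return 0
    tries=$((tries + 1))
    sleep 1
  done
  return 1
}

wait_online ora.db11g.db && echo "ora.db11g.db is ONLINE"
```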

$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
acfs.orahome11gR2_1.fs
               ONLINE  ONLINE       macedonia
ora.DG_DATA.dg
               ONLINE  ONLINE       macedonia
ora.DG_FRA.dg
               ONLINE  ONLINE       macedonia
ora.DG_ORAHOME11GR2_1.dg
               ONLINE  ONLINE       macedonia
ora.LISTENER.lsnr
               ONLINE  ONLINE       macedonia
ora.asm
               ONLINE  ONLINE       macedonia                Started
ora.ons
               OFFLINE OFFLINE      macedonia
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       macedonia
ora.db11g.db
      1        ONLINE  ONLINE       macedonia                Open
ora.diskmon
      1        ONLINE  ONLINE       macedonia
ora.evmd
      1        ONLINE  ONLINE       macedonia

Everything is fine.

Note: this procedure is not supported by Oracle; it is a workaround to start the ACFS filesystem at server boot.

Enjoy…


Installing the Grid Infrastructure for a Standalone Server

Oracle by Example: Installing the Grid Infrastructure for a Standalone Server

Enjoy…


Installing Grid Infrastructure for a Standalone Server

Purpose

This tutorial shows you how to install the Grid Infrastructure for a standalone server, configure Oracle Restart, and move the SPFILE for an ASM instance into an ASM disk group.

Time to Complete

Approximately 1 hour

Topics

This tutorial covers the following topics:

Overview
Scenario
Prerequisites
Preparing to Install Grid Infrastructure
Installing Grid Infrastructure for a Standalone Server
Enabling Grid Infrastructure Features
Cleanup
Summary
Related information


http://st-curriculum.oracle.com/obe/db/11g/r2/prod/install/gridinstss/gridinstss.htm