Oracle Grid Infrastructure is supported on an operating system, but ACFS isn't. Why?
Grid Infrastructure is software that uses OS libraries, so it is not kernel dependent, only OS-package dependent.
ACFS, however, is a Grid Infrastructure module whose drivers are installed and configured in the OS kernel, so ACFS is kernel dependent.
An Oracle ACFS file system is installed as a dynamically loadable vendor operating system (OS) file system driver and tool set that is developed for each supported operating system platform. The driver is implemented as a Virtual File System (VFS) and processes all file and directory operations directed to a specific file system.
ACFS is composed of three components: the Oracle ACFS, Oracle Kernel Services (OKS), and Oracle ADVM drivers. They are dynamically loaded when the Oracle ASM instance is started.
- Oracle ACFS
This driver processes all Oracle ACFS file and directory operations.
- Oracle ADVM
This driver provides block device services for Oracle ASM volume files; file systems (such as ACFS) are created on top of these block devices.
- Oracle Kernel Services Driver (OKS)
This driver provides portable driver services for memory allocation, synchronization primitives, and distributed locking services to Oracle ACFS and Oracle ADVM.
Before upgrading your OS kernel, you must put the ACFS drivers on your checklist, just as you do with other OS drivers.
How does Oracle support ACFS on future kernels?
By releasing Patch Set Updates (PSUs): proactive cumulative patches containing recommended bug fixes that are released on a regular and predictable schedule.
How does it work?
Check the EXAMPLE below:
Note: this table is only an example; the kernels and dates are fictitious.
In the table above we can see:
In Jan/2016, GI 18.104.22.168 and 22.214.171.124 did not need a PSU because the base release already included supported drivers, but 126.96.36.199 needed PSU 188.8.131.52.2 to get supported drivers for kernel 2.6.32-100.
In Feb/2016, kernel 2.6.65-30 was released, and ACFS deployment takes some time while the developers build new ACFS drivers. So, in Feb/2016, no release supported kernel 2.6.65-30.
In Mar/2016, Oracle released the PSUs under which all ACFS versions became supported for that kernel.
From the example above we can conclude: it does not matter which Grid Infrastructure base version (such as 184.108.40.206 or 220.127.116.11) you are using; what matters is that the Grid Infrastructure must have supported ACFS drivers for that kernel.
Where can you find the certification matrix for ACFS?
Oracle Support has a detailed MOS note: ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1).
You must ALWAYS check the MOS note above before any kernel update in environments that have ACFS configured.
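As a pre-upgrade sanity check, the `acfsdriverstate` utility shipped in the Grid Infrastructure home can report whether the installed ACFS/ADVM/OKS drivers support the running kernel. A minimal sketch, assuming the Grid home's `bin` directory is in the PATH:

```shell
# Compare the running kernel against the MOS 1369107.1 matrix
uname -r

# Query the ACFS driver state (utility ships in $GRID_HOME/bin)
acfsdriverstate supported    # is the running kernel supported?
acfsdriverstate installed    # are the ACFS/ADVM/OKS drivers installed?
acfsdriverstate loaded       # are the drivers currently loaded?
acfsdriverstate version      # which driver build is installed?
```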
I have seen some doubts about which utility to use and in which situation each should be used.
I searched some Oracle-related sites and saw that people are still a bit confused about which command to use.
But before starting, there is a rule for a Clusterware environment:
"srvctl" is used to manage resources with the ora.* prefix, and "crsctl" can be used to query or start/stop ora.* resources, but crsctl is not supported for modifying or editing ora.* resources.
See this note on MOS:
Oracle Clusterware and Application Failover Management [ID 790189.1]
"Using crs_* or crsctl commands on resources with the prefix ora.* (resources provided by Oracle) remains unsupported."
So, if you created a resource with "srvctl", that resource should be managed only with "srvctl"; if you created a resource with "crsctl", it should be managed with "crsctl".
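To illustrate the rule, a hedged sketch; the database name `orcl`, the resource name `myapp`, and the script path are placeholders, not from the original text:

```shell
# ora.* resources: manage with srvctl
srvctl status database -d orcl
srvctl stop database -d orcl -o immediate

# ora.* resources: querying or starting/stopping with crsctl is fine, editing is not
crsctl stat res ora.orcl.db -t

# user-defined resources: create and manage with crsctl
crsctl add resource myapp -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/app/scripts/myapp.sh,CHECK_INTERVAL=30"
crsctl start resource myapp
```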
Let's talk about the Policy-Based Cluster concept.
Oracle Clusterware 11g release 2 (11.2) introduced a different method of managing the nodes and resources used by a database, called policy-based management.
With Oracle Clusterware 11g release 2 (11.2) and later, resources managed by Oracle Clusterware are contained in logical groups of servers called server pools. Resources are hosted on a shared infrastructure and are contained within server pools. The resources are restricted with respect to their hardware resource (such as CPU and memory) consumption by policies, behaving as if they were deployed in a single-system environment.
- Enables dynamic capacity assignment when needed to provide server capacity in accordance with the priorities you set with policies
- Enables allocation of resources by importance, so that applications obtain the required minimum resources, whenever possible, and so that lower priority applications do not take resources from more important applications
- Ensures isolation where necessary, so that you can provide dedicated servers in a cluster for applications and databases
Applications and databases running in server pools do not share resources. Because of this, server pools isolate resources where necessary, but enable dynamic capacity assignments as required. Together with role-separated management, this capability addresses the needs of organizations that have standardized cluster environments, but allow multiple administrator groups to share the common cluster infrastructure.
This is only a concept.
Oracle therefore split this concept into two types of configuration:
Policy-Managed Database and Policy-Based Management for non-database resources.
Policy-Managed Database
A database that you define as a cluster resource. Management of the database is defined by how you configure the resource, including on which servers the database can run and how many instances of the database are necessary to support the expected workload.
To configure a policy-managed database, Oracle already has a predefined configuration.
So the options are limited and specific to database resources (such as services and databases).
For that reason, Oracle provides "srvctl add serverpool".
$ srvctl add serverpool -h
Adds a server pool to the Oracle Clusterware.
Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
    -g <pool_name>        Server pool name
    -l <min>              Minimum size of the server pool (Default value is 0)
    -u <max>              Maximum size of the server pool (Default value is -1 for unlimited maximum size)
    -i <importance>       Importance of the server pool (Default value is 0)
    -n "<server_list>"    Comma separated list of candidate server names
    -f                    Force the operation even though some resource(s) will be stopped
    -h                    Print usage
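For example, a hedged sketch of creating a server pool for a policy-managed database; the pool, node, database, and service names are placeholders:

```shell
# Create a pool that holds between 2 and 4 servers, importance 10
srvctl add srvpool -g prod_pool -l 2 -u 4 -i 10 -n "node1,node2,node3"

# A policy-managed service can then be placed in that pool
srvctl add service -d orcl -s app_svc -g prod_pool -c UNIFORM
```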
Policy-Based Management for non-database resources
To configure non-database resources, Oracle provides another command, with many more options: "crsctl add serverpool".
This allows the DBA to explore all the options that policy-based management can offer.
$ crsctl add serverpool -h
Usage:
  crsctl add serverpool <spName> [[-file <filePath>] | [-attr "<attrName>=<value>[,...]"]] [-i]
where
    spName      Add named server pool
    filePath    Attribute file
    attrName    Attribute name
    value       Attribute value
    -i          Fail if request cannot be processed immediately
    -f          Force option
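A hedged sketch of a non-database server pool and a user-defined resource restricted to it; the pool name, resource name, and script path are placeholders:

```shell
# Create a pool via attributes
crsctl add serverpool app_pool -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=5"

# Restrict a user-defined resource to that pool
crsctl add resource myapp -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/app/scripts/myapp.sh,PLACEMENT=restricted,SERVER_POOLS=app_pool"

crsctl status serverpool app_pool
```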
So, we should NEVER mix the server pool used by a database resource with a server pool used by non-database resources.
Also, never use "crsctl" to change a database server pool that was created with "srvctl", and never put a database in a server pool created with "crsctl".
A server pool for database resources must be created with "srvctl".
A server pool for non-database resources must be created with "crsctl".
Question: Is it possible to change ora.* resources with "crsctl"?
Yes, it's possible, but it is not supported by Oracle.
I hope this makes it clear.
RACcheck is a tool developed by the RAC Assurance development team for use by customers to automate the assessment of RAC systems for known configuration problems and best practices.
RACcheck is a RAC Configuration Audit tool designed to audit various important configuration settings within a Real Application Clusters (RAC), Oracle Clusterware (CRS), Automatic Storage Management (ASM) and Grid Infrastructure environment. The tool audits configuration settings within the following categories:
- OS kernel parameters
- OS packages
- Many other OS configuration settings important to RAC.
- CRS/Grid Infrastructure
- Database parameters
- Many other database configuration settings important to RAC.
1. RACcheck is NON-INTRUSIVE and does not change anything in the environment, except as detailed below:
– SSH user equivalence for the RDBMS software owner is assumed to be configured among all the database servers being audited in order for it to execute commands on the remote database server nodes. If the tool determines that this user equivalence is not established it will offer to set it up either temporarily or permanently at the option of the user. If the user chooses to set up SSH user equivalence temporarily then the script will do so for the duration of the execution of the tool but then it will return the system to the state in which it found SSH user equivalence originally. For those wishing to configure SSH user equivalence outside the tool (if not already configured), consult My Oracle Support Note: 372795.1.
– RACcheck creates a number of small output files into which the data necessary to perform the assessment is collected
– RACcheck creates and executes some scripts dynamically in order to accomplish some of the data collection
– RACcheck cleans up after itself, removing any temporary files that were created and are no longer needed once the collection is complete.
2. RACcheck interrogates the system to determine the status of the Oracle stack components (ie., Grid Infrastructure, RDBMS, RAC, etc) and whether they are installed and/or running. Depending upon the status of each component, the tool runs the appropriate collections and audit checks. If due to local environmental configuration the tool is unable to properly determine the needed environmental information please refer to the TROUBLESHOOTING section.
3. Watchdog daemon – RACcheck automatically runs a daemon in the background to monitor command execution progress. If, for any reason, one of the commands run by the tool should hang or take longer than anticipated, the monitor daemon kills the hung command after a configurable timeout so that main tool execution can progress. If that happens then the collection or command that was hung is skipped and a notation is made in the log. If the default timeout is too short please see the TROUBLESHOOTING section regarding adjustment of the RAT_TIMEOUT, and RAT_ROOT_TIMEOUT parameters.
4. If RACcheck’s driver files are older than 90 days, the driver files are considered to be “stale” and the script will notify the user of a stale driver file. A new version of the tool and its driver files (kit) must be obtained from MOS Note 1268927.1.
5. When RACcheck completes the collection and analysis, it produces two reports: summary and detailed. An output .zip file is also produced by RACcheck. This output .zip file can be provided to Oracle Support for further analysis if an SR needs to be logged. The detailed report contains Benefit/Impact, Risk and Action/Repair information. In many cases it also references publicly available documents with additional information about the problem and how to resolve it.
6. The results of the audit checks can be optionally uploaded into database tables for reporting purposes. See below for more details on this subject.
7. In some cases customers may want to stage RACcheck on a shared filesystem so that it can be accessed from various systems but be maintained in a single location rather than being copied to each cluster on which it may be used. The default behavior of the tool is to create a subdirectory and its output files in the location where the tool is staged. If that staging area is a read only filesystem or if the user for any reason would like the output to be created elsewhere then there is an environment variable which can be used for that purpose. The RAT_OUTPUT parameter can be set to any valid writable location and the output will be created there.
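For instance, a hedged sketch of running RACcheck from a read-only shared staging area; the paths are placeholders:

```shell
# Redirect output away from the read-only staging location
export RAT_OUTPUT=/u01/app/oracle/raccheck_out
mkdir -p "$RAT_OUTPUT"

cd /shared/tools/raccheck
./raccheck    # interactive run; the output .zip lands in $RAT_OUTPUT
```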
Oracle Server – Enterprise Edition – Version: 10.2.0.1 to 18.104.22.168 – Release: 10.2 to 11.2
- Linux x86
- IBM AIX on POWER Systems (64-bit)
- Oracle Solaris on SPARC (64-bit)
- Linux x86-64
To download the RACcheck tool, use this note on MOS:
RACcheck – RAC Configuration Audit Tool [ID 1268927.1]
Example of report output:
The datafiles and redo logs were created in two different diskgroups.
It was necessary to restore the database, so the DBA removed all datafiles from both diskgroups and restored the database.
During the restore, RMAN reported restoring the datafiles to the correct diskgroup, showing the original path/file at the RMAN prompt, but at the end of the restore all datafiles had been restored to a single diskgroup: they were moved/renamed to the diskgroup specified in the DB_CREATE_FILE_DEST parameter.
Question: Is this normal behavior?
The restore behavior is the same as the file creation behavior for OMF files. When OMF files are restored, they follow an order of precedence. That order is:
- If "SET NEWNAME" is specified, RMAN will use that name for the restore.
- If the original file exists, RMAN will use the original filename for the restore.
- If DB_CREATE_FILE_DEST is set and the original file does not exist, RMAN will use the diskgroup specified in DB_CREATE_FILE_DEST.
- If DB_CREATE_FILE_DEST is not set and the original file does not exist, RMAN will create another name for the file in the original diskgroup.
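Following the precedence above, SET NEWNAME lets you control the restore destination explicitly. A hedged RMAN sketch; the datafile number and the '+DATA2' diskgroup are placeholders:

```shell
rman target / <<'EOF'
RUN {
  # Pin datafile 4 back to its intended diskgroup
  SET NEWNAME FOR DATAFILE 4 TO '+DATA2';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;   # record the restored names in the controlfile
  RECOVER DATABASE;
}
EOF
```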
So, if you do not want surprises during or after the restore, you must unset the DB_CREATE_FILE_DEST parameter before starting the restore:
alter system set DB_CREATE_FILE_DEST='' scope=memory;
After restore/recover (i.e., when opening the database) you can still be surprised if DB_CREATE_FILE_DEST is not set.
So, I recommend setting DB_CREATE_FILE_DEST (or DB_CREATE_ONLINE_LOG_DEST_n) after RECOVER DATABASE and before OPEN RESETLOGS, because during open resetlogs Oracle will create the online log files if they do not exist.
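A hedged sketch of that sequence; the '+REDO' diskgroup is a placeholder:

```shell
sqlplus / as sysdba <<'EOF'
-- after RECOVER DATABASE has completed, before OPEN RESETLOGS:
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1 = '+REDO' SCOPE=MEMORY;
ALTER DATABASE OPEN RESETLOGS;
EOF
```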
Online Logs (Redo) Behavior
When OMF online redo log files are re-created, they follow an order of precedence. That order is:
- If the original file exists, Oracle will reuse the original filename.
- If DB_CREATE_ONLINE_LOG_DEST_n is set and the original file does not exist, Oracle will use the diskgroup specified in DB_CREATE_ONLINE_LOG_DEST_n.
- If DB_CREATE_ONLINE_LOG_DEST_n is not set, the original file does not exist, and both DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST (FRA) are set, Oracle will use the diskgroups specified in both (DB_CREATE_FILE_DEST & DB_RECOVERY_FILE_DEST) to multiplex the online logs.
- If neither DB_CREATE_ONLINE_LOG_DEST_n nor DB_RECOVERY_FILE_DEST is set, the original file does not exist, and DB_CREATE_FILE_DEST is set, Oracle will use the diskgroup specified in DB_CREATE_FILE_DEST.
- If neither DB_CREATE_ONLINE_LOG_DEST_n nor DB_CREATE_FILE_DEST is set, the original file does not exist, and DB_RECOVERY_FILE_DEST is set, Oracle will use the diskgroup specified in DB_RECOVERY_FILE_DEST.
- If none of DB_CREATE_ONLINE_LOG_DEST_n, DB_CREATE_FILE_DEST, or DB_RECOVERY_FILE_DEST is set and the original file does not exist, Oracle will raise "ORA-19801: initialization parameter DB_RECOVERY_FILE_DEST is not set" during "alter database open resetlogs".
This paper explains the benefits of using EMC RecoverPoint local and remote replication to provide operational and disaster recovery for Oracle environments.
RecoverPoint provides crash-consistent and application-consistent recovery points that can be used in response to a number of possible scenarios, enhancing the native availability of an Oracle environment. Oracle supports third-party enterprise replication technologies to protect Oracle environments.
EMC RecoverPoint provides full support for data replication and disaster recovery when working with Oracle databases. RecoverPoint supports Oracle whether the databases are stored on raw disks or on a file system.
Click the link below to download the paper.