What Patch to Apply? PSU? GI PSU? Proactive Bundle Patch?

For those unfamiliar with Oracle patching, it is a little confusing to decide which patch to apply when you are given a table with different patches for the same version.

[Image: patch table]

I will try to clarify some doubts.

Note: You must provide a valid My Oracle Support login in order to access the links below.

Patch version numbering changed

In November 2015 the version numbering for new Bundle Patches, Patch Set Updates and Security Patch Updates for Oracle Database changed: the 5th digit of the bundle version is now the release date in the form “YYMMDD”, where:

  • YY is the last 2 digits of the year
  • MM is the numeric month (2 digits)
  • DD is the numeric day of the month (2 digits)

More detail can be found here: Oracle Database, Enterprise Manager and Middleware – Change to Patch Numbering from Nov 2015 onwards (Doc ID 2061926.1)

 

Changes on Database Security Patching from 12.1.0.1 onwards

Starting with Oracle Database version 12.1.0.1, Oracle will only provide Patch Set Update (PSU) patches to meet the Critical Patch Update (CPU) program requirements for security patching. SPU (Security Patch Update) patches will no longer be available. Oracle has moved to this simplified model due to the popularity of the PSU patches. PSUs have been Oracle’s preferred proactive patching vehicle since their inception in 2009.

Database Security Patching from 12.1.0.1 onwards (Doc ID 1581950.1)

 

Where to find the latest patches for the Database?

Use the Patch Assistant: “Assistant: Download Reference for Oracle Database PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases” (Doc ID 2118136.2)

 

What patch to apply: PSU, GI PSU, Proactive Bundle Patch, or Bundle Patch (Windows 32bit & 64bit)?

When using the Patch Assistant, it shows the table below.
In this case I searched for the latest patch for 12.1.0.2.

[Image: PSU / GI PSU / Proactive Bundle Patch table shown by the Patch Assistant]

Understanding the Patch Nomenclature:
New Patch Nomenclature for Oracle Products (Doc ID 1430923.1)

Note: As of April 2016, the Database Patch for Engineered Systems and Database In-Memory has been renamed from “Bundle Patch (BP) ” to “Database Proactive Bundle Patch”.

Note: Windows platforms must use the “Bundle Patch (Windows 32bit & 64bit)”.

Database patch content:

  • SPU contains only the CPU program security fixes
  • PSU contains the CPU program security fixes and additional high-impact/low-risk critical bug fixes
  • Proactive Bundle Patch (PBP) includes all PSU fixes along with fixes targeted at the specific Bundle Patch environment.

These patches are cumulative, so if you have an Oracle Home (12.1.0.2) at the base release (i.e. no fixes) and apply the latest PSU or PBP, it will include all fixes from the base release up to the current patch version.

Where to apply each Patch?

  • PSU – Can be applied on Database Servers, Client-Only and Instant Client.
  • GI PSU – Can be applied on the GI Home (Oracle Restart or Oracle Clusterware) in conjunction with RAC, RACOne, Single Instance homes, Client-Only and Instant Client.
  • Proactive Bundle Patch – Can be applied on the GI Home in conjunction with RAC, RACOne, or Single Instance homes, Client-Only and Instant Client.

 

An installation can only use one of the SPU, PSU or Proactive Bundle Patch patching methods.

 

How to choose between them?

The “Database Proactive Bundle Patch” requires a bit more testing than a Patch Set Update (PSU) as it delivers a larger set of fixes.

If you are doing a fresh installation, you should apply the Database Proactive Bundle Patch.

The PSU is aimed at environments that are sensitive to change, because it requires less testing.

I have applied the “Database PSU”; how do I move to the “Database Proactive Bundle Patch”?

Moving from “Database PSU” to “Database Proactive Bundle Patch”

  • Back up your current setup
  • Fully rollback / deinstall “Database PSU”
    • If you are using the OJVM PSU, that will likely require the OJVM PSU to be rolled back too
  • Apply / install the latest “Database Proactive Bundle Patch”
  • Re-apply any interim patches rolled back above (including the OJVM PSU if it was installed); see the sketch after this list
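A minimal opatch sketch of that flow, assuming a 12.1.0.2 single-instance home; the ORACLE_HOME path and patch IDs are placeholders, and the README of the patch you download always takes precedence over this sketch:

$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1   # hypothetical home
$ $ORACLE_HOME/OPatch/opatch lsinventory                         # note the installed Database PSU (and OJVM PSU) patch IDs
$ $ORACLE_HOME/OPatch/opatch rollback -id <DB_PSU_patch_id>      # remove the Database PSU
$ cd /u01/stage/<DBBP_patch_id>                                  # unzipped Database Proactive Bundle Patch
$ $ORACLE_HOME/OPatch/opatch apply
$ $ORACLE_HOME/OPatch/datapatch -verbose                         # run with the database open to load the SQL changes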

Note from Oracle: It is not generally advisable to switch from “Database PSU” to “Database SPU” method.

The note below should clarify any remaining doubts about this post.
Oracle Database – Overview of Database Patch Delivery Methods (Doc ID 1962125.1)

 

OPLAN Support

GI PSU and Proactive Bundle Patch are supported by OPlan.

OPlan is a utility that facilitates the patch installation process by providing you with step-by-step patching instructions specific to your environment.
In contrast to the traditional patching operation, applying a patch based on the README requires you to understand the target configuration and manually identify the patching commands relevant to your environment. OPlan eliminates the requirement of identifying the patching commands by automatically collecting the configuration information for the target, then generating instructions specific to the target configuration.

Oracle Software Patching with OPLAN (Doc ID 1306814.1)
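As a rough illustration of the OPlan workflow (the staging path below is hypothetical, and the exact invocation and OPlan location supported by your OPatch version are described in Doc ID 1306814.1):

$ cd $ORACLE_HOME/oplan                                   # location per the OPlan note; may also ship under OPatch
$ ./oplan generateApplySteps /u01/stage/<patch_dir>       # generates apply instructions specific to this configuration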

Useful Notes:

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

Frequently Asked Questions (FAQ): Patching Oracle Database Server (Doc ID 1446582.1)

12.1.0.2 Database Proactive Bundle Patches / Bundle Patches for Engineered Systems and DB In-Memory – List of Fixes in each Bundle (Doc ID 1937782.1)


Long-Term RMAN Backups (Online Backup Without Keep All Logs)


Short-term backups take periodic images of active data in order to provide a way of recovering data that has been deleted or destroyed.
These backups are retained only for a few days or weeks. Their primary purpose is to recover recent data, or, if a media failure or disaster occurs, to restore your backups and bring your data back online.

Most backup systems have two different setups, one for short-term and one for long-term backups.

Short-term backups are the classic backups: they have short retention and are stored in an active data pool, and this data is usually always available in the tape library or on disk.

Long-term backups are archival backups: they have long retention and are often stored only on tapes, and these tapes are usually kept off-site or in safes.

I will try to explain in a few lines the purpose of long-term backups.

For example, you can back up the database on the first day of every year to satisfy a regulatory requirement and store the media off-site.
Years after you make the archival backup, you can restore and recover it to query the data as it appeared at the time of the backup. This is data preservation.

Data preservation is related to data protection, but it serves a different purpose.
For example, you may need to preserve a copy of a database as it existed at the end of a business quarter.
This backup is not part of the disaster recovery strategy. The media to which these backups are written are often unavailable after the backup is complete.
You may send the tape into fire storage or ship a portable hard drive to a testing facility.

RMAN provides a convenient way to create a backup and exempt it from your backup retention policy. This type of backup is known as an archival backup.

Why this introduction about short-term or long-term backups?

I manage some environments and I see that some DBAs fail to understand this difference and end up making incorrect configurations for archival and classic backups.

IMHO, the main villain is RDBMS 10.2 and earlier versions, because there was no long-term backup concept for online backups.

In 10.2, to take an archival backup without keeping all archived logs we had to shut the database down (in many environments this is not possible); if we took an online backup with long retention (KEEP LOGS), we had to keep all archived logs until the first backup became obsolete. In that version there is no real difference between a classic backup and an archival backup (in fact there is a difference, but it is a slight one).

Because of that, many DBAs are still stuck on that version's concept.

In 11.1 this changed: you can create a consistent archival backup without the need to keep all archived logs.

When should the RMAN KEEP ... LOGS clause (“BACKUP DATABASE KEEP FOREVER LOGS” or “KEEP UNTIL TIME 'SYSDATE + n' LOGS”) be used on 11.1 or above?
It should only be used if the DBA has a requirement to keep ALL changes of the database forever (or until some date) and needs to be able to recover to ANY point in time of the database's life. If you don't need that, this setup is insane and increases the cost and management burden of your environment.

Let’s start with some tests. I’m using RDBMS 11.2.0.4.

How to take a Long-Term Backup in 11.1 or higher.

Best practices:  3 important things.

  1. Use a RESTORE POINT to ease the restore. The RESTORE POINT clause creates a normal restore point, which is a label for the SCN to which the backup must be recovered to be made consistent. The SCN is captured just after the datafile backups complete. RMAN resynchronizes restore points with the recovery catalog and maintains the restore points as long as the backup exists.
    More detail in the documentation below.
    https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV89551
  2. Use a non-default TAG; this will make it easier to manage the backup later.
  3. Use the CHECK LOGICAL option to avoid backing up logically corrupted blocks. By default, RMAN does not check for logical corruption. If you specify CHECK LOGICAL on the BACKUP command, however, then RMAN tests data and index blocks for logical corruption, such as corruption of a row piece or index entry, and logs them in the alert log located in the Automatic Diagnostic Repository (ADR). If you use RMAN with the CHECK LOGICAL option when backing up or restoring files, then it detects all types of block corruption that are possible to detect. What a disappointment to try to restore a backup taken last year and find out that many blocks are logically corrupted.

P.S.: I always use CHECK LOGICAL for all my backups (short- and long-term).

 

Performing a Long-Term Backup

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Feb 18 15:56:59 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTDB (DBID=2633899005)
 connected to recovery catalog database

RMAN> run {
 2> ALLOCATE CHANNEL tape1 
    DEVICE TYPE 'SBT_TAPE' PARMS 
    'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
 3> BACKUP CHECK LOGICAL 
    FILESPERSET 10 
    FULL DATABASE 
    TAG 'MONTHLY_FULL_20160218' 
    FORMAT 'RMAN_MONTHLY_FULL_%d_%s_%p_%T_%c' 
    KEEP UNTIL TIME 'SYSDATE + 400' 
    RESTORE POINT BKP_MONTHLY_20160218_1556;
 4> RELEASE CHANNEL tape1;
 5> }
 6> exit
 allocated channel: tape1
 channel tape1: SID=241 device type=SBT_TAPE
 channel tape1: Data Protection for Oracle: version 5.5.2.0

Starting backup at Feb 18 2016 15:57:07
 current log archived

backup will be obsolete on date Mar 24 2017 15:57:15
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 input datafile file number=00001 name=+TSM/testdb/datafile/system.501.901826545
 input datafile file number=00006 name=+TSM/testdb/datafile/users.487.901826545
 input datafile file number=00002 name=+TSM/testdb/datafile/sysaux.542.901826547
 input datafile file number=00003 name=+TSM/testdb/datafile/undotbs1.442.901826547
 input datafile file number=00004 name=+TSM/testdb/datafile/users.314.901826547
 input datafile file number=00005 name=+TSM/testdb/datafile/users.510.901826547
 channel tape1: starting piece 1 at Feb 18 2016 15:57:15
 channel tape1: finished piece 1 at Feb 18 2016 15:58:40
 piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:01:25

backup will be obsolete on date Mar 24 2017 15:58:40
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 including current SPFILE in backup set
 channel tape1: starting piece 1 at Feb 18 2016 15:58:41
 channel tape1: finished piece 1 at Feb 18 2016 15:58:42
 piece handle=RMAN_MONTHLY_FULL_TESTDB_329_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01
 current log archived
 backup will be obsolete on date Mar 24 2017 15:58:46
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed archived log backup set
 channel tape1: specifying archived log(s) in backup set
 input archived log thread=1 sequence=91 RECID=830 STAMP=904147125
 channel tape1: starting piece 1 at Feb 18 2016 15:58:46
 channel tape1: finished piece 1 at Feb 18 2016 15:58:47
 piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01

backup will be obsolete on date Mar 24 2017 15:58:47
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 including current control file in backup set
 channel tape1: starting piece 1 at Feb 18 2016 15:58:49
 channel tape1: finished piece 1 at Feb 18 2016 15:58:50
 piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01
 Finished backup at Feb 18 2016 15:58:50

released channel: tape1

Recovery Manager complete.

Change Backup To UNAVAILABLE

If this backup will be taken off-line in the tape library, then mark it as UNAVAILABLE so that RMAN does not try to use it during normal restore operations. A normal restore should use the short-term backups, which are usually still active in the tape library.

As I used a non-default tag, it's easy to change it.

RMAN> CHANGE BACKUP TAG MONTHLY_FULL_20160218 UNAVAILABLE;

changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 RECID=311 STAMP=904147035
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_329_1_20160218_1 RECID=312 STAMP=904147121
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 RECID=313 STAMP=904147126
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 RECID=314 STAMP=904147129
 Changed 4 objects to UNAVAILABLE status

How to restore it?

RMAN> shutdown abort;

Oracle instance shut down

RMAN> startup nomount;

connected to target database (not started)
 Oracle instance started

Total System Global Area 801701888 bytes

Fixed Size 2250648 bytes
 Variable Size 251660392 bytes
 Database Buffers 536870912 bytes
 Redo Buffers 10919936 bytes

RMAN> run {
 2> SET UNTIL RESTORE POINT BKP_MONTHLY_20160218_1556;
 3> RESTORE CONTROLFILE;
 4> STARTUP MOUNT;
 5> RESTORE DATABASE;
 6> RECOVER DATABASE;
 7> ALTER  DATABASE OPEN RESETLOGS;
 8> };

executing command: SET until clause

Starting restore at 18-02-2016 16:56:32
 allocated channel: ORA_SBT_TAPE_1
 channel ORA_SBT_TAPE_1: SID=225 device type=SBT_TAPE
 channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.2.0
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=233 device type=DISK

channel ORA_SBT_TAPE_1: starting datafile backup set restore
 channel ORA_SBT_TAPE_1: restoring control file
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:02
 output file name=+TSM/testdb/controlfile/current.518.901826451
 Finished restore at 18-02-2016 16:56:42

database is already started
 database mounted
 released channel: ORA_SBT_TAPE_1
 released channel: ORA_DISK_1

Starting restore at 18-02-2016 16:56:48
 Starting implicit crosscheck backup at 18-02-2016 16:56:48
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=225 device type=DISK
 Finished implicit crosscheck backup at 18-02-2016 16:56:49

Starting implicit crosscheck copy at 18-02-2016 16:56:49
 using channel ORA_DISK_1
 Finished implicit crosscheck copy at 18-02-2016 16:56:49

searching for all files in the recovery area
 cataloging files...
 no files cataloged

allocated channel: ORA_SBT_TAPE_1
 channel ORA_SBT_TAPE_1: SID=233 device type=SBT_TAPE
 channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.2.0
 using channel ORA_DISK_1

channel ORA_SBT_TAPE_1: starting datafile backup set restore
 channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
 channel ORA_SBT_TAPE_1: restoring datafile 00001 to +TSM/testdb/datafile/system.501.901826545
 channel ORA_SBT_TAPE_1: restoring datafile 00002 to +TSM/testdb/datafile/sysaux.542.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00003 to +TSM/testdb/datafile/undotbs1.442.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00004 to +TSM/testdb/datafile/users.314.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00005 to +TSM/testdb/datafile/users.510.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00006 to +TSM/testdb/datafile/users.487.901826545
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:45
 Finished restore at 18-02-2016 16:58:43

Starting recover at 18-02-2016 16:58:44
 using channel ORA_SBT_TAPE_1
 using channel ORA_DISK_1

starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
 channel ORA_SBT_TAPE_1: restoring archived log
 archived log thread=1 sequence=91
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
 archived log file name=+TSM/testdb/archivelog/2016_02_18/thread_1_seq_91.267.904150729 thread=1 sequence=91
 channel default: deleting archived log(s)
 archived log file name=+TSM/testdb/archivelog/2016_02_18/thread_1_seq_91.267.904150729 RECID=834 STAMP=904150729
 media recovery complete, elapsed time: 00:00:02
 Finished recover at 18-02-2016 16:58:51

database opened
 new incarnation of database registered in recovery catalog
 starting full resync of recovery catalog
 full resync complete

How to migrate from the old approach to the new approach for long-term backups

Many environments inherit old backup configurations from previous DBAs, so it is very common to see long-term backup environments set up using “backup database keep logs;”.

You may ask: I don't want to keep all logs, I only want to keep the backups that are not obsolete. Is that possible?

Yes, there is a workaround to stop RMAN from keeping all logs forever, if you are using 11.1 or above.

AGAIN, if you are using 10.2 or an earlier version you CAN'T do what is described below.

 

What you need to do.

First of all, you need to reconfigure your scripts so that long-term backups use a restore point from now on.

What to do with old backups.

We can leave the old setup in place until the last backup taken with “KEEP LOGS” retention becomes obsolete. All archived log backups will then be obsolete and RMAN will automatically delete them. (recommended)

P.S.: If you have a long-term backup with “KEEP FOREVER”, the archived log backups will never become obsolete.

There is a workaround (or bug) to delete all archived logs and make a “self-contained” backup even when using “keep logs”. This can be dangerous if not configured properly: RMAN will delete all archived logs that do not satisfy the retention policy. That means, if something goes wrong, you will have a backup of the database without the archived logs needed to recover it (i.e. you will not be able to recover that backup).

Steps:

  • Find in the RMAN catalog all backups taken with the KEEP clause.
  • Find all archived log backups taken between 24h before and 24h after the backup with the KEEP clause.
    • e.g. if the backup finished at 02/01/2016 10:00, find all archived log backups between 01/01/2016 10:00 and 03/01/2016 10:00
  • Change these backups (archived logs, controlfile and datafiles) to UNAVAILABLE (see the sketch after this list).
  • After changing these backups to UNAVAILABLE, all remaining archived log backups will be orphaned and will become obsolete.
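A minimal sketch of those steps, assuming a recovery catalog connection, the tag used earlier in this post, and the illustrative dates from the example above; verify the catalog view columns (RC_BACKUP_SET) and adjust the tag, dates and device type to your environment:

-- From the recovery catalog: backup sets created with a KEEP clause (KEEP_OPTIONS is empty for normal backups)
SELECT bs_key, completion_time, keep_options, keep_until
  FROM rc_backup_set
 WHERE keep_options IS NOT NULL;

RMAN> LIST BACKUP OF ARCHIVELOG FROM TIME "TO_DATE('01/01/2016 10:00','DD/MM/YYYY HH24:MI')"
        UNTIL TIME "TO_DATE('03/01/2016 10:00','DD/MM/YYYY HH24:MI')";
RMAN> CHANGE BACKUP TAG MONTHLY_FULL_20160218 UNAVAILABLE;
RMAN> CHANGE BACKUP OF ARCHIVELOG FROM TIME "TO_DATE('01/01/2016 10:00','DD/MM/YYYY HH24:MI')"
        UNTIL TIME "TO_DATE('03/01/2016 10:00','DD/MM/YYYY HH24:MI')" UNAVAILABLE;
RMAN> DELETE OBSOLETE;   -- the orphaned archived log backups now follow the normal retention policy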

 

Eg.

(dd-mm-yyyy hh24:mi)

Backup 1 ( 01/01/2016 12:00)

Backup 2 (01/02/2016 12:00)

Backup 3 (01/03/2016 12:00)

Backup 4 (01/04/2016 12:00)

 

 

Scenario 1:

Change only Backup 1, and the archived log backups needed to restore it, to UNAVAILABLE.

  • All archived log backups up to 01/02/2016 12:00 that do not have status UNAVAILABLE will become obsolete and will be deleted on the next “delete obsolete”.
  • All future archived log backups will be kept until Backup 2 becomes obsolete.

Scenario 2:

Change only Backup 2, and the archived log backups needed to restore it, to UNAVAILABLE.

  • All archivelogs will be kept.

Scenario 3:

Change only Backup 1,2,3 and 4 to UNAVAILABLE.

  • All archived logs will become obsolete; if you run “delete obsolete” you can still restore Backup 1, 2, 3 or 4, BUT YOU WILL NOT BE ABLE TO RECOVER THEM since there are no archived logs.

Scenario 4:

Change Backups 1, 2, 3 and 4, and the archived log backups needed to restore them, to UNAVAILABLE.

  • All archived log backups with status AVAILABLE will become obsolete and all archived logs will follow the default retention policy.

 

It seems like a bug to me, because Oracle simply ignores the archived log retention policy when the backup with the KEEP clause is UNAVAILABLE.

 

 

 

 


IBM POWER7 AIX and Oracle Database performance considerations


This post is intended to provide DBAs with advice and website links to assist with IBM Power Systems running in an Oracle environment.
Most issues that show up on POWER7 are the result of not following best practices advice that applies to all Power Systems generations.

Metalink: IBM POWER7 AIX and Oracle Database performance considerations -- 10g & 11g (Doc ID 1507249.1)

IBM White paper: IBM POWER7 AIX and Oracle Database performance considerations


Is using SHARED SERVER with a POLICY-MANAGED database supported?


While helping colleagues on the OTN Forum I found an issue that has not been solved yet, but we must be aware of it before implementing this setup.

When you use a policy-managed database, Oracle manages all instances automatically, on demand, without DBA intervention. For the whole thing to work well it is mandatory to use SCAN (IP addresses and SCAN listeners) and VIP (IP addresses and local listeners); the database uses these resources to automatically register its own services, and that is how the Oracle client finds an Oracle instance.

When you configure SHARED SERVER on a policy-managed database, one question arises:
SHARED SERVER does not use the LOCAL_LISTENER parameter to register the database with the listener; it uses the DISPATCHERS parameter.

The DISPATCHERS parameter is NOT like LOCAL_LISTENER, which is automatically updated by the database agent. You must therefore set DISPATCHERS manually on each instance, using the VIP of the node it runs on, as explained in the note How To Configure Shared Server Dispatchers For RAC Environment [ID 578524.1]. BUT this only works if you have an ADMIN-MANAGED database, because each instance is fixed to its node.
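For an admin-managed database that looks roughly like the sketch below; the node VIP names, port, dispatcher count and SIDs are hypothetical:

ALTER SYSTEM SET dispatchers='(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))(DISPATCHERS=2)' SCOPE=BOTH SID='orcl1';
ALTER SYSTEM SET dispatchers='(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))(DISPATCHERS=2)' SCOPE=BOTH SID='orcl2';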

A policy-managed database does not pin an instance to a specific node and does not have a fixed INSTANCE_NAME either, because the instance name can change automatically on demand or when you change the configuration of your server pool.

The BIG QUESTION:
How do you configure the DISPATCHERS parameter when the database is POLICY-MANAGED, given that DISPATCHERS cannot be reconfigured automatically, while the instance is dynamic and can be relocated to any node in the server pool?

When I get the answer I'll post Oracle's solution here. (If someone already has the answer, please post it here.)


What should I use: “srvctl add serverpool” or “crsctl add serverpool”?

I have seen some doubts about which utility to use and in which situation to use it.

I searched some Oracle-related sites and saw that people are still a bit confused about which command we should use.

But before we start, there is a rule for a Clusterware environment:

“srvctl” is to be used to manage resources with the ora.* prefix; “crsctl” may be used to query or start/stop resources with the ora.* prefix, but crsctl is not supported for modifying or editing ora.* resources.

See this note on MOS:

Oracle Clusterware and Application Failover Management [ID 790189.1]

Using crs_* or crsctl commands on resources with the prefix ora.* (resources provided by Oracle) remains unsupported.

 

So, if you created a resource with “srvctl”, that resource should be managed only with “srvctl”. If you created a resource with “crsctl”, it should be managed using the “crsctl” command.

Let’s talk about the concept of a Policy-Based Cluster.

Oracle Clusterware 11g release 2 (11.2) introduces a different method of managing nodes and resources used by a database called policy-based management.

With Oracle Clusterware 11g release 2 (11.2) and later, resources managed by Oracle Clusterware are contained in logical groups of servers called server pools. Resources are hosted on a shared infrastructure and are contained within server pools. The resources are restricted with respect to their hardware resource (such as CPU and memory) consumption by policies, behaving as if they were deployed in a single-system environment.

Policy-based management:

  • Enables dynamic capacity assignment when needed to provide server capacity in accordance with the priorities you set with policies
  • Enables allocation of resources by importance, so that applications obtain the required minimum resources, whenever possible, and so that lower priority applications do not take resources from more important applications
  • Ensures isolation where necessary, so that you can provide dedicated servers in a cluster for applications and databases

Applications and databases running in server pools do not share resources. Because of this, server pools isolate resources where necessary, but enable dynamic capacity assignments as required. Together with role-separated management, this capability addresses the needs of organizations that have standardized cluster environments, but allow multiple administrator groups to share the common cluster infrastructure.

This is only a concept.

Therefore Oracle divided this concept into two types of configuration:

Policy-Managed Databases and Policy-Based Management for non-database resources.

Policy-Managed Database

A database that you define as a cluster resource. Management of the database is defined by how you configure the resource, including on which servers the database can run and how many instances of the database are necessary to support the expected workload.

To configure a policy-managed database, Oracle already provides a pre-defined configuration.

So the options are limited and specific to database resources (such as services and databases).

For that reason Oracle provides “srvctl add serverpool”.

 $ srvctl add serverpool -h

Adds a server pool to the Oracle Clusterware.

Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
 -g <pool_name> Server pool name
 -l <min> Minimum size of the server pool (Default value is 0)
 -u <max> Maximum size of the server pool (Default value is -1 for unlimited maximum size)
 -i <importance> Importance of the server pool (Default value is 0)
 -n "<server_list>" Comma separated list of candidate server names
 -f Force the operation even though some resource(s) will be stopped
 -h Print usage
 

http://docs.oracle.com/cd/E11882_01/rac.112/e16795/srvctladmin.htm#BAJDJFCJ
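For example, a server pool for a policy-managed database could be created like this (the pool name, sizes and importance below are just illustrative values):

$ srvctl add srvpool -g backup_pool -l 1 -u 2 -i 5
$ srvctl config srvpool -g backup_pool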

Policy-Based Management for non-database resources

To configure non-database resources, Oracle provides another command with many more options: “crsctl add serverpool”.

This allows the DBA to explore all the options that policy-based management can supply.

 $ crsctl add serverpool -h
 Usage:
 crsctl add serverpool <spName> [[-file <filePath>] | [-attr "<attrName>=<value>[,...]"]] [-i]
 where
 spName Add named server pool
 filePath Attribute file
 attrName Attribute name
 value Attribute value
 -i Fail if request cannot be processed immediately
 -f Force option

http://docs.oracle.com/cd/E11882_01/rac.112/e16794/crsref.htm#CWADD92179
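For example, a server pool for a non-database application could be created like this (pool name and attribute values are illustrative):

$ crsctl add serverpool app_pool -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=10"
$ crsctl status serverpool app_pool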

So, we should NEVER mix the server pool used by database resources with the server pool used by non-database resources.

Also, never use the “crsctl” command to change a database server pool which was created by “srvctl”, and never put a database in a server pool created with the “crsctl” command.

A server pool for database resources must be created using “srvctl”.

A server pool for non-database resources must be created using “crsctl”.

Question: Is it possible to change ora.* resources with “crsctl”?

Yes, It’s possible but not supported by Oracle.

I hope this makes it clear.

Enjoy…


TIP: Some points about setting up RMAN on RAC Environment

In this post I will give some tips for setting up RMAN in a RAC environment.

I will not cover RMAN topics that are configured the same way in a standalone environment (e.g. incremental backups, use of the FRA, etc.).

First question: is there a difference in setting up RMAN between standalone and RAC environments?

The answer is YES; not a big one, but some points must be observed.

RMAN Catalog

First of all, in my point of view, using an RMAN catalog is mandatory: this is an HA environment, and restoring a database without an RMAN catalog can take a long time.

To protect and keep backup metadata for longer retention times than can be accommodated by the control file, you can create a recovery catalog. You should create the recovery catalog schema in a dedicated standalone database. Do not locate the recovery catalog with other production data. If you use Oracle Enterprise Manager, you can create the recovery catalog schema in the Oracle Enterprise Manager repository database.

What about HA of the RMAN catalog itself?

I always recommend placing the host that holds the RMAN catalog on a virtual machine, because it is a machine that requires few resources and little disk space and has low activity.

If the RMAN catalog host fails, it is easy to move it to another physical host or to recover the whole virtual machine.

If using a VM is not an option, use another cluster environment (e.g. a test cluster) if available.

The Database of RMAN catalog must be in ARCHIVELOG. Why?

It is a production environment (which is reason enough), it generates a very small amount of archived logs, and in case of corruption or user error (e.g. a user registered a new incarnation of the production database while validating a backup in the test environment) a point-in-time recovery may be necessary.

Because it is a small database with low activity, I see some customers not giving this database the importance it deserves. That should not happen.

High availability of execution of backup using RMAN:

We have some challenges:

  1. The backup must not affect the availability of the cluster, but it must be executed daily.
  2. The backup cannot depend on specific nodes (i.e. it must be able to run regardless of which nodes are active).
  3. Where should the backup scripts be stored? Where should the logs be stored?

I don't recommend using any of the cluster nodes to start or store the backup scripts, because if that node fails the backup will not be executed.
Use the host where the RMAN catalog is stored to keep your backup scripts too, and start the scripts from that host; the RMAN utility works as a client only, and the backup is always performed on the server side.

Doing this, you centralize all backup scripts and logs for your environment, which makes backup management easier.

Configuring the RMAN Snapshot Control File Location in a RAC 11.2

RMAN creates a copy of the control file for read consistency, this is the snapshot controlfile. Due to the changes made to the controlfile backup mechanism in 11gR2 any instances in the cluster may write to the snapshot controlfile. Therefore, the snapshot controlfile file needs to be visible to all instances.
The same happens when a backup of the controlfile is created directly from SQL*Plus: any instance in the cluster may write to the backup controlfile.
In 11gR2 onwards, the controlfile backup happens without holding the control file enqueue. For non-RAC database, this doesn’t change anything.
But, for RAC database, the snapshot controlfile location must be in a shared file system that will be accessible from all the nodes.
The snapshot controlfile MUST be accessible by all nodes of a RAC database.

See how to do that:

https://forums.oracle.com/forums/thread.jspa?messageID=9997615
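In practice this is a one-time RMAN configuration; a sketch, assuming an ASM disk group named +FRA (any shared location visible from all nodes will do):

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/snapcf_db11g.f';
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;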

Since version 11.1 : Node Affinity Awareness of Fast Connections

In some cluster database configurations, some nodes of the cluster have faster access to certain data files than to other data files. RMAN automatically detects this, which is known as node affinity awareness. When deciding which channel to use to back up a particular data file, RMAN gives preference to the nodes with faster access to the data files that you want to back up. For example, if you have a three-node cluster, and if node 1 has faster read/write access to data files 7, 8, and 9 than the other nodes, then node 1 has greater node affinity to those files than nodes 2 and 3.

Channel Connections to Cluster Instances with RMAN

Channel connections to the instances are determined by the connect string defined in the channel configuration. For example, if three channels are allocated using dbauser/pwd@service_name and the SQL*Net service name has load balancing turned on, then the channels are allocated on the nodes chosen by the load-balancing algorithm.

However, if the service name used in the connect string is not load balanced, then you can control at which instance the channels are allocated by using separate connect strings for each channel configuration. In that case your backup scripts will fail if that node/instance is down.
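A hedged illustration of per-instance channel connect strings (service names and credentials are hypothetical; as noted above, this ties each channel to one instance):

RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT 'dbauser/pwd@db11g_inst1';
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'dbauser/pwd@db11g_inst2';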

So, my recommendation in an admin-managed database environment is to dedicate a subset of nodes to perform the backup.

E.g.: if you have 3 nodes, use one or two of them to perform the backup, keeping the other node less loaded. If you are using load balancing in your connection, new connections will be directed to the least loaded node.

Autolocation for Backup and Restore Commands

RMAN automatically performs autolocation of all files that it must back up or restore. If you use the noncluster file system local archiving scheme, then a node can only read the archived redo logs that were generated by an instance on that node. RMAN never attempts to back up archived redo logs on a channel it cannot read.

During a restore operation, RMAN automatically performs the autolocation of backups. A channel connected to a specific node only attempts to restore files that were backed up to the node. For example, assume that log sequence 1001 is backed up to the drive attached to node1, while log 1002 is backed up to the drive attached to node2. If you then allocate channels that connect to each node, then the channel connected to node1 can restore log 1001 (but not 1002), and the channel connected to node2 can restore log 1002 (but not 1001).

Configuring Channels to Use Automatic Load Balancing

To configure channels to use automatic load balancing, use the following syntax:

CONFIGURE DEVICE TYPE [disk | sbt] PARALLELISM number_of_channels;

Where number_of_channels is the number of channels that you want to use for the operation. After you complete this one-time configuration, you can issue BACKUP or RESTORE commands.

Setting up parallelism in RMAN is not enough to keep things balanced: if you start the backup from a remote host using the default SERVICE_NAME and you are using parallelism, RMAN can start a session on each node and the backup will be performed by all nodes at the same time. This is not a problem in itself, but it can cause a performance issue in your environment due to the high load.

Even at night the backup can cause performance problems, because of database maintenance tasks (statistics gathering, verification of new SQL plans by the automatic SQL tuning task, etc.).

The bottleneck is usually the LAN or the SAN, so using all nodes to perform the backup can be a waste. If the backup runs over the LAN you can gain by using more than one node, but the server receiving the backup data will become the bottleneck.

I really don't like using more than 50% of the RAC nodes to execute the backup, because it increases the workload across the cluster and this can be a problem for the application or the database.

So, to prevent that, we can configure a database service to control where the backup will be performed.

Creating a Database Service to perform Backup

Before starting, I should explain some limitations of database services.

Some points about Oracle Services.

When a user or application connects to a database, Oracle recommends that you use a service for the connection. Oracle Database automatically creates one database service (default service is always the database  name)  when the database is created.   For more flexibility in the management of the workload using the database, Oracle Database enables you to create multiple services and specify which database instances offer the services.

You can define services for both policy-managed and administrator-managed databases.

  • Policy-managed database: When you define services for a policy-managed database, you assign the service to a server pool where the database is running. You can define the service as either uniform (running on all instances in the server pool) or singleton (running on only one instance in the server pool).
  • Administrator-managed database: When you define a service for an administrator-managed database, you define which instances normally support that service. These are known as the PREFERRED instances. You can also define other instances to support a service if the preferred instance fails. These are known as AVAILABLE instances.
About Service Failover in Administrator-Managed Databases

When you specify a preferred instance for a service, the service runs on that instance during normal operation. Oracle Clusterware attempts to ensure that the service always runs on all the preferred instances that have been configured for a service. If the instance fails, then the service is relocated to an available instance. You can also manually relocate the service to an available instance.

About Service Failover in Policy-Managed Databases

When you specify that a service is UNIFORM, Oracle Clusterware attempts to ensure that the service always runs on all the available instances for the specified server pool. If the instance fails, then the service is no longer available on that instance. If the cardinality of the server pool increases and a instance is added to the database, then the service is started on the new instance. You cannot manually relocate the service to a specific instance.

When you specify that a service is SINGLETON, Oracle Clusterware attempts to ensure that the service always runs on only one of the available instances for the specified server pool. If the instance fails, then the service fails overs to a different instance in the server pool. You cannot specify which instance in the server pool the service should run on.

For SINGLETON services, if a service fails over to an new instance, then the service is not moved back to its original instance when that instance becomes available again.

Summarizing the use of services

If your database is administrator-managed, we can create a service and define where the backup will be executed, and how many nodes to use, via preferred and available instances.

If your database is policy-managed, we cannot define exactly where the backup will be executed, but we can configure a SINGLETON service, which ensures the backup runs on only one node; if that node fails, the service is moved to another available node, but we cannot choose on which node the backup will be performed.

Note:

For connections to the target and auxiliary databases, the following rules apply:

Starting with 10gR2, these connections can use a connect string that does not bind to any particular instance. This means you can use load balancing.

Once a connection is established, however, it must be a dedicated connection  that cannot migrate to any other process or instance. This means that you still can’t use MTS or TAF.

Example creating service for Administrator-Managed Database

The backup will be executed on db11g2 and db11g3, but can be executed on db11g1 if db11g2 and db11g3 fail.

Set ORACLE_HOME to the same one used by the database.

$ srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P TAF_policy]

$ srvctl add service -d db11g -s srv_rman -r db11g2,db11g3 -a db11g1 -P NONE -j LONG

$ srvctl start service -d db11g -s srv_rman
Example creating service for Policy-Managed Database

With a SINGLETON service, the backup will be executed on the node where the service was started/assigned. The service will move to another host only if that node fails.

Set ORACLE_HOME to the same one used by the database.

$ srvctl config database -d db11g |grep "Server pools"

Server pools: racdb11gsp

$ srvctl add service -d db11g -s srv_rman -g racdb11gsp -c SINGLETON

$ srvctl start service -d db11g -s srv_rman
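From the catalog host you can then point RMAN at that service; a sketch with a hypothetical SCAN address, credentials and catalog alias (the service directs the connection to the node(s) it is running on):

$ rman target sys/password@//rac-scan.example.com:1521/srv_rman catalog rcatowner/rcatpwd@rcatdb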

If you have more than 2 nodes in the cluster (with a policy-managed database) and you want to use only some of the nodes to perform the backup, you can use the option below.

Configure a UNIFORM service (the service will be available on all nodes). You can control how many instances will be used to perform the backup, but you cannot choose on which nodes the backup will run. In fact the service does not control anything here: you set the RMAN PARALLELISM equal to the number of nodes you want to use.

Ex: I have 4 nodes but I want to start the backup on 2 nodes, so I must choose parallelism 2. Remember that Oracle can start 2 channels on the same host; this depends on the workload of each node.

When using a policy-managed database you should accept that you do not control on which node each instance is running: you have a pool with many nodes and Oracle manages all resources inside that pool. For this reason it is not possible to control where you place a heavier load.

This will only work if you are performing online backup or are using Parallel Backup.

Configuring RMAN to Automatically Backup the Control File and SPFILE

If you set CONFIGURE CONTROLFILE AUTOBACKUP to ON, then RMAN automatically creates a control file and an SPFILE backup after you run the BACKUP or COPY commands. RMAN can also automatically restore an SPFILE, if this is required to start an instance to perform recovery, because the default location for the SPFILE must be available to all nodes in your Oracle RAC database.

These features are important in disaster recovery because RMAN can restore the control file even without a recovery catalog. RMAN can restore an autobackup of the control file even after the loss of both the recovery catalog and the current control file. You can change the default name that RMAN gives to this file with the CONFIGURE CONTROLFILE AUTOBACKUP FORMAT command. Note that if you specify an absolute path name in this command, then this path must exist identically on all nodes that participate in backups.

RMAN performs the control file autobackup on the first allocated channel. Therefore, when you allocate multiple channels with different parameters, especially when you allocate a channel with the CONNECT command, determine which channel will perform the control file autobackup. Always allocate the channel for this node first.
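A minimal configuration sketch (the disk format location below is illustrative; the %F variable is mandatory in the autobackup format, and the location must be visible from every node):

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+FRA/%F';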

Enjoy..


Oracle Database Failover Active/Passive on Unix/Linux Platforms – Free of Charge

In this post I will show you how to set up a high-availability environment without the Oracle RAC option.

Oracle Fail Safe is available only for Windows; for Unix/Linux you would need third-party cluster software to do the failover.

Good News From Oracle:

Oracle Clusterware

Oracle Clusterware provides cluster membership and high availability services. It provides the cluster membership for features such as Oracle Real Application Clusters and Oracle ASM. It includes the following features:

  • Application monitoring, restart, and failover
  • Cluster membership services
  • Server monitoring and fencing
  • Single Client Access Name (SCAN)
  • Server Pools
  • Grid Naming Services

Oracle Clusterware can be used to protect any application (restarting or failing over the application in the event of a failure), free of charge, if one or more of the following conditions are met:

  • The server OS is supported by a valid Oracle Unbreakable Linux support contract.
  • The product to be protected is either:
    • Any Oracle product (e.g. Oracle Applications, Siebel, Hyperion, Oracle Database EE, Oracle Database XE)
    • Any third-party product that directly or indirectly stores data in an Oracle database
  • At least one of the servers in the cluster is licensed for Oracle Database (SE or EE)

A cluster is defined to include all the machines that share the same Oracle Cluster Registry (OCR) and Voting Disk.

http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/editions.htm

See the step-by-step here using Clusterware 11.1; we can adapt this setup to 11.2 using the SCAN feature, which makes it even easier.

http://www.oracle.com/technetwork/products/clusterware/overview/si-db-failover-11g-134623.pdf

If the link above is offline, click here.