What Patch to Apply? PSU? GI PSU? Proactive Bundle Patch?

For those unfamiliar with Oracle patching, it is a little confusing to decide which patch to apply when you get a table of different patches for the same version.


I will try to clarify some doubts.

Note: You must provide a valid My Oracle Support login in order to access the links below.

Patch version numbering changed

In November 2015 the version numbering for new Bundle Patches, Patch Set Updates and Security Patch Updates for Oracle Database changed: the 5th digit of the bundle version now carries the release date in the form “YYMMDD”, where:

  • YY is the last 2 digits of the year
  • MM is the numeric month (2 digits)
  • DD is the numeric day of the month (2 digits)
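As a sketch of the new scheme, the release date can be pulled out of a version string such as 11.2.0.4.160119 (a hypothetical example in the post-Nov-2015 format) with plain shell:

```shell
ver="11.2.0.4.160119"          # hypothetical patch version in the new format
stamp="${ver##*.}"             # 5th field: the YYMMDD release date
yy=$(echo "$stamp" | cut -c1-2)
mm=$(echo "$stamp" | cut -c3-4)
dd=$(echo "$stamp" | cut -c5-6)
echo "patch released on 20${yy}-${mm}-${dd}"   # -> patch released on 2016-01-19
```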

More detail can be found here: Oracle Database, Enterprise Manager and Middleware – Change to Patch Numbering from Nov 2015 onwards (Doc ID 2061926.1)

 

Changes on Database Security Patching from 12.1.0.1 onwards

Starting with Oracle Database version 12.1.0.1, Oracle will only provide Patch Set Update (PSU) patches to meet the Critical Patch Update (CPU) program requirements for security patching. SPU (Security Patch Update) patches will no longer be available. Oracle has moved to this simplified model due to the popularity of the PSU patches. PSUs have been Oracle’s preferred proactive patching vehicle since their inception in 2009.

Database Security Patching from 12.1.0.1 onwards (Doc ID 1581950.1)

 

Where to find last Patches for Database?

Use the Patch Assistant: Assistant: Download Reference for Oracle Database PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)

 

What Patch to apply: PSU, GI PSU, Proactive Bundle Patch, or Bundle Patch (Windows 32bit & 64bit)?

When using the Patch Assistant, it shows the table below.
In this case I searched for the latest patch for 12.1.0.2.

[Patch Assistant table: PSU / GI PSU / Proactive Bundle Patch downloads for 12.1.0.2]

Understanding the Patch Nomenclature:
New Patch Nomenclature for Oracle Products (Doc ID 1430923.1)

Note: As of April 2016, the Database Patch for Engineered Systems and Database In-Memory has been renamed from “Bundle Patch (BP)” to “Database Proactive Bundle Patch”.

Note: The Windows platform must use the “Bundle Patch (Windows 32bit & 64bit)”.

Database patch content:

  • SPU contains only the CPU program security fixes
  • PSU contains the CPU program security fixes and additional high-impact/low-risk critical bug fixes
  • Proactive Bundle Patch (PBP) includes all PSU fixes along with fixes targeted at the specific Bundle Patch environment.

These patches are cumulative, so if you have an Oracle Home (12.1.0.2) at the base release (i.e. no fixes) and apply the latest PSU or PBP, it will deliver all fixes from the base release up to the current patch version.
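Because the 5th digit is now a date stamp, the cumulative rule can be checked mechanically. A minimal shell sketch, with hypothetical stamps:

```shell
installed=160419   # YYMMDD stamp of the PSU currently applied (hypothetical)
needed=160119      # stamp of the PSU that first shipped the fix you need (hypothetical)
# Cumulative: any PSU/PBP with an equal or later stamp includes the older fix.
if [ "$installed" -ge "$needed" ]; then
  verdict="fix included (cumulative)"
else
  verdict="fix missing: apply a newer PSU/PBP"
fi
echo "$verdict"
```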

Where to apply each Patch?

  • PSU – Can be applied on Database Server, Client-Only and Instant Client homes.
  • GI PSU – Can be applied on a GI Home (Oracle Restart or Oracle Clusterware) in conjunction with RAC, RAC One Node, Single Instance, Client-Only and Instant Client homes.
  • Proactive Bundle Patch – Can be applied on a GI Home in conjunction with RAC, RAC One Node, Single Instance, Client-Only and Instant Client homes.

 

An installation can only use one of the SPU, PSU or Proactive Bundle Patch patching methods.

 

How to choose between them?

The “Database Proactive Bundle Patch” requires a bit more testing than a Patch Set Update (PSU) as it delivers a larger set of fixes.

If you are doing a fresh installation, you should apply the Database Proactive Bundle Patch.

The PSU is aimed at environments that are sensitive to change, because it requires less testing.

I have applied the “Database PSU”; how do I move to the “Database Proactive Bundle Patch”?

Moving from “Database PSU” to “Database Proactive Bundle Patch”

  • Back up your current setup
  • Fully roll back / deinstall the “Database PSU”
    • If using the OJVM PSU, that will likely require the OJVM PSU to be rolled back too
  • Apply / install the latest “Database Proactive Bundle Patch”
  • Re-apply any interim patches rolled back above (including the OJVM PSU if that was installed)

Note from Oracle: It is not generally advisable to switch from “Database PSU” to “Database SPU” method.

The note below can clarify any remaining doubts about this post.
Oracle Database – Overview of Database Patch Delivery Methods (Doc ID 1962125.1)

 

OPLAN Support

GI PSU and Proactive Bundle Patch are supported by OPlan.

OPlan is a utility that facilitates the patch installation process by providing you with step-by-step patching instructions specific to your environment.
In contrast to the traditional patching operation, applying a patch based on the README requires you to understand the target configuration and manually identify the patching commands relevant to your environment. OPlan eliminates the requirement of identifying the patching commands by automatically collecting the configuration information for the target, then generating instructions specific to the target configuration.

Oracle Software Patching with OPLAN (Doc ID 1306814.1)

Useful Notes:

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

Frequently Asked Questions (FAQ): Patching Oracle Database Server (Doc ID 1446582.1)

12.1.0.2 Database Proactive Bundle Patches / Bundle Patches for Engineered Systems and DB In-Memory – List of Fixes in each Bundle (Doc ID 1937782.1)


ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘unknown’

Oracle Grid Infrastructure is supported on the operating system, but ACFS isn’t. Why?


Grid Infrastructure is software that uses OS libraries, so Grid Infrastructure is not kernel dependent but OS-package dependent.

ACFS is a module of Grid Infrastructure whose drivers are installed/configured in the OS kernel, so ACFS is kernel dependent.

An Oracle ACFS file system is installed as a dynamically loadable vendor operating system (OS) file system driver and tool set that is developed for each supported operating system platform. The driver is implemented as a Virtual File System (VFS) and processes all file and directory operations directed to a specific file system.

ACFS is composed of three components: the Oracle ACFS, Oracle Kernel Services (OKS) and Oracle ADVM drivers, which are dynamically loaded when the Oracle ASM instance is started.

  • Oracle ACFS
    This driver processes all Oracle ACFS file and directory operations.
  • Oracle ADVM
    This driver provides block device services for Oracle ASM volume files that are used by file systems for creating file systems.
  • Oracle Kernel Services Driver (OKS)
    This driver provides portable driver services for memory allocation, synchronization primitives, and distributed locking services to Oracle ACFS and Oracle ADVM.

Before upgrading your OS kernel, you must add the ACFS drivers to your checklist, as you do with other OS drivers.

How does Oracle support ACFS on future kernels?

By releasing Patch Set Updates (PSUs): proactive, cumulative patches containing recommended bug fixes that are released on a regular and predictable schedule.

How does it work?

Check this EXAMPLE below:

[Example table: ACFS driver support by GI release, PSU level and kernel version over time]

Note: This table is only an example; the kernel versions and dates are fictitious.

From the image above we can see:

In Jan/2016, GI 11.2.0.4 and 12.1.0.2 don’t need a PSU because the base release already contains supported drivers, but 11.2.0.3 needs PSU 11.2.0.3.2 to get supported drivers for kernel 2.6.32-100.

In Feb/2016 kernel 2.6.65-30 was released, and ACFS deployment takes some time while developers build new ACFS drivers. So, in Feb/2016 no release supports kernel 2.6.65-30.

In Mar/2016 Oracle released PSUs under which all ACFS versions support that kernel.

From the example above we can conclude: it does not matter which Grid Infrastructure base version (such as 11.2.0.4 or 12.1.0.2) you are using; what matters is that the Grid Infrastructure must have supported ACFS drivers for that kernel.

Where to find Certification Matrix for ACFS ?

Oracle Support has a detailed MOS note: ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1).

ALWAYS check the MOS note above before any kernel update on environments that have ACFS configured.
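As a sketch of that checklist step, compare the kernel you plan to install against the supported list taken from the MOS note. The values here are the fictitious ones from the example above:

```shell
target_kernel="2.6.65-30"                  # kernel you plan to install (fictitious)
supported_kernels="2.6.32-100 2.6.39-400"  # fictitious list; the real one comes from Doc ID 1369107.1
# Simple membership test over a space-separated list.
case " $supported_kernels " in
  *" $target_kernel "*) verdict="supported" ;;
  *)                    verdict="NOT supported" ;;
esac
echo "ACFS drivers for kernel $target_kernel: $verdict"
```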

 


RHEL/OEL 7 – NTPD replaced by Chrony

Despite configuring NTP during the installation process, NTPD is not installed/configured/running after the installation completes.


On a fresh installation of RHEL/OEL 7, the default NTP client is chrony.

What is chrony?
Chrony was introduced as a new NTP client, provided in the chrony package. Chrony does not provide all the features available in the old NTP client (ntp), so ntp is still shipped for compatibility.

During Oracle Grid Infrastructure installation, runInstaller will fail during the prerequisite check because NTP is not configured:

NTP not configured. Network Time Protocol (NTP)  - Failed 
PRVF-7590 : "ntpd" is not running on node "xxx"

Oracle recommends using ntpd and disabling chronyd.

Just follow the steps below to install/configure/enable ntpd.

1. Check Chronyd service

# systemctl status chronyd.service
● chronyd.service - NTP client/server
 Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
 Active: active (running) since Mon 2016-05-30 01:15:33 BRT; 5s ago
 Process: 19464 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
 Process: 19456 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 19459 (chronyd)
 CGroup: /system.slice/chronyd.service
 └─19459 /usr/sbin/chronyd

May 30 01:15:33 node1 systemd[1]: Starting NTP client/server...
May 30 01:15:33 node1 chronyd[19459]: chronyd version 2.1.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +DEBUG +ASYNCDNS +IPV6 +SECHASH)
May 30 01:15:33 node1 chronyd[19459]: Frequency -23.731 +/- 0.023 ppm read from /var/lib/chrony/drift
May 30 01:15:33 node1 systemd[1]: Started NTP client/server.

2. Stop and disable Chronyd service

# systemctl stop chronyd.service
# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.

3. Install ntpd package

# yum install ntp -y

4. Add the “-x” option (slew time adjustments instead of stepping the clock, as Oracle Clusterware requires) to the “/etc/sysconfig/ntpd” file.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
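A small sanity check, assuming the file content above (on a real host you would read /etc/sysconfig/ntpd instead of the inline string used here):

```shell
# Contents of /etc/sysconfig/ntpd, inlined for illustration.
ntpd_opts='OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"'
# Verify the slewing option is present before starting ntpd.
case "$ntpd_opts" in
  *"-x"*) check="ok: -x present" ;;
  *)      check="WARNING: -x missing, Clusterware time checks may fail" ;;
esac
echo "$check"
```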

5. Enable and Start NTPD

#systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.


# systemctl start ntpd.service

# systemctl status ntpd.service
● ntpd.service - Network Time Service
 Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
 Active: active (running) since Mon 2016-05-30 01:23:09 BRT; 9s ago
 Process: 23048 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 23050 (ntpd)
 CGroup: /system.slice/ntpd.service
 └─23050 /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

May 30 00:27:07 node1 ntpd[2829]: Listen normally on 10 virbr0 192.168.122.1 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 11 lo ::1 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 12 eno16777984 fe80::250:56ff:fe90:21e9 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 13 eno33559296 fe80::250:56ff:fe90:d421 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listen normally on 14 eno50338560 fe80::250:56ff:fe90:f203 UDP 123
May 30 00:27:07 node1 ntpd[2829]: Listening on routing socket on fd #31 for interface updates
May 30 00:27:07 node1 systemd[1]: Started Network Time Service.
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c016 06 restart
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c012 02 freq_set ntpd 0.000 PPM
May 30 00:27:08 node1 ntpd[2829]: 0.0.0.0 c011 01 freq_not_set

Known issues that are solved by the procedure above:

# ntpq -p
localhost: timed out, nothing received
***Request timed out
# ntpstat 
Unable to talk to NTP daemon.  Is it running? 

 

 

 


Long-Term RMAN Backups (Online Backup Without Keep All Logs)


Short-term backups take periodic images of active data in order to provide a method of recovering data that has been deleted or destroyed.
These backups are retained only for a few days or weeks. The primary purpose of a short-term backup is to recover recent data; if a media failure or disaster occurs, you can use it to restore your backups and bring your data back online.

Most backup systems have two different setups, one for short-term and one for long-term backups.

Short-term backups are classic backups with short retention, stored in an active data pool; these backups are usually always available in the tape library or on disk.

Long-term backups are archival backups with long retention, often stored only on tapes; usually these tapes are kept off-site or in safes.

I will try to explain in a few lines the purpose of long-term backups.

For example, you can back up the database on the first day of every year to satisfy a regulatory requirement and store the media offsite.
Years after you make the archival backup, you can restore and recover it to query the data as it appeared at the time of the backup. This is data preservation.

Data preservation is related to data protection, but it serves a different purpose.
For example, you may need to preserve a copy of a database as it existed at the end of a business quarter.
This backup is not part of the disaster recovery strategy. The media to which these backups are written are often unavailable after the backup is complete.
You may send the tape into fire storage or ship a portable hard drive to a testing facility.

RMAN provides a convenient way to create a backup and exempt it from your backup retention policy. This type of backup is known as an archival backup.

Why this introduction about short-term or long-term backups?

I manage several environments, and I see that some DBAs fail to understand this difference and end up making incorrect configurations for archival and classic backups.

IMHO, the main villain is RDBMS 10.2 and earlier versions, because there is no long-term backup concept for online backups.

In 10.2, to take an archival backup without keeping all archived logs we must shut down the database (in many environments this is not possible); if we take an online backup with long retention (KEEP LOGS), we need to keep all archived logs until the first backup becomes obsolete. In this version there is no difference between a classic backup and an archival backup (in fact there is a difference, but a slight one).

Because of that, many DBAs are still stuck on this version’s concept.

In 11.1 this changed: you can create a consistent archival backup without needing to keep all archived logs.

When should the RMAN command (“backup database keep logs forever / until sysdate + XXX”) be used on 11.1 or above?
It should only be used if the DBA has a requirement to keep ALL database changes forever (or until some date) and needs to be able to recover to ANY point in time in the database’s life. If you don’t need that, this setup is insane and increases the cost and management burden of the environment.

Let’s start with some tests. I’m using rdbms 11.2.0.4.

How to take a long-term backup in 11.1 or higher.

Best practices:  3 important things.

  1. Use a RESTORE POINT to ease the restore. The RESTORE POINT clause creates a normal restore point, which is a label for an SCN to which the backup must be recovered to be made consistent. The SCN is captured just after the datafile backups complete. RMAN resynchronizes restore points with the recovery catalog and maintains the restore points as long as the backup exists.
    More detail in the documentation below.
    https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV89551
  2. Use a non-default TAG; this will make it easier to manage the backup later.
  3. Use the CHECK LOGICAL option to avoid backing up logically corrupted blocks. By default, RMAN does not check for logical corruption. If you specify CHECK LOGICAL on the BACKUP command, however, then RMAN tests data and index blocks for logical corruption, such as corruption of a row piece or index entry, and logs them in the alert log located in the Automatic Diagnostic Repository (ADR). If you use RMAN with the CHECK LOGICAL option when backing up or restoring files, it detects all types of block corruption that are possible to detect. What a disappointment to try to restore a backup taken last year and find out that there are many logically corrupted blocks.

P.S: I always use CHECK LOGICAL for all my backups (short- and long-term).

 

Performing a Long-Term Backup

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Feb 18 15:56:59 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTDB (DBID=2633899005)
 connected to recovery catalog database

RMAN> run {
 2> ALLOCATE CHANNEL tape1 
    DEVICE TYPE 'SBT_TAPE' PARMS 
    'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
 3> BACKUP CHECK LOGICAL 
    FILESPERSET 10 
    FULL DATABASE 
    TAG 'MONTHLY_FULL_20160218' 
    FORMAT 'RMAN_MONTHLY_FULL_%d_%s_%p_%T_%c' 
    KEEP UNTIL TIME 'SYSDATE + 400' 
    RESTORE POINT BKP_MONTHLY_20160218_1556;
 4> RELEASE CHANNEL tape1;
 5> }
 6> exit
 allocated channel: tape1
 channel tape1: SID=241 device type=SBT_TAPE
 channel tape1: Data Protection for Oracle: version 5.5.2.0

Starting backup at Feb 18 2016 15:57:07
 current log archived

backup will be obsolete on date Mar 24 2017 15:57:15
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 input datafile file number=00001 name=+TSM/testdb/datafile/system.501.901826545
 input datafile file number=00006 name=+TSM/testdb/datafile/users.487.901826545
 input datafile file number=00002 name=+TSM/testdb/datafile/sysaux.542.901826547
 input datafile file number=00003 name=+TSM/testdb/datafile/undotbs1.442.901826547
 input datafile file number=00004 name=+TSM/testdb/datafile/users.314.901826547
 input datafile file number=00005 name=+TSM/testdb/datafile/users.510.901826547
 channel tape1: starting piece 1 at Feb 18 2016 15:57:15
 channel tape1: finished piece 1 at Feb 18 2016 15:58:40
 piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:01:25

backup will be obsolete on date Mar 24 2017 15:58:40
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 including current SPFILE in backup set
 channel tape1: starting piece 1 at Feb 18 2016 15:58:41
 channel tape1: finished piece 1 at Feb 18 2016 15:58:42
 piece handle=RMAN_MONTHLY_FULL_TESTDB_329_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01
 current log archived
 backup will be obsolete on date Mar 24 2017 15:58:46
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed archived log backup set
 channel tape1: specifying archived log(s) in backup set
 input archived log thread=1 sequence=91 RECID=830 STAMP=904147125
 channel tape1: starting piece 1 at Feb 18 2016 15:58:46
 channel tape1: finished piece 1 at Feb 18 2016 15:58:47
 piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01

backup will be obsolete on date Mar 24 2017 15:58:47
 archived logs required to recover from this backup will be backed up
 channel tape1: starting compressed full datafile backup set
 channel tape1: specifying datafile(s) in backup set
 including current control file in backup set
 channel tape1: starting piece 1 at Feb 18 2016 15:58:49
 channel tape1: finished piece 1 at Feb 18 2016 15:58:50
 piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 tag=MONTHLY_FULL_20160218 
 comment=API Version 2.0,MMS Version 5.5.2.0
 channel tape1: backup set complete, elapsed time: 00:00:01
 Finished backup at Feb 18 2016 15:58:50

released channel: tape1

Recovery Manager complete.
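The “backup will be obsolete on date Mar 24 2017” lines in the log follow directly from KEEP UNTIL TIME 'SYSDATE + 400'. A quick way to preview that date from the shell (GNU date assumed):

```shell
# Backup taken on 18 Feb 2016 with KEEP UNTIL TIME 'SYSDATE + 400'.
taken="2016-02-18"
date -d "$taken + 400 days" +%Y-%m-%d   # -> 2017-03-24, matching the RMAN log
```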

Change Backup To UNAVAILABLE

If this backup will be taken off-line from the tape library, mark it as UNAVAILABLE to prevent RMAN from trying to use it during a normal restore operation. A normal restore should use short-term backups, which are usually active in the tape library.

As I used a non-default tag, it’s easy to change:

RMAN> CHANGE BACKUP TAG MONTHLY_FULL_20160218 UNAVAILABLE;

changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 RECID=311 STAMP=904147035
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_329_1_20160218_1 RECID=312 STAMP=904147121
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 RECID=313 STAMP=904147126
 changed backup piece unavailable
 backup piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 RECID=314 STAMP=904147129
 Changed 4 objects to UNAVAILABLE status

How to restore it?

RMAN> shutdown abort;

Oracle instance shut down

RMAN> startup nomount;

connected to target database (not started)
 Oracle instance started

Total System Global Area 801701888 bytes

Fixed Size 2250648 bytes
 Variable Size 251660392 bytes
 Database Buffers 536870912 bytes
 Redo Buffers 10919936 bytes

RMAN> run {
 2> SET UNTIL RESTORE POINT BKP_MONTHLY_20160218_1556;
 3> RESTORE CONTROLFILE;
 4> STARTUP MOUNT;
 5> RESTORE DATABASE;
 6> RECOVER DATABASE;
 7> ALTER  DATABASE OPEN RESETLOGS;
 8> };

executing command: SET until clause

Starting restore at 18-02-2016 16:56:32
 allocated channel: ORA_SBT_TAPE_1
 channel ORA_SBT_TAPE_1: SID=225 device type=SBT_TAPE
 channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.2.0
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=233 device type=DISK

channel ORA_SBT_TAPE_1: starting datafile backup set restore
 channel ORA_SBT_TAPE_1: restoring control file
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_331_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:02
 output file name=+TSM/testdb/controlfile/current.518.901826451
 Finished restore at 18-02-2016 16:56:42

database is already started
 database mounted
 released channel: ORA_SBT_TAPE_1
 released channel: ORA_DISK_1

Starting restore at 18-02-2016 16:56:48
 Starting implicit crosscheck backup at 18-02-2016 16:56:48
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: SID=225 device type=DISK
 Finished implicit crosscheck backup at 18-02-2016 16:56:49

Starting implicit crosscheck copy at 18-02-2016 16:56:49
 using channel ORA_DISK_1
 Finished implicit crosscheck copy at 18-02-2016 16:56:49

searching for all files in the recovery area
 cataloging files...
 no files cataloged

allocated channel: ORA_SBT_TAPE_1
 channel ORA_SBT_TAPE_1: SID=233 device type=SBT_TAPE
 channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.2.0
 using channel ORA_DISK_1

channel ORA_SBT_TAPE_1: starting datafile backup set restore
 channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
 channel ORA_SBT_TAPE_1: restoring datafile 00001 to +TSM/testdb/datafile/system.501.901826545
 channel ORA_SBT_TAPE_1: restoring datafile 00002 to +TSM/testdb/datafile/sysaux.542.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00003 to +TSM/testdb/datafile/undotbs1.442.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00004 to +TSM/testdb/datafile/users.314.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00005 to +TSM/testdb/datafile/users.510.901826547
 channel ORA_SBT_TAPE_1: restoring datafile 00006 to +TSM/testdb/datafile/users.487.901826545
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_328_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:45
 Finished restore at 18-02-2016 16:58:43

Starting recover at 18-02-2016 16:58:44
 using channel ORA_SBT_TAPE_1
 using channel ORA_DISK_1

starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
 channel ORA_SBT_TAPE_1: restoring archived log
 archived log thread=1 sequence=91
 channel ORA_SBT_TAPE_1: reading from backup piece RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1
 channel ORA_SBT_TAPE_1: piece handle=RMAN_MONTHLY_FULL_TESTDB_330_1_20160218_1 
 tag=MONTHLY_FULL_20160218
 channel ORA_SBT_TAPE_1: restored backup piece 1
 channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
 archived log file name=+TSM/testdb/archivelog/2016_02_18/thread_1_seq_91.267.904150729 thread=1 sequence=91
 channel default: deleting archived log(s)
 archived log file name=+TSM/testdb/archivelog/2016_02_18/thread_1_seq_91.267.904150729 RECID=834 STAMP=904150729
 media recovery complete, elapsed time: 00:00:02
 Finished recover at 18-02-2016 16:58:51

database opened
 new incarnation of database registered in recovery catalog
 starting full resync of recovery catalog
 full resync complete

How to migrate from the old style to the new style of long-term backup

Many environments inherit old backup configurations from previous DBAs, so it is very common to see long-term backup environments set up using “backup database keep logs;”.

You may ask: I don’t want to keep all logs; I want to keep only my backups that are not obsolete. Is that possible?

Yes, there is a workaround to stop RMAN from keeping all logs forever, if you are using 11.1 or above.

AGAIN, if you are using 10.2 or an earlier version, you CAN’T do what is described below.

 

What you need to do.

First of all, you need to re-configure your scripts so that long-term backups use a restore point from now on.

What to do with the old backups.

We can leave the old setup until the last backup taken with “KEEP LOGS” retention becomes obsolete. All archived log backups will then be obsolete and RMAN will automatically delete them. (recommended)

P.S: If you have a long-term backup with “KEEP FOREVER”, the archived log backups will never become obsolete.

There is a workaround (or bug) to delete all archived logs and make a “self-contained” backup even when using “keep logs”. This can be dangerous if not configured properly: RMAN will delete all archived logs that do not satisfy the retention policy. That means, if it goes wrong, you will have a database backup without the archived logs needed to recover it (i.e. you will not be able to recover that backup).

Steps:

  • Find in the RMAN catalog all backups with the KEEP clause.
  • Find all archived log backups between 24h before and 24h after the backup with the KEEP clause was taken.
    • e.g. if the backup finished at 02/01/2016 10:00, find all archived log backups between 02/01/2016 10:00 and 03/01/2016 10:00
  • Change these backups (archived logs, controlfile and datafiles) to UNAVAILABLE.
  • After changing these backups to UNAVAILABLE, all the archived log backups will be orphaned and become obsolete.

 

E.g. (dates in dd/mm/yyyy hh24:mi):

Backup 1 ( 01/01/2016 12:00)

Backup 2 (01/02/2016 12:00)

Backup 3 (01/03/2016 12:00)

Backup 4 (01/04/2016 12:00)

 

 

Scenario 1:

Change only Backup 1, and the archived logs needed to restore this backup, to UNAVAILABLE.

  • All archived logs that do not have status UNAVAILABLE up to 01/02/2016 12:00 will be obsolete and will be deleted on the next “delete obsolete”.
  • All future archived log backups will be kept until Backup 2 becomes obsolete.

Scenario 2:

Change only Backup 2, and the archived logs needed to restore this backup, to UNAVAILABLE.

  • All archivelogs will be kept.

Scenario 3:

Change only Backups 1, 2, 3 and 4 to UNAVAILABLE.

  • All archived logs will be obsolete; if you run “delete obsolete” you can still restore Backup 1, 2, 3 or 4, BUT YOU WILL NOT BE ABLE TO RECOVER THEM since there are no archived logs.

Scenario 4:

Change Backups 1, 2, 3 and 4, and the archived logs needed to restore these databases, to UNAVAILABLE.

  • All archived logs with status AVAILABLE will be obsolete, and all archived logs will obey the default retention policy.

 

It seems to me like a bug, because Oracle simply ignores the archived log retention policy if the backup with the KEEP clause is UNAVAILABLE.

 

 

 

 


Starting with Oracle In-Memory (aka IM) new Feature


I recommend that every DBA be aware of this feature, because it can really bring a HUGE performance difference to our databases.

Note: This feature is available starting with Oracle Database 12c Release 1 (12.1.0.2).

Good to Know

  • It is an option of Oracle Enterprise Edition (12.1.0.2 and above) - additional license required
  • It is not an in-memory database - it is an accelerator for our current database
  • It is not a column-store database - it allows keeping some of our data in a column store, which is non-persistent
  • It has nothing to do with Oracle TimesTen or Oracle Coherence
  • No additional hardware requirement, except additional OS RAM available

Price: Buy it

 

I'm posting useful video links from the Oracle Learning Library.

 

White paper: When to Use Oracle Database In-Memory (PDF)

White paper: Oracle Database In-Memory Advisor Best Practices (PDF)

Official blog of the In-Memory feature

 


ORAchk Health Checks For The Oracle Stack (ORACHK 2.2.4 and above)


Oracle Database 12c (12.1.0.*) and 11g (11.2.0.4) come with a feature called RACcheck, although RACcheck already existed before these releases. (In 2011 I posted about the benefits of RACcheck.)

A brief description of RACcheck:
.....
RACcheck - The Oracle RAC Configuration Audit Tool
RACcheck is designed to audit vital configuration settings for the Oracle Database, single instance databases, as well as Oracle Real Application Clusters (Oracle RAC) databases. It also includes checks for Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM) and Oracle Grid Infrastructure.

RACcheck provides best practices recommendations considering the whole stack, including Maximum Availability Architecture (MAA) configurations, and is therefore the ideal tool for regular health checks as well as pre- and post-upgrade best practices assessments.
.....

Oracle has now replaced/renamed RACcheck as ORAchk.

ORAchk- Health Checks for the Oracle Stack

ORAchk replaces the popular RACcheck to support a wider range of products. ORAchk version 2.2.4 is now available for download and includes the following key features:

  • RACcheck renamed to ORAchk
  • ORAchk daemon auto-start mode after node reboot (init integration)
  • Merge multiple ORAchk collection reports
  • Upload of installed patches to database
  • Collection Manager for ORAchk, RACcheck and Exachk (Document 1602329.1)
  • ORAchk signature file in /tmp on all nodes to verify last ORAchk run
  • New checks and bug fixes, including:
    • 30 Oracle E-Business AP module data integrity checks
    • 12 new Database checks
    • 8 new Solaris system checks

Supported Platforms

  • Linux x86-64* (Enterprise Linux, RedHat and SuSE 9, SuSE 10 & SuSE 11)
  • Oracle Solaris SPARC (Solaris 10 and 11)
  • Oracle Solaris x86-64 (Solaris 10 and 11)
  • AIX**
  • HPUX**

* 32-bit platforms not supported, no planned support for Linux Itanium
** Requires BASH Shell 3.2 or higher to be installed

Supported Oracle Releases

  • 10gR2
  • 11gR1
  • 11gR2
  • 12cR1

When to Run ORAchk

  • After initial Oracle RAC deployment
  • Before planned system maintenance
  • After planned system maintenance
  • At least once every three months

Install/Configure

It is recommended to run the tool as the database software owner (e.g. oracle). The user may run the tool as the Grid Infrastructure software owner (e.g. grid) and it will collect the same data, but database credentials must be supplied manually to perform the database-related audit checks. Typically, when run as oracle, the customer will have OS authentication set up for the oracle database software owner and the database login credentials will not be needed.

Download ORAchk

Stage Location:
It is recommended that the kit be staged and operated from a local filesystem on a single database server in order to provide the best performance possible.


    $ mkdir -p /u01/app/oracle/orachk

    [oracle@node11g01 install]$ cd /u01/app/oracle/orachk

    [oracle@node11g01 orachk]$ unzip orachk.zip
    Archive: orachk.zip
    inflating: raccheck
    inflating: rules.dat
    inflating: collections.dat
    inflating: readme.txt
    inflating: orachk
    creating: .cgrep/
    inflating: .cgrep/ogghc_12101.sql
    inflating: .cgrep/lcgrep4
    inflating: .cgrep/checkDiskFGMapping.sh
    inflating: .cgrep/ogghc_11204.sql
    inflating: .cgrep/lcgreps9
    inflating: .cgrep/ogghc_11203.sql
    inflating: .cgrep/scgrepx86
    inflating: .cgrep/acgrep
    inflating: .cgrep/oracle-upstarttmpl.conf
    inflating: .cgrep/check_reblance_free_space.sql
    inflating: .cgrep/CollectionManager_App.sql
    inflating: .cgrep/exalogic_zfs_checks.aksh
    inflating: .cgrep/hiacgrep
    inflating: .cgrep/init.tmpl
    inflating: .cgrep/lcgreps10
    inflating: .cgrep/preupgrd.sql
    inflating: .cgrep/diff_collections.pl
    inflating: .cgrep/merge_collections.pl
    inflating: .cgrep/ggdiscovery.sh
    creating: .cgrep/profiles/
    inflating: .cgrep/profiles/DA94919CD0DE0913E04312C0E50A7996.prf
    inflating: .cgrep/profiles/D49C0FBF8FBF4B1AE0431EC0E50A0F24.prf
    extracting: .cgrep/profiles/F13E11974A282AB3E04312C0E50ABCBF.prf
    inflating: .cgrep/profiles/EF6C016813C51366E04313C0E50AE11F.prf
    inflating: .cgrep/profiles/D8367AD6754763FEE04312C0E50A6FCB.prf
    inflating: .cgrep/profiles/DF65D6117CB41054E04312C0E50A69D1.prf
    inflating: .cgrep/profiles/EA5EE324E7E05128E04313C0E50A4B2A.prf
    inflating: .cgrep/profiles/E1BF012E8F210839E04313C0E50A7B68.prf
    inflating: .cgrep/profiles/D462A6F7E9C340FDE0431EC0E50ABE12.prf
    inflating: .cgrep/profiles/D49AD88F8EE75CD8E0431EC0E50A0BC3.prf
    inflating: .cgrep/profiles/E2E972DDE1E14493E04312C0E50A1AB1.prf
    inflating: .cgrep/profiles/F32F44CE0BCD662FE04312C0E50AB058.prf
    inflating: .cgrep/profiles/E8DF76E07DD82E0DE04313C0E50AA55D.prf
    inflating: .cgrep/profiles/D49B218473787400E0431EC0E50A0BB9.prf
    inflating: .cgrep/profiles/D49C0AB26A6D45A8E0431EC0E50ADE06.prf
    inflating: .cgrep/profiles/DFE9C207A8F2428CE04313C0E50A6B0A.prf
    inflating: .cgrep/profiles/D49C4F9F48735396E0431EC0E50A9A0B.prf
    inflating: .cgrep/profiles/D49BDC2EC9E624AEE0431EC0E50A3E12.prf
    inflating: .cgrep/profiles/DF65D0F7FB6F1014E04312C0E50A7808.prf
    inflating: .cgrep/scnhealthcheck.sql
    inflating: .cgrep/pxhcdr.sql
    inflating: .cgrep/lcgrep5
    inflating: .cgrep/scgrep
    inflating: .cgrep/raw_data_browser.pl
    inflating: .cgrep/profiles.dat
    inflating: .cgrep/rack_comparison.py
    inflating: .cgrep/versions.dat
    inflating: .cgrep/create_version.pl
    inflating: .cgrep/lcgreps11
    inflating: .cgrep/utluppkg.sql
    inflating: .cgrep/utlusts.sql
    inflating: .cgrep/reset_crshome.pl
    inflating: .cgrep/asrexacheck
    inflating: .cgrep/lcgrep6
    inflating: .cgrep/utlu112i.sql
    inflating: UserGuide.txt

    Running ORAchk Interactively

    [oracle@node11g01 orachk]$ ./orachk

    CRS stack is running and CRS_HOME is not set. Do you want to set CRS_HOME to /u01/app/11.2.0/grid?[y/n][y]

    Checking ssh user equivalency settings on all nodes in cluster

    Node node11g02 is configured for ssh user equivalency for oracle user

    Searching for running databases . . . . .

    . .
    List of running databases registered in OCR
    1. dborcl
    2. None of above

    Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].1
    . .

    Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    -------------------------------------------------------------------------------------------------------
    Oracle Stack Status
    -------------------------------------------------------------------------------------------------------
    Host Name CRS Installed ASM HOME RDBMS Installed CRS UP ASM UP RDBMS UP DB Instance Name
    -------------------------------------------------------------------------------------------------------
    node11g01 Yes Yes Yes Yes Yes Yes dborcl_1
    node11g02 Yes Yes Yes Yes Yes Yes dborcl_2
    -------------------------------------------------------------------------------------------------------

    Copying plug-ins

    . . . . . . . . . . . . . . . . . . . . . . . . . . .

    . . . . . .

    17 of the included audit checks require root privileged data collection. If sudo is not configured or the root password is not available, audit checks which require root privileged data collection can be skipped.

    1. Enter 1 if you will enter root password for each host when prompted

    2. Enter 2 if you have sudo configured for oracle user to execute root_orachk.sh script

    3. Enter 3 to skip the root privileged collections

    4. Enter 4 to exit and work with the SA to configure sudo or to arrange for root access and run the tool later.

    Please indicate your selection from one of the above options for root access[1-4][1]:- 2

    *** Checking Best Practice Recommendations (PASS/WARNING/FAIL) ***

    Collections and audit checks log file is
    /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552/log/orachk.log
    .
    .
    .
    .
    ---------------------------------------------------------------------------------
    CLUSTERWIDE CHECKS
    ---------------------------------------------------------------------------------
    ---------------------------------------------------------------------------------

    Detailed report (html) - /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552/orachk_node11g01_dborcl_031814_170552.html

    UPLOAD(if required) - /u01/app/oracle/orachk/orachk_node11g01_dborcl_031814_170552.zip


    Below is the report generated in this test:
    Oracle RAC Assessment Report

    ORAchk Usage Options

    [oracle@node11g01 orachk]$ ./orachk -h
    Usage : ./orachk [-abvhpfmsuSo:c:t:]
    -a All (Perform best practice check and recommended patch check)
    -b Best Practice check only. No recommended patch check
    -h Show usage
    -v Show version
    -p Patch check only
    -m exclude checks for Maximum Availability Architecture (MAA) scorecards(see user guide for more details)
    -u Run orachk to check pre-upgrade or post-upgrade best practices for 11.2.0.3,11.2.0.4.0 and 12.1.0.1
    -o pre or -o post is mandatory with -u option like ./orachk -u -o pre
    -f Run offline. Checks will be performed on data already collected from the system
    -o Argument to an option. If -o is followed by v, V, Verbose or VERBOSE, it will print checks which pass on the screen;
    if the -o option is not specified, it will print only failures on screen. e.g.: orachk -a -o v

    -clusternodes
    Pass comma separated node names to run orachk only on subset of nodes.
    -dbnames
    Pass comma separated database names to run orachk only on subset of databases
    -localonly
    Run orachk only on local node.
    -debug
    Run orachk in debug mode. Debug log will be generated.
    eg:- ./orachk -debug
    -nopass
    Skip PASS'ed checks in the orachk report and the upload to database.

    -noscore
    Do not print healthscore in HTML report.

    -diff <report_1> <report_2> [-outfile <filename>]
    Diff two orachk reports. Pass directory name or zip file or html report file as <report_1> & <report_2>

    -exadiff <collection_1> <collection_2>
    Compare two different Exalogic racks and see if both are from the same release. Pass directory name or zip file as <collection_1> & <collection_2> (applicable for Exalogic only)

    -c Used only under the guidance of Oracle support or development to override default components

    -initsetup : Setup auto restart. Auto restart functionality automatically brings up orachk daemon when node starts
    -initrmsetup : Remove auto restart functionality
    -initcheck : Check if auto restart functionality is setup or not
    -initpresetup : Sets root user equivalency for COMPUTE, STORAGE and IBSWITCHES. (root equivalency for COMPUTE nodes is mandatory for setting up auto restart functionality)

    -d
    start : Start the orachk daemon
    start_debug : Start the orachk daemon in debug mode
    stop : Stop the orachk daemon
    status : Check if the orachk daemon is running
    info : Print information about running orachk daemon
    stop_client : Stop the orachk daemon client
    nextautorun : print the next auto run time
    -daemon
    run orachk only if daemon is running
    -nodaemon
    Don't use daemon to run orachk
    -set
    configure orachk daemon parameter like "param1=value1;param2=value2... "

    Supported parameters are:-

    AUTORUN_INTERVAL :- Automatic rerun interval in daemon mode. Set it to zero (the default) to disable automatic reruns.

    AUTORUN_SCHEDULE * * * * :- Automatic run at a specific time in daemon mode.
                     | | | |
                     | | | +----- day of week (0 - 6) (0 to 6 are Sunday to Saturday)
                     | | +------- month (1 - 12)
                     | +--------- day of month (1 - 31)
                     +----------- hour (0 - 23)

    example: orachk -set "AUTORUN_SCHEDULE=8,20 * * 2,5" will schedule runs on Tuesday and Friday at hours 8 and 20.

    AUTORUN_FLAGS : orachk flags to use for auto runs.

    example: orachk -set "AUTORUN_INTERVAL=12h;AUTORUN_FLAGS=-profile sysadmin" to run sysadmin profile every 12 hours

    orachk -set "AUTORUN_INTERVAL=2d;AUTORUN_FLAGS=-profile dba" to run dba profile once every 2 days.

    NOTIFICATION_EMAIL : Comma separated list of email addresses used for notifications by daemon if mail server is configured.

    PASSWORD_CHECK_INTERVAL : Interval to verify passwords in daemon mode

    COLLECTION_RETENTION : Purge orachk collection directories and zip files older than specified days.

    -unset
    unset the parameter
    example: orachk -unset "AUTORUN_SCHEDULE"

    -get
    Print the value of parameter

    -excludeprofile
    Pass a specific profile to exclude.
    The list of supported profiles is the same as for -profile.

    -merge
    Pass comma separated collection names (directory or zip files) to merge collections and prepare a single report.
    eg:- ./orachk -merge orachk_hostname1_db1_120213_163405.zip,orachk_hostname2_db2_120213_164826.zip

    -vmguest
    Pass comma separated filenames containing exalogic guest VM list(applicable for Exalogic only)

    -hybrid [-phy]
    phy :Pass comma separated physical compute nodes(applicable for Exalogic only)
    eg:- ./orachk -hybrid -phy phy_node1,phy_node2

    -profile Pass specific profile.
    List of supported profiles:

    asm asm Checks
    clusterware Oracle clusterware checks
    compute_node Compute Node checks (Exalogic only)
    control_VM Checks only for Control VM(ec1-vm, ovmm, db, pc1, pc2). No cross node checks
    dba dba Checks
    ebs Oracle E-Business Suite checks
    el_extensive Extensive EL checks
    el_lite Exalogic-Lite Checks(Exalogic Only)
    el_rackcompare Data Collection for Exalogic Rack Comparison Tool(Exalogic Only)
    goldengate Oracle GoldenGate checks
    maa Maximum Availability Architecture Checks
    obiee obiee Checks(Exalytics Only)
    storage Oracle Storage Server Checks
    switch Infiniband switch checks
    sysadmin sysadmin checks
    timesten timesten Checks(Exalytics Only)
    virtual_infra OVS, Control VM, NTP-related and stale VNICs check (Exalogic Only)
    zfs ZFS storage appliances checks (Exalogic Only)

    -cells
    Pass comma separated storage server names to run orachk only on selected storage servers.

    -ibswitches
    Pass comma separated infiniband switch names to run orachk only on selected infiniband switches.

    -zfsnodes
    Pass comma separated ZFS storage appliance names to run orachk only on selected storage appliances.
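
Tying a few of the scoping options together: the sketch below assembles a best-practice-only run restricted to two nodes and one database. The node and database names and the install path are illustrative; the flags (-b, -clusternodes, -dbnames) are the ones documented above.

```shell
# Build a targeted ORAchk command line; echo it for review before running.
# Path, node names and database name are examples, not real values.
ORACHK_HOME="/u01/app/oracle/orachk"    # illustrative staging directory
NODES="node11g01,node11g02"             # -clusternodes: comma-separated list
DBS="dborcl"                            # -dbnames: comma-separated list

CMD="$ORACHK_HOME/orachk -b -clusternodes $NODES -dbnames $DBS"
echo "$CMD"    # inspect first; run it with: eval "$CMD"
```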

    Other Useful ORAchk Options Not Covered Here

  • Using ORAchk Silently
  • ORAchk can optionally be run in “silent” or “non-interactive” mode to enable scheduling and automation.
    This is required only if the customer does not want to use the ORAchk daemon functionality.

  • Using ORAchk Daemon Mode Operation
  • This functionality permits non-interactive (batch or silent mode) execution at a regular interval.

    When running ORAchk in daemon mode, the most recent and next most recent (if any) collection reports are automatically compared. If a mail address is configured, a summary will be emailed along with attachments for the reports and the comparison report.
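
A minimal setup sketch using only the -set and -d options from the usage listing above; the interval and e-mail address are placeholders, and the commands are echoed for review rather than executed:

```shell
# Configure daemon parameters, then start the daemon and check its schedule.
# None of these values come from a real installation.
PARAMS="AUTORUN_INTERVAL=2d;NOTIFICATION_EMAIL=dba-team@example.com"
echo "./orachk -set \"$PARAMS\""    # 1. apply daemon parameters
echo "./orachk -d start"            # 2. start the daemon
echo "./orachk -d nextautorun"      # 3. confirm the next automatic run
```

Run the echoed commands from the ORAchk staging directory as the database software owner.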

  • Report Comparisons with ORAchk
  • ORAchk has the ability to perform report comparisons between two ORAchk reports.
    This allows for trending of Success Factor and Best Practice changes over time, after planned maintenance, etc., within a user-friendly HTML report.

  • ORAchk in Upgrade Readiness Mode
  • ORAchk can be used to obtain an automated 11.2.0.3 (or above) Upgrade Readiness Assessment.
    The goal of the ORAchk Upgrade Readiness Assessment is to make the process of upgrade planning for Oracle RAC and Oracle Clusterware target versions 11.2.0.3 and above as smooth as possible by automating many of the
    manual pre and post checks detailed in various upgrade related documents.
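
Per the usage listing above, -o pre or -o post is mandatory with -u. A sketch of the two invocations, echoed for review rather than executed (run them from the staging directory as the database software owner):

```shell
# Pre- and post-upgrade best-practice assessments using the documented
# -u/-o options; nothing here is executed against a real cluster.
PRE="./orachk -u -o pre"     # before upgrading (e.g. to 11.2.0.3 or above)
POST="./orachk -u -o post"   # after the upgrade completes
echo "$PRE"
echo "$POST"
```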

    Refer to the MOS note for more details:
    ORAchk - Oracle Configuration Audit Tool (Doc ID 1268927.2)

    ORAchk User's Guide
    For detailed instructions on how to run ORAchk, including troubleshooting steps, available options, etc.

    Enjoy!!!


    IBM POWER7 AIX and Oracle Database performance considerations


    This post is intended to provide DBAs with advice and links to assist with IBM Power Systems running in an Oracle environment.
    Most issues that show up on POWER7 are the result of not following the best-practice advice that applies to all Power Systems generations.

    Metalink: IBM POWER7 AIX and Oracle Database performance considerations -- 10g & 11g (Doc ID 1507249.1)

    IBM White paper: IBM POWER7 AIX and Oracle Database performance considerations

