RACcheck – RAC Configuration Audit Tool

RACcheck is a tool developed by the RAC Assurance development team for use by customers to automate the assessment of RAC systems for known configuration problems and best practices.

RACcheck is designed to audit important configuration settings within Oracle Real Application Clusters (RAC), Oracle Clusterware (CRS), Automatic Storage Management (ASM) and Grid Infrastructure environments. The tool audits configuration settings in the following categories:

  1. OS kernel parameters
  2. OS packages
  3. Many other OS configuration settings important to RAC.
  4. CRS/Grid Infrastructure
  5. RDBMS
  6. ASM
  7. Database parameters
  8. Many other database configuration settings important to RAC.

Features
1. RACcheck is NON-INTRUSIVE and does not change anything in the environment, except as detailed below:

- SSH user equivalence for the RDBMS software owner is assumed to be configured among all the database servers being audited, so that the tool can execute commands on the remote database server nodes. If the tool determines that user equivalence is not established, it will offer to set it up either temporarily or permanently, at the option of the user. If the user chooses to set up SSH user equivalence temporarily, the script does so for the duration of the run and then returns the system to the state in which it originally found SSH user equivalence. To configure SSH user equivalence outside the tool (if not already configured), consult My Oracle Support Note 372795.1.

- RACcheck creates a number of small output files into which it collects the data necessary to perform the assessment.

- RACcheck creates and executes some scripts dynamically in order to accomplish some of the data collection

- RACcheck cleans up after itself, removing any temporary files that were created and are not needed as part of the collection.

2. RACcheck interrogates the system to determine the status of the Oracle stack components (i.e., Grid Infrastructure, RDBMS, RAC, etc.) and whether they are installed and/or running. Depending upon the status of each component, the tool runs the appropriate collections and audit checks. If, due to the local environment's configuration, the tool is unable to properly determine the needed environmental information, please refer to the TROUBLESHOOTING section.

3. Watchdog daemon – RACcheck automatically runs a daemon in the background to monitor command execution progress. If, for any reason, a command run by the tool hangs or takes longer than anticipated, the monitor daemon kills the hung command after a configurable timeout so that the main tool execution can progress. When that happens, the hung collection or command is skipped and a notation is made in the log. If the default timeout is too short, see the TROUBLESHOOTING section regarding adjustment of the RAT_TIMEOUT and RAT_ROOT_TIMEOUT parameters.

4. If RACcheck’s driver files are older than 90 days, the driver files are considered to be “stale” and the script will notify the user of a stale driver file. A new version of the tool and its driver files (kit) must be obtained from MOS Note 1268927.1.

5. When RACcheck completes the collection and analysis, it produces two reports: summary and detailed. An output .zip file is also produced; this file can be provided to Oracle Support for further analysis if an SR needs to be logged. The detailed report contains Benefit/Impact, Risk and Action/Repair information, and in many cases it also references publicly available documents with additional information about the problem and how to resolve it.

6. The results of the audit checks can be optionally uploaded into database tables for reporting purposes. See below for more details on this subject.

7. In some cases customers may want to stage RACcheck on a shared filesystem so that it can be accessed from various systems but maintained in a single location, rather than being copied to each cluster on which it may be used. By default, the tool creates a subdirectory and its output files in the location where the tool is staged. If that staging area is a read-only filesystem, or if the user would like the output created elsewhere for any reason, the RAT_OUTPUT environment variable can be set to any valid writable location and the output will be created there.
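A minimal sketch of how the environment variables mentioned above could be used before running the tool. The output path and timeout values are assumptions for illustration; only the variable names (RAT_OUTPUT, RAT_TIMEOUT, RAT_ROOT_TIMEOUT) come from the text:

```shell
# Redirect output away from a read-only staging area and lengthen the
# watchdog timeouts (values here are illustrative, not recommendations)
export RAT_OUTPUT=/tmp/raccheck_out   # any valid writable location
export RAT_TIMEOUT=180                # per-command watchdog timeout (seconds)
export RAT_ROOT_TIMEOUT=600           # timeout for root-owned collections
mkdir -p "$RAT_OUTPUT"
# run the tool only if it is actually staged in the current directory
[ -x ./raccheck ] && ./raccheck || echo "raccheck not staged in $PWD"
```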

Applies to:
Oracle Server – Enterprise Edition – Version: 10.2.0.1 to 11.2.0.2 – Release: 10.2 to 11.2

  • Linux x86
  • IBM AIX on POWER Systems (64-bit)
  • Oracle Solaris on SPARC (64-bit)
  • Linux x86-64

To download the RACcheck tool, use this note on My Oracle Support (MOS):
RACcheck – RAC Configuration Audit Tool [ID 1268927.1]

Example of report output:

raccheck Report

Enjoy


Oracle Universal Installer (OUI) fails while extracting files…

I have faced problems like this several times. I know it is a simple problem to solve, but I'll leave this tip anyway.

Recently, while installing Oracle 11.2.0.3 in an AIX environment, I faced a problem during installation.

The error was: "Invalid distance code". What is that? A bug in 11.2.0.3? Maybe!

You will not find the solution for this error anywhere, not even on the moon.

So, instead of wasting time debugging OUI, I noticed that the error was raised while the installation binaries were being extracted, so I took a shortcut.

Cause:

On UNIX/Linux, the unzip utility extracts files even if they are corrupted.
So the installation files were corrupted and I had not noticed it.

unzip -t p10404530_112030_AIX64-5L_1of7.zip |grep -v OK
Archive:  p10404530_112030_AIX64-5L_1of7.zip
    testing: database/stage/Components/oracle.nlsrtl.rsf/11.2.0.3.0/1/DataFiles/filegroup1.jar   bad CRC 0e90be4a  (should be d9efb175)
    testing: database/stage/Components/oracle.sysman.oms.core/10.2.0.4.4/1/DataFiles/filegroup7.jar   bad CRC 6c68df26  (should be 8a9ea9bb)
At least one error was detected in p10404530_112030_AIX64-5L_1of7.zip.


unzip -t p10404530_112030_AIX64-5L_2of7.zip |grep -v OK
Archive:  p10404530_112030_AIX64-5L_2of7.zip

  error:  invalid compressed data to inflate
    testing: database/stage/Components/oracle.rdbms.install.seeddb/11.2.0.3.0/1/DataFiles/Expanded/filegroup1/Seed_Database.dfb
  error:  invalid compressed data to inflate
At least one error was detected in p10404530_112030_AIX64-5L_2of7.zip.

Solution:

Always test your installation zip files before extracting them.

unzip -t p10404530_112030_AIX64-5L_1of7.zip |grep -v OK
Archive:  p10404530_112030_AIX64-5L_1of7.zip
No errors detected in compressed data of p10404530_112030_AIX64-5L_1of7.zip.

unzip -t p10404530_112030_AIX64-5L_2of7.zip |grep -v OK
Archive:  p10404530_112030_AIX64-5L_2of7.zip
No errors detected in compressed data of p10404530_112030_AIX64-5L_2of7.zip.
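Rather than testing each part by hand, all seven parts can be checked in one loop. This is a sketch using the patch-set filenames from the post; `unzip -t` exits non-zero when a CRC check fails, which catches exactly the "bad CRC" and "invalid compressed data" cases shown above:

```shell
# Verify every downloaded part before extracting any of them
status=0
for f in p10404530_112030_AIX64-5L_*of7.zip; do
  [ -e "$f" ] || continue                  # glob matched nothing; skip
  if unzip -tq "$f" > /dev/null 2>&1; then
    echo "$f: OK"
  else
    echo "$f: CORRUPT - download this part again"
    status=1
  fi
done
```

Extract the archives only if the loop finishes with `status` still 0.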

I hope this helps.
Enjoy…


Tip & Hint: DB_CREATE_FILE_DEST Behavior when using ASM

Recently I saw a database administrator run into an issue while restoring a database with RMAN and ASM, when the database is located in more than one diskgroup.
Situation:
The datafiles and redo logs were created in two different diskgroups.
It was necessary to restore the database, so the DBA removed all datafiles from both diskgroups and ran the restore.
During the restore, RMAN reported restoring the datafiles to the correct diskgroup, showing the original path/file at the RMAN prompt; but at the end of the restore all datafiles had been restored to only one diskgroup: they were moved/renamed to the diskgroup specified in the DB_CREATE_FILE_DEST parameter.
Question: Is this normal behavior?
Yes. When using ASM and Oracle parameters such as DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n, the DBA must be careful to either use all OMF files or keep track of the files not created as OMF. There is an order of precedence in which files are created with these parameters.
By default, when you create a database using DBCA with datafiles stored in ASM, the DB_CREATE_FILE_DEST parameter is set during database creation; DB_CREATE_ONLINE_LOG_DEST_n is not set.

Datafiles Behavior

The restore behavior is the same as the file creation behavior for OMF files. When OMF files are restored, they follow an order of precedence:

  1. If "SET NEWNAME" is specified, RMAN will use that name for the restore.
  2. If the original file exists, RMAN will use the original filename for the restore.
  3. If DB_CREATE_FILE_DEST is set and the original file does not exist, RMAN will use the diskgroup specified in DB_CREATE_FILE_DEST.
  4. If DB_CREATE_FILE_DEST is not set and the original file does not exist, RMAN will create another name for that file in the original diskgroup.

So, if you do not want surprises during or after the restore, unset the DB_CREATE_FILE_DEST parameter before starting the restore.

alter system set DB_CREATE_FILE_DEST='' scope=memory;

After the restore/recover (i.e., when opening the database), you can still be surprised if DB_CREATE_FILE_DEST is not set.

So, I recommend setting DB_CREATE_FILE_DEST after "RECOVER DATABASE" and before "OPEN RESETLOGS", or setting DB_CREATE_ONLINE_LOG_DEST_n, because during "open resetlogs" Oracle will create the online log files if they do not exist.
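A minimal sketch of the sequence described above. The +DATA diskgroup name is an assumption for illustration; substitute your own:

```sql
-- Before RESTORE: unset, so RMAN honors the original datafile locations
ALTER SYSTEM SET DB_CREATE_FILE_DEST='' SCOPE=MEMORY;

-- RMAN> RESTORE DATABASE;
-- RMAN> RECOVER DATABASE;

-- After RECOVER and before OPEN RESETLOGS: set it again (hypothetical
-- +DATA diskgroup) so the online logs are created where intended
ALTER SYSTEM SET DB_CREATE_FILE_DEST='+DATA' SCOPE=MEMORY;
ALTER DATABASE OPEN RESETLOGS;
```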

Online Logs (Redo) Behavior

When OMF online log files are created, they follow an order of precedence:

  1. If the original file exists, RMAN will use the original filename.
  2. If DB_CREATE_ONLINE_LOG_DEST_n is set and the original file does not exist, RMAN will use the diskgroup specified in DB_CREATE_ONLINE_LOG_DEST_n.
  3. If DB_CREATE_ONLINE_LOG_DEST_n is not set, the original file does not exist, and both DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST (FRA) are set, RMAN will use the diskgroups specified in both (DB_CREATE_FILE_DEST & DB_RECOVERY_FILE_DEST) to multiplex the online logs.
  4. If neither DB_CREATE_ONLINE_LOG_DEST_n nor DB_RECOVERY_FILE_DEST is set, the original file does not exist, and DB_CREATE_FILE_DEST is set, RMAN will use the diskgroup specified in DB_CREATE_FILE_DEST.
  5. If neither DB_CREATE_ONLINE_LOG_DEST_n nor DB_CREATE_FILE_DEST is set, the original file does not exist, and DB_RECOVERY_FILE_DEST is set, RMAN will use the diskgroup specified in DB_RECOVERY_FILE_DEST.
  6. If none of DB_CREATE_ONLINE_LOG_DEST_n, DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST is set and the original file does not exist, Oracle will raise "ORA-19801: initialization parameter DB_RECOVERY_FILE_DEST is not set" during "alter database open resetlogs".
Hope this helps.

Enjoy…


Why Use Parallel Processing?

This demonstration compares the use of parallel and serial processing for the same SQL query.
This demonstration is associated with lessons 2 and 3 of the Oracle University seminar: Parallel Processing in Oracle 11g.
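To give a flavor of what the demonstration compares, the same query can be run serially and in parallel using optimizer hints. The SALES table and degree of parallelism here are hypothetical, for illustration only:

```sql
-- Compare elapsed time and execution plans for the two runs
SELECT /*+ NO_PARALLEL(s) */ COUNT(*) FROM sales s;   -- serial execution
SELECT /*+ PARALLEL(s, 4) */ COUNT(*) FROM sales s;   -- four parallel servers
```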

Enjoy


How to Migrate a Microsoft SQL Server Database to Oracle Database 11g

Overview

What Is SQL Developer?

Oracle SQL Developer is a free graphical tool that enhances productivity and simplifies database development tasks. Using Oracle SQL Developer, you can browse database objects, run SQL statements, edit and debug PL/SQL, and run reports, whether provided or user-created.

Microsoft SQL Server Migration Overview

Using Oracle SQL Developer Migration Workbench, you can quickly migrate your third-party database to Oracle.

There are four main steps in the database migration process:

 

Capture the Source Database

The first step is to capture a "snapshot" of the Microsoft SQL Server database. This can be done in two ways.

  1. Online Capture: This requires creating a connection in SQL Developer to a live Microsoft SQL Server database. Using JDBC, the Microsoft SQL Server database metadata can be accessed and the Capture Model created.
  2. Offline Capture: This involves using BCP scripts generated by SQL Developer to extract the Microsoft SQL Server database metadata to files. These files can then be "captured" by SQL Developer to create the Captured Model.

The second method is what you will perform in this tutorial.

Using SQL Developer's Offline Capture feature, the Microsoft SQL Server Northwind sample database has been extracted into offline data files. The SQLServer2005.ocp file generated by the Capture tool contains the database schema information for the Microsoft SQL Server Northwind Traders database. Oracle SQL Developer Migration Workbench uses this file as the basis for building a representation of the structure of the source Microsoft SQL Server database. This structure is called the Captured Model.

Convert the Captured Database

Oracle SQL Developer Migration Workbench uses the Captured Model to convert the captured objects to Oracle-format objects, building up a representation of the structure of the destination database. This structure is called the Converted Model.

Generate the Oracle Database

Oracle SQL Developer Migration Workbench generates DDL statements to create the new Oracle database, based upon the Converted Model objects. Running the DDL statements results in the creation of the objects in the Oracle database.

Migrate the Data

The last step in the process is to migrate the data. You can do this in one of two ways.

  1. Online Data Move: You can create a connection from within Oracle SQL Developer to your Microsoft SQL Server source database and migrate the data.
  2. Offline Data Move: You can export the data from Microsoft SQL Server. SQL Developer will create a series of BCP and sqlldr files that you can run from a batch file.

The second method is what you will perform in this tutorial.

In this tutorial, the required scripts for the offline migration have already been generated and modified. The detailed steps for both the offline and online methods are available in the tutorial linked below.

See link below

http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/hol08/sqldev_migration/mssqlserver/migrate_microsoft_sqlserver_otn.htm

Enjoy

 

