How to Convert a Full Oracle Database 10g from Linux x86-64 to AIX 64-bit (Different Endian Formats)
Posted: 23/01/2011 | Author: Levi Pereira | Filed under: 10g R1, 10g R2, 11g R1, 11g R2, AIX, Database, Database Migration, Linux, Operating System

In this post I'll show how to migrate a full Oracle Database 10g from Linux (little endian) to AIX (big endian).
I chose Oracle 10g, but the procedure remains valid for higher versions.
To use the Transportable Database feature, which recreates an entire database from one platform on another, both platforms must have the same endian order.
I have the following scenario.
Source Host: libano
Platform: Linux OEL 5.5 x64
Endian Format: Little
Database Release: 10.2.0.5
Filesystem: Ext3
Destination Host: butao
Platform: AIX 6.1 64bit
Endian Format: Big
Database Release: 10.2.0.5
Filesystem: jfs2
We can check whether the platforms are compatible (same endian format) by executing the query below on the source host.
SELECT * FROM V$DB_TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT
----------- ---------------------------------------- --------------
          7 Microsoft Windows IA (32-bit)            Little
         10 Linux IA (32-bit)                        Little
          5 HP Tru64 UNIX                            Little
         11 Linux IA (64-bit)                        Little
         15 HP Open VMS                              Little
          8 Microsoft Windows IA (64-bit)            Little
         13 Linux x86 64-bit                         Little
         12 Microsoft Windows x86 64-bit             Little
         17 Solaris Operating System (x86)           Little
         20 Solaris Operating System (x86-64)        Little
         19 HP IA Open VMS                           Little

11 rows selected.
The AIX platform doesn't appear here because the source platform is little endian, so we can discard the Transportable Database feature.
But we still have hope: we can use Cross-Platform Transportable Tablespaces.
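If the endian terms are unfamiliar: the two formats store multi-byte values with opposite byte orders, which is exactly why the datafiles must be converted before the destination platform can read them. A minimal Python sketch (purely illustrative, not part of the migration) shows the difference:

```python
import struct

# The 32-bit integer 1, written in each byte order.
little = struct.pack("<I", 1)  # Linux x86-64 style (little endian)
big = struct.pack(">I", 1)     # AIX on POWER style (big endian)

print(little.hex())  # 01000000 -> least significant byte first
print(big.hex())     # 00000001 -> most significant byte first

# Rereading little-endian bytes as big endian yields a different number,
# which is why block contents cannot simply be copied across platforms.
print(struct.unpack(">I", little)[0])  # 16777216
```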
CONVERT does not perform in-place conversion of datafiles. It produces output files in the correct format for use on the destination platform, but does not alter the contents of the source datafiles.
If you convert the datafiles on the source platform, use the CONVERT TABLESPACE command; if you convert them on the destination platform, you must use the CONVERT DATAFILE command.
The most important thing in this migration is knowing the restrictions.
Restrictions on CONVERT TABLESPACE and CONVERT DATAFILE
Note the following restrictions on CONVERT TABLESPACE and CONVERT DATAFILE:
- Both source and destination databases must be running with the COMPATIBLE initialization parameter set to 10.0 or higher.
- Not all combinations of source and destination platforms are supported. To determine whether your source and destination platforms are supported, query V$TRANSPORTABLE_PLATFORM. If both the source and destination platforms are listed in this view, then CONVERT can be used to prepare datafiles from one platform for use on the other.
- The tablespace must be made read-write at least once in release 10g before it can be transported to another platform using CONVERT. Hence, any read-only tablespace (or previously transported tablespace) from a previous release must first be made read-write before it can be transported to another platform.
- RMAN does not process user datatypes that require endian conversions.
- Prior to release 10g, CLOBs were created with a variable-width character set and stored in an endian-dependent format. The CONVERT command does not perform conversions on these CLOBs. Instead, RMAN captures the endian format of each LOB column and propagates it to the target database. Subsequent reads of this data by the SQL layer interpret the data correctly based upon either endian format, and write the data out in an endian-independent format if the Tablespace is writeable. CLOBs created in Oracle Database Release 10g are stored in character set AL16UTF16, which is platform-independent.
Kick-off
Verify Platform Support…
Source Host/Database
v$database shows what platform the database is running on.
v$transportable_platform shows all platforms supported by Oracle for cross-platform tablespace transport.
SET LINESIZE 100
COL PLATFORM_NAME FOR A40
SELECT A.platform_id, A.platform_name, B.endian_format
  FROM v$database A, v$transportable_platform B
 WHERE B.platform_id (+) = A.platform_id;
Output of Source Platform / Host: libano:
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT
----------- ---------------------------------------- --------------
         13 Linux x86 64-bit                         Little
Output of Destination Platform / Host: butao:
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT
----------- ---------------------------------------- --------------
          6 AIX-Based Systems (64-bit)               Big
In this test the database has only one user to be migrated. You can use the same procedure to migrate all database users (except SYS-owned schemas) at the same time.
The set of tablespaces to be transported must be self-contained: objects in the tablespace set cannot reference or depend on objects outside the set (i.e., all segments of a user must be migrated at the same time). Self-containment problems will cause the metadata export to fail.
Self-Containment
Let's check self-containment.
First, let's identify which tablespaces are necessary for our migration.
In this example I’ll migrate the user/schema named SOE.
The first step is identifying which tablespaces will be transported:
SELECT TABLESPACE_NAME
  FROM DBA_SEGMENTS
 WHERE OWNER='SOE'
 GROUP BY TABLESPACE_NAME
 ORDER BY 1;

TABLESPACE_NAME
------------------------------
SOE
SOE_FAST_PART
SOE_INDX
SOE_LOB
SOE_SLOW_PART
Let's take a look at which segment types we have:
SELECT SEGMENT_TYPE, TABLESPACE_NAME
  FROM DBA_SEGMENTS
 WHERE OWNER='SOE'
 GROUP BY SEGMENT_TYPE, TABLESPACE_NAME
 ORDER BY 2;

SEGMENT_TYPE       TABLESPACE_NAME
------------------ ------------------------------
TABLE              SOE
TABLE PARTITION    SOE
TABLE PARTITION    SOE_FAST_PART
INDEX              SOE_INDX
INDEX              SOE_LOB
LOBINDEX           SOE_LOB
LOBSEGMENT         SOE_LOB
TABLE              SOE_LOB
TABLE PARTITION    SOE_SLOW_PART
The statement below can be used to determine whether the tablespaces are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE).
BEGIN
  SYS.dbms_tts.transport_set_check(
      'SOE,SOE_FAST_PART,SOE_INDX,SOE_LOB,SOE_SLOW_PART',
      incl_constraints => TRUE,
      full_check       => FALSE);
END;
/

PL/SQL procedure successfully completed.
After invoking this PL/SQL package, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of Tablespace is self-contained, this view is empty.
SELECT * FROM SYS.transport_set_violations;

no rows selected
Good, no self-containment problems.
Restriction with Data Types
Check the tablespace set for columns with problematic data types:
- Data Pump (the expdp utility) can't transport XMLTypes.
- Original export (the exp utility) can't transport BINARY_FLOAT or BINARY_DOUBLE.
- Manual conversion may be required for RAW, LONG RAW, BFILE, ANYTYPE, and user-defined data types.
Let's verify whether we have problems with data types.
SELECT tc.data_type, COUNT(*)
  FROM dba_tables t, dba_tab_columns tc
 WHERE t.owner = 'SOE'
   AND tc.owner = t.owner
   AND tc.table_name = t.table_name
 GROUP BY tc.data_type
 ORDER BY 1;

DATA_TYPE                                  COUNT(*)
---------------------------------------- ----------
BLOB                                              3
CHAR                                              1
DATE                                             32
INTERVAL YEAR(2) TO MONTH                         1
NUMBER                                           88
NVARCHAR2                                         2
ROWID                                             1
TIMESTAMP(6) WITH LOCAL TIME ZONE                 1
VARCHAR2                                        122
Good … no problems with the data types.
Let's check whether any of the tablespaces contain XMLType tables:
SELECT xt.owner, at.table_name
  FROM dba_xml_tables xt, all_all_tables at
 WHERE at.owner = xt.owner
   AND at.table_name = xt.table_name
   AND at.tablespace_name IN
       ('SOE','SOE_FAST_PART','SOE_INDX','SOE_LOB','SOE_SLOW_PART');

no rows selected
Good … no problematic data types.
Checking External Objects
When using Transportable Database or Transportable Tablespaces, it's necessary to check whether the database has external tables, directories, or BFILEs.
This is mandatory because these objects will not be migrated automatically.
To complete this step we will use the package DBMS_TDB.CHECK_EXTERNAL, which returns TRUE if the database has external tables, directories, or BFILEs, and FALSE otherwise.
SET SERVEROUTPUT ON
DECLARE
  external BOOLEAN;
BEGIN
  external := DBMS_TDB.CHECK_EXTERNAL;
END;
/

The following external tables exist in the database:
SOE.TBE_ACTLOG_PARTNER, SOE.TBE_ASSOCIATIONS_PARTNER, SOE.TBE_AUDITOCC_PARTNER,
SOE.TBE_CLIENT_SCHEDULES_PARTNER, SOE.TBE_DB_PARTNER, SOE.TBE_DRIVES_PARTNER,
SOE.TBE_DRMEDIA_PARTNER, SOE.TBE_EVENTS_PARTNER, SOE.TBE_LIBRARIES_PARTNER,
SOE.TBE_LIBVOLUMES_PARTNER, SOE.TBE_LOG_PARTNER
The following directories exist in the database:
SYS.EXT_PARTNER_DIR, SYS.DIR_EXP_CONV, SYS.DATA_PUMP_DIR, SYS.ORACLE_OCM_CONFIG_DIR
We have such objects, so we will need to migrate them manually.
Start Migration
In this migration I will use the Data Pump (expdp and impdp).
Typically, I see people using the following steps to migrate:
- Export the schema
- Export the tablespaces
- Convert the tablespaces with RMAN
- Move the files to the destination host
- Create the users in the destination database and grant the necessary privileges
- Import the tables, attaching the tablespaces to the destination database
- Import other objects such as procedures, packages, etc.
But in this migration I will use a different approach:
- Export the schema
- Export the external tables
- Export the tablespaces
- Convert the tablespaces with RMAN
- Move the files to the destination host
- Import the schema
- Import the external tables
- Import the tables, attaching the tablespaces to the destination database
You may ask: why import the schema first? Why not attach the tablespaces with the tables and import the schema afterwards?
The simple answer: it's easier for me to recompile invalidated objects than to recreate all the users manually.
Remember: to attach a tablespace, the owning user must already exist in the database.
I choose this way, because it saves time.
A Data Pump schema import will recreate the user and execute all of the associated security privileges (grants, user password history), whereas the old imp utility required the DBA to create the user ID before importing.
When you migrate only one user, it's easy to recreate the user and the associated security privileges; but when you migrate an entire database, there can be many users. In my last migration the database had more than 3,000 users with different passwords and privileges. I know I could edit the export dump file and recreate the users with the same passwords and grants, but that's hard work.
Using my migration strategy, all objects will be invalidated when you import the schema, because there are no tables in the schema yet, and the import will finish with many errors.
But don't worry; this is easily solved by attaching the tablespaces to the database and recompiling all the invalidated objects.
I no longer need to worry about mapping users and grants; impdp does this automatically.
Exporting Source Database
To use Data Pump we will need a logical directory in the database.
My stage area for the dump is '/u01/app/oracle/flashrecovery_area/DBCONV/export'.
SQL> CREATE DIRECTORY DIR_EXP_CONV AS '/u01/app/oracle/flashrecovery_area/DBCONV/export';

Directory created.

SQL> GRANT READ,WRITE ON DIRECTORY DIR_EXP_CONV TO SYSTEM;

Grant succeeded.
We need to change the user's default tablespace to a tablespace that already exists in the destination database.
This is necessary because the import (impdp) will fail to create the user if the default tablespace does not exist.
In my case I choose USERS.
First, I need to identify and generate the command that will restore the correct default tablespace for the user on the destination database.
SELECT 'ALTER USER '||USERNAME||' DEFAULT TABLESPACE '||DEFAULT_TABLESPACE||';' CMD
  FROM DBA_USERS
 WHERE USERNAME IN ('SOE');

CMD
--------------------------------------------------
ALTER USER SOE DEFAULT TABLESPACE SOE;
Now I can change the default tablespace to USERS before the export.
SQL> ALTER USER SOE DEFAULT TABLESPACE USERS;

User altered.
Before exporting the database or converting datafiles, we need to put all tablespaces of the migrated user in READ ONLY mode; this prevents DML in the schema during the migration.
This step is mandatory.
SELECT 'ALTER TABLESPACE "'||TABLESPACE_NAME||'" READ ONLY;' AS "CMD READ ONLY"
  FROM DBA_SEGMENTS
 WHERE OWNER='SOE'
 GROUP BY TABLESPACE_NAME
 ORDER BY 1;

CMD READ ONLY
------------------------------------------------------------
ALTER TABLESPACE "SOE" READ ONLY;
ALTER TABLESPACE "SOE_FAST_PART" READ ONLY;
ALTER TABLESPACE "SOE_INDX" READ ONLY;
ALTER TABLESPACE "SOE_LOB" READ ONLY;
ALTER TABLESPACE "SOE_SLOW_PART" READ ONLY;

SQL> ALTER TABLESPACE "SOE" READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE "SOE_FAST_PART" READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE "SOE_INDX" READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE "SOE_LOB" READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE "SOE_SLOW_PART" READ ONLY;
Tablespace altered.
Let's identify how many objects are in the schema; this will help us at the end of the migration to be sure that all objects were migrated.
SELECT COUNT(OBJECT_NAME) FROM DBA_OBJECTS WHERE OWNER='SOE';

COUNT(OBJECT_NAME)
------------------
               246

SELECT COUNT(OBJECT_TYPE), OBJECT_TYPE
  FROM DBA_OBJECTS
 WHERE OWNER='SOE'
 GROUP BY OBJECT_TYPE
 ORDER BY OBJECT_TYPE;

COUNT(OBJECT_TYPE) OBJECT_TYPE
------------------ -------------------
                 4 FUNCTION
                30 INDEX
                 3 JAVA CLASS
                 1 JAVA SOURCE
                 3 LOB
                 1 PACKAGE
                 1 PACKAGE BODY
                 2 PROCEDURE
                 3 SEQUENCE
                34 TABLE
               160 TABLE PARTITION
                 1 TRIGGER
                 1 TYPE
                 2 VIEW

14 rows selected.
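When running this same count on the destination at the end of the migration, the two per-type breakdowns can be compared programmatically rather than by eye. A small hypothetical helper (a sketch; the dictionaries stand in for the query output above):

```python
def diff_object_counts(source, destination):
    """Return {object_type: (source_count, destination_count)} for every mismatch."""
    mismatches = {}
    for obj_type in set(source) | set(destination):
        s = source.get(obj_type, 0)
        d = destination.get(obj_type, 0)
        if s != d:
            mismatches[obj_type] = (s, d)
    return mismatches

# A few of the source figures from the query above (illustrative subset).
source = {"FUNCTION": 4, "INDEX": 30, "TABLE": 34, "TABLE PARTITION": 160}
destination = {"FUNCTION": 4, "INDEX": 30, "TABLE": 34, "TABLE PARTITION": 160}

print(diff_object_counts(source, destination))  # {} -> all object types match
```

An empty result means every object type has the same count on both sides; any entry pinpoints what was lost.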
In the first export I will export the entire schema, with users, grants, and objects, except tables and indexes.
Excluding/including an object also excludes/includes its dependent objects, so table constraints will not be exported at this time.
[oracle@libano ~]$ export NLS_DATE_FORMAT="DD-MM-YYYY HH24:MI:SS"
[oracle@libano ~]$ expdp DIRECTORY=DIR_EXP_CONV DUMPFILE=exp_schema_soe.dmp SCHEMAS=SOE EXCLUDE=TABLE,INDEXES

Export: Release 10.2.0.5.0 - 64bit Production on Monday, 17 January, 2011 17:33:57
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** DIRECTORY=DIR_EXP_CONV DUMPFILE=exp_schema_soe.dmp SCHEMAS=SOE EXCLUDE=TABLE,INDEXES
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE
Processing object type SCHEMA_EXPORT/JAVA_CLASS/JAVA_CLASS
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/flashrecovery_area/DBCONV/export/exp_schema_soe.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:34:27
Since tables were excluded from the schema export, I will need to export the external tables separately.
Identifying External Tables ….
SELECT OWNER||'.'||TABLE_NAME||',' CMD
  FROM DBA_EXTERNAL_TABLES
 WHERE OWNER='SOE';

CMD
--------------------------------------------------------------
SOE.TBE_ACTLOG_PARTNER,
SOE.TBE_ASSOCIATIONS_PARTNER,
SOE.TBE_AUDITOCC_PARTNER,
SOE.TBE_CLIENT_SCHEDULES_PARTNER,
SOE.TBE_DB_PARTNER,
SOE.TBE_DRIVES_PARTNER,
SOE.TBE_DRMEDIA_PARTNER,
SOE.TBE_EVENTS_PARTNER,
SOE.TBE_LIBRARIES_PARTNER,
SOE.TBE_LIBVOLUMES_PARTNER,
SOE.TBE_LOG_PARTNER,
SOE.TBE_MEDIA_PARTNER,
SOE.TBE_OCCUPANCY_PARTNER,
SOE.TBE_PATHS_PARTNER,
SOE.TBE_QTD_SCRATCH_PARTNER,
SOE.TBE_STGPOOLS_PARTNER,
SOE.TBE_SUMMARY_PARTNER,
SOE.TBE_VOLHISTORY_PARTNER,
SOE.TBE_VOLUMES_PARTNER

19 rows selected.
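Typing 19 table names into a parameter file by hand is error-prone, so the TABLES= clause can be assembled from the query output instead. A hypothetical Python sketch (the file name and the truncated table list are illustrative):

```python
# External tables taken from the query output above (illustrative subset;
# in practice, paste the full 19-row list).
tables = [
    "SOE.TBE_ACTLOG_PARTNER",
    "SOE.TBE_ASSOCIATIONS_PARTNER",
    "SOE.TBE_AUDITOCC_PARTNER",
]

# Build the Data Pump parameter file line by line.
parfile_lines = [
    "DUMPFILE=exp_exttables_soe.dmp",
    "LOGFILE=exp_exttables_soe.log",
    "TABLES=" + ",".join(tables),
]

with open("exp_exttables.par", "w") as f:
    f.write("\n".join(parfile_lines) + "\n")
```

The resulting file is then passed to expdp with `parfile=exp_exttables.par`, exactly as in the session below.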
And exporting ….
[oracle@libano ~]$ cat exp_exttalbes.par
DUMPFILE=exp_exttables_soe.dmp
LOGFILE=exp_exttables_soe.log
TABLES=SOE.TBE_ACTLOG_PARTNER,SOE.TBE_ASSOCIATIONS_PARTNER,SOE.TBE_AUDITOCC_PARTNER,
SOE.TBE_CLIENT_SCHEDULES_PARTNER,SOE.TBE_DB_PARTNER,SOE.TBE_DRIVES_PARTNER,
SOE.TBE_DRMEDIA_PARTNER,SOE.TBE_EVENTS_PARTNER,SOE.TBE_LIBRARIES_PARTNER,
SOE.TBE_LIBVOLUMES_PARTNER,SOE.TBE_LOG_PARTNER,SOE.TBE_MEDIA_PARTNER,
SOE.TBE_OCCUPANCY_PARTNER,SOE.TBE_PATHS_PARTNER,SOE.TBE_QTD_SCRATCH_PARTNER,
SOE.TBE_STGPOOLS_PARTNER,SOE.TBE_SUMMARY_PARTNER,SOE.TBE_VOLHISTORY_PARTNER,
SOE.TBE_VOLUMES_PARTNER

[oracle@libano ~]$ expdp parfile=exp_exttalbes.par

Export: Release 10.2.0.5.0 - 64bit Production on Saturday, 22 January, 2011 18:07:22
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=exp_exttalbes.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/flashrecovery_area/DBCONV/export/exp_exttables_soe.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 18:07:30
All Oracle Directories must be created manually ….
The script below will generate the commands to recreate the Oracle directories, except for the default Oracle directories.
set linesize 100
SELECT 'CREATE DIRECTORY '||DIRECTORY_NAME||' AS '''||DIRECTORY_PATH||''';' CMD
  FROM DBA_DIRECTORIES
 WHERE DIRECTORY_NAME NOT IN ('ORACLE_OCM_CONFIG_DIR','DATA_PUMP_DIR')
/

CMD
----------------------------------------------------------------------------------------------------
CREATE DIRECTORY DIR_TXT_FILES AS '/u01/app/external_table/partner';
CREATE DIRECTORY EXT_PARTNER_DIR AS '/u01/app/external_table/partner';
CREATE DIRECTORY DIR_EXP_CONV AS '/u01/app/oracle/flashrecovery_area/DBCONV/export';
No privileges on SYS-owned objects are granted during the import (using imp or impdp), so we must grant them manually.
First, let's list the privileges granted to SOE on SYS objects:
SELECT grantor, grantee, table_schema, table_name, privilege
  FROM all_tab_privs
 WHERE grantee = 'SOE';

GRANTOR              GRANTEE              TABLE_SCHEMA         TABLE_NAME           PRIVILEGE
-------------------- -------------------- -------------------- -------------------- ----------
SYS                  SOE                  SYS                  DBMS_LOCK            EXECUTE
SYS                  SOE                  SYS                  EXT_PARTNER_DIR      WRITE
SYS                  SOE                  SYS                  EXT_PARTNER_DIR      READ
SYS                  SOE                  SYS                  EXT_PARTNER_DIR      EXECUTE
SYS                  SOE                  SYS                  DIR_TXT_FILES        WRITE
SYS                  SOE                  SYS                  DIR_TXT_FILES        READ
SYS                  SOE                  SYS                  DIR_TXT_FILES        EXECUTE
This script will generate the grant commands for the Oracle directories:
SELECT 'GRANT '||tp.privilege||' ON DIRECTORY '||tp.table_name||' TO '||tp.grantee||';' CMD
  FROM dba_tab_privs tp, dba_objects do
 WHERE do.object_name = tp.table_name
   AND object_type = 'DIRECTORY'
   AND grantee = 'SOE'
/
CMD
-----------------------------------------------------------------------
GRANT EXECUTE ON DIRECTORY EXT_PARTNER_DIR TO SOE;
GRANT READ ON DIRECTORY EXT_PARTNER_DIR TO SOE;
GRANT WRITE ON DIRECTORY EXT_PARTNER_DIR TO SOE;
GRANT EXECUTE ON DIRECTORY DIR_TXT_FILES TO SOE;
GRANT READ ON DIRECTORY DIR_TXT_FILES TO SOE;
GRANT WRITE ON DIRECTORY DIR_TXT_FILES TO SOE;
And this script will generate the grant commands for the other SYS-owned objects:
SELECT 'GRANT '||tp.privilege||' ON '||tp.owner||'.'||tp.table_name||' TO '||tp.grantee||';' CMD
  FROM DBA_TAB_PRIVS tp, dba_objects do
 WHERE do.object_name = tp.table_name
   AND object_type <> 'DIRECTORY'
   AND grantee = 'SOE'
 GROUP BY tp.privilege, tp.owner, tp.table_name, tp.grantee
/
CMD
-----------------------------------------------------------
GRANT EXECUTE ON CTXSYS.CTX_DDL TO SOE;
GRANT EXECUTE ON SYS.DBMS_LOCK TO SOE;
Let's export the objects from the transportable tablespaces.
[oracle@libano ~]$ expdp DIRECTORY=DIR_EXP_CONV DUMPFILE=exp_transport_soe.dmp TRANSPORT_FULL_CHECK=Y TRANSPORT_TABLESPACES=SOE,SOE_FAST_PART,SOE_INDX,SOE_LOB,SOE_SLOW_PART

Export: Release 10.2.0.5.0 - 64bit Production on Monday, 17 January, 2011 17:39:48
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** DIRECTORY=DIR_EXP_CONV DUMPFILE=exp_transport_soe.dmp TRANSPORT_FULL_CHECK=Y TRANSPORT_TABLESPACES=SOE,SOE_FAST_PART,SOE_INDX,SOE_LOB,SOE_SLOW_PART
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/flashrecovery_area/DBCONV/export/exp_transport_soe.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 17:40:36
Let's convert the datafiles to the stage area '/u01/app/oracle/flashrecovery_area/DBCONV/stage/' with RMAN.
RMAN> CONVERT TABLESPACE SOE, SOE_FAST_PART, SOE_INDX, SOE_LOB, SOE_SLOW_PART
2> TO PLATFORM = "AIX-Based Systems (64-bit)"
3> PARALLELISM = 2
4> DB_FILE_NAME_CONVERT ('/u01/app/oracle/oradata/dbconv/',
5>                       '/u01/app/oracle/flashrecovery_area/DBCONV/stage/');

Starting backup at 17-01-2011 17:45:50
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=137 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/app/oracle/oradata/dbconv/soe.dbf
converted datafile=/u01/app/oracle/flashrecovery_area/DBCONV/stage/soe.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:02:06
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00006 name=/u01/app/oracle/oradata/dbconv/soe_indx01.dbf
converted datafile=/u01/app/oracle/flashrecovery_area/DBCONV/stage/soe_indx01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00009 name=/u01/app/oracle/oradata/dbconv/soe_lob01.dbf
converted datafile=/u01/app/oracle/flashrecovery_area/DBCONV/stage/soe_lob01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00007 name=/u01/app/oracle/oradata/dbconv/soe_fast_part01.dbf
converted datafile=/u01/app/oracle/flashrecovery_area/DBCONV/stage/soe_fast_part01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00008 name=/u01/app/oracle/oradata/dbconv/soe_slow_part01.dbf
converted datafile=/u01/app/oracle/flashrecovery_area/DBCONV/stage/soe_slow_part01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:07
Finished backup at 17-01-2011 17:49:50

RMAN> Recovery Manager complete.
Moving Files
Moving files to destination host…
[oracle@libano DBCONV]$ scp -r export stage oracle@butao:/u01/app/db_convert
oracle@butao's password:
exp_schema_soe.dmp        100%  208KB 208.0KB/s   00:00
export.log                100% 1819     1.8KB/s   00:00
exp_transport_soe.dmp     100%  832KB 832.0KB/s   00:00
soe_indx01.dbf            100% 2249MB  21.4MB/s   01:45
soe_lob01.dbf             100%  364MB  22.8MB/s   00:16
soe_slow_part01.dbf       100%  161MB  20.1MB/s   00:08
soe.dbf                   100% 3341MB  21.7MB/s   02:34
soe_fast_part01.dbf       100%  181MB  22.6MB/s   00:08
exp_exttables_soe.dmp     100%  112KB 112.0KB/s   00:00
exp_exttables_soe.log     100%  938     0.9KB/s   00:00
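After copying gigabytes of datafiles over the network, it is worth verifying their integrity before plugging them in. A hedged Python sketch of a checksum helper you could run on both hosts and compare (the paths in the usage comment are illustrative):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute a file's MD5 incrementally; datafiles are too large for one read."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage on each host (run on libano and butao, then compare the output):
#   for name in ("soe.dbf", "soe_indx01.dbf", "soe_lob01.dbf"):
#       print(name, md5sum("/u01/app/db_convert/stage/" + name))
```

Identical digests on both sides confirm the scp transfer did not corrupt the files.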
Importing on Destination Database
Checking if user exists on destination database:
Note: the user must not exist yet.
SELECT USERNAME FROM DBA_USERS WHERE USERNAME='SOE';

no rows selected
Creating Oracle Directory to be used with impdp …
SQL> CREATE DIRECTORY DIR_IMP_CONV AS '/u01/app/db_convert/export';

Directory created.

SQL> GRANT READ,WRITE ON DIRECTORY DIR_IMP_CONV TO SYSTEM;

Grant succeeded.
Creating all Oracle directories from the source database in the destination database…
SQL> CREATE DIRECTORY DIR_TXT_FILES AS '/u01/app/external_table/partner';
Directory created.
SQL> CREATE DIRECTORY EXT_PARTNER_DIR AS '/u01/app/external_table/partner';
Directory created.
SQL> CREATE DIRECTORY DIR_EXP_CONV AS '/u01/app/oracle/flashrecovery_area/DBCONV/export';
Directory created.
Importing Schema SOE in destination database…
oracle@butao:/u01/app/db_convert/export> impdp DIRECTORY=DIR_IMP_CONV DUMPFILE=exp_schema_soe.dmp LOGFILE=imp_schema_soe.log

Import: Release 10.2.0.5.0 - 64bit Production on Saturday, 22 January, 2011 14:38:32
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** DIRECTORY=DIR_IMP_CONV DUMPFILE=exp_schema_soe.dmp LOGFILE=imp_schema_soe.log
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
ORA-39083: Object type TABLESPACE_QUOTA failed to create with error:
ORA-00959: tablespace 'SOE' does not exist
Failing sql is:
DECLARE TEMP_COUNT NUMBER; SQLSTR VARCHAR2(200); BEGIN SQLSTR := 'ALTER USER "SOE" QUOTA UNLIMITED ON "SOE"';
EXECUTE IMMEDIATE SQLSTR; EXCEPTION WHEN OTHERS THEN IF SQLCODE = -30041 THEN
SQLSTR := 'SELECT COUNT(*) FROM USER_TABLESPACES WHERE TABLESPACE_NAME = ''SOE'' AND CONTENTS = ''TEMPORARY''';
EXECUTE IMMEDIATE SQLSTR INTO TEMP_COUNT;
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
ORA-39082: Object type ALTER_PACKAGE_SPEC:"SOE"."ORDERENTRY" created with compilation warnings
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
ORA-39082: Object type ALTER_PROCEDURE:"SOE"."LOAD_BLOB_FROM_FILE" created with compilation warnings
Processing object type SCHEMA_EXPORT/VIEW/VIEW
ORA-39082: Object type VIEW:"SOE"."PRODUCTS" created with compilation warnings
ORA-39082: Object type VIEW:"SOE"."PRODUCT_PRICES" created with compilation warnings
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
ORA-39082: Object type PACKAGE_BODY:"SOE"."ORDERENTRY" created with compilation warnings
Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE
Processing object type SCHEMA_EXPORT/JAVA_CLASS/JAVA_CLASS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 6 error(s) at 14:38:44
Some errors were raised. "Object type TABLESPACE_QUOTA failed to create with error" occurs because tablespace SOE does not exist yet.
It’s completely normal.
Granting privileges on Oracle Directories to user SOE.
SQL> GRANT EXECUTE ON DIRECTORY EXT_PARTNER_DIR TO SOE;
Grant succeeded.
SQL> GRANT READ ON DIRECTORY EXT_PARTNER_DIR TO SOE;
Grant succeeded.
SQL> GRANT WRITE ON DIRECTORY EXT_PARTNER_DIR TO SOE;
Grant succeeded.
SQL> GRANT EXECUTE ON DIRECTORY DIR_TXT_FILES TO SOE;
Grant succeeded.
SQL> GRANT READ ON DIRECTORY DIR_TXT_FILES TO SOE;
Grant succeeded.
SQL> GRANT WRITE ON DIRECTORY DIR_TXT_FILES TO SOE;
Grant succeeded.
Importing External tables…
oracle@butao:/u01/app/db_convert/export> impdp DIRECTORY=DIR_IMP_CONV DUMPFILE=exp_exttables_soe.dmp LOGFILE=imp_exttable_soe.log

Import: Release 10.2.0.5.0 - 64bit Production on Saturday, 22 January, 2011 18:57:02
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** DIRECTORY=DIR_IMP_CONV DUMPFILE=exp_exttables_soe.dmp LOGFILE=imp_exttable_soe.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 18:57:14
Attaching Tablespaces on destination database and importing tables…
imp_soe.par:
DIRECTORY=DIR_IMP_CONV
DUMPFILE=exp_transport_soe.dmp
LOGFILE=imp_transport_soe.log
TRANSPORT_DATAFILES='/u01/app/db_convert/stage/soe.dbf',
'/u01/app/db_convert/stage/soe_fast_part01.dbf',
'/u01/app/db_convert/stage/soe_indx01.dbf',
'/u01/app/db_convert/stage/soe_lob01.dbf',
'/u01/app/db_convert/stage/soe_slow_part01.dbf'

oracle@butao:/u01/app/db_convert/stage> impdp parfile=imp_soe.par

Import: Release 10.2.0.5.0 - 64bit Production on Saturday, 22 January, 2011 14:52:56
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** parfile=imp_soe.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 15:01:43
This step can take a long time because Oracle will always analyze the indexes and tables during the import.
If you want to avoid this behavior, check the note below:
EXCLUDE=STATISTICS Or EXCLUDE=INDEX_STATISTICS During Datapump Import Still Analyzes The Indexes [ID 793585.1]
Checking the tablespaces after the import and changing them to READ WRITE…
SQL> SELECT TABLESPACE_NAME, STATUS FROM DBA_TABLESPACES WHERE TABLESPACE_NAME LIKE '%SOE%';

TABLESPACE_NAME                STATUS
------------------------------ ---------
SOE_SLOW_PART                  READ ONLY
SOE_LOB                        READ ONLY
SOE_INDX                       READ ONLY
SOE_FAST_PART                  READ ONLY
SOE                            READ ONLY

SQL> SELECT 'ALTER TABLESPACE '||TABLESPACE_NAME||' READ WRITE;' CMD
     FROM DBA_TABLESPACES
     WHERE TABLESPACE_NAME LIKE '%SOE%' AND STATUS='READ ONLY';

CMD
-----------------------------------------------------------
ALTER TABLESPACE SOE_SLOW_PART READ WRITE;
ALTER TABLESPACE SOE_LOB READ WRITE;
ALTER TABLESPACE SOE_INDX READ WRITE;
ALTER TABLESPACE SOE_FAST_PART READ WRITE;
ALTER TABLESPACE SOE READ WRITE;

SQL> ALTER TABLESPACE SOE_SLOW_PART READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE SOE_LOB READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE SOE_INDX READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE SOE_FAST_PART READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE SOE READ WRITE;
Tablespace altered.
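The generate-then-run pattern above can also be sketched in plain shell: build the ALTER TABLESPACE script from a list of names (here a hard-coded list stands in for the DBA_TABLESPACES query, and the output path is illustrative) and review it before feeding it to SQL*Plus.

```shell
# Build an ALTER TABLESPACE script from a list of tablespace names.
# In real use the list would come from the DBA_TABLESPACES query above.
for ts in SOE SOE_INDX SOE_LOB SOE_FAST_PART SOE_SLOW_PART; do
  echo "ALTER TABLESPACE ${ts} READ WRITE;"
done > /tmp/ts_read_write.sql
cat /tmp/ts_read_write.sql    # review the script before running it in SQL*Plus
```

Spooling generated DDL to a file first, instead of piping it straight into the database, gives you a chance to catch a bad LIKE pattern before it alters the wrong tablespace.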
Compiling invalidated objects…
set pagesize 0 linesize 120 heading on feedback off
SELECT 'ALTER '||
       decode(object_type, 'PACKAGE BODY', 'PACKAGE', object_type) || ' "' ||
       owner || '"."' ||
       object_name || '" compile' ||
       decode(object_type, 'PACKAGE BODY', ' BODY') || ';'
FROM dba_objects
WHERE status = 'INVALID'
  AND object_type IN ('PACKAGE', 'PACKAGE BODY', 'PROCEDURE', 'VIEW',
                      'TRIGGER', 'SYNONYM', 'FUNCTION');
ALTER PACKAGE "SOE"."ORDERENTRY" compile;
ALTER PROCEDURE "SOE"."LOAD_BLOB_FROM_FILE" compile;
ALTER VIEW "SOE"."PRODUCTS" compile;
ALTER VIEW "SOE"."PRODUCT_PRICES" compile;
ALTER PACKAGE "SOE"."ORDERENTRY" compile BODY;

SQL> ALTER PACKAGE "SOE"."ORDERENTRY" compile;
ALTER PROCEDURE "SOE"."LOAD_BLOB_FROM_FILE" compile;
ALTER VIEW "SOE"."PRODUCTS" compile;
ALTER VIEW "SOE"."PRODUCT_PRICES" compile;
ALTER PACKAGE "SOE"."ORDERENTRY" compile BODY;
Checking again ….
SQL> SELECT 'ALTER '||
  2  decode(object_type, 'PACKAGE BODY', 'PACKAGE',object_type) || ' "' ||
  3  owner||'"."'||
  4  object_name||'" compile' ||
  5  decode(object_type, 'PACKAGE BODY', ' BODY') || ';'
  6  FROM dba_objects
  7  WHERE status = 'INVALID'
  8  and object_type in ( 'PACKAGE',
  9  'PACKAGE BODY',
 10  'PROCEDURE',
 11  'VIEW',
 12  'TRIGGER','SYNONYM','FUNCTION' );

no rows selected
Good… all objects are valid.
Verifying that all objects made it to the destination database…
SQL> SELECT COUNT(OBJECT_NAME) FROM DBA_OBJECTS WHERE OWNER='SOE';

COUNT(OBJECT_NAME)
------------------
               246

SQL> SELECT COUNT(OBJECT_TYPE), OBJECT_TYPE FROM DBA_OBJECTS WHERE OWNER='SOE' GROUP BY OBJECT_TYPE;

COUNT(OBJECT_TYPE) OBJECT_TYPE
------------------ -------------------
                 4 FUNCTION
                30 INDEX
                 3 JAVA CLASS
                 1 JAVA SOURCE
                 3 LOB
                 1 PACKAGE
                 1 PACKAGE BODY
                 2 PROCEDURE
                 3 SEQUENCE
                34 TABLE
               160 TABLE PARTITION
                 1 TRIGGER
                 1 TYPE
                 2 VIEW

14 rows selected.
Good .. all objects were migrated.
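A simple way to double-check the migration is to spool the per-type counts on both hosts and diff the two files. A minimal sketch with sample data (the file names are made up; in real use each file would be spooled from the COUNT(...) GROUP BY OBJECT_TYPE query on its own host):

```shell
# Illustrative comparison of per-type object counts from source and target.
# Sample data stands in for the spooled query output on each host.
printf 'FUNCTION 4\nINDEX 30\nTABLE 34\n' > /tmp/counts_source.txt
printf 'FUNCTION 4\nINDEX 30\nTABLE 34\n' > /tmp/counts_target.txt
if diff -q /tmp/counts_source.txt /tmp/counts_target.txt >/dev/null; then
  echo "object counts match"
else
  echo "MISMATCH - investigate before going live"
fi
```

Comparing per-type counts rather than a single total also tells you which object class went missing when the numbers disagree.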
You can now start your application and pray that no error is raised by Oracle.
If everything is OK… you can celebrate.
Do we have to set the same default tablespace for the user in the target DB (the one it had in the source DB) after the migration?
Yes… this is very important, to keep the migrated database identical to the source database.
If I have the same endian format, can you tell me the steps to migrate from an Itanium base to x86-64?
Hi,
You can use the transportable database feature.
It's much easier.
Click to access maa-wp-10gr2-platformmigrationtdb-131164.pdf
Hope this helps.
Very nice and detailed guide. Just a few things to add from my personal experience when migrating AIX 6.1 to Oracle Linux 5.6 64-bit:
For large databases, use striped ASM SAN LUNs attached and zoned to both hosts. On those LUNs, create an ASM diskgroup to be used for the migration and mount it on the source server. This way you can run the RMAN CONVERT from the source ASM diskgroup or filesystem against the shared ASM diskgroup intended for migration. When done, you unmount the ASM diskgroup used for migration on the source system and mount it on the target system. Then you can run the required scripts. Note this also works equally well when doing migrations with the same endian format.
Hi,
I have never done a cross-platform migration using an ASM diskgroup to move the datafiles. (I'm scheduling this test but could not find time for it.)
So you converted the datafiles using a diskgroup (LUNs) on the current server, and after finishing the conversion you mapped the LUNs to the new server with a different endian format and OS (i.e. you used the diskgroup to move the datafiles between platforms).
It is good to hear that this works.
Efstathios,
We are facing exactly the same challenge: migrating a VLDB (20 TB) from Tru64 (Little Endian) to HP-UX (Big Endian). We would like to avoid network transfer. We tried exactly the same approach you did: first mount an ASM diskgroup on the source, run RMAN CONVERT, then mount those same raw disks on the target and create a diskgroup.
But, unfortunately, after we created the diskgroup on the target, it was completely empty.
Do you think this is because of some option when mounting the raw disks on the target, or does it have to do with the diskgroup creation?
Any advice would be appreciated.
Thanks in advance.
Hi,
When you created a diskgroup, all the metadata in the header of the LUNs was erased. So you MUST NOT create a diskgroup; it must only be mounted.
As I said, I have never done this test, but I suppose it works. When you map the LUNs from Tru64 to HP-UX, you set the permissions on the devices (disks) for the oracle user, and ASM (kfod) will read the header of the LUNs, identify the diskgroup, and mount all ASM disks automatically.
To make sure which disks are discovered by kfod, use the step below:
$ORACLE_HOME/bin/kfod asm_diskstring='/dev/rhdisk*' disks=all
Also you can check this note:
Placeholder for AMDU binaries and using with ASM 10g [ID 553639.1]
Levi Pereira
It's not possible to mount an ASM disk from a different endian format.
See this test:
I created ASM disk DATA01 with the dd command and created a diskgroup named DATA_MOVE on AIX.
For each block on the ASM disk, Oracle sets the field "kfbh.endian: 0 ; 0x000: 0x00" (endian = 0 means BIG ENDIAN).
Check the first 500 ASM blocks:
for i in {0..500};
do
echo blknum $i
kfed op=read dev=DATA01 blknum=$i |head -1
done
I then created ASM disk DATA01 with the dd command and created a diskgroup named DATA_MOVE on LINUX.
For each block on the ASM disk, Oracle sets "kfbh.endian: 1 ; 0x000: 0x01" (endian = 1 means LITTLE ENDIAN).
So, when Oracle formats an ASM disk, all blocks are stamped with the OS endian format.
Therefore, mounting an ASM disk from a different endian format will not work.
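Since kfbh.endian is the very first byte of the ASM block header, you can also peek at it with od instead of kfed. A minimal sketch against a zero-filled dummy file (the path is made up; on a real ASM disk you would point od at the device and see 0 on AIX, 1 on Linux):

```shell
# Create a dummy 1 MiB "disk" and read byte 0, where a real ASM disk
# stores kfbh.endian (0 = big endian, 1 = little endian).
dd if=/dev/zero of=/tmp/dummy_disk bs=1024k count=1 2>/dev/null
endian_byte=$(od -An -tu1 -N1 /tmp/dummy_disk | tr -d ' ')
echo "kfbh.endian byte: ${endian_byte}"
```

This is only a quick sanity check; kfed remains the proper tool since it decodes the whole header, not just the first byte.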
After the raw disk is presented on the target (HP-UX), ASM does not recognize that disk as a diskgroup member. It is recognized as a CANDIDATE, exactly as if it were empty. When I try to mount the diskgroup created on the source, I receive an error saying that it could not find enough disks for that diskgroup.
Any other tip? Efstathios, could you explain how you did that?
Hi,
when we did the setup in our company, we had the following setup:
Source HP-UX
Target IBM AIX
SAN EMC Clariion
ASM Version 10.2.0.4 on both machines
HPUX asm_diskstring=/dev/asm/asm_*
IBM asm_diskstring=/dev/asm_*
Please make sure that there is no reserve lock on the SAN array. Both boxes should be able to mount the LUN, like you would set it up in a RAC cluster.
We did create persistent names on both sides using:
1. Create ASM persistent device aliases
1.1 IBM AIX
mknod /dev/asm_[name] c [majorID] [minorID]
chmod 660 /dev/asm_[name]
chown oracle:oinstall /dev/asm_[name]
1.2 HP-UX
mknod /dev/asm/asm_[name] c [majorID] [minorID]
chmod 660 /dev/asm/asm_[name]
chown oracle:oinstall /dev/asm/asm_[name]
2. Create ASM diskgroup
We did create the diskgroup on IBM AIX.
3. Unmount the ASM diskgroup on AIX
4. Mount the diskgroup on HP-UX
If this works OK, then you have validated your SAN layout and device aliases.
The key really is persistency.
Issues like the ones you mention are normally seen if something on the SAN changes, e.g. a rescan resulting in devices no longer being present in the range specified in asm_diskstring.
Hope this helps
Regards
Efstathios
This may help during testing / error retrieval:
# For testing only
alter system set "_ASM_ALLOW_ONLY_RAW_DISKS" = FALSE scope=spfile;
# create file using dd
dd if=/dev/zero of=/u01/oracle/tmp/asm_disk_01 bs=1024k count=1024
# Edit asm_diskstring to include dd file
alter system set asm_diskstring='/u01/oracle/tmp/asm_*';
1. Create ASM Diskgroup on dummy dd file on Source
2. SCP file asm_disk_01 to target machine
3. Adjust asm_diskstring to include the dummy file on the target machine
4. Try to mount the ASM Diskgroup on Target Machine
It's a good way to perform tests with ASM.
I use this to test/find a way around some limitations of ACFS replication (platform/endian must be identical), as this method should allow using the rsync command on the dummy file to establish an abstract level of replication. In a way, this is what a SAN array does in the end, when you use SAN-level replication between two LUNs.
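Before blaming endianness for a diskgroup that will not mount, it is worth ruling out transfer corruption by checksumming the dummy file on both sides. A self-contained sketch (a local cp stands in for the scp step, and the paths are illustrative):

```shell
# Verify the dummy "disk" survives the copy bit-for-bit before trying
# to mount its diskgroup on the target host.
dd if=/dev/zero of=/tmp/asm_disk_01 bs=1024k count=1 2>/dev/null
src_sum=$(cksum < /tmp/asm_disk_01)
cp /tmp/asm_disk_01 /tmp/asm_disk_01.copy   # stand-in for scp to the target
dst_sum=$(cksum < /tmp/asm_disk_01.copy)
[ "$src_sum" = "$dst_sum" ] && echo "checksums match"
```

If the checksums differ, the problem is in the transport (block size, truncation, sparse-file handling), not in ASM.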
Hi Eduardo,
there seems to be an issue with our internal docs. I spent a whole day and night trying to figure out a solution to a problem that is self-explanatory.
We tested HP-UX to AIX migration => OK, since it's the same endian; I have done it myself.
We tested HP-UX to Linux => it worked, but again because it was the same endian: while examining the machines used, I discovered it was Linux on POWER 😦
We tested Solaris 10 to Linux => the Solaris box was x86, so it worked 😦
The migration doc for AIX 6.1 from Oracle Linux was done by a colleague of mine; while reviewing it, I noticed he used normal techniques (the 11g ASMCMD cp command between ASM instances). I went very low level to reproduce your scenario, but even with all the hex editing, byte swapping etc. I could not get it to work, sadly. With 11g I did not get further than the disk showing up as provisioned.
This also explains why I am stuck with the ACFS replication issue…
Sometimes things just do not want to work the way we want them to.
So for your scenario it sadly does not work. I am very sorry about the documentation issue, and it personally bugs me.
I found this on the HP homepage as well:
“The Intel® Itanium® processor is little-endian but supports both big and little-endian operating systems. HP-UX has always been a big-endian operating system, and will continue to be so on Itanium®-based systems. This means that applications with databases already running on PA-RISC HP-UX will still be able to read the existing database without modification when moved to the Itanium® architecture. The Tru64 UNIX operating system is little-endian based. Database applications cannot simply be moved from Tru64 UNIX to HP-UX on the Intel® Itanium® architecture. They will not be able to read from the old database. The database as well as any binary data must be migrated.”
Source: http://h30097.www3.hp.com/transition/faqs.html#0602
For your scenario, if feasible, use a target platform with the same endian format, e.g. a Linux system; then you can still benefit from using ASM disks connected to your box, avoiding other approaches like exp/imp, expdp/impdp, or copying over the network.
Moving to Linux rather than HP-UX is also advisable, since Oracle stopped all development for the Itanium platform:
http://www.oracle.com/us/corporate/press/346696
It may be, IMHO, the better long-term strategy. It's also certainly cheaper to get commodity hardware than proprietary stuff.
If you still want to go for the endian migration from Tru64 to HP-UX, this approach may help you (check whether it is applicable to Tru64):
For optimal results, you should have fast disks on the source and target machines, e.g. striped LUNs on the source filesystem, striped LUNs on the export filesystem, and striped disks on the target server:
1. Create directory objects as needed
2. Perform the Data Pump export in parallel (adjust to your environment) without statistics and with a reasonable value for filesize (check that you have no LONGs or other unsupported data types)
expdp user/passwd@${ORACLE_SID} full=yes directory=EXPDP filesize=1G dumpfile=${ORACLE_SID}_full_%U.dmp logfile=LOGDIR:${ORACLE_SID}_full.elog exclude=statistics parallel=16
3. Synchronize the files using either rsync / scp
It is advisable to exchange ssh keys between the source and target machines so you can eventually copy several files in parallel.
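Step 3 can indeed be parallelized with plain shell background jobs. A hedged sketch (cp stands in for scp/rsync so it runs anywhere, and the dump file names and paths are invented for the example):

```shell
# Copy the Data Pump dump pieces in parallel using background jobs.
mkdir -p /tmp/dp_src /tmp/dp_dst
for i in 1 2 3 4; do : > "/tmp/dp_src/full_0${i}.dmp"; done   # sample pieces
for f in /tmp/dp_src/*.dmp; do
  cp "$f" /tmp/dp_dst/ &      # in real life: scp "$f" target:/stage/ &
done
wait                          # block until every background copy finishes
ls /tmp/dp_dst | wc -l        # all four pieces should now be on the "target"
```

Capping the number of simultaneous jobs (e.g. batching the loop) is wise in practice so you don't saturate the network link or the source disks.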
4. Tune Target DB for Import (temporary settings)
Size Redo Logs
Increase PGA to a large value like eg. 4gb
Set the following parameters:
alter system set workarea_size_policy='manual' scope=both;
alter system set hash_area_size=1999999990 scope=spfile;
alter system set sort_area_retained_size=1999999990 scope=spfile;
alter system set sort_area_size=1999999990 scope=spfile;
5. Create a SQLFILE with the index creation syntax
impdp user/passwd@$ORACLE_SID full=y directory=EXPDP dumpfile=${ORACLE_SID}_full_%U.dmp sqlfile=${ORACLE_SID}_indexes.sql include=index logfile=LOGDIR:${ORACLE_SID}_createIDX.ilog parallel=16
6. Edit SQLFILE
You can change the parallel degree for the indexes in that file, as the file includes an "ALTER INDEX xxx NOPARALLEL" statement after each CREATE INDEX statement.
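Step 6 can be automated with sed, e.g. raising the degree on the CREATE INDEX lines and dropping the trailing NOPARALLEL resets. A sketch against a two-line sample of what such a SQLFILE looks like (the index name, degrees, and file path are invented for the example):

```shell
# Two-line sample standing in for the Data Pump-generated SQLFILE.
cat > /tmp/indexes.sql <<'EOF'
CREATE INDEX "SOE"."ORD_IX" ON "SOE"."ORDERS" ("ORDER_ID") PARALLEL 1 ;
ALTER INDEX "SOE"."ORD_IX" NOPARALLEL;
EOF
# Raise the creation degree and delete the NOPARALLEL reset statements.
sed -i -e 's/PARALLEL 1 ;/PARALLEL 8 ;/' -e '/NOPARALLEL/d' /tmp/indexes.sql
cat /tmp/indexes.sql
```

Editing a copy and diffing it against the original is safer than editing the generated file in place, in case a pattern matches more than intended.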
7. Run Datapump import
impdp user/password@$ORACLE_SID full=y TABLE_EXISTS_ACTION=replace directory=EXPDP dumpfile=${ORACLE_SID}_full_%U.dmp logfile=LOGDIR:${ORACLE_SID}_full.ilog parallel=16 exclude=INDEX
8. Create Indexes from SQLFILE
9. Reset temporary settings
Efstathios, thanks very much for your response and research, I really appreciated it. Unfortunately, the target platform is already defined; there is no way to change it. But, thanks.
If you have large amounts of data, you could use Veritas Portable Data Containers to move the transportable tablespaces, by exporting the volume group at the source and importing/converting it at the target; however, you would still need to convert the datafiles using RMAN at either the source or the target:
http://sfdoccentral.symantec.com/sf/5.1SP1/hpux/html/sf_adv_ora/ch16s01.htm
Otherwise, when using transportable tablespaces, rsync is your friend for transporting them economically (use version 3.x):
1. Local Snapshot
(ensure the source and target filesystems are not on the same LUN, to avoid I/O contention => r/w on the same device)
Source: http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmbckad.htm#i1006219
alter system switch logfile;
alter system checkpoint;
alter database begin backup;
alter system suspend;
! rsync -avxP --no-W --no-checksum --inplace $LOCAL_SOURCE $LOCAL_DESTINATION
alter system resume;
alter database end backup;
2. Remote Synchronization
rsync -avxzP $LOCAL_SOURCE $HOSTNAME:$REMOTE_DESTINATION
VERY VERY NICE, I LEARNED MORE HERE
Hello,
I would like to know how large the tablespaces you converted were; I saw that it took 4 minutes. I've got 80 GB to convert, and I need to provide an estimate.
Thanks a lot, P
But then why use TTS for a full database? You can scp the files over, STARTUP NOMOUNT the new instance, run "rman target /" to CONVERT all the files, create the controlfile, then shutdown and STARTUP UPGRADE.
Any reasons?
P.