APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
Information in this document applies to any platform.
******************* WARNING *******************
Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS
Document 13001379.8 Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables
Starting with Oracle Database 10g, you can transport tablespaces across platforms. This note is a step-by-step guide on how to do it with ASM datafiles and with OS filesystem datafiles.
If your goal is to migrate a database to a different endian platform, the following high-level steps describe how to migrate a database to a new platform using transportable tablespace:
1.- Create a new, empty database on the destination platform.
2.- Import objects required for transport operations from the source database into the destination database.
3.- Export transportable metadata for all user tablespaces from the source database.
4.- Transfer data files for user tablespaces to the destination system.
5.- Use RMAN to convert the data files to the endian format of the destination system.
6.- Import transportable metadata for all user tablespaces into the destination database.
7.- Import the remaining database objects and metadata (that were not moved by the transport operation) from the source database into the destination database.
You can also convert the datafiles on the source platform and, once converted, transfer them to the destination platform.
The MAA white paper "Platform Migration Using Transportable Tablespace" is available at
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-platformmigrationtts-129269.pdf
Starting with 11.2.0.4 and 12c, if you are converting to Linux x86-64, consider following Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup [1389592.1].
Supported platforms
You can query the V$TRANSPORTABLE_PLATFORM
view to see the platforms that are supported and to determine each
platform's endian format (byte ordering).
SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;
PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little
If the source platform and the target platform are of different endianness,
then an additional step must be done on either the source or target
platform to convert the tablespace being transported to the target format.
If they are of the same endianness, then no conversion is necessary and
tablespaces can be transported as if they were on the same platform.
Transporting the tablespace
- Prepare for export of the tablespace.
- Check that the tablespace will be self-contained:
SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
SQL> select * from sys.transport_set_violations;
Note: these violations must be resolved before the tablespaces can be transported.
- The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:
SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;
- Export the metadata.
- Using the original export utility:
exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
- Using Datapump export:
First create the directory object to be used for Datapump, as in:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
Then initiate Datapump Export:
expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=TBS1,TBS2
If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter:
expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=TBS1,TBS2 TRANSPORT_FULL_CHECK=Y
If the tablespace set being transported is not self-contained then the
export will fail.
- Use V$TRANSPORTABLE_PLATFORM to determine the endianness of each platform. You can execute the following query on each platform instance:
SELECT tp.platform_id, substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
If the endian formats are different, then a conversion is necessary for transporting the tablespace set:
RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';
RMAN> convert tablespace TBS2 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';
Then copy the datafiles as well as the export dump file to the target
environment.
- Import the transportable tablespace.
- Using the original import utility:
imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
- Using Datapump import:
First create the directory object to be used for Datapump:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
Followed by:
impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)
You can use REMAP_SCHEMA if you want to change the ownership of the
transported database objects.
- Put the tablespaces in read/write mode:
SQL> ALTER TABLESPACE TBS1 READ WRITE;
SQL> ALTER TABLESPACE TBS2 READ WRITE;
Using DBMS_FILE_TRANSFER
You can also use DBMS_FILE_TRANSFER to copy datafiles to another host.
Starting with 12c, and in 11.2.0.4, DBMS_FILE_TRANSFER does the conversion by default: the destination database converts each block as it receives a file from a platform with different endianness, so datafiles can be imported after they are moved to the destination database as part of a transportable operation, without RMAN conversion.
In releases lower than 11.2.0.4 you need to follow the same steps specified above for ASM files. If the endian formats are different, you must run the RMAN convert AFTER transferring the files; the files cannot be copied directly between two ASM instances on different platforms.
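On those releases the transfer can be driven from the destination side with DBMS_FILE_TRANSFER.GET_FILE. A minimal sketch follows; the directory objects, file name, and database link names are illustrative, not from this note:

```sql
-- Run on the DESTINATION database. Assumes (hypothetical names):
--   DEST_DIR  - directory object pointing at the target location (ASM or filesystem)
--   SRCDB     - database link to the source database
--   SRC_DIR   - directory object on the source pointing at the datafile location
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'tbs1_01.dbf',
    source_database              => 'SRCDB',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'tbs1_01.dbf');
END;
/
```

With pull-style GET_FILE the destination instance reads the blocks over the link, which is where the automatic endian conversion happens on 11.2.0.4 and later.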
This is an example of usage:
RMAN> CONVERT DATAFILE
'/hq/finance/work/tru/tbs_31.f',
'/hq/finance/work/tru/tbs_32.f',
'/hq/finance/work/tru/tbs_41.f'
TO PLATFORM="Solaris[tm] OE (32-bit)"
FROM PLATFORM="HP Tru64 UNIX"
DB_FILE_NAME_CONVERT="/hq/finance/work/tru/", "/hq/finance/dbs/tru"
PARALLELISM=5;
The same example, but here showing the destination being an +ASM diskgroup:
RMAN> CONVERT DATAFILE
'/hq/finance/work/tru/tbs_31.f',
'/hq/finance/work/tru/tbs_32.f',
'/hq/finance/work/tru/tbs_41.f'
TO PLATFORM="Solaris[tm] OE (32-bit)"
FROM PLATFORM="HP Tru64 UNIX"
DB_FILE_NAME_CONVERT="/hq/finance/work/tru/", "+diskgroup"
PARALLELISM=5;
*** WARNING ***
- Index Organized Tables (IOT) can become corrupt when using Transportable Tablespace (TTS) from Solaris, Linux or AIX to HP-UX. This is a restriction caused by BUG:9816640. Currently there is no patch for this issue; the Index Organized Tables (IOT) need to be recreated after the TTS. See Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS.
- When tables have dropped columns, Bug:13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables - can occur. See Document 1440203.1 for details on this alert.
Known issue using DBMS_FILE_TRANSFER
=> Unpublished Bug 13636964 - ORA-19563 from RMAN convert on datafile copy transferred with DBMS_FILE_TRANSFER (Doc ID 13636964.8)
Versions confirmed as being affected: 11.2.0.3
This issue is fixed in: 12.1.0.1 (Base Release), 11.2.0.4 (Future Patch Set)
Description
A file transferred using DBMS_FILE_TRANSFER fails during an RMAN convert operation, e.g.:
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of conversion at target command at 01/24/2012 16:22:23
ORA-19563: cross-platform datafile header validation failed for file +RECO/soets_9.tf
Rediscovery Notes:
If RMAN convert fails on a file transferred using DBMS_FILE_TRANSFER, then it may be due to this bug.
Workaround
Transfer the file using OS facilities.
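For instance, the file can be extracted from ASM and pushed with standard OS tools. This is a sketch only; the diskgroup, paths, and hostname below are hypothetical:

```shell
# All names are illustrative. If the file lives in ASM, extract it first
# (asmcmd cp is available from 11g), then copy it to the destination host.
asmcmd cp +DATA/orcl/datafile/tbs1.270.5 /tmp/tbs1_01.dbf
scp /tmp/tbs1_01.dbf oracle@desthost:/u01/stage/tbs1_01.dbf
```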
=> Dbms_file_transfer Corrupts Dbf File When Copying between endians (Doc ID 1262965.1)
Limitations on Transportable Tablespace Use
- The source and target database must use the same character set and national character set.
- You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
- Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
- Review the table "Objects Exported and Imported in Each Mode" in the Oracle Database Utilities documentation; there are several object types that are not exported in tablespace mode.
- If the owner(s) of tablespace objects do not exist on the target database, the usernames need to be created manually before starting the transportable tablespace import.
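A minimal sketch of pre-creating a missing owner on the target; the username, password, and tablespace names are placeholders, not from this note:

```sql
-- Hypothetical owner of objects in the transported tablespace TBS1.
CREATE USER scott IDENTIFIED BY "change_me"
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;
GRANT CREATE SESSION TO scott;
-- After the tablespace is plugged in, allow the user to own segments in it.
ALTER USER scott QUOTA UNLIMITED ON TBS1;
```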
- If you use spatial indexes, then:
- be aware that TTS across different endian platforms is not supported for spatial indexes in 10gR1 and 10gR2; this limitation was lifted in 11g
- specific Spatial packages must be run before exporting and after transportation; please see the Oracle Spatial documentation.
- Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes.
The following query returns a list of tablespaces that contain XMLTypes:
select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name=x.table_name and
t.tablespace_name=p.tablespace_name and
x.owner=u.username;
Transporting tablespaces with XMLTypes has the following limitations:
- The target database must have XML DB installed.
- Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
- Schemas referenced by XMLType tables cannot have cyclic dependencies.
- Any row-level security on XMLType tables is lost upon import.
- If the schema for a transported XMLType table is not present in the target database, it is imported and registered. If the schema already exists in the target database, an error is returned unless the ignore=y option is set.
- Advanced Queues: Transportable tablespaces do not support 8.0-compatible advanced queues with multiple recipients.
- You cannot transport the SYSTEM tablespace or objects owned by the user SYS.
- Opaque Types: types (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform.
- Floating-Point Numbers: BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump but not the original export utility, EXP.
- Please also check Document 1454872.1 - Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable.
Transportable tablespace EXP/IMP of ASM files
- Using RMAN CONVERT
There is no direct way to exp/imp ASM files as transportable tablespace. However, the functionality can be achieved via RMAN. You must follow these steps:
- Prepare for exporting the tablespace.
- Check that the tablespace will be self-contained:
SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
SQL> select * from sys.transport_set_violations;
Note: these violations must be resolved before the tablespaces can be transported.
- The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:
SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;
- Export the metadata.
- Using the original export utility:
exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
- Using Datapump export:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
followed by:
expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=TBS1,TBS2
If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter:
expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=TBS1,TBS2 TRANSPORT_FULL_CHECK=Y
If the tablespace set being transported is not self-contained, then
the export will fail.
- Use V$TRANSPORTABLE_PLATFORM to find the exact platform name of the target database. You can execute the following query on the target platform instance:
SELECT tp.platform_id, substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
- Generate an OS file from the ASM file, in target platform format:
RMAN> CONVERT TABLESPACE TBS1 TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
RMAN> CONVERT TABLESPACE TBS2 TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
- Copy the generated file to the target server if different from the source.
- Import the transportable tablespace.
- Using the original import utility:
imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
- Using Datapump import:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
followed by:
impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)
You can use REMAP_SCHEMA if you want to change the ownership of the transported database objects.
- Put the tablespaces in read/write mode:
SQL> ALTER TABLESPACE TBS1 READ WRITE;
SQL> ALTER TABLESPACE TBS2 READ WRITE;
If you want to transport the datafiles from the ASM area to a filesystem, you have finished after the above steps. But if you want to transport tablespaces between two ASM areas, you must continue.
- Copy the datafile '/tmp/....dbf' into the ASM area using RMAN:
rman nocatalog target /
RMAN> backup as copy datafile '/tmp/....dbf' format '+DGROUPA';
where +DGROUPA is the name of the ASM diskgroup.
- Switch the datafile to the copy. If the 10g database is open, you need to take the datafile offline first:
SQL> alter database datafile '/tmp/....dbf' offline;
Switch to the copy:
rman nocatalog target /
RMAN> switch datafile '/tmp/....dbf' to copy;
Note down the name of the copy created in the +DGROUPA diskgroup, e.g. '+DGROUPA/s101/datafile/tts.270.5'.
- Put the datafile online again; it needs to be recovered first:
SQL> recover datafile '+DGROUPA/s101/datafile/tts.270.5';
SQL> alter database datafile '+DGROUPA/s101/datafile/tts.270.5' online;
- Check that the datafile is indeed part of the ASM area and online:
SQL> select name, status from v$datafile;
The output should be:
+DGROUPA/s101/datafile/tts.270.5  ONLINE
- Using DBMS_FILE_TRANSFER
You can also use DBMS_FILE_TRANSFER to copy datafiles from one ASM disk group to another, even on another host. Starting with 10g Release 2, you can also use DBMS_FILE_TRANSFER to copy datafiles from ASM to filesystem and from filesystem to ASM.
The PUT_FILE procedure reads a local file or ASM file and contacts a remote database to create a copy of the file in the remote file system. The file that is copied is the source file, and the new file that results from the copy is the destination file. The destination file is not closed until the procedure completes successfully.
Syntax:
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object IN VARCHAR2,
source_file_name IN VARCHAR2,
destination_directory_object IN VARCHAR2,
destination_file_name IN VARCHAR2,
destination_database IN VARCHAR2);
Where:
- source_directory_object: The directory object from which the file is copied at the local source site. This directory object must exist at the source site.
- source_file_name: The name of the file that is copied from the local file system. This file must exist in the local file system in the directory associated with the source directory object.
- destination_directory_object: The directory object into which the file is placed at the destination site. This directory object must exist in the remote file system.
- destination_file_name: The name of the file placed in the remote file system. A file with the same name must not exist in the destination directory in the remote file system.
- destination_database: The name of a database link to the remote database to which the file is copied.
If you want to use DBMS_FILE_TRANSFER.PUT_FILE to transfer the file from the source to the destination host, steps 3, 4 and 5 should be replaced by the following:
- Create a directory at the target database host and give permissions to the local user. This is the directory object into which the file is placed at the destination site; it must exist in the remote file system:
CREATE OR REPLACE DIRECTORY target_dir AS '+DGROUPA';
GRANT WRITE ON DIRECTORY target_dir TO "USER";
- Create a directory at the source database host. This is the directory object from which the file is copied at the local source site; it must exist at the source site:
CREATE OR REPLACE DIRECTORY source_dir AS '+DGROUPS/subdir';
GRANT READ,WRITE ON DIRECTORY source_dir TO "USER";
CREATE OR REPLACE DIRECTORY source_dir_1 AS '+DGROUPS/subdir/subdir_2';
- Create a dblink to connect to the target database host:
CREATE DATABASE LINK DBS2 CONNECT TO user IDENTIFIED BY password USING 'target_connect';
where target_connect is the connect string for the target database and user is the account that will be used to transfer the datafiles.
- Connect to the source instance. The following items are used:
- dbs1: connect string to source database
- dbs2: dblink to target database
- a1.dat: filename at source database
- a4.dat: filename at target database
CONNECT user/password@dbs1
-- put a1.dat to a4.dat (using the dbs2 dblink)
-- level-2 subdirectory to parent directory
-- user has read privs on source_dir_1 at dbs1 and write on target_dir at dbs2
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE('source_dir_1', 'a1.dat', 'target_dir', 'a4.dat', 'dbs2');
END;
/