Use

You can use SAPDBA for Oracle to export one or more tables and indexes.
Prerequisites
Procedure
To export Oracle system objects, choose Owners, then enter
This is the working directory for the export. The default is
This is the directory used to store the exported data. The default is <SAPDATA_HOME>/sapreorg, specified by the exireo_dumpdir parameter (highest priority) or the SAPREORG environment variable (lower priority). For more information, see Environment Variables (UNIX) and Environment Variables (Windows NT).

Make sure that the space available in the directories or on the tapes is at least as large as the total size of all the objects to be reorganized. SAPDBA warns you if there is not enough space available, but you can still continue with the export if you want.
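The space check that SAPDBA performs before a disk export can be approximated by hand. The following is only an illustrative sketch with placeholder values (the directory default and object size are assumptions, not SAPDBA internals):

```shell
#!/bin/sh
# Compare free space in the export dump directory with the total
# size of the objects to be exported. Paths and sizes are
# illustrative placeholders only.
DUMPDIR="${SAPREORG:-/tmp/sapreorg}"   # hypothetical dump directory
mkdir -p "$DUMPDIR"

# Hypothetical total size of all objects to export, in KB.
OBJECTS_KB=1024

# Free space in the dump directory, in KB (POSIX df -P, column 4).
FREE_KB=$(df -P "$DUMPDIR" | awk 'NR==2 {print $4}')

if [ "$FREE_KB" -lt "$OBJECTS_KB" ]; then
    echo "WARNING: only ${FREE_KB} KB free, need ${OBJECTS_KB} KB"
else
    echo "OK: ${FREE_KB} KB free for ${OBJECTS_KB} KB of data"
fi
```

Like SAPDBA, the sketch only warns; it does not prevent you from continuing.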
If you are exporting to tape, SAPDBA asks you to state the size of the tape. SAPDBA also checks in this case whether the data fits on the tape. SAPDBA can only use one tape drive for each export dump file. For a tape export, be sure to set CheckExp to YES.
You can specify the null device /dev/null to check tables and indexes for corrupted Oracle blocks. In this case, the export is performed as usual, but no export dump file is created. You can also perform this check using the SAPDBA command option sapdba -export <tablespaces/table>. See SAPDBA Command Mode.

| Parameter | Default | Meaning |
| --- | --- | --- |
| ComprDmp | NO | Compress the dump file (only possible for export to disk). If selected, SAPDBA sends the data to the export dump files using the UNIX compress command. Do not use this parameter for tablespaces with objects already compressed by the SAP database interface, since it has no advantages (there might even be disadvantages). |
| Chop | NO | Chop the dump file (not possible on Windows platforms). Select this if the export dump files are larger than the maximum file size (normally 2 GB) for your operating system. SAPDBA sends the export data to a chop tool (such as BRTOOLS), which splits the data into several smaller files. |
| CheckExp | YES | Check dump files (recommended, especially if exporting to tape). SAPDBA performs a read check after the export using the inx<TSP>.sql scripts. |
| Commit | YES | Commit command passed to the database once the buffer data has been imported. |
| Direct path export | YES | Export data directly, without using the SQL area. This improves performance because the data is physically written straight to disk. In general, we recommend it. |
| Parallel | 1 | Export with parallel processing. By increasing this to 2 or more, SAPDBA recreates the tables in parallel using the number of processes you enter. For more information, see Parallel Export and Import. |
| Buffer size | 3000000 | Size of the export buffer in bytes. To accelerate the export, we recommend providing at least 3 MB of buffer space. |
SAPDBA prepares for the export by:
SAPDBA creates the following scripts and parameter files:
If you start the export in the background, tables generated after the export scripts have been created are not exported.
Result
On successful completion, SAPDBA displays a confirmation message.
If there has been an error, SAPDBA displays the appropriate error message. You can restart the export after fixing the problem. If necessary, you can import the exported data. You can also import structures singly using SQL scripts.