SAPDBA: Export Menu 

In data export, the following menu options appear for selection:

a - Tablespaces
b - Owners
c - Tables
d - Working directory
e - Dump destination
f - Export/Import
      Compress
      ComprExt
      Chop
      SAP-NEXT
      CheckExp
      Commit
      ReduceOb
      Parallel
      Manually
g - Storage parameters
s - Start
q - Return

If you want to use a method other than the default method (ORACLE export/import) for the data transfer, choose menu option f. Menu option f in the export menu above is then adjusted accordingly; for example, after you choose SAP unload / ORACLE SQL*Loader, the export menu shows the following:

f - Unload/load
      SAP unload
      SQL*Loader
      Commit :
      Parallel :

Specify the tables for export. You can select the tables by tablespace, owner, and name of the object. In the fields identifying the objects (tablespaces, owners, tables), you can enter a single name, the identifier "all", a list of names, or a generic specification.

Special feature when selecting tablespaces: with PSAPBTAB%, SAPDBA generates a list of the matching tablespaces (PSAPBTABD and PSAPBTABI) from which you can select one tablespace. If you want to export the objects of both PSAPBTABD and PSAPBTABI, enter PSAPBTAB%%.
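The generic specification follows the semantics of ORACLE's % wildcard (as in SQL LIKE). The following Python sketch shows how a pattern such as PSAPBTAB% would be matched against a list of tablespace names; the helper function and the sample names are illustrative and not part of SAPDBA:

```python
import fnmatch

def expand_pattern(pattern, names):
    """Expand an ORACLE-style '%' wildcard against a list of names
    (illustrative helper, not part of SAPDBA)."""
    # SQL's '%' matches any sequence of characters, like the shell's '*'.
    shell_pattern = pattern.replace("%", "*")
    return [n for n in names if fnmatch.fnmatchcase(n, shell_pattern)]

tablespaces = ["PSAPBTABD", "PSAPBTABI", "PSAPSTABD"]
print(expand_pattern("PSAPBTAB%", tablespaces))  # ['PSAPBTABD', 'PSAPBTABI']
```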

You can export ORACLE system objects if you select Owners: "all" or "%" and then Incl. owners SYS and SYSTEM: yes.

The most important menu items are listed below:

Working directory in which SAPDBA stores the time stamp subdirectory with the scripts and parameter files. You can change this working directory if required. Logs are always stored in <ORACLE_HOME>/sapreorg.

Directory/ies (or tape device(s)) to which the export dump files are written (if a data export takes place). The total amount of space available in the directories or on the tapes must be large enough to hold at least all the tables that are to be exported. All further scripts and logs are always written to the working directory.

The export or import can be run in parallel by setting the parameter exp_imp_degree > 1 (see Parallel Export/Import).

SAPDBA warns you if there is not enough space available. You may nevertheless wish to continue with the export. The data written to the directories on disk can be compressed by a certain factor using the option Export/import method -> Compress dump file(s): yes (if ORACLE Export/Import is used). For safety reasons, the space check does not take the effects of compression into account (the compression rate and degree of filling are not known exactly).

When you export to tape, SAPDBA asks you to state the size of the tape. SAPDBA then also checks whether there is enough space on the tape(s) for the data that you want to export. In contrast to an export to disk(s), where one export dump file is created for each tablespace, an SAPDBA export to tape creates only one export dump file per tape for all tables of the specified tablespaces. SAPDBA can use only one tape drive for each export dump file. You are warned if the total amount of space required for a dump file is greater than the tape size. If SAPDBA performs the export with the ORACLE Export program, errors that occur when writing to tape during the ORACLE export are ignored.

The menu option Reduce object size: yes makes the space calculation for the export dump more accurate. SAPDBA determines the space actually occupied by the data of the object and carries out the space check on the basis of this value. The compression rate is, however, again not taken into account here.
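The space check described above can be sketched as follows. The function name, the size figures, and the reduce_object_size switch are illustrative assumptions; as noted in the text, the compression rate is deliberately left out of the check:

```python
def check_export_space(occupied_sizes, allocated_sizes, free_space,
                       reduce_object_size=False):
    """Return (required, fits): compare required dump space with free space.

    With reduce_object_size, the space actually occupied by the data is
    used; otherwise the allocated size is taken. Compression is not taken
    into account, mirroring SAPDBA's conservative check.
    """
    sizes = occupied_sizes if reduce_object_size else allocated_sizes
    required = sum(sizes)
    return required, required <= free_space

# Allocated extents are typically larger than the space the data occupies.
occupied = [300, 150]     # MB actually occupied by the objects' data
allocated = [500, 400]    # MB allocated to the objects
print(check_export_space(occupied, allocated, free_space=600))
print(check_export_space(occupied, allocated, free_space=600,
                         reduce_object_size=True))
```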

You can specify the null device /dev/null as the dump destination. You should do so to check tables and indexes for corrupted ORACLE blocks. In this case the export is performed as usual, but no export dump file is created. You can also run this check with the SAPDBA command option sapdba -export <tablespaces/table> (see SAPDBA Command Mode).

Using SAP unload/load or ORACLE SQL*Loader can save a significant amount of time in the data transfer compared with ORACLE export/import (for example, with tables containing LONG columns).

See SAP unload/load, SQL*Loader.

You can change this default with the init<DBSID>.dba parameter exireo_dumpdir.
To accelerate the export/import, it is recommended that you make a buffer of at least 3 MB available for the ORACLE export and import programs or for the load programs.

Select yes if the data should be compressed during export.

In this case, SAPDBA pipes the data through the UNIX command compress before writing it to the directory/ies for the export dump file(s). You can reduce your disk space requirements considerably if you compress your data.
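SAPDBA uses the UNIX command compress (an LZW compressor); the following Python sketch uses zlib, a different algorithm, purely to illustrate the size reduction gained by compressing a dump before it is written to disk:

```python
import zlib

# Export dumps of typical table data are repetitive and compress well.
dump = b"SAPDBA export dump block " * 1000   # sample data, not a real dump
compressed = zlib.compress(dump)

# The compressed form needs considerably less disk space.
print(len(dump), len(compressed), len(compressed) < len(dump))
```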

Compression is only possible for an export (with ORACLE export) to disk. This option will be ignored if you export to tape. Therefore, use tape stations with hardware compression where possible.

Do not use compression for tablespaces containing objects already compressed by the SAP database interface, since there is no advantage to be gained (there are more likely to be disadvantages).

(If you set both Compress and Chop to yes, compression is performed with the R3chop option -c, not with the UNIX command compress.)

If you want to export very large tables/tablespaces (with ORACLE export) and you expect that the generated export dump files could exceed the maximum file size of the operating system, set the option Chop to yes. In this case SAPDBA uses the program R3chop, which splits the dump files into smaller files and later recombines them. You are prompted for the maximum size of the split files. The default value corresponds to the value set in the parameter max_file_size (whose default value is 2 GB).
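The splitting step can be sketched as follows. This is only an illustration of the idea of chopping a dump into files below a size limit and recombining them; R3chop's actual file format and options are not reproduced here:

```python
import io

def chop(stream, max_file_size):
    """Split a byte stream into chunks of at most max_file_size bytes,
    the way a dump exceeding the OS file size limit would be split
    (sketch only, not R3chop's actual format)."""
    chunks = []
    while True:
        chunk = stream.read(max_file_size)
        if not chunk:
            break
        chunks.append(chunk)
    return chunks

def unchop(chunks):
    """Recombine the split files into the original dump."""
    return b"".join(chunks)

dump = b"x" * (5 * 1024 + 100)           # a dump slightly over 5 KB
parts = chop(io.BytesIO(dump), 1024)     # pretend the size limit is 1 KB
print(len(parts))                        # 6: five full chunks plus the rest
assert unchop(parts) == dump
```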

This option is only available if the parameter chop_util_name is entered in the profile init<DBSID>.dba.

After the ORACLE export, SAPDBA performs a read check if you select CheckExp: yes (a test import with the ORACLE parameter INDEXFILE). The scripts inx<TSP>.sql are created. Use this option in particular if you are exporting to tape.

With Commit: yes, the command COMMIT is passed to the database each time the data contained in the buffer has been imported.
With Commit: no, the COMMIT is performed only after all the records of a table have been imported.

The default value corresponds to the value of the parameter exp_imp_degree in the profile init<DBSID>.dba. The export/import can be performed in parallel by selecting a degree of parallelism > 1 (see Parallel Export/Import).

You can make various settings for the storage parameters here:

Before you export objects, you can change the extent sizes and the other storage parameters proposed by SAPDBA. See Options for Changing and Checking the Storage Parameters.