Performance Tips for Data Transfer Processes
Request processing, that is, the loading process of a data transfer process (DTP), can run with various degrees of parallelization in the extraction and processing (transformation and update) steps. The system selects the most efficient processing for the DTP in accordance with the settings in the DTP maintenance transaction and derives a DTP processing mode from them.
To further optimize the performance of request processing, you can take a number of additional measures:
● By taking the appropriate measures, you can obtain a processing mode with a higher degree of parallelization.
● A variety of measures can help to improve performance, in particular the settings in the DTP maintenance transaction. Some of these measures are source and data type specific.
The following sections describe the various measures that can be taken.
With a (standard) DTP, you can modify the system-defined processing mode by changing the settings for error handling and semantic grouping. The table below shows how you can optimize the performance of an existing DTP processing mode:
Original State of DTP Processing Mode | Processing Mode with Optimized Performance | Measures to Obtain the Performance-Optimized Processing Mode
--- | --- | ---
Serial extraction and processing of the source packages (P3) | Serial extraction, immediate parallel processing (P2) | Select the grouping fields.
Serial extraction and processing of the source packages (P3) | Parallel extraction and processing (P1) | Only possible with the persistent staging area (PSA) as the source. Deactivate error handling.
Serial extraction, immediate parallel processing (P2) | Parallel extraction and processing (P1) | Only possible with the PSA as the source. Deactivate error handling. Remove the grouping fields selection.
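As an illustrative sketch only (this is not SAP's actual mode-selection algorithm), the rules in the table can be read as a simple decision function; the parameter names `psa_source`, `error_handling`, and `grouping_fields` are assumptions invented for this example:

```python
# Sketch of the mode selection implied by the table above (assumed names,
# plain Python, no SAP API involved).
def processing_mode(psa_source: bool, error_handling: bool,
                    grouping_fields: bool) -> str:
    """Return the DTP processing mode suggested by the settings."""
    if psa_source and not error_handling and not grouping_fields:
        return "P1"  # parallel extraction and processing
    if grouping_fields:
        return "P2"  # serial extraction, immediate parallel processing
    return "P3"      # serial extraction and processing of source packages
```

For example, with the PSA as the source, error handling deactivated, and no grouping fields selected, the sketch yields P1, the most parallel mode.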
Setting the Number of Parallel Processes for a DTP During Request Processing
To optimize the performance of data transfer processes with parallel processing, you can set the number of permitted background processes for process type Set Data Transfer Process globally in BI Background Management.
To further optimize performance for a given data transfer process, you can override the global setting:
In the DTP maintenance transaction, choose Goto → Batch Manager Setting. Under Number of Processes, specify how many background processes should be used to process the DTP. Remember to save this setting.
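The precedence between the two settings can be sketched in a few lines, assuming hypothetical names (`GLOBAL_PROCESSES` stands in for the value maintained in BI Background Management, `dtp_setting` for a DTP-specific batch manager setting):

```python
# Sketch (assumed names): a DTP-specific setting overrides the global
# BI Background Management value; otherwise the global value applies.
from typing import Optional

GLOBAL_PROCESSES = 3  # global setting for the process type

def effective_processes(dtp_setting: Optional[int]) -> int:
    """DTP-level override wins; otherwise fall back to the global value."""
    return dtp_setting if dtp_setting is not None else GLOBAL_PROCESSES
```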
Setting the Size of Data Packets
In the standard setting in the data transfer process, the size of a data packet is set to 50,000 data records, on the assumption that a data record has a width of 1,000 bytes. To improve performance, you can increase the size of the data packet for smaller data records.
Enter this value under Packet Size on the Extraction tab in the DTP maintenance transaction.
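The arithmetic behind this recommendation can be sketched as follows; `suggested_packet_size` is a hypothetical helper, not an SAP function, that keeps the packet volume at roughly the 50 MB implied by the default assumption of 1,000 bytes per record:

```python
# Sizing sketch: 50,000 records x 1,000 bytes/record = ~50 MB per packet.
# For narrower records, the record count can be raised while keeping
# roughly the same memory footprint per packet.
DEFAULT_RECORDS = 50_000
ASSUMED_RECORD_BYTES = 1_000

def suggested_packet_size(record_bytes: int) -> int:
    """Scale the packet size to preserve the default packet volume."""
    target_bytes = DEFAULT_RECORDS * ASSUMED_RECORD_BYTES
    return target_bytes // record_bytes

# e.g. 500-byte records -> 100,000 records per packet
```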
Avoid overly large DTP requests with a large number of source requests: Retrieve the data one request at a time
A DTP request can be very large, since it bundles together all transfer-relevant requests from the source. To improve performance, you can stipulate that a DTP request always reads just one request at a time from the source.
To make this setting, select Get All New Data in Source by Request on the Extraction tab in the DTP maintenance transaction. Once processing is completed, the DTP request checks for further new requests in the source. If it finds any, it automatically creates an additional DTP request.
With DataSources as the source: Avoid too small data packets when using the DTP filter
If you extract from a DataSource without error handling, and a large amount of data is excluded by the filter, this can cause the data packets loaded by the process to be very small. To improve performance, you can modify this behaviour by activating error handling and defining a grouping key.
Select an error handling option on the Updating tab in the DTP maintenance function. Then define a suitable grouping key on the Extraction tab under Semantic Groups. This ensures that all data records belonging to a grouping key in a packet are extracted and processed.
With DataStore objects as the source: Do not extract data before the first delta or during full extraction from the table of active data
The change log grows in proportion to the table of active data, since it stores before- and after-images. To optimize performance during extraction in full mode or with the first delta from the DataStore object, you can read the data from the table of active data instead of from the change log.
To make this setting, select Active Table (with Archive) or Active Table (without Archive) on the Extraction tab in Extraction from… or Delta Extraction from… in the DTP maintenance function.
With InfoCubes as the source: Use extraction from aggregates
With InfoCube extraction, the data is read in the standard setting from the fact table (F table) and the table of compressed data (E table). To improve performance here, you can use aggregates for the extraction.
Select Use Aggregates on the Extraction tab in the DTP maintenance transaction. The system then compares the outgoing quantity of the transformation with the aggregates. If all InfoObjects in the outgoing quantity are used in aggregates, the data is read from the aggregates during extraction instead of from the InfoCube tables.
Note for using InfoProviders as the source
If not all key fields of the source InfoProvider have target fields assigned to them in the transformation, the key figures of the source are aggregated across the unassigned key fields during extraction. You can prevent this automatic aggregation by implementing a start routine or an intermediate InfoSource. Note, however, that this affects the performance of the data transfer process.
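The effect of this implicit aggregation can be illustrated in plain Python (no SAP API involved); the field names `material`, `region`, and `quantity` are invented for this example:

```python
# Sketch: if key field "region" has no target field assigned, the key
# figure "quantity" is summed across it, leaving one row per material.
from collections import defaultdict

source_rows = [
    {"material": "M1", "region": "EU", "quantity": 10},
    {"material": "M1", "region": "US", "quantity": 5},
    {"material": "M2", "region": "EU", "quantity": 7},
]

# Only "material" is mapped to a target field; "region" is dropped,
# so the quantities are aggregated per material.
aggregated = defaultdict(int)
for row in source_rows:
    aggregated[row["material"]] += row["quantity"]

print(dict(aggregated))  # {'M1': 15, 'M2': 7}
```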