
Tab Page: Processing

Use

You can use this tab page to determine the destination of the updated data.

Prerequisites

If you want to update the data using the PSA, you have set the transfer method to PSA in the transfer rules maintenance.

If you want to transfer the data using IDocs, you have set the transfer method to IDoc in the transfer rules maintenance.

Note

We recommend you use the PSA transfer method.

Functions

PSA Transfer Method

Different options are available for updating the data when you use the PSA transfer method. When choosing an option, weigh data security against the performance of the loading process.

Processing options for the PSA transfer method:

PSA and Data Targets/InfoObjects in Parallel (By Package)

For each data package, a process is started that writes the data from the package to the PSA. If the data is successfully updated in the PSA, a second, parallel process is started. In this process, the transfer rules are applied to the data records of the package, the data is transferred into the communication structure, and it is finally written to the data targets. The data is posted in parallel, package by package.

This method updates the data in the PSA and in the data targets with a high level of performance. The BW system receives the data from the source system, writes it to the PSA, and immediately starts the parallel update into the corresponding data target.
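The following is a minimal, hypothetical Python sketch of this flow, not SAP code: the package names and the write_to_psa/update_data_targets functions are invented stand-ins, and the second parallel update process is simplified into the package's own worker process.

```python
# Hypothetical sketch of "PSA and Data Targets in Parallel (By Package)".
# Not SAP code: packages, records, and both step functions are invented.
from concurrent.futures import ProcessPoolExecutor, wait

PACKAGES = {f"PKG{i:03d}": [{"key": i * 10 + j} for j in range(3)]
            for i in range(4)}

def write_to_psa(name, records):
    # Step 1: persist the package in the PSA table.
    print(f"{name}: {len(records)} records written to PSA")
    return records

def update_data_targets(name, records):
    # Step 2: apply transfer rules, fill the communication structure, post.
    print(f"{name}: {len(records)} records posted to data targets")

def process_package(name, records):
    # One worker per package; the update starts only after the PSA write
    # succeeds (in BW this second step runs in its own parallel process).
    update_data_targets(name, write_to_psa(name, records))

if __name__ == "__main__":
    # All packages are posted in parallel; there is no ordering guarantee.
    with ProcessPoolExecutor(max_workers=4) as pool:
        wait([pool.submit(process_package, n, r) for n, r in PACKAGES.items()])
```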

The maximum number of processes, which is set in the source system in Customizing for Extractors, does not restrict the number of processes in BW. The loading process can therefore require many dialog processes in the BW system. Make sure that enough dialog processes are available there.

If the data package contains incorrect data records, you have several options for continuing to work with the records in the request. You can specify how the system reacts to incorrect data records. You can find additional information in the section Treating Data Records with Errors.

You also have the option of Correcting Data in the PSA and updating it from there.

Note the following when using transfer and update routines:

With this processing option, request processing takes place in parallel during loading. Global data is therefore deleted, because a new process is used for each data package in further processing.
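Why the global data is lost can be shown with a small, hypothetical Python example (again not SAP code, and not an actual routine): each data package runs in a fresh process, so module-level state written while handling one package is invisible when the next package is processed.

```python
# Hypothetical sketch: global state does not survive parallel-by-package
# processing, because every package gets a fresh process (maxtasksperchild=1
# forces that here, mirroring "a new process is used for every data package").
import multiprocessing as mp

SEEN_PACKAGES = []  # "global data" a transfer/update routine might rely on

def handle_package(name):
    SEEN_PACKAGES.append(name)  # visible only inside this worker process
    return f"{name}: this process's global list is {SEEN_PACKAGES}"

if __name__ == "__main__":
    with mp.Pool(processes=3, maxtasksperchild=1) as pool:
        for line in pool.map(handle_package, ["PKG001", "PKG002", "PKG003"]):
            print(line)  # every worker saw exactly one package
    print("parent process still has:", SEEN_PACKAGES)  # [] - nothing persisted
```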

PSA and Then into Data Target/InfoObject (by Package)

For each data package, a process is started that writes the package to the PSA table. When the data has been successfully updated in the PSA, the same process then writes the data to the data targets. The data is posted serially, package by package.

Compared with the first processing option, a serial update of the data in packages gives you better control over the whole data flow, because the BW system uses only one process for each data package. Only a certain number of processes are necessary for each data request in the BW system. This number is defined in the settings made in the maintenance of the control parameters in Customizing for Extractors.
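A matching hypothetical sketch of the serial variant (the function names are again invented): one process takes a package through both steps before the next package starts.

```python
# Hypothetical sketch of "PSA and Then into Data Target/InfoObject (By
# Package)": the same process writes a package to the PSA and then posts it,
# and the packages are handled strictly one after another.
def write_to_psa(name, records):
    print(f"{name}: {len(records)} records written to PSA")
    return records

def update_data_targets(name, records):
    print(f"{name}: {len(records)} records posted to data targets")

packages = {"PKG001": [{"key": 1}], "PKG002": [{"key": 2}, {"key": 3}]}
for name, records in packages.items():        # serial: one package at a time
    update_data_targets(name, write_to_psa(name, records))
```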

If the data package contains incorrect data records, you have several options for continuing to work with the records in the request. You can find additional information in the section Treating Data Records with Errors.

You also have the option of Correcting Data in the PSA and updating it from there.

Note the following when using transfer and update routines:

If you choose this processing option and request processing takes place in parallel during loading, the global data is deleted, because a new process is used for each data package in further processing.

Only PSA

Using this method, data is written to the PSA and is not updated any further.

You have the advantage of storing the data safely in BW, and the PSA is also ideal as a persistent incoming data store for mass data. The setting for the maximum number of processes in the source system can also have a positive effect on the number of processes in BW.

To further update the data automatically in the corresponding data target, wait until all the data packages have arrived and have been successfully updated in the PSA, and select Update Subsequently in Data Targets from the Processing tab page when you schedule the InfoPackage in the Scheduler.

A process that writes the package to the PSA table is started for each data package. If you then trigger further processing and the data is updated to the data targets, one process is started for the request; it writes the data packages to the data targets one after the other. The data is posted serially, request by request.
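As a rough illustration, the two phases could be sketched like this in Python (hypothetical, not SAP code; an ordinary dictionary stands in for the PSA table):

```python
# Hypothetical sketch of "Only PSA" followed by "Update Subsequently in Data
# Targets": phase 1 only stores packages; phase 2 is one process per request
# that posts the stored packages one after the other.
psa = {}  # stand-in for the PSA table, keyed by package name

def load_request(packages):
    # Phase 1: every package is written to the PSA; no further update happens.
    for name, records in packages.items():
        psa[name] = records

def update_subsequently():
    # Phase 2: a single process walks the request's packages serially.
    for name in sorted(psa):
        print(f"{name}: {len(psa[name])} records posted to data targets")

load_request({"PKG001": [{"key": 1}], "PKG002": [{"key": 2}]})
update_subsequently()
```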

You can find additional information about the options for uploading data from the PSA into the data targets under Type of Data Update in the section Only PSA → Further Processing from PSA.

When using the InfoPackage in a process chain, this setting is hidden in the scheduler. This is because the setting is represented by its own process type in process chain maintenance and is maintained there.

Treating Duplicate Data Records (only possible with the processing type Only PSA):

The system indicates when master data or text DataSources transfer potentially duplicate data records for a key into the BW system. In this case, the Ignore Duplicate Data Records indicator is also set by default. If data records are transferred more than once, BW by default updates the last data record of the request for a given key and ignores the remaining data records for this key in the same request. If the Ignore Duplicate Data Records indicator is not set, duplicate data records cause an error, and the error message is displayed in the monitor.
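The default "last record per key wins" behavior can be illustrated with a short, hypothetical Python sketch (the record layout is invented):

```python
# Hypothetical sketch of the default behind "Ignore Duplicate Data Records":
# for each key, only the last record of the request is kept.
request = [
    {"key": "1000", "text": "old name"},
    {"key": "2000", "text": "other"},
    {"key": "1000", "text": "new name"},  # same key again, arrives last
]

latest = {}
for record in request:      # request order decides which duplicate survives
    latest[record["key"]] = record

print(list(latest.values()))
# [{'key': '1000', 'text': 'new name'}, {'key': '2000', 'text': 'other'}]
```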

Note the following when using transfer and update routines:

If you choose this processing option and request processing takes place serially during loading, the global data is retained for as long as the process that processes the data exists.

Only Data Target/InfoObject

We recommend this method only when loading from flat files or from other data sources that are always available, because the data is not saved persistently in an incoming data store. This saves hard disk space, but you lose data security, the option to rebuild processes, and simulation options.

 

IDoc Transfer Method

The following options are available for data update when using the IDoc transfer method:

  • ALE inbound processing and data target/InfoObject

    In this method, data is loaded in BW ALE inbound processing, and updated further into the data targets directly.

  • Only ALE inbound processing

    In this method, data is loaded in BW ALE inbound processing, but not updated further. The request appears in the monitor as incorrect. You can manually update the data further from here into the data targets.

If you send many data packages that are too small, many monitor logs are written for them. This has a negative effect on the monitor runtime.

We recommend a data package size of 10,000 to 50,000 data records. For mass data (requests of two to three million records), the data package size can be 50,000 to 100,000 records. However, the required memory depends not only on the data package size setting, but also on the width of the transfer structure, the memory requirement of the affected extractor, and, for large data packages, the number of data records in the package.
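As a purely illustrative back-of-the-envelope calculation (the 1,000-byte row width is an assumption, not an SAP figure), package size times transfer-structure width gives a first approximation of the memory needed per package:

```python
# Hypothetical estimate: memory per data package ~ records x row width.
# The row width is an invented example value; extractor overhead is ignored.
def package_memory_mb(records: int, row_width_bytes: int) -> float:
    return records * row_width_bytes / 1024 / 1024

for records in (10_000, 50_000, 100_000):
    print(f"{records:>7} records x 1000 bytes ≈ "
          f"{package_memory_mb(records, 1000):.0f} MB")
# ~10 MB, ~48 MB, ~95 MB per package, before extractor overhead
```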

See also:

Documentation for Customizing for Extractors in the source system, especially the section Maintaining Control Parameters for Data Transfer under General Settings. You reach Customizing for Extractors using the source system tree in the Administrator Workbench - Modeling: choose Customizing for Extractors for your source system from the context menu (right-click), or call transaction SBIW directly in your source system.

Consistency Check for Characteristic Values

When you set the Consistency Check for Characteristic Values indicator, the system checks the characteristic values against the InfoObject definition after reading the data from the PSA or IDoc. The check takes place only when loading master data and transaction data and when activating hierarchies.

The characteristic values are checked for the following (a conceptual sketch follows the list):

  • the use of lowercase letters,

  • the use of special characters,

  • the plausibility of date fields,

  • the plausibility of time fields,

  • the use of character values in fields of the NUMC data type, and

  • correct application of the ALPHA conversion routine.
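The checks could look roughly like the following hypothetical Python sketch; the field rules, the special-character set, and the ALPHA check are simplified assumptions, not the actual BW implementation.

```python
# Hypothetical, simplified analogues of the characteristic value checks above.
import re
from datetime import datetime

def check_value(value: str, numc: bool = False, alpha_len: int = 0) -> list:
    errors = []
    if value != value.upper():
        errors.append("lowercase letters")
    if re.search(r"[#%]", value):              # illustrative special characters
        errors.append("special characters")
    if numc and not value.isdigit():
        errors.append("character value in a NUMC field")
    if alpha_len and value.isdigit() and value != value.zfill(alpha_len):
        errors.append("ALPHA conversion not applied (missing leading zeros)")
    return errors

def date_is_plausible(value: str) -> bool:     # plausibility of a date field
    try:
        datetime.strptime(value, "%Y%m%d")
        return True
    except ValueError:
        return False

print(check_value("abc"))                  # ['lowercase letters']
print(check_value("12A4", numc=True))      # ['character value in a NUMC field']
print(check_value("42", alpha_len=10))     # ALPHA: expected '0000000042'
print(date_is_plausible("20231301"))       # False - month 13 is implausible
```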

Note

You should always select this indicator for new data, and for non-SAP data, during the test phase. Deactivate the indicator for the productive area once the data has proven to be of good quality (several data imports without incorrect values), because using this function reduces load performance.