Function documentation: Load Distribution and External Parallel Processing

Use

External parallel processing keeps the distribution of the load up to date and well balanced.

This ensures the optimum use of the available computing capacity.

Integration

Each relevant program can be converted to parallel operation step by step.

You can also use this function to run the programs that were developed in your customer project in parallel.

Prerequisites

You must specify in Customizing how the initial selection is to be carried out. The workload cannot always be estimated accurately; in such cases you can preselect the data using a specific, self-developed function module.

Features

A helper program generated from the actual program is scheduled; it starts the parallel jobs. You do not need to change or extend existing programs.

For each program that is to be run in parallel, the system generates a program that determines the volume of data to be processed, distributes the data among several jobs, and starts the parallel processing.
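The overall flow of such a generated program can be pictured with the following minimal sketch. It is written in Python purely for illustration and is not the generated ABAP report; the names select_work_items and process_packet, as well as the use of operating-system processes instead of scheduled background jobs, are assumptions made for the example.

    from multiprocessing import Process

    def select_work_items():
        # Stand-in for the initial selection (in the SAP scenario this is
        # done via a function module or a dynamic database selection).
        return ["DOC%05d" % n for n in range(1, 101)]

    def process_packet(packet):
        # Stand-in for the actual program logic, executed once per parallel job.
        print("processing %d items: %s .. %s" % (len(packet), packet[0], packet[-1]))

    def run_in_parallel(number_of_jobs=4):
        items = select_work_items()

        # Split the selected items into one packet per parallel job.
        packets = [items[i::number_of_jobs] for i in range(number_of_jobs)]

        # Start one worker per packet and wait for all of them to finish.
        workers = [Process(target=process_packet, args=(p,)) for p in packets if p]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    if __name__ == "__main__":
        run_in_parallel()

In the SAP scenario the packets are processed by separately scheduled background jobs rather than child processes, and the split is based on the estimated workload, as described below.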

The generated program can be scheduled as a normal job with variants. The system generates a selection screen on which you choose the name of the selection option by which the parallel processing is to be split (visible selection options only). The screen also contains further fields that control the parallel processing (number of parallel jobs, terminating event, tolerance limit for packet creation). In addition, all the variants of the basic program are copied invisibly so that their values are available during processing. You define these variants in the usual way.

There are two ways of selecting the data: a static method and a dynamic method.

With the static method, the generated program calls a function module with the values from the variant to select the data to be processed. This data is then distributed among the parallel jobs of the program in a later step.

The function modules and programs are assigned to each other in a Customizing table, so for each new program you only need to create the function module and maintain an entry in this table.

The function modules have a standard interface. The crucial part is the output table, in which all the data to be processed is described by suitable keys and values (see the sketch below).

The disadvantage of the static method is that a new function module has to be created and adapted for each program that is to be processed in parallel. However, it is the only way of running programs in parallel for which the dynamic method cannot be used.
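To make the idea of this standard interface more concrete, the following Python sketch mimics what a selection function module conceptually delivers: an output table in which each entry describes one unit of work by a key and a workload value. The names WorkItem, select_documents_for_program, and the variant field document_from are hypothetical and do not correspond to an actual SAP interface.

    from dataclasses import dataclass

    @dataclass
    class WorkItem:
        key: str    # identifies one unit of work, for example a document number
        value: int  # estimated workload, for example the number of document items

    def select_documents_for_program(variant_values):
        """Hypothetical stand-in for a selection function module.

        It receives the values of the basic program's variant and returns the
        'output table' that describes everything to be processed.
        """
        # In a real function module this would be a database selection that
        # applies the variant's selection criteria; here the data is mocked.
        candidates = {"0000004711": 3, "0000004712": 12, "0000004713": 1}
        lower_bound = variant_values.get("document_from", "")
        return [WorkItem(key=k, value=v)
                for k, v in candidates.items() if k >= lower_bound]

    # Example call: all documents from 0000004712 onwards are selected.
    print(select_documents_for_program({"document_from": "0000004712"}))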

With the dynamic method, it must be possible to determine the values to be processed automatically. The following applies:

The result values do not necessarily have to match the key values.

The selection must also be possible if the volume of data can only be determined using dependent tables.

The key values that result from the initial selection are then aggregated so that a value waiting to be processed is not named more than once in multi-level selections. The number of duplicates is, however, a measure of the workload to be processed for that value. A value could, for example, stand for a document whose workload is determined by the number of document items.

The values are sorted according to their workload and assigned to the planned parallel jobs in descending order. This ensures a more even distribution between the parallel processes.

For each parallel run the system sorts the key values in ascending order to fill the corresponding selection options.
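The summarization and distribution described above can be illustrated with the following Python sketch. It counts duplicate key values as a measure of workload, works through the values in descending order of workload, and finally sorts the keys of each parallel run in ascending order, as they would be used to fill the selection options. The greedy assignment to the currently least-loaded job is an assumption about how the even distribution is achieved; the documentation only states that the values are assigned in descending order of workload.

    from collections import Counter

    def distribute(selected_keys, number_of_jobs):
        """Sketch of the summarization and distribution step."""
        # Summarize: each key appears only once, and the number of duplicates
        # from the multi-level selection measures its workload (for example,
        # a document that was named once per document item).
        workload = Counter(selected_keys)

        # Work through the keys in descending order of workload and always
        # give the next key to the job with the smallest total load so far.
        jobs = [{"keys": [], "load": 0} for _ in range(number_of_jobs)]
        for key, load in workload.most_common():
            target = min(jobs, key=lambda job: job["load"])
            target["keys"].append(key)
            target["load"] += load

        # Sort each job's keys in ascending order, as they would be used to
        # fill the selection options of that parallel run.
        return [sorted(job["keys"]) for job in jobs]

    # Documents named once per item; DOC2 carries the largest workload.
    keys = ["DOC1", "DOC2", "DOC2", "DOC2", "DOC3", "DOC3", "DOC4"]
    print(distribute(keys, number_of_jobs=2))
    # -> [['DOC2', 'DOC4'], ['DOC1', 'DOC3']]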

Activities

Schedule background jobs for parallel processing using transaction WLCPAR.

For more information and examples, see the program documentation, which you can access from this transaction by pressing the i-button or Shift+F1.
