Process Documentation: Parallel Processing of Background Jobs

Use

For the SAP system to execute a program in the background, the program must be scheduled as a background job. For information on scheduling background jobs, see Background Processing in the Computing Center Management System (BC-CCM) documentation.
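
As a minimal sketch (the report name ZPAR_DEMO_REPORT and the job name are placeholders), a program can be scheduled as a background job from ABAP using the standard function modules JOB_OPEN and JOB_CLOSE:

    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_PARALLEL_DEMO',
          lv_jobcount TYPE tbtcjob-jobcount.

    " Open a new background job
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.

    " Add the report as a job step
    SUBMIT zpar_demo_report VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

    " Close the job and release it for immediate processing
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'.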

Many programs can also be divided up and run in parallel as several jobs. This is particularly useful if several application servers are available. In many cases, however, one program requires the results of another, or the programs try to access the same data and locking problems occur.

To avoid these problems, you can assign part of the data to each server. The data is then processed by programs running in parallel. As a result, the data is processed more quickly and the subsequent program can start earlier.

Normally, the data has to be distributed among the individual servers manually by creating variants for each program and each server. This is laborious and prone to errors. The distribution also quickly becomes obsolete and needs to be updated regularly. An optimal runtime is only achieved if the distribution is even, so that the runtimes of the individual jobs are as equal as possible.

You can have the system distribute the data to the available servers using load distribution and external parallel processing. Parallel processing is set up within process planning and replaces the time-consuming scheduling of parallel jobs using variants.

Subsequent process control becomes considerably simpler and more transparent because a large number of scheduled jobs that would need to be synchronized is no longer required:

  • The manual scheduling of parallel jobs, which is prone to errors, and the maintenance of numerous variants to distribute the worklist among the parallel jobs are no longer required.
  • Changes to the worklist no longer lead to unequal runtimes for the individual jobs.
  • It is no longer necessary to check and update the distribution of data regularly.
  • From the outside, only one main process is visible. If necessary, a follow-on job starts when the main process has finished.

Prerequisites

To make the Customizing settings for these functions, you need to know how the programs that are to be processed in parallel work.

The timeout for the server group of the online (dialog) processes must be set so that the jobs are not terminated when they exceed the time limit for these processes.
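
Since the partial processes run as online (dialog) processes, the dialog work process runtime limit applies. As a sketch, and assuming the classic profile parameter rdisp/max_wprun_time is the limit in question, the value can be raised in the instance profile (maintained, for example, with transaction RZ10):

    # Instance profile entry - maximum dialog work process runtime in seconds
    # (example value only)
    rdisp/max_wprun_time = 3600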

Procedure

  1. For each relevant program, you generate a help program using transaction WLCPAR, which controls the parallel processing of this program externally (see Load Distribution and External Parallel Processing).
  2. You schedule the generated program, which is to be executed in parallel.
  3. The system starts the programs to be processed in parallel externally, using an asynchronous RFC in a separate parallel process for each of them (see the sketch after this list). Each of these processes is an online process on a server in a server group that can be defined in advance.
  4. Once all partial processes have finished, the main process also finishes, which in turn triggers the execution of any follow-on jobs that are scheduled.
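
The following is a minimal sketch of the asynchronous RFC technique described in step 3, not the help program generated by transaction WLCPAR itself; the function module Z_PROCESS_PACKAGE, the server group name, and the fixed number of four data packages are assumptions made for illustration:

    REPORT zparallel_sketch.

    DATA: gv_open_tasks TYPE i,
          gv_task(10)   TYPE c.

    START-OF-SELECTION.
      " Start one partial process per data package in its own dialog work process
      DO 4 TIMES.
        gv_task = sy-index.
        CONDENSE gv_task.
        CALL FUNCTION 'Z_PROCESS_PACKAGE'              " hypothetical RFC-enabled function module
          STARTING NEW TASK gv_task
          DESTINATION IN GROUP 'parallel_generators'   " RFC server group name (assumption)
          PERFORMING on_task_finished ON END OF TASK
          EXPORTING
            iv_package            = sy-index
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3.
        IF sy-subrc = 0.
          gv_open_tasks = gv_open_tasks + 1.
        ENDIF.
      ENDDO.

      " The main process only finishes once all partial processes have returned
      WAIT UNTIL gv_open_tasks = 0.

    " Callback form executed at the end of each partial process
    FORM on_task_finished USING pv_task.
      RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_PACKAGE'
        EXCEPTIONS OTHERS = 1.
      gv_open_tasks = gv_open_tasks - 1.
    ENDFORM.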

Result

Parallel processing reduces the time that the program takes to run. The shortest achievable runtime is roughly equal to the runtime on one server divided by the number of servers available.

The system triggers an optional follow-on event even if not all the jobs have finished successfully; a follow-on job can wait for this event before it starts.

Follow-on jobs therefore always start, even if a job is terminated.
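
As a sketch of how a follow-on job can be made to wait for such an event (the event name Z_PARALLEL_DONE is a placeholder that would have to be defined as a background processing event, for example in transaction SM62), the job from the scheduling sketch above can be released with an event-based start condition instead of an immediate start:

    " Release the job only when the background event is raised
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname  = lv_jobname
        jobcount = lv_jobcount
        event_id = 'Z_PARALLEL_DONE'.   " placeholder event defined in SM62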

     

     
