Configuring Receiver Parallelization
You can create receiver parallelization rules to control how many messages are delivered to a particular receiver in parallel.
Use cases
- The customer has a multinode PI installation that communicates with both smaller and larger back-end systems. The customer needs to define the maximum number of requests sent to a particular receiver system, so that the system is not overloaded.
- The customer wants to define different processing capacities for different interfaces in PI, as the current global setting is not sufficient.
- The customer wants to configure receiver parallelization, either at the global or interface level, and would like to use message prioritization as well.
Solution
It is now possible to create receiver parallelization rules to control the receiver parallelization. You define the rules in a new user interface in SAP NetWeaver Administrator (NWA), under SOA Monitoring. The UI is implemented as a table, where rows are editable and each row represents one receiver parallelization rule, similar to the message prioritization UI in NWA. Each rule has the following attributes:
- Rule ID – A GUID that is generated when the rule is created; it cannot be modified and is visible on the UI.
- Rule name – A human-readable name for this receiver parallelization rule.
- Count – The number of threads that can work in parallel for the receivers matching this rule. A value of zero is not allowed, because it would effectively block the receiver interface and let a backlog accumulate in the dispatcher queue.
- Enabled – Flag that indicates whether the rule is currently active or not. Rules that are not enabled do not influence the receiver parallelization.
- Receiver Party, Receiver Service – Identify the receiver communication component. Together they must uniquely identify a communication component and cannot be empty or *. The exception is Receiver Party, which can be empty if the rule refers to a communication component without a party.
- Interface and Interface Namespace – Explicitly specify the interface name and namespace, or can be empty or *. An asterisk means that all interfaces matching the rule's definition share the same constraint: they are all limited by the count specified in the rule. This is similar to the limit in the outbound scheduler for specific destinations, since a destination essentially describes the receiver system but can process multiple interfaces.
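The attributes above can be sketched as a small matching routine. This is an illustrative model only, not SAP code; the names (ParallelizationRule, find_limit) and the first-match semantics are assumptions for the sketch.

```python
# Hypothetical sketch of matching a receiver against parallelization rules.
# Names and matching order are illustrative assumptions, not SAP APIs.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class ParallelizationRule:
    rule_id: str           # GUID, generated when the rule is created
    name: str              # human-readable rule name
    count: int             # parallel worker threads allowed (must be > 0)
    enabled: bool          # disabled rules do not influence parallelization
    receiver_party: str    # may be "" for components without a party
    receiver_service: str  # must uniquely identify the communication component
    interface: str         # explicit interface name, or "*" for all
    namespace: str         # explicit namespace, or "*" for all

    def matches(self, party: str, service: str,
                interface: str, namespace: str) -> bool:
        if not self.enabled:
            return False
        if self.receiver_party != party or self.receiver_service != service:
            return False
        # "*" means all interfaces of this receiver share one constraint
        if self.interface not in ("*", interface):
            return False
        if self.namespace not in ("*", namespace):
            return False
        return True

def find_limit(rules: Iterable[ParallelizationRule], party: str, service: str,
               interface: str, namespace: str) -> Optional[int]:
    """Return the thread limit of the first matching enabled rule, if any."""
    for rule in rules:
        if rule.matches(party, service, interface, namespace):
            return rule.count
    return None  # no rule matched: the global setting applies
```

With an asterisk in the interface fields, every interface of the receiver resolves to the same rule, so they all draw on the single configured count.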
The receiver parallelization per interface feature is disabled by default. It is enabled by the Messaging System property messaging.system.queueParallelism.perInterface; changing the property requires a restart of the Messaging System service.
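As a configuration sketch, the property entry might look as follows; the boolean value shown is an assumption, since the source only names the property.

```
# Messaging System service property (value shown is an assumed boolean;
# a Messaging System service restart is required for it to take effect)
messaging.system.queueParallelism.perInterface = true
```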
- The dispatcher queue now checks whether the adapter queue has the capacity to process the message: it checks the available worker threads and whether the receiver parallelization and large message handling limits have been reached. The message backlog stays in the dispatcher queue if maxReceivers is configured. As a result, message prioritization now works together with receiver parallelization.
- The counters for the different receivers are now global rather than local to each queue, which gives better control over receiver concurrency: the effective limit equals the global setting. In the example above, this means the RFC receiver and File sender queues share a common counter for this receiver interface.
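The two points above can be illustrated with a minimal sketch: a global per-receiver counter that all queues share, where the dispatcher only hands a message to an adapter queue if a slot is free, and otherwise leaves it in the backlog. All names here are hypothetical, not SAP internals.

```python
# Illustrative sketch only: a global (cross-queue) per-receiver counter
# capping concurrent deliveries; names are hypothetical, not SAP code.
import threading
from collections import defaultdict

class ReceiverCounters:
    """One shared counter per receiver interface, used by all queues."""

    def __init__(self):
        self._lock = threading.Lock()
        self._in_flight = defaultdict(int)

    def try_acquire(self, receiver_key: str, limit: int) -> bool:
        # Called by the dispatcher before handing a message to an adapter
        # queue. On False the message stays in the dispatcher queue,
        # where message prioritization still applies to the backlog.
        with self._lock:
            if self._in_flight[receiver_key] >= limit:
                return False
            self._in_flight[receiver_key] += 1
            return True

    def release(self, receiver_key: str) -> None:
        # Called once delivery to the receiver has finished.
        with self._lock:
            self._in_flight[receiver_key] -= 1
```

Because the counter is keyed by receiver interface rather than by queue, two queues (for example, an RFC receiver queue and a File sender queue) delivering to the same receiver interface draw on the same limit.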
Monitoring the behavior of the receiver parallelization per interface feature is not straightforward in all cases. The following monitors can be used:
- Message Monitor – Filter for messages with status Delivering. The number of messages in status Delivering for a particular receiver interface must not exceed the limit.
- Adapter Engine Status – Queue Monitor. You can observe the number of worker threads for the queue or queues involved.
- Message Overview and Performance Monitor – Can help with indirect observations, such as message backlogs and processing speed, which depend on receiver parallelization and priority.
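The Message Monitor check described above can be expressed as a simple invariant: per receiver interface, the count of messages in status Delivering must not exceed the configured limit. The sketch below is illustrative; the data shapes are assumptions.

```python
# Sketch of the monitoring invariant: per receiver interface, messages in
# status "Delivering" must not exceed the configured parallelization count.
# The tuple/dict shapes are illustrative assumptions, not a monitoring API.
from collections import Counter

def check_delivering_within_limits(messages, limits):
    """messages: iterable of (receiver_interface, status) tuples.
    limits: dict mapping receiver interface to its configured count.
    Returns, per interface seen in status Delivering, whether the
    invariant holds (interfaces without a rule are unconstrained)."""
    delivering = Counter(
        iface for iface, status in messages if status == "Delivering"
    )
    return {
        iface: count <= limits.get(iface, float("inf"))
        for iface, count in delivering.items()
    }
```

A sustained violation of this invariant would suggest that the feature is not active, for example because the Messaging System property is unset or the service was not restarted.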