Application Threads Pool
The Application Thread Manager is responsible for handling application threads.
The Application Threads Pool monitoring group contains the following important monitors that are activated by default:
● ActiveThreadsCount
Description of monitor
ActiveThreadsCount is the number of threads from the thread pool that are currently executing a task. It shows how many threads from the thread pool are currently processing custom tasks. This monitor does not by itself indicate a problem. The number of used threads depends on the number of configured threads, on the configured maximum pool size, and on whether this maximum has been reached or the pool can still be resized. Even if the pool is resized to its maximum and the values of ActiveThreadsCount and CurrentThreadPoolSize are the same, this does not indicate a problem. The purpose of the thread pool is to restrict the request load to a point where the engine still has enough resources to process the parallel requests fast enough.
Description of possible problems and their impact
If ActiveThreadsCount is equal to MaximumThreadPoolSize, the Application Threads Pool has exhausted its threads and has requested more from the Thread Manager. This indicates a high load, but not a problematic state.
Boundary conditions for the metric (thresholds)
No thresholds can be configured to indicate a problem.
Recommended procedure if a problem occurs
If the load is persistent, you might have to offload this particular server node: add server nodes if the high load is experienced on all server nodes of this instance, or add more instances if the high load is observed in the entire cluster.
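The monitored pool is internal to the engine, but the relationship between ActiveThreadsCount and the maximum pool size can be sketched with the standard Java ThreadPoolExecutor. The class name, pool sizes, and task counts below are illustrative assumptions, not the engine's own:

```java
import java.util.concurrent.*;

public class ActiveThreadsDemo {
    // Observes how many pool threads are busy while tasks run,
    // analogous to ActiveThreadsCount vs. MaximumThreadPoolSize.
    static int activeWhileRunning(int submitted, int maxThreads) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
        CountDownLatch started = new CountDownLatch(submitted);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < submitted; i++) {
            pool.submit(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        started.await();                    // all submitted tasks are now running
        int active = pool.getActiveCount(); // threads currently executing tasks
        release.countDown();
        pool.shutdown();
        return active;
    }

    public static void main(String[] args) throws Exception {
        // 3 tasks on a pool of 4: load is high but the maximum is not reached.
        System.out.println(activeWhileRunning(3, 4)); // prints 3
    }
}
```

As in the description above, active equal to maximum means high load, not necessarily a problem.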
● WaitingTasksCount
Description of monitor
The number of tasks waiting to be executed by a free thread from the thread pool. WaitingTasksCount shows the number of custom tasks that are waiting to be processed, which means that at the moment there are no free threads in the thread pool even though the pool is resized to its maximum size. In this case, the tasks that are waiting to be processed start accumulating in a queue. Even when the load is high, each thread processes a custom task for a certain time and then takes the next task from the queue. When the size of the queue grows, it means that either the request rate is growing and the server node cannot handle such a high load, or there is a blocking situation on the server and the threads are not released from the tasks they are currently executing.
Description of possible problems and their impact
The monitor is important for detecting how effectively the threading system executes the tasks. At a 100% usage rate, caller threads are blocked until some of the currently executing tasks finish. This can be a symptom of a deadlock or a bottleneck.
Boundary conditions for the metric (thresholds)
We recommend that you set the following thresholds: GY 80, YR 100, RY 100, YG 80.
Recommended procedure if a problem occurs
Take a thread dump for analysis. Increase the pool size. Start additional server nodes.
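The queue buildup that WaitingTasksCount reports can be sketched with a standard Java ThreadPoolExecutor whose threads are all occupied. The class name and sizes here are illustrative assumptions:

```java
import java.util.concurrent.*;

public class WaitingTasksDemo {
    // Returns the number of tasks queued while every pool thread is busy,
    // analogous to WaitingTasksCount.
    static int waitingTasks(int threads, int submitted) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                threads, threads, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
        CountDownLatch busy = new CountDownLatch(threads);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < submitted; i++) {
            pool.submit(() -> {
                busy.countDown();
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        busy.await();                        // all threads now occupy a task
        int queued = pool.getQueue().size(); // the rest wait in the queue
        release.countDown();
        pool.shutdown();
        return queued;
    }

    public static void main(String[] args) throws Exception {
        // 5 tasks on 2 threads: 2 run, 3 accumulate in the waiting queue.
        System.out.println(waitingTasks(2, 5)); // prints 3
    }
}
```

If the tasks never released their threads (as in a deadlock), the queue would only ever grow, which is exactly the blocking situation described above.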
● WaitingTasksQueueOverflow
Description of monitor
The number of threads waiting to deposit a task in the waiting tasks queue when the queue is full. WaitingTasksQueueOverflow shows the number of blocked threads that are waiting to submit a task to the queue of waiting tasks. In this situation there are no free threads to process requests and the queue of waiting tasks is full, so threads that need to submit a new task to this queue are blocked until a free slot becomes available. This monitor indicates a problem.
Description of possible problems and their impact
This counter rarely changes its value from 0; any value above 0 means that the capacity of the requests queue is exhausted. This is a severe performance problem. It can also be a symptom of a locking situation (blocking back-end calls or a Java-level deadlock).
Boundary conditions for the metric (thresholds)
We recommend that you set the following thresholds: GY 0, YR 10, RY 10, YG 0.
Recommended procedure if a problem occurs
Take a thread dump for analysis. Increase the pool size. Start additional server nodes.
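The overflow situation can be sketched with a bounded queue from java.util.concurrent: when the queue is full, every thread calling the blocking put() waits for a free slot, just like the submitters counted by WaitingTasksQueueOverflow. The class name, capacity, producer count, and the 200 ms settling delay are demo assumptions:

```java
import java.util.concurrent.*;

public class QueueOverflowDemo {
    // Counts producer threads blocked on a full waiting-tasks queue,
    // analogous to WaitingTasksQueueOverflow.
    static int blockedSubmitters(int capacity, int producers) throws Exception {
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(capacity);
        for (int i = 0; i < capacity; i++) {
            queue.put(() -> { });            // fill the queue to capacity
        }
        Thread[] threads = new Thread[producers];
        for (int i = 0; i < producers; i++) {
            threads[i] = new Thread(() -> {
                try { queue.put(() -> { }); } // blocks: no free slot
                catch (InterruptedException e) { }
            });
            threads[i].start();
        }
        Thread.sleep(200);                    // let producers reach the blocking put()
        int blocked = 0;
        for (Thread t : threads) {
            if (t.getState() == Thread.State.WAITING) blocked++;
        }
        for (Thread t : threads) t.interrupt();
        for (Thread t : threads) t.join();
        return blocked;
    }

    public static void main(String[] args) throws Exception {
        // Queue of capacity 1 is full; 3 submitters all block.
        System.out.println(blockedSubmitters(1, 3));
    }
}
```

A thread dump taken in this state shows the submitters parked inside put(), which is the pattern to look for when the monitor leaves 0.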
● WaitingTasksUsageRate
Description of monitor
The ratio of the current size of the waiting tasks queue to the maximum configured size. WaitingTasksUsageRate shows the percentage of the waiting tasks queue that is currently used.
Description of possible problems and their impact
The ratio shows how effectively the threading system executes the tasks. When the usage is 100%, the caller threads are blocked until any of the currently executing tasks finishes. This can be a symptom of a deadlock or a bottleneck.
Boundary conditions for the metric (thresholds)
We recommend that you set the following thresholds: GY 50%, YR 80%, RY 80%, YG 50%.
Recommended procedure if a problem occurs
Take a thread dump for analysis. Increase the pool size. Start additional server nodes.
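The metric itself is a simple percentage of queue capacity. A minimal sketch, with a hypothetical method name and example values chosen to line up with the recommended thresholds:

```java
public class UsageRateDemo {
    // WaitingTasksUsageRate as a percentage:
    // current queue size relative to the maximum configured size.
    static int usageRate(int queuedTasks, int maxQueueSize) {
        return 100 * queuedTasks / maxQueueSize;
    }

    public static void main(String[] args) {
        System.out.println(usageRate(25, 50));  // prints 50 -> crosses GY
        System.out.println(usageRate(40, 50));  // prints 80 -> crosses YR
        System.out.println(usageRate(50, 50));  // prints 100 -> callers block
    }
}
```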
● ThreadPoolCapacityRate
Description of monitor
The ratio of the current thread pool size to the maximum pool size, in percent. ThreadPoolCapacityRate shows the percentage of the maximum pool size that the thread pool has currently reached. This is an informative monitor and does not indicate a problem.
Description of possible problems and their impact
If the percentage is high, the ability to resize the pool will soon be exhausted. If ThreadPoolUsageRate is also high, a requests queue will form.
Boundary conditions for the metric (thresholds)
We recommend that you set the following thresholds: GY 0, YR -1, RY -1, YG 0.
Recommended procedure if a problem occurs
This monitor does not indicate a problem, so no procedure is required.
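The pool-growth behavior behind ThreadPoolCapacityRate can be sketched with a standard Java ThreadPoolExecutor that starts small and adds threads as load arrives. The class name and sizes are illustrative assumptions:

```java
import java.util.concurrent.*;

public class CapacityRateDemo {
    // ThreadPoolCapacityRate: current pool size as a percentage of the maximum.
    static int capacityRate(ThreadPoolExecutor pool) {
        return 100 * pool.getPoolSize() / pool.getMaximumPoolSize();
    }

    public static void main(String[] args) throws Exception {
        // Core size 1, maximum 4: the pool is resized as tasks arrive.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 0L, TimeUnit.MILLISECONDS, new SynchronousQueue<>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 2; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        // Two busy threads out of a possible four: half the growth headroom used.
        System.out.println(capacityRate(pool)); // prints 50
        release.countDown();
        pool.shutdown();
    }
}
```

At 100% the pool can no longer be resized, which is the point where a high usage rate starts producing a requests queue, as noted above.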