To prevent the database system from accessing the permanent storage (volumes) more often than necessary, read and write operations are buffered in the working memory (RAM).
The unit used for reading from and writing to the working memory is a page (8 KB).
The database system divides the working memory into the following areas:

- I/O buffer cache, which mainly consists of:
  - Data cache (largest part)
  - Converter
  - Internal file directory
- Catalog cache
- Shared SQL cache
- Sequence cache
- Log queues
- For SAP liveCache databases only: OMS heap
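The buffering described above can be sketched as a simple in-memory page cache. This is an illustrative model only, not SAP MaxDB code; the 8 KB page size follows the unit described above.

```python
# Illustrative sketch: reads are served from an in-memory page cache
# before touching permanent storage. Pages are the 8 KB I/O unit.
PAGE_SIZE = 8 * 1024

class PageCache:
    def __init__(self):
        self.pages = {}   # logical page number -> page contents
        self.hits = 0
        self.misses = 0

    def read(self, page_no, storage):
        if page_no in self.pages:            # buffered in RAM: no volume I/O
            self.hits += 1
        else:                                # fetch the page from the data area
            self.misses += 1
            self.pages[page_no] = storage[page_no]
        return self.pages[page_no]

storage = {1: b"a" * PAGE_SIZE, 2: b"b" * PAGE_SIZE}
cache = PageCache()
cache.read(1, storage)
cache.read(1, storage)   # second read is served from the cache
```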
| Working Memory Area | Database Parameters | Description |
|---|---|---|
| I/O Buffer Cache | CacheMemorySize | The database system uses the I/O buffer cache to manage the working memory that is available for I/O operations. Note that this parameter only roughly determines the size of the working memory used by the database; some database functions also need working memory whose consumption is not configured with this parameter. |
| Data Cache | Cannot be configured directly; sized indirectly via CacheMemorySize | The database system uses the data cache to store the pages that it has recently read or write-accessed in the data area, for example tables, indexes, LOB values, and undo log entries. The database system always searches for data in the data cache before accessing the data area, so the size of the data cache has a direct effect on the performance of the database system. |
| Pin Area | DataCachePinAreaThreshold | In general, data is displaced from the data cache in chronological order. In certain situations, however, it makes sense to keep data from specific tables in the data cache for longer. The caching mode lets you specify when data from a specific table is displaced from the data cache; data that is to be kept longer is held in a special area of the data cache, the pin area. More information: Caching Mode |
| Converter | Cannot be configured directly; sized indirectly via CacheMemorySize | The converter is where the database system stores which logical page number is saved at which physical position (block address). When the database starts, the database system imports the complete converter into the working memory. If the converter grows while the database is running and requires more pages, the database system gives it additional pages from the I/O buffer cache. More information: Converter |
| Catalog Cache | TaskSpecificCatalogCacheMinSize | Once the catalog cache is full, the database system moves part of it to the data cache. The database system assigns a catalog cache area to each user task at the start of the database session and releases it again after the database session has ended. We recommend a hit rate above 85% for the catalog cache; the lower limit for the hit rate depends largely on the application. |
| Shared SQL Cache | SharedSQLCommandCacheSize | If the UseSharedSQL database parameter has the value YES, the database system uses the shared SQL cache to store SQL statements that have already been executed, together with their execution plans. The shared SQL cache is shared by all database users, so the system saves each SQL statement only once. If UseSharedSQL has the value NO, the database system instead saves the executed SQL statements for each database user individually in the catalog cache. |
| Sequence Cache | SequenceCacheSize | The database system stores all information about sequences in the sequence cache. |
| Log Queues | LogQueueSize, LogQueues | The database system assigns each transaction a log queue, in which the transaction writes its redo log entries. More than one transaction can write to the same log queue. |
| Working Memory for the Internal File Directory | Cannot be configured directly; sized indirectly via CacheMemorySize | When the database starts, the database system imports the complete internal file directory into the working memory. The internal file directory stores information for logically accessing database objects, such as the assignments of the root pages of B* trees to table IDs. |
| OMS Heap (SAP liveCache databases only) | OmsMaxHeapSize | The OMS heap is a working memory area used only by SAP liveCache databases. To access a persistent object, the database system searches the OMS heap first, then the data cache. If the object is in neither, the database system copies it from the data area to the data cache, and from there to the OMS heap; all changes to the object are then made in the OMS heap. Objects that are read but not changed within a consistent view are deleted from the OMS heap. When a transaction ends (COMMIT), the database system writes the OMS data from the OMS heap to the data cache. If the application requires additional working memory, the OMS heap grows. |
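The lookup order described for the OMS heap can be sketched as follows. This is a hypothetical illustration, with plain dictionaries standing in for the three storage levels; it is not liveCache internals.

```python
# Illustrative sketch: persistent objects are looked up in the OMS heap
# first, then the data cache, then the data area; on a miss the object
# is copied upward level by level.
def get_object(oid, oms_heap, data_cache, data_area):
    if oid in oms_heap:                     # 1. search the OMS heap
        return oms_heap[oid]
    if oid not in data_cache:               # 2. search the data cache
        data_cache[oid] = data_area[oid]    # 3. copy data area -> data cache
    oms_heap[oid] = data_cache[oid]         # copy data cache -> OMS heap
    return oms_heap[oid]

oms, cache = {}, {}
area = {"obj1": "persistent object"}
get_object("obj1", oms, cache, area)        # copied up through both levels
```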
| Working Memory Area | Database Parameters | Maximum Size |
|---|---|---|
| I/O Buffer Cache | CacheMemorySize = 2500 | 19.53 MB |
| Catalog Cache | TaskSpecificCatalogCacheMinSize = 1632 | 12.75 MB |
| Shared SQL Cache | SharedSQLCommandCacheSize = 262144 | 256 MB (initially 16 MB) |
| Sequence Cache | SequenceCacheSize = 1 | 0.01 MB |
| Log Queues | LogQueueSize = 50 (size of a log queue), LogQueues = 1 (number of log queues) | 0.4 MB |
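The maximum sizes above follow from the parameter values; a quick check, assuming the page-based parameters count 8 KB pages. (SharedSQLCommandCacheSize appears to be given in KB rather than pages, since 262144 KB = 256 MB.)

```python
# Converting page-count parameters to MB, assuming 8 KB pages.
PAGE_KB = 8

def pages_to_mb(pages):
    return pages * PAGE_KB / 1024

print(round(pages_to_mb(2500), 2))    # CacheMemorySize            -> 19.53 MB
print(round(pages_to_mb(1632), 2))    # TaskSpecificCatalogCacheMinSize -> 12.75 MB
print(round(pages_to_mb(1), 2))       # SequenceCacheSize          -> 0.01 MB
print(round(pages_to_mb(50 * 1), 1))  # LogQueueSize * LogQueues   -> 0.4 MB
```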
When Is Data Displaced from the Data Cache?
In general, data is displaced from the data cache in chronological order. The caching mode enables you to specify when data from a specific table is displaced from the data cache. Data that is to be kept for a longer time is held in a special area of the data cache, the pin area.
If the pin area is full, the least recently used data from the pin area is moved to the end of the default least recently used (LRU) list and is then displaced by the normal chronological mechanism.
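This displacement rule can be sketched as follows; the behavior shown is an assumption based on the description above, using Python's OrderedDict as the LRU list.

```python
from collections import OrderedDict

# Sketch (assumed behavior): pinned pages live in a separate pin area.
# When the pin area overflows, its least recently used page moves to the
# displacement end of the default LRU list, where the normal
# chronological mechanism removes it.
class DataCache:
    def __init__(self, pin_capacity):
        self.lru = OrderedDict()   # oldest entry first -> displaced first
        self.pin = OrderedDict()   # pin area, oldest entry first
        self.pin_capacity = pin_capacity

    def touch(self, page_no, pinned=False):
        area = self.pin if pinned else self.lru
        area.pop(page_no, None)
        area[page_no] = True                          # most recently used at the end
        if len(self.pin) > self.pin_capacity:
            victim, _ = self.pin.popitem(last=False)  # LRU page of the pin area
            self.lru[victim] = True
            self.lru.move_to_end(victim, last=False)  # next displacement candidate

    def displace(self):
        return self.lru.popitem(last=False)[0]        # oldest page leaves first
```

For example, with a pin area of two pages, pinning a third page pushes the oldest pinned page to the front of the default LRU list, so it is displaced before any regular page.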
How Does the Database System Synchronize Access to Caches?
A stripe is a logical part of a cache (data cache or converter); each stripe contains several critical sections. To synchronize the accesses of competing tasks, the database system uses synchronization mechanisms such as regions and reader-writer locks.
You can override the access restrictions calculated by the system using the special database parameters DataCacheStripes and ConverterStripes.
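The idea behind striping can be sketched as follows. This is an illustrative model, not MaxDB internals; the stripe count stands in for parameters like DataCacheStripes.

```python
import threading

# Illustrative sketch: a cache partitioned into stripes, each guarded by
# its own lock, so tasks touching different stripes do not block each other.
class StripedCache:
    def __init__(self, stripes=8):
        self.locks = [threading.Lock() for _ in range(stripes)]
        self.parts = [{} for _ in range(stripes)]

    def _stripe(self, page_no):
        return page_no % len(self.parts)   # pick the stripe for this page

    def put(self, page_no, page):
        i = self._stripe(page_no)
        with self.locks[i]:                # only this stripe is blocked
            self.parts[i][page_no] = page

    def get(self, page_no):
        i = self._stripe(page_no)
        with self.locks[i]:
            return self.parts[i].get(page_no)
```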
| Name | Used for | Description |
|---|---|---|
| Region | Data cache | EXCLUSIVE: When a task accesses a critical section, the associated region blocks this critical section for all other tasks. |
| Reader-writer lock (share lock) | Catalog cache, shared SQL cache | EXCLUSIVE: If a task write-accesses a critical section, the associated reader-writer lock blocks this critical section for all other task accesses. SHARED: If a task read-accesses a critical section, the associated reader-writer lock blocks this critical section for all task write accesses but allows other task read accesses. |
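The SHARED/EXCLUSIVE behavior in the table can be sketched with a minimal reader-writer lock. This is an illustrative implementation, not the database system's own; Python's standard library has no built-in reader-writer lock, so one is built here from a condition variable.

```python
import threading

# Minimal reader-writer lock sketch mirroring the table above:
# SHARED readers may overlap each other; an EXCLUSIVE writer
# blocks all other accesses until it releases.
class ReaderWriterLock:
    def __init__(self):
        self.cond = threading.Condition()
        self.readers = 0       # number of active SHARED holders
        self.writer = False    # True while an EXCLUSIVE holder is active

    def acquire_shared(self):
        with self.cond:
            while self.writer:             # readers wait only for a writer
                self.cond.wait()
            self.readers += 1

    def release_shared(self):
        with self.cond:
            self.readers -= 1
            self.cond.notify_all()

    def acquire_exclusive(self):
        with self.cond:
            while self.writer or self.readers:   # writer waits for everyone
                self.cond.wait()
            self.writer = True

    def release_exclusive(self):
        with self.cond:
            self.writer = False
            self.cond.notify_all()
```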