You can display information on current memory consumption by choosing Main Memory in the Cache Monitor.
The following table offers an overview of the individual specifications:
Runtime Objects table

Specification | Meaning |
---|---|
Maximum cache size | 200 MB by default |
Current cache size | Sum of the sizes of all cache structure elements, in KB (see Cache Structure, Bytes column) |
Current swap size | Size of the background store (flat file or cluster table), in KB |
Cache reserved | Ratio of the current cache size to the maximum cache size, in % |
Current entries, total | Sum of current cache entries and current swap entries |
Current cache entries | Number of all cache entries (cache structure elements); see Cache Structure |
Current swap entries | Number of all entries in the background store |
Shared Memory table

Specification | Meaning |
---|---|
Buffer poll time | Time at which the buffer capacity was last read |
Buffer reserved | Degree to which the cache memory is used, in %. This value corresponds to the minimum free bytes and/or free directory entries. To examine these limiting values in greater detail, choose Buffer Monitor or Buffer Overview (see OLAP Cache Monitor). |
Buffer capacity cache | Proportion of the shared buffer that is occupied by the cache |
When the capacity of the cache (the maximum cache size) is exhausted and more data is to be written to the cache, two procedures are available: displacement, or storage in a background store (swap). The following section outlines the basic principle of both procedures using status diagrams:
Principle: Caching with Displacement (Main Memory Cache without Swapping)
When data is written to the cache, the entry receives the status NEW. (A renewed write also receives this status.)
When data is read from the cache, the entry receives the status READ. (A renewed read also receives this status.)
The LRU displacement mechanism starts as soon as the cache memory capacity is exhausted. It checks the status of the entries and removes the entry that was read least recently.
When the LRU algorithm comes across an entry with the status READ, it resets it to NEW.
When the LRU algorithm comes across an entry with the status NEW, it ensures that this entry is overwritten with the new data. If you need to access the overwritten data again later, it must be read from the database again (cache miss).
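The displacement principle above resembles a second-chance scan over entry statuses. The following is a minimal Python sketch of that principle; the class and its data structures are illustrative assumptions, not the actual OLAP cache implementation.

```python
class DisplacementCache:
    """Sketch: main memory cache without swapping (NEW/READ statuses)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # key -> value
        self.status = {}    # key -> "NEW" or "READ"
        self.order = []     # scan order for the displacement run

    def write(self, key, value):
        if key not in self.entries and len(self.entries) >= self.capacity:
            self._displace()
        if key not in self.entries:
            self.order.append(key)
        self.entries[key] = value
        self.status[key] = "NEW"   # a (renewed) write receives the status NEW

    def read(self, key):
        if key not in self.entries:
            return None            # cache miss: caller re-reads from the database
        self.status[key] = "READ"  # a (renewed) read receives the status READ
        return self.entries[key]

    def _displace(self):
        # Scan the entries: a READ entry is reset to NEW (second chance);
        # the first NEW entry found is removed to make room.
        while True:
            key = self.order.pop(0)
            if self.status[key] == "READ":
                self.status[key] = "NEW"
                self.order.append(key)
            else:  # status NEW: this entry can be overwritten
                del self.entries[key]
                del self.status[key]
                return
```

In this sketch, an entry that was read since the last scan survives one displacement run, while entries still marked NEW are removed first, matching the behavior described above.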
Principle: Caching with Storage in a Background Store (Swap) (Main Memory Cache with Swapping)
When data is written to the cache, the entry receives the status WRITE. (A renewed write also receives this status.)
When data is read from the cache, the entry receives the status READ WRITE. (A renewed read also receives this status.)
The LRU replacement mechanism starts as soon as the cache memory capacity has been exhausted. This checks the status of entries and removes the entry that was last read the longest time ago.
When the LRU algorithm comes across an entry with the status READ WRITE, it resets it to READ DIRTY. (A new read retains this status; a new write returns the entry to the status READ WRITE.) The DIRTY flag serves as a marker, ensuring that this entry is persisted.
When the LRU algorithm comes across an entry with the status READ DIRTY, it resets it to DIRTY. The LRU algorithm does the same with entries that have the status WRITE and have not been read. (A new read resets the entry to the status READ DIRTY, a new write to the status WRITE.)
When the LRU algorithm comes across an entry with the status DIRTY, it ensures that this entry is stored in the background store and marked as SWAPPED. If the data needs to be accessed again later, it can be read from the background store; at the same time, a new cache entry with the status WRITE is created.
After at most two passes through the entries, the LRU algorithm has, by resetting the statuses in this way, found an entry that can be overwritten.
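The status machine above can be sketched as follows. This is a simplified Python illustration of the described transitions (WRITE, READ WRITE, READ DIRTY, DIRTY, swapped out); the class, the dictionary-based background store, and the scan strategy are assumptions for demonstration, not the actual implementation.

```python
class SwappingCache:
    """Sketch: main memory cache with swapping. Each LRU pass degrades
    entry statuses one step until a DIRTY entry can be persisted."""

    # status degradation applied by one LRU pass
    SCAN = {"READ WRITE": "READ DIRTY", "READ DIRTY": "DIRTY", "WRITE": "DIRTY"}

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> [status, value]
        self.swap = {}     # stand-in for the flat file / cluster table

    def write(self, key, value):
        if key in self.entries:
            status = self.entries[key][0]
            # a write to a read entry yields READ WRITE, otherwise WRITE
            new = "READ WRITE" if "READ" in status else "WRITE"
            self.entries[key] = [new, value]
            return
        while len(self.entries) >= self.capacity:
            self._lru_pass()
        self.entries[key] = ["WRITE", value]

    def read(self, key):
        if key in self.entries:
            status, value = self.entries[key]
            # a read of a DIRTY entry yields READ DIRTY, otherwise READ WRITE
            self.entries[key][0] = "READ DIRTY" if "DIRTY" in status else "READ WRITE"
            return value
        if key in self.swap:  # swapped out: read back, re-cache with status WRITE
            value = self.swap.pop(key)
            self.write(key, value)
            return value
        return None           # cache miss

    def _lru_pass(self):
        # One pass: a DIRTY entry is persisted to the background store and
        # freed; all other entries have their status degraded one step.
        for key in list(self.entries):
            status = self.entries[key][0]
            if status == "DIRTY":
                self.swap[key] = self.entries.pop(key)[1]  # persisted (SWAPPED)
                return
            self.entries[key][0] = self.SCAN[status]
```

As described above, an entry needs at most two passes to reach DIRTY and be swapped out, and swapped data remains readable from the background store instead of causing a database re-read.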
Further Recommendations
Since entries in the cache can only be deleted automatically by displacement, in all cache modes except Main Memory Cache without Swapping they remain in the cache more or less "forever", even if they have not been needed for a long time. We therefore recommend that you schedule the program RSR_CACHE_RSRV_CHECK_ENTRIES to run regularly in background processing (say, once a week). This program identifies and deletes such old entries. This saves memory space and improves cache performance, as the system no longer needs to search through old entries every time it accesses the cache.