Next: , Previous: AVL-Tree, Up: Data structures

4.5.3 Cache

The caching mechanism in VRR manages a limited amount of available memory with LRU (least recently used) deallocation: when the cache is full, the least recently used blocks are freed first. The cache size limit can be set globally in the user settings.

To share the cache between several independent parts of the project, each of which can have its own optimized allocator, the cache is accessed through a general interface. Every part of VRR that uses the global shared cache is called a cache user. Each user is identified by a unique number and a pointer to a deallocation routine. When a user cannot allocate a new block because the cache is full, the least recently used block is found and, depending on its stored identification number, the appropriate deallocation routine is called. This is repeated until there is enough free space in the cache for the new block.
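The eviction loop described above can be sketched as follows. This is a minimal illustration, not the real lib/cache.h interface: the type names, the dealloc_table array, and cache_make_room are all hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of per-user eviction; names are illustrative
 * and do not come from lib/cache.h. */

typedef void (*dealloc_fn)(void *block);

struct cached_block {
    int user_id;                /* identifies the owning cache user */
    size_t size;                /* bytes occupied in the cache */
    void *data;
    struct cached_block *next;  /* toward more recently used blocks */
};

static dealloc_fn dealloc_table[16];  /* one routine per user id */
static size_t cache_used, cache_limit;
static struct cached_block *lru_head; /* least recently used block */

/* Evict LRU blocks until `need` more bytes fit in the cache. */
static void cache_make_room(size_t need)
{
    while (lru_head && cache_used + need > cache_limit) {
        struct cached_block *victim = lru_head;
        lru_head = victim->next;
        cache_used -= victim->size;
        /* dispatch to the owning user's deallocation routine */
        dealloc_table[victim->user_id](victim->data);
    }
}
```

The point of the identification number is visible in the last line: the shared cache never frees a block itself, it only asks the block's owner to do so, which lets each user keep its own allocator.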

All cached blocks are held in a doubly linked list, which allows all operations to run in constant time. Items in the list are ordered by the time of last access. Each access to an allocated block must be followed by a call to cache_touch, which moves the block to the head of the list.
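The constant-time move-to-head step can be sketched like this. The node layout and the function body here are illustrative assumptions; the real cache_touch lives in lib/cache.h and may differ in detail.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of a constant-time "touch" on a doubly
 * linked list kept in access order (head = most recently used). */

struct node {
    struct node *prev, *next;
};

static struct node *head;  /* most recently used */
static struct node *tail;  /* least recently used */

/* Unlink `n` and reinsert it at the head: O(1) pointer updates,
 * independent of the list length. */
static void cache_touch(struct node *n)
{
    if (n == head)
        return;                     /* already most recently used */
    /* unlink n from its current position */
    n->prev->next = n->next;        /* n != head, so n->prev != NULL */
    if (n->next)
        n->next->prev = n->prev;
    else
        tail = n->prev;             /* n was the tail */
    /* reinsert n at the head */
    n->prev = NULL;
    n->next = head;
    head->prev = n;
    head = n;
}
```

Because only a fixed number of pointers change, touching a block costs the same no matter how many blocks the cache holds, which is why the list is doubly linked rather than singly linked.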

A simple self-descriptive example of cache usage can be found in the file lib/cache_example.c, and a more detailed description of the interface in the file lib/cache.h.