
The rlm_cache module allows values of arbitrary attributes to be stored against a key, and retrieved for future requests matching that key.

The key value itself is an xlat expansion, meaning it can be made up from multiple attributes from any list, or from SQL queries, LDAP searches or any other module which registers an xlat function.
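For example, a key that combines an attribute with the result of an SQL lookup might look like the following (a hedged sketch; the query and column names are purely illustrative):

key = "%{User-Name}.%{sql:SELECT domain FROM users WHERE username = '%{User-Name}'}"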

Feel free to add additional use cases if you think of new and interesting ways to use this module.

Note: Versions prior to v3.0.6 had different behaviour to that described here. Please update to v3.0.6 or later if you wish to use the rlm_cache module in the ways described below.

Configuration

See here for the default configuration file. The config file describes the module's operation adequately, so it won't be repeated here.

There are multiple interesting ways to use rlm_cache; here are some potentially useful ones...

One call caching

This is how the module operates by default. If no cache entry is found, the attributes listed will be expanded, stored and added to the current list. If a cache entry is found and has not expired, it will be merged with the current request.

This is useful where values from a request or reply need to be cached, or where values from a small number of queries performed via xlat expansion need to be cached, for example, retrieving a user's password.
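A minimal sketch of a cache instance for this case, assuming an SQL module instance named sql; the query, column name and TTL are illustrative, and newer versions may require a leading & on attribute references:

cache {
    key = "%{User-Name}"
    ttl = 600
    update {
        # On a miss the xlat runs and the result is stored; on a hit the
        # cached value is merged into the control list without querying SQL
        control:Cleartext-Password := "%{sql:SELECT password FROM users WHERE username = '%{User-Name}'}"
    }
}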

Two call caching

First we call the module in Cache-Status-Only mode to find out whether it already has an entry. If it doesn't, we run other modules to retrieve additional values, then call the cache module again to add the entry.

This should be used where entire user entries from directories like LDAP are being cached, and multiple attributes are retrieved at once.

authorize {
    update control {
        # Ask the cache module only to report whether an entry exists
        Cache-Status-Only = 'yes'
    }
    cache
    if (notfound) {
        # No entry yet; retrieve the user's attributes from LDAP
        ldap
    }
    # Create the entry now, or merge the cached attributes if one was found
    cache
}
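The attributes to store are listed in the cache instance's update section. A hedged sketch for the LDAP case above; the attribute names are illustrative and assume the ldap module has already populated them:

cache {
    key = "%{User-Name}"
    ttl = 3600
    update {
        # Cache whatever the ldap module placed in the control and reply lists
        control:NT-Password := "%{control:NT-Password}"
        reply:Session-Timeout := "%{reply:Session-Timeout}"
        reply:Framed-IP-Address := "%{reply:Framed-IP-Address}"
    }
}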

Deduplication

The cache module can also be used for deduplication. If the key is set to something like %{Acct-Unique-ID}.%{Acct-Status-Type}, the number of Interim-Update packets processed can be limited to one per TTL period.

For this, an empty attribute list may be configured, as nothing needs to be stored against the key.
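A possible cache instance for this, as a hedged sketch (the TTL is illustrative):

cache {
    key = "%{Acct-Unique-ID}.%{Acct-Status-Type}"
    ttl = 600
    # Empty update section: nothing is stored or merged; the entry
    # merely records that a packet with this key has been seen
    update {
    }
}

The accounting policy below then only hands packets to sql when the cache call returns ok.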

accounting {
    # Limit the number of interim updates processed by our database
    cache
    if (ok) {
        # A new entry was created, so this is the first packet with this
        # key within the TTL; pass it on to SQL
        sql
    }
}

It may also be advisable to remove entries at the end of a session:

accounting {
    if (Acct-Status-Type == 'Stop') {
        update control {
            # A TTL of zero causes the existing cache entry to be deleted
            Cache-TTL = 0
        }
        cache
        sql
    }
    else {
        cache
        if (ok) {
            sql
        }
    }
}

Crappy rate limiting

Setting the add-stats configuration item to yes will add the Cache-Entry-Hits attribute. Using this, you can determine the number of packets matching the key within the TTL period, and decide whether or not to respond.
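As a hedged sketch, assuming Cache-Entry-Hits ends up in the request list and a threshold of five packets per TTL period:

authorize {
    cache
    if (Cache-Entry-Hits > 5) {
        # More than five packets matched this key within the current TTL window
        reject
    }
}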

There are numerous reasons why this is a poor way to rate limit, the main one being that the packet queue can still become a bottleneck and result in packets being dropped.