There are no specific controls in the LoadMaster Web User Interface (WUI) to limit the maximum file size that will be considered for compression when compression is enabled. However, because any file that is compressed is first cached, the cache size necessarily limits the maximum compressible file size. Limiting the maximum size of the cache therefore indirectly limits the maximum file size that can be compressed.
The maximum amount of memory available for the cache is 1/5 (one fifth) of the physical memory. As a general rule of thumb, the maximum file size that can be cached and compressed at any given time is equal to 2/3 (two thirds) of the configured maximum cache size, minus the amount of space already allocated to cached files. If the cache is empty, files up to 2/3 of the configured cache size can be compressed; if other files are in the cache, the limit goes down accordingly. Therefore, the actual file size that can be compressed varies according to how much cacheable traffic is flowing through the LoadMaster, and can be much lower than the absolute cacheable maximum file size.
This is best illustrated by an example. The cache size setting is located under System Configuration > Miscellaneous Options > AFE Configuration in the WUI, and defaults to 100 Mbytes. Therefore:
1. The amount of space available for the first file that is cached is equal to 100 * 2/3 (since the cache is empty at this time), or about 66 Mbytes.
2. If a second file arrives nanoseconds after the first, the space available for caching it is equal to 66 Mbytes minus the size of the first file still in the cache.
3. Available space continues to shrink as more files enter the cache; space is returned to the available pool as files are compressed and leave the cache, or are aged out as a normal part of cache maintenance.
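The sizing rule in the steps above can be sketched as follows. This is a minimal illustration of the arithmetic described in this article, not LoadMaster code; the function name and units are chosen here for clarity.

```python
def available_for_compression(cache_size_mb, cached_file_sizes_mb):
    """Return the space (in Mbytes) left for the next cacheable file.

    The cap is 2/3 of the configured cache size, reduced by the space
    already allocated to files currently in the cache.
    """
    limit = cache_size_mb * 2 / 3
    used = sum(cached_file_sizes_mb)
    return max(limit - used, 0)

# Default 100 Mbyte cache, empty: about 66 Mbytes for the first file.
print(available_for_compression(100, []))
# After a 40 Mbyte file is cached, roughly 26 Mbytes remain.
print(available_for_compression(100, [40]))
```

Note that this reflects only the rule of thumb; the actual limit at any moment also depends on how quickly files are compressed and aged out of the cache.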
Careful analysis of the traffic patterns on your LoadMaster is necessary to find the right balance between cache size, compression, and the availability of resources for other resource-intensive operations (such as SSL Offloading and Content Rules).
When a file is rejected for caching or compression, an entry is written in the log that follows this format:
Out of memory for cacheing - connection dropped (file size X total size Y)
Out of memory for compressing - connection dropped (file size X total size Y)
X is the size of the file that was rejected, and Y is the amount of free space in the cache at that time.
Searching the log for lines like these at varying levels of system load will help determine how to set the cache size optimally for your environment. To get an accurate picture, sample from light to heavy traffic loads with the mix of cacheable, compressible, and other resource-intensive traffic that is typical of your configuration.
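One way to survey these rejections is to scan the log programmatically. The sketch below assumes the message format quoted above (including its spelling of "cacheing"); the exact layout in your log may vary.

```python
import re

# Matches the rejection entries quoted in this article:
#   Out of memory for cacheing - connection dropped (file size X total size Y)
#   Out of memory for compressing - connection dropped (file size X total size Y)
PATTERN = re.compile(
    r"Out of memory for (cacheing|compressing) - connection dropped "
    r"\(file size (\d+) total size (\d+)\)"
)

def rejected_files(log_lines):
    """Yield (reason, rejected_file_size, free_cache_space) per rejection entry."""
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            yield m.group(1), int(m.group(2)), int(m.group(3))

sample = [
    "Out of memory for compressing - connection dropped "
    "(file size 70000000 total size 12000000)",
    "unrelated log line",
]
for reason, file_size, free in rejected_files(sample):
    print(reason, file_size, free)
```

Frequent rejections where the file size regularly exceeds the free space suggest the cache is undersized for your traffic mix.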
This document was last updated on 31 January 2019.