Each entry of the cache consists of five fields: the operator, three pointers to operands, and a pointer to the result. Some operators have fewer than three arguments; the policy adopted in the CUDD package is that the last argument is replicated as many times as necessary to fill all the available operand slots.
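The entry layout and the replication policy can be sketched as follows. This is an illustrative stand-in, not CUDD's actual declarations: `CacheEntry`, `fillEntry2`, and the field names are hypothetical, and `DdNode` is left as an opaque type.

```c
#include <stddef.h>

/* Hypothetical stand-in for CUDD's DdNode; used only through pointers. */
typedef struct DdNode DdNode;

/* Sketch of a computed-table entry: an operator tag, three operand
   pointers, and a pointer to the result. */
typedef struct CacheEntry {
    unsigned int op;   /* operator code */
    DdNode *f, *g, *h; /* operand slots */
    DdNode *data;      /* result */
} CacheEntry;

/* For a two-operand operator, replicate the last argument to fill the
   unused operand slot, as the CUDD policy prescribes. */
static void fillEntry2(CacheEntry *e, unsigned int op,
                       DdNode *f, DdNode *g, DdNode *result)
{
    e->op = op;
    e->f = f;
    e->g = g;
    e->h = g;   /* two-operand case: g also fills the third slot */
    e->data = result;
}
```

A one-operand operator would replicate its single argument into both remaining slots in the same way.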
The cache does not contribute to the reference counts of the nodes. The fact that the cache contains a pointer to a node does not imply that the node is alive. Instead, when garbage collection takes place, all entries of the cache pointing to dead nodes are cleared.
The cache is also cleared (of all entries) when dynamic reordering takes place. In both cases, the entries removed from the cache are about to become invalid.
All operands and results in a cache entry must be pointers to DdNodes. If a function produces more than one result, or uses more than three arguments, there are currently two solutions:
There are three sets of interface functions to the cache. The first set is for functions with three operands: cuddCacheInsert and cuddCacheLookup. The second set is for functions with two operands: cuddCacheInsert2 and cuddCacheLookup2. The third set is for functions with one operand: cuddCacheInsert1 and cuddCacheLookup1. The second set is slightly faster than the first, and the third set is slightly faster than the second.
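The two-operand functions are used in the standard lookup-compute-insert pattern of a recursive step. The following self-contained sketch models that pattern with a toy lossy computed table; `cacheLookup2`, `cacheInsert2`, the hash constants, and the `Node` type are all illustrative assumptions, standing in for cuddCacheLookup2 and cuddCacheInsert2 operating on DdNode pointers.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy computed table keyed on (op, f, g); sizes and hashing are
   illustrative, not CUDD's. Nodes are modeled as integers. */
typedef intptr_t Node;

enum { CACHE_SLOTS = 256 };
typedef struct { unsigned op; Node f, g, data; int valid; } Slot;
static Slot cache[CACHE_SLOTS];

static size_t slotOf(unsigned op, Node f, Node g) {
    uintptr_t h = (uintptr_t)op * 12582917u
                ^ (uintptr_t)f * 2654435761u
                ^ (uintptr_t)g * 40503u;
    return (size_t)(h % CACHE_SLOTS);
}

/* Return nonzero and set *out on a hit; zero on a miss. */
static int cacheLookup2(unsigned op, Node f, Node g, Node *out) {
    Slot *s = &cache[slotOf(op, f, g)];
    if (s->valid && s->op == op && s->f == f && s->g == g) {
        *out = s->data;
        return 1;
    }
    return 0;
}

/* Lossy insert: whatever previously occupied the slot is overwritten. */
static void cacheInsert2(unsigned op, Node f, Node g, Node data) {
    Slot *s = &cache[slotOf(op, f, g)];
    s->op = op; s->f = f; s->g = g; s->data = data; s->valid = 1;
}

enum { OP_MYADD = 1 };

/* The lookup-compute-insert discipline of a recursive step. */
static Node myAdd(Node f, Node g) {
    Node res;
    if (cacheLookup2(OP_MYADD, f, g, &res))
        return res;                      /* cache hit */
    res = f + g;                         /* stand-in for the real computation */
    cacheInsert2(OP_MYADD, f, g, res);   /* memoize before returning */
    return res;
}
```

In real CUDD code the operator tag is typically the address of the recursive function itself, which guarantees distinct tags without central registration.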
The size of the cache can increase during the execution of an application. (There is no way to decrease the size of the cache, though it would not be difficult to do so.) When a cache miss occurs, the package uses the following criteria to decide whether to resize the cache:
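A reward-based resizing decision might be sketched as below. This is a hedged illustration only: the function name, the counters, and the 30% threshold are assumptions for the sketch, not the exact criteria CUDD applies.

```c
/* Sketch of a reward-based cache-resizing decision, assuming counters
   of lookups and hits accumulated since the last resize, the current
   number of slots, and a hard upper limit. The 0.30 threshold is an
   assumption for illustration, not CUDD's actual parameter. */
static unsigned decideCacheSize(unsigned long lookups, unsigned long hits,
                                unsigned size, unsigned limit)
{
    /* Grow only if the hit rate (the "reward") is high enough and the
       limit has not been reached; otherwise keep the current size. */
    if (size < limit && lookups > 0 &&
        (double)hits / (double)lookups > 0.30) {
        return size * 2;   /* double the cache */
    }
    return size;
}
```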
The rationale for the "reward-based" policy is as follows. In many BDD/ADD applications the hit rate is not very sensitive to the size of the cache: it is primarily a function of the problem instance at hand. If a large hit rate is observed, chances are that with a large cache the results of large problems (those that would take longest to solve) will survive in the cache, without being overwritten, long enough to cause valuable cache hits. Notice that when a large problem is solved more than once, so are its recursively generated subproblems. If the hit rate is low, the probability of large problems being solved more than once is also low.
The other observation about the cache sizing policy is that there is little point in keeping a cache much larger than the unique table. Every time the unique table "fills up," garbage collection is invoked and the cache is cleared of all dead entries. A cache that is much larger than the unique table is therefore less than fully utilized.
Sometimes it may be necessary or convenient to use a local cache. A local cache can be lossless (no results are ever overwritten), or it may store objects for which canonical representations are not available. One important fact to keep in mind when using a local cache is that local caches are not cleared during garbage collection or before reordering. Therefore, it is necessary to increment the reference count of all nodes pointed to by a local cache, unless their reference counts are guaranteed positive in some other way (one such way is to include all partial results in the global result). Before disposing of the local cache, all elements stored in it must be passed to Cudd_RecursiveDeref. As a consequence of the fact that all results in a local cache are referenced, it is generally convenient also to store in the local cache the results of trivial problems, which are not usually stored in the global cache. Otherwise, after a recursive call, it is difficult to tell whether the result is in the cache, and therefore referenced, or not in the cache, and therefore not referenced.
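The reference-counting discipline for a local cache can be modeled with the following toy sketch, where nodes are indices into a reference-count array standing in for DdNode reference counts. All names here are hypothetical; in real CUDD code the increment would be Cudd_Ref and the release in the disposal loop would be Cudd_RecursiveDeref.

```c
#include <stddef.h>

/* Toy model of the local-cache referencing discipline. Nodes are
   indices into refcount[], a stand-in for DdNode reference counts;
   localInsert/localDispose are hypothetical names. */
enum { NODES = 8, LOCAL_SLOTS = 16 };

static int refcount[NODES];
static int localCache[LOCAL_SLOTS];
static size_t localUsed;

/* Every result stored in the local cache is referenced, so it cannot
   be reclaimed by garbage collection while the cache lives. */
static void localInsert(int node) {
    refcount[node]++;            /* Cudd_Ref in real CUDD code */
    localCache[localUsed++] = node;
}

/* Before disposing of the local cache, release every stored result. */
static void localDispose(void) {
    for (size_t i = 0; i < localUsed; i++)
        refcount[localCache[i]]--;  /* Cudd_RecursiveDeref in CUDD */
    localUsed = 0;
}
```

The invariant to preserve is that after disposal the reference counts return to what they were before the cache was created; forgetting the disposal loop leaks references and keeps dead nodes alive forever.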
An alternative approach to referencing the results in the local caches is to install hook functions (see Section 3.16) to be executed before garbage collection.