
The Cache

  

Each entry of the cache consists of five fields: the operator, three pointers to the operands, and a pointer to the result. Some operators have fewer than three arguments. The policy adopted in the CUDD package is that the last argument is replicated as many times as necessary to fill all the available operand slots.
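
The following sketch illustrates this entry format. It is not the actual declaration used by the package (see cuddInt.h for that); the struct and field names are assumptions chosen for exposition, and later sketches in this section reuse them.

    /* Illustrative layout of a cache entry: operator tag, three operand
     * pointers, and the result.  Names and types are assumptions; the
     * real declaration lives in cuddInt.h. */
    #include "cuddInt.h"

    typedef struct CacheEntrySketch {
        unsigned int op;        /* tag identifying the operator */
        DdNode *f, *g, *h;      /* operand pointers; for operators with
                                   fewer arguments the last argument is
                                   replicated into the unused slots */
        DdNode *data;           /* pointer to the result node */
    } CacheEntrySketch;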

The cache does not contribute to the reference counts of the nodes. The fact that the cache contains a pointer to a node does not imply that the node is alive. Instead, when garbage collection takes place, all entries of the cache pointing to dead nodes are cleared.

The cache is also cleared (of all entries) when dynamic reordering takes place. In both cases, the entries removed from the cache are about to become invalid.
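
Building on the illustrative entry layout sketched above, the clearing of dead entries at garbage collection time could proceed along these lines. This is only a sketch under those assumptions, not the actual loop in the package. (Cudd_Regular strips a possible complement bit before the reference count is read.)

    /* Sketch: invalidate every cache entry that points to a dead node
     * (reference count zero).  Uses the illustrative CacheEntrySketch
     * layout defined above. */
    static void purgeDeadEntries(CacheEntrySketch *cache, unsigned int slots)
    {
        unsigned int i;
        for (i = 0; i < slots; i++) {
            CacheEntrySketch *e = &cache[i];
            if (e->data == NULL) continue;              /* empty slot */
            if (Cudd_Regular(e->f)->ref == 0 ||
                Cudd_Regular(e->g)->ref == 0 ||
                Cudd_Regular(e->h)->ref == 0 ||
                Cudd_Regular(e->data)->ref == 0) {
                e->data = NULL;                         /* clear the entry */
            }
        }
    }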

All operands and results in a cache entry must be pointers to DdNodes. If a function produces more than one result, or uses more than three arguments, there are currently two solutions:

  * Build a separate, local cache for that function.
  * Combine multiple operands, or multiple results, into a single diagram, so that the computation fits the standard cache entry format.

Support for the former solution is under development. (See cuddLCache.c.) Support for the latter solution may be provided in future versions of the package.

There are three sets of interface functions to the cache. The first set is for functions with three operands: cuddCacheInsert and cuddCacheLookup. The second set is for functions with two operands: cuddCacheInsert2 and cuddCacheLookup2. The third set is for functions with one operand: cuddCacheInsert1 and cuddCacheLookup1. The second set is slightly faster than the first, and the third set is slightly faster than the second.
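
A recursive function typically brackets its computation with a lookup and an insertion, as in the sketch below. The parameter order of cuddCacheLookup2 and cuddCacheInsert2, and the use of the top-level function (here Cudd_bddAnd) as operator tag, are assumptions based on common usage; check cuddInt.h for the exact prototypes. The helper computeOnCofactors is hypothetical.

    /* Sketch of the usual lookup/compute/insert pattern for a
     * two-operand recursive function.  Prototypes are assumed; see
     * cuddInt.h for the authoritative declarations. */
    #include "cuddInt.h"

    static DdNode *computeOnCofactors(DdManager *dd, DdNode *f, DdNode *g);

    DdNode *myAndRecur(DdManager *dd, DdNode *f, DdNode *g)
    {
        DdNode *res;

        /* Terminal cases would be handled here, before the lookup. */

        res = cuddCacheLookup2(dd, Cudd_bddAnd, f, g);  /* cache hit? */
        if (res != NULL) return res;

        res = computeOnCofactors(dd, f, g);  /* hypothetical helper: the
                                                actual recursion goes here */
        if (res == NULL) return NULL;        /* propagate failure */

        cuddCacheInsert2(dd, Cudd_bddAnd, f, g, res);   /* remember the result */
        return res;
    }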

Cache Sizing

  

The size of the cache can increase during the execution of an application. (There is no way to decrease the size of the cache, though it would not be difficult to do it.) When a cache miss occurs, the package uses the following criteria to decide whether to resize the cache:

  1. If the cache already exceeds the limit given by the maxCache field of the manager, no resizing takes place. The limit is the minimum of two values: a value set at initialization time and possibly modified by the application, which constitutes the hard limit beyond which the cache will never grow; and a number that depends on the current total number of slots in the unique table.
  2. If the cache is not too large already, resizing is decided based on the hit rate. The policy adopted by the CUDD package is ``reward-based." If the cache hit rate is high, then it is worthwhile to increase the size of the cache.
When resizing takes place, the statistical counters   used to compute the hit rate are reinitialized so as to prevent immediate resizing. The number of entries is doubled.
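
The two criteria can be condensed into a small test like the one sketched here. The parameter names and the hit-rate threshold are illustrative assumptions; the actual test is performed inside the cache lookup code of the package.

    /* Sketch of the reward-based resizing decision.  Names and the
     * hit-rate threshold are illustrative, not the package's values. */
    static int shouldResizeCache(unsigned int slots, unsigned int maxCache,
                                 double hits, double misses)
    {
        double lookups = hits + misses;
        if (slots >= maxCache) return 0;   /* criterion 1: limit reached */
        if (lookups < 1000.0) return 0;    /* too few lookups to judge */
        return hits / lookups > 0.3;       /* criterion 2: reward a high hit rate */
    }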

The rationale for the ``reward-based" policy is as follows. In many BDD/ADD applications the hit rate is not very sensitive to the size of the cache: it is primarily a function of the problem instance at hand. If a large hit rate is observed, chances are that with a large cache the results of large problems (those that would take longer to solve) will survive in the cache, without being overwritten, long enough to cause valuable cache hits. Notice that when a large problem is solved more than once, so are its recursively generated subproblems. If the hit rate is low, the probability of large problems being solved more than once is low.

The other observation about the cache sizing policy is that there is little point in keeping a cache which is much larger than the unique table. Every time the unique table ``fills up," garbage collection is invoked and the cache is cleared of all dead entries. A cache that is much larger than the unique table is therefore less than fully utilized.

Local Caches

  

Sometimes it may be necessary or convenient to use a local cache. A local cache can be lossless (no results are ever overwritten), or it may store objects for which canonical representations are not available. One important fact to keep in mind when using a local cache is that local caches are not cleared during garbage collection or before reordering. Therefore, it is necessary to increment the reference count of all nodes pointed to by a local cache, unless their reference counts are guaranteed positive in some other way. (One such way is by including all partial results in the global result.) Before disposing of the local cache, all elements stored in it must be passed to Cudd_RecursiveDeref.

As a consequence of the fact that all results in a local cache are referenced, it is generally convenient to also store in the local cache the results of trivial problems, which are not usually stored in the global cache. Otherwise, after a recursive call, it is difficult to tell whether the result is in the cache, and therefore referenced, or not in the cache, and therefore not referenced.
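
The discipline just described might look like the following sketch, which uses a deliberately simple fixed-size array as the local cache. The LocalCacheSketch type and its helpers are hypothetical; cuddRef (the internal referencing macro) and Cudd_RecursiveDeref are the actual facilities named above.

    /* Sketch of the reference-counting discipline for a local cache.
     * The cache structure and helpers are hypothetical. */
    #include "cuddInt.h"

    #define LOCAL_SLOTS 256

    typedef struct LocalCacheSketch {
        DdNode *key[LOCAL_SLOTS];   /* operand (NULL marks an empty slot) */
        DdNode *res[LOCAL_SLOTS];   /* referenced result for that operand */
    } LocalCacheSketch;

    static void localInsert(DdManager *dd, LocalCacheSketch *lc,
                            unsigned int slot, DdNode *key, DdNode *res)
    {
        cuddRef(res);               /* keep the result alive across GC and reordering */
        if (lc->key[slot] != NULL)
            Cudd_RecursiveDeref(dd, lc->res[slot]);  /* release an overwritten result */
        lc->key[slot] = key;
        lc->res[slot] = res;
    }

    static void localQuit(DdManager *dd, LocalCacheSketch *lc)
    {
        unsigned int i;
        for (i = 0; i < LOCAL_SLOTS; i++) {
            if (lc->key[i] != NULL)
                Cudd_RecursiveDeref(dd, lc->res[i]); /* release before disposal */
        }
    }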

An alternative approach to referencing the results in the local caches is to install hook functions (see Section 3.16) to be executed before garbage collection.
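
As a sketch of this alternative, one might register a pre-garbage-collection hook with Cudd_AddHook and have it simply empty the local cache, so that the cache never holds pointers across a collection; referencing the cached results from the hook is another possible policy. The hook prototype and the CUDD_PRE_GC_HOOK constant shown follow the hook interface of recent CUDD versions, but check cudd.h (and Section 3.16) for the exact declarations; clearMyLocalCache is a hypothetical helper.

    /* Sketch: flush a (hypothetical) local cache before each garbage
     * collection instead of referencing its entries. */
    #include "cudd.h"

    static void clearMyLocalCache(void);   /* hypothetical: empties the local cache */

    static int flushLocalCacheHook(DdManager *dd, const char *where, void *arg)
    {
        (void) dd; (void) where; (void) arg;
        clearMyLocalCache();
        return 1;                          /* hooks report success with 1 */
    }

    /* Installed, for instance, right after the local cache is created:
     *     Cudd_AddHook(dd, flushLocalCacheHook, CUDD_PRE_GC_HOOK);
     */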


