VCacheImpl - notes on the implementation of the Vesta-2 cache server
- Limitations and Architectural Dependencies
- Cache Server Logs and Stable Variables
- Indexing of VPKFiles
- Implementing VToSCache
- Supporting deletions in the CacheLog
- Eviction of VPKFiles and VMultiPKFiles
- See Also
Limitations and Architectural Dependencies
This section describes the known limitations and dependencies of the VCache implementation.
- Representing Fingerprints
- The current implementation of fingerprints is heavily machine-dependent. In particular, the implementation assumes that the machine uses little-endian byte order and that the C++ compiler provides a 64-bit integer type. (The Vesta header file Basics.H defines the names Word and Bit64 for this type; the fingerprint implementation uses Word.)
- Marshalling Fingerprints
- The fingerprint marshalling code treats the fingerprint as a buffer of bytes, so the format of the fingerprint on the wire depends on the endian-ness of the machine sending the fingerprint. In particular, the current code will not work in a heterogeneous environment where the endian-ness of clients and servers differ. If the SRPC interface were extended to provide procedures for sending 64-bit integers in a canonical form down the wire, it would be trivial to marshal fingerprints in a canonical way as well. (A sketch of one possible canonical encoding appears at the end of this section.)
- Marshalling Timestamps
- The OtherTypes interface defines a "TS::T" to be equivalent to the Unix type "time_t", which happens to be defined as an "int". A "TS::T" value is marshalled as an "int". However, if the type of "time_t" were changed, this strategy would break.
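The canonical 64-bit encoding suggested above could look something like the following minimal sketch, which sends and receives a word in network (big-endian) byte order regardless of the host's endianness. The function names and byte-buffer interface here are illustrative assumptions, not part of the SRPC interface; a fingerprint would then be marshalled as a sequence of such words.

    #include <stdint.h>

    // Encode a 64-bit word into 'buf' in network (big-endian) byte order,
    // independent of the host's native endianness.
    void EncodeWord64(uint64_t w, unsigned char buf[8]) {
        for (int i = 7; i >= 0; i--) {
            buf[i] = (unsigned char)(w & 0xff);
            w >>= 8;
        }
    }

    // Decode a 64-bit word from network (big-endian) byte order.
    uint64_t DecodeWord64(const unsigned char buf[8]) {
        uint64_t w = 0;
        for (int i = 0; i < 8; i++) {
            w = (w << 8) | buf[i];
        }
        return w;
    }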
Cache Server Logs and Stable Variables
This section describes the logs and stable variable files used by the cache server. For each log, this includes an indication as to whether the log is checkpointed, and if so, which method performs the checkpoint and what the checkpoint contains.
Log Files
- CacheLog
- The CacheLog contains a record of the cache entries that have not been written to the stable cache (i.e., to some MultiPKFile). The CacheLog is checkpointed by the CacheS::CleanCacheLog procedure, which prunes the CacheLog by filtering out all cache entries whose pkEpoch values indicate that the cache entry has been written to the stable cache. The checkpoint file is written by a background thread while other threads can continue to add entries to the CacheLog. Entries in the CacheLog consist of the arguments sent by the client when adding the entry plus the cache index allocated for the entry and the pkEpoch of its PKFile when it was added.
- CILog
- The CILog contains the set of cache entry indices (CIs) in use. It is checkpointed by the CacheS::ChkptUsedCIs method, which is called by the CacheS_DoDeletions function after all of the MultiPKFiles that have weeded entries have been rewritten. Hence, a new CILog checkpoint is created in response to a successful run of the weeder. A CILog checkpoint file contains a bit vector of the CIs in use, while the log files contain Intvl::T records.
- EmptyPKLog
- The EmptyPKLog contains a sequence of "(pk, pkEpoch)" pairs recording the pkEpoch values of PKFile's that have become empty (or that once were empty). The EmptyPKLog is checkpointed by the CacheS::CleanCacheLog procedure, as part of cleaning the cache log. Once the cache log has been cleaned, values that were placed in the EmptyPKLog before the start of cleaning are no longer needed (but any values placed in the log during cleaning are still required). The cache server deletes old EmptyPKLog entries by beginning an EmptyPKLog checkpoint just before starting cleaning and committing it just after cleaning is successfully completed. Nothing is written to the checkpoint file.
- GraphLog
- The GraphLog is a representation of the cache entry DAG that is written by the cache server and read by the weeder. The GraphLog contains two kinds of entries: nodes and roots. The nodes correspond to cache entries, and the roots correspond to top-level evaluations (or checkpoints of partially-completed evaluations) that the weeder uses as roots of its mark phase.
The GraphLog is checkpointed by the CacheS::StartMark method, and the checkpoint is committed by the CacheS::CommitChkpt method. Both of these methods are invoked by the weeder, and it is the weeder that actually writes the checkpoint file. The checkpoint file is in the same format as the log files.
- WeededLog
- Abstractly, the WeededLog contains a lower bound on the number of MultiPKFiles in the MPKsToWeed file that have been successfully processed. The WeededLog is only meaningful if the Deleting stable variable is true, that is, while cache entries are being deleted in response to a successful run of the weeder. If the cache server crashes while performing such deletions, the WeededLog is recovered so that all of the deletions do not have to be repeated.
The log contains a sequence of integers, the sum of which denotes the number of MultiPKFiles that have been successfully processed. The WeededLog is checkpointed by the CacheS::ResetWeededMPKs method during the CacheS::EndMark method. Nothing is written into the checkpoint file; the purpose of performing the checkpoint is to simply start a new empty log. New entries are written to the log by the CacheS::AddToWeededMPKs method, which is called by the CacheS_DoDeletions function after it processes every tenth (or so) MPKFile.
Stable Variables
- Deleting
- The Deleting file records one of the Boolean values "true" or "false", indicating whether or not the cache server is processing a deletion request from the weeder. It is written by the CacheS::SetStableDeleting method, is set to "true" in the CacheS::EndMark method, and is set to "false" in the CacheS_DoDeletions function.
- HitFilter
- The HitFilter is a bit vector recording the cache entries on which hits should be blocked (so long as they are not leased). It is written by the CacheS::WriteHitFilter method, is set to non-empty values in the CacheS::SetHitFilter and CacheS::EndMark methods, and is set to the empty set in the CacheS_DoDeletions function.
- MPKsToWeed
- The MPKsToWeed file records the PKPrefix values of the MultiPKFiles that have cache entries to be deleted. The file is only relevant if the stable Deleting variable is true. The file is written by the CacheS::SetMPKsToWeed method during the CacheS::EndMark method. The WeededLog value is interpreted relative to this file as described above.
Indexing of VPKFiles
Since PKFiles are organized in the stable cache in groups called MultiPKFiles, we need a convenient way in the volatile cache to enumerate the set of VPKFiles stored together in a single MultiPKFile on disk.
The way we do this currently is by storing two tables in the volatile cache:
- cache: FP::Tag -> VPKFile*
- mpkTbl: PKPrefix::T -> VMultiPKFile
The cache server is responsible for maintaining the invariant that any VPKFile reachable through the cache table is also reachable through the mpkTbl table. With this organization, we can enumerate the VPKFile's for a given MultiPKFile simply by iterating over the appropriate VMultiPKFile in the mpkTbl.
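The following is a minimal sketch of this organization, using std::map and std::string stand-ins for the cache server's own Table, FP::Tag, and PKPrefix::T types (assumptions made purely for illustration). Looking up a PK goes straight through cache; enumerating one MultiPKFile iterates over its VMultiPKFile's subordinate table.

    #include <map>
    #include <string>
    #include <iostream>

    typedef std::string FPTag;      // stand-in for FP::Tag (a full PK)
    typedef std::string PKPrefixT;  // stand-in for PKPrefix::T

    struct VPKFile { /* volatile PKFile state */ };

    // A VMultiPKFile groups the VPKFiles whose PKs share one prefix.
    struct VMultiPKFile {
        std::map<FPTag, VPKFile*> tbl;  // subordinate VPKFiles
    };

    struct CacheS {
        std::map<FPTag, VPKFile*> cache;          // cache: FP::Tag -> VPKFile*
        std::map<PKPrefixT, VMultiPKFile> mpkTbl; // mpkTbl: PKPrefix::T -> VMultiPKFile
        // Invariant: every VPKFile reachable through 'cache' is also
        // reachable through some VMultiPKFile in 'mpkTbl'.

        // Enumerate the VPKFiles stored in the MultiPKFile named by 'pfx'.
        void PrintMPKFile(const PKPrefixT &pfx) const {
            std::map<PKPrefixT, VMultiPKFile>::const_iterator mi = mpkTbl.find(pfx);
            if (mi == mpkTbl.end()) return;
            std::map<FPTag, VPKFile*>::const_iterator vi;
            for (vi = mi->second.tbl.begin(); vi != mi->second.tbl.end(); ++vi) {
                std::cout << "VPKFile for PK " << vi->first << "\n";
            }
        }
    };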
Another approach would be to use a two-level hierarchy for storing VPKFiles in the volatile cache. In that design, the CacheS object would have a single table cache: PKPrefix -> VMultiPKFile*, and each VMultiPKFile would have a table tbl: FP::Tag -> VPKFile*.
With this organization, we can enumerate the VPKFiles stored in the volatile cache under a given PKPrefix (i.e., in the same MultiPKFile) by simply looking up the VMultiPKFile object stored under the prefix in CacheS::cache, and then iterating over the VPKFile's stored in the resulting VMultiPKFile. The advantage of this scheme is that it requires less space and does not require the implementation to maintain the invariant mentioned earlier. However, this design imposes two table lookups on every Lookup and AddEntry call.
Yet another design would be to have a single table cache: FP::Tag -> VPKFile* that maps PK's directly to the corresponding VPKFile in the volatile cache. There would also be a separate table in the CacheS object mpkTbl: Prefix -> VPKFile*. The mpkTbl would map a given prefix to a linked list of VPKFile's in the volatile cache that share the same prefix. In this implementation, each VPKFile would contain a pointer to the next VPKFile in the list. The difficulty with this implementation would be updating the linked list when a VPKFile was added or removed.
Implementing VToSCache
The VToSCache function in the cache server specification is responsible for writing the new entries of a VPKFile to stable storage. It is also passed a set of the CI's to delete. If any of the new cache entries in the VPKFile or any of the existing cache entries in the corresponding SPKFile in stable storage is in this set, the VToSCache function is also responsible for deleting them.
In the implementation, the VToSCache function is implemented by the (surprise!) "CacheS::VToSCache" method. However, this method does very little of the work. Since the disk files correspond to groups of PKFiles, called MultiPKFiles, it is the volatile MultiPKFiles that are flushed. Hence, VToSCache does its work by calling methods of the "VMultiPKFile" and "SMultiPKFile" classes.
The work of rewriting is decomposed into the following steps:
- The VMultiPKFile's "LockForWrite" method is called to ensure that the calling thread is the only one that is writing this MultiPKFile. It blocks if the MultiPKFile in question is already being flushed by a different thread. It also updates the VMultiPKFile's state to indicate that a flush is in progress, which protects it from eviction (see "Eviction of VPKFiles and VMultiPKFiles" below).
- "SMultiPKFile::PrepareForRewrite" is called to read the header of the stable MultiPKfile. The header information is used to ensure that a VPKFile object exists for each stable PKFile before we begin the flush to disk. This is necessary to protect against a VPKFile being created for one of the stable PKFiles by a client call during the flush, leaving that VPKFile out of sync with the updates to the stable PKFile. (See the discussion of removal of free variables and updating the commonNames and uncommonNames sets below.)
- The VMultiPKFile's "ChkptForWrite" method is called to snapshot any new entries to be written to the stable cache, which in turn calls "SMultiPKFile::ChkptForRewrite". This allows clients to continue to add new entries in parallel without changing the data being used by the ongoing flush to disk.
- The CILog and GraphLog are flushed. This ensures that the allocations of the cache indices used by any new entries, as well as their relationships to other entries, are committed before writing the entries themselves to the stable cache.
- The VMultiPKFile's "ToSCache" method is called. However, this is just a thin wrapper around the "SMultiPKFile::Rewrite" method which does most of the work.
The signature of "SMultiPKFile::Rewrite" is:
void SMultiPKFile::Rewrite(
    const PKPrefix &pfx,
    bool mpkFileExists,
    ifstream &ifs,
    SMultiPKFileRep::Header *hdr,
    const VPKFileMap &vpkFiles,
    ChkPtTbl &vpkChkptTbl,
    const BitVector *toDelete,
    EmptyPKLog *emptyPKLog,
    EntryState &state)
  throw (FS::Failure, FS::EndOfFile, VestaLog::Error);
- Its parameters are:
- pfx
- The PK prefix that identifies the MultiPKFile to rewrite.
- mpkFileExists
- Whether the stable MultiPKFile already exists on disk. (Taken from the return value of "SMultiPKFile::PrepareForRewrite".)
- ifs
- An input stream for the old stable MultiPKFile. (If mpkFileExists is false, this stream is not open and will not be used.)
- hdr
- The header information read from disk by "SMultiPKFile::PrepareForRewrite", or a new empty header if there is no old stable MultiPKFile.
- vpkFiles
- A table mapping all PK's with prefix pfx to their corresponding VPKFiles. If (pk, vpkfile) is an entry in this table, then pk has prefix pfx, and hence, corresponds to a PK in the MultiPKFile in question. The vpkfile is a pointer to a VPKFile that may or may not have some new entries that need to be flushed. Note that the entries in this table must be a superset of the PKs in the old stable MultiPKFile.
- vpkChkptTbl
- A table of checkpoints of the VPKFiles in vpkFiles, generated by calling "SMultiPKFile::ChkptForRewrite".
- toDelete
- A bit vector of cache entry indices (CI's) to delete, or NULL if there are no entries to delete.
- emptyPKLog
- The cache server's EmptyPKLog, used to record the pkEpoch values of PKFile's that become empty during the write (by having all of their entries deleted and no new entries added). (See "Supporting deletions in the CacheLog" below.)
- state
- Output parameter used to return the changes in the cache server statistics (number and size of new entries waiting to be flushed to disk) made by the flush.
If the set of common names changes, then the common and uncommon fingerprint tags for all of the entries have to be recomputed, as does the PKFile's ts value. This can happen for two reasons:
- If there are any newUncommon entries, then some names that used to be common no longer will be. Hence, this causes the set of common names to contract.
- If any entries for the PKFile in question are being deleted, then the set of common names can expand.
A natural way to represent the changes to the set of common names is by a pair of bit vectors named exCommonNames and exUncommonNames. The bits set in the former correspond to names that were common but are now uncommon, and bits set in the latter correspond to names that were uncommon but are now common. The intersection of these two bit vectors will always be empty. Given these bit vectors, a PKFile's commonNames are updated by code like this:
    commonNames ^= exCommonNames;
    commonNames ^= exUncommonNames;
The uncommonNames set of each of the PKFile's cache entries is then updated like this:
    uncommonNames -= exUncommonNames;
    uncommonNames |= exCommonNames;
There is another complication: if any entries are deleted, then the specification says that the set of all free variable names may contract. This could be a real problem, since the set of common names for the PKFile and the set of uncommon names for each entry are all represented by bit vectors interpreted relative to the list of all free variable names. If that list of names changes, then the bit vectors have to be packed accordingly. So, the natural algorithm is to make one pass to determine if any entries are being deleted, and if so, to test whether any names need to be removed from the set of all free variable names. In that case, a slower path can be taken to convert the bit vectors.
In the event that any names are removed from the set of all free variables, the relative order of the remaining names must stay the same. A natural way to represent the shuffle is by the bit vector of the surviving names, called a packMask in the implementation. The operation of compacting the names corresponds to the operation of packing the bits in this bit vector. There is an operation defined for the BitVector class that packs a bit vector according to such masks. Another way of representing the remapping is by a map from the old index of a name to its new (lower or equal) index. This map is called reMap in the implementation.
The commonNames bit vector in each PKFile and the uncommonNames bit vector in each cache entry must be packed according to the corresponding packMask. Similarly, the imap in the "extras" portion of each entry, which maps indices in the list of all free variables to indices in the fingerprint and timestamp arrays, must be modified to correspond to the new list of all variable names. This is easily done using the reMap table.
Since there are both packed and unpacked versions of various bit vectors and maps when a PKFile is updated, care must be taken to avoid comparing or combining packed and unpacked values. In general, value packing is delayed to the end of the major methods, so most operations are performed on unpacked values.
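To make the packing operation concrete, here is a minimal sketch of packing a bit vector according to a packMask and of deriving the corresponding reMap. It uses std::vector<bool> as a stand-in; the cache server's BitVector class has its own interface, so the names and signatures below are assumptions for illustration only. The same reMap would be used to renumber the indices stored in each entry's imap.

    #include <vector>
    #include <map>
    #include <cstddef>

    typedef std::vector<bool> Bits;  // stand-in for the BitVector class

    // Derive the old-index -> new-index map ("reMap") implied by a packMask:
    // names whose bits are set in the mask survive, keeping their relative order.
    std::map<size_t, size_t> BuildReMap(const Bits &packMask) {
        std::map<size_t, size_t> reMap;
        size_t next = 0;
        for (size_t i = 0; i < packMask.size(); i++) {
            if (packMask[i]) reMap[i] = next++;
        }
        return reMap;
    }

    // Pack a bit vector (e.g. a PKFile's commonNames or an entry's uncommonNames)
    // so that it is interpreted relative to the packed list of names.
    Bits Pack(const Bits &bv, const Bits &packMask) {
        std::map<size_t, size_t> reMap = BuildReMap(packMask);
        Bits packed(reMap.size(), false);
        for (size_t i = 0; i < bv.size() && i < packMask.size(); i++) {
            if (packMask[i] && bv[i]) packed[reMap[i]] = true;
        }
        return packed;
    }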
In the specification, the VToSCache work is divided into roughly two functions:
- NewPF
- Computes the entries in the stable PKFile. This includes adding any new entries from the volatile PKFile and deleting any entries in the hitFilter.
- NewVF
- Computes the entries in the volatile PKFile.
Roughly speaking, there is a function in the implementation corresponding to each of these functions in the specification. The correspondence between them is:
- SVCache.NewPF is implemented by SPKFile::Update
- SVCache.NewVF is implemented by VPKFile::Update
The SPKFile::Update method computes and returns the exCommonNames, exUncommonNames, packMask, and reMap values that result from updating a particular PKFile. These values are salted away by the SMultiPKFile::Rewrite method in an SMultiPKFileRep::HeaderEntry structure for that PKFile. The values are then passed into the VPKFile::Update method (along with the checkpoint) so the VPKFile can be updated to be consistent with the stable PKFile.
Supporting deletions in the CacheLog
Butler's original SVCache specification has the following bug: it does not correctly handle the case where some entries are created (and hence written to the CacheLog) and then subsequently deleted. If the cache server crashes at this point without the CacheLog having been cleaned in the meantime, the deleted entries will be read from the CacheLog and installed in the cache!
To avoid this problem (and to simplify the process of cleaning the CacheLog), we assign a pkEpoch field to each stable and volatile PKFile, and to each entry in the CacheLog. A new SPKFile or VPKFile has a pkEpoch of 0. Whenever a new entry is created, it is written to the CacheLog with the pkEpoch of its corresponding VPKFile. Whenever a VPKFile is "flushed" to the stable cache, the pkEpoch of the VPKFile is incremented before the SPKFile is written to disk. Hence, the pkEpoch values of all entries in a SPKFile are strictly less than the pkEpoch of the SPKFile itself, and at recovery (or in the CleanCacheLog procedure) any entry in the CacheLog with a pkEpoch strictly less than that of the corresponding SPKFile should be discarded.
Other entries in the CacheLog, and each VPKFile in memory, usually have a pkEpoch value equal to that of the corresponding SPKFile on disk. There are two exceptions:
- Flushing a VPKFile to the stable cache is not an atomic operation; more entries can be created and logged (or deleted) between the time when the cache server decides what version of the VPKFile will be made stable and the time when the new MultiPKFile containing the updated SPKFile is fully written and atomically replaces the old one on disk. The cache server atomically snapshots a particular state of the VPKFile to be written to disk by making a "checkpoint" of it, and the checkpoint determines what will actually be written to disk, regardless of later changes to the VPKFile. Because any entries not in the checkpoint will not be flushed at this time, the VPKFile's pkEpoch must be incremented atomically with making the checkpoint. Thus the new version of the SPKFile will have the correct pkEpoch, and any entries created after the checkpoint is made will also have this epoch, as they should. (Minor detail: the implementation actually writes the old value of pkEpoch to the checkpoint data structure, but correctly uses the incremented value when writing the SPKFile to disk.)
Note that after the VPKFile has been checkpointed but before the stable MultiPKFile has been written, any new entries that are written to the CacheLog for that PKFile will have a pkEpoch value greater than that of the SPKFile, not equal to it. If the cache server crashes before the MultiPKFile has been written, this difference will be visible at recovery time. (If the cache server crashes repeatedly, the difference can be more than 1.) During recovery, it is important to recover this larger pkEpoch value and restore it to the VPKFile data structure; we do so by noticing when a cacheLog entry being recovered has a larger pkEpoch value than we have seen so far for the corresponding VPKFile and updating the latter accordingly.
It would also be possible for the VPKFile and SPKFile's pkEpochs to differ by more than 1 if there were several flushes in progress at the same time, since starting each flush takes a checkpoint and increments the volatile pkEpoch by 1. However, this scenario is prevented because VMultiPKFile::ToSCache refuses to flush the same MultiPKFile twice concurrently. To be extra careful, though, the cache records a VPKFile's pkEpoch when checkpointing it, and when writing out the SPKFile from the checkpoint, it asserts that vpkfile->pkEpoch == chkpt->pkEpoch+1.
Finally, we introduce an extra complication to make epoch numbers grow more slowly. Because all VPKFiles in a MultiPKFile must be checkpointed when the MultiPKFile is flushed, all their pkEpochs must be incremented at this time as well. However, modifications are not necessarily made to all of the corresponding SPKFiles. To prevent the pkEpoch of an SPKFile from being incremented unnecessarily, the VPKFile pkEpoch is decremented back to its original value when the new SMultiPKFile is committed, whenever it is safe to do so, namely when the SPKFile is deemed not to have been modified and when no new entries were created (with the incremented pkEpoch value) since the VPKFile was checkpointed. (This decrementing and the code used to determine whether it is safe both depend for their validity on the fact that the same PKFile cannot be flushed twice concurrently, so there can be only one checkpoint in existence at a time.)
- Another case that causes difficulties occurs when all of the entries are deleted from a SPKFile. In this case, no record of the SPKFile is kept in the stable cache, so if the cache server were to crash, the SPKFile would be found to have a pkEpoch of 0 at recovery time. In that case, there could be entries in the CacheLog with arbitrarily large pkEpoch values, and they would be incorrectly kept. The problem is that there is no way to distinguish this case from the one in which we are starting a fresh, empty cache server with no history.
To handle this case, we keep a second log called the EmptyPKLog that records when a SPKFile has become empty. We also record the new pkEpoch for that (empty) SPKFile in the EmptyPKLog. At VToSCache time, we must commit these log records before swinging the file pointer for the SMultiPKFile (or deleting the SMultiPKFile entirely if all its SPKFile's are now empty). If the cache server crashes after committing an EmptyPKLog record, but before committing the new SMultiPKFile, it will have a record at recovery indicating that a PKFile has been deleted when in fact it has not. The recovery process must be able to handle this case.
At recovery, we first read the EmptyPKLog, building a table that maps PK's to the highest pkEpoch for that PK in the log. Once we have built the table, we scan the CacheLog. For each record in the CacheLog, we compare the pkEpoch of the entry to the pkEpoch of the SPKFile in the stable cache. If there is no such SPKFile, we look in the table of empty PKFiles and read the pkEpoch from there. If there is no such entry in the table, we use a pkEpoch of 0. Now, if the pkEpoch of the entry is less than that of the (possibly empty) PKFile, we ignore it. Otherwise, we add the entry to the volatile cache.
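The recovery-time decision for each CacheLog entry can be summarized by the following sketch. The types and table parameters are simplified stand-ins (assumptions for illustration), but the comparison rule is the one just described.

    #include <map>
    #include <string>

    typedef std::string PK;          // stand-in for a primary key
    typedef unsigned int PKEpoch;    // stand-in for a pkEpoch value

    // Decide whether a CacheLog entry with epoch 'entryEpoch' for primary key
    // 'pk' should be re-installed in the volatile cache during recovery.
    bool KeepCacheLogEntry(
        const PK &pk, PKEpoch entryEpoch,
        const std::map<PK, PKEpoch> &stablePKEpoch,  // epochs of SPKFiles on disk
        const std::map<PK, PKEpoch> &emptyPKTbl)     // max epochs read from the EmptyPKLog
    {
        PKEpoch pkfileEpoch = 0;                     // fresh cache: no history
        std::map<PK, PKEpoch>::const_iterator it = stablePKEpoch.find(pk);
        if (it != stablePKEpoch.end()) {
            pkfileEpoch = it->second;                // epoch of the stable PKFile
        } else {
            it = emptyPKTbl.find(pk);
            if (it != emptyPKTbl.end())
                pkfileEpoch = it->second;            // PKFile was deleted; use logged epoch
        }
        // An entry whose epoch is strictly less than the PKFile's epoch has
        // already been written to (or deleted from) the stable cache.
        return entryEpoch >= pkfileEpoch;
    }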
When a new VPKFile is created, its pkEpoch is usually initialized to that of the corresponding SPKFile. However, if the SPKFile has been deleted, it is important to initialize the VPKFile's pkEpoch to be that of the entry in the EmptyPKLog. (Otherwise, we would initialize it to 0. This would be wrong if there were old cache log entries for the VPKFile, since they might be erroneously recovered.) So as to avoid having to read the EmptyPKLog whenever a new VPKFile is created, we keep the table of empty PK files read at recovery time permanently, and consult it whenever a new VPKFile is created. The cache server maintains the invariant that the table consists of the maximum pkEpoch values of the entries in the EmptyPKLog. It does this by augmenting the table whenever a new EmptyPKLog entry is written.
The CleanCacheLog procedure re-writes both the CacheLog and the EmptyPKLog, removing unnecessary entries from both. Unlike the recovery process, it does not read the EmptyPKLog from disk because that log's entries are already stored in its in-memory table. Like the recovery process, however, it reads through the entries in the CacheLog. Any entry in the CacheLog with a pkEpoch less than that specified by the corresponding stable PKFile or entry in the empty PK table is discarded; all others are copied to the new version of the log.
Once the new CacheLog is written, there is no reason to save any entries in the EmptyPKLog (or its corresponding in-memory table) that were inserted before the start of cleaning: (1) A CacheLog entry that survived cleaning certainly does not need to be compared against the same old EmptyPKLog entries again. (2) A CacheLog entry added after the start of cleaning certainly cannot need to be discarded because its PKFile was deleted before the start of cleaning.
We delete old entries from the EmptyPKLog by starting a checkpoint of the log at the beginning of cleaning, writing nothing to it, and committing it at the end of cleaning. (We never write anything to an EmptyPKLog checkpoint.) Thus new entries can accumulate in the EmptyPKLog during cleaning, but entries from before cleaning disappear when the checkpoint commits. Handling the in-memory copy of the EmptyPKLog is slightly more complicated. The EmptyPKLog class keeps two separate tables during cleaning, one for old entries and one for new ones. Lookups examine both tables. At the end of cleaning, the old table is freed.
Because the CacheLog and EmptyPKLog are separate logs, we can't commit changes to both logs atomically, so we have to use careful ordering instead. It is harmless if we fail to delete some old entries from the EmptyPKLog that could have been deleted, so we use the following ordering: (1) Begin EmptyPKLog checkpoint, (2) Begin CacheLog checkpoint, (3) Clean CacheLog, (4) Commit CacheLog checkpoint, (5) Commit EmptyPKLog checkpoint.
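As a sketch of that ordering (the function names below are hypothetical stand-ins for the real checkpoint and log operations, not the cache server's actual interface): a crash at any point in this sequence leaves, at worst, some stale EmptyPKLog entries that could have been discarded, which is harmless.

    // Hypothetical stand-ins for the real checkpoint/log operations:
    void BeginEmptyPKLogCheckpoint() {}
    void BeginCacheLogCheckpoint() {}
    void CopySurvivingCacheLogEntries() {}  // drop entries with stale pkEpochs
    void CommitCacheLogCheckpoint() {}
    void CommitEmptyPKLogCheckpoint() {}

    void CleanCacheLogSketch() {
        BeginEmptyPKLogCheckpoint();    // (1) nothing is ever written to it
        BeginCacheLogCheckpoint();      // (2)
        CopySurvivingCacheLogEntries(); // (3) the cleaning pass described above
        CommitCacheLogCheckpoint();     // (4) the pruned CacheLog is now stable
        CommitEmptyPKLogCheckpoint();   // (5) pre-cleaning EmptyPKLog entries vanish
    }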
Eviction of VPKFiles and VMultiPKFiles
The original cache server design did not include any mechanism for removing VPKFiles and VMultiPKFiles from memory. Once these data structures were created, they existed for the lifetime of the cache server. The consequence of this was that a certain portion of the cache server's memory usage was monotonically increasing over time.
However, due to the multi-threaded nature of the cache server, removing these data structures from the cache is not straightforward. If another thread were still using a VPKFile that had been removed, data could be lost (e.g. a newly added cache entry). This section describes the methods used to avoid these problems and implement eviction of VPKFiles and VMultiPKFiles.
Each VPKFile has a boolean "evicted" member variable. This is set to false when the VPKFile is created, and becomes true when the VPKFile is evicted from the cache. When a thread locks a VPKFile, before using it in any way, it is responsible for first checking to make sure that it hasn't been evicted. This is necessary because there is an implicit race between the thread responsible for evicting VPKFiles and any thread using a VPKFile. This test is implemented as follows:
    // Lock the VPKFile we're about to use
    vpkfile->mu.lock();
    // Ensure that we don't have an evicted VPKFile
    while (vpkfile->Evicted()) {
        // Unlock the evicted one, as we won't be using it
        vpkfile->mu.unlock();
        // Get another VPKFile object for this PK (probably
        // creating it)
        CacheS.mu.lock();
        (void)(CacheS.GetVPKFile(pk, /*OUT*/ vpkfile));
        CacheS.mu.unlock();
        // Lock our new VPKFile object
        vpkfile->mu.lock();
    }
    // ...proceed with using the VPKFile...
Each VPKFile also has a freeing epoch ("freePKFileEpoch"). Each time a VPKFile is used by a client call (FreeVariables, Lookup, or AddEntry), this is updated to the current value of a global counter ("CacheS.freeMPKFileEpoch"). Only VPKFiles that have not been used for a configurable amount of time (i.e. have a sufficiently low freeing epoch) are potential candidates for eviction. This is done to avoid evicting VPKFiles that have had recent activity, as uses of a given PK have a degree of temporal locality.
The process of evicting VPKFiles and VMultiPKFiles is handled by the same background thread which periodically flushes new entries to disk (implemented by the "CacheS_DoFreeMPKFiles" function). After new entries have been flushed to disk and old entries have been purged from memory, it searches for VPKFiles to evict. A VPKFile is evicted if and only if:
- It has not been used recently by a client call (it has a sufficiently old freeing epoch).
- It has no new entries yet to be flushed to the stable cache.
- It currently has no old ("warm") entries paged in from the stable cache.
- It is not part of a VMultiPKFile that is currently being flushed to the stable cache or that is going to be flushed to the stable cache in the near future.
A VPKFile is evicted by calling its "Evict" method (which simply sets its "evicted" member variable to true), removing it from the table of VPKFiles ("CacheS.cache"), and removing it from its VMultiPKFile's table of subordinate VPKFiles. (See "Indexing of VPKFiles" above.)
After evicting VPKFiles, VMultiPKFiles may be evicted. A VMultiPKFile is evicted if and only if:
- It has zero subordinate VPKFiles. (Thus, for a VMultiPKFile to be evicted, all of its VPKFiles must first be evicted.)
- It is neither currently being flushed to the stable cache nor is going to be flushed to the stable cache in the near future.
A VMultiPKFile is evicted simply by removing it from the table of VMultiPKFiles ("CacheS.mpkTbl").
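The eviction tests above can be summarized with the following sketch. The structures and field names are simplified assumptions for illustration; they are not the real VPKFile and VMultiPKFile interfaces.

    // Simplified view of the state consulted by the eviction pass.
    struct VPKFileInfo {
        int  freePKFileEpoch;    // last epoch at which a client call used this VPKFile
        bool hasNewEntries;      // new entries not yet flushed to the stable cache
        bool hasWarmEntries;     // old entries currently paged in from the stable cache
    };
    struct VMultiPKFileInfo {
        bool flushPendingOrInProgress;  // being flushed now, or soon to be flushed
        int  vpkFileCount;              // number of subordinate VPKFiles
    };

    // A VPKFile may be evicted only if all four conditions hold.
    bool CanEvictVPKFile(const VPKFileInfo &vpk, const VMultiPKFileInfo &mpk,
                         int currentEpoch, int minIdleEpochs) {
        return (currentEpoch - vpk.freePKFileEpoch) >= minIdleEpochs  // not used recently
            && !vpk.hasNewEntries
            && !vpk.hasWarmEntries
            && !mpk.flushPendingOrInProgress;
    }

    // A VMultiPKFile may be evicted only after all of its VPKFiles have been.
    bool CanEvictVMultiPKFile(const VMultiPKFileInfo &mpk) {
        return mpk.vpkFileCount == 0 && !mpk.flushPendingOrInProgress;
    }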
Note that, unlike VPKFiles, VMultiPKFiles do not have a member variable indicating whether they have been evicted. This means that if a thread holds a pointer to a VMultiPKFile, the VMultiPKFile could be evicted out from under that thread. However, the only case where this can happen is during "CacheS::VToSCache", in which case the VMultiPKFile is protected against eviction. This is why, in the implementation of "CacheS::VToSCache", the cache lock ("CacheS.mu") is held from before the pointer to the VMultiPKFile is acquired until after the VMultiPKFile is locked for writing (by calling "VMultiPKFile::LockForWrite").
Also note that, as of this writing, VMultiPKFiles that are to be re-written to process pending deletions (i.e. are in MPKsToWeed) are not protected from eviction, nor are their subordinate VPKFiles. It might be nice to protect them from eviction, but currently MPKsToWeed is an unordered list of PK prefixes, which means that determining whether a given MultiPKFile is in the set would require a linear search. However, protecting these VMultiPKFiles and VPKFiles from eviction is not strictly necessary, as the other constraints on eviction are sufficient to maintain correct behavior of the cache.
Finally, the entire eviction design is predicated on the cache server's use of garbage collection. Pointers to the evicted VPKFiles and VMultiPKFiles are simply dropped. This allows other threads time to deal with evicted VPKFiles. Eventually, the garbage collector will reclaim the space used by the evicted objects.
This page was generated automatically by mtex software.
Original version by: Allan Heydon (caheydon@yahoo.com)
Updates and additions by: Ken Schalk (ken@xorian.net)
Last modified on Wed Apr 2 00:22:45 EST 2003 by ken@xorian.net
Modified on Wed Feb 23 17:27:30 PST 2000 by mann
Modified on Sat Aug 22 15:38:13 PDT 1998 by heydon