Add the invalidate_all method to all caches #11

Merged Mar 2, 2021 (17 commits)
Changes from all commits
3 changes: 2 additions & 1 deletion .vscode/settings.json
@@ -10,7 +10,7 @@
"Moka",
"Ristretto",
"Tatsuya",
"unsync",
"Upsert",
"actix",
"ahash",
"benmanes",
@@ -27,6 +27,7 @@
"semver",
"structs",
"toolchain",
"unsync",
"usize"
]
}
49 changes: 33 additions & 16 deletions CHANGELOG.md
@@ -1,31 +1,48 @@
# Moka — Release Notes
# Moka — Change Log

## Unreleased
## Version 0.3.0

### Features
### Added

- Introduce an unsync cache.
- Add an unsync cache (`moka::unsync::Cache`) and its builder for single-threaded
applications. ([#9][gh-pull-0009])
- Add `invalidate_all` method to `sync`, `future` and `unsync` caches.
([#11][gh-pull-0011])

### Fixed

- Fix problems, including a segfault, caused by race conditions between the
sync/eviction thread and client writes. (Addressed as part of [#11][gh-pull-0011].)


## Version 0.2.0

### Features
### Added

- Introduce an asynchronous (futures aware) cache.
- Add an asynchronous, futures aware cache (`moka::future::Cache`) and its builder.
([#7][gh-pull-0007])


## Version 0.1.0

### Features
### Added

- Add thread-safe, highly concurrent in-memory cache implementations
(`moka::sync::{Cache, SegmentedCache}`) with the following features:
- Bounded by the maximum number of elements.
- Maintains good hit rate by using entry replacement algorithms inspired by
[Caffeine][caffeine-git]:
- Admission to a cache is controlled by the Least Frequently Used (LFU) policy.
- Eviction from a cache is controlled by the Least Recently Used (LRU) policy.
- Expiration policies:
- Time to live
- Time to idle

- Thread-safe, highly concurrent in-memory cache implementations.
- Caches are bounded by the maximum number of elements.
- Maintains good hit rate by using entry replacement algorithms inspired by
[Caffeine][caffeine-git]:
- Admission to a cache is controlled by the Least Frequently Used (LFU) policy.
- Eviction from a cache is controlled by the Least Recently Used (LRU) policy.
- Supports expiration policies:
- Time to live
- Time to idle

<!-- Links -->

[caffeine-git]: https://github.com/ben-manes/caffeine

[gh-pull-0011]: https://github.com/moka-rs/moka/pull/11/
[gh-pull-0009]: https://github.com/moka-rs/moka/pull/9/
[gh-pull-0007]: https://github.com/moka-rs/moka/pull/7/
4 changes: 2 additions & 2 deletions Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "moka"
version = "0.2.0"
version = "0.3.0"
authors = ["Tatsuya Kawano <tatsuya@hibaridb.org>"]
edition = "2018"

@@ -27,7 +27,7 @@ future = ["async-io"]
cht = "0.4"
crossbeam-channel = "0.5"
num_cpus = "1.13"
once_cell = "1.5"
once_cell = "1.7"
parking_lot = "0.11"
# v0.7.1 or newer should be used as v0.7.0 will not compile on non-x86_64 platforms.
# https://github.com/metrics-rs/quanta/pull/38
…
35 changes: 20 additions & 15 deletions README.md
@@ -11,9 +11,12 @@ Moka is a fast, concurrent cache library for Rust. Moka is inspired by
[Caffeine][caffeine-git] (Java) and [Ristretto][ristretto-git] (Go).

Moka provides cache implementations that support full concurrency of retrievals and
a high expected concurrency for updates. They perform a best-effort bounding of a
concurrent hash map using an entry replacement algorithm to determine which entries
to evict when the capacity is exceeded.
a high expected concurrency for updates. Moka also provides a non-thread-safe cache
implementation for single-threaded applications.

All caches perform a best-effort bounding of a hash map using an entry
replacement algorithm to determine which entries to evict when the capacity is
exceeded.

[gh-actions-badge]: https://github.com/moka-rs/moka/workflows/CI/badge.svg
[release-badge]: https://img.shields.io/crates/v/moka.svg
@@ -35,9 +38,10 @@ to evict when the capacity is exceeded.
## Features

- Thread-safe, highly concurrent in-memory cache implementations:
- Synchronous (blocking) caches that can be shared across OS threads.
- Blocking caches that can be shared across OS threads.
- An asynchronous (futures aware) cache that can be accessed inside and outside
of asynchronous contexts.
- A non-thread-safe, in-memory cache implementation for single-threaded applications.
- Caches are bounded by the maximum number of entries.
- Maintains good hit rate by using entry replacement algorithms inspired by
[Caffeine][caffeine-git]:
@@ -54,25 +58,25 @@ Add this to your `Cargo.toml`:

```toml
[dependencies]
moka = "0.2"
moka = "0.3"
```

To use the asynchronous cache, enable a crate feature called "future".

```toml
[dependencies]
moka = { version = "0.2", features = ["future"] }
moka = { version = "0.3", features = ["future"] }
```


## Example: Synchronous Cache

The synchronous (blocking) caches are defined in the `sync` module.
The thread-safe, blocking caches are defined in the `sync` module.

Cache entries are manually added using the `insert` method, and are stored in the
cache until either evicted or manually invalidated.

Here's an example that reads and updates a cache by using multiple threads:
Here's an example of reading and updating a cache by using multiple threads:

```rust
// Use the synchronous cache.
@@ -157,7 +161,7 @@ Here is a similar program to the previous example, but using asynchronous cache
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.2", features = ["future"] }
// moka = { version = "0.3", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros" ] }
// futures = "0.3"

@@ -220,12 +224,13 @@ async fn main() {

## Avoiding cloning the value at `get`

The return type of `get` method is `Option<V>` instead of `Option<&V>`, where `V` is
the value type. Every time `get` is called for an existing key, it creates a clone of
the stored value `V` and returns it. This is because the `Cache` allows concurrent
updates from threads so a value stored in the cache can be dropped or replaced at any
time by any other thread. `get` cannot return a reference `&V` as it is impossible to
guarantee the value outlives the reference.
For the concurrent caches (the `sync` and `future` caches), the return type of the
`get` method is `Option<V>` instead of `Option<&V>`, where `V` is the value type.
Every time `get` is called for an existing key, it creates a clone of the stored
value `V` and returns it. This is because the `Cache` allows concurrent updates from
threads, so a value stored in the cache can be dropped or replaced at any time by
any other thread. `get` cannot return a reference `&V` because it is impossible to
guarantee that the value outlives the reference.

If you want to store values that will be expensive to clone, wrap them in
`std::sync::Arc` before storing them in a cache. [`Arc`][rustdoc-std-arc] is a thread-safe
…
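The advice above can be illustrated with the standard library alone (no cache involved): cloning an `Arc` only bumps a reference count, so a `get` that returns a cloned `Arc<V>` stays cheap no matter how large `V` is. This is a minimal sketch; `cheap_clone` is a hypothetical helper for illustration, not part of moka's API.

```rust
use std::sync::Arc;

// Sketch (std only): cloning an `Arc` is cheap because it only bumps a
// reference count; the payload itself is never copied.
fn cheap_clone(value: &Arc<Vec<u8>>) -> Arc<Vec<u8>> {
    Arc::clone(value)
}

fn main() {
    // A value that would be expensive to clone directly (1 MB of data).
    let big: Arc<Vec<u8>> = Arc::new(vec![0u8; 1_000_000]);

    let handle = cheap_clone(&big);

    // Both handles point at the same allocation.
    assert_eq!(Arc::strong_count(&big), 2);
    assert_eq!(handle.len(), 1_000_000);
    println!("ok");
}
```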
8 changes: 8 additions & 0 deletions src/common.rs
@@ -11,3 +11,11 @@ pub(crate) trait AccessTime {
fn last_modified(&self) -> Option<Instant>;
fn set_last_modified(&mut self, timestamp: Instant);
}

pub(crate) fn u64_to_instant(ts: u64) -> Option<Instant> {
if ts == u64::MAX {
None
} else {
Some(unsafe { std::mem::transmute(ts) })
}
}
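The sentinel encoding above (where `u64::MAX` stands for `None`) can be sketched safely with plain integers. This sketch assumes, as the `transmute` suggests, that `Instant` here is a thin wrapper over a `u64` tick count; `u64_to_opt` and `opt_to_u64` are illustrative names, not functions from the crate.

```rust
// Sentinel encoding of Option<u64> into a bare u64, with u64::MAX
// reserved to mean "no timestamp". This mirrors the shape of
// `u64_to_instant` above, minus the unsafe transmute.
fn u64_to_opt(ts: u64) -> Option<u64> {
    if ts == u64::MAX {
        None
    } else {
        Some(ts)
    }
}

fn opt_to_u64(ts: Option<u64>) -> u64 {
    // The inverse direction: absent timestamps collapse to the sentinel.
    ts.unwrap_or(u64::MAX)
}

fn main() {
    assert_eq!(u64_to_opt(u64::MAX), None);
    assert_eq!(u64_to_opt(42), Some(42));
    assert_eq!(opt_to_u64(None), u64::MAX);
    // Round-trip holds for every value except the sentinel itself.
    assert_eq!(u64_to_opt(opt_to_u64(Some(7))), Some(7));
    println!("ok");
}
```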
4 changes: 4 additions & 0 deletions src/common/deque.rs
@@ -91,6 +91,10 @@ impl<T> Deque<T> {
}
}

pub(crate) fn region(&self) -> &CacheRegion {
&self.region
}

pub(crate) fn contains(&self, node: &DeqNode<T>) -> bool {
self.region == node.region && (node.prev.is_some() || self.is_head(node))
}
…
2 changes: 1 addition & 1 deletion src/future.rs
@@ -1,4 +1,4 @@
//! Provides thread-safe, asynchronous (futures aware) cache implementations.
//! Provides a thread-safe, asynchronous (futures aware) cache implementation.
//!
//! To use this module, enable a crate feature called "future".

…
10 changes: 2 additions & 8 deletions src/future/builder.rs
@@ -50,8 +50,8 @@
K: Eq + Hash,
V: Clone,
{
/// Construct a new `CacheBuilder` that will be used to build a `Cache` or
/// `SegmentedCache` holding up to `max_capacity` entries.
/// Construct a new `CacheBuilder` that will be used to build a `Cache` holding
/// up to `max_capacity` entries.
pub fn new(max_capacity: usize) -> Self {
Self {
max_capacity,
@@ -64,9 +64,6 @@
}

/// Builds a `Cache<K, V>`.
///
/// If you want to build a `SegmentedCache<K, V>`, call `segments` method before
/// calling this method.
pub fn build(self) -> Cache<K, V, RandomState> {
let build_hasher = RandomState::default();
Cache::with_everything(
@@ -79,9 +76,6 @@
}

/// Builds a `Cache<K, V, S>`, with the given `hasher`.
///
/// If you want to build a `SegmentedCache<K, V>`, call `segments` method before
/// calling this method.
pub fn build_with_hasher<S>(self, hasher: S) -> Cache<K, V, S>
where
S: BuildHasher + Clone,
…
46 changes: 44 additions & 2 deletions src/future/cache.rs
@@ -40,7 +40,7 @@ use std::{
/// [`blocking_invalidate`](#method.blocking_invalidate) methods. They will block
/// for a short time under heavy updates.
///
/// Here's an example that reads and updates a cache by using multiple asynchronous
/// Here's an example of reading and updating a cache by using multiple asynchronous
/// tasks with [Tokio][tokio-crate] runtime:
///
/// [tokio-crate]: https://crates.io/crates/tokio
@@ -49,7 +49,7 @@ use std::{
/// // Cargo.toml
/// //
/// // [dependencies]
/// // moka = { version = "0.2", features = ["future"] }
/// // moka = { version = "0.3", features = ["future"] }
/// // tokio = { version = "1", features = ["rt-multi-thread", "macros" ] }
/// // futures = "0.3"
///
@@ -327,6 +327,20 @@
}
}

/// Discards all cached values.
///
/// This method returns immediately and a background thread will evict all the
/// cached values inserted before the time when this method was called. The `get`
/// method is guaranteed not to return these invalidated values even if they have
/// not yet been evicted.
///
/// Like the `invalidate` method, this method does not clear the historic
/// popularity estimator of keys, so the cache retains the record of clients'
/// past attempts to retrieve each item.
pub fn invalidate_all(&self) {
self.base.invalidate_all();
}
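As a rough illustration of the guarantee described in the doc comment, here is a toy, single-threaded sketch of the idea: tag each entry with an insertion "epoch", let `invalidate_all` record the current epoch, and have `get` refuse anything older. This is an assumption-laden simplification (`ToyCache` is invented for this sketch); the real implementation is concurrent, timestamp-based, and evicts lazily in a background thread.

```rust
use std::collections::HashMap;

// Toy, single-threaded sketch of the invalidation guarantee: each entry
// remembers the "epoch" it was inserted in, and `invalidate_all` bumps the
// epoch so that `get` refuses older entries even before they are evicted.
struct ToyCache<K, V> {
    map: HashMap<K, (u64, V)>, // value tagged with its insertion epoch
    epoch: u64,                // entries at or below this epoch are invalid
    clock: u64,                // monotonically increasing insert counter
}

impl<K: std::hash::Hash + Eq, V: Clone> ToyCache<K, V> {
    fn new() -> Self {
        Self { map: HashMap::new(), epoch: 0, clock: 0 }
    }

    fn insert(&mut self, key: K, value: V) {
        self.clock += 1;
        self.map.insert(key, (self.clock, value));
    }

    fn get(&self, key: &K) -> Option<V> {
        match self.map.get(key) {
            // Only entries inserted after the last invalidate_all are visible.
            Some((at, v)) if *at > self.epoch => Some(v.clone()),
            _ => None,
        }
    }

    fn invalidate_all(&mut self) {
        // Everything inserted up to now becomes invisible immediately; a real
        // implementation would physically evict it later, in the background.
        self.epoch = self.clock;
    }
}

fn main() {
    let mut cache = ToyCache::new();
    cache.insert("a", "alice");
    cache.invalidate_all();
    cache.insert("d", "david");
    assert_eq!(cache.get(&"a"), None); // invalidated, though still stored
    assert_eq!(cache.get(&"d"), Some("david"));
    println!("ok");
}
```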

/// Returns the `max_capacity` of this cache.
pub fn max_capacity(&self) -> usize {
self.base.max_capacity()
@@ -577,6 +591,34 @@ mod tests {
assert!(cache.get(&20).is_some());
}

#[tokio::test]
async fn invalidate_all() {
let mut cache = Cache::new(100);
cache.reconfigure_for_testing();

// Make the cache exterior immutable.
let cache = cache;

cache.insert("a", "alice").await;
cache.insert("b", "bob").await;
cache.insert("c", "cindy").await;
assert_eq!(cache.get(&"a"), Some("alice"));
assert_eq!(cache.get(&"b"), Some("bob"));
assert_eq!(cache.get(&"c"), Some("cindy"));
cache.sync();

cache.invalidate_all();
cache.sync();

cache.insert("d", "david").await;
cache.sync();

assert!(cache.get(&"a").is_none());
assert!(cache.get(&"b").is_none());
assert!(cache.get(&"c").is_none());
assert_eq!(cache.get(&"d"), Some("david"));
}

#[tokio::test]
async fn time_to_live() {
let mut cache = CacheBuilder::new(100)
…