Merge pull request #22 from moka-rs/switch-to-moka-cht
Switch to moka-cht v0.5
tatsuya6502 authored Aug 3, 2021
2 parents 2ac69dd + c0a820e commit a737009
Showing 12 changed files with 80 additions and 53 deletions.
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,12 @@
# Moka — Change Log

## Version 0.5.1

### Changed

- Replace the cht v0.4 dependency with moka-cht v0.5. ([#22][gh-pull-0022])


## Version 0.5.0

### Added
@@ -74,6 +81,7 @@

[caffeine-git]: https://github.com/ben-manes/caffeine

[gh-pull-0022]: https://github.com/moka-rs/moka/pull/22/
[gh-pull-0020]: https://github.com/moka-rs/moka/pull/20/
[gh-pull-0019]: https://github.com/moka-rs/moka/pull/19/
[gh-pull-0016]: https://github.com/moka-rs/moka/pull/16/
4 changes: 2 additions & 2 deletions Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "moka"
version = "0.5.0"
version = "0.5.1"
authors = ["Tatsuya Kawano <tatsuya@hibaridb.org>"]
edition = "2018"

@@ -24,8 +24,8 @@ default = []
future = ["async-io", "async-lock"]

[dependencies]
cht = "0.4"
crossbeam-channel = "0.5"
moka-cht = "0.5"
num_cpus = "1.13"
once_cell = "1.7"
parking_lot = "0.11"
3 changes: 1 addition & 2 deletions README.md
@@ -330,8 +330,7 @@ change.
<!--
- socket2 0.4.0 requires 1.46.
- quanta requires 1.45.
- aHash 0.5 requires 1.43.
- cht requires 1.41.
- moka-cht requires 1.41.
-->


32 changes: 19 additions & 13 deletions src/future/cache.rs
@@ -27,14 +27,14 @@ use std::{
/// `Cache` supports full concurrency of retrievals and a high expected concurrency
/// for updates. It can be accessed inside and outside of asynchronous contexts.
///
/// `Cache` utilizes a lock-free concurrent hash table `cht::SegmentedHashMap` from
/// the [cht][cht-crate] crate for the central key-value storage. `Cache` performs a
/// best-effort bounding of the map using an entry replacement algorithm to determine
/// which entries to evict when the capacity is exceeded.
/// `Cache` utilizes a lock-free concurrent hash table `SegmentedHashMap` from the
/// [moka-cht][moka-cht-crate] crate for the central key-value storage. `Cache`
/// performs a best-effort bounding of the map using an entry replacement algorithm
/// to determine which entries to evict when the capacity is exceeded.
///
/// To use this cache, enable a crate feature called "future".
///
/// [cht-crate]: https://crates.io/crates/cht
/// [moka-cht-crate]: https://crates.io/crates/moka-cht
///
/// # Examples
///
@@ -171,15 +171,13 @@ use std::{
/// # Hashing Algorithm
///
/// By default, `Cache` uses a hashing algorithm selected to provide resistance
/// against HashDoS attacks.
/// against HashDoS attacks. It will be the same one used by
/// `std::collections::HashMap`, which is currently SipHash 1-3.
///
/// The default hashing algorithm is the one used by `std::collections::HashMap`,
/// which is currently SipHash 1-3.
///
/// While its performance is very competitive for medium sized keys, other hashing
/// algorithms will outperform it for small keys such as integers as well as large
/// keys such as long strings. However those algorithms will typically not protect
/// against attacks such as HashDoS.
/// While SipHash's performance is very competitive for medium sized keys, other
/// hashing algorithms will outperform it for small keys such as integers as well as
/// large keys such as long strings. However those algorithms will typically not
/// protect against attacks such as HashDoS.
///
/// The hashing algorithm can be replaced on a per-`Cache` basis using the
/// [`build_with_hasher`][build-with-hasher-method] method of the
@@ -294,6 +292,10 @@ where
/// key even if the method is concurrently called by many async tasks; only one
/// of the calls resolves its future, and other calls wait for that future to
/// complete.
#[allow(clippy::redundant_allocation)]
// https://rust-lang.github.io/rust-clippy/master/index.html#redundant_allocation
// `Arc<Box<dyn ..>>` in the return type creates an extra heap allocation.
// This will be addressed by Moka v0.6.0.
pub async fn get_or_try_insert_with<F>(
&self,
key: K,
@@ -484,6 +486,10 @@ where
}
}

#[allow(clippy::redundant_allocation)]
// https://rust-lang.github.io/rust-clippy/master/index.html#redundant_allocation
// `Arc<Box<dyn ..>>` in the return type creates an extra heap allocation.
// This will be addressed by Moka v0.6.0.
async fn get_or_try_insert_with_hash_and_fun<F>(
&self,
key: Arc<K>,
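The doc comment above states that concurrent `get_or_try_insert_with` calls for the same key resolve the init future only once. The sketch below shows that behavior from a caller's point of view; it is illustrative only and not part of this diff. It assumes moka 0.5 with the "future" feature enabled plus a Tokio runtime, the `fetch_config` helper is hypothetical, and the init future's `Result<_, Box<dyn Error + Send + Sync>>` shape follows the clippy notes above but should be treated as an assumption.

use std::{error::Error, sync::Arc};

use moka::future::Cache;

// Hypothetical fallible initializer, standing in for an HTTP call or DB query.
async fn fetch_config(key: &str) -> Result<String, Box<dyn Error + Send + Sync + 'static>> {
    Ok(format!("config for {}", key))
}

#[tokio::main]
async fn main() {
    // Requires the "future" crate feature shown in Cargo.toml above.
    let cache: Cache<String, String> = Cache::new(10_000);

    // Four tasks race on the same key; only one init future is resolved, and the
    // other calls wait for it and then read the cached value.
    let tasks: Vec<_> = (0..4)
        .map(|_| {
            let cache = cache.clone();
            tokio::spawn(async move {
                cache
                    .get_or_try_insert_with("app".to_string(), fetch_config("app"))
                    .await
            })
        })
        .collect();

    for task in tasks {
        let value: Result<String, Arc<Box<dyn Error + Send + Sync + 'static>>> =
            task.await.expect("task panicked");
        assert_eq!(value.unwrap(), "config for app");
    }
}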
8 changes: 6 additions & 2 deletions src/future/value_initializer.rs
@@ -8,14 +8,18 @@ use std::{

type Waiter<V> = Arc<RwLock<Option<Result<V, Arc<Box<dyn Error + Send + Sync + 'static>>>>>>;

#[allow(clippy::redundant_allocation)]
// https://rust-lang.github.io/rust-clippy/master/index.html#redundant_allocation
pub(crate) enum InitResult<V> {
Initialized(V),
ReadExisting(V),
// This `Arc<Box<dyn ..>>` creates an extra heap allocation. This will be
// addressed by Moka v0.6.0.
InitErr(Arc<Box<dyn Error + Send + Sync + 'static>>),
}

pub(crate) struct ValueInitializer<K, V, S> {
waiters: cht::HashMap<Arc<K>, Waiter<V>, S>,
waiters: moka_cht::SegmentedHashMap<Arc<K>, Waiter<V>, S>,
}

impl<K, V, S> ValueInitializer<K, V, S>
@@ -26,7 +30,7 @@ where
{
pub(crate) fn with_hasher(hasher: S) -> Self {
Self {
waiters: cht::HashMap::with_hasher(hasher),
waiters: moka_cht::SegmentedHashMap::with_num_segments_and_hasher(16, hasher),
}
}

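The `#[allow(clippy::redundant_allocation)]` attributes added throughout this commit all concern the same shape: `Arc<Box<dyn Error + ..>>` keeps two heap allocations (one for the `Box`, one for the `Arc`), whereas `Arc<dyn Error + ..>` would keep one. A small standalone illustration follows (std only, not part of the diff); the commit keeps the doubly allocated shape and silences the lint until Moka v0.6.0 can change the public type.

use std::{error::Error, fmt, sync::Arc};

#[derive(Debug)]
struct InitError;

impl fmt::Display for InitError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "initialization failed")
    }
}

impl Error for InitError {}

fn main() {
    // Two allocations: one for the Box, one for the Arc's shared block.
    // This is the shape clippy::redundant_allocation flags.
    let boxed: Box<dyn Error + Send + Sync + 'static> = Box::new(InitError);
    let double: Arc<Box<dyn Error + Send + Sync + 'static>> = Arc::new(boxed);

    // One allocation: the trait object lives directly behind the Arc.
    let single: Arc<dyn Error + Send + Sync + 'static> = Arc::new(InitError);

    println!("{} / {}", double, single);
}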
12 changes: 6 additions & 6 deletions src/lib.rs
@@ -6,19 +6,19 @@
//!
//! Moka provides in-memory concurrent cache implementations that support full
//! concurrency of retrievals and a high expected concurrency for updates. <!-- , and multiple ways to bound the cache. -->
//! They utilize a lock-free concurrent hash table `cht::SegmentedHashMap` from the
//! [cht][cht-crate] crate for the central key-value storage.
//! They utilize a lock-free concurrent hash table `SegmentedHashMap` from the
//! [moka-cht][moka-cht-crate] crate for the central key-value storage.
//!
//! Moka also provides an in-memory, not thread-safe cache implementation for single
//! thread applications.
//!
//! All cache implementations perform a best-effort bounding of the map using an entry
//! replacement algorithm to determine which entries to evict when the capacity is
//! exceeded.
//! All cache implementations perform a best-effort bounding of the map using an
//! entry replacement algorithm to determine which entries to evict when the capacity
//! is exceeded.
//!
//! [caffeine-git]: https://github.com/ben-manes/caffeine
//! [ristretto-git]: https://github.com/dgraph-io/ristretto
//! [cht-crate]: https://crates.io/crates/cht
//! [moka-cht-crate]: https://crates.io/crates/moka-cht
//!
//! # Features
//!
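As a point of reference for the crate-level claim above (full concurrency of retrievals and a high expected concurrency for updates), here is a minimal sketch of sharing the thread-safe sync cache across threads. It is illustrative only, not part of this diff, and assumes the moka 0.5 `sync::Cache` API.

use std::thread;

use moka::sync::Cache;

fn main() {
    // Clones are cheap handles onto the same moka-cht backed store.
    let cache: Cache<u32, String> = Cache::new(1_000);

    let handles: Vec<_> = (0..4u32)
        .map(|id| {
            let cache = cache.clone();
            thread::spawn(move || {
                cache.insert(id, format!("worker-{}", id));
                cache.get(&id)
            })
        })
        .collect();

    for handle in handles {
        assert!(handle.join().unwrap().is_some());
    }
}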
6 changes: 3 additions & 3 deletions src/sync/base_cache.rs
@@ -236,7 +236,7 @@ where
let mut op1 = None;
let mut op2 = None;

// Since the cache (moka_cht::SegmentedHashMap) employs an optimistic locking
// strategy, insert_with_or_modify() may find an insert or modify operation
// conflicting with other concurrent hash table operations. In that case, it
// has to retry the insertion or modification, so on_insert and/or on_modify
// has to retry the insertion or modification, so on_insert and/or on_modify
Expand Down Expand Up @@ -343,7 +343,7 @@ where
}
}

type CacheStore<K, V, S> = cht::SegmentedHashMap<Arc<K>, Arc<ValueEntry<K, V>>, S>;
type CacheStore<K, V, S> = moka_cht::SegmentedHashMap<Arc<K>, Arc<ValueEntry<K, V>>, S>;

type CacheEntry<K, V> = (Arc<K>, Arc<ValueEntry<K, V>>);

@@ -387,7 +387,7 @@
.map(|cap| cap + WRITE_LOG_SIZE * 4)
.unwrap_or_default();
let num_segments = 64;
let cache = cht::SegmentedHashMap::with_num_segments_capacity_and_hasher(
let cache = moka_cht::SegmentedHashMap::with_num_segments_capacity_and_hasher(
num_segments,
initial_capacity,
build_hasher.clone(),
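The comment above notes that, because moka-cht uses an optimistic locking strategy, the `on_insert` and `on_modify` closures passed to `insert_with_or_modify()` may run more than once under contention. A rough sketch of what that means for callers follows (not part of this commit): the constructor mirrors the one used above, but the exact `insert_with_or_modify` signature is an assumption based on the upstream cht API, and the closures are deliberately written so that re-running them is harmless.

use std::collections::hash_map::RandomState;

fn main() {
    // Same constructor style as the cache store above: segments, capacity, hasher.
    let map: moka_cht::SegmentedHashMap<&str, u64, RandomState> =
        moka_cht::SegmentedHashMap::with_num_segments_capacity_and_hasher(
            16,
            100,
            RandomState::default(),
        );

    // Both closures only derive a value from their inputs; if the optimistic update
    // is retried, re-invoking them produces the same result with no side effects.
    let _previous = map.insert_with_or_modify(
        "hits",
        || 1,                         // on_insert: initial value for a missing key
        |_key, current| *current + 1, // on_modify: new value derived from the old one
    );
}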
32 changes: 19 additions & 13 deletions src/sync/cache.rs
@@ -21,12 +21,12 @@ use std::{
/// `Cache` supports full concurrency of retrievals and a high expected concurrency
/// for updates.
///
/// `Cache` utilizes a lock-free concurrent hash table `cht::SegmentedHashMap` from
/// the [cht][cht-crate] crate for the central key-value storage. `Cache` performs a
/// best-effort bounding of the map using an entry replacement algorithm to determine
/// which entries to evict when the capacity is exceeded.
/// `Cache` utilizes a lock-free concurrent hash table `SegmentedHashMap` from the
/// [moka-cht][moka-cht-crate] crate for the central key-value storage. `Cache`
/// performs a best-effort bounding of the map using an entry replacement algorithm
/// to determine which entries to evict when the capacity is exceeded.
///
/// [cht-crate]: https://crates.io/crates/cht
/// [moka-cht-crate]: https://crates.io/crates/moka-cht
///
/// # Examples
///
@@ -143,15 +143,13 @@ use std::{
/// # Hashing Algorithm
///
/// By default, `Cache` uses a hashing algorithm selected to provide resistance
/// against HashDoS attacks.
/// against HashDoS attacks. It will be the same one used by
/// `std::collections::HashMap`, which is currently SipHash 1-3.
///
/// The default hashing algorithm is the one used by `std::collections::HashMap`,
/// which is currently SipHash 1-3.
///
/// While its performance is very competitive for medium sized keys, other hashing
/// algorithms will outperform it for small keys such as integers as well as large
/// keys such as long strings. However those algorithms will typically not protect
/// against attacks such as HashDoS.
/// While SipHash's performance is very competitive for medium sized keys, other
/// hashing algorithms will outperform it for small keys such as integers as well as
/// large keys such as long strings. However those algorithms will typically not
/// protect against attacks such as HashDoS.
///
/// The hashing algorithm can be replaced on a per-`Cache` basis using the
/// [`build_with_hasher`][build-with-hasher-method] method of the
@@ -295,6 +293,10 @@ where
/// key even if the method is concurrently called by many threads; only one of
/// the calls evaluates its function, and other calls wait for that function to
/// complete.
#[allow(clippy::redundant_allocation)]
// https://rust-lang.github.io/rust-clippy/master/index.html#redundant_allocation
// `Arc<Box<dyn ..>>` in the return type creates an extra heap allocation.
// This will be addressed by Moka v0.6.0.
pub fn get_or_try_insert_with<F>(
&self,
key: K,
@@ -308,6 +310,10 @@
self.get_or_try_insert_with_hash_and_fun(key, hash, init)
}

#[allow(clippy::redundant_allocation)]
// https://rust-lang.github.io/rust-clippy/master/index.html#redundant_allocation
// `Arc<Box<dyn ..>>` in the return type creates an extra heap allocation.
// This will be addressed by Moka v0.6.0.
pub(crate) fn get_or_try_insert_with_hash_and_fun<F>(
&self,
key: Arc<K>,
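The "Hashing Algorithm" section above says the default SipHash 1-3 hasher can be swapped on a per-`Cache` basis via the builder's `build_with_hasher` method. A minimal sketch follows, illustrative only and not part of this diff; it assumes the moka 0.5 `CacheBuilder` API, and `ahash` is an assumed extra dependency used purely as an example of a faster hasher that does not provide HashDoS resistance.

use moka::sync::CacheBuilder;

fn main() {
    // Any hasher implementing BuildHasher + Clone should work here; ahash is just
    // an example of trading HashDoS resistance for speed on small keys.
    let cache = CacheBuilder::new(10_000)
        .build_with_hasher(ahash::RandomState::default());

    cache.insert("user:42", "Alice");
    assert_eq!(cache.get(&"user:42"), Some("Alice"));
}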
2 changes: 0 additions & 2 deletions src/sync/invalidator.rs
@@ -73,8 +73,6 @@ impl<K, V> InvalidationResult<K, V> {
}

pub(crate) struct Invalidator<K, V, S> {
// TODO: Replace this RwLock<std::collections::HashMap<_, _>> with cht::HashMap
// once iterator is implemented. https://github.com/Gregory-Meyer/cht/issues/20
predicates: RwLock<HashMap<PredicateId, Predicate<K, V>>>,
is_empty: AtomicBool,
scan_context: Arc<ScanContext<K, V, S>>,
4 changes: 4 additions & 0 deletions src/sync/segment.rs
@@ -155,6 +155,10 @@ where
/// key even if the method is concurrently called by many threads; only one of
/// the calls evaluates its function, and other calls wait for that function to
/// complete.
#[allow(clippy::redundant_allocation)]
// https://rust-lang.github.io/rust-clippy/master/index.html#redundant_allocation
// `Arc<Box<dyn ..>>` in the return type creates an extra heap allocation.
// This will be addressed by Moka v0.6.0.
pub fn get_or_try_insert_with<F>(
&self,
key: K,
8 changes: 6 additions & 2 deletions src/sync/value_initializer.rs
@@ -7,14 +7,18 @@ use std::{

type Waiter<V> = Arc<RwLock<Option<Result<V, Arc<Box<dyn Error + Send + Sync + 'static>>>>>>;

#[allow(clippy::redundant_allocation)]
// https://rust-lang.github.io/rust-clippy/master/index.html#redundant_allocation
pub(crate) enum InitResult<V> {
Initialized(V),
ReadExisting(V),
// This `Arc<Box<dyn ..>>` creates an extra heap allocation. This will be
// addressed by Moka v0.6.0.
InitErr(Arc<Box<dyn Error + Send + Sync + 'static>>),
}

pub(crate) struct ValueInitializer<K, V, S> {
waiters: cht::HashMap<Arc<K>, Waiter<V>, S>,
waiters: moka_cht::SegmentedHashMap<Arc<K>, Waiter<V>, S>,
}

impl<K, V, S> ValueInitializer<K, V, S>
@@ -25,7 +29,7 @@ where
{
pub(crate) fn with_hasher(hasher: S) -> Self {
Self {
waiters: cht::HashMap::with_hasher(hasher),
waiters: moka_cht::SegmentedHashMap::with_num_segments_and_hasher(16, hasher),
}
}

14 changes: 6 additions & 8 deletions src/unsync/cache.rs
@@ -91,15 +91,13 @@ type CacheStore<K, V, S> = std::collections::HashMap<Rc<K>, ValueEntry<K, V>, S>
/// # Hashing Algorithm
///
/// By default, `Cache` uses a hashing algorithm selected to provide resistance
/// against HashDoS attacks.
/// against HashDoS attacks. It will be the same one used by
/// `std::collections::HashMap`, which is currently SipHash 1-3.
///
/// The default hashing algorithm is the one used by `std::collections::HashMap`,
/// which is currently SipHash 1-3.
///
/// While its performance is very competitive for medium sized keys, other hashing
/// algorithms will outperform it for small keys such as integers as well as large
/// keys such as long strings. However those algorithms will typically not protect
/// against attacks such as HashDoS.
/// While SipHash's performance is very competitive for medium sized keys, other
/// hashing algorithms will outperform it for small keys such as integers as well as
/// large keys such as long strings. However those algorithms will typically not
/// protect against attacks such as HashDoS.
///
/// The hashing algorithm can be replaced on a per-`Cache` basis using the
/// [`build_with_hasher`][build-with-hasher-method] method of the
