Remove clusters #171

Merged (2 commits) on May 1, 2024
42 changes: 3 additions & 39 deletions README.md
@@ -125,8 +125,9 @@ Solid Cache supports these options in addition to the standard `ActiveSupport::Cache::Store` options:
- `max_age` - the maximum age of entries in the cache (default: `2.weeks.to_i`). Can be set to `nil`, but this is not recommended unless using `max_entries` to limit the size of the cache.
- `max_entries` - the maximum number of entries allowed in the cache (default: `nil`, meaning no limit)
- `max_size` - the maximum size of the cache entries (default `nil`, meaning no limit)
- `cluster` - a Hash of options for the cache database cluster, e.g `{ shards: [:database1, :database2, :database3] }`
- `clusters` - an Array of Hashes for multiple cache clusters (ignored if `:cluster` is set)
- `cluster` - (deprecated) a Hash of options for the cache database cluster, e.g `{ shards: [:database1, :database2, :database3] }`
- `clusters` - (deprecated) an Array of Hashes for multiple cache clusters (ignored if `:cluster` is set)
- `shards` - an Array of databases.
- `active_record_instrumentation` - whether to instrument the cache's queries (default: `true`)
- `clear_with` - clear the cache with `:truncate` or `:delete` (default `truncate`, except for when `Rails.env.test?` then `delete`)
- `max_key_bytesize` - the maximum size of a normalized key in bytes (default `1024`)
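
For illustration, a minimal sketch of the replacement `shards` option passed straight to the store; the shard names are hypothetical databases assumed to be declared in `database.yml`:

```ruby
# Sketch only: :shards replaces the deprecated :cluster/:clusters options and
# takes a plain Array of database names (or nil to fall back to the configured
# shard keys).
SolidCache::Store.new(shards: [ :cache_shard1, :cache_shard2, :cache_shard3 ])
```

In an application these options would typically live under `store_options` in `config/solid_cache.yml`.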
@@ -220,43 +221,6 @@ production:
databases: [cache_shard1, cache_shard2, cache_shard3]
```

### Secondary cache clusters

You can add secondary cache clusters. Reads will only be sent to the primary cluster (i.e. the first one listed).

Writes will go to all clusters. The writes to the primary cluster are synchronous, but asynchronous to the secondary clusters.

To specify multiple clusters you can do:

```yaml
# config/solid_cache.yml
production:
databases: [cache_primary_shard1, cache_primary_shard2, cache_secondary_shard1, cache_secondary_shard2]
store_options:
clusters:
- shards: [cache_primary_shard1, cache_primary_shard2]
- shards: [cache_secondary_shard1, cache_secondary_shard2]
```

### Named shard destinations

By default, the node key used for sharding is the name of the database in `database.yml`.

It is possible to add names for the shards in the cluster config. This will allow you to shuffle or remove shards without breaking consistent hashing.

```yaml
production:
databases: [cache_primary_shard1, cache_primary_shard2, cache_secondary_shard1, cache_secondary_shard2]
store_options:
clusters:
- shards:
cache_primary_shard1: node1
cache_primary_shard2: node2
- shards:
cache_secondary_shard1: node3
cache_secondary_shard2: node4
```

### Enabling encryption

Add this to an initializer:
2 changes: 1 addition & 1 deletion Rakefile
@@ -23,7 +23,7 @@ def run_without_aborting(*tasks)
end

def configs
[ :default, :cluster, :cluster_inferred, :clusters, :clusters_named, :database, :no_database ]
[ :default, :connects_to, :database, :no_database, :shards ]
end

task :test do
18 changes: 0 additions & 18 deletions lib/solid_cache/cluster.rb

This file was deleted.

55 changes: 0 additions & 55 deletions lib/solid_cache/cluster/connections.rb

This file was deleted.

7 changes: 1 addition & 6 deletions lib/solid_cache/connections.rb
@@ -7,13 +7,8 @@ def self.from_config(options)
case options
when NilClass
names = SolidCache.configuration.shard_keys
nodes = names.to_h { |name| [ name, name ] }
when Array
names = options.map(&:to_sym)
nodes = names.to_h { |name| [ name, name ] }
when Hash
names = options.keys.map(&:to_sym)
nodes = options.to_h { |names, nodes| [ nodes.to_sym, names.to_sym ] }
end

if (unknown_shards = names - SolidCache.configuration.shard_keys).any?
@@ -23,7 +18,7 @@ def self.from_config(options)
if names.size == 1
Single.new(names.first)
else
Sharded.new(names, nodes)
Sharded.new(names)
end
else
Unmanaged.new
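
A rough sketch of the slimmed-down resolver's behavior after this change; the shard names are hypothetical and assumed to be among `SolidCache.configuration.shard_keys`:

```ruby
# Sketch: with the name => node Hash form removed, from_config deals in shard
# names only; a single name yields Single, several yield Sharded.
SolidCache::Connections.from_config([ :cache_shard1 ])
# => Connections::Single for :cache_shard1

SolidCache::Connections.from_config([ :cache_shard1, :cache_shard2 ])
# => Connections::Sharded over both names

SolidCache::Connections.from_config(nil)
# => nil falls back to SolidCache.configuration.shard_keys
```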
7 changes: 3 additions & 4 deletions lib/solid_cache/connections/sharded.rb
@@ -5,10 +5,9 @@ module Connections
class Sharded
attr_reader :names, :nodes, :consistent_hash

def initialize(names, nodes)
def initialize(names)
@names = names
@nodes = nodes
@consistent_hash = MaglevHash.new(@nodes.keys)
@consistent_hash = MaglevHash.new(names)
end

def with_each(&block)
@@ -35,7 +34,7 @@ def count

private
def shard_for(key)
nodes[consistent_hash.node(key)]
consistent_hash.node(key)
end
end
end
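
To make the simplification concrete, a small sketch of the consistent hash as it is now built: `MaglevHash` (assumed here to be `SolidCache::MaglevHash`) is seeded with the shard names themselves, so `shard_for(key)` no longer goes through a node lookup:

```ruby
# Sketch: keys hash directly to shard names; the mapping is stable per key
# as long as the list of names is unchanged.
hash = SolidCache::MaglevHash.new([ :cache_shard1, :cache_shard2, :cache_shard3 ])
hash.node("views/products/1")  # => one of the three shard names
hash.node("views/products/2")  # => possibly a different shard, also stable
```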
6 changes: 1 addition & 5 deletions lib/solid_cache/store.rb
@@ -2,7 +2,7 @@

module SolidCache
class Store < ActiveSupport::Cache::Store
include Api, Clusters, Entries, Failsafe
include Api, Connections, Entries, Execution, Expiry, Failsafe, Stats
prepend ActiveSupport::Cache::Strategy::LocalCache

def initialize(options = {})
@@ -16,9 +16,5 @@ def self.supports_cache_versioning?
def setup!
super
end

def stats
primary_cluster.stats
end
end
end
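
With the `primary_cluster` delegation removed, `stats` presumably comes from the newly included `Stats` module; a hedged usage sketch:

```ruby
# Sketch: assumes Rails.cache is a SolidCache::Store and that the Stats module
# still exposes a public #stats method, as the removed delegation did.
Rails.cache.stats
```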
83 changes: 0 additions & 83 deletions lib/solid_cache/store/clusters.rb

This file was deleted.

108 changes: 108 additions & 0 deletions lib/solid_cache/store/connections.rb
@@ -0,0 +1,108 @@
# frozen_string_literal: true

module SolidCache
class Store
module Connections
attr_reader :shard_options

def initialize(options = {})
super(options)
if options[:clusters].present?
if options[:clusters].size > 1
raise ArgumentError, "Multiple clusters are no longer supported"
else
ActiveSupport.deprecator.warn(":clusters is deprecated, use :shards instead.")
end
@shard_options = options.fetch(:clusters).first[:shards]
elsif options[:cluster].present?
ActiveSupport.deprecator.warn(":cluster is deprecated, use :shards instead.")
@shard_options = options.fetch(:cluster, {})[:shards]
else
@shard_options = options.fetch(:shards, nil)
end

if [ Array, NilClass ].none? { |klass| @shard_options.is_a? klass }
raise ArgumentError, "`shards` is a `#{@shard_options.class.name}`, it should be Array or nil"
end
end

def with_each_connection(async: false, &block)
return enum_for(:with_each_connection) unless block_given?

connections.with_each do
execute(async, &block)
end
end

def with_connection_for(key, async: false, &block)
connections.with_connection_for(key) do
execute(async, &block)
end
end

def with_connection(name, async: false, &block)
connections.with(name) do
execute(async, &block)
end
end

def group_by_connection(keys)
connections.assign(keys)
end

def connection_names
connections.names
end

def connections
@connections ||= SolidCache::Connections.from_config(@shard_options)
end

private
def setup!
connections
end

def reading_key(key, failsafe:, failsafe_returning: nil, &block)
failsafe(failsafe, returning: failsafe_returning) do
with_connection_for(key, &block)
end
end

def reading_keys(keys, failsafe:, failsafe_returning: nil)
group_by_connection(keys).map do |connection, keys|
failsafe(failsafe, returning: failsafe_returning) do
with_connection(connection) do
yield keys
end
end
end
end


def writing_key(key, failsafe:, failsafe_returning: nil, &block)
failsafe(failsafe, returning: failsafe_returning) do
with_connection_for(key, &block)
end
end

def writing_keys(entries, failsafe:, failsafe_returning: nil)
group_by_connection(entries).map do |connection, entries|
failsafe(failsafe, returning: failsafe_returning) do
with_connection(connection) do
yield entries
end
end
end
end

def writing_all(failsafe:, failsafe_returning: nil, &block)
connection_names.map do |connection|
failsafe(failsafe, returning: failsafe_returning) do
with_connection(connection, &block)
end
end.first
end
end
end
end
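
A short sketch of what the option handling above accepts during the deprecation window; the shard names are illustrative:

```ruby
# Deprecated: a single :cluster Hash still works but emits a deprecation
# warning; its :shards Array becomes @shard_options.
SolidCache::Store.new(cluster: { shards: [ :cache_shard1, :cache_shard2 ] })

# Deprecated: :clusters is accepted only with exactly one cluster; more than
# one now raises ArgumentError.
SolidCache::Store.new(clusters: [ { shards: [ :cache_shard1, :cache_shard2 ] } ])

# Preferred going forward: pass :shards directly as an Array (or leave it nil).
SolidCache::Store.new(shards: [ :cache_shard1, :cache_shard2 ])
```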