Class: ContainerEnv::Cache

Inherits: Object
Defined in: lib/container_env/cache.rb
Overview

:nodoc: (internal class; excluded from the public API documentation)
Defined Under Namespace
Classes: Entry
Instance Method Summary

- #clear ⇒ Object
- #fetch_or_store(key) ⇒ Object: Atomic check-compute-store (#3).
- #get(key) ⇒ Object
- #initialize(ttl:, max_size: Configuration::DEFAULT_MAX_SIZE) ⇒ Cache (constructor): A new instance of Cache.
- #set(key, value) ⇒ Object
Constructor Details
#initialize(ttl:, max_size: Configuration::DEFAULT_MAX_SIZE) ⇒ Cache
Returns a new instance of Cache.
```ruby
# File 'lib/container_env/cache.rb', line 8

def initialize(ttl:, max_size: Configuration::DEFAULT_MAX_SIZE)
  @ttl = ttl
  @max_size = max_size
  @store = {}
  @mutex = build_mutex # fiber-aware when Async::Mutex is available (#4)
end
```
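The mutex comes from a private `build_mutex` helper whose body is not shown in this documentation. A minimal sketch of how such a helper might behave, assuming the `async` gem's `Async::Mutex` as the fiber-aware option (the helper name is the only thing the source confirms; this body is illustrative):

```ruby
# Hypothetical sketch of the private build_mutex helper: prefer the async
# gem's fiber-aware Async::Mutex when that constant has been loaded,
# otherwise fall back to the stdlib's thread-level Mutex. The gem's real
# implementation may differ.
def build_mutex
  defined?(Async::Mutex) ? Async::Mutex.new : Mutex.new
end

# In a plain (non-async) process, the async gem is not loaded, so the
# fallback branch yields a stdlib Mutex.
mutex = build_mutex
```

Selecting the lock at construction time means the rest of the class can call `@mutex.synchronize` uniformly, regardless of whether it is running under a fiber scheduler.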
Instance Method Details
#clear ⇒ Object
```ruby
# File 'lib/container_env/cache.rb', line 43

def clear
  @mutex.synchronize { @store.clear }
end
```
#fetch_or_store(key) ⇒ Object
Atomic check-compute-store (#3).
Calls the block only on a cache miss, then re-checks under the write lock before storing. This narrows the stampede window: concurrent threads may still compute in parallel, but only the first writer wins; the rest discard their computed value and return what was stored. Nil values are never cached.
```ruby
# File 'lib/container_env/cache.rb', line 30

def fetch_or_store(key)
  hit = get(key)
  return hit unless hit.nil?

  computed = yield
  return nil if computed.nil?

  @mutex.synchronize do
    current = read_entry(key)
    current.nil? ? write_entry(key, computed) : current
  end
end
```
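The "first writer wins" behavior described above can be demonstrated with a minimal standalone cache (a sketch of the same double-checked pattern, not the gem's actual class; `TinyCache` and its inline store are invented for illustration):

```ruby
# Minimal sketch of the check-compute-store pattern: compute outside the
# lock (so concurrent threads may duplicate work), then re-check under the
# lock so only the first writer's value is ever stored.
class TinyCache
  def initialize
    @store = {}
    @mutex = Mutex.new
  end

  def fetch_or_store(key)
    hit = @mutex.synchronize { @store[key] }
    return hit unless hit.nil?

    computed = yield              # may run in several threads at once
    return nil if computed.nil?   # nil values are never cached

    @mutex.synchronize do
      current = @store[key]       # re-check: did another thread win?
      current.nil? ? (@store[key] = computed) : current
    end
  end
end

cache = TinyCache.new
calls = 0
threads = 10.times.map do
  Thread.new { cache.fetch_or_store(:k) { calls += 1; "value-#{calls}" } }
end
results = threads.map(&:value)
# Several threads may have computed a value, but every thread returns the
# one value that was stored first.
```

Note the trade-off this design accepts: it narrows the stampede window rather than eliminating it. Holding the lock across `yield` would serialize all computation; here redundant computations are possible but their results are simply discarded.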
#get(key) ⇒ Object
```ruby
# File 'lib/container_env/cache.rb', line 15

def get(key)
  @mutex.synchronize { read_entry(key) }
end
```
#set(key, value) ⇒ Object
```ruby
# File 'lib/container_env/cache.rb', line 19

def set(key, value)
  @mutex.synchronize { write_entry(key, value) }
  value
end
```