Module: LazyInit::InstanceMethods

Defined in:
lib/lazy_init/instance_methods.rb

Overview

Provides instance-level utility methods for lazy initialization patterns.

This module is automatically included when a class includes LazyInit (as opposed to extending it). It provides method-local memoization capabilities that are useful for expensive computations that need to be cached per method call location rather than per attribute.

The lazy_once method is particularly powerful as it provides automatic caching based on the caller location, making it easy to add memoization to any method without explicit cache key management.

Examples:

Basic lazy value creation

class DataProcessor
  include LazyInit

  def process_data
    expensive_parser = lazy { ExpensiveParser.new }
    expensive_parser.value.parse(data)
  end
end

Method-local memoization

class ApiClient
  include LazyInit

  def fetch_user_data(user_id)
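    # note: the cache key is this call site (file and line), not user_id,
    # so every user_id shares one cached result until the TTL expires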
    lazy_once(ttl: 5.minutes) do
      expensive_api_call(user_id)
    end
  end
end

Since:

  • 0.1.0

Instance Method Summary

Instance Method Details

#clear_lazy_once_values! ⇒ void

This method returns an undefined value.

Clear all cached lazy_once values for this instance.

This method is thread-safe and can be used to reset all method-local memoization caches, useful for testing or when you need to ensure fresh computation on subsequent calls.
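
A minimal usage sketch (assuming the ApiClient class from the overview above):

Examples:

Forcing recomputation after clearing the cache

client = ApiClient.new
client.fetch_user_data(1)        # first call runs the block and caches the result
client.clear_lazy_once_values!   # drops every lazy_once entry held by this instance
client.fetch_user_data(1)        # the block runs again because the cache was cleared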

Since:

  • 0.1.0



# File 'lib/lazy_init/instance_methods.rb', line 183

def clear_lazy_once_values!
  @lazy_once_mutex ||= Mutex.new
  @lazy_once_mutex.synchronize do
    @lazy_once_cache&.clear
  end
end

#lazy(&block) ⇒ LazyValue

Create a standalone lazy value container.

This is a simple factory method that creates a LazyValue instance. Useful when you need lazy initialization behavior but don’t want to define a formal lazy attribute on the class.

Examples:

Standalone lazy computation

def expensive_calculation
  result = lazy { perform_heavy_computation }
  result.value
end

Parameters:

  • block (Proc)

    the computation to execute lazily

Returns:

  • (LazyValue)

    the lazy value container wrapping the computation

Raises:

  • (ArgumentError)

    if no block is provided

Since:

  • 0.1.0



# File 'lib/lazy_init/instance_methods.rb', line 52

def lazy(&block)
  LazyValue.new(&block)
end

#lazy_once(max_entries: nil, ttl: nil, &block) ⇒ Object

Method-local memoization with automatic cache key generation.

Caches computation results based on the caller location (file and line number), providing automatic memoization without explicit key management. Each unique call site gets its own cache entry, with optional TTL expiration and LRU eviction. Because the key is derived from the call site rather than from method arguments, every invocation from the same line on a given instance shares a single cache entry.

This is particularly useful for expensive computations in methods that are called frequently but where the result can be cached for a period of time.

Examples:

Simple method memoization

def expensive_data_processing
  lazy_once do
    perform_heavy_computation
  end
end

With TTL and size limits

def fetch_external_data
  lazy_once(ttl: 30.seconds, max_entries: 100) do
    external_api.fetch_data
  end
end
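
Independent cache entries per call site (a minimal sketch; the helper methods are illustrative, not part of the library)

def dashboard_payload
  # each lazy_once call below is keyed by its own file and line number,
  # so the two blocks are cached independently on this instance
  users  = lazy_once { load_users }    # hypothetical helper
  orders = lazy_once { load_orders }   # hypothetical helper
  { users: users, orders: orders }
end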

Parameters:

  • max_entries (Integer, nil) (defaults to: nil)

    maximum cache entries before LRU eviction

  • ttl (Numeric, nil) (defaults to: nil)

    time-to-live in seconds for cache entries

  • block (Proc)

    the computation to cache

Returns:

  • (Object)

    the computed or cached value

Raises:

  • (ArgumentError)

    if no block is provided

Since:

  • 0.1.0



# File 'lib/lazy_init/instance_methods.rb', line 84

def lazy_once(max_entries: nil, ttl: nil, &block)
  raise ArgumentError, 'Block is required' unless block

  # apply global configuration defaults
  max_entries ||= LazyInit.configuration.max_lazy_once_entries
  ttl ||= LazyInit.configuration.lazy_once_ttl

  # generate cache key from caller location for automatic memoization
  call_location = caller_locations(1, 1).first
  location_key = "#{call_location.path}:#{call_location.lineno}"

  # ensure thread-safe cache initialization
  @lazy_once_mutex ||= Mutex.new

  # fast path: check cache outside mutex for performance
  if @lazy_once_cache&.key?(location_key)
    cached_entry = @lazy_once_cache[location_key]

    # handle TTL expiration if configured
    if ttl && Time.now - cached_entry[:created_at] > ttl
      @lazy_once_mutex.synchronize do
        # double-check TTL after acquiring lock
        if @lazy_once_cache&.key?(location_key)
          cached_entry = @lazy_once_cache[location_key]
          if Time.now - cached_entry[:created_at] > ttl
            @lazy_once_cache.delete(location_key)
          else
            # entry is still valid, update access tracking and return
            cached_entry[:access_count] += 1
            cached_entry[:last_accessed] = Time.now if ttl
            return cached_entry[:value]
          end
        end
      end
    else
      # cache hit: update access tracking in thread-safe manner
      @lazy_once_mutex.synchronize do
        if @lazy_once_cache&.key?(location_key)
          cached_entry = @lazy_once_cache[location_key]
          cached_entry[:access_count] += 1
          cached_entry[:last_accessed] = Time.now if ttl
          return cached_entry[:value]
        end
      end
    end
  end

  # slow path: compute value and cache result
  @lazy_once_mutex.synchronize do
    # double-check pattern: another thread might have computed while we waited
    if @lazy_once_cache&.key?(location_key)
      cached_entry = @lazy_once_cache[location_key]

      # verify TTL hasn't expired while we waited for the lock
      if ttl && Time.now - cached_entry[:created_at] > ttl
        @lazy_once_cache.delete(location_key)
      else
        cached_entry[:access_count] += 1
        cached_entry[:last_accessed] = Time.now if ttl
        return cached_entry[:value]
      end
    end

    # initialize cache storage if this is the first lazy_once call
    @lazy_once_cache ||= {}

    # perform LRU cleanup if cache is getting too large
    cleanup_lazy_once_cache_simple!(max_entries) if @lazy_once_cache.size >= max_entries

    # compute the value and store in cache with minimal metadata
    begin
      computed_value = block.call

      # create cache entry with minimal required metadata for performance
      cache_entry = {
        value: computed_value,
        access_count: 1
      }

      # add optional metadata only when features are actually used
      cache_entry[:created_at] = Time.now if ttl
      cache_entry[:last_accessed] = Time.now if ttl

      @lazy_once_cache[location_key] = cache_entry
      computed_value
    rescue StandardError => e
      # don't cache exceptions to keep implementation simple
      raise
    end
  end
end

#lazy_once_info ⇒ Hash<String, Hash>

Get detailed information about all cached lazy_once values.

Returns a hash mapping call locations to their cache metadata, useful for debugging and understanding cache behavior.

Examples:

Inspecting cache state

processor = DataProcessor.new
processor.some_cached_method
info = processor.lazy_once_info
puts info # => { "/path/to/file.rb:42" => { computed: true, access_count: 1, ... } }
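
Checking which entries carry TTL metadata (a sketch; per the implementation below, created_at and last_accessed are only populated when a ttl was passed to lazy_once)

info = processor.lazy_once_info
info.each do |location, entry|
  ttl_note = entry[:created_at] ? "created at #{entry[:created_at]}" : "no TTL metadata"
  puts "#{location}: #{entry[:access_count]} access(es), #{ttl_note}"
end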

Returns:

  • (Hash<String, Hash>)

    mapping of call locations to cache information

Since:

  • 0.1.0



# File 'lib/lazy_init/instance_methods.rb', line 202

def lazy_once_info
  @lazy_once_mutex ||= Mutex.new
  @lazy_once_mutex.synchronize do
    return {} unless @lazy_once_cache

    result = {}
    @lazy_once_cache.each do |location_key, entry|
      result[location_key] = {
        computed: true, # always true in this implementation since we don't cache exceptions
        exception: false, # we don't cache exceptions for simplicity
        created_at: entry[:created_at],
        access_count: entry[:access_count],
        last_accessed: entry[:last_accessed]
      }
    end
    result
  end
end

#lazy_once_statistics ⇒ Hash

Get statistical summary of lazy_once cache usage.

Provides aggregated information about cache performance including total entries, access patterns, and timing information.

Examples:

Monitoring cache performance

stats = processor.lazy_once_statistics
puts "Cache hit ratio: #{stats[:total_accesses] / stats[:total_entries].to_f}"
puts "Average accesses per entry: #{stats[:average_accesses]}"

Returns:

  • (Hash)

    statistical summary of cache usage

Since:

  • 0.1.0



# File 'lib/lazy_init/instance_methods.rb', line 232

def lazy_once_statistics
  @lazy_once_mutex ||= Mutex.new
  @lazy_once_mutex.synchronize do
    # return empty stats if no cache exists yet
    unless @lazy_once_cache
      return {
        total_entries: 0,
        computed_entries: 0,
        oldest_entry: nil,
        newest_entry: nil,
        total_accesses: 0,
        average_accesses: 0
      }
    end

    total_entries = @lazy_once_cache.size
    total_accesses = @lazy_once_cache.values.sum { |entry| entry[:access_count] }

    # extract creation timestamps for age analysis (Ruby 2.6 compatible)
    created_times = @lazy_once_cache.values.map { |entry| entry[:created_at] }.compact

    {
      total_entries: total_entries,
      computed_entries: total_entries, # all cached entries are successfully computed
      oldest_entry: created_times.min,
      newest_entry: created_times.max,
      total_accesses: total_accesses,
      average_accesses: total_entries > 0 ? total_accesses / total_entries.to_f : 0
    }
  end
end