Class: Langfuse::CacheWarmer

Inherits: Object

Defined in: lib/langfuse/cache_warmer.rb

Overview

Cache warming utility for pre-loading prompts into the cache.

Useful for deployment scenarios where you want to warm the cache before serving traffic, preventing cold-start API calls.

Examples:

Warm cache with specific prompts

warmer = Langfuse::CacheWarmer.new
results = warmer.warm(['greeting', 'conversation', 'rag-pipeline'])
puts "Cached #{results[:success].size} prompts"

Warm cache with error handling

warmer = Langfuse::CacheWarmer.new(client: my_client)
results = warmer.warm(['greeting', 'conversation'])

results[:failed].each do |failure|
  logger.warn "Failed to cache #{failure[:name]}: #{failure[:error]}"
end

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(client: nil) ⇒ CacheWarmer

Initialize a new cache warmer

Parameters:

  • client (Client, nil) (defaults to: nil)

    Optional Langfuse client (defaults to global client)



# File 'lib/langfuse/cache_warmer.rb', line 28

def initialize(client: nil)
  @client = client || Langfuse.client
end

Instance Attribute Details

#client ⇒ Object (readonly)

Returns the value of attribute client.



# File 'lib/langfuse/cache_warmer.rb', line 23

def client
  @client
end

Instance Method Details

#cache_enabled? ⇒ Boolean

Check if cache warming is enabled

Returns false if caching is disabled (cache_ttl = 0)

Returns:

  • (Boolean)


# File 'lib/langfuse/cache_warmer.rb', line 149

def cache_enabled?
  cache = client.api_client.cache
  return false if cache.nil?

  cache.ttl&.positive? || false
end
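The ttl check above can be traced in isolation. The stub below is hypothetical (the real cache object comes from client.api_client.cache) and only exists to show how each configuration is classified:

```ruby
# A stub standing in for client.api_client.cache, used to show how
# the ttl check in #cache_enabled? behaves for each configuration.
StubCache = Struct.new(:ttl)

def cache_enabled_for?(cache)
  return false if cache.nil?

  cache.ttl&.positive? || false
end

puts cache_enabled_for?(nil)                  # no cache configured => false
puts cache_enabled_for?(StubCache.new(0))     # cache_ttl = 0 => false
puts cache_enabled_for?(StubCache.new(nil))   # nil ttl also counts as disabled
puts cache_enabled_for?(StubCache.new(3600))  # positive ttl => true
```

Guarding a deployment hook with this check avoids pointless API calls when caching is turned off.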

#cache_stats ⇒ Hash?

Get cache statistics (if supported by backend)

Returns:

  • (Hash, nil)

    Cache stats or nil if not supported



# File 'lib/langfuse/cache_warmer.rb', line 159

def cache_stats
  cache = client.api_client.cache
  return nil unless cache

  stats = {}
  stats[:backend] = cache.class.name.split("::").last
  stats[:ttl] = cache.ttl if cache.respond_to?(:ttl)
  stats[:size] = cache.size if cache.respond_to?(:size)
  stats[:max_size] = cache.max_size if cache.respond_to?(:max_size)
  stats
end
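The respond_to? probing above lets the method degrade gracefully across backends. A minimal sketch with a hypothetical in-memory backend that only exposes ttl and size:

```ruby
# Hypothetical backend exposing only a subset of the optional methods;
# keys the backend cannot answer are simply left out of the stats hash.
class TinyCache
  def ttl
    300
  end

  def size
    12
  end
end

cache = TinyCache.new
stats = { backend: cache.class.name.split("::").last }
stats[:ttl]      = cache.ttl      if cache.respond_to?(:ttl)
stats[:size]     = cache.size     if cache.respond_to?(:size)
stats[:max_size] = cache.max_size if cache.respond_to?(:max_size)

p stats # :max_size is absent because TinyCache does not define it
```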

#warm(prompt_names, versions: {}, labels: {}) ⇒ Hash

Warm the cache with specified prompts

Fetches each prompt and populates the cache. This operation is idempotent: it is safe to call multiple times.

Examples:

Basic warming

results = warmer.warm(['greeting', 'conversation'])
# => { success: ['greeting', 'conversation'], failed: [] }

With specific versions

results = warmer.warm(
  ['greeting', 'conversation'],
  versions: { 'greeting' => 2, 'conversation' => 1 }
)

With labels

results = warmer.warm(
  ['greeting', 'conversation'],
  labels: { 'greeting' => 'production' }
)

Parameters:

  • prompt_names (Array<String>)

    List of prompt names to cache

  • versions (Hash<String, Integer>, nil) (defaults to: {})

    Optional version numbers per prompt

  • labels (Hash<String, String>, nil) (defaults to: {})

    Optional labels per prompt

Returns:

  • (Hash)

    Results with :success and :failed arrays



# File 'lib/langfuse/cache_warmer.rb', line 57

def warm(prompt_names, versions: {}, labels: {})
  results = { success: [], failed: [] }

  prompt_names.each do |name|
    warm_single_prompt(name, results, versions, labels)
  end

  results
end

#warm!(prompt_names, versions: {}, labels: {}) ⇒ Hash

Warm the cache and raise on any failures

Same as #warm but raises an error if any prompts fail to cache. Useful when you want to abort deployment if cache warming fails.

Examples:

begin
  warmer.warm!(['greeting', 'conversation'])
rescue Langfuse::CacheWarmingError => e
  abort "Cache warming failed: #{e.message}"
end

Parameters:

  • prompt_names (Array<String>)

    List of prompt names to cache

  • versions (Hash<String, Integer>, nil) (defaults to: {})

    Optional version numbers per prompt

  • labels (Hash<String, String>, nil) (defaults to: {})

    Optional labels per prompt

Returns:

  • (Hash)

    Results with :success array

Raises:

  • (CacheWarmingError)

    If any prompts fail to cache

# File 'lib/langfuse/cache_warmer.rb', line 133

def warm!(prompt_names, versions: {}, labels: {})
  results = warm(prompt_names, versions: versions, labels: labels)

  if results[:failed].any?
    failed_names = results[:failed].map { |f| f[:name] }.join(", ")
    raise CacheWarmingError, "Failed to cache prompts: #{failed_names}"
  end

  results
end

#warm_all(default_label: "production", versions: {}, labels: {}) ⇒ Hash

Warm the cache with all prompts (auto-discovery)

Automatically discovers all prompts in your Langfuse project via the list_prompts API and warms the cache with all of them. By default, fetches prompts with the “production” label. Useful for deployment scenarios where you want to ensure all prompts are cached without manually specifying them.

Examples:

Auto-discover and warm all prompts with “production” label

results = warmer.warm_all
puts "Cached #{results[:success].size} prompts"

Warm with a different default label

results = warmer.warm_all(default_label: "staging")

Warm without any label (latest versions)

results = warmer.warm_all(default_label: nil)

With specific versions for some prompts

results = warmer.warm_all(versions: { 'greeting' => 2 })

Override label for specific prompts

results = warmer.warm_all(
  default_label: "production",
  labels: { 'greeting' => 'staging' }  # Use staging for this one
)

Parameters:

  • default_label (String, nil) (defaults to: "production")

    Label to use for all prompts (default: “production”)

  • versions (Hash<String, Integer>, nil) (defaults to: {})

    Optional version numbers per prompt

  • labels (Hash<String, String>, nil) (defaults to: {})

    Optional labels per specific prompts (overrides default_label)

Returns:

  • (Hash)

    Results with :success and :failed arrays



# File 'lib/langfuse/cache_warmer.rb', line 98

def warm_all(default_label: "production", versions: {}, labels: {})
  prompt_list = client.list_prompts
  prompt_names = prompt_list.map { |p| p["name"] }.uniq

  # Build labels hash: apply default_label to all prompts, then merge overrides
  # BUT: if a version is specified for a prompt, don't apply a label (version takes precedence)
  final_labels = {}
  if default_label
    prompt_names.each do |name|
      # Only apply default label if no version specified for this prompt
      final_labels[name] = default_label unless versions[name]
    end
  end
  final_labels.merge!(labels) # Specific label overrides win

  warm(prompt_names, versions: versions, labels: final_labels)
end
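The label-merging rules above (a pinned version suppresses the default label, and per-prompt label overrides win last) can be traced with plain hashes and sample data:

```ruby
# Mirrors the final_labels construction in #warm_all with sample inputs.
prompt_names  = %w[greeting conversation rag-pipeline]
default_label = "production"
versions      = { "greeting" => 2 }             # pin greeting to version 2
labels        = { "conversation" => "staging" } # override for one prompt

final_labels = {}
if default_label
  prompt_names.each do |name|
    # a pinned version takes precedence, so no default label is applied
    final_labels[name] = default_label unless versions[name]
  end
end
final_labels.merge!(labels) # specific label overrides win

p final_labels
# greeting is absent (version-pinned), conversation uses the override,
# rag-pipeline falls back to the default label
```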