Class: LLM::Provider Abstract

Inherits:
Object
Includes:
Transport::Execution
Defined in:
lib/llm/provider.rb

Overview

This class is abstract.

The Provider class is an abstract base class for LLM (Large Language Model) providers.

Direct Known Subclasses

Anthropic, Bedrock, Google, Ollama, OpenAI

Instance Method Summary

Constructor Details

#initialize(key:, host:, port: 443, timeout: 60, ssl: true, base_path: "", persistent: false, transport: nil) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • key (String, nil)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection

  • base_path (String) (defaults to: "")

    Optional base path prefix for HTTP API routes.

  • persistent (Boolean) (defaults to: false)

    Whether to use a persistent connection. Requires the net-http-persistent gem.

  • transport (LLM::Transport, Class, nil) (defaults to: nil)

    Optional override with any Transport instance or subclass.



# File 'lib/llm/provider.rb', line 29

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, base_path: "", persistent: false, transport: nil)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @base_path = normalize_base_path(base_path)
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  @headers = {"User-Agent" => "llm.rb v#{LLM::VERSION}"}
  @transport = resolve_transport(transport, persistent:)
  @monitor = Monitor.new
end
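
As the body shows, the base URI is derived from the ssl:, host:, and port: options. A minimal standalone sketch of that derivation (base_uri_for is a hypothetical helper name, not part of llm.rb):

```ruby
require "uri"

# Hypothetical helper mirroring how the constructor builds @base_uri:
# the scheme comes from ssl:, then host and port are interpolated.
def base_uri_for(host:, port: 443, ssl: true)
  URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
end

base_uri_for(host: "api.openai.com").scheme                    # => "https"
base_uri_for(host: "localhost", port: 11434, ssl: false).to_s  # => "http://localhost:11434/"
```

Passing ssl: false with a custom port is typical for local providers, e.g. an Ollama server.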

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 176

def assistant_role
  raise NotImplementedError
end

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 140

def audio
  raise NotImplementedError
end

#chat(prompt, params = {}) ⇒ LLM::Context

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Context)


# File 'lib/llm/provider.rb', line 103

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).talk(prompt, role:)
end
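
The :role option is removed from params before the remaining options reach the context (the same pattern appears in #respond). A standalone illustration of that Hash#delete step, with hypothetical values:

```ruby
# :role is extracted destructively; everything else stays in params
params = {role: :system, model: "gpt-4o-mini"}  # hypothetical values
role   = params.delete(:role)

role    # => :system
params  # => {model: "gpt-4o-mini"}
```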

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.messages[0].role}]", res.messages[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :role (Symbol)

    Defaults to the provider’s default role

  • :model (String)

    Defaults to the provider’s default model

  • :schema (#to_json, nil)

    Defaults to nil

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 94

def complete(prompt, params = {})
  raise NotImplementedError
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

    Returns the default model for chat completions

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 183

def default_model
  raise NotImplementedError
end

#developer_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 261

def developer_role
  :developer
end

#embed(input, model: nil, **params) ⇒ LLM::Response

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: nil)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 70

def embed(input, model: nil, **params)
  raise NotImplementedError
end

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 147

def files
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Google::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Google::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 133

def images
  raise NotImplementedError
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 46

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @transport=#{transport.inspect} @tracer=#{tracer.inspect}>"
end

#interrupt!(owner) ⇒ nil Also known as: cancel!

Interrupt the active request, if any.

Parameters:

  • owner (Fiber)

Returns:

  • (nil)


# File 'lib/llm/provider.rb', line 322

def interrupt!(owner)
  transport.interrupt!(owner)
end

#models ⇒ LLM::OpenAI::Models

Returns an interface to the models API

Returns:

  • (LLM::OpenAI::Models)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 154

def models
  raise NotImplementedError
end

#moderations ⇒ LLM::OpenAI::Moderations

Returns an interface to the moderations API

Returns:

  • (LLM::OpenAI::Moderations)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 161

def moderations
  raise NotImplementedError
end

#name ⇒ Symbol

Returns the provider’s name

Returns:

  • (Symbol)

    Returns the provider’s name

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 55

def name
  raise NotImplementedError
end

#request_owner ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or changed in the future.

Returns the current request owner used by the transport.

Returns:

  • (Object)


# File 'lib/llm/provider.rb', line 331

def request_owner
  transport.request_owner
end

#respond(prompt, params = {}) ⇒ LLM::Context

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Context)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 114

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).respond(prompt, role:)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 126

def responses
  raise NotImplementedError
end

#schema ⇒ LLM::Schema

Returns an object that can generate a JSON schema

Returns:

  • (LLM::Schema)


# File 'lib/llm/provider.rb', line 190

def schema
  LLM::Schema.new
end

#server_tool(name, options = {}) ⇒ LLM::ServerTool

Note:

OpenAI, Anthropic, and Gemini provide platform tools for tasks such as web search.

Returns a tool provided by a provider.

Examples:

llm   = LLM.openai(key: ENV["KEY"])
tools = [llm.server_tool(:web_search)]
res   = llm.responses.create("Summarize today's news", tools:)
print res.output_text, "\n"

Parameters:

  • name (String, Symbol)

    The name of the tool

  • options (Hash) (defaults to: {})

    Configuration options for the tool

Returns:

  • (LLM::ServerTool)


# File 'lib/llm/provider.rb', line 233

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end

#server_tools ⇒ {String => LLM::ServerTool}

Note:

This list might be outdated; LLM::Provider#server_tool can be used if a tool is not found here.

Returns all known tools provided by a provider.

Returns:

  • ({String => LLM::ServerTool})


# File 'lib/llm/provider.rb', line 216

def server_tools
  {}
end

#streamable?(stream) ⇒ Boolean

Parameters:

  • stream (Object)

    The object to check for stream-like behavior

Returns:

  • (Boolean)


# File 'lib/llm/provider.rb', line 338

def streamable?(stream)
  LLM::Stream === stream || stream.respond_to?(:<<)
end
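
The check is duck-typed: besides LLM::Stream, any object that responds to #<< (an Array, a StringIO, a queue) counts as a stream sink. A standalone sketch of the duck-typed half of the check (sink? is a hypothetical name; the LLM::Stream case is omitted):

```ruby
require "stringio"

# Duck-typed check: anything that accepts #<< can receive stream chunks
def sink?(stream)
  stream.respond_to?(:<<)
end

sink?([])            # => true
sink?(StringIO.new)  # => true
sink?(nil)           # => false
```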

#system_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 255

def system_role
  :system
end

#tool_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 267

def tool_role
  :tool
end

#tracer ⇒ LLM::Tracer

Returns the current scoped tracer override or provider default tracer

Returns:

  • (LLM::Tracer)

    Returns the current scoped tracer override or provider default tracer



# File 'lib/llm/provider.rb', line 274

def tracer
  weakmap[self] || @tracer || LLM::Tracer::Null.new(self)
end

#tracer=(tracer) ⇒ void

This method returns an undefined value.

Set the provider’s default tracer. This tracer is shared by the provider instance and becomes the fallback whenever no scoped override is active.

Examples:

llm = LLM.openai(key: ENV["KEY"])
llm.tracer = LLM::Tracer::Logger.new(llm, path: "/path/to/log.txt")

Parameters:

  • tracer (LLM::Tracer)

    The tracer to set as the provider-wide default


# File 'lib/llm/provider.rb', line 288

def tracer=(tracer)
  @tracer = tracer
end

#user_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 249

def user_role
  :user
end

#vector_stores ⇒ LLM::OpenAI::VectorStore

Returns an interface to the vector stores API

Returns:

  • (LLM::OpenAI::VectorStore)

    Returns an interface to the vector stores API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 168

def vector_stores
  raise NotImplementedError
end

#web_search(query:) ⇒ LLM::Response

Provides a web search capability

Parameters:

  • query (String)

    The search query

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 243

def web_search(query:)
  raise NotImplementedError
end

#with(headers:) ⇒ LLM::Provider

Add one or more headers to all requests

Examples:

llm = LLM.openai(key: ENV["KEY"])
llm.with(headers: {"OpenAI-Organization" => ENV["ORG"]})
llm.with(headers: {"OpenAI-Project" => ENV["PROJECT"]})

Parameters:

  • headers (Hash<String,String>)

    One or more headers

Returns:

  • (LLM::Provider)


# File 'lib/llm/provider.rb', line 204

def with(headers:)
  lock do
    tap { @headers.merge!(headers) }
  end
end
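
Here lock runs the merge under the provider's monitor (the @monitor set up in the constructor), and tap returns self so calls can chain. A self-contained sketch of the same pattern with Ruby's MonitorMixin (the HeaderBag class and its default header are illustrative, not llm.rb internals):

```ruby
require "monitor"

# Minimal stand-in for the provider's header handling
class HeaderBag
  include MonitorMixin
  attr_reader :headers

  def initialize
    super                                  # sets up the monitor
    @headers = {"User-Agent" => "llm.rb"}
  end

  # Merge under the lock and return self, so calls chain like #with
  def with(headers:)
    synchronize { tap { @headers.merge!(headers) } }
  end
end

bag = HeaderBag.new
bag.with(headers: {"X-Org" => "acme"}).with(headers: {"X-Project" => "demo"})
bag.headers  # => {"User-Agent"=>"llm.rb", "X-Org"=>"acme", "X-Project"=>"demo"}
```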

#with_tracer(tracer) { ... } ⇒ Object

Override the tracer for the current fiber while the block runs. This is useful when you want per-request or per-turn tracing without replacing the provider’s default tracer.

Examples:

llm.with_tracer(LLM::Tracer::Logger.new(llm, io: $stdout)) do
  llm.complete("hello", model: "gpt-5.4-mini")
end

Parameters:

  • tracer (LLM::Tracer)

    The tracer to use while the block runs

Yields:

  • The block to run with the scoped tracer

Returns:

  • (Object)

    Returns the value of the block



# File 'lib/llm/provider.rb', line 303

def with_tracer(tracer)
  had_override = weakmap.key?(self)
  previous = weakmap[self]
  weakmap[self] = tracer
  yield
ensure
  if had_override
    weakmap[self] = previous
  elsif weakmap.respond_to?(:delete)
    weakmap.delete(self)
  else
    weakmap[self] = nil
  end
end
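
The method saves any existing override, installs the new one, then restores the prior state in ensure, so nested overrides work and nothing leaks past the block. The same save/restore pattern, sketched with a plain Hash standing in for the provider's weakmap (with_override and OVERRIDES are hypothetical names):

```ruby
# A plain Hash stands in for the provider's scoped weakmap
OVERRIDES = {}

def with_override(key, value)
  had_override = OVERRIDES.key?(key)
  previous     = OVERRIDES[key]
  OVERRIDES[key] = value
  yield
ensure
  # Restore the prior override, or drop the entry entirely
  if had_override
    OVERRIDES[key] = previous
  else
    OVERRIDES.delete(key)
  end
end

with_override(:tracer, :outer) do
  with_override(:tracer, :inner) {}  # nested override is restored on exit
  OVERRIDES[:tracer]                 # => :outer
end
OVERRIDES.key?(:tracer)              # => false
```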