Class: LLM::Provider (Abstract)

Inherits:
Object
Includes:
Transport::HTTP::Execution
Defined in:
lib/llm/provider.rb,
lib/llm/provider/transport/http.rb,
lib/llm/provider/transport/http/interruptible.rb

Overview

This class is abstract.

The Provider class represents an abstract class for LLM (Language Model) providers.

Direct Known Subclasses

Anthropic, Google, Ollama, OpenAI

Defined Under Namespace

Modules: Transport

Instance Method Summary

Constructor Details

#initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • key (String, nil)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection

  • persistent (Boolean) (defaults to: false)

    Whether to use a persistent connection. Requires the net-http-persistent gem.



# File 'lib/llm/provider.rb', line 28

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  @headers = {"User-Agent" => "llm.rb v#{LLM::VERSION}"}
  @transport = Transport::HTTP.new(host:, port:, timeout:, ssl:, persistent:)
  @monitor = Monitor.new
end
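The base URI above is derived from the ssl, host, and port arguments. A minimal standalone sketch of that derivation (plain Ruby, no gem dependencies; build_base_uri is a hypothetical helper, not part of llm.rb):

```ruby
require "uri"

# Build a base URI the same way the constructor does: the scheme
# follows the ssl flag, and host/port are interpolated directly.
def build_base_uri(host:, port: 443, ssl: true)
  URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
end

uri = build_base_uri(host: "api.openai.com")
uri.scheme # => "https"
uri.port   # => 443
```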

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 174

def assistant_role
  raise NotImplementedError
end
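This is the standard Ruby abstract-method pattern: the base class raises NotImplementedError, and each concrete provider overrides the method. A self-contained sketch (AbstractProvider and ToyProvider are hypothetical stand-ins for LLM::Provider and a subclass):

```ruby
# Base class declares the interface by raising until overridden.
class AbstractProvider
  def assistant_role
    raise NotImplementedError
  end
end

# A hypothetical concrete provider supplying its own role name.
class ToyProvider < AbstractProvider
  def assistant_role = "assistant"
end
```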

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 138

def audio
  raise NotImplementedError
end

#chat(prompt, params = {}) ⇒ LLM::Context

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Context)



# File 'lib/llm/provider.rb', line 101

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).talk(prompt, role:)
end
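#chat pulls :role out of params before handing the remainder to a context object, so the role applies to the first message rather than the whole conversation. A self-contained sketch of that delegation (ToyContext is a hypothetical stand-in for LLM::Context):

```ruby
# Stand-in for LLM::Context: records the params it was built with
# and the prompt/role it was asked to talk about.
ToyContext = Struct.new(:params, :prompt, :role) do
  def talk(prompt, role:)
    self.prompt = prompt
    self.role = role
    self
  end
end

# Mirrors LLM::Provider#chat: :role is removed from params so it is
# not carried along as a conversation-wide parameter.
def chat(prompt, params = {})
  role = params.delete(:role)
  ToyContext.new(params).talk(prompt, role:)
end

ctx = chat("hello", role: :user, model: "toy-1")
```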

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.messages[0].role}]", res.messages[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :role (Symbol)

    Defaults to the provider’s default role

  • :model (String)

    Defaults to the provider’s default model

  • :schema (#to_json, nil)

    Defaults to nil

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 92

def complete(prompt, params = {})
  raise NotImplementedError
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

    Returns the default model for chat completions

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 181

def default_model
  raise NotImplementedError
end

#developer_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 259

def developer_role
  :developer
end

#embed(input, model: nil, **params) ⇒ LLM::Response

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: nil)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 68

def embed(input, model: nil, **params)
  raise NotImplementedError
end
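Embedding vectors returned by #embed are typically compared with cosine similarity. A generic sketch of that comparison (not part of llm.rb's API):

```ruby
# Cosine similarity between two equal-length embedding vectors:
# dot(a, b) / (|a| * |b|), ranging from -1.0 to 1.0.
def cosine_similarity(a, b)
  dot   = a.zip(b).sum { |x, y| x * y }
  mag_a = Math.sqrt(a.sum { |x| x * x })
  mag_b = Math.sqrt(b.sum { |x| x * x })
  dot / (mag_a * mag_b)
end

cosine_similarity([1.0, 0.0], [1.0, 0.0]) # => 1.0 (identical direction)
cosine_similarity([1.0, 0.0], [0.0, 1.0]) # => 0.0 (orthogonal)
```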

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 145

def files
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Google::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Google::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 131

def images
  raise NotImplementedError
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 44

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @transport=#{transport.inspect} @tracer=#{tracer.inspect}>"
end
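Redacting secrets in #inspect is a general pattern worth copying: override the method so keys never leak into logs or interactive sessions. A minimal sketch (ToyClient is hypothetical):

```ruby
# Override #inspect so the secret never appears in its output,
# mirroring the approach LLM::Provider takes for @key.
class ToyClient
  def initialize(key:)
    @key = key
  end

  def inspect
    "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED]>"
  end
end
```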

#interrupt!(owner) ⇒ nil Also known as: cancel!

Interrupt the active request, if any.

Parameters:

  • owner (Fiber)

Returns:

  • (nil)


# File 'lib/llm/provider.rb', line 319

def interrupt!(owner)
  transport.interrupt!(owner)
end

#models ⇒ LLM::OpenAI::Models

Returns an interface to the models API

Returns:

  • (LLM::OpenAI::Models)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 152

def models
  raise NotImplementedError
end

#moderations ⇒ LLM::OpenAI::Moderations

Returns an interface to the moderations API

Returns:

  • (LLM::OpenAI::Moderations)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 159

def moderations
  raise NotImplementedError
end

#name ⇒ Symbol

Returns the provider’s name

Returns:

  • (Symbol)

    Returns the provider’s name

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 53

def name
  raise NotImplementedError
end

#persist! ⇒ LLM::Provider Also known as: persistent

This method configures a provider to use a persistent connection pool via the optional dependency Net::HTTP::Persistent (github.com/drbrain/net-http-persistent)

Examples:

llm = LLM.openai(key: ENV["KEY"]).persistent
# do something with 'llm'

Returns:

  • (LLM::Provider)



# File 'lib/llm/provider.rb', line 309

def persist!
  transport.persist!
  self
end
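#persist! returns self so it can be chained at construction time, as in the example above. The fluent pattern, sketched standalone (ToyConnection is hypothetical):

```ruby
# Fluent configuration: mutate internal state, then return self
# so the call can be chained (as in LLM.openai(...).persistent).
class ToyConnection
  attr_reader :persistent

  def persist!
    @persistent = true
    self
  end
end

client = ToyConnection.new.persist!
```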

#respond(prompt, params = {}) ⇒ LLM::Context

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Context)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 112

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).respond(prompt, role:)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth per turn, maintain state server-side, and produce faster responses.

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 124

def responses
  raise NotImplementedError
end

#schema ⇒ LLM::Schema

Returns an object that can generate a JSON schema

Returns:

  • (LLM::Schema)



# File 'lib/llm/provider.rb', line 188

def schema
  LLM::Schema.new
end

#server_tool(name, options = {}) ⇒ LLM::ServerTool

Note:

OpenAI, Anthropic, and Gemini provide platform tools for capabilities such as web search.

Returns a tool provided by a provider.

Examples:

llm   = LLM.openai(key: ENV["KEY"])
tools = [llm.server_tool(:web_search)]
res   = llm.responses.create("Summarize today's news", tools:)
print res.output_text, "\n"

Parameters:

  • name (String, Symbol)

    The name of the tool

  • options (Hash) (defaults to: {})

    Configuration options for the tool

Returns:

  • (LLM::ServerTool)



# File 'lib/llm/provider.rb', line 231

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end

#server_tools ⇒ Hash{String => LLM::ServerTool}

Note:

This method might be outdated, and the LLM::Provider#server_tool method can be used if a tool is not found here.

Returns all known tools provided by a provider.

Returns:

  • (Hash{String => LLM::ServerTool})



# File 'lib/llm/provider.rb', line 214

def server_tools
  {}
end

#streamable?(stream) ⇒ Boolean

Parameters:

  • stream (Object)

    The object to check for stream support

Returns:

  • (Boolean)


# File 'lib/llm/provider.rb', line 327

def streamable?(stream)
  LLM::Stream === stream || stream.respond_to?(:<<)
end
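The second half of the check duck-types: anything that accepts chunks via << (an IO, a StringIO, an Array) can act as a stream sink. A standalone sketch of that half (sinkable? is a hypothetical name; the real method also accepts LLM::Stream):

```ruby
require "stringio"

# Duck-typed check: an object can receive streamed chunks
# if it responds to <<.
def sinkable?(obj)
  obj.respond_to?(:<<)
end

sinkable?(StringIO.new) # => true
sinkable?([])           # => true
sinkable?(Object.new)   # => false
```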

#system_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 253

def system_role
  :system
end

#tool_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 265

def tool_role
  :tool
end

#tracer ⇒ LLM::Tracer

Returns a fiber-local tracer

Returns:

  • (LLM::Tracer)



# File 'lib/llm/provider.rb', line 272

def tracer
  weakmap[self] || LLM::Tracer::Null.new(self)
end

#tracer=(tracer) ⇒ void

This method returns an undefined value.

Set a fiber-local tracer

Examples:

llm = LLM.openai(key: ENV["KEY"])
Thread.new do
  llm.tracer = LLM::Tracer::Logger.new(llm, path: "/path/to/log/1.txt")
end
Thread.new do
  llm.tracer = LLM::Tracer::Logger.new(llm, path: "/path/to/log/2.txt")
end
# ...

Parameters:

  • tracer (LLM::Tracer, nil)

    The tracer to set, or nil to unset it



# File 'lib/llm/provider.rb', line 290

def tracer=(tracer)
  if tracer.nil?
    if weakmap.respond_to?(:delete)
      weakmap.delete(self)
    else
      weakmap[self] = nil
    end
  else
    weakmap[self] = tracer
  end
end
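The nil branch above guards against ObjectSpace::WeakMap builds that lack #delete (added in Ruby 3.3); on older Rubies the entry is overwritten with nil instead of being removed. A runnable sketch of that fallback, assuming the weakmap is an ObjectSpace::WeakMap:

```ruby
# ObjectSpace::WeakMap holds per-object state without preventing
# the key from being garbage collected. WeakMap#delete only exists
# on Ruby >= 3.3, hence the respond_to? guard.
weakmap = ObjectSpace::WeakMap.new
owner   = Object.new
weakmap[owner] = "tracer"

if weakmap.respond_to?(:delete)
  weakmap.delete(owner)
else
  weakmap[owner] = nil # fallback: overwrite instead of removing
end

weakmap[owner] # => nil either way
```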

#user_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 247

def user_role
  :user
end

#vector_stores ⇒ LLM::OpenAI::VectorStore

Returns an interface to the vector stores API

Returns:

  • (LLM::OpenAI::VectorStore)

    Returns an interface to the vector stores API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 166

def vector_stores
  raise NotImplementedError
end

#web_search(query:) ⇒ LLM::Response

Provides a web search capability

Parameters:

  • query (String)

    The search query

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 241

def web_search(query:)
  raise NotImplementedError
end

#with(headers:) ⇒ LLM::Provider

Add one or more headers to all requests

Examples:

llm = LLM.openai(key: ENV["KEY"])
llm.with(headers: {"OpenAI-Organization" => ENV["ORG"]})
llm.with(headers: {"OpenAI-Project" => ENV["PROJECT"]})

Parameters:

  • headers (Hash<String,String>)

    One or more headers

Returns:

  • (LLM::Provider)



# File 'lib/llm/provider.rb', line 202

def with(headers:)
  lock do
    tap { @headers.merge!(headers) }
  end
end
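#with merges headers under a lock and returns self via tap, so calls chain as in the example above. Sketched standalone, assuming the lock is backed by a Monitor like the one set in the constructor (ToyHTTP is hypothetical):

```ruby
require "monitor"

# Thread-safe, chainable header merging: the Monitor serializes
# concurrent writers, and tap returns the receiver for chaining.
class ToyHTTP
  attr_reader :headers

  def initialize
    @headers = {"User-Agent" => "toy"}
    @monitor = Monitor.new
  end

  def with(headers:)
    @monitor.synchronize do
      tap { @headers.merge!(headers) }
    end
  end
end

http = ToyHTTP.new.with(headers: {"X-Org" => "acme"})
```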