Class: LLM::Provider (Abstract)
- Inherits: Object
- Includes: Transport::HTTP::Execution
- Defined in:
  lib/llm/provider.rb,
  lib/llm/provider/transport/http.rb,
  lib/llm/provider/transport/http/interruptible.rb
Overview
The Provider class represents an abstract class for LLM (Language Model) providers.
Defined Under Namespace
Modules: Transport
Instance Method Summary
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #audio ⇒ LLM::OpenAI::Audio
  Returns an interface to the audio API.
- #chat(prompt, params = {}) ⇒ LLM::Context
  Starts a new chat powered by the chat completions API.
- #complete(prompt, params = {}) ⇒ LLM::Response
  Provides an interface to the chat completions API.
- #default_model ⇒ String
  Returns the default model for chat completions.
- #developer_role ⇒ Symbol
- #embed(input, model: nil, **params) ⇒ LLM::Response
  Provides an embedding.
- #files ⇒ LLM::OpenAI::Files
  Returns an interface to the files API.
- #images ⇒ LLM::OpenAI::Images, LLM::Google::Images
  Returns an interface to the images API.
- #initialize(key:, host:, port: 443, timeout: 60, ssl: true, base_path: "", persistent: false) ⇒ Provider (constructor)
  A new instance of Provider.
- #inspect ⇒ String
  Returns an inspection of the provider object.
- #interrupt!(owner) ⇒ nil (also: #cancel!)
  Interrupts the active request, if any.
- #models ⇒ LLM::OpenAI::Models
  Returns an interface to the models API.
- #moderations ⇒ LLM::OpenAI::Moderations
  Returns an interface to the moderations API.
- #name ⇒ Symbol
  Returns the provider’s name.
- #persist! ⇒ LLM::Provider (also: #persistent)
  Configures the provider to use a persistent connection pool via the optional dependency Net::HTTP::Persistent (github.com/drbrain/net-http-persistent).
- #respond(prompt, params = {}) ⇒ LLM::Context
  Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses
  Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
- #schema ⇒ LLM::Schema
  Returns an object that can generate a JSON schema.
- #server_tool(name, options = {}) ⇒ LLM::ServerTool
  Returns a tool provided by a provider.
- #server_tools ⇒ Hash{String => LLM::ServerTool}
  Returns all known tools provided by a provider.
- #streamable?(stream) ⇒ Boolean
- #system_role ⇒ Symbol
- #tool_role ⇒ Symbol
- #tracer ⇒ LLM::Tracer
  Returns the current scoped tracer override, or the provider’s default tracer.
- #tracer=(tracer) ⇒ void
  Sets the provider’s default tracer. This tracer is shared by the provider instance and becomes the fallback whenever no scoped override is active.
- #user_role ⇒ Symbol
- #vector_stores ⇒ LLM::OpenAI::VectorStore
  Returns an interface to the vector stores API.
- #web_search(query:) ⇒ LLM::Response
  Provides a web search capability.
- #with(headers:) ⇒ LLM::Provider
  Adds one or more headers to all requests.
- #with_tracer(tracer) { ... } ⇒ Object
  Overrides the tracer for the current fiber while the block runs.
Constructor Details
#initialize(key:, host:, port: 443, timeout: 60, ssl: true, base_path: "", persistent: false) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 30

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, base_path: "", persistent: false)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @base_path = normalize_base_path(base_path)
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  @headers = {"User-Agent" => "llm.rb v#{LLM::VERSION}"}
  @transport = Transport::HTTP.new(host:, port:, timeout:, ssl:, persistent:)
  @monitor = Monitor.new
end
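The constructor mostly records connection settings; the base URI and default header it assembles can be sketched in isolation. The host and version string below are illustrative values, not defaults of the library:

```ruby
require "uri"

# Illustrative values; the constructor receives these as keyword arguments.
ssl, host, port = true, "api.example.com", 443

# Same interpolation the constructor uses to build @base_uri.
base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
headers  = {"User-Agent" => "llm.rb v0.0.0"} # version string is hypothetical

base_uri.scheme # => "https"
base_uri.port   # => 443
```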
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.
# File 'lib/llm/provider.rb', line 177

def assistant_role
  raise NotImplementedError
end
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API
# File 'lib/llm/provider.rb', line 141

def audio
  raise NotImplementedError
end
#chat(prompt, params = {}) ⇒ LLM::Context
Starts a new chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 104

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).talk(prompt, role:)
end
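Note that chat removes :role from params before handing the rest to LLM::Context, so the role is never forwarded as a request parameter. The extraction step on its own (the param values below are illustrative):

```ruby
# Hash#delete returns the removed value, or nil when the key is absent.
params = {role: :system, model: "example-model"} # illustrative params
role = params.delete(:role)

role   # => :system
params # no longer contains :role, only {model: "example-model"}
```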
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API
# File 'lib/llm/provider.rb', line 95

def complete(prompt, params = {})
  raise NotImplementedError
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/provider.rb', line 184

def default_model
  raise NotImplementedError
end
#developer_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 262

def developer_role
  :developer
end
#embed(input, model: nil, **params) ⇒ LLM::Response
Provides an embedding
# File 'lib/llm/provider.rb', line 71

def embed(input, model: nil, **params)
  raise NotImplementedError
end
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API
# File 'lib/llm/provider.rb', line 148

def files
  raise NotImplementedError
end
#images ⇒ LLM::OpenAI::Images, LLM::Google::Images
Returns an interface to the images API
# File 'lib/llm/provider.rb', line 134

def images
  raise NotImplementedError
end
#inspect ⇒ String
The secret key is redacted in inspect for security reasons
Returns an inspection of the provider object
# File 'lib/llm/provider.rb', line 47

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @transport=#{transport.inspect} @tracer=#{tracer.inspect}>"
end
#interrupt!(owner) ⇒ nil Also known as: cancel!
Interrupt the active request, if any.
# File 'lib/llm/provider.rb', line 336

def interrupt!(owner)
  transport.interrupt!(owner)
end
#models ⇒ LLM::OpenAI::Models
Returns an interface to the models API
# File 'lib/llm/provider.rb', line 155

def models
  raise NotImplementedError
end
#moderations ⇒ LLM::OpenAI::Moderations
Returns an interface to the moderations API
# File 'lib/llm/provider.rb', line 162

def moderations
  raise NotImplementedError
end
#name ⇒ Symbol
Returns the provider’s name
# File 'lib/llm/provider.rb', line 56

def name
  raise NotImplementedError
end
#persist! ⇒ LLM::Provider Also known as: persistent
This method configures a provider to use a persistent connection pool via the optional dependency [Net::HTTP::Persistent](github.com/drbrain/net-http-persistent)
# File 'lib/llm/provider.rb', line 326

def persist!
  transport.persist!
  self
end
#respond(prompt, params = {}) ⇒ LLM::Context
Starts a new chat powered by the responses API
# File 'lib/llm/provider.rb', line 115

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).respond(prompt, role:)
end
#responses ⇒ LLM::OpenAI::Responses
Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 127

def responses
  raise NotImplementedError
end
#schema ⇒ LLM::Schema
Returns an object that can generate a JSON schema
# File 'lib/llm/provider.rb', line 191

def schema
  LLM::Schema.new
end
#server_tool(name, options = {}) ⇒ LLM::ServerTool
OpenAI, Anthropic, and Gemini provide platform tools for capabilities such as web search.
Returns a tool provided by a provider.
# File 'lib/llm/provider.rb', line 234

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end
#server_tools ⇒ Hash{String => LLM::ServerTool}
This list might be outdated; the LLM::Provider#server_tool method can be used when a tool is not found here.
Returns all known tools provided by a provider.
# File 'lib/llm/provider.rb', line 217

def server_tools
  {}
end
#streamable?(stream) ⇒ Boolean
# File 'lib/llm/provider.rb', line 344

def streamable?(stream)
  LLM::Stream === stream || stream.respond_to?(:<<)
end
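The predicate accepts either an LLM::Stream or any object that responds to <<. Without the gem loaded, the duck-typed half of the check can be tried on common objects (stream_target? is our name for this sketch, not the library's):

```ruby
require "stringio"

# Mirrors the duck-typed half of #streamable?: any object with a << writer.
def stream_target?(obj)
  obj.respond_to?(:<<)
end

stream_target?(StringIO.new) # => true
stream_target?($stdout)      # => true
stream_target?(nil)          # => false
```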
#system_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 256

def system_role
  :system
end
#tool_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 268

def tool_role
  :tool
end
#tracer ⇒ LLM::Tracer
Returns the current scoped tracer override, or the provider’s default tracer
# File 'lib/llm/provider.rb', line 275

def tracer
  weakmap[self] || @tracer || LLM::Tracer::Null.new(self)
end
#tracer=(tracer) ⇒ void
This method returns an undefined value.
Sets the provider’s default tracer. This tracer is shared by the provider instance and becomes the fallback whenever no scoped override is active.
# File 'lib/llm/provider.rb', line 289

def tracer=(tracer)
  @tracer = tracer
end
#user_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 250

def user_role
  :user
end
#vector_stores ⇒ LLM::OpenAI::VectorStore
Returns an interface to the vector stores API
# File 'lib/llm/provider.rb', line 169

def vector_stores
  raise NotImplementedError
end
#web_search(query:) ⇒ LLM::Response
Provides a web search capability
# File 'lib/llm/provider.rb', line 244

def web_search(query:)
  raise NotImplementedError
end
#with(headers:) ⇒ LLM::Provider
Add one or more headers to all requests
# File 'lib/llm/provider.rb', line 205

def with(headers:)
  lock do
    tap { @headers.merge!(headers) }
  end
end
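Because the method uses tap, it returns the provider itself, so calls can be chained. The underlying destructive merge behaves like this (the header names below are illustrative):

```ruby
headers = {"User-Agent" => "llm.rb"}
headers.merge!("X-Request-ID" => "abc-123") # later calls add to the same hash
headers.merge!("User-Agent" => "custom")    # duplicate keys are overwritten

headers.keys.sort     # => ["User-Agent", "X-Request-ID"]
headers["User-Agent"] # => "custom"
```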
#with_tracer(tracer) { ... } ⇒ Object
Override the tracer for the current fiber while the block runs. This is useful when you want per-request or per-turn tracing without replacing the provider’s default tracer.
# File 'lib/llm/provider.rb', line 304

def with_tracer(tracer)
  had_override = weakmap.key?(self)
  previous = weakmap[self]
  weakmap[self] = tracer
  yield
ensure
  if had_override
    weakmap[self] = previous
  elsif weakmap.respond_to?(:delete)
    weakmap.delete(self)
  else
    weakmap[self] = nil
  end
end
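The method stores the override in a weak map keyed by the provider and restores the previous state in ensure. The save/restore shape can be mimicked with a plain Hash (names here are ours, not the library's):

```ruby
# A minimal mimic of the save/restore pattern in #with_tracer, using a plain
# Hash in place of the provider's weak map.
OVERRIDES = {}

def with_override(key, value)
  had_override = OVERRIDES.key?(key)
  previous = OVERRIDES[key]
  OVERRIDES[key] = value
  yield
ensure
  if had_override
    OVERRIDES[key] = previous # restore the outer override
  else
    OVERRIDES.delete(key)     # no prior override: remove the entry entirely
  end
end

with_override(:provider, :tracer_a) do
  with_override(:provider, :tracer_b) do
    OVERRIDES[:provider] # => :tracer_b
  end
  OVERRIDES[:provider]   # => :tracer_a (inner override restored)
end
OVERRIDES.key?(:provider) # => false (no leak after the block)
```

Restoring the previous value rather than always deleting is what makes nested overrides compose correctly.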