Class: LLM::Provider Abstract
- Inherits: Object
  - Object
  - LLM::Provider
- Includes: Transport::Execution
- Defined in: lib/llm/provider.rb
Overview
The Provider class represents an abstract class for LLM (Language Model) providers.
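Concrete providers are expected to subclass it and override the abstract methods, each of which raises NotImplementedError in the base class. A minimal, self-contained sketch of that pattern (the class names here are illustrative, not part of llm.rb):

```ruby
# Sketch of the abstract-provider pattern. "MyProvider" is a
# hypothetical subclass used only for illustration.
class AbstractProvider
  # Abstract: subclasses must override this.
  def name
    raise NotImplementedError
  end
end

class MyProvider < AbstractProvider
  def name
    :myprovider
  end
end
```

Calling `name` on the base class raises NotImplementedError, while the subclass returns its own value.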
Instance Method Summary

- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #audio ⇒ LLM::OpenAI::Audio
  Returns an interface to the audio API.
- #chat(prompt, params = {}) ⇒ LLM::Context
  Starts a new chat powered by the chat completions API.
- #complete(prompt, params = {}) ⇒ LLM::Response
  Provides an interface to the chat completions API.
- #default_model ⇒ String
  Returns the default model for chat completions.
- #developer_role ⇒ Symbol
- #embed(input, model: nil, **params) ⇒ LLM::Response
  Provides an embedding.
- #files ⇒ LLM::OpenAI::Files
  Returns an interface to the files API.
- #images ⇒ LLM::OpenAI::Images, LLM::Google::Images
  Returns an interface to the images API.
- #initialize(key:, host:, port: 443, timeout: 60, ssl: true, base_path: "", persistent: false, transport: nil) ⇒ Provider (constructor)
  Returns a new instance of Provider.
- #inspect ⇒ String
  Returns an inspection of the provider object.
- #interrupt!(owner) ⇒ nil (also: #cancel!)
  Interrupts the active request, if any.
- #models ⇒ LLM::OpenAI::Models
  Returns an interface to the models API.
- #moderations ⇒ LLM::OpenAI::Moderations
  Returns an interface to the moderations API.
- #name ⇒ Symbol
  Returns the provider’s name.
- #request_owner ⇒ Object (private)
  Returns the current request owner used by the transport.
- #respond(prompt, params = {}) ⇒ LLM::Context
  Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses
  Returns an interface to the responses API, which can require less bandwidth on each turn, maintain state server-side, and produce faster responses than the chat completions API.
- #schema ⇒ LLM::Schema
  Returns an object that can generate a JSON schema.
- #server_tool(name, options = {}) ⇒ LLM::ServerTool
  Returns a tool provided by a provider.
- #server_tools ⇒ {String => LLM::ServerTool}
  Returns all known tools provided by a provider.
- #streamable?(stream) ⇒ Boolean
- #system_role ⇒ Symbol
- #tool_role ⇒ Symbol
- #tracer ⇒ LLM::Tracer
  Returns the current scoped tracer override or the provider’s default tracer.
- #tracer=(tracer) ⇒ void
  Sets the provider’s default tracer, which is shared by the provider instance and becomes the fallback whenever no scoped override is active.
- #user_role ⇒ Symbol
- #vector_stores ⇒ LLM::OpenAI::VectorStore
  Returns an interface to the vector stores API.
- #web_search(query:) ⇒ LLM::Response
  Provides a web search capability.
- #with(headers:) ⇒ LLM::Provider
  Adds one or more headers to all requests.
- #with_tracer(tracer) { ... } ⇒ Object
  Overrides the tracer for the current fiber while the block runs.
Constructor Details
#initialize(key:, host:, port: 443, timeout: 60, ssl: true, base_path: "", persistent: false, transport: nil) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 29

def initialize(key:, host:, port: 443, timeout: 60, ssl: true,
               base_path: "", persistent: false, transport: nil)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @base_path = normalize_base_path(base_path)
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  @headers = {"User-Agent" => "llm.rb v#{LLM::VERSION}"}
  @transport = resolve_transport(transport, persistent:)
  @monitor = Monitor.new
end
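The constructor derives the base URI from the ssl, host, and port arguments. The interpolation can be checked in isolation with a small helper (the helper name is illustrative, not part of llm.rb):

```ruby
require "uri"

# Mirrors the interpolation in #initialize: the scheme follows the
# `ssl` flag, and the host and port are embedded verbatim.
def base_uri_for(host:, port: 443, ssl: true)
  URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
end
```

With the defaults this yields an HTTPS URI on port 443; passing `ssl: false` and a custom port switches both.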
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.
# File 'lib/llm/provider.rb', line 176

def assistant_role
  raise NotImplementedError
end
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API
# File 'lib/llm/provider.rb', line 140

def audio
  raise NotImplementedError
end
#chat(prompt, params = {}) ⇒ LLM::Context
Starts a new chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 103

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).talk(prompt, role:)
end
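The method extracts :role from the params hash before handing the remaining options to the context. Hash#delete both returns the removed value and mutates the hash in place, so the role never reaches the completion parameters:

```ruby
# Hash#delete returns the removed value (or nil when the key is
# absent) and removes the key from the hash, which is exactly the
# behavior #chat relies on when it peels :role off of params.
params = {role: :system, model: "gpt-4o-mini", temperature: 0}
role = params.delete(:role)
```

After the call, `role` is `:system` and `params` contains only the model and temperature.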
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API
# File 'lib/llm/provider.rb', line 94

def complete(prompt, params = {})
  raise NotImplementedError
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/provider.rb', line 183

def default_model
  raise NotImplementedError
end
#developer_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 261

def developer_role
  :developer
end
#embed(input, model: nil, **params) ⇒ LLM::Response
Provides an embedding
# File 'lib/llm/provider.rb', line 70

def embed(input, model: nil, **params)
  raise NotImplementedError
end
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API
# File 'lib/llm/provider.rb', line 147

def files
  raise NotImplementedError
end
#images ⇒ LLM::OpenAI::Images, LLM::Google::Images
Returns an interface to the images API
# File 'lib/llm/provider.rb', line 133

def images
  raise NotImplementedError
end
#inspect ⇒ String
The secret key is redacted in inspect for security reasons
Returns an inspection of the provider object
# File 'lib/llm/provider.rb', line 46

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @transport=#{transport.inspect} @tracer=#{tracer.inspect}>"
end
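The same redaction technique can be reproduced with a small standalone class; overriding #inspect keeps the secret out of logs and REPL output. The Client class below is hypothetical and exists only to illustrate the pattern:

```ruby
# Hypothetical client class, shown only to illustrate redacting a
# secret in a custom #inspect, as LLM::Provider does with @key.
class Client
  def initialize(key)
    @key = key
  end

  # Never interpolate @key; print a fixed placeholder instead.
  def inspect
    "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED]>"
  end
end
```

Without the override, Ruby's default #inspect would dump every instance variable, including the key.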
#interrupt!(owner) ⇒ nil Also known as: cancel!
Interrupt the active request, if any.
# File 'lib/llm/provider.rb', line 322

def interrupt!(owner)
  transport.interrupt!(owner)
end
#models ⇒ LLM::OpenAI::Models
Returns an interface to the models API
# File 'lib/llm/provider.rb', line 154

def models
  raise NotImplementedError
end
#moderations ⇒ LLM::OpenAI::Moderations
Returns an interface to the moderations API
# File 'lib/llm/provider.rb', line 161

def moderations
  raise NotImplementedError
end
#name ⇒ Symbol
Returns the provider’s name
# File 'lib/llm/provider.rb', line 55

def name
  raise NotImplementedError
end
#request_owner ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
Returns the current request owner used by the transport.
# File 'lib/llm/provider.rb', line 331

def request_owner
  transport.request_owner
end
#respond(prompt, params = {}) ⇒ LLM::Context
Starts a new chat powered by the responses API
# File 'lib/llm/provider.rb', line 114

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).respond(prompt, role:)
end
#responses ⇒ LLM::OpenAI::Responses
Returns an interface to the responses API. Compared to the chat completions API, it can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 126

def responses
  raise NotImplementedError
end
#schema ⇒ LLM::Schema
Returns an object that can generate a JSON schema
# File 'lib/llm/provider.rb', line 190

def schema
  LLM::Schema.new
end
#server_tool(name, options = {}) ⇒ LLM::ServerTool
OpenAI, Anthropic, and Gemini provide platform tools for capabilities such as web search.
Returns a tool provided by a provider.
# File 'lib/llm/provider.rb', line 233

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end
#server_tools ⇒ {String => LLM::ServerTool}
The returned collection might be out of date; LLM::Provider#server_tool can be used to construct a tool that is not listed here.
Returns all known tools provided by a provider.
# File 'lib/llm/provider.rb', line 216

def server_tools
  {}
end
#streamable?(stream) ⇒ Boolean
# File 'lib/llm/provider.rb', line 338

def streamable?(stream)
  LLM::Stream === stream || stream.respond_to?(:<<)
end
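Because the check falls back to respond_to?(:<<), any IO-like sink qualifies as a stream target. With the LLM::Stream branch omitted, the duck typing can be demonstrated on its own:

```ruby
require "stringio"

# Simplified version of the check: accept anything that can receive
# chunks via <<. (The LLM::Stream case from the real method is
# omitted here, since it is specific to the gem.)
def streamable?(stream)
  stream.respond_to?(:<<)
end
```

A StringIO, a File, or even an Array all pass, while nil does not.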
#system_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 255

def system_role
  :system
end
#tool_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 267

def tool_role
  :tool
end
#tracer ⇒ LLM::Tracer
Returns the current scoped tracer override or provider default tracer
# File 'lib/llm/provider.rb', line 274

def tracer
  weakmap[self] || @tracer || LLM::Tracer::Null.new(self)
end
#tracer=(tracer) ⇒ void
This method returns an undefined value.
Sets the provider’s default tracer. This tracer is shared by the provider instance and becomes the fallback whenever no scoped override is active.
# File 'lib/llm/provider.rb', line 288

def tracer=(tracer)
  @tracer = tracer
end
#user_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 249

def user_role
  :user
end
#vector_stores ⇒ LLM::OpenAI::VectorStore
Returns an interface to the vector stores API
# File 'lib/llm/provider.rb', line 168

def vector_stores
  raise NotImplementedError
end
#web_search(query:) ⇒ LLM::Response
Provides a web search capability
# File 'lib/llm/provider.rb', line 243

def web_search(query:)
  raise NotImplementedError
end
#with(headers:) ⇒ LLM::Provider
Add one or more headers to all requests
# File 'lib/llm/provider.rb', line 204

def with(headers:)
  lock do
    tap { @headers.merge!(headers) }
  end
end
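The method folds the new headers into the existing set with Hash#merge!, so a later call overrides earlier values for the same key while other keys are preserved. The header values below are illustrative, not the gem's actual defaults:

```ruby
# Simulates repeated #with calls against the provider's header hash.
# Hash#merge! mutates the receiver; later values win for shared keys.
headers = {"User-Agent" => "llm.rb vX.Y"}
headers.merge!("X-Request-Id" => "abc")
headers.merge!("User-Agent" => "custom/2.0")
```

After both merges, the request id is retained and the user agent reflects the most recent value.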
#with_tracer(tracer) { ... } ⇒ Object
Override the tracer for the current fiber while the block runs. This is useful when you want per-request or per-turn tracing without replacing the provider’s default tracer.
# File 'lib/llm/provider.rb', line 303

def with_tracer(tracer)
  had_override = weakmap.key?(self)
  previous = weakmap[self]
  weakmap[self] = tracer
  yield
ensure
  if had_override
    weakmap[self] = previous
  elsif weakmap.respond_to?(:delete)
    weakmap.delete(self)
  else
    weakmap[self] = nil
  end
end
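The save-and-restore dance above can be reproduced with a plain Hash standing in for the gem's fiber-local weak map; the ensure block guarantees the previous value (or its absence) is restored even when the block raises. The names below are illustrative:

```ruby
# Plain-Hash stand-in for the provider's fiber-local weak map.
OVERRIDES = {}

def with_override(key, value)
  # Remember both the old value and whether a key existed at all,
  # so an absent entry is restored as absent, not as nil.
  had_override = OVERRIDES.key?(key)
  previous = OVERRIDES[key]
  OVERRIDES[key] = value
  yield
ensure
  if had_override
    OVERRIDES[key] = previous
  else
    OVERRIDES.delete(key)
  end
end
```

Tracking the "did the key exist" flag separately is what lets nested overrides unwind cleanly: restoring a nil value is different from removing the entry.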