Class: LLM::Provider (Abstract)
- Inherits: Object
- Includes: Transport::HTTP::Execution
- Defined in:
  lib/llm/provider.rb,
  lib/llm/provider/transport/http.rb,
  lib/llm/provider/transport/http/interruptible.rb
Overview
The Provider class represents an abstract class for LLM (Language Model) providers.
Defined Under Namespace
Modules: Transport
Instance Method Summary
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #audio ⇒ LLM::OpenAI::Audio
  Returns an interface to the audio API.
- #chat(prompt, params = {}) ⇒ LLM::Context
  Starts a new chat powered by the chat completions API.
- #complete(prompt, params = {}) ⇒ LLM::Response
  Provides an interface to the chat completions API.
- #default_model ⇒ String
  Returns the default model for chat completions.
- #developer_role ⇒ Symbol
- #embed(input, model: nil, **params) ⇒ LLM::Response
  Provides an embedding.
- #files ⇒ LLM::OpenAI::Files
  Returns an interface to the files API.
- #images ⇒ LLM::OpenAI::Images, LLM::Google::Images
  Returns an interface to the images API.
- #initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider (constructor)
  A new instance of Provider.
- #inspect ⇒ String
  Returns an inspection of the provider object.
- #interrupt!(owner) ⇒ nil (also: #cancel!)
  Interrupts the active request, if any.
- #models ⇒ LLM::OpenAI::Models
  Returns an interface to the models API.
- #moderations ⇒ LLM::OpenAI::Moderations
  Returns an interface to the moderations API.
- #name ⇒ Symbol
  Returns the provider's name.
- #persist! ⇒ LLM::Provider (also: #persistent)
  Configures a provider to use a persistent connection pool via the optional dependency [Net::HTTP::Persistent](github.com/drbrain/net-http-persistent).
- #respond(prompt, params = {}) ⇒ LLM::Context
  Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses
  Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
- #schema ⇒ LLM::Schema
  Returns an object that can generate a JSON schema.
- #server_tool(name, options = {}) ⇒ LLM::ServerTool
  Returns a tool provided by a provider.
- #server_tools ⇒ Hash{String => LLM::ServerTool}
  Returns all known tools provided by a provider.
- #streamable?(stream) ⇒ Boolean
- #system_role ⇒ Symbol
- #tool_role ⇒ Symbol
- #tracer ⇒ LLM::Tracer
  Returns a fiber-local tracer.
- #tracer=(tracer) ⇒ void
  Sets a fiber-local tracer.
- #user_role ⇒ Symbol
- #vector_stores ⇒ LLM::OpenAI::VectorStore
  Returns an interface to the vector stores API.
- #web_search(query:) ⇒ LLM::Response
  Provides a web search capability.
- #with(headers:) ⇒ LLM::Provider
  Adds one or more headers to all requests.
Constructor Details
#initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 28

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  @headers = {"User-Agent" => "llm.rb v#{LLM::VERSION}"}
  @transport = Transport::HTTP.new(host:, port:, timeout:, ssl:, persistent:)
  @monitor = Monitor.new
end
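The constructor derives its base URI from the ssl, host, and port arguments. A minimal sketch of that derivation using Ruby's stdlib URI (the hosts below are placeholders, not endpoints from the library):

```ruby
require "uri"

# Build a base URI the same way the constructor does: the scheme follows
# the ssl flag, and the host and port are interpolated directly.
def base_uri(host:, port: 443, ssl: true)
  URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
end

base_uri(host: "api.example.com").to_s
# "https://api.example.com/" (443 is the https default, so URI omits it)
base_uri(host: "localhost", port: 8080, ssl: false).to_s
# "http://localhost:8080/"
```

Note that URI drops the port from its string form when it equals the scheme's default, which is why 443 does not appear in the first result.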
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.
# File 'lib/llm/provider.rb', line 174

def assistant_role
  raise NotImplementedError
end
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API
# File 'lib/llm/provider.rb', line 138

def audio
  raise NotImplementedError
end
#chat(prompt, params = {}) ⇒ LLM::Context
Starts a new chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 101

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).talk(prompt, role:)
end
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API
# File 'lib/llm/provider.rb', line 92

def complete(prompt, params = {})
  raise NotImplementedError
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/provider.rb', line 181

def default_model
  raise NotImplementedError
end
#developer_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 259

def developer_role
  :developer
end
#embed(input, model: nil, **params) ⇒ LLM::Response
Provides an embedding
# File 'lib/llm/provider.rb', line 68

def embed(input, model: nil, **params)
  raise NotImplementedError
end
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API
# File 'lib/llm/provider.rb', line 145

def files
  raise NotImplementedError
end
#images ⇒ LLM::OpenAI::Images, LLM::Google::Images
Returns an interface to the images API
# File 'lib/llm/provider.rb', line 131

def images
  raise NotImplementedError
end
#inspect ⇒ String
Note: the secret key is redacted in #inspect for security reasons.
Returns an inspection of the provider object
# File 'lib/llm/provider.rb', line 44

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @transport=#{transport.inspect} @tracer=#{tracer.inspect}>"
end
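The redaction pattern is easy to replicate in any client class. A sketch using a stand-in class (FakeProvider is illustrative, not part of the library):

```ruby
# Stand-in class illustrating the redaction pattern used by #inspect:
# the API key is stored as state but never interpolated into the output.
class FakeProvider
  def initialize(key:)
    @key = key
  end

  def inspect
    "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED]>"
  end
end

provider = FakeProvider.new(key: "sk-secret")
provider.inspect.include?("sk-secret")   # false
provider.inspect.include?("[REDACTED]")  # true
```

Overriding #inspect this way also keeps the key out of logs produced by p, Kernel#pp, and exception messages that embed the receiver.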
#interrupt!(owner) ⇒ nil Also known as: cancel!
Interrupt the active request, if any.
# File 'lib/llm/provider.rb', line 319

def interrupt!(owner)
  transport.interrupt!(owner)
end
#models ⇒ LLM::OpenAI::Models
Returns an interface to the models API
# File 'lib/llm/provider.rb', line 152

def models
  raise NotImplementedError
end
#moderations ⇒ LLM::OpenAI::Moderations
Returns an interface to the moderations API
# File 'lib/llm/provider.rb', line 159

def moderations
  raise NotImplementedError
end
#name ⇒ Symbol
Returns the provider’s name
# File 'lib/llm/provider.rb', line 53

def name
  raise NotImplementedError
end
#persist! ⇒ LLM::Provider Also known as: persistent
This method configures a provider to use a persistent connection pool via the optional dependency [Net::HTTP::Persistent](github.com/drbrain/net-http-persistent).
# File 'lib/llm/provider.rb', line 309

def persist!
  transport.persist!
  self
end
#respond(prompt, params = {}) ⇒ LLM::Context
Starts a new chat powered by the responses API
# File 'lib/llm/provider.rb', line 112

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).respond(prompt, role:)
end
#responses ⇒ LLM::OpenAI::Responses
Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 124

def responses
  raise NotImplementedError
end
#schema ⇒ LLM::Schema
Returns an object that can generate a JSON schema
# File 'lib/llm/provider.rb', line 188

def schema
  LLM::Schema.new
end
#server_tool(name, options = {}) ⇒ LLM::ServerTool
OpenAI, Anthropic, and Gemini provide server-side platform tools for capabilities such as web search.
Returns a tool provided by a provider.
# File 'lib/llm/provider.rb', line 231

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end
#server_tools ⇒ Hash{String => LLM::ServerTool}
The list returned by this method may be incomplete or outdated; LLM::Provider#server_tool can be used to construct a tool that is not listed here.
Returns all known tools provided by a provider.
# File 'lib/llm/provider.rb', line 214

def server_tools
  {}
end
#streamable?(stream) ⇒ Boolean
# File 'lib/llm/provider.rb', line 327

def streamable?(stream)
  LLM::Stream === stream || stream.respond_to?(:<<)
end
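Besides LLM::Stream instances, the check is duck-typed: any object that responds to #<< (an IO, a StringIO, an Array) is accepted as a stream sink. A sketch of just the duck-typed half, since LLM::Stream itself belongs to the library:

```ruby
require "stringio"

# Duck-typed stream check: accept anything that can receive chunks via #<<.
def streamable?(stream)
  stream.respond_to?(:<<)
end

streamable?(StringIO.new)  # true: accumulates chunks in memory
streamable?($stdout)       # true: writes chunks to the terminal
streamable?([])            # true: Array#<< appends each chunk
streamable?(nil)           # false: nil cannot receive chunks
```

One caveat of this style of check: some objects define #<< with an unrelated meaning (Integer#<< is a bit shift), so callers should pass genuine sinks.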
#system_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 253

def system_role
  :system
end
#tool_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 265

def tool_role
  :tool
end
#tracer ⇒ LLM::Tracer
Returns a fiber-local tracer
# File 'lib/llm/provider.rb', line 272

def tracer
  weakmap[self] || LLM::Tracer::Null.new(self)
end
#tracer=(tracer) ⇒ void
This method returns an undefined value.
Set a fiber-local tracer
# File 'lib/llm/provider.rb', line 290

def tracer=(tracer)
  if tracer.nil?
    if weakmap.respond_to?(:delete)
      weakmap.delete(self)
    else
      weakmap[self] = nil
    end
  else
    weakmap[self] = tracer
  end
end
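The weakmap helper is not documented on this page, but the nil-assignment fallback suggests an ObjectSpace::WeakMap, whose #delete method only exists on Ruby 3.3 and later. A sketch of that pattern, assuming a plain WeakMap:

```ruby
# Weak storage for per-object tracers: an entry disappears on its own
# once the key (the provider) is garbage-collected. WeakMap#delete was
# added in Ruby 3.3; on older rubies, assigning nil clears the entry.
weakmap  = ObjectSpace::WeakMap.new
provider = Object.new
tracer   = Object.new

weakmap[provider] = tracer
weakmap[provider].equal?(tracer)  # true

if weakmap.respond_to?(:delete)   # Ruby 3.3+
  weakmap.delete(provider)
else
  weakmap[provider] = nil
end
weakmap[provider]                 # nil either way
```

Using a weak map here means a provider never leaks its tracer: when the provider is collected, the tracer entry goes with it.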
#user_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 247

def user_role
  :user
end
#vector_stores ⇒ LLM::OpenAI::VectorStore
Returns an interface to the vector stores API
# File 'lib/llm/provider.rb', line 166

def vector_stores
  raise NotImplementedError
end
#web_search(query:) ⇒ LLM::Response
Provides a web search capability
# File 'lib/llm/provider.rb', line 241

def web_search(query:)
  raise NotImplementedError
end
#with(headers:) ⇒ LLM::Provider
Add one or more headers to all requests
# File 'lib/llm/provider.rb', line 202

def with(headers:)
  lock do
    tap { @headers.merge!(headers) }
  end
end
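Because the method returns the provider itself (via tap), header additions chain naturally. A sketch with a stand-in class (MiniClient is illustrative; the real method also synchronizes the merge via lock):

```ruby
# Stand-in illustrating the merge-and-return-self pattern behind #with.
class MiniClient
  attr_reader :headers

  def initialize
    @headers = {"User-Agent" => "llm.rb"}
  end

  # Merge the given headers into the defaults; tap returns self,
  # so calls can be chained.
  def with(headers:)
    tap { @headers.merge!(headers) }
  end
end

client = MiniClient.new
client.with(headers: {"X-Org-Id" => "acme"})
      .with(headers: {"Authorization" => "Bearer token"})
client.headers
# {"User-Agent"=>"llm.rb", "X-Org-Id"=>"acme", "Authorization"=>"Bearer token"}
```

Since Hash#merge! gives the argument precedence on key collisions, a later #with call can also override an existing header such as User-Agent.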