# ruby-openrouter

A minimal, conversational Ruby client for the OpenRouter API — access hundreds of LLMs (Claude, GPT-4, Gemini, Llama, and more) through a single, clean interface.

```ruby
client = RubyOpenrouter::Client.new(model: "anthropic/claude-3.5-sonnet")
client.system("You are a concise assistant.")
puts client.user("What is Ruby?")
#=> "Ruby is a dynamic, open-source programming language..."
```
## Features
- Conversational by default — message history is maintained automatically across turns
- Streaming support — receive tokens in real time via a simple block
- Multi-turn context — system, user, and assistant messages compose naturally
- Error hierarchy — typed exceptions for auth, rate limits, and server errors
- Zero magic — one client, one model, clear methods
## Installation

Add to your Gemfile:

```ruby
gem "ruby-openrouter"
```

Or install directly:

```sh
gem install ruby-openrouter
```
## Quick Start

```ruby
require "ruby_openrouter"

client = RubyOpenrouter::Client.new(
  model: "openai/gpt-4o",
  api_key: ENV["OPENROUTER_API_KEY"]
)

reply = client.user("Explain quantum entanglement in one sentence.")
puts reply
```
## Configuration

### Per-client

Pass options directly when instantiating:

```ruby
client = RubyOpenrouter::Client.new(
  model: "anthropic/claude-3.5-sonnet",
  api_key: ENV["OPENROUTER_API_KEY"],
  site_url: "https://myapp.com", # sent as HTTP-Referer for attribution
  site_name: "MyApp",            # sent as X-Title
  timeout: 60
)
```
### Global

Set defaults once at startup (e.g., in an initializer):

```ruby
RubyOpenrouter.configure do |config|
  config.api_key = ENV["OPENROUTER_API_KEY"]
  config.site_url = "https://myapp.com"
  config.site_name = "MyApp"
  config.timeout = 30
end

# All clients created afterwards will use these defaults
client = RubyOpenrouter::Client.new(model: "openai/gpt-4o-mini")
```
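Per-client options presumably take precedence over these global defaults. A minimal, illustrative sketch of such merge semantics — the `GLOBAL_DEFAULTS` hash and `effective_options` helper below are hypothetical, not part of the gem's API:

```ruby
# Illustrative only: how global defaults and per-client options
# might combine. These names are not part of ruby-openrouter.
GLOBAL_DEFAULTS = {
  api_key: "sk-or-...", # placeholder key
  timeout: 30
}

def effective_options(per_client = {})
  # Hash#merge lets per-client keys win over the defaults
  GLOBAL_DEFAULTS.merge(per_client)
end

opts = effective_options(model: "openai/gpt-4o-mini", timeout: 60)
opts[:timeout] #=> 60 (per-client value wins)
opts[:api_key] #=> "sk-or-..." (falls back to the default)
```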
## Usage

### System prompt

```ruby
client.system("You are a Ruby expert. Keep answers short.")
```

Calling `#system` again replaces the existing system message — there is always at most one.
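One way to picture that "at most one system message" behavior — a standalone sketch, not the gem's actual internals:

```ruby
# Illustrative sketch of replace-on-set system-message semantics.
class History
  def initialize
    @messages = []
  end

  def system(content)
    # Drop any existing system message, then prepend the new one
    @messages.reject! { |m| m[:role] == "system" }
    @messages.unshift(role: "system", content: content)
  end

  def to_a
    @messages
  end
end

h = History.new
h.system("You are a poet.")
h.system("You are a pirate.")
h.to_a.count { |m| m[:role] == "system" } #=> 1
```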
### Chat

```ruby
reply = client.user("What is a Proc?")
puts reply #=> "A Proc is a block of code..."

# Continue the conversation — history is kept automatically
reply = client.user("How does it differ from a lambda?")
puts reply #=> "Unlike a lambda, a Proc does not..."
```
### Streaming

Pass a block to `#user` to receive tokens as they arrive:

```ruby
client.user("Write a haiku about Ruby.") do |chunk|
  print chunk
end
# Streams: "Elegant syntax flows / Objects dance in harmony / Matz smiles warmly"
puts
```

The full response is appended to conversation history even when streaming.
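Conceptually, the streamed chunks are joined back into one message before being stored. A standalone sketch of that accumulation — the chunk array below simulates what the API would deliver:

```ruby
# Simulated stream: in reality these chunks arrive from the API
chunks = ["Elegant ", "syntax ", "flows"]

buffer = +"" # unfrozen string to append into
chunks.each do |chunk|
  print chunk     # what the block sees, token by token
  buffer << chunk # what ends up in conversation history
end

buffer #=> "Elegant syntax flows"
```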
### Few-shot examples

Seed the conversation with pre-written assistant turns using `#assistant`:

```ruby
client.user("Translate: hello")
client.assistant("Hola")
client.user("Translate: goodbye")
client.assistant("Adiós")

puts client.user("Translate: thank you")
#=> "Gracias"
```
### Reset

Clear the conversation history while keeping the system prompt:

```ruby
client.system("You are a chef.")
client.user("What is mise en place?")

client.reset # clears user/assistant turns, keeps system prompt
client.user("What is a roux?") # fresh start, still a chef
```
### List available models

```ruby
# From a client instance
models = client.models
models.each { |m| puts m["id"] }

# Or at the module level
models = RubyOpenrouter.models(api_key: ENV["OPENROUTER_API_KEY"])
models.first(5).each { |m| puts "#{m["id"]} — #{m["name"]}" }
```
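Since each entry is a Hash (the examples above read `m["id"]` and `m["name"]`), standard Enumerable methods work for filtering — for instance, keeping only one provider's models. The sample data below is illustrative:

```ruby
# Sample data shaped like the entries above; real IDs and names vary
models = [
  { "id" => "anthropic/claude-3.5-sonnet", "name" => "Claude 3.5 Sonnet" },
  { "id" => "openai/gpt-4o",               "name" => "GPT-4o" },
  { "id" => "openai/gpt-4o-mini",          "name" => "GPT-4o mini" }
]

# Keep only OpenAI-hosted models by their provider/model prefix
openai = models.select { |m| m["id"].start_with?("openai/") }
openai.map { |m| m["id"] }
#=> ["openai/gpt-4o", "openai/gpt-4o-mini"]
```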
## Error Handling

```ruby
begin
  reply = client.user("Hello!")
rescue RubyOpenrouter::AuthenticationError => e
  puts "Invalid API key: #{e.message}"
rescue RubyOpenrouter::RateLimitError => e
  puts "Rate limited (status #{e.status}), retry later"
rescue RubyOpenrouter::ServerError => e
  puts "OpenRouter server error: #{e.message}"
rescue RubyOpenrouter::APIError => e
  puts "API error #{e.status}: #{e.message}"
rescue RubyOpenrouter::ConfigurationError => e
  puts "Configuration problem: #{e.message}"
end
```
| Exception | HTTP status |
|---|---|
| `AuthenticationError` | 401 |
| `BadRequestError` | 400 |
| `RateLimitError` | 429 |
| `ServerError` | 5xx |
| `APIError` | any other 4xx/5xx |
| `ConfigurationError` | — (missing `api_key`) |
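A common pattern is to retry after a `RateLimitError` with exponential backoff. A hedged sketch — the `with_retries` helper is illustrative, not part of the gem, and a local `RateLimitError` class plus a simulated request stand in for the real client call:

```ruby
# Illustrative retry-with-backoff helper. RateLimitError stands in
# for RubyOpenrouter::RateLimitError; the block below simulates a
# request that is rate-limited twice, then succeeds.
class RateLimitError < StandardError; end

def with_retries(max_attempts: 3, base_delay: 1)
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue RateLimitError
    raise if attempts >= max_attempts
    sleep(base_delay * 2**(attempts - 1)) # 1s, 2s, 4s, ...
    retry
  end
end

reply = with_retries(base_delay: 0) do |attempt|
  raise RateLimitError if attempt < 3 # simulate two 429 responses
  "Hello from attempt #{attempt}!"
end

reply #=> "Hello from attempt 3!"
```

With the real client you would wrap the call itself — e.g. `with_retries { client.user("Hello!") }` — rescuing `RubyOpenrouter::RateLimitError` instead of the local stand-in.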
## Model IDs
OpenRouter uses provider/model identifiers. Some popular ones:
| Model | ID |
|---|---|
| Claude 3.5 Sonnet | anthropic/claude-3.5-sonnet |
| GPT-4o | openai/gpt-4o |
| GPT-4o mini | openai/gpt-4o-mini |
| Gemini 1.5 Pro | google/gemini-pro-1.5 |
| Llama 3.3 70B | meta-llama/llama-3.3-70b-instruct |
See all available models at [openrouter.ai/models](https://openrouter.ai/models) or via `client.models`.
## Development

```sh
git clone https://github.com/deyvin/ruby-openrouter
cd ruby-openrouter
bundle install

bundle exec rspec   # run tests
bundle exec rubocop # lint
```

Tests use WebMock — no real API calls are made.
## License

MIT