LLM Cost Tracker
Self-hosted LLM cost tracking for Rails.
Every call your app makes to OpenAI, Anthropic, Gemini, RubyLLM, or any OpenAI-compatible API gets logged: tokens, cost, latency, tags. Calls go app → provider direct. No proxy.
Not Langfuse, Helicone, or LiteLLM. No prompts, no traces, no replay. Spend attribution only.
Requires Ruby 3.4+, Rails 7.1+, PostgreSQL or MySQL.

Quickstart
```ruby
# Gemfile
gem "llm_cost_tracker"
gem "openai"
```

```shell
bin/rails llm_cost_tracker:setup
```
That runs the install generator with the dashboard and pricing snapshot,
migrates the database, then verifies via llm_cost_tracker:doctor.
```ruby
# config/initializers/llm_cost_tracker.rb
LlmCostTracker.configure do |config|
  # Tags merged into every recorded call.
  config.default_tags = -> { { environment: Rails.env } }
  config.instrument :openai
end
```
Tag your calls — that's how you find out who burned the money:
```ruby
LlmCostTracker.tag(user_id: Current.user&.id, feature: "chat") do
  client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])
  client.responses.create(model: "gpt-4o", input: "Hello")
end
```
Mount the dashboard at /llm-costs and put it behind your app's auth — it
ships without one.
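One way to gate the mount, sketched here assuming Devise and an `admin?` predicate on your user model (both assumptions, as is the engine constant `LlmCostTracker::Engine`):

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # Devise's authenticate routing helper; swap in whatever auth your app uses.
  authenticate :user, ->(user) { user.admin? } do
    mount LlmCostTracker::Engine => "/llm-costs"
  end
end
```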
What lands in the ledger
- Calls. Provider, model, total tokens, total cost, latency, status.
- Line items. Per-component breakdown — text/audio/cached tokens, tool charges (web search, code execution, grounding, container sessions).
- Tags. Whatever attribution you pass — user, feature, tenant, env.
- Provider IDs. Response, project, API key, workspace — for downstream audits.
- Pricing snapshot. So historical numbers don't drift when prices change.
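The snapshot matters because providers change prices: a call's cost is computed once, from the rates in force at call time, and never recomputed. A toy illustration with made-up per-million-token rates:

```ruby
# Illustration only — rates below are hypothetical, not the gem's pricing table.
SNAPSHOT = { "gpt-4o" => { input: 2.50, output: 10.00 } } # USD per 1M tokens

# Cost is frozen against the snapshot, so later price changes don't drift history.
def cost(model, input_tokens, output_tokens, pricing: SNAPSHOT)
  rates = pricing.fetch(model)
  (input_tokens * rates[:input] + output_tokens * rates[:output]) / 1_000_000.0
end

cost("gpt-4o", 1500, 320) # => 0.00695
```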
Capture surfaces
| Surface | Path |
|---|---|
| OpenAI | Official SDK or Faraday |
| Anthropic | Official SDK or Faraday |
| Google Gemini | Faraday |
| RubyLLM | Provider layer |
| ruby-openai | Faraday |
| OpenRouter, DeepSeek, Groq, LiteLLM-style gateways | OpenAI-compatible Faraday |
| Anything else | LlmCostTracker.track |
Streams capture when the provider emits final usage. OpenAI Faraday streams
need `stream_options: { include_usage: true }`.
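With the ruby-openai surface, that looks roughly like this (model and handler are illustrative):

```ruby
require "openai" # ruby-openai gem

client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])

client.chat(
  parameters: {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
    # Without include_usage the final chunk carries no token counts,
    # so there is nothing for the tracker to record.
    stream_options: { include_usage: true },
    stream: proc { |chunk| print chunk.dig("choices", 0, "delta", "content") }
  }
)
```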
What it isn't
- No proxy. Direct calls only.
- No prompts. Token counts and metadata only.
- Not invoice-grade. Provider response IDs are stored for reconciliation.
- Not multi-service. Built for a Rails monolith.
Manual tracking
For batch jobs, internal gateways, or anything without an SDK/Faraday hook:
```ruby
LlmCostTracker.track(
  provider: :anthropic,
  model: "claude-sonnet-4-6",
  tokens: { input: 1500, output: 320 },
  tags: { feature: "summarizer", user_id: current_user.id }
)
```
Docs
- Configuration
- Pricing
- Budgets
- Data model
- Querying
- Dashboard
- Streaming
- Cookbook
- Extending
- Operations
- Architecture
- Upgrading
- Changelog
Development
```shell
bundle install
bin/check
```
License
MIT — see LICENSE.txt.