# JsonLLM
JsonLLM is a small Ruby library for asking language models for JSON-shaped answers and validating them. You describe the keys and value types you expect (using hash_validator schemas), send a prompt, and get back a Ruby Hash—or a clear error if the model’s reply is not valid JSON or does not match the schema.
It ships with thin providers built on the official openai gem, so you can use OpenAI’s API or any OpenAI-compatible HTTP API (same request shape, different base URL).
## Installation

Add to your Gemfile:

```ruby
gem "json_llm"
```

Requires Ruby ≥ 3.2.
## Usage with OpenAI

Set `OPENAI_API_KEY` in your environment, then:
require "json_llm"
provider = JsonLLM::Providers::OpenAI.new(
api_key: ENV.fetch("OPENAI_API_KEY"),
model: "gpt-4o-mini"
)
# Schema: each value is a type name for hash_validator (e.g. "string", "numeric").
expected = {
"title" => "string",
"summary" => "string"
}
result = provider.chat("Invent a short blog post idea about Ruby gems.", expected)
# => { "title" => "...", "summary" => "..." }
`#chat` appends your schema to the prompt, parses JSON from the model's reply (including Markdown-fenced JSON blocks), and validates the result. On failure it raises `JsonLLM::Tools::NotJsonFounded` (no parseable JSON in the reply) or `JsonLLM::Tools::InvalidPayload` (valid JSON that does not match the schema).
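If you want to handle those failures explicitly, here is a minimal sketch reusing `provider` and `expected` from above:

```ruby
begin
  result = provider.chat("Invent a short blog post idea about Ruby gems.", expected)
rescue JsonLLM::Tools::NotJsonFounded
  # No parseable JSON (bare or fenced) was found in the reply.
  result = nil
rescue JsonLLM::Tools::InvalidPayload => e
  # The reply was valid JSON but did not match the expected schema.
  warn e.message
  result = nil
end
```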
### Optional: custom base URL

Pass `base_url:` if you use a compatible gateway or self-hosted endpoint:
```ruby
JsonLLM::Providers::OpenAI.new(
  api_key: ENV.fetch("OPENAI_API_KEY"),
  model: "gpt-4o-mini",
  base_url: "https://api.openai.com/v1" # the default
)
```
## Providers
| Provider | Class | Default base URL | Notes |
|---|---|---|---|
| OpenAI | `JsonLLM::Providers::OpenAI` | `https://api.openai.com/v1` | Official OpenAI Chat Completions API via the `openai` gem. |
| DeepSeek | `JsonLLM::Providers::Deepseek` | `https://api.deepseek.com` | Subclass of the OpenAI provider; same constructor shape (`api_key:`, `model:`, optional `base_url:`). Uses DeepSeek's OpenAI-compatible API. |
Both providers implement `#chat(prompt, expected_payload)` as defined on `JsonLLM::Providers::Base`. To integrate another vendor that speaks the same OpenAI-compatible chat format, you can subclass `JsonLLM::Providers::OpenAI` and pass that vendor's `base_url:`.
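For example, a minimal sketch of such a subclass (the vendor name, class, and URL here are hypothetical):

```ruby
# Hypothetical vendor with an OpenAI-compatible chat API.
class AcmeProvider < JsonLLM::Providers::OpenAI
  def initialize(api_key:, model:)
    super(api_key: api_key, model: model, base_url: "https://api.acme.example/v1")
  end
end

provider = AcmeProvider.new(api_key: ENV.fetch("ACME_API_KEY"), model: "acme-chat")
provider.chat("Reply in JSON only.", { "answer" => "string" })
```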
## Custom provider (`CustomProvider`)

For an LLM with a custom HTTP API (one that is not OpenAI-compatible), subclass `JsonLLM::Providers::Base` and implement the private method `prompt_llm`. It must return a String: the model's raw reply, either a JSON object as text or Markdown that includes a JSON block (the same rules as the bundled providers). `#chat` then adds your expected schema, parses the JSON, and validates it.
require "json"
require "net/http"
require "uri"
require "json_llm"
class CustomProvider < JsonLLM::Providers::Base
def initialize(api_key:, base_url:)
@api_key = api_key
@base_url = base_url
end
private
def prompt_llm(prompt)
uri = URI.join(@base_url, "/v1/generate")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = uri.scheme == "https"
request = Net::HTTP::Post.new(uri)
request["Content-Type"] = "application/json"
request["Authorization"] = "Bearer #{@api_key}"
request.body = { "prompt" => prompt }.to_json
response = http.request(request)
body = JSON.parse(response.body)
body.fetch("text") # adapt key/path to your API; must return a String
end
end
```ruby
provider = CustomProvider.new(
  api_key: ENV.fetch("MY_LLM_API_KEY"),
  base_url: "https://llm.example.com"
)

provider.chat("Reply in JSON only.", {
  "greeting" => "string",
  "language" => "string"
})
```
Adapt the URL, JSON body, and fetch key to your vendor's API. You can swap `Net::HTTP` for Faraday or a vendor SDK; the important part is that `prompt_llm` returns the model's reply as a single String.
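For instance, here is a sketch of the same `prompt_llm` using Faraday instead. It assumes the faraday gem (2.x, with its built-in JSON middleware); the endpoint and the `"text"` response key remain illustrative:

```ruby
require "faraday"

# Same contract as above: take the full prompt, return the raw reply String.
def prompt_llm(prompt)
  connection = Faraday.new(url: @base_url) do |f|
    f.request :json   # encode the request body as JSON
    f.response :json  # decode the response body into a Hash
  end

  response = connection.post("/v1/generate") do |req|
    req.headers["Authorization"] = "Bearer #{@api_key}"
    req.body = { "prompt" => prompt }
  end

  response.body.fetch("text") # illustrative key; adapt to your API
end
```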