# hyperion-async-pg

Async-aware shim for the `pg` gem. Patches `PG::Connection` so `exec`, `exec_params`, `exec_prepared` and friends cooperate with the Async fiber scheduler — while one fiber is parked on a Postgres socket waiting for query results, other fibers in the same OS thread serve other requests. Companion to the Hyperion HTTP server. Pure Ruby, drop-in, no behavior change outside an Async scheduler.
## Install

```ruby
# Gemfile
gem 'hyperion-async-pg'
```

```ruby
# config/initializers/async_pg.rb (Rails) or wherever your app boots
require 'hyperion/async_pg'
Hyperion::AsyncPg.install!
```
`install!` is idempotent and thread-safe. Call it once at boot, before any DB connections are opened. It returns `true` on the first call, `false` thereafter.
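That contract can be pictured as a mutex-guarded flag. A minimal sketch — the module layout, `LOCK`, and `@installed` are illustrative assumptions, not the gem's actual internals:

```ruby
require "monitor"

# Hypothetical sketch of an idempotent, thread-safe install!.
module AsyncPgInstaller
  LOCK = Monitor.new
  @installed = false

  def self.install!
    LOCK.synchronize do
      return false if @installed   # every later call is a no-op
      # ... prepend the patch module onto PG::Connection here ...
      @installed = true            # first caller wins
    end
    true
  end
end

AsyncPgInstaller.install!  # => true
AsyncPgInstaller.install!  # => false
```

The monitor makes concurrent boot paths (e.g. eager-loading initializers) safe: at most one caller performs the prepend.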
## Compatibility

Works transparently with anything sitting on top of `pg`:

- ActiveRecord (the `postgresql` adapter calls through `PG::Connection#exec_params`/`#exec_prepared`).
- Sequel (the `postgres` adapter does the same).
- ROM-sql + rom-pg.
- Raw `pg` — your own `PG::Connection.new(...).exec_params(...)` calls.

No driver-side opt-in required. Patches are prepended onto `PG::Connection`, so every caller in the process picks them up.
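The "every caller picks them up" property comes from `Module#prepend`: the patch module sits in front of the class in the ancestor chain, so it intercepts calls made through any reference to the class, even ones taken before the prepend. A generic illustration with toy classes (not the gem's code):

```ruby
class Connection
  def exec(sql) = "result of #{sql}"
end

# Analogous to the module the shim prepends onto PG::Connection.
module AsyncExec
  def exec(sql)
    # ... a real patch would arm a non-blocking send + fiber-aware wait here ...
    "patched: " + super
  end
end

Connection.prepend(AsyncExec)

# Every caller now goes through the patch, no opt-in needed:
Connection.new.exec("SELECT 1")  # => "patched: result of SELECT 1"
```

Because `prepend` (unlike redefining the method) keeps the original reachable via `super`, the shim can fall through to stock behavior whenever it wants.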
## Server support matrix

This shim only delivers fiber concurrency when the HTTP server runs each request inside an `Async::Scheduler`. Without a scheduler, `IO#wait_readable` blocks the OS thread normally — the patch is silent and harmless, but produces no concurrency win.
| Server | Path | Concurrency win? | Notes |
|---|---|---|---|
| Falcon | any | ✅ yes | Native fiber scheduler per request. Drop-in. Recommended. |
| Hyperion `--tls-cert ...` (HTTPS) | TLS / h1 + h2 | ✅ yes | TLS path runs `start_async_loop`; every dispatch is a fiber. Works today. |
| Hyperion HTTPS over h2 | h2 streams | ✅ yes | Each h2 stream is a fiber by design. |
| Hyperion plain HTTP/1.1 | thread pool | ❌ not yet | 1.2.0's perf bypass (`start_raw_loop`) hands the whole socket to a worker thread with no scheduler. Pending: Hyperion 1.3.0 ships an `async_io: true` config flag that re-enables the Async wrap (opt-in; default keeps 1.2.0 perf). Until then, plain HTTP/1.1 is throughput-equivalent to Puma at the same thread count on PG-bound workloads. |
| Puma | any | ❌ no | No fiber scheduler. Patch is silent; behaviour identical to plain `pg`. |
| Sidekiq / scripts / rake | any | ❌ no (and that's fine) | No scheduler → no patch effect. Drop-in safe. |
If your stack is Hyperion plain HTTP/1.1 today, hold off on this shim until Hyperion 1.3.0 lands the `async_io` flag. If you're on Falcon or Hyperion-over-TLS, install now.
## Connection pool — use a fiber-aware one

The popular `connection_pool` gem (used by ActiveRecord, Sidekiq, etc.) is not fiber-aware: its internal `Mutex` + `ConditionVariable` don't yield to the Async scheduler. A fiber waiting for a connection blocks the entire OS thread, defeating this shim's purpose. Symptoms: throughput the same as plain `pg` even though `wait_readable` is firing; under heavy load Falcon may report "Closing scheduler with blocked operations!".
For raw `pg` callers, prefer one of:

- `async-pool` — explicit pool with a fiber-aware semaphore (`Async::Pool::Controller`).
- A pre-allocated array of N connections checked out via `Async::Semaphore` or `Async::Variable`.
- A per-fiber connection (no pool) — works, but holds a connection for the fiber's lifetime; size your Postgres `max_connections` accordingly.
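The "pre-allocated array of N connections" option can be as small as a queue-backed checkout. A sketch with a stand-in factory — `TinyPool` and the block factory are illustrative, not a gem API; in a real app the factory would build `PG::Connection` objects, and on recent Rubies (3.2+) `Thread::Queue#pop` should cooperate with the fiber scheduler, so a parked checkout yields the fiber rather than blocking the thread:

```ruby
# Minimal checkout/checkin pool. A string stands in for a connection
# so the sketch runs without pg.
class TinyPool
  def initialize(size, &factory)
    @q = Thread::Queue.new
    size.times { |i| @q << factory.call(i) }
  end

  def with
    conn = @q.pop        # parks the caller when the pool is empty
    yield conn
  ensure
    @q << conn if conn   # always check the connection back in
  end
end

pool = TinyPool.new(2) { |i| "conn-#{i}" }
pool.with { |c| c }  # => "conn-0"
```

The `ensure` clause is the important part: a raising handler still returns its connection, so the pool never leaks capacity.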
ActiveRecord 7.2+ has experimental fiber-aware pool support via `ActiveRecord::Base.connection_pool.checkout`/`checkin` driven by `Fiber[:active_record_connection_pool]`; verify your AR version cooperates with Async before claiming the win.
## Caveats

- Only yields under a fiber scheduler. Outside `Async { ... }` (Sidekiq workers, plain scripts, rake tasks, the Rails console) the patched methods behave identically to plain `pg` — `IO#wait_readable` falls back to its blocking implementation when `Fiber.scheduler` is `nil`. There is no perf regression in non-async contexts.
- Long-running statements still block the calling fiber. The shim parks a fiber on the socket; it does not preempt the running query. A 10 s `SELECT` still ties up that fiber for 10 s. Cap runaway queries with Postgres `statement_timeout` (or session-level `SET statement_timeout`), not at the Ruby layer.
- Connection pool sizing. Under Hyperion + this shim, fibers vastly outnumber threads — each fiber can hold a checked-out DB connection while it waits on Postgres. A worker with 10 OS threads and 200 concurrent fibers can hold 200 in-flight connections. Size your `pool:` (ActiveRecord) or `:max_connections` (Sequel) and your Postgres `max_connections` accordingly. Rule of thumb: pool >= peak concurrent fibers per worker.
- Single-statement only. The shim drains all results and returns the last one, matching `pg`'s default `exec_params` semantics. Multi-statement strings sent through `exec` produce the last result, as before.
## Tuning

| Env var | Default | Meaning |
|---|---|---|
| `HYPERION_ASYNC_PG_READ_TIMEOUT` | unset (block forever) | Seconds passed to `IO#wait_readable` per poll. Unset matches `pg`'s default — rely on Postgres `statement_timeout` for the upper bound. Set it when you want a hard ceiling on a single socket wait independent of server-side timeouts; on timeout the shim raises `PG::ConnectionBad`. |
Read at every dispatch; no restart required.
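A per-dispatch read amounts to one env lookup and a float parse — a hypothetical sketch (the shim's actual parsing may differ):

```ruby
# nil when unset: IO#wait_readable then blocks with no per-poll ceiling.
read_timeout = lambda do
  v = ENV["HYPERION_ASYNC_PG_READ_TIMEOUT"]
  v && Float(v)
end

ENV["HYPERION_ASYNC_PG_READ_TIMEOUT"] = "2.5"
read_timeout.call  # => 2.5
# ... later, inside the poll loop (hypothetical):
#   conn.socket_io.wait_readable(read_timeout.call)
```

Because the lookup happens inside the dispatch path rather than at boot, changing the variable takes effect on the next request.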
## Expected gain

On a PG-bound Rack workload (handler issues one query taking ~50 ms, served by Falcon or Hyperion-over-TLS at `-t 5`, 200 concurrent wrk connections, fiber-aware pool with 64 connections), the theoretical ceiling is `pool_size / query_seconds` = 64 / 0.05 = 1,280 r/s. Plain `pg` + Puma at the same thread count caps at `threads / query_seconds` = 5 / 0.05 = 100 r/s. Realistic gain: 5–10× throughput, with p99 dropping from queueing-dominated seconds to near-RTT.
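The two ceilings fall out of the same arithmetic, differing only in how many queries can be in flight at once — pooled connections on the fiber path, OS threads on the threaded path:

```ruby
query_seconds = 0.05   # ~50 ms per PG round-trip

# Fiber path: every pooled connection can carry an in-flight query.
pool_size     = 64
fiber_ceiling = pool_size / query_seconds   # 1280.0 r/s

# Thread path: one in-flight query per OS thread.
threads        = 5
thread_ceiling = threads / query_seconds    # 100.0 r/s

fiber_ceiling / thread_ceiling              # 12.8x theoretical headroom
```

The "5–10×" realistic figure is that 12.8× ceiling eroded by pool contention, scheduler overhead, and whatever CPU work the handler does besides waiting.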
The win evaporates if any of these is wrong:

- The server doesn't run requests under an `Async::Scheduler` (Hyperion plain HTTP/1.1, Puma — see the support matrix above).
- The connection pool isn't fiber-aware (the `connection_pool` gem blocks the OS thread).
- The workload isn't actually wait-bound (CPU-heavy handlers don't benefit; the gain is exactly the PG round-trip you can stack).
See `bench/pg_concurrent.ru` for a reproducible bench. Early development bench results (macOS, Postgres over WAN, 50 ms `pg_sleep`):
| Setup | r/s | p99 | Notes |
|---|---|---|---|
| Hyperion 1.2.0 plain HTTP/1.1 + this shim | 88.5 | 67 ms | parity with Puma — no scheduler → no win (see matrix) |
| Puma 7.2 + plain `pg` | 87.3 | 2.45 s | the same 5-thread bottleneck, but with queueing |
| Falcon + this shim + `connection_pool` gem | hung | — | non-fiber-aware pool deadlocks the scheduler |
Linux + Falcon/Hyperion-1.3.0 + `async-pool` numbers will land in the 0.2.0 release once that integration is verified end-to-end.
## How it works

`PG::Connection#exec_params(...)` (and the other patched methods) becomes:

1. Call the non-blocking `send_query_params(...)` C function — fires the query off, returns immediately.
2. Loop: `consume_input` → check `is_busy` → if busy, `socket_io.wait_readable`. Under an `Async::Scheduler`, `wait_readable` yields the fiber. Without one, it blocks the OS thread.
3. Drain results with `get_result`, returning the final one (after `result.check` to surface errors).
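The three steps can be traced with a stand-in connection. `FakeConn` mimics the handful of `pg` calls involved so the loop runs without a database; the method names follow pg's API, but the orchestration is a sketch of the shim's shape, not its literal source:

```ruby
# Always-"readable" IO stand-in for the fiber-parking wait.
ReadyIO = Object.new
def ReadyIO.wait_readable(timeout = nil) = true

# Stand-in for PG::Connection: "busy" for two polls, then two results.
# Real get_result returns nil once the result stream is drained.
class FakeConn
  def initialize
    @polls   = 0
    @results = ["intermediate", "final"]
  end

  def send_query_params(sql, params); end   # non-blocking send
  def consume_input = (@polls += 1)
  def is_busy       = @polls < 2
  def socket_io     = ReadyIO
  def get_result    = @results.shift
end

def exec_params_async(conn, sql, params = [])
  conn.send_query_params(sql, params)   # 1. fire the query off
  loop do
    conn.consume_input                  # 2. pull bytes off the socket
    break unless conn.is_busy
    conn.socket_io.wait_readable        #    parks the fiber under Async
  end
  last = nil
  while (r = conn.get_result)           # 3. drain; keep the last result
    last = r                            #    (real shim also runs result.check)
  end
  last
end

exec_params_async(FakeConn.new, "SELECT 1")  # => "final"
```

Swap `FakeConn` for a real `PG::Connection` and the same loop is exactly the send → poll → drain cycle described above.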
No threads, no extra IO objects, no copy of the result through Ruby. The C extension does all the work; we only swap the wait primitive.
## License

MIT.