Module: Hyperion::AsyncPg::ForkSafe
- Defined in:
- lib/hyperion/async_pg/fork_safe.rb
Overview
Auto-invalidates registered pools across fork boundaries so users don't have to wire `on_worker_boot` (Hyperion / Puma cluster mode) themselves. Hooks `Process._fork` (Ruby 3.1+) to forget any connections held by the parent; the child recreates them on first use.
Background: pre-fork servers (Hyperion `-w N`, Puma cluster mode, Falcon multi-worker) load the rackup in the master process and then fork children. Any `PG::Connection` opened in the master is shared with every child via inherited file descriptors. Concurrent reads/writes on the same fd interleave bytes and corrupt the wire protocol; the symptom is `PG::UnableToSend: another command is already in progress` on every request, roughly 99.99% 5xx under load. The classic workaround is "open the pool in `on_worker_boot`", which requires a separate config file. ForkSafe lets you keep the pool open at the rackup level and have it transparently re-initialized in each child.
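For comparison, the classic manual workaround mentioned above looks roughly like this in a Puma config file (a sketch, not part of ForkSafe; the `FiberPool` arguments mirror the usage example below):

```ruby
# config/puma.rb — the classic manual fix ForkSafe replaces.
# Connections are opened only AFTER fork, so no fd is ever shared
# with the master process.
workers 4

on_worker_boot do
  $pg_pool = Hyperion::AsyncPg::FiberPool.new(size: 64) do
    PG.connect(ENV['DATABASE_URL'])
  end
  $pg_pool.fill
end
```

The cost of this approach is exactly what the Overview describes: pool setup moves out of the rackup and into server-specific config.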
Usage:
require 'hyperion/async_pg/fork_safe'
Hyperion::AsyncPg::ForkSafe.install!
$pg_pool = Hyperion::AsyncPg::ForkSafe.register(
  Hyperion::AsyncPg::FiberPool.new(size: 64) do
    PG.connect(ENV['DATABASE_URL'])
  end
)
$pg_pool.fill # master: opens 64 conns. Child: refills lazily on first .with.
Eliminating the cold-start p99 spike on `-w N`: pass `prefill_in_child: true` so the fork hook synchronously calls `pool.fill` in each child after `pool.reset_after_fork`. The operator pays the per-worker connect cost ONCE during fork, and the first request hits a warm pool. Trade-off: fork itself takes longer (each child does N parallel `PG.connect`s before returning). Recommended for production multi-worker deployments.
$pg_pool = Hyperion::AsyncPg::ForkSafe.register(
  Hyperion::AsyncPg::FiberPool.new(size: 64) do
    PG.connect(ENV['DATABASE_URL'])
  end,
  prefill_in_child: true
)
$pg_pool.fill
Or via the kitchen-sink one-liner on the main shim:
Hyperion::AsyncPg.install!(activerecord: true, fork_safe: true)
Registered pools must respond to `#reset_after_fork`. The shipped `Hyperion::AsyncPg::FiberPool` implements it as a metadata-only reset: it drops the parent's connection refs without calling `#close` on them, because those file descriptors are shared with the parent kernel-side, and closing them in the child would yank them out from under the parent too. When `prefill_in_child: true` is requested, the pool must additionally respond to `#fill`.
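A custom pool can satisfy this duck type without depending on the shipped `FiberPool`. A minimal sketch (the class name and internals are illustrative, not part of the gem):

```ruby
# Minimal pool satisfying ForkSafe's duck type: #reset_after_fork,
# plus #fill (needed only with prefill_in_child: true). Illustrative only.
class TinyPool
  def initialize(size:, &connect)
    @size = size
    @connect = connect # block that opens one connection
    @conns = []
  end

  # Open connections up to the configured size.
  def fill
    @conns << @connect.call while @conns.size < @size
    self
  end

  # Metadata-only reset: drop the inherited refs WITHOUT closing them,
  # since the underlying fds are shared with the other process.
  def reset_after_fork
    @conns = []
  end

  def size
    @conns.size
  end
end
```

`ForkSafe.register(TinyPool.new(size: 8) { PG.connect(...) })` would then pass the boot-time validation described below.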
Defined Under Namespace
Modules: Hook
Classes: IncompatiblePoolError, Registration
Class Method Summary collapse
-
.__reset_for_specs__ ⇒ Object
Test-only: clear pool registry + flip the install flag so the next `install!` is a fresh idempotency test.
-
.install! ⇒ Object
Install the `Process._fork` hook.
- .installed? ⇒ Boolean
-
.register(pool, prefill_in_child: false) ⇒ Object
Register a pool to be reset on fork.
-
.reset_all_pools_in_child! ⇒ Object
Reset all registered pools; called from the fork hook in the child process AFTER `fork(2)` returns 0.
Class Method Details
.__reset_for_specs__ ⇒ Object
Test-only: clear the pool registry and flip the install flag so the next `install!` is a fresh idempotency test. We do NOT and CANNOT un-prepend `Hook` from `Process.singleton_class`; Ruby has no public API for that. Once prepended (by the first spec that calls `install!`), `Hook` stays in the ancestor chain for the rest of the suite. That's safe because `Hook#_fork` only calls `reset_all_pools_in_child!`, and we clear the pool registry between examples, so the leftover hook plus an empty registry is a no-op.
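The "cannot un-prepend" point holds for any singleton class, not just `Process`. A standalone demonstration with dummy names (nothing here touches `Process` itself):

```ruby
# Module#prepend on a singleton class is permanent: there is no
# Module#unprepend, so the module stays in the ancestor chain.
module DemoHook
  def _fork
    super # delegate to the original implementation
  end
end

class DemoProcess
  def self._fork
    0
  end
end

DemoProcess.singleton_class.prepend(DemoHook)
p DemoProcess.singleton_class.ancestors.include?(DemoHook) # => true
# The only safe strategy is to make the hook body a no-op between
# examples (e.g. an empty registry), which is what __reset_for_specs__ does.
```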
# File 'lib/hyperion/async_pg/fork_safe.rb', line 181

def __reset_for_specs__
  @pools_mutex.synchronize { @registrations.clear }
  @install_mutex.synchronize { @installed = false }
end
.install! ⇒ Object
Install the `Process._fork` hook. Idempotent and thread-safe. Returns `true` if this call wired the hook, `false` if the hook was already installed (or this Ruby lacks `Process._fork`).
No-op on Rubies older than 3.1 (warns once on stderr): without `Process._fork` there is no reliable, library-friendly way to detect the fork boundary; users on those Rubies must use `on_worker_boot` from their server config instead.
# File 'lib/hyperion/async_pg/fork_safe.rb', line 99

def install!
  installed_now = false
  @install_mutex.synchronize do
    return false if @installed

    unless ::Process.respond_to?(:_fork)
      warn '[hyperion-async-pg] ForkSafe: Process._fork unavailable on this Ruby — fork detection disabled'
      return false
    end

    install_hook!
    @installed = true
    installed_now = true
  end
  installed_now
end
.installed? ⇒ Boolean
# File 'lib/hyperion/async_pg/fork_safe.rb', line 115

def installed?
  @installed
end
.register(pool, prefill_in_child: false) ⇒ Object
Register a pool to be reset on fork. Returns the pool unchanged for chaining: `$pg_pool = ForkSafe.register(FiberPool.new(…))`.
The pool must respond to `#reset_after_fork`. Anything else is rejected up front to surface integration mistakes at boot rather than after a child fork.
When `prefill_in_child: true` is set, the fork hook will also call `pool.fill` in the child (after `pool.reset_after_fork`) so the operator pays the per-worker `PG.connect` cost during fork rather than absorbing it into the first request's p99. The pool must additionally respond to `#fill` in that case; we validate eagerly to surface the misconfig at boot.
# File 'lib/hyperion/async_pg/fork_safe.rb', line 132

def register(pool, prefill_in_child: false)
  unless pool.respond_to?(:reset_after_fork)
    raise IncompatiblePoolError,
          "pool must respond to #reset_after_fork (got #{pool.class})"
  end
  if prefill_in_child && !pool.respond_to?(:fill)
    raise IncompatiblePoolError,
          "pool must respond to #fill when prefill_in_child: true (got #{pool.class})"
  end

  @pools_mutex.synchronize do
    @registrations << Registration.new(pool: pool, prefill_in_child: prefill_in_child)
  end
  pool
end
.reset_all_pools_in_child! ⇒ Object
Reset all registered pools; called from the fork hook in the child process AFTER `fork(2)` returns 0. Forgets the parent's connection refs without closing them (they are shared fds; the parent owns the OS-level closing). For registrations that opted into `prefill_in_child: true`, also calls `#fill` synchronously so the child returns from `Process._fork` with a warm pool, eliminating the cold-start p99 spike on the first ~pool_size requests per worker.
One bad pool's `#reset_after_fork` (or `#fill`) raising must NOT prevent the rest from being reset; otherwise a single buggy pool can poison every other pool in the child and you're back to the fd-sharing corruption this whole module exists to prevent. Same rationale for `#fill` failures: PG might be unreachable on child boot; log and continue so the child can still serve other (non-PG) routes or recover via lazy refill.
# File 'lib/hyperion/async_pg/fork_safe.rb', line 164

def reset_all_pools_in_child!
  @pools_mutex.synchronize do
    @registrations.each do |registration|
      reset_one_in_child(registration)
    end
  end
end
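The per-registration helper `reset_one_in_child` is not shown above. A sketch of the rescue-and-continue shape the rationale describes (method body and log message are illustrative, not the gem's actual implementation):

```ruby
# Sketch: reset (and optionally prefill) ONE registration, swallowing and
# logging any failure so the remaining pools are still reset afterwards.
def reset_one_in_child(registration)
  registration.pool.reset_after_fork
  registration.pool.fill if registration.prefill_in_child
rescue StandardError => e
  warn "[hyperion-async-pg] ForkSafe: pool reset/fill failed in child: " \
       "#{e.class}: #{e.message}"
end
```

Rescuing `StandardError` per registration (rather than around the whole `each` loop) is what guarantees one buggy pool cannot leave the others holding shared parent fds.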