# Async::Background
A lightweight cron, interval, and job-queue scheduler for Ruby's Async ecosystem. Built for Falcon, works with any Async app.
- Cron & interval scheduling on a single event loop with a min-heap
- Dynamic job queue backed by SQLite, with delayed jobs (`perform_in`/`perform_at`)
- Cross-process wake-ups over Unix domain sockets — web workers can enqueue and instantly wake background workers
- Multi-process safe — deterministic worker sharding, no duplicate execution
- Per-job timeouts, skip-on-overlap, startup jitter, optional metrics
## Requirements

- Ruby >= 3.3
- `async` ~> 2.0
- `fugit` ~> 1.0
- `sqlite3` ~> 2.0 (optional, for the job queue)
- `async-utilization` ~> 0.3 (optional, for metrics)
## Install

```ruby
# Gemfile
gem "async-background"
gem "sqlite3", "~> 2.0"           # optional, for the job queue
gem "async-utilization", "~> 0.3" # optional, for metrics
```
**➡️ Get Started** — full setup walkthrough: schedule config, Falcon integration, Docker, queue, delayed jobs.
Quick Look
class SendEmailJob
include Async::Background::Job
def perform(user_id, template)
Mailer.send(User.find(user_id), template)
end
end
SendEmailJob.perform_async(user_id, "welcome")
SendEmailJob.perform_in(300, user_id, "reminder")
SendEmailJob.perform_at(Time.new(2026, 4, 1, 9), user_id, "scheduled")
Schedule recurring jobs in `config/schedule.yml`:

```yaml
sync_products:
  class: SyncProductsJob
  every: 60

daily_report:
  class: DailyReportJob
  cron: "0 3 * * *"
  timeout: 120
```
| Key | Description |
|---|---|
| `class` | Job class — must include `Async::Background::Job` |
| `every` / `cron` | One of: interval in seconds, or cron expression |
| `timeout` | Max execution time in seconds (default: 30) |
| `worker` | Pin to a specific worker. Default: `crc32(name) % total_workers` |
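The default `worker` assignment can be sketched with Ruby's standard `Zlib.crc32`. This is an illustrative sketch, assuming the scheduler hashes the schedule entry's name; `worker_for` is a hypothetical helper, not the gem's API:

```ruby
require "zlib"

# Deterministic sharding: a given entry name always hashes to the same
# worker index, so no two workers ever run the same schedule entry.
def worker_for(name, total_workers)
  Zlib.crc32(name) % total_workers
end

worker_for("sync_products", 4) # stable index in 0..3 for this name
```

Because the mapping depends only on the name and the worker count, every process computes the same answer with no coordination.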
## Gotchas

### Docker: SQLite requires a named volume
The SQLite database must not live on Docker's overlay2 filesystem: the overlay2 driver breaks coherence between `write()` and `mmap()`, which corrupts SQLite's WAL under concurrent access.
```yaml
# docker-compose.yml
services:
  app:
    volumes:
      - queue-data:/app/tmp/queue # ← named volume, NOT overlay2

volumes:
  queue-data:
```

Without a named volume you will get database corruption crashes in multi-process mode; see Get Started → Step 3 for details. If you can't use a named volume, pass `queue_mmap: false` to disable mmap entirely.
### Other gotchas

**Don't share SQLite connections across `fork()`.** The gem opens connections lazily after fork, but if you create a `Queue::Store` manually for schema setup, close it before forking:
```ruby
store = Async::Background::Queue::Store.new(path: db_path)
store.ensure_database!
store.close # ← before fork
```
**Two clocks, on purpose.** Interval jobs use `CLOCK_MONOTONIC` (immune to NTP drift). Cron jobs use wall-clock time, because "every day at 3am" needs to mean 3am.
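In plain Ruby the two clocks look like this — a minimal demonstration, not the gem's internals:

```ruby
# Interval timing uses the monotonic clock, which only moves forward and is
# unaffected by NTP adjustments or daylight-saving jumps.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
sleep 0.01
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start

# Cron firing times come from wall-clock time, so "0 3 * * *" tracks a
# human 3am even if the system clock is corrected underneath.
now = Time.now

elapsed.positive? # => true, even if the wall clock jumped meanwhile
```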
## How it works

```
schedule.yml ─► build_heap ─► MinHeap<Entry> ─► scheduler loop ─► Semaphore ─► run_job
```
A single Async task sleeps until the next entry is due, then dispatches it under a semaphore that caps concurrency. Overlapping ticks are skipped and rescheduled.
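That loop can be sketched in a few lines of plain Ruby. This is a simplified stand-in, not the gem's code: a sorted array plays the min-heap, `Entry` and `run_scheduler` are hypothetical names, and the semaphore and overlap-skipping are omitted:

```ruby
Entry = Struct.new(:name, :interval, :due_at)

# Pop the soonest-due entry, sleep until it is due, run it, reschedule it.
# A sorted array stands in for the min-heap keyed on due_at.
def run_scheduler(entries, ticks:)
  heap = entries.sort_by(&:due_at)
  ran = []
  ticks.times do
    entry = heap.shift                    # pop the min (soonest due_at)
    delay = entry.due_at - Process.clock_gettime(Process::CLOCK_MONOTONIC)
    sleep(delay) if delay.positive?       # sleep until the entry is due
    ran << entry.name                     # stand-in for dispatching run_job
    entry.due_at += entry.interval        # reschedule for the next tick
    heap = (heap << entry).sort_by(&:due_at)
  end
  ran
end

t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
run_scheduler(
  [Entry.new("fast", 0.02, t0), Entry.new("slow", 0.05, t0 + 0.03)],
  ticks: 4
)
# => ["fast", "fast", "slow", "fast"]
```

The real scheduler additionally dispatches each entry inside a semaphore-capped Async task, so a slow job cannot block the loop itself.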
The dynamic queue runs alongside it:
```
Producer (web/console)          Consumer (background worker)
        │                                │
        ▼                                ▼
  Queue::Client                 Queue::Store#fetch
  push / push_in / push_at        (run_at <= now)
        │                                ▲
        ▼                                │
  Queue::Store ──── SQLite (jobs) ──── SocketWaker
        │                                ▲
        └───────► SocketNotifier ────────┘
              (UNIX socket wake-up, ~80µs)
```
Jobs are persisted in SQLite, so a missed wake-up is never a lost job — workers also poll every 5 seconds as a safety net.
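The wake-up-or-poll pattern can be sketched with a plain `Socket.pair` and `IO.select` — a simplified model of what `SocketNotifier`/`SocketWaker` do, with `wait_for_work` as a hypothetical helper:

```ruby
require "socket"

# A connected Unix datagram pair: producers hold one end, the worker the other.
waker, notifier = Socket.pair(:UNIX, :DGRAM, 0)

# Consumer side: block until either a producer pokes the socket or the poll
# interval elapses — either way the worker then fetches due jobs from SQLite.
def wait_for_work(waker, poll_interval)
  ready, = IO.select([waker], nil, nil, poll_interval)
  if ready
    waker.recv(1) # drain the wake-up byte
    :woken        # a producer just enqueued something
  else
    :poll         # safety-net poll; catches any missed wake-up
  end
end

# Producer side: after persisting the job, send one byte to wake the worker.
notifier.send("!", 0)
wait_for_work(waker, 5)    # => :woken (instantly, no 5 s wait)
wait_for_work(waker, 0.05) # => :poll  (nothing pending, timer fired)
```

Because the job row is already committed to SQLite before the notification is sent, a dropped byte only delays the job until the next poll; it never loses it.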
## Metrics

With `async-utilization` installed, per-worker stats land in shared memory at `/tmp/async-background.shm` with lock-free updates.
```ruby
runner.metrics.values
# => { total_runs: 142, total_successes: 140, total_failures: 2,
#      total_timeouts: 0, total_skips: 5, active_jobs: 1, ... }

Async::Background::Metrics.read_all(total_workers: 2)
```
Without the gem, metrics are silently disabled — zero overhead.
## License

MIT