Queue the task. Agents do the work.
PREQSTATION is a self-hosted execution queue for developers who use AI coding engines. It takes a scoped software task from intake to reviewed result; it does not replace GitHub Issues, Linear, or Jira for backlog planning.
Open Source · Self-hosted · Multi-engine execution queue
Execution proof
A representative task record shows the queue status, assigned engine, review gate, and shipped result in one place.
Task
LP-241
Clarify landing positioning and proof path
Queue the landing copy update, execute it with Codex, then move it to review with concrete evidence attached.
- Engine: Codex
- Agent session: codex/landing-positioning-proof
- Branch: codex/landing-proof-pass
hold branch
If review or implementation is blocked, the task branches to hold instead of sitting in a vague in-progress state.
status = blocked
Use hold only when review or implementation is blocked and the task needs a real branch out of the happy path.
Inbox
09:10 scoped
Acceptance criteria and ownership were added before dispatch.
Todo
09:18 claimed
Codex claimed the task and opened an isolated worktree.
Ready
10:02 review ready
Copy, proof section, and Korean localization were submitted together.
Done
10:14 verified
Build passed, the hero was spot-checked, and the result was committed.
Review evidence
Verification
pnpm --filter @preqstation/landing build passed.
Review note
Terminology unified around agent, engine, queue, and review-ready.
Result
Execution proof shipped with guide and GitHub follow-through links.
- #3953 Rate Limit API
- #3954 Auth Refactor
- #3955 Deploy Script
The missing part is execution discipline
AI can write code. Shipping still breaks down between the idea and the verified result.
Issue trackers stop at assignment
They hold backlog and discussion, but they do not execute the task.
Raw AI sessions lack workflow guardrails
No queue, no branch policy, no review state, no audit trail.
Simple work still needs a safe workspace
Agents need isolated worktrees, repeatable steps, and reporting back.
The handoff gap burns the schedule
The delay is not writing code. It is routing, checking, and deciding what can ship.
Work like a kitchen. You are the chef.
Who is it for?
Ideas on the go
Fire AI tasks from anywhere, anytime on mobile
Multi-project juggler
Vibe code across projects without context loss
Context switcher
Freelancing? No more 'What did I do here?'
Multi-engine strategist
Pick Claude, Codex, or Gemini per task
Quality-focused dev
Systematically review and verify AI output
Your backlog is already elsewhere
Keep discovery and roadmap work in GitHub Issues, Linear, or Jira, then send scoped execution work into PREQ.
You want execution, not general collaboration
Use Notion, Slack, or Asana for broad team coordination. PREQ is the handoff point once a task is ready to run.
You use AI agents as workers
PREQ is strongest when Claude, Codex, or Gemini are doing implementation and a human is reviewing the result.
You have repeatable delivery flow
The queue is most useful when tasks regularly move through intake, execution, review, and follow-up.
You separate planning from execution
PREQ handles execution state and evidence. It does not replace sprint planning, roadmap ownership, or portfolio tracking.
Built for developers who ship
PREQ is for developers who already know what to build and need a controlled way to hand that task to an AI agent, review the result, and decide what ships.
Execution, not backlog triage
Keep discovery and planning in GitHub Issues, Linear, or Jira. PREQ starts when a scoped task is ready for an agent to execute.
One queue, multiple engines
Assign Claude, Codex, or Gemini per task while keeping the same queue, review gate, and audit trail.
Agents work in controlled sandboxes
Each agent session runs in an isolated worktree and reports back with notes, checks, and branch output instead of editing your main branch directly.
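The isolation above can be sketched as a small helper that derives a per-session branch and worktree directory. This is a hedged illustration, not PREQ's actual API: `AgentSession`, `worktreeCommands`, and the `.worktrees/` layout are assumptions made up for this example.

```typescript
// Hypothetical shape of an agent session; field names are illustrative.
interface AgentSession {
  engine: "claude" | "codex" | "gemini";
  slug: string; // e.g. "landing-positioning-proof"
}

// Build the git commands that give an agent its own branch and directory,
// so it never edits the main branch's checkout directly.
function worktreeCommands(session: AgentSession, repoRoot = "."): string[] {
  const branch = `${session.engine}/${session.slug}`;
  const dir = `${repoRoot}/.worktrees/${branch.replace("/", "-")}`;
  return [
    `git worktree add -b ${branch} ${dir}`, // isolated checkout on a new branch
    `git -C ${dir} status --short`,         // agent reports from inside the sandbox
  ];
}
```

For the sample task above, a `codex` session with slug `landing-positioning-proof` would get the branch `codex/landing-positioning-proof` and its own directory under `.worktrees/`.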
Review with visible proof
Status changes, work logs, verification, and branch details stay attached to the task so done means reviewed, not guessed.
Why engineers actually use it
PREQ turns a scoped task into a repeatable execution loop with clear ownership and proof.
Intake only the work that is ready
Create a task once the scope, acceptance criteria, and owner are clear enough for an agent to execute without guesswork.
Dispatch the right engine
Pick Claude, Codex, or Gemini for the task, let the agent claim it, and keep queued or working as run-state overlays instead of fake workflow columns.
Keep blocked work explicit
Use hold as a real branch for blocked execution so review-ready work stays separate from tasks that still need input.
Require proof before done
Ready means the agent returned notes, checks, and branch output. You review the evidence before the task moves to done.
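A proof-before-done gate like this can be sketched as a simple predicate. The `ReviewEvidence` shape and `canMarkDone` below are hypothetical names invented for illustration, not PREQ's schema:

```typescript
// Hypothetical evidence record an agent returns with review-ready work.
interface ReviewEvidence {
  notes: string;     // reviewer-readable summary from the agent
  checks: string[];  // commands that passed, e.g. a build or test run
  branch: string;    // where the result lives
}

// A task may move to done only when all three kinds of evidence are present.
function canMarkDone(evidence: Partial<ReviewEvidence>): boolean {
  return Boolean(evidence.notes && evidence.checks?.length && evidence.branch);
}
```

The point of the gate is that an empty or partial record (say, notes with no passing checks) keeps the task in ready, where a human still has to look at it.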
Execution Workflow
Happy path goes inbox → todo → ready → done. Hold branches off when work is blocked.
Inbox
Todo
Ready
Done
Hold (blocked branch)
Tasks move here when implementation or verification is blocked. They rejoin the happy path only after the blocker is cleared.
Run-state overlay
Use queued when dispatch is requested and working once an agent claims the task. They sit on top of status instead of creating fake workflow columns.
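The status model above can be sketched as a tiny state machine: a linear happy path, an explicit hold branch, and a run-state overlay that never becomes a column. The function names here are illustrative assumptions, not PREQ's code.

```typescript
type Status = "inbox" | "todo" | "ready" | "done" | "hold";
type RunState = "queued" | "working" | null; // overlay on top of status, not a column

const happyPath: Status[] = ["inbox", "todo", "ready", "done"];

// Advance along the happy path; hold must be resolved explicitly first.
function advance(status: Status): Status {
  if (status === "hold") {
    throw new Error("clear the blocker before rejoining the happy path");
  }
  const i = happyPath.indexOf(status);
  return happyPath[i + 1] ?? "done";
}

// Branch to hold from any in-flight status when execution is blocked.
function block(status: Status): Status {
  return status === "done" ? status : "hold";
}
```

Keeping `queued` and `working` as a separate `RunState` type is what prevents them from leaking into the workflow as fake columns: a task can be `todo` and `working` at the same time without inventing a new status.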
Not an issue tracker
Use GitHub Issues, Linear, or Jira to discover and triage work. PREQ starts once the task is scoped enough to execute safely.
Choose the right AI for the task
Pick the best tool, no vendor lock-in
Strong at complex refactoring, architecture changes, and general tasks
Strong at code review, security analysis, and architecture review
Strong at UI/UX design, documentation, and large-context processing
Free. Open Source. Forever.
Your code, your data, your server
- ✓ Full source code on GitHub
- ✓ Deploy on your own server
- ✓ Community-driven standards
- ✓ Your data stays yours
- ✓ No vendor lock-in
Frequently Asked Questions
Open your kitchen today
Start with the execution proof, then use the guide to wire PREQ into your own workflow.