Async Vibe Coding Agent

Powered by SubAgents multi-model collaboration — comparable coding quality to Cursor / Claude Code at 1/10 the token cost. Native high-concurrency async development with a built-in Web UI — friendlier than CLI, lighter than IDE.

Core Capability

AI Coding Powered by SubAgents

Multiple AI Agents collaborate through automatic dispatch or custom workflows. Put the right model on the right task — balancing quality and cost.

Auto Mode

Recommended

AI autonomously decides when to dispatch SubAgents, how many to use, and what each handles. Install one rule and start coding — no manual orchestration needed.

  • Zero config, works out of the box
  • Ideal for most daily development tasks
  • Smart multi-model routing, auto cost optimization

Cowork Mode

Advanced

Define multi-agent collaboration workflows via YAML — break complex tasks into multi-stage DAG execution with precise control over models and strategies at each phase.

  • Custom multi-agent collaboration pipelines
  • Supports async background execution
  • Ideal for large-scale refactoring and fixed processes
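To make the multi-stage DAG idea concrete, here is a hypothetical sketch of what a staged workflow could look like. Every key and model name below is illustrative only, not the real auto-coder.chat schema — consult the Cowork Mode documentation for the actual YAML format.

```yaml
# Illustrative sketch only -- not the actual auto-coder.chat schema.
name: large-refactor
stages:
  - id: plan            # stage 1: analysis on a stronger model
    model: premium-model
  - id: implement       # stage 2: routine edits on a cheaper model
    model: cost-effective-model
    depends_on: [plan]
  - id: review          # stage 3: final check back on the stronger model
    model: premium-model
    depends_on: [implement]
```

The `depends_on` edges form the DAG: `implement` waits for `plan`, and `review` waits for `implement`, while independent stages could run concurrently.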

Interface Preview

Two terminal modes: the classic auto-coder.chat and the lightweight auto-coder.chat.lite

Classic Terminal: auto-coder.chat

Full command system and advanced capabilities for deep engineering workflows.

Lightweight Terminal: auto-coder.chat.lite

Simpler interaction and lower onboarding cost for quick start and daily coding.

Claw Friendly

auto-coder.run Headless Mode

A headless CLI for scripted and non-interactive execution (alias: auto-coder.cli), ideal for Claw integration, CI pipelines, and batch tasks.

Recommended Invocation

Put prompts in files and pass them via --from-prompt-file, then use --verbose and --output-format stream-json for traceable, machine-readable event streams. Turn on --async when you need parallel task splitting.

$auto-coder.run --from-prompt-file task.md --verbose --output-format stream-json
$echo "task" | auto-coder.run --verbose --output-format stream-json
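A stream-json feed is typically one JSON object per line, which makes it easy to consume from a script. The sketch below is a minimal consumer under that assumption; the event field names (`type`, `text`) are placeholders for illustration — check the actual event schema your auto-coder.run version emits.

```python
import json

def iter_events(lines):
    """Parse a stream-json feed: one JSON object per line.

    Field names used by callers ("type", "text") are assumptions for
    illustration, not the documented auto-coder.run event schema.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between events
        yield json.loads(line)

# In practice `lines` would be the stdout of
#   auto-coder.run --from-prompt-file task.md --output-format stream-json
# Here we use a canned sample.
sample = [
    '{"type": "status", "text": "started"}',
    '{"type": "result", "text": "done"}',
]
types = [event["type"] for event in iter_events(sample)]
print(types)  # ['status', 'result']
```

Because each event is a self-contained line, the stream can be tailed live (e.g. piped through this script) without waiting for the run to finish.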

Why auto-coder.chat?

Built for developers who want AI-powered coding without limits

Async Vibe Coding

Built on git worktree as core infrastructure — multiple tasks execute truly in parallel with automatic conflict resolution, no serial waits or manual merges
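The plain-git pattern underneath this can be sketched as follows — a simplified illustration of how one worktree-plus-branch per task lets edits proceed in parallel and then fold back together (the real tool layers scheduling and conflict resolution on top):

```shell
# Simplified sketch of the git-worktree pattern behind async tasks.
set -e
base=$(mktemp -d)
git init -q "$base/repo"
cd "$base/repo"
git config user.email dev@example.com
git config user.name dev
echo base > README.md
git add README.md
git commit -qm "init"

# One worktree + branch per task: the two tasks edit in parallel,
# never blocking each other.
git worktree add -q -b task-a "$base/task-a"
git worktree add -q -b task-b "$base/task-b"
echo "feature A" > "$base/task-a/a.txt"
git -C "$base/task-a" add a.txt
git -C "$base/task-a" commit -qm "task a"
echo "feature B" > "$base/task-b/b.txt"
git -C "$base/task-b" add b.txt
git -C "$base/task-b" commit -qm "task b"

# Fold both task branches back into the main line of development.
git merge -q -m "merge async tasks" task-a task-b
```

After the merge, the main checkout contains both `a.txt` and `b.txt` even though the two tasks never waited on each other.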

Controllable AI Duration

Precisely control AI runtime via the /time parameter, giving the agent sufficient time to think and iterate for higher-quality code output

Unlimited Context

Run sessions exceeding 800k tokens on models with 128k/200k context windows, using intelligent chunking

SubAgents Multi-Model Cowork

Define multi-model collaboration workflows — assign routine tasks to cost-effective models and reserve premium models for critical decisions, significantly reducing token costs

Launch Web UI Anytime

Start the Web UI anytime for an experience friendlier than CLI and lighter than IDE

Domestic Model Support

Supports coding-plan subscriptions for domestic (Chinese) models, including GLM4.6 and M2

Get Started with auto-coder.chat

Three simple steps to start AI-powered coding

1

Install

$python3 -m pip install -U auto-coder auto_coder_web

Note: only Python 3.10 - 3.12 is supported

2

Start Chat

$cd your-project
$auto-coder.chat

3

Start Web UI

$cd your-project
$auto-coder.web

Async Vibe Coding

What is Async Vibe Coding?

Submit coding tasks that run in the background. Multiple tasks can execute in parallel, and results are automatically merged into the main branch — even when tasks conflict.

Create async task

/async /name <job_name> <requirement>

Merge results

/auto /merge <job>
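A filled-in example of the two commands above — the job names and requirements here are illustrative, not prescribed:

```
/async /name add-tests Write unit tests for the parser module
/async /name update-docs Refresh the README install section
/auto /merge add-tests
/auto /merge update-docs
```

Both jobs run in the background in their own worktrees; each `/auto /merge` then brings that job's result back into the main branch.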