Features

Agent-Based Architecture

Autonomous Execution with chu do

The flagship command orchestrates four specialized agents working in sequence:

  1. Analyzer: Understands codebase, reads relevant files using dependency graph
  2. Planner: Creates minimal implementation plan, lists files to modify
  3. Editor: Executes changes ONLY on planned files (file validation)
  4. Validator: Verifies success criteria, triggers auto-retry if needed
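The four-stage flow above can be sketched as a simple sequential pipeline. This is an illustrative sketch, not chu's actual internals; the `Agent` interface and stub implementations are invented for the example.

```go
package main

import "fmt"

// Result carries each agent's output forward to the next stage.
type Result struct {
	Context string   // Analyzer: relevant code context
	Plan    []string // Planner: files approved for editing
	Edited  []string // Editor: files actually changed
	OK      bool     // Validator: success criteria met
}

// Agent is one stage of the chu-do pipeline (hypothetical interface).
type Agent func(task string, r *Result) error

func analyzer(task string, r *Result) error { r.Context = "ctx:" + task; return nil }
func planner(task string, r *Result) error  { r.Plan = []string{"auth.go"}; return nil }
func editor(task string, r *Result) error   { r.Edited = r.Plan; return nil }
func validator(task string, r *Result) error {
	r.OK = len(r.Edited) > 0
	return nil
}

// run executes the agents in order, stopping at the first error.
func run(task string, agents []Agent) (*Result, error) {
	r := &Result{}
	for _, a := range agents {
		if err := a(task, r); err != nil {
			return r, err
		}
	}
	return r, nil
}

func main() {
	r, _ := run("add JWT authentication", []Agent{analyzer, planner, editor, validator})
	fmt.Println(r.OK, r.Edited)
}
```

The key design point is that each stage only sees the accumulated `Result`, so the Editor receives exactly the file list the Planner produced.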

Usage:

chu do "add JWT authentication"
chu do "fix bug in payment processing" --supervised
chu do "refactor error handling" --interactive

Flags:

  --supervised     Run in supervised mode
  --interactive    Run in interactive mode

Benefits:

See full agent flow diagram on homepage →


Validation & Safety

File Validation

The Editor agent can only modify files explicitly mentioned in the Planner’s output. This prevents unplanned edits to files outside the approved set.
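A minimal sketch of such a check, assuming the plan is a flat list of file paths (the function names here are illustrative, not chu's API):

```go
package main

import "fmt"

// allowedSet builds a lookup of the files the Planner approved.
func allowedSet(planned []string) map[string]bool {
	m := make(map[string]bool, len(planned))
	for _, f := range planned {
		m[f] = true
	}
	return m
}

// validateEdits rejects any edit targeting a file outside the plan.
func validateEdits(planned, edited []string) error {
	allowed := allowedSet(planned)
	for _, f := range edited {
		if !allowed[f] {
			return fmt.Errorf("edit to %s not in plan", f)
		}
	}
	return nil
}

func main() {
	plan := []string{"auth/jwt.go", "auth/middleware.go"}
	fmt.Println(validateEdits(plan, []string{"auth/jwt.go"})) // <nil>
	fmt.Println(validateEdits(plan, []string{"main.go"}))     // edit to main.go not in plan
}
```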

Success Criteria Validation

The Validator agent automatically:

  1. Checks if task completion criteria are met
  2. Runs tests if applicable
  3. Verifies file changes match plan
  4. Triggers retry with feedback if validation fails
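The retry-with-feedback loop in step 4 can be sketched as below; the attempt/feedback shape is an assumption for illustration, not chu's internal protocol.

```go
package main

import "fmt"

// attempt represents one Editor pass; feedback from a failed
// validation is fed into the next attempt.
type attempt func(feedback string) (output string, ok bool)

// retryUntilValid re-runs the attempt with validator feedback,
// up to maxRetries additional times.
func retryUntilValid(run attempt, maxRetries int) (string, bool) {
	feedback := ""
	for i := 0; i <= maxRetries; i++ {
		out, ok := run(feedback)
		if ok {
			return out, true
		}
		feedback = "validation failed for: " + out
	}
	return "", false
}

func main() {
	tries := 0
	out, ok := retryUntilValid(func(fb string) (string, bool) {
		tries++
		return fmt.Sprintf("edit-v%d", tries), tries >= 2 // succeeds on 2nd try
	}, 3)
	fmt.Println(out, ok) // edit-v2 true
}
```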

Supervised vs Autonomous Modes

Choose supervised mode for critical changes that warrant review at each step, and autonomous mode for routine tasks.


ML-Powered Intelligence

Intent Classification

Routes requests in about 1 ms instead of a ~500 ms LLM call. Classifies user intent (query, edit, research, review) with 89% accuracy, falling back to the LLM when the classifier is uncertain.

Benefits:

Configuration:

chu config get defaults.ml_intent_threshold  # default: 0.7
chu config set defaults.ml_intent_threshold 0.8
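The threshold controls when the fast classifier's answer is trusted. A sketch of that routing decision, with a stubbed classifier standing in for chu's real ML model:

```go
package main

import "fmt"

// classify stands in for chu's ML intent model: it returns the
// best intent and a confidence score (stub logic for the example).
func classify(prompt string) (string, float64) {
	if len(prompt) < 40 {
		return "query", 0.91
	}
	return "edit", 0.55
}

// routeIntent trusts the fast ML result above the configured
// ml_intent_threshold, and falls back to an LLM call when uncertain.
func routeIntent(prompt string, threshold float64, llmFallback func(string) string) string {
	intent, conf := classify(prompt)
	if conf >= threshold {
		return intent
	}
	return llmFallback(prompt)
}

func main() {
	llm := func(p string) string { return "research" } // slow, accurate path
	fmt.Println(routeIntent("how does auth work", 0.7, llm))
	fmt.Println(routeIntent("refactor the whole payment module end to end", 0.7, llm))
}
```

Raising the threshold (e.g. to 0.8) routes more borderline requests through the LLM fallback, trading latency for accuracy.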

Complexity Detection

Automatically triggers Guided Mode (research → plan → implement) for complex multi-step tasks.

Configuration:

chu config get defaults.ml_complex_threshold  # default: 0.55
chu config set defaults.ml_complex_threshold 0.6
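The same thresholding pattern gates Guided Mode; a short sketch with a stand-in scorer (chu's actual complexity model is not shown here):

```go
package main

import "fmt"

// complexityScore is a stand-in for chu's ML complexity model.
func complexityScore(task string) float64 {
	if len(task) > 30 {
		return 0.8
	}
	return 0.2
}

// chooseMode triggers Guided Mode (research → plan → implement)
// when the score crosses ml_complex_threshold.
func chooseMode(task string, threshold float64) string {
	if complexityScore(task) >= threshold {
		return "guided"
	}
	return "direct"
}

func main() {
	fmt.Println(chooseMode("fix typo", 0.55))
	fmt.Println(chooseMode("migrate auth from sessions to JWT tokens", 0.55))
}
```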

CLI Commands:

chu ml list                    # List available models
chu ml test intent "query"     # Test intent classification
chu ml eval intent             # Evaluate accuracy
chu ml train intent            # Retrain model (requires Python)

Read more about ML features →


Smart Context Selection

Dependency Graph Analysis

Automatically builds a graph of your codebase’s file dependencies and uses PageRank to identify important files.

How it works:

  1. Analyzes imports/requires to build dependency graph
  2. Ranks files by importance using PageRank
  3. Matches query terms to relevant files
  4. Expands to 1-hop neighbors (dependencies + dependents)
  5. Provides top 5 most relevant files as context
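Step 2 above is standard PageRank over the import graph: files that many other files import accumulate rank. A compact power-iteration sketch (the 0.85 damping factor and iteration count are conventional defaults, not chu's documented values):

```go
package main

import "fmt"

// pageRank runs power iterations over a file-dependency graph
// (edges: file -> files it imports). Heavily imported files end
// up with the highest rank.
func pageRank(edges map[string][]string, iters int) map[string]float64 {
	const d = 0.85 // damping factor (conventional default)
	nodes := map[string]bool{}
	for from, tos := range edges {
		nodes[from] = true
		for _, t := range tos {
			nodes[t] = true
		}
	}
	n := float64(len(nodes))
	rank := map[string]float64{}
	for node := range nodes {
		rank[node] = 1 / n
	}
	for i := 0; i < iters; i++ {
		next := map[string]float64{}
		for node := range nodes {
			next[node] = (1 - d) / n
		}
		for from, tos := range edges {
			if len(tos) == 0 {
				continue
			}
			share := d * rank[from] / float64(len(tos))
			for _, t := range tos {
				next[t] += share
			}
		}
		rank = next
	}
	return rank
}

func main() {
	// main.go and router.go both import utils.go, so it ranks highest.
	edges := map[string][]string{
		"main.go":   {"utils.go", "router.go"},
		"router.go": {"utils.go"},
	}
	r := pageRank(edges, 20)
	fmt.Printf("utils.go: %.3f\n", r["utils.go"])
}
```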

Benefits:

Supported languages: Go, Python, JavaScript/TypeScript, Ruby, Rust

Debug mode:

CHUCHU_DEBUG=1 chu chat "your query"
# [GRAPH] Built graph: 142 nodes, 287 edges
# [GRAPH] Selected 5 files:
# [GRAPH]   1. internal/agents/router.go (score: 0.842)

Read more about graph features →


Multi-Agent Architecture

Specialized Agents

Router Agent (fast, cheap)

Query Agent (comprehension)

Editor Agent (code generation)

Research Agent (web search)

Agent Configuration

backend:
  groq:
    agent_models:
      router: llama-3.1-8b-instant
      query: gpt-oss-120b-128k
      editor: deepseek-r1-distill-qwen-32b
      research: groq/compound
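Resolving a model from this mapping is a lookup with a fallback; the struct and field names below are illustrative, not chu's actual configuration types.

```go
package main

import "fmt"

// BackendConfig mirrors the agent_models mapping shown above
// (hypothetical struct for illustration).
type BackendConfig struct {
	AgentModels  map[string]string
	DefaultModel string
}

// modelFor picks the model configured for an agent, falling back
// to the backend default when no per-agent override exists.
func (c BackendConfig) modelFor(agent string) string {
	if m, ok := c.AgentModels[agent]; ok {
		return m
	}
	return c.DefaultModel
}

func main() {
	cfg := BackendConfig{
		AgentModels: map[string]string{
			"router": "llama-3.1-8b-instant",
			"editor": "deepseek-r1-distill-qwen-32b",
		},
		DefaultModel: "gpt-oss-120b-128k",
	}
	fmt.Println(cfg.modelFor("router")) // llama-3.1-8b-instant
	fmt.Println(cfg.modelFor("query"))  // gpt-oss-120b-128k (default)
}
```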

Compare models →


Profile Management

Switch between model configurations instantly:

Neovim UI:

CLI:

chu backend list           # List configured backends
chu backend switch groq    # Switch to Groq backend

TDD-First Workflow

Test-Driven Development

Commands

chu tdd                    # Interactive TDD mode
chu feature "description"  # Generate tests + implementation

Workflow

  1. Describe feature requirements
  2. AI generates tests first
  3. Tests guide implementation
  4. Verify with test suite
  5. Iterate until green
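Steps 2–3 above look like this in practice: a generated test pins down the requirement before the implementation exists. The `slugify` feature and its test are invented for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// Step 2: a generated test that encodes the requirement first.
func testSlugify() error {
	if got := slugify("Hello, World!"); got != "hello-world" {
		return fmt.Errorf("slugify: got %q, want %q", got, "hello-world")
	}
	return nil
}

// Step 3: the implementation written to make the test pass.
func slugify(s string) string {
	var b strings.Builder
	lastDash := true // suppress a leading dash
	for _, r := range strings.ToLower(s) {
		if r >= 'a' && r <= 'z' || r >= '0' && r <= '9' {
			b.WriteRune(r)
			lastDash = false
		} else if !lastDash {
			b.WriteByte('-')
			lastDash = true
		}
	}
	return strings.TrimRight(b.String(), "-")
}

func main() {
	fmt.Println(testSlugify()) // <nil> — the suite is green
}
```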

Neovim Integration

Chat Interface

Model Management

Key Bindings (configurable)

<C-d>      -- Toggle chat interface
<C-m>      -- Profile management
<leader>ms -- Model search and install

Features


Cost Optimization

Per-Agent Pricing

Configure different model tiers based on task importance:

Agent     Model          Input ($/1M tok)  Output ($/1M tok)  Use Case
Router    Llama 3.1 8B   $0.05             $0.08              Fast intent classification
Query     GPT-OSS 120B   $0.15             $0.60              Code comprehension
Editor    DeepSeek R1    $0.14             $0.42              Code generation
Research  Grok 4.1 Free  $0.00             $0.00              Web search
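Estimating spend from the table is simple arithmetic, assuming the prices are per million tokens (a common provider convention; verify against your backend's pricing page):

```go
package main

import "fmt"

// requestCost computes the dollar cost of one request given
// per-million-token input/output prices.
func requestCost(inTokens, outTokens int, inPrice, outPrice float64) float64 {
	return float64(inTokens)/1e6*inPrice + float64(outTokens)/1e6*outPrice
}

func main() {
	// A Router call: 2,000 input tokens, 50 output tokens at
	// $0.05 / $0.08 per 1M tokens.
	fmt.Printf("$%.6f\n", requestCost(2000, 50, 0.05, 0.08)) // $0.000104
}
```

Multiply by your expected request volume to project a monthly figure per agent.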

Monthly Cost Examples

See optimal configurations →


Local Deployment

Ollama Support

Run completely offline with Ollama:

Recommended models:

Configuration:

backend:
  ollama:
    base_url: http://localhost:11434
    default_model: qwen2.5-coder:32b

Benefits:

Setup guide →


OpenRouter Integration

Access 100+ models through a single API:

Configuration:

backend:
  openrouter:
    base_url: https://openrouter.ai/api/v1
    default_model: anthropic/claude-4.5-sonnet

OpenRouter setup →


Research & Planning

Research Mode

Comprehensive codebase research with parallel sub-agents:

chu research "how does authentication work"

Research workflow →

Planning Mode

Interactive plan creation with iteration:

chu plan "add JWT authentication"

Planning workflow →

Implementation

Execute plans with verification:

chu implement plan.md

Model Comparison

Interactive tool to compare LLMs for coding:

Compare models →