Commands Reference

Complete guide to all chu commands and their usage.

Quick Navigation

Setup · Interactive · Workflow · Review · Features · Run · ML · Graph · Config

chu do - Autonomous Execution

The flagship copilot command. It orchestrates four specialized agents to complete tasks autonomously, with validation and auto-retry.

How It Works

chu do "add JWT authentication"

Agent Flow:

  1. Analyzer - Understands codebase using dependency graph, reads relevant files
  2. Planner - Creates minimal implementation plan, lists files to modify
  3. File Validation - Extracts allowed files, blocks extras
  4. Editor - Executes changes ONLY on planned files
  5. Validator - Checks success criteria, triggers auto-retry if validation fails
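The flow above can be sketched as a single pipeline. This is a minimal sketch with hypothetical stub agents; chu's real agent interfaces are internal and not shown here:

```python
# Toy sketch of the chu do pipeline. All agent functions are hypothetical
# stand-ins; they only illustrate the control flow described above.

def analyzer(task):
    # 1. Understand the codebase (dependency graph, relevant files)
    return {"relevant_files": ["auth/login.go"]}

def planner(task, context):
    # 2. Minimal implementation plan listing the files to modify
    return {"files": ["auth/login.go"]}

def editor(plan):
    # 4. Editor proposes edits; here one strays outside the plan
    return [{"path": "auth/login.go"}, {"path": "main.go"}]

def validator(edits):
    # 5. Check success criteria; a failure would trigger auto-retry
    return len(edits) > 0

def chu_do(task):
    context = analyzer(task)
    plan = planner(task, context)
    allowed = set(plan["files"])                               # 3. File validation
    edits = [e for e in editor(plan) if e["path"] in allowed]  # block extras
    return edits if validator(edits) else None

print([e["path"] for e in chu_do("add JWT authentication")])  # ['auth/login.go']
```

Note how the stray edit to main.go is dropped: the Editor may only touch files the Planner listed.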

Examples

chu do "fix authentication bug in login handler"
chu do "refactor error handling to use custom types"
chu do "add rate limiting to API endpoints" --supervised
chu do "optimize database queries" --interactive

Flags

Benefits

See full agent architecture →


Setup Commands

chu setup

Initialize Chuchu configuration at ~/.chuchu.

chu setup

Creates:

chu key [backend]

Add or update API key for a backend provider.

chu key openrouter
chu key groq

chu models update

Update model catalog from available providers (OpenRouter, Groq, OpenAI, etc.).

chu models update

Interactive Modes

chu chat

Code-focused conversation mode. Routes queries to appropriate agents based on intent.

chu chat
chu chat "explain how authentication works"
echo "list go files" | chu chat

Agent routing:

chu tdd

Incremental TDD mode. Generates tests first, then implementation.

chu tdd
chu tdd "slugify function with unicode support"

Workflow:

  1. Clarify requirements
  2. Generate tests
  3. Generate implementation
  4. Iterate and refine
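For the slugify example above, the test-first output might look roughly like this (illustrative only; actual output depends on the detected language and model):

```python
# Illustrative tests-then-implementation pair for "slugify function with
# unicode support" (hypothetical; not chu tdd's literal output).
import re
import unicodedata

def slugify(text, max_length=50):
    # Normalize unicode to ASCII, lowercase, and join words with hyphens.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    return text[:max_length].rstrip("-")

# Tests are generated first, then the implementation is refined against them:
assert slugify("Héllo, Wörld!") == "hello-world"
assert slugify("a" * 100, max_length=10) == "a" * 10
```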

Workflow Commands (Research → Plan → Implement)

chu research [question]

Document codebase and understand architecture.

chu research "How does authentication work?"
chu research "Explain the payment flow"

Creates a research document with findings and analysis.

chu plan [task]

Create detailed implementation plan with phases.

chu plan "Add user authentication"
chu plan "Implement webhook system"

Generates:

chu implement <plan_file>

Execute an approved plan phase-by-phase with verification.

chu implement ~/.chuchu/plans/2025-01-15-add-auth.md

Each phase:

  1. Is implemented
  2. Is verified (tests run)
  3. Waits for user confirmation before the next phase begins
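The loop can be pictured as follows. This is a sketch; `run_tests` and `confirm` are hypothetical stand-ins for the project's test runner and the interactive prompt:

```python
# Sketch of phase-by-phase plan execution with verification.

def run_tests():
    return True          # stand-in for the project's test suite

def confirm(prompt):
    return True          # stand-in for the interactive y/n prompt

def implement_plan(phases):
    completed = []
    for phase in phases:
        completed.append(phase)                        # 1. implement the phase
        if not run_tests():                            # 2. verify (tests run)
            break
        if not confirm(f"Continue after {phase!r}?"):  # 3. user confirms
            break
    return completed

print(implement_plan(["add model", "wire routes", "add tests"]))
```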

Code Quality

chu review [target]

NEW: Review code for bugs, security issues, and improvements against coding standards.

chu review main.go
chu review ./src
chu review .
chu review internal/agents/ --focus security

Options:

Reviews against standards:

Output structure:

  1. Summary: Overall assessment
  2. Critical Issues: Must-fix bugs or security risks
  3. Suggestions: Quality/performance improvements
  4. Nitpicks: Style, naming preferences

Examples:

chu review main.go --focus "error handling"
chu review . --focus performance
chu review src/auth --focus security

Feature Generation

chu feature [description]

Generate tests + implementation with auto-detected language.

chu feature "slugify with unicode support and max length"

Supported languages:


Execution Mode

chu run [task]

Execute tasks with follow-up support. Two modes available:

1. AI-assisted mode (the default):

chu run                                    # Start interactive session
chu run "deploy to staging" --once         # Single AI execution

2. Direct REPL mode with command history:

chu run --raw                              # Interactive command execution
chu run "docker ps" --raw                  # Execute command and exit

AI-Assisted Mode

Provides intelligent command suggestions and execution:

chu run
> deploy to staging
[AI executes fly deploy command]
> check if it's running
[AI executes status check]
> roll back if there are errors
[AI conditionally executes rollback]

Direct REPL Mode

Direct shell command execution with enhanced features:

chu run --raw
> ls -la
> cat $1                    # Reference previous command
> /history                  # Show command history
> /output 1                 # Show output of command 1
> /cd /tmp                  # Change directory
> /env MY_VAR=value         # Set environment variable
> /exit                     # Exit REPL

REPL Commands:

Command References:

Examples

# AI-assisted operational tasks
chu run "check postgres status"
chu run "make GET request to api.github.com/users/octocat"

# Direct command REPL for DevOps
chu run --raw
> docker ps
> docker logs $1            # Reference container from previous output
> /cd /var/log
> tail -f app.log

# Single-shot with piped input
echo "deploy to production" | chu run --once

Flags

Perfect for operational tasks, DevOps workflows, and command execution with history.


Machine Learning Commands

chu ml list

List available ML models.

chu ml list

Shows:

chu ml train <model>

Train an ML model using Python.

chu ml train complexity
chu ml train intent

Available models:

Requirements:

chu ml test <model> [query]

Test a trained model with a query.

chu ml test complexity "implement oauth"
chu ml test intent "explain this code"

Shows prediction and probabilities for all classes.

chu ml eval <model> [-f file]

Evaluate model performance on test dataset.

chu ml eval complexity
chu ml eval intent -f ml/intent/data/eval.csv

Shows:

chu ml predict [model] <text>

Make a prediction using the embedded Go model (no Python runtime required).

chu ml predict "implement auth"               # uses complexity (default)
chu ml predict complexity "fix typo"          # explicit model
chu ml predict intent "explain this code"     # intent classification

Fast path:


ML Configuration

Complexity Threshold

Controls when Guided Mode is automatically activated.

# View current threshold (default: 0.55)
chu config get defaults.ml_complex_threshold

# Set threshold (0.0-1.0)
chu config set defaults.ml_complex_threshold 0.6

Higher threshold = less sensitive (fewer Guided Mode triggers)
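The gating logic amounts to a single comparison (illustrative; the constant mirrors the defaults.ml_complex_threshold config key):

```python
# Illustrative Guided Mode gate; 0.55 mirrors the default threshold.
ML_COMPLEX_THRESHOLD = 0.55

def guided_mode_active(complexity_score):
    # A higher threshold means fewer queries clear it, i.e. less sensitive.
    return complexity_score >= ML_COMPLEX_THRESHOLD

print(guided_mode_active(0.7))  # True: complex enough to trigger Guided Mode
print(guided_mode_active(0.4))  # False
```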

Intent Threshold

Controls when ML router is used instead of LLM.

# View current threshold (default: 0.7)
chu config get defaults.ml_intent_threshold

# Set threshold (0.0-1.0)
chu config set defaults.ml_intent_threshold 0.8

Higher threshold = more LLM fallbacks (more accurate, but slower and more expensive)
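In effect, the router falls back to the LLM whenever the ML classifier's confidence is below the threshold. A sketch, with `ask_llm` as a hypothetical stand-in:

```python
# Illustrative routing decision; 0.7 mirrors defaults.ml_intent_threshold.
ML_INTENT_THRESHOLD = 0.7

def ask_llm(query):
    return "explain"     # stand-in for a slower, costlier LLM call

def route_intent(query, ml_intent, ml_confidence):
    if ml_confidence >= ML_INTENT_THRESHOLD:
        return ("ml", ml_intent)          # fast local path
    return ("llm", ask_llm(query))        # low confidence: fall back to LLM

print(route_intent("explain this code", "explain", 0.9))  # ('ml', 'explain')
print(route_intent("ambiguous query", "edit", 0.4))       # ('llm', 'explain')
```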


Dependency Graph Commands

chu graph build

Force rebuild dependency graph, ignoring cache.

chu graph build

Shows:

When to use:

chu graph query <terms>

Find relevant files for the given query terms using PageRank.

chu graph query "authentication"
chu graph query "database connection"
chu graph query "api routes"

Shows:

How it works:

  1. Keyword matching in file paths
  2. Neighbor expansion (imports/imported-by)
  3. PageRank weighting
  4. Top N selection
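The four steps can be sketched on a toy graph (illustrative; file names, graph shape, and PageRank scores are invented, not chu's actual data):

```python
# Toy sketch of PageRank-weighted file retrieval.
graph = {  # file -> files it imports (hypothetical project)
    "auth/jwt.go":   ["auth/keys.go"],
    "auth/login.go": ["auth/jwt.go"],
    "main.go":       ["auth/login.go"],
}
pagerank = {"auth/jwt.go": 0.4, "auth/login.go": 0.3,
            "main.go": 0.2, "auth/keys.go": 0.1}

def graph_query(term, top_n=5):
    # 1. Keyword matching in file paths
    hits = {f for f in graph if term in f}
    # 2. Neighbor expansion: add each hit's imports and importers
    for f in list(hits):
        hits.update(graph.get(f, []))
        hits.update(g for g, deps in graph.items() if f in deps)
    # 3. PageRank weighting, 4. top-N selection
    return sorted(hits, key=lambda f: -pagerank.get(f, 0))[:top_n]

print(graph_query("auth"))
# ['auth/jwt.go', 'auth/login.go', 'main.go', 'auth/keys.go']
```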

Graph Configuration

Max Files

Control how many files are added to context in chat mode.

# View current setting (default: 5)
chu config get defaults.graph_max_files

# Set max files (1-20)
chu config set defaults.graph_max_files 8

Recommendations:

Debug Graph

export CHUCHU_DEBUG=1
chu chat "your query"  # Shows graph stats

Shows:


Command Comparison

Command     Purpose                       When to Use
chat        Interactive conversation      Quick questions, exploratory work
review      Code review                   Before commit, quality check, security audit
tdd         TDD workflow                  New features requiring tests
research    Understand codebase           Architecture analysis, onboarding
plan        Create implementation plan    Large features, complex changes
implement   Execute plan                  Structured feature implementation
feature     Quick feature generation      Small, focused features
run         Execute tasks                 DevOps, HTTP requests, CLI commands

Environment Variables

CHUCHU_DEBUG

Enable debug output to stderr.

CHUCHU_DEBUG=1 chu chat

Shows:


Configuration

All configuration lives in ~/.chuchu/:

~/.chuchu/
├── profile.yaml          # Backend and model settings
├── system_prompt.md      # Base system prompt
├── memories.jsonl        # Memory store
└── plans/               # Saved implementation plans
    └── 2025-01-15-add-auth.md

Example profile.yaml

defaults:
  backend: groq
  model: fast

backends:
  groq:
    type: chat_completion
    base_url: https://api.groq.com/openai/v1
    default_model: llama-3.3-70b-versatile
    models:
      fast: llama-3.3-70b-versatile
      smart: llama-3.3-70b-specdec

Next Steps