As conversations grow, they consume more of the model’s context window. Compaction reduces context size while preserving important information, keeping your conversations responsive and cost-effective.
## Approaches
| Approach | Speed | Context Preservation | Cost | Reversible |
|---|---|---|---|---|
| Start Here | Instant | Intelligent | Free | Yes |
| `/compact` | Slower (uses AI) | Intelligent | Uses API tokens | No |
| `/clear` | Instant | None | Free | No |
| `/truncate` | Instant | Recent messages only | Free | No |
| Auto-Compaction | Automatic | Intelligent | Uses API tokens | No |
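Assuming the slash-command interface shown in the table, `/compact` is typically issued inside a session; some interfaces also accept optional focus instructions telling the model what to prioritize in the summary (the instruction text below is an illustrative example, not a required syntax):

```
> /compact Focus on the database schema changes and open test failures
```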
## When to compact
- Proactively: Before hitting context limits, especially on long-running tasks
- After major milestones: When you’ve completed a phase and want to preserve learnings without full history
- When responses degrade: Large contexts can reduce response quality
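The proactive guidance above amounts to a token-budget check. Here is a minimal sketch of that idea; `should_compact` and the 80% threshold are illustrative assumptions, not part of any real CLI or API:

```python
# Illustrative sketch: decide when to compact proactively, before the
# conversation hits the model's context limit.

def should_compact(used_tokens: int, context_window: int,
                   threshold: float = 0.8) -> bool:
    """Return True once the conversation has consumed more than
    `threshold` of the context window, signalling it is time to compact.

    The 0.8 default is an assumed heuristic for demonstration only.
    """
    return used_tokens / context_window > threshold

print(should_compact(170_000, 200_000))  # 85% used -> True
print(should_compact(40_000, 200_000))   # 20% used -> False
```

Checking usage against a threshold like this, rather than waiting for a hard limit, leaves headroom for the compaction step itself to run.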
## Next steps