As conversations grow, they consume more of the model’s context window. Compaction reduces context size while preserving important information, keeping your conversations responsive and cost-effective.

Approaches

| Approach | Speed | Context Preservation | Cost | Reversible |
|---|---|---|---|---|
| Start Here | Instant | Intelligent | Free | Yes |
| /compact | Slower (uses AI) | Intelligent | Uses API tokens | No |
| /clear | Instant | None | Free | No |
| /truncate | Instant | Temporal | Free | No |
| Auto-Compaction | Automatic | Intelligent | Uses API tokens | No |
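To illustrate the difference between the "Temporal" and "Intelligent" strategies above, here is a minimal sketch of temporal truncation: drop the oldest messages until what remains fits a token budget. This is an illustrative assumption about how /truncate-style trimming works, not the tool's actual implementation; `estimate_tokens` is a crude stand-in for a real tokenizer.

```python
# Hypothetical sketch of "temporal" compaction: keep only the most recent
# messages that fit within a token budget. Real tools use the model's
# tokenizer; this uses a rough characters-per-token heuristic.

def estimate_tokens(message: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(message) // 4)

def truncate_history(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the remainder fits the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest-first
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = ["old question", "old answer", "recent question", "recent answer"]
print(truncate_history(history, budget=8))
```

An "intelligent" approach would instead send the dropped prefix to the model for summarization, which preserves key facts at the cost of extra API tokens — the trade-off shown in the table.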

When to compact

  • Proactively: Before hitting context limits, especially on long-running tasks
  • After major milestones: When you’ve completed a phase and want to preserve learnings without full history
  • When responses degrade: Large contexts can reduce response quality

Next steps