Context rot

Context rot [noun]: The gradual degradation of an AI system's useful context as stored information becomes stale, contradictory, or irrelevant — causing compounding errors, hallucination, and loss of coherence over time.

How context rot happens

01
Information changes but the index doesn't

Data sources update, but retrieval indexes aren't refreshed automatically. The model keeps seeing the old version.

02
Conflicting facts accumulate without resolution

New facts are added alongside old ones, not replacing them. Over time the index fills with contradictions the model must guess between.

03
Preferences and decisions get buried under noise

Older, less-used context sinks while irrelevant noise rises. The signal the model needs is still there — just unreachable.

04
Each new session adds data without retiring old data

Indexes grow without pruning, degrading signal quality. Volume increases while precision falls.
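The first failure mode is easy to see in miniature. This is a hypothetical sketch (the names are illustrative): a source of truth changes, but the retrieval index still holds the snapshot taken at indexing time.

```python
# Minimal sketch of failure mode 01: the source updates,
# but the index keeps serving the old snapshot.

source = {"deploy_region": "us-east-1"}   # live system of record
index = {"doc-1": dict(source)}           # copy taken when the doc was indexed

source["deploy_region"] = "eu-west-1"     # reality changes...

retrieved = index["doc-1"]                # ...but retrieval still sees the old copy
assert retrieved["deploy_region"] == "us-east-1"   # stale: index was never refreshed
```

Without a refresh or invalidation step, every query against `index` from this point on returns information that is confidently wrong.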

Signs your system has context rot

× Agents citing outdated facts
× Contradictory answers to the same question over time
× Model "forgetting" previous decisions
× Increasing hallucination rate as data volume grows

Why RAG doesn't solve it (and often makes it worse)

× More data in the index means more stale data retrieved. Volume amplifies noise, not signal.
× No native mechanism for supersession or retirement. Retrieval has no concept of "this fact replaced that one."

How to prevent context rot

01
Capture provenance

Track when information was written and what source it came from. Provenance is the precondition for everything else — without it, you cannot score recency or detect conflicts.
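As a concrete illustration, a stored fact can carry its provenance as first-class fields. This is a minimal sketch with hypothetical names, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record shape: every stored fact carries
# where it came from and when it was written.
@dataclass
class Fact:
    text: str
    source: str  # originating system, document, or author
    written_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

fact = Fact("API v2 is live", source="deploy-log")
# With written_at and source recorded, recency scoring and
# conflict detection become possible downstream.
```

The key design point is that provenance is captured at write time, automatically; retrofitting it onto an existing index is far harder.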

02
Track supersession

Newer facts should retire older ones, not coexist with them. Every write should carry a signal about what it updates or invalidates.
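One way to implement this is to let every write optionally name the fact it replaces, and mark that fact inactive. A minimal sketch, with illustrative names:

```python
# Hypothetical write path: each new fact may declare what it supersedes.
facts = {}  # fact_id -> {"text": ..., "active": bool}

def write_fact(fact_id, text, supersedes=None):
    if supersedes is not None and supersedes in facts:
        facts[supersedes]["active"] = False  # retire the old fact
    facts[fact_id] = {"text": text, "active": True}

write_fact("f1", "Primary DB is Postgres 14")
write_fact("f2", "Primary DB is Postgres 16", supersedes="f1")

# Retrieval filters to active facts, so only the newer one surfaces.
active = [f["text"] for f in facts.values() if f["active"]]
```

Retiring rather than deleting keeps the old fact available for audit while removing it from the retrieval path.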

03
Score by recency

Not just semantic similarity. A fact written yesterday about the current state of a system should outrank a more similar document from two years ago.
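One common way to do this (an assumption for illustration, not a specific product's formula) is to multiply the similarity score by an exponential time decay with a tunable half-life:

```python
from datetime import datetime, timedelta, timezone

# Assumed scoring rule: similarity weighted by exponential recency decay.
HALF_LIFE_DAYS = 30.0  # score halves every 30 days of age

def score(similarity, written_at, now=None):
    now = now or datetime.now(timezone.utc)
    age_days = (now - written_at).total_seconds() / 86400
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return similarity * decay

now = datetime.now(timezone.utc)
fresh = score(0.80, now - timedelta(days=1), now)    # slightly less similar, recent
stale = score(0.95, now - timedelta(days=730), now)  # more similar, two years old
assert fresh > stale  # the recent fact outranks the more-similar stale one
```

The half-life is a policy knob: short for fast-moving operational facts, long for durable reference material.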

04
Write outcomes back

Close the loop on what the model acted on. When an agent acts on a fact, that action is itself a new data point — record it and let the system learn from it.
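Closing the loop can be as simple as appending an outcome record each time an agent acts on a fact. A sketch with hypothetical names:

```python
from datetime import datetime, timezone

# Hypothetical outcome log: when an agent acts on a fact,
# the action and its result become a new data point.
context_log = []

def record_outcome(fact_id, action, succeeded):
    context_log.append({
        "fact_id": fact_id,
        "action": action,
        "succeeded": succeeded,
        "at": datetime.now(timezone.utc),
    })

record_outcome("f2", action="restarted service with Postgres 16 config",
               succeeded=True)
# A retriever can now weight facts by whether acting on them worked.
```

Over time these records give the system a signal that pure similarity search never has: which context actually led to good outcomes.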

Frequently asked questions

Is context rot the same as hallucination?

Context rot is a cause of hallucination, not the same thing. When the context a model sees is stale, contradictory, or irrelevant, the model fills in gaps — which produces hallucinations. Fixing the context reduces the hallucination rate.

Does context rot only affect RAG systems?

No. Any AI system that accumulates information over time — including vector databases, chat histories, and knowledge bases — can develop context rot if it lacks mechanisms for supersession, conflict resolution, and retirement.

How does Cilow prevent context rot?

Cilow tracks provenance, scores by recency alongside relevance, flags superseded information, and writes outcomes back into the context layer — so the system improves with use instead of degrading.

See how Cilow tracks provenance, scores by recency, and retires stale context automatically.
