Q&A: Phase 18.2 — MemoryConsolidator #455
web3guru888 asked this question in Q&A (unanswered, 0 replies).
Community Q&A for the `MemoryConsolidator` component (issue #453). Ask questions below or review the pre-seeded pairs.

**Q1: Why an async background loop vs synchronous consolidation?**
A: Consolidation must never block the `CognitiveCycle` hot path. If `_consolidate_trace()` were called synchronously inside `run_cycle()`, a slow `WorldModel.upsert_pattern()` (e.g. a database write) would introduce latency into every cognitive tick. The background `asyncio.Task` runs independently — it yields with `await asyncio.wait_for(stop_event, timeout=sweep_interval_s)` between sweeps, so the event loop remains responsive.

Tuning `sweep_interval_s`: lower values (e.g. 5s) increase consolidation throughput at the cost of more CPU; higher values (e.g. 60s) reduce overhead but allow episodic backlog to grow. A reasonable default is 30s. Monitor the `consolidation_sweep_total` rate to detect stalls.
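A minimal sketch of that loop, assuming an `asyncio.Event` stop signal; only `sweep_interval_s` and the `wait_for` idiom come from the discussion, the class and method names are illustrative:

```python
import asyncio

class ConsolidatorLoop:
    def __init__(self, sweep_interval_s: float = 30.0) -> None:
        self.sweep_interval_s = sweep_interval_s
        self._stop_event = asyncio.Event()
        self.sweeps = 0

    async def _run(self) -> None:
        while not self._stop_event.is_set():
            self.sweeps += 1  # stand-in for one consolidation sweep
            try:
                # Yield to the event loop between sweeps; wakes early on stop.
                await asyncio.wait_for(self._stop_event.wait(),
                                       timeout=self.sweep_interval_s)
            except asyncio.TimeoutError:
                pass  # interval elapsed, run the next sweep

    def start(self) -> "asyncio.Task[None]":
        return asyncio.create_task(self._run())

    async def stop(self, task: "asyncio.Task[None]") -> None:
        self._stop_event.set()  # loop exits promptly
        await task
```

Because `wait_for` wraps the event rather than a plain `sleep`, shutdown does not have to wait for a full `sweep_interval_s` to elapse.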
**Q2: How does the HYBRID strategy balance the three scores?**

A: HYBRID computes a weighted sum:

`score = 0.3 * recency + 0.4 * frequency + 0.3 * surprise`

Where:

- `recency = 1 / (age_seconds + 1)` — bounded in `(0, 1]`, decays rapidly
- `frequency = trace.frequency / max_freq` — normalised to `[0, 1]` within the current candidate batch
- `surprise = trace.surprise_score` — already `[0, 1]` from `SurpriseDetector`

The weights sum to 1.0 (`0.3 + 0.4 + 0.3`). Frequency has the highest weight because repeated patterns have the strongest evidence for generalisation. Normalisation of `frequency` is per-sweep: `max_freq = max(t.frequency for t in candidates)`. This means a trace with `frequency=1` in a batch where the max is also 1 scores `1.0` for frequency — the strategy is relative within each sweep batch, not absolute.
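The scoring can be sketched as follows; the `EpisodicTrace` stand-in models only the fields HYBRID reads, not the real dataclass:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EpisodicTrace:
    age_seconds: float
    frequency: int
    surprise_score: float  # already in [0, 1] from SurpriseDetector

def hybrid_score(trace: EpisodicTrace, max_freq: int) -> float:
    recency = 1.0 / (trace.age_seconds + 1.0)  # bounded in (0, 1]
    frequency = trace.frequency / max_freq      # [0, 1] within the batch
    return 0.3 * recency + 0.4 * frequency + 0.3 * trace.surprise_score

def score_batch(candidates: list[EpisodicTrace]) -> list[float]:
    # Per-sweep normalisation: max_freq is taken over this batch only.
    max_freq = max(t.frequency for t in candidates)
    return [hybrid_score(t, max_freq) for t in candidates]
```

A zero-age, batch-maximal-frequency, maximally surprising trace scores approximately 1.0, since the weights sum to 1.0.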
**Q3: What happens when WorldModel rejects a SemanticPattern?**

A: `world_model.upsert_pattern()` raises `WorldModelError` on storage failure. The current design in `_consolidate_trace()` does not retry — it logs the error and continues with the next trace. Rationale: consolidation is a background best-effort process; a single rejection should not halt the entire sweep.

For production use cases where pattern durability is critical, a retry queue can be added: failed patterns are appended to `_retry_queue: deque` and re-attempted on the next sweep before fetching new candidates. This is a planned enhancement (Phase 18.5 `TemporalCoherenceArbiter` may address this).
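A sketch of that retry-queue enhancement, with a flaky stub standing in for a real `WorldModel`; only `WorldModelError`, `upsert_pattern`, and `_retry_queue` are names from the discussion, the rest is hypothetical:

```python
from collections import deque

class WorldModelError(Exception):
    """Raised on storage failure."""

class FlakyWorldModel:
    # Test double: fails the first `failures` upserts, then succeeds.
    def __init__(self, failures: int = 1) -> None:
        self._failures = failures
        self.stored: list[str] = []

    def upsert_pattern(self, pattern: str) -> None:
        if self._failures > 0:
            self._failures -= 1
            raise WorldModelError("storage failure")
        self.stored.append(pattern)

class RetryingConsolidator:
    def __init__(self, world_model) -> None:
        self._world_model = world_model
        self._retry_queue: deque = deque()

    def sweep(self, new_patterns: list[str]) -> int:
        # Re-attempt previously failed patterns before new candidates.
        pending = list(self._retry_queue)
        self._retry_queue.clear()
        stored = 0
        for pattern in pending + new_patterns:
            try:
                self._world_model.upsert_pattern(pattern)
                stored += 1
            except WorldModelError:
                # Best effort: park the pattern and continue the sweep.
                self._retry_queue.append(pattern)
        return stored
```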
**Q4: How does SurpriseDetector integrate?**

A: `SurpriseDetector` (Phase 13.4) annotates events with a surprise score at the point they are written to `TemporalGraph`. When an `EpisodicTrace` is created (grouping a set of event IDs), the trace's `surprise_score` is set to the max (or mean) surprise score across its constituent events. `MemoryConsolidator` uses this score in two ways:

- Gating: traces with `surprise_score < surprise_threshold` (default 0.7) score 0.0 and are excluded from consolidation — only genuinely surprising traces are promoted to `WorldModel`
- Blending: HYBRID adds `0.3 * surprise_score` to the blend — surprising traces are ranked higher but not exclusively selected

`SurpriseDetector` is passed to `AsyncMemoryConsolidator.__init__()` for potential future use (e.g. real-time re-scoring of candidate traces during a sweep); the current implementation reads `trace.surprise_score` directly from the frozen dataclass.
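Both integration points fit in a few lines; the function names here are hypothetical, only the 0.7 threshold default and the 0.3 weight come from the discussion:

```python
def trace_surprise(event_scores: list[float]) -> float:
    # Trace-level surprise: max over constituent events
    # (mean is the alternative mentioned above).
    return max(event_scores)

def consolidation_score(recency: float, frequency: float,
                        surprise_score: float,
                        surprise_threshold: float = 0.7) -> float:
    # Gate: insufficiently surprising traces score 0.0 and are excluded.
    if surprise_score < surprise_threshold:
        return 0.0
    # Blend: surprise contributes its 0.3 weight to the HYBRID sum.
    return 0.3 * recency + 0.4 * frequency + 0.3 * surprise_score
```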
**Q5: What is the purpose of `dry_run` mode?**

A: `ConsolidatorConfig(dry_run=True)` enables a non-mutating sweep:

- `_consolidate_trace()` computes features and scores but does not call `world_model.upsert_pattern()`
- the `_patterns` dict is not updated
- `prune_old_traces()` is a no-op and returns 0

Use cases:

- testing: exercise `TemporalGraph` and `SurpriseDetector` integration without needing a real `WorldModel` (a stub suffices)
- staged rollout: enable `dry_run=True` in production briefly to log what would be consolidated, then disable to execute
- observability: metrics can optionally emit with a `dry_run="true"` label to distinguish dry sweeps from live sweeps in Grafana
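A sketch of the dry-run branch with a stub world model; apart from `ConsolidatorConfig.dry_run`, `_consolidate_trace`, `_patterns`, and `upsert_pattern`, the names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ConsolidatorConfig:
    dry_run: bool = False

class StubWorldModel:
    def __init__(self) -> None:
        self.patterns: list[str] = []

    def upsert_pattern(self, pattern: str) -> None:
        self.patterns.append(pattern)

class Consolidator:
    def __init__(self, config: ConsolidatorConfig,
                 world_model: StubWorldModel) -> None:
        self.config = config
        self.world_model = world_model
        self._patterns: dict[str, str] = {}

    def _consolidate_trace(self, trace_id: str, pattern: str) -> None:
        # Features and scores are computed identically in both modes (elided).
        if self.config.dry_run:
            print(f"[dry_run] would consolidate {trace_id}")  # log-only
            return  # no upsert, no _patterns update
        self.world_model.upsert_pattern(pattern)
        self._patterns[trace_id] = pattern
```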
**Q6: How do we prevent memory bloat in TemporalGraph?**

A: Two mechanisms:

1. `prune_old_traces(retention_ns)`: removes traces where `timestamp_ns < cutoff_ns`. Called explicitly or via a scheduled task. `ConsolidatorConfig.retention_days = 7.0` gives `cutoff_ns = time.monotonic_ns() - int(7 * 86400 * 1e9)`. Only consolidated traces should be pruned (the implementation checks `trace.consolidated == True` before deletion — unconsolidated traces are preserved regardless of age, unless an explicit override is passed).
2. Background loop with `max_traces_per_sweep`: by processing traces continuously in the background, the consolidation rate keeps pace with the ingestion rate under normal load. If ingestion spikes, the sweep catches up over subsequent intervals.

Monitor `active_patterns_count` as a proxy for WorldModel growth (it should grow slowly and plateau); monitor the TemporalGraph trace count separately to detect backlog accumulation.
**Q7: Grafana — show consolidation rate and active patterns**

A: Two recommended panels:
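Assuming the metrics named above are exported via Prometheus (`consolidation_sweep_total` as a counter, `active_patterns_count` as a gauge), the two panel queries might look like:

```promql
# Panel 1: consolidation sweep rate over a 5-minute window.
rate(consolidation_sweep_total[5m])

# Panel 2: current WorldModel size; should grow slowly and plateau.
active_patterns_count
```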
Alert rule example (consolidation stalled):
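A hedged sketch, again assuming a Prometheus counter named `consolidation_sweep_total`:

```promql
# Fires when no consolidation sweeps completed in the last 10 minutes.
increase(consolidation_sweep_total[10m]) == 0
```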