Show & Tell: Phase 18.2 — MemoryConsolidator architecture #454
web3guru888 started this conversation in Show and tell
# Phase 18.2 — MemoryConsolidator: offline episodic→semantic consolidation
## What `MemoryConsolidator` Does (and Why)
`MemoryConsolidator` bridges the gap between raw episodic experience and stable semantic knowledge. Episodic traces in `TemporalGraph` accumulate rapidly during active cognition; left unchecked, the trace store grows without bound and slows lookups.

The neuroscience analogy: the hippocampus rapidly encodes episodic memories with high fidelity. During rest (or low-load periods), hippocampal replay signals drive neocortical consolidation, in which repeated patterns are compressed into stable semantic representations.
`MemoryConsolidator` mirrors this: it runs as a background `asyncio.Task`, sweeps unconsolidated `EpisodicTrace` objects, extracts centroid feature representations, and upserts `SemanticPattern` objects into `WorldModel`.

Benefits:

- The episodic store stays bounded: consolidated traces older than `retention_days` can be pruned
- `WorldModel` accumulates generalised patterns, not raw events
- All of this happens off the `CognitiveCycle` hot path

## `ConsolidationStrategy` Comparison Table
| Strategy | Score | Eligibility filter |
| --- | --- | --- |
| `RECENCY` | `1 / (age_seconds + 1)` | (none) |
| `FREQUENCY` | `trace.frequency / max_freq` | `frequency >= min_frequency_threshold` |
| `SURPRISE` | `trace.surprise_score` | `surprise_score >= surprise_threshold` |
| `HYBRID` | `0.3*recency + 0.4*frequency + 0.3*surprise` | (none) |

## ASCII Data Flow Diagram
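A minimal sketch of the flow, assuming only the stages described in this post:

```text
CognitiveCycle --writes--> TemporalGraph (EpisodicTrace store)
                                |
                                | background sweep (asyncio.Task)
                                v
                        MemoryConsolidator
             score -> filter -> centroid extraction
                                |
                                | world_model.upsert_pattern(SemanticPattern)
                                v
                           WorldModel --get_patterns()--> HorizonPlanner (18.1)
```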
## Background Loop Lifecycle
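The lifecycle is a standard start/cancel pattern around an `asyncio.Task`. A minimal, dependency-free sketch, assuming `start()`/`stop()` entry points and a fixed sweep interval; the method and attribute names here are illustrative, not the shipped API:

```python
import asyncio
import logging
from typing import Optional

logger = logging.getLogger(__name__)

class MemoryConsolidator:
    """Background-loop lifecycle only; scoring/centroid logic elided."""

    def __init__(self, temporal_graph, world_model, sweep_interval: float = 60.0):
        self._graph = temporal_graph
        self._world_model = world_model
        self._sweep_interval = sweep_interval
        self._task: Optional[asyncio.Task] = None

    def start(self) -> None:
        # Idempotent: ignore repeated starts while the loop is alive.
        if self._task is None or self._task.done():
            self._task = asyncio.create_task(self._run())

    async def stop(self) -> None:
        # Cancel the loop and wait for it to unwind cleanly.
        if self._task is not None:
            self._task.cancel()
            try:
                await self._task
            except asyncio.CancelledError:
                pass
            self._task = None

    async def _run(self) -> None:
        while True:
            try:
                await self._sweep()
            except asyncio.CancelledError:
                raise  # let stop() cancel us
            except Exception:
                # A failed sweep must not kill the loop; log and retry next tick.
                logger.exception("consolidation sweep failed")
            await asyncio.sleep(self._sweep_interval)

    async def _sweep(self) -> None:
        # Select unconsolidated EpisodicTrace objects, score them per the
        # strategy table above, extract centroids, upsert SemanticPatterns.
        ...
```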
## WorldModel Integration
`MemoryConsolidator` calls `world_model.upsert_pattern(pattern: SemanticPattern)`:

- If `pattern_id` is new, insert directly
- If `pattern_id` exists, merge the `abstraction` dicts (new values win) and take `max(existing.confidence, new.confidence)`
- On `WorldModelError`, the caller logs and continues

`WorldModel` accumulates `SemanticPattern` objects over time, representing generalised regularities, e.g. "when module X is active and surprise > 0.8, event type Y follows 70% of the time." A sketch of the merge semantics follows.
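A minimal sketch of those merge semantics, assuming `SemanticPattern` is a dataclass with `pattern_id`, `abstraction`, and `confidence` fields; the field shapes are assumptions drawn from the description above, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticPattern:
    pattern_id: str
    abstraction: dict = field(default_factory=dict)
    confidence: float = 0.0
    source_trace_ids: list = field(default_factory=list)

class WorldModel:
    def __init__(self) -> None:
        self._patterns: dict[str, SemanticPattern] = {}

    def upsert_pattern(self, pattern: SemanticPattern) -> None:
        existing = self._patterns.get(pattern.pattern_id)
        if existing is None:
            # New pattern_id: insert directly.
            self._patterns[pattern.pattern_id] = pattern
            return
        # Existing pattern_id: merge abstraction dicts; new values win.
        existing.abstraction = {**existing.abstraction, **pattern.abstraction}
        # Confidence never decreases on merge.
        existing.confidence = max(existing.confidence, pattern.confidence)
```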
## HorizonPlanner Integration

`HorizonPlanner` (18.1) classifies goals into SHORT/MEDIUM/LONG horizons. With `MemoryConsolidator` online:

- `WorldModel.get_patterns(module=X)` returns high-confidence `SemanticPattern` objects
- `HorizonPlanner` uses pattern confidence to refine duration estimates (one plausible shape is sketched below)

This closes the hippocampal-neocortical loop: episodic experience → semantic compression → improved prospective planning.
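To make the refinement step concrete, here is one hypothetical shape for it; the `expected_duration` key, the blend weighting, and the function name are all assumptions, not the shipped API:

```python
def refine_duration_estimate(prior_estimate: float,
                             patterns: list,
                             min_confidence: float = 0.7) -> float:
    """Blend the planner's prior duration estimate with pattern-derived
    durations, weighting by pattern confidence. `patterns` come from
    WorldModel.get_patterns(module=...)."""
    evidence = [
        (p.abstraction["expected_duration"], p.confidence)
        for p in patterns
        if p.confidence >= min_confidence and "expected_duration" in p.abstraction
    ]
    if not evidence:
        return prior_estimate  # no strong patterns: keep the prior
    total_conf = sum(conf for _, conf in evidence)
    pattern_estimate = sum(dur * conf for dur, conf in evidence) / total_conf
    # Trust the patterns in proportion to their mean confidence.
    weight = total_conf / len(evidence)
    return (1 - weight) * prior_estimate + weight * pattern_estimate
```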
## Prometheus Metrics + Grafana
| Metric | Example query | Meaning |
| --- | --- | --- |
| `consolidation_sweep_total` | `rate(consolidation_sweep_total[5m])` | sweeps/sec |
| `traces_consolidated_total` | `rate(traces_consolidated_total[5m])` | throughput |
| `patterns_created_total` | `increase(patterns_created_total[1h])` | hourly growth |
| `consolidation_sweep_duration_seconds` | `histogram_quantile(0.95, consolidation_sweep_duration_seconds_bucket)` | p95 sweep latency |
| `active_patterns_count` | `active_patterns_count` | current semantic store size |

Grafana panel YAML:
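A minimal panel sketch in dashboard-as-code YAML; the field names follow the Grafana dashboard JSON model, so treat this as a template for whatever provisioning tool you use rather than a drop-in config:

```yaml
# One timeseries panel tracking consolidation throughput.
- title: Consolidation throughput
  type: timeseries
  datasource: Prometheus
  targets:
    - expr: rate(traces_consolidated_total[5m])
      legendFormat: traces/s
    - expr: rate(consolidation_sweep_total[5m])
      legendFormat: sweeps/s
```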
## Open Questions for Contributors
1. **Distributed consolidation:** When multiple ASI-Build nodes run `MemoryConsolidator` independently, how do we prevent duplicate `SemanticPattern` creation for the same underlying traces? Should consolidation be elected to a single leader node, or should `WorldModel` handle merge conflicts?
2. **Pattern overlap / conflict resolution:** When two `SemanticPattern` objects have overlapping `source_trace_ids`, should they be merged, kept separate, or does one subsume the other? What conflict-resolution semantics should `world_model.upsert_pattern()` implement?
3. **Incremental vs batch consolidation:** The current design consolidates `max_traces_per_sweep` traces per sweep as a batch. Would an incremental approach (one trace at a time, yielding between each) be better for latency-sensitive deployments?