Show & Tell: Phase 22.3 - ConceptBlender: Fauconnier-Turner conceptual blending engine #520
web3guru888 started this conversation in Show and tell
Phase 22.3 - ConceptBlender Architecture
The ConceptBlender brings Fauconnier & Turner's Conceptual Integration Network (CIN) theory into ASI-Build as the computational mechanism for genuine conceptual novelty.
The 4-Space Model
The starred elements (★) are emergent structure: they exist in the blend but were not present in either input space. This is the creative payload.
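The 4-space model can be sketched with the frozen dataclasses this post mentions below; the field names and the `emergent()` helper here are assumptions for illustration, using Fauconnier & Turner's classic regatta example (an 1853 clipper "racing" a 1993 catamaran):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MentalSpace:
    """One input space: a named bundle of elements."""
    name: str
    elements: frozenset

@dataclass(frozen=True)
class GenericSpace:
    """Shared structure abstracted from both inputs."""
    elements: frozenset

@dataclass(frozen=True)
class Blend:
    """The blended space, built by selective projection from the inputs."""
    elements: frozenset

    def emergent(self, a: MentalSpace, b: MentalSpace, g: GenericSpace) -> frozenset:
        # Emergent structure as set difference: present in the blend,
        # traceable to neither input nor the generic space.
        return self.elements - a.elements - b.elements - g.elements

clipper = MentalSpace("clipper_1853", frozenset({"ship", "voyage", "1853"}))
catamaran = MentalSpace("catamaran_1993", frozenset({"ship", "voyage", "1993"}))
generic = GenericSpace(frozenset({"ship", "voyage"}))
blend = Blend(frozenset({"ship", "voyage", "race"}))

print(blend.emergent(clipper, catamaran, generic))  # frozenset({'race'})
```

The "race" element is the starred, emergent payload: neither voyage alone contains a race.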
Composition-Completion-Elaboration Pipeline
Each stage builds on the previous:
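The post does not show the stage signatures, so the following is a rough sketch of the three stages chained together; the function names, set-based representation, and fused-counterpart notation are all assumptions:

```python
# Illustrative sketch of the composition-completion-elaboration stages;
# the real ConceptBlender stage methods may differ.
def compose(a: set, b: set, mapping: dict) -> set:
    """Composition: project counterpart pairs from both inputs into the blend."""
    fused = {f"{x}/{mapping[x]}" for x in mapping}  # fused counterpart pairs
    return fused | (a - mapping.keys()) | (b - set(mapping.values()))

def complete(blend: set, frame: set) -> set:
    """Completion: recruit background frame knowledge to fill out the blend."""
    return blend | frame

def elaborate(blend: set) -> set:
    """Elaboration: 'run' the blend, deriving inferences from composed structure."""
    return blend | {f"inferred({e})" for e in blend}

stage1 = compose({"surgeon", "scalpel"}, {"butcher", "cleaver"},
                 {"surgeon": "butcher"})
stage2 = complete(stage1, {"operating_room"})
stage3 = elaborate(stage2)
print("surgeon/butcher" in stage3)  # True
```

Each stage consumes the previous stage's output, which is what makes the pipeline a strict composition rather than three independent passes.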
BlendType Classification
DOUBLE_SCOPE blends are the most creatively productive: both frames contribute structure, creating the richest emergent potential.
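Fauconnier and Turner distinguish four network types, which the classification presumably mirrors. A minimal sketch of the enum plus a toy classifier over frame labels (the real heuristic is not shown in the post, so this decision logic is an assumption):

```python
from enum import Enum, auto

class BlendType(Enum):
    SIMPLEX = auto()       # one input supplies a frame, the other fills its roles
    MIRROR = auto()        # both inputs share a single organizing frame
    SINGLE_SCOPE = auto()  # frames differ; only one organizes the blend
    DOUBLE_SCOPE = auto()  # frames differ; both contribute organizing structure

def classify(frame_a: str, frame_b: str, organizing: set) -> BlendType:
    # Toy heuristic: compare the inputs' frames against the set of frames
    # actually organizing the blend.
    if frame_a == frame_b:
        return BlendType.MIRROR
    if organizing >= {frame_a, frame_b}:
        return BlendType.DOUBLE_SCOPE
    if len(organizing) == 1:
        return BlendType.SINGLE_SCOPE
    return BlendType.SIMPLEX

print(classify("surgery", "butchery", {"surgery", "butchery"}).name)  # DOUBLE_SCOPE
```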
Integration with Phase 22 Pipeline
Key Design Decisions
Frozen dataclasses everywhere: MentalSpace, GenericSpace, and Blend are all immutable. This makes caching safe and enables structural sharing.
asyncio.Lock on the cache only: the CCE pipeline itself is pure computation (no shared mutable state), so only the cache write needs synchronization.
optimize_blend() hill-climbing: deliberately simple (greedy accept). The mapping space is small enough that greedy search works well.
Emergent structure as set difference: a clean, auditable definition. An element is emergent iff it appears in the blend but cannot be traced to any input space or the generic space.
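The greedy-accept hill climbing described above might look like the following sketch; the score function, candidate pool, and iteration budget are hypothetical stand-ins, not the actual `optimize_blend()` signature:

```python
import random

def optimize_blend(mapping: dict, score, candidates: list,
                   iters: int = 200, seed: int = 0):
    """Greedy hill-climbing over the cross-space mapping: mutate one
    counterpart pair per step and accept iff the score does not drop."""
    rng = random.Random(seed)
    best, best_score = dict(mapping), score(mapping)
    for _ in range(iters):
        trial = dict(best)
        key = rng.choice(list(trial))
        trial[key] = rng.choice(candidates)  # propose a single-pair change
        s = score(trial)
        if s >= best_score:                  # greedy accept, never move downhill
            best, best_score = trial, s
    return best, best_score

# Hypothetical score: count counterpart pairs sharing a first letter.
score = lambda m: sum(k[0] == v[0] for k, v in m.items())
best, s = optimize_blend({"ant": "xray", "bat": "xray"}, score,
                         ["apple", "bear", "xray"])
```

Because the loop only ever accepts non-worsening moves, the returned score is monotone in the starting score, which is what makes the "deliberately simple" choice safe for a small mapping space.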
Prometheus Metrics
asi_blend_total{blend_type}
asi_blend_quality
asi_blend_emergent_structures_total
asi_blend_novelty_score
asi_blend_duration_seconds
Grafana Alert
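The metric names above could be declared with the prometheus_client library roughly as follows; only the metric names come from this post, so the help strings, metric kinds, and usage lines are assumptions:

```python
from prometheus_client import Counter, Gauge, Histogram

# Metric names as listed in the post; everything else is illustrative.
BLEND_TOTAL = Counter(
    "asi_blend_total", "Blends produced, labelled by network type",
    ["blend_type"])
BLEND_QUALITY = Gauge(
    "asi_blend_quality", "Quality score of the most recent blend")
EMERGENT_TOTAL = Counter(
    "asi_blend_emergent_structures_total",
    "Emergent elements discovered across all blends")
NOVELTY = Gauge(
    "asi_blend_novelty_score", "Novelty score of the most recent blend")
DURATION = Histogram(
    "asi_blend_duration_seconds", "Wall-clock time of a full CCE run")

# Example instrumentation of one blend run.
BLEND_TOTAL.labels(blend_type="DOUBLE_SCOPE").inc()
BLEND_QUALITY.set(0.87)
```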
Open Questions
AnalogyMapper → CrossSpaceMapping bridge: should ConceptBlender accept an optional AnalogicalMapping from 22.2 to seed the cross-space mapping, or always compute its own?
Frame knowledge source: Where does _complete() get its frame knowledge from? Options: hardcoded frame library, WorldModel (13.1) lookup, or LLM-backed.
Multi-space blending: Fauconnier-Turner theory supports >2 input spaces (multiple blend). Should we plan for this in the Protocol signature?
Spec: #515
Phase 22 Planning: See Phase 22 planning discussion