Hey — congratulations on shipping palace-daemon. The multi-client coordination problem (bulk mine starving interactive queries, multi-machine access, fairness under load) has been in the upstream mempalace backlog for a while — see MemPalace/mempalace#904, #357 — and it's good to see someone took it seriously.
I maintain jphein/mempalace, a production fork of MemPalace running a ~165K-drawer palace in daily use since 2026-04-09. A few things we've been building that might intersect with your roadmap:
- ThreadPoolExecutor-based mining (MemPalace/mempalace#1088) — single-client --workers N for cold mines, ~5× speedup on our palace. The semaphore-lane pattern you're using for mine coordination is exactly what I'm trying to apply in-process.
- ChromaDB hardening stack — quarantine_stale_hnsw (merged in MemPalace 3.3.2), _get_client() get-then-create guard (#1089), .blob_seq_ids_migrated skip marker (#1090). Your correctness-floor claim (">=3.3.2") aligns closely with where the internal-reliability work lands.
- Deterministic silent save hooks (MemPalace/mempalace#673) — externally approved, waiting upstream merge. Decouples save reliability from "AI remembers to call MCP tools."
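For concreteness, the in-process shape I'm aiming at with #1088 is roughly the sketch below: a ThreadPoolExecutor fans out cold-mine work while a bounded semaphore acts as the "lane" that caps how many tasks touch storage at once, so interactive queries aren't starved. All names here are illustrative, not MemPalace or palace-daemon API.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Illustrative lane: at most 4 bulk-mine tasks hold a storage slot at a time.
# Interactive reads would simply not acquire this semaphore.
MINE_LANE = threading.BoundedSemaphore(4)

def mine_drawer(drawer_id: str) -> str:
    # Bulk work waits for a lane slot before touching the backing store.
    with MINE_LANE:
        return f"mined {drawer_id}"  # placeholder for the real mine step

def cold_mine(drawer_ids, workers: int = 8):
    # --workers N analogue: threads provide parallelism, the lane bounds
    # concurrent storage pressure independently of worker count.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(mine_drawer, drawer_ids))

results = cold_mine([f"drawer-{i}" for i in range(10)], workers=8)
```

The point of separating workers from the lane width is that you can tune throughput (threads) and fairness (semaphore) independently — which is why your daemon-side semaphore-lane design maps so cleanly onto the in-process case.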
Wanted to open an issue rather than land code unbidden.
A few questions if you have a minute:
- What's your next ~2-3 weeks of focus? (Happy to avoid duplicating; also happy to contribute if it's in our wheelhouse.)
- Is there a specific class of contribution that would be most useful right now — integration tests against real large palaces, docs, a particular feature, or something else entirely?
- Any known pain points when running palace-daemon against a bigger palace? We have a ~165K-drawer one handy for real-world stress testing if that'd help.
Either way — thanks for architecting this as a coordination layer rather than forking the storage. That's the right shape.
— JP / jphein