docs(readme): refresh fork-ahead queue + open PR table for 2026-04-24 state
Three stale sections updated:
- Fork change queue: row 8 (`.blob_seq_ids_migrated` marker) struck
through and marked FILED as MemPalace#1177. Two new rows added for the
segfault fixes discovered today (MemPalace#1171 concurrent-write lock,
MemPalace#1173 quarantine in make_client); they weren't in the queue
because the bugs surfaced today, not during the original 2026-04-21 triage.
- Open upstream PRs: was showing 3 of 10 PRs. Now shows all 10 with
current CI/review state. All rebased onto current upstream/develop
and MERGEABLE as of today.
- Merged since v3.3.1: added v3.3.3 release (2026-04-24) with its
constituent merges — MemPalace#942, MemPalace#833, MemPalace#1097, MemPalace#1145, MemPalace#1147, MemPalace#1148/1150/1157
entity-detection overhaul (via @igorls's MemPalace#1175 stacked-PR rescue),
MemPalace#1166 palace-path security, MemPalace#340/MemPalace#1093 install regression, plus MemPalace#851
from the 2026-04-22 batch.
README.md: 22 additions & 6 deletions
@@ -31,7 +31,9 @@ Size (lines of diff) and Risk (maintainer-appetite + chance of a rework request)
|**Hooks**| Transcript auto-mining in `hook_precompact()` with correct defaults — `--mode convos` + `wing_<project>` derived from transcript path, plus a `hook_auto_mine` config flag (default `true`) for explicit opt-out |[Commented on #1083 on 2026-04-21](https://github.com/MemPalace/mempalace/issues/1083#issuecomment-4292630330) with the two-part design (opt-out + correct defaults), asked @raphaelsamy whether `hook_auto_mine: false` boolean is sufficient or they want finer-grained control, asked @bensig for direction. PR to follow once direction is confirmed. | medium | low-medium |`hooks_cli.py`, `config.py`, `tests/test_hooks_cli.py`|
|**Performance**|`bulk_check_mined()` paginated pre-fetch + `--workers` ThreadPoolExecutor concurrent mining |[Issue #1088](https://github.com/MemPalace/mempalace/issues/1088) filed 2026-04-21; [cross-ref comment](https://github.com/MemPalace/mempalace/issues/1088#issuecomment-4292570126) ties it to [#357](https://github.com/MemPalace/mempalace/issues/357) (parallel-mining corruption we could fix) and gates the PR on [#1071](https://github.com/MemPalace/mempalace/pull/1071) landing first (ORT thread cap, for bounded parallelism). | medium | medium |`palace.py`, `miner.py`|
|**Reliability**|`_get_client()` tries `get_collection` before `create_collection` — `get_or_create_collection` segfaults ChromaDB 1.5.x when the existing collection's metadata differs from the call-site metadata |[Issue #1089](https://github.com/MemPalace/mempalace/issues/1089) filed 2026-04-21 — documented the crash + fork workaround, cross-referenced [#974](https://github.com/MemPalace/mempalace/issues/974) / [#1071](https://github.com/MemPalace/mempalace/pull/1071) interaction (metadata drift risk post-merge), offered three paths: interim guard PR, chroma-core bug report, or close as covered. | small | medium |`backends/chroma.py`|
-|**Reliability**| Skip `_fix_blob_seq_ids` sqlite open after first successful migration via `.blob_seq_ids_migrated` marker — opening sqlite3 against a live ChromaDB 1.5.x file corrupts the next PersistentClient |[Issue #1090](https://github.com/MemPalace/mempalace/issues/1090) filed 2026-04-21 — documented the post-migration re-open crash pattern, fork's sentinel workaround, adjacent issues ([#722](https://github.com/MemPalace/mempalace/issues/722), [#832](https://github.com/MemPalace/mempalace/issues/832), [#1035](https://github.com/MemPalace/mempalace/issues/1035)), asked whether other palaces repro this. | small | medium |`backends/chroma.py`|
+|**Reliability**| Skip `_fix_blob_seq_ids` sqlite open after first successful migration via `.blob_seq_ids_migrated` marker — opening sqlite3 against a live ChromaDB 1.5.x file corrupts the next PersistentClient |[#1177](https://github.com/milla-jovovich/mempalace/pull/1177) filed 2026-04-24, closes [#1090](https://github.com/MemPalace/mempalace/issues/1090) — marker guard in `_fix_blob_seq_ids()`, CI green after ruff cleanup | small | medium |`backends/chroma.py`|
+|**Reliability**| Cross-process write lock at the `ChromaCollection` adapter — prevents HNSW segment corruption from concurrent `mcp_server.py` + `mempalace mine` writers (Claude Code spawns one per terminal, stop hooks spawn more). `fcntl.flock` on `$palace/.write.lock`; on Windows the lock is a no-op (palace-daemon recommended there). |[#1171](https://github.com/milla-jovovich/mempalace/pull/1171) filed 2026-04-24, redirected from mcp_server-only to the RFC 001 backend seam | medium | low |`backends/chroma.py`, `mcp_server.py`|
+|**Reliability**| Call `quarantine_stale_hnsw()` in `make_client()` itself + lower threshold 3600→300s — upstream's #1062 wires it at server startup but misses short-lived callers (hooks, CLI). A production segfault at 0.96 h of drift confirmed the 1 h threshold was too loose. |[#1173](https://github.com/milla-jovovich/mempalace/pull/1173) filed 2026-04-24, complementary to [#1062](https://github.com/MemPalace/mempalace/pull/1062)| small | low |`backends/chroma.py`|
|**Performance**| L1 importance pre-filter — `importance >= 3` first, full scan fallback |[#660](https://github.com/milla-jovovich/mempalace/pull/660)| small | low |`layers.py`|
|**Performance**|`miner.status()` paginates `col.get()` in 10 K-drawer batches — upstream's single `col.get(limit=total)` hits SQLite's max-variable limit on palaces with many thousands of drawers | tracked upstream in [#851](https://github.com/milla-jovovich/mempalace/pull/851) (merged 2026-04-22, also fixes #850 and #1015); fork's paginated version has been running since 2026-04-10 | small | low |`miner.py`|
|**Config**| Configurable chunking parameters — `chunk_size` (default 800 chars), `chunk_overlap` (100), `min_chunk_size` (50) written to `config.json` and exposed via `MempalaceConfig` properties |[#1024](https://github.com/milla-jovovich/mempalace/pull/1024) · addresses [#390](https://github.com/MemPalace/mempalace/issues/390) (default 800 exceeds MiniLM's 256-token cap; this lets users override) | small | low |`config.py`, `miner.py`, `convo_miner.py`|
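The `_get_client()` row above describes a try-get-then-create pattern in place of `get_or_create_collection`. A minimal sketch of the idea — the function name `get_or_create_safely` is illustrative, not the fork's actual helper, which lives in `backends/chroma.py`:

```python
def get_or_create_safely(client, name, metadata=None):
    """Try get_collection first; only create if the collection is missing.

    Avoids get_or_create_collection, which can segfault on ChromaDB 1.5.x
    when the stored collection's metadata differs from the metadata passed
    at the call site. If the collection exists, its stored metadata wins.
    """
    try:
        return client.get_collection(name)
    except Exception:
        # Collection does not exist yet; safe to create with our metadata.
        return client.create_collection(name, metadata=metadata)
```

Note the broad `except`: ChromaDB's "not found" exception type has shifted across releases, so a narrow catch would be version-fragile.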
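The `.blob_seq_ids_migrated` marker rows describe a run-once sentinel-file guard. A sketch of that pattern under assumed names (`fix_blob_seq_ids_once` and the `migrate` callback are hypothetical; the real guard is inside `_fix_blob_seq_ids()` in `backends/chroma.py`):

```python
from pathlib import Path

MARKER = ".blob_seq_ids_migrated"

def fix_blob_seq_ids_once(palace_dir, migrate):
    """Run the sqlite blob_seq_ids migration at most once per palace.

    Opening sqlite3 against a live ChromaDB 1.5.x file corrupts the next
    PersistentClient, so after the first successful migration a marker
    file short-circuits every later call before sqlite is touched.
    Returns True if the migration ran, False if it was skipped.
    """
    marker = Path(palace_dir) / MARKER
    if marker.exists():
        return False          # already migrated; never reopen sqlite
    migrate()                 # only reached on the first run
    marker.touch()            # written only after migrate() succeeds
    return True
```

Touching the marker only after `migrate()` returns means a crashed migration retries on the next call rather than being silently skipped.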
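The #1171 row's cross-process write lock can be sketched as a context manager around `fcntl.flock`. Names here (`palace_write_lock`) are illustrative, not the PR's API; the real seam is the `ChromaCollection` adapter:

```python
import os
import sys
import fcntl
from contextlib import contextmanager

@contextmanager
def palace_write_lock(palace_dir):
    """Exclusive cross-process write lock on $palace/.write.lock.

    Blocks until no other writer (mcp_server.py, `mempalace mine`,
    stop hooks) holds the lock, serializing HNSW segment writes.
    No-op on Windows, where fcntl is unavailable (palace-daemon is
    the recommended single-writer setup there).
    """
    if sys.platform == "win32":
        yield
        return
    lock_path = os.path.join(palace_dir, ".write.lock")
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until exclusive
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

`flock` locks are advisory and released automatically if the process dies, so a crashed miner never leaves the palace permanently locked.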
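The `miner.status()` row describes paging `col.get()` in 10 K-drawer batches instead of one `col.get(limit=total)`. A sketch of that loop (the counter name is made up; the fork's version is in `miner.py`):

```python
def count_drawers_paginated(col, batch=10_000):
    """Count drawers by paging col.get() in fixed-size batches.

    A single col.get(limit=total) can exceed SQLite's max-variable
    limit on palaces with many thousands of drawers; fixed batches
    keep each underlying query bounded. include=[] skips fetching
    embeddings/documents since only the ids are needed.
    """
    total, offset = 0, 0
    while True:
        page = col.get(limit=batch, offset=offset, include=[])
        ids = page["ids"]
        if not ids:
            break             # past the last drawer
        total += len(ids)
        offset += len(ids)
    return total
```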
@@ -348,13 +350,27 @@ Tools and patterns we're evaluating for the two open problems above. Not competi
## Open upstream PRs
+All 10 rebased onto current `upstream/develop` and `MERGEABLE` as of 2026-04-24.