Conversation
Surface content redundancy data so users can answer "if this drive dies, what do I lose?" — builds on existing content identity and volume systems.

Backend:
- New `redundancy.summary` library query with per-volume at-risk vs redundant byte/file counts and a library-wide replication score
- Extend `SearchFilters` with `at_risk`, `on_volumes`, `not_on_volumes`, `min_volume_count`, `max_volume_count` filters
- Add composite index migration on entries(content_id, volume_id)

Frontend:
- `/redundancy` dashboard with replication score, volume bars, at-risk callout
- `/redundancy/at-risk` paginated file list sorted by size
- `/redundancy/compare` two-volume comparison (unique/shared toggle)
- Sidebar ShieldCheck button linking to redundancy view

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Conflicts:
#   core/src/volume/fs/zfs.rs
- ZFS: override total_capacity for pool-root volumes using zfs list used+available. df under-reports pool-root Size because it only counts the root dataset's own used bytes plus avail — on a 60 TB raidz2 pool this shows as ~15 TB instead of ~62 TB. The pool root's own used property includes descendants, so used+available is the real usable capacity.
- Library stats: drop volumes where is_user_visible=false AND re-apply should_hide_by_mount_path retroactively so stale DB rows (detected before the Linux visibility filters existed) don't inflate reported capacity.
- Extract should_hide_by_mount_path into volume/utils as a shared helper used by both the list query and the stats calculation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Makes it possible to verify library-level capacity aggregation from the CLI — previously the list only showed mount, fingerprint, and tracked/mounted state, which meant debugging the ZFS pool capacity issue required querying the library DB directly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New docs/core/filesystems.mdx covering per-filesystem capabilities (CoW, pool-awareness, visibility filtering, capacity correction), platform detection strategies, the FilesystemHandler trait, Linux/macOS/ZFS visibility rules, the ZFS pool-root capacity problem and fix, copy strategy selection, and known limitations. Registered under File Management in both mint.json and docs.json.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- core/src/ops/search: wire redundancy filters (at_risk, on_volumes, not_on_volumes, min/max volume_count) through the search query; fix UUID-to-SQLite BLOB literal so volume UUID comparisons actually match (volumes.uuid is stored as a 16-byte BLOB, quoted-string comparison silently returned zero rows).
- apps/cli: new redundancy subcommand + populate the new SearchFilters fields from search args.
- packages/interface: redundancy at-risk and compare pages reworked to consume the new filter surface; explorer context/hook updates to support redundancy-scoped views.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- New WebContextMenuProvider + Radix DropdownMenu-based renderer anchored at cursor via a 1x1 virtual trigger. Handles separators, submenus, disabled, and the danger variant via text-status-error.
- useContextMenu now routes web clicks through the provider instead of parking data in unused local state, and trims leading/trailing/adjacent separators so condition-filtered menus don't render orphaned lines.
- Drop app-frame corner rounding on the web build.
- Add shrink-0 to the sidebar space switcher so the scrollable sibling can't compress it vertically.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…a dir

- build.rs runs `bun run build` in apps/web so `just dev-server` always embeds the latest UI. rerun-if-changed covers apps/web/src, packages/interface/src, and packages/ts-client/src so Rust-only edits skip the rebuild. Skips gracefully when bun isn't on PATH or SD_SKIP_WEB_BUILD is set; Dockerfile sets the latter since dist is pre-built and bun isn't in the Rust stage.
- Graceful shutdown was hanging because the browser holds the /events SSE stream open forever and axum waits for all connections to drain. After the first signal, arm a background force-exit on second Ctrl+C or 5s timeout so the process can't stick.
- Debug builds were starting from a fresh tempfile::tempdir() on every run (the TempDir handle dropped at end of the closure, deleting the dir we just took a path to). Default to ~/.spacedrive in debug so data persists and `just dev-server` shares a data dir with the Tauri app.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Uses zig cc as C/C++ compiler on TrueNAS Scale where /usr is read-only and no system gcc exists. Dev tools live at /mnt/pool/dev-tools/ (zig, cmake, make, extracted deb headers). Builds sd-server + sd-cli in ~4 min on a 12-core NAS. AI feature disabled (whisper.cpp C11 atomics incompatible with zig clang-18).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
On first load (fresh library, all stats zero), libraries.info used to calculate statistics synchronously before responding. On large libraries during active indexing this hangs indefinitely — the closure-table walk in calculate_file_statistics loads every descendant ID into a Vec then issues a WHERE IN(...) with millions of entries, which SQLite can't finish while the indexer is writing.

Now always return cached (possibly zero) stats and let the background recalculate_statistics task fill them in. The UI refreshes via the ResourceChanged event when the calculation completes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
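The pattern this commit describes — respond immediately with whatever statistics are cached and refresh them off the request path — can be sketched in a few lines. This is an illustrative Python analogue, not Spacedrive's actual Rust API; the class and method names are made up:

```python
import threading

class StatsCache:
    """Serve cached (possibly zero) stats; refresh them in the background."""

    def __init__(self):
        self._lock = threading.Lock()
        self._stats = {"total_files": 0}  # fresh library: all zeros

    def get(self):
        # Always cheap: never blocks on the expensive closure-table walk.
        with self._lock:
            return dict(self._stats)

    def recalculate_in_background(self, compute, on_done=None):
        # Heavy work runs off the request path; callers are notified when it
        # finishes (Spacedrive signals this via a ResourceChanged event).
        def worker():
            fresh = compute()
            with self._lock:
                self._stats = fresh
            if on_done:
                on_done()
        threading.Thread(target=worker, daemon=True).start()
```

The first `get()` may return stale zeros, which is exactly the trade the commit makes: a fast, possibly-stale answer beats a request that hangs behind the indexer.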
Core::new() registers default protocol handlers after starting networking, but swallows any failure (error is only logged). If the initial registration fails — e.g. on a host where start_networking hasn't fully set up the event loop command sender by the time register_default_protocol_handlers runs — the registry is left empty. A subsequent call to Core::init_networking() would see `services.networking().is_some()` and skip re-registration, permanently leaving protocols unregistered for the life of the process. sd-server calls init_networking() right after Core::new(), so it's the client most exposed to this.

Symptom: pairing over the web UI returns "Pairing protocol not registered" while the same library works fine from Tauri and mobile.

Fix: init_networking now queries the registry directly for the pairing handler and re-registers the default set if it's missing, independent of whether networking is already initialized.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
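The shape of this fix — key the decision on the registry's contents rather than on whether networking looks initialized — is easy to sketch. A hedged Python illustration (the names and dict-based registry are made up, not the real Spacedrive types):

```python
def ensure_default_protocols(registry: dict, defaults: dict) -> dict:
    """Idempotently (re-)register default protocol handlers.

    Checks the registry itself for the pairing handler, so a swallowed
    failure during the first registration is always repaired on the next
    call instead of being skipped forever.
    """
    if "pairing" not in registry:
        registry.update(defaults)
    return registry
```

Calling this on every init is safe: a populated registry is left untouched, and an empty one (the failure mode described above) gets the default set installed.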
Iroh's endpoint.bind() fails wholesale if any configured discovery service fails to initialize. MdnsDiscovery requires binding UDP :5353, which on most Linux systems (including TrueNAS) is already owned by avahi-daemon. Result: endpoint creation errors out with "Service 'mdns' error", the event loop never starts, command_sender stays None, and protocol registration fails — so sd-server has no working networking at all.

Make mDNS best-effort: on any error whose message mentions "mdns", retry endpoint creation with only pkarr + DNS discovery. Local-network auto-discovery is lost but remote pairing via node ID (which uses n0's DNS infrastructure, not mDNS) continues to work normally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The dual-path discovery in start_pairing_as_joiner_with_code used tokio::select! to race mDNS and relay. select! resolves on the first branch to complete — including errors — so a host that can't bind mDNS (e.g. a Linux box where avahi already owns UDP :5353) would fail pairing wholesale: mDNS discovery errors out in <1ms with "Failed to create mDNS discovery: Service 'mdns' error", that Err wins the race, and relay discovery gets cancelled before it can even begin.

Switch to futures::select_ok so we only return the error if EVERY discovery path has failed. mDNS failing immediately now leaves relay running to completion, which is the common case for remote pairing into a NAS.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
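The first-success semantics of `futures::select_ok` can be illustrated with a small asyncio analogue (a sketch, not the actual Rust code): an early failure is recorded instead of winning the race, and the call as a whole fails only when every path has failed.

```python
import asyncio

async def first_success(*coros):
    """Return the first successful result; fail only if all paths fail."""
    tasks = [asyncio.ensure_future(c) for c in coros]
    errors = []
    for fut in asyncio.as_completed(tasks):
        try:
            result = await fut
            for t in tasks:
                t.cancel()  # a winner exists; stop the remaining losers
            return result
        except Exception as exc:  # record the failure, keep racing
            errors.append(exc)
    raise RuntimeError(f"all discovery paths failed: {errors}")

async def mdns():
    # Fails immediately, like a host where avahi already owns UDP :5353.
    raise OSError("Failed to create mDNS discovery: Service 'mdns' error")

async def relay():
    await asyncio.sleep(0.05)  # slower, but succeeds
    return "relay-connection"
```

`asyncio.run(first_success(mdns(), relay()))` returns the relay result; with first-completion semantics (the `tokio::select!` behavior described above), the instant mDNS error would have won the race instead.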
Replace the Mintlify docs in `docs/` with a self-hosted Next.js 16 + Fumadocs 16.7 app. All 58 content pages rewritten to use Fumadocs primitives natively (Callout / Steps / Tabs / Accordion / Card / FlowDiagram) with no Mintlify shims or wrappers. `mint.json`, `docs.json`, `custom.css` deleted.

- App scaffold: Next 16.2, React 19, Tailwind v4, Biome, built-in Orama search
- Brand blue `#36A3FF` as `--color-fd-primary` on the neutral preset
- Page actions: Copy Markdown + Open-in-ChatGPT/Claude/Cursor/GitHub
- LLM export routes: `/llms.txt`, `/llms-full.txt`, `/:path.mdx`
- Per-page OG image generation via `next/og`
- Edit-on-GitHub wired to `spacedriveapp/spacedrive@main`
- `/` redirects to `/overview/introduction`
- Internal notes moved to `docs/internal/` (out of the published tree)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR Summary (Medium Risk)

Overview:
- Improves reliability/perf in core services; the search "fast" path now hydrates metadata.
- Fixes volume reporting and server/dev ergonomics: volume capacity/visibility reporting is tightened (hide stale/filtered mounts, ZFS pool-root capacity correction, CLI shows capacity + visibility), and server dev mode defaults the data dir to ~/.spacedrive.
- Ports docs to a new Next.js + Fumadocs app: replaces Mintlify docs with a self-hosted Fumadocs site including search, OG/LLM export routes, and new UI components/actions (plus updated docs ignores and lockfile).

Reviewed by Cursor Bugbot for commit c471b5a.
Walkthrough

This PR introduces comprehensive redundancy awareness across Spacedrive, adding a new core `redundancy.summary` query.
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI User
    participant CLICmd as CLI Command Handler
    participant CoreOps as Core Redundancy Ops
    participant DB as Database
    participant Result as Result Formatting
    CLI->>CLICmd: Execute redundancy.summary
    CLICmd->>CoreOps: summary(RedundancySummaryInput)
    CoreOps->>DB: Query volumes
    CoreOps->>DB: Query at-risk content bytes/counts
    CoreOps->>DB: Query redundant content bytes/counts
    CoreOps->>DB: Query total content bytes (deduplicated)
    DB-->>CoreOps: Per-volume aggregates + library totals
    CoreOps-->>CLICmd: RedundancySummaryOutput
    CLICmd->>Result: Format table/summary
    Result-->>CLI: Display replication score & per-volume breakdown
```
```mermaid
sequenceDiagram
    participant User as Frontend User
    participant Explorer as Explorer Component
    participant Context as Explorer Context
    participant Query as Search Query (Fast)
    participant DB as Database
    participant Hydrate as Metadata Hydration
    User->>Explorer: Apply redundancy filter
    Explorer->>Context: enterFilteredMode(filters, label)
    Context->>Explorer: Set mode.type = "filtered"
    Explorer->>Query: Execute with at_risk/on_volumes filters
    Query->>DB: Search entries with filter subqueries
    DB-->>Query: Matching entry IDs
    Query->>Hydrate: Batch fetch content_identities
    Hydrate->>DB: Fetch content_kind names
    Hydrate->>DB: Fetch sidecars by content_uuid
    Hydrate-->>Query: Hydrated metadata
    Query-->>Explorer: FileSearchResult[] with metadata
    Explorer-->>User: Display filtered file list
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Cursor Bugbot has reviewed your changes and found 4 potential issues.
```rust
        None,
        Some(2u32),
        "shared",
    ),
```
Shared compare mode returns false positive matches
Medium Severity
The CompareMode::Shared case sets on_volumes to [A, B] and min_volume_count to 2, intending to find files on both volumes. However, on_volumes uses an IN clause that matches content on A or B, and min_volume_count checks the global volume count (not restricted to A and B). A file on volumes A and C (but not B) would incorrectly match. The filter needs to require presence on both specific volumes, e.g. via a GROUP BY with HAVING COUNT(DISTINCT v.uuid) = 2 scoped to the two target volumes.
Additional Locations (1)
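The false positive described in this finding can be reproduced on a toy schema. An illustrative Python/sqlite3 sketch (not Spacedrive's actual tables or query) contrasting the IN-plus-global-count shape with a `GROUP BY ... HAVING` scoped to the two target volumes:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE entries (content_id TEXT, volume TEXT);
INSERT INTO entries VALUES
  ('file1', 'A'), ('file1', 'B'),
  ('file2', 'A'), ('file2', 'C');
""")

# Buggy shape: IN (A, B) plus a global volume count >= 2 also matches
# file2, which lives on A and C but not on B.
buggy = db.execute("""
    SELECT DISTINCT content_id FROM entries e
    WHERE volume IN ('A', 'B')
      AND (SELECT COUNT(DISTINCT volume) FROM entries
           WHERE content_id = e.content_id) >= 2
""").fetchall()

# Requiring presence on both target volumes excludes it.
correct = db.execute("""
    SELECT content_id FROM entries
    WHERE volume IN ('A', 'B')
    GROUP BY content_id
    HAVING COUNT(DISTINCT volume) = 2
""").fetchall()
```

`buggy` returns both files; `correct` returns only `file1`, the content genuinely shared between A and B.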
```rust
    lib_redundant_bytes as f64 / (lib_at_risk_bytes + lib_redundant_bytes) as f64
} else {
    0.0
};
```
Replication score uses overcounted per-volume redundant bytes
Medium Severity
lib_redundant_bytes is accumulated by summing per-volume redundant bytes, but redundant content (on 2+ volumes) gets counted once per volume it appears on. A 1 GB file on 3 volumes contributes 3 GB to lib_redundant_bytes. The replication_score denominator (lib_at_risk_bytes + lib_redundant_bytes) is therefore inflated, and total_redundant_bytes in the output doesn't match its documented meaning of deduplicated redundant content. The already-available total_unique_content_bytes from Query 4 is the correct denominator.
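The overcounting in this finding is plain arithmetic. With made-up sizes (1 GB of at-risk data plus one 1 GB file replicated on three volumes):

```python
GB = 1024 ** 3
lib_at_risk_bytes = 1 * GB

# Summing per-volume redundant bytes counts the same content once per volume.
per_volume_redundant = [1 * GB, 1 * GB, 1 * GB]  # one file, three volumes
summed_redundant = sum(per_volume_redundant)      # 3 GB: inflated
deduplicated_redundant = 1 * GB                   # unique content bytes (Query 4)

inflated_score = summed_redundant / (lib_at_risk_bytes + summed_redundant)
correct_score = deduplicated_redundant / (lib_at_risk_bytes + deduplicated_redundant)
# inflated_score == 0.75 overstates replication; correct_score == 0.5
```

Half the library's unique content is at risk, yet the summed denominator reports a 0.75 replication score: exactly the inflation the reviewer flags.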
```shell
  -j10 "$@"

echo "Binaries at:"
ls -lh target/release/sd-server target/release/sd-cli 2>/dev/null
```
Personal TrueNAS build script committed to repo root
Low Severity
build-server.sh at the repo root contains hardcoded paths specific to a single TrueNAS machine (/mnt/pool/dev-tools/, /mnt/pool/spacedrive). This appears to be a personal development helper script that was accidentally included in the commit. It isn't referenced by any CI, Dockerfile, or justfile target.
```rust
    } else {
        format!("{:.2} {}", value, UNITS[unit])
    }
}
```
Duplicated format_bytes utility across new CLI modules
Low Severity
This commit adds a new format_bytes in volume/mod.rs and a near-identical format_bytes_u64 in redundancy/mod.rs. The codebase already has at least three other copies (main.rs, index/mod.rs, sync/mod.rs). These differ only in decimal precision (.1 vs .2) and zero-handling. A single shared utility would reduce maintenance burden and ensure consistent formatting.
Additional Locations (1)
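The consolidation this finding suggests is small. A hedged sketch of one shared helper — written in Python for brevity rather than the codebase's Rust, with the differing decimal precision lifted into a parameter:

```python
def format_bytes(value: int, precision: int = 1) -> str:
    """One shared byte formatter; `precision` covers the .1 vs .2 variants."""
    units = ["B", "KB", "MB", "GB", "TB", "PB"]
    if value == 0:
        return "0 B"  # single, consistent zero-handling
    size = float(value)
    unit = 0
    while size >= 1024 and unit < len(units) - 1:
        size /= 1024
        unit += 1
    return f"{size:.{precision}f} {units[unit]}"
```

Each existing call site would then pass its preferred precision instead of carrying its own copy of the loop.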
```rust
    all_volumes.iter().map(|v| (v.id, v.display_name.clone())).collect();

// Helper to build volume ID WHERE clause
let volume_where = match &volume_id_filter {
```
Edge case: if volume_uuids is non-empty but none resolve to DB IDs, ids becomes empty and this ends up generating IN () (invalid SQL). Might be worth guarding this (e.g. short-circuit to an empty result, or use something like AND 0).
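A hedged sketch of the guard (illustrative, not the actual Rust helper): short-circuit the empty-ids case to a never-true predicate so no `IN ()` fragment is ever emitted.

```python
def volume_where(ids):
    """Build the volume-ID WHERE fragment, guarding the empty case."""
    if not ids:
        return "AND 0"  # never true: zero resolved volumes means zero rows
    placeholders = ", ".join("?" for _ in ids)
    return f"AND e.volume_id IN ({placeholders})"
```

Returning `AND 0` keeps the surrounding SQL syntactically uniform while guaranteeing an empty result set, which matches the "short-circuit to an empty result" option above.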
```rust
let at_risk_sql = format!(
    r#"
    SELECT e.volume_id as volume_id,
           COUNT(*) as file_count,
```
COUNT(*) / SUM(ci.total_size) here operates per entry, so the same content_id referenced multiple times on a volume can inflate both counts and bytes. If the goal is unique content-at-risk bytes, consider aggregating distinct (volume_id, content_id) before joining/summing.
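The inflation can be reproduced on a toy schema (illustrative Python/sqlite3, not Spacedrive's real tables): one 100-byte content referenced by two entries on the same volume.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE entries (volume_id INTEGER, content_id INTEGER);
CREATE TABLE content_identities (id INTEGER, total_size INTEGER);
INSERT INTO content_identities VALUES (10, 100);
INSERT INTO entries VALUES (1, 10), (1, 10);
""")

# Per-entry aggregation double-counts: reports 2 files / 200 bytes.
inflated = db.execute("""
    SELECT COUNT(*), SUM(ci.total_size)
    FROM entries e
    JOIN content_identities ci ON ci.id = e.content_id
    GROUP BY e.volume_id
""").fetchone()

# Aggregating distinct (volume_id, content_id) first: 1 file / 100 bytes.
deduped = db.execute("""
    SELECT COUNT(*), SUM(ci.total_size)
    FROM (SELECT DISTINCT volume_id, content_id FROM entries) e
    JOIN content_identities ci ON ci.id = e.content_id
    GROUP BY e.volume_id
""").fetchone()
```

The inner `SELECT DISTINCT` is the "aggregate distinct (volume_id, content_id) before joining/summing" shape the comment recommends.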
```ts
setLoading(true);

try {
    const promise = fetch(markdownUrl).then((res) => res.text());
```
ClipboardItem expects Blob | Promise<Blob> values; this is currently passing a Promise<string>, and it also caches failed fetches forever. I think writeText(await promise) is enough here.
```diff
-const promise = fetch(markdownUrl).then((res) => res.text());
+const promise = fetch(markdownUrl).then(async (res) => {
+  if (!res.ok) throw new Error(`Failed to fetch markdown: ${res.status}`);
+  return res.text();
+});
 cache.set(markdownUrl, promise);
 await navigator.clipboard.writeText(await promise);
```
```ts
const scan = source.getPages().map(getLLMText);
const scanned = await Promise.all(scan);

return new Response(scanned.join('\n\n'));
```
Tiny nit: consider setting an explicit content type here so clients don’t have to guess.
```diff
-return new Response(scanned.join('\n\n'));
+return new Response(scanned.join('\n\n'), {
+  headers: { 'Content-Type': 'text/plain; charset=utf-8' },
+});
```
```rust
            warn!("Graceful shutdown timed out after 5s, forcing exit");
        }
    }
    std::process::exit(0);
```
Minor: the forced-exit path uses std::process::exit(0), which looks like a clean shutdown to supervisors/CI. If this is a true “we got stuck” fallback, a non-zero exit code (or a signal-ish one like 130/143) may be more informative.
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docs/content/docs/core/api.mdx (1)
264-264: ⚠️ Potential issue | 🟡 Minor

Stale relative link escapes the docs content root.

[apps/tauri/DAEMON_SETUP.md](../../apps/tauri/DAEMON_SETUP.md) resolves outside docs/content/docs/ and will 404 on the Fumadocs site (the file lives in the monorepo, not in the docs tree). Either link to the file on GitHub or remove the link.

Proposed fix

```diff
-See [apps/tauri/DAEMON_SETUP.md](../../apps/tauri/DAEMON_SETUP.md) for configuration details.
+See [apps/tauri/DAEMON_SETUP.md](https://github.com/spacedriveapp/spacedrive/blob/main/apps/tauri/DAEMON_SETUP.md) for configuration details.
```
core/src/ops/search/input.rs (1)
206-247: ⚠️ Potential issue | 🟡 Minor

Validate the volume-count range.

`min_volume_count` and `max_volume_count` can currently be inverted, unlike date and size ranges. Rejecting that early keeps the search contract consistent.

🛡️ Proposed validation

```diff
 if let Some(size_range) = &self.filters.size_range {
     if let (Some(min), Some(max)) = (size_range.min, size_range.max) {
         if min > max {
             return Err("Size range min must be less than max".to_string());
         }
     }
 }
+
+if let (Some(min), Some(max)) = (
+    self.filters.min_volume_count,
+    self.filters.max_volume_count,
+) {
+    if min > max {
+        return Err("Volume count min must be less than or equal to max".to_string());
+    }
+}

 Ok(())
```
🟡 Minor comments (16)
docs/content/docs/overview/history.mdx-21-21 (1)
21-21: ⚠️ Potential issue | 🟡 Minor

Hyphenate compound adjective in heading.
Line 21 should use “Open-source launch” for correct compound-adjective usage.
docs/content/docs/overview/history.mdx-31-31 (1)
31-31: ⚠️ Potential issue | 🟡 Minor

Use “open-source” before noun for consistency.
At Line 31, prefer “open-source contributors” (hyphenated adjective).
docs/content/docs/overview/self-hosting.mdx-233-233 (1)
233-233: ⚠️ Potential issue | 🟡 Minor

Fix subject–verb agreement in monitoring section.
Use “Spikes” instead of “Spike” to match “Memory usage”.
✏️ Suggested edit
```diff
-Memory usage typically ranges from 100–500MB depending on active jobs. Spike during intensive operations like thumbnail generation.
+Memory usage typically ranges from 100–500MB depending on active jobs. Spikes during intensive operations like thumbnail generation.
```
packages/interface/src/hooks/useContextMenu.ts-59-76 (1)
59-76: ⚠️ Potential issue | 🟡 Minor

`collapseSeparators` can produce empty submenus that still render as clickable items.

If a submenu's contents were entirely separators (or became so after upstream filtering), this function recurses and returns `[]` for that submenu, but the parent item is still kept with `submenu: []`. Downstream, WebContextMenuContext.tsx checks `item.submenu && item.submenu.length > 0` and falls back to rendering the parent as a regular `DropdownMenu.Item` with no `onClick`: an inert but clickable row.

Consider either dropping items whose submenu collapsed to empty, or marking them disabled.

♻️ Proposed fix

```diff
-        } else if (item.submenu) {
-            result.push({ ...item, submenu: collapseSeparators(item.submenu) });
+        } else if (item.submenu) {
+            const submenu = collapseSeparators(item.submenu);
+            if (submenu.length === 0) continue;
+            result.push({ ...item, submenu });
         } else {
             result.push(item);
         }
```
packages/interface/src/hooks/useContextMenu.ts-146-156 (1)
146-156: ⚠️ Potential issue | 🟡 Minor

Avoid `any` on the Tauri bridge access.

As per coding guidelines: "Never use `any` type in TypeScript - use `unknown` with type guards if needed". The cast `(window as any).__SPACEDRIVE__?.showContextMenu` should be replaced with a typed declaration so consumers don't lose signature safety (wrong arg shapes here wouldn't be caught at compile time).

🛠️ Suggested typing

Add a module-scoped declaration (either in this file or a shared types/window.d.ts):

```ts
declare global {
  interface Window {
    __SPACEDRIVE__?: {
      showContextMenu?: (
        items: ContextMenuItem[],
        position: { x: number; y: number }
      ) => Promise<void>;
    };
  }
}
```

Then:

```diff
-const nativeShow = (window as any).__SPACEDRIVE__?.showContextMenu;
+const nativeShow = window.__SPACEDRIVE__?.showContextMenu;
```
docs/components/ai/page-actions.tsx-42-54 (1)
42-54: ⚠️ Potential issue | 🟡 Minor

Preserve the copy handler and loading-disabled state.

Because `{...props}` comes after `disabled` and `onClick`, a caller can accidentally replace the copy behavior or re-enable the button while a copy is in progress.

Proposed fix

```diff
 <button
+    {...props}
     type="button"
-    disabled={isLoading}
+    disabled={isLoading || props.disabled}
     onClick={onClick}
-    {...props}
     className={cn(
```
docs/components/ai/page-actions.tsx-28-35 (1)
28-35: ⚠️ Potential issue | 🟡 Minor

Don't cache failed Markdown fetches.

Line 29 caches the promise before checking res.ok, so a transient failure or 404 response can be reused forever and copied as text. Validate the response and delete the cache entry on rejection.

Proposed fix

```diff
 try {
-    const promise = fetch(markdownUrl).then((res) => res.text());
+    const promise = fetch(markdownUrl)
+        .then((res) => {
+            if (!res.ok) {
+                throw new Error(`Failed to fetch Markdown: ${res.status}`);
+            }
+            return res.text();
+        })
+        .catch((error) => {
+            cache.delete(markdownUrl);
+            throw error;
+        });
     cache.set(markdownUrl, promise);
```
docs/content/docs/core/file-sync.mdx-318-318 (1)
318-318: ⚠️ Potential issue | 🟡 Minor

Tighten the closing sentence wording.

“Organizational preferences” reads more naturally here than “organization preferences.”

Suggested wording fix

```diff
-The index-based design ensures these workflows remain fast and reliable while respecting your organization preferences.
+The index-based design ensures these workflows remain fast and reliable while respecting your organizational preferences.
```
core/src/ops/redundancy/summary/input.rs-10-12 (1)
10-12: ⚠️ Potential issue | 🟡 Minor

Define the empty-list semantics for `volume_uuids`.

`Option<Vec<Uuid>>` allows omitted/null/empty/non-empty states, but Line 10 only documents `None` = all volumes. Generated clients can pass `[]`, so the public contract should state whether that means “all volumes” or “no volumes”.

📝 Proposed doc clarification

```diff
-    /// Optional: restrict summary to specific volumes. None = all volumes.
+    /// Optional: restrict summary to specific volumes.
+    ///
+    /// `None` and an empty list mean all volumes; a non-empty list limits the
+    /// summary to the specified volume UUIDs.
     #[serde(default)]
     pub volume_uuids: Option<Vec<Uuid>>,
```
docs/content/docs/core/cloud-integration.mdx-377-380 (1)
377-380: ⚠️ Potential issue | 🟡 Minor

Fix inconsistent security link: change /security to /core/security on line 380.

Line 380 links to /security, but all other security references in the docs use /core/security (pairing.mdx, networking.mdx, key-manager.mdx). There is no top-level /security page, making this link a 404.

Proposed fix

```diff
 - [Jobs](/core/jobs) - Monitor long-running operations
-- [Security](/security) - Credential and encryption details
+- [Security](/core/security) - Credential and encryption details
```
docs/app/llms.txt/route.ts-13-13 (1)
13-13: ⚠️ Potential issue | 🟡 Minor

Set an explicit plain-text content type.

`/llms.txt` is a text endpoint; returning it without `Content-Type` can make clients rely on sniffing/defaults.

🛠️ Proposed fix

```diff
-	return new Response(lines.join('\n'));
+	return new Response(lines.join('\n'), {
+		headers: {
+			'Content-Type': 'text/plain; charset=utf-8',
+		},
+	});
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/app/llms.txt/route.ts` at line 13, The response for the /llms.txt endpoint currently returns plain text via new Response(lines.join('\n')) without a Content-Type header; update the Response creation in the route handler to include headers with 'Content-Type': 'text/plain; charset=utf-8' so clients don't need to sniff the type—locate the new Response(...) that uses lines.join('\n') and add the headers option accordingly.

docs/content/docs/overview/get-started.mdx-19-26 (1)
19-26: ⚠️ Potential issue | 🟡 Minor

Keep setup steps aligned with the stated platform support.

Line 19 says this release supports only macOS and Linux, but later steps still tell users to run a Windows installer and install on mobile. That makes the quickstart unusable for unsupported platforms.

📝 Proposed wording adjustment

```diff
-Follow your platform's standard installation process. On macOS, drag Spacedrive to Applications. On Windows, run the installer. Linux users can use AppImage or package managers.
+Follow your platform's standard installation process. On macOS, drag Spacedrive to Applications. Linux users can use AppImage or package managers.
@@
-Download and install Spacedrive on another computer or mobile device.
+Download and install Spacedrive on another supported desktop device.
```

Also applies to: 221-240

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/content/docs/overview/get-started.mdx` around lines 19 - 26, Update the "Run the installer" section to match the stated platform support (Spacedrive v2.0.0-alpha.1 supports only macOS and Linux) by removing or gating Windows/iOS/Android installation instructions and instead providing macOS and Linux-specific steps (e.g., macOS: drag to Applications; Linux: AppImage or package manager commands) and include a short note that Windows, iOS, and Android are coming in v2.0.0-alpha.2; apply the same alignment to the later installer sections referenced in the document (the other installer blocks/Step sections).

docs/content/docs/overview/add-index-locations.mdx-112-112 (1)
112-112: ⚠️ Potential issue | 🟡 Minor

Hyphenate the compound adjective.

"Full-text search" reads correctly here because it modifies "search".

📝 Proposed wording fix

```diff
-- Full text search
+- Full-text search
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/content/docs/overview/add-index-locations.mdx` at line 112, The heading text "Full text search" should use a hyphen as a compound adjective; update the heading or phrase "Full text search" in the document (search for the string "Full text search" in the file) to "Full-text search" so it correctly hyphenates the compound modifier.

apps/cli/src/domains/redundancy/mod.rs-85-107 (1)
85-107: ⚠️ Potential issue | 🟡 Minor

Reject comparing a volume with itself.

The UI prevents selecting the same volume twice, but the CLI allows it. That makes `unique-a`/`unique-b` misleading and `shared` no longer a real two-volume comparison.

🐛 Proposed validation

```diff
 async fn run_compare(ctx: &Context, args: CompareArgs) -> Result<()> {
+	if args.volume_a == args.volume_b {
+		anyhow::bail!("volume_a and volume_b must be different");
+	}
+
 	let (on, not_on, min_count, label) = match args.mode {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/cli/src/domains/redundancy/mod.rs` around lines 85 - 107, run_compare currently allows comparing the same volume twice which breaks semantics for CompareMode::UniqueA/UniqueB/Shared; add an early validation in run_compare that checks if args.volume_a == args.volume_b and return an error (or bail) with a clear message (e.g., "cannot compare a volume with itself") before the match on args.mode so the CLI mirrors the UI restriction; reference run_compare, CompareArgs, CompareMode, args.volume_a and args.volume_b when adding this guard.

docs/content/docs/core/filesystems.mdx-186-192 (1)
186-192: ⚠️ Potential issue | 🟡 Minor

Fix the pass count in the capacity section.

The text says "three passes," but the list has five steps.

```diff
-`calculate_volume_capacity` (and `_static`) in `core/src/library/mod.rs` aggregates per-volume capacity with three passes:
+`calculate_volume_capacity` (and `_static`) in `core/src/library/mod.rs` aggregates per-volume capacity with five passes:
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/content/docs/core/filesystems.mdx` around lines 186 - 192, The description for calculate_volume_capacity (and _static) in core/src/library/mod.rs incorrectly says "three passes" while the enumerated steps list five passes; update the prose to reflect the correct count (e.g., "five passes" or "multiple passes") so the header matches the detailed steps for calculate_volume_capacity and _static.

docs/content/docs/core/filesystems.mdx-15-15 (1)
15-15: ⚠️ Potential issue | 🟡 Minor

Line 23 overpromises CoW behavior; reword to indicate best-effort semantics.

The statement that `FastCopyStrategy` "produce[s] metadata-only copies" when on the same filesystem is too strong. The implementation delegates entirely to `std::fs::copy()` without explicit clone/block-clone APIs, so the fallback behavior is automatic and platform-dependent. Reword to "may produce metadata-only copies" or "attempts fast-copy optimizations" to match the actual best-effort semantics and align with the more accurate phrasing already used in lines 206–212.

Secondary: Line 131 includes `/home` in the system mount points to hide. On most Linux systems, `/home` is a user data directory, not a system mount, and hiding it will exclude real user volumes from capacity stats and visibility.

Also applies to: 206–212 (already accurate; no changes needed)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/content/docs/core/filesystems.mdx` at line 15, Update the wording that overstates CoW guarantees: change the sentence describing FastCopyStrategy from asserting it "produce[s] metadata-only copies" to a best-effort phrasing such as "may produce metadata-only copies" or "attempts fast-copy optimizations" to reflect delegation to std::fs::copy; edit the FastCopyStrategy description accordingly. Also remove /home from the list of system mount points to hide (so it isn't treated as a system mount and excluded from user volume/capacity stats) in the same document.
🧹 Nitpick comments (19)
docs/content/docs/overview/whitepaper.mdx (1)
16-18: Optional wording polish to reduce repeated sentence openings.

All three bullets start with "To …". Consider varying lead-ins for smoother flow.

Suggested wording tweak

```diff
-1. **To provide a definitive technical blueprint**: it is the single source of truth for the Spacedrive V2 architecture, detailing the core concepts, design decisions, and innovations that power the new system.
-2. **To re-engage our community**: we want to share our renewed vision and technical direction transparently, providing a clear path for developers, contributors, and users to rally behind.
-3. **To guide future development**: the document serves as a roadmap and a set of guiding principles, ensuring that all future contributions align with the core architectural tenets of performance, privacy, and user control.
+1. **A definitive technical blueprint**: this is the single source of truth for the Spacedrive V2 architecture, detailing the core concepts, design decisions, and innovations that power the new system.
+2. **Community re-engagement**: we share our renewed vision and technical direction transparently, providing a clear path for developers, contributors, and users to rally behind.
+3. **Guidance for future development**: the document serves as a roadmap and a set of guiding principles, ensuring that all future contributions align with the core architectural tenets of performance, privacy, and user control.
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/content/docs/overview/whitepaper.mdx` around lines 16 - 18, Revise the three bullets that currently all start with "To ...": change the lead-ins so they vary (e.g., "Provide a definitive technical blueprint:", "Re-engage our community by:", "Guide future development:") while keeping the bolded headings and original intent intact; update the lines beginning "**To provide a definitive technical blueprint**", "**To re-engage our community**", and "**To guide future development**" to use distinct openers for smoother flow without altering the core content.

docs/content/docs/overview/history.mdx (1)
200-200: Normalize "open-source" in COSS phrase.

At Line 200, consider "commercial open-source software (COSS) model" for consistent style.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/content/docs/overview/history.mdx` at line 200, Update the phrasing "commercial open source software (COSS) model" to use a hyphenated adjective: change the phrase to "commercial open-source software (COSS) model" wherever the exact string "commercial open source software (COSS) model" appears (e.g., the bolded "**Solution**: **commercial open source software (COSS) model**") to normalize style and ensure consistency.

packages/interface/src/contexts/WebContextMenuContext.tsx (2)
133-144: `onSelect` drops the event and blocks any "keep menu open" patterns.

Radix passes an `Event` to `onSelect` that consumers can `preventDefault()` on to keep the menu open (useful for toggleable items). Discarding it here means that pattern is unreachable for web users, while Tauri native menus may behave differently. Consider forwarding it so a future `ContextMenuItem` can opt into it:

♻️ Proposed

```diff
-	onSelect={() => item.onClick?.()}
+	onSelect={(e) => item.onClick?.(e as unknown as Event)}
```

(Or extend `ContextMenuItem.onClick` to accept an optional event.)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/interface/src/contexts/WebContextMenuContext.tsx` around lines 133 - 144, The onSelect handler currently drops Radix's Event by calling item.onClick?.() with no arguments; change it to forward the event so consumers can call event.preventDefault() to keep the menu open (or update ContextMenuItem.onClick to accept an optional Event). Concretely, in the DropdownMenu.Item onSelect callback pass the received event through to item.onClick (e.g., onSelect={(event) => item.onClick?.(event)}) so MenuItemInner/menuItemClasses consumers can opt into the keep-open pattern while preserving existing behavior if they ignore the arg.
94-96: Key derived from array index + optional label can collide and re-mount on reorder.
`${index}-${item.label ?? item.type ?? "item"}` is reasonable when items are static, but for conditionally-filtered menus (the whole purpose of `useContextMenu`'s `condition`) adjacent reshuffles will reuse keys for different logical items, potentially preserving open submenu state across unrelated entries. If items have a stable identifier (e.g., `keybindId` or a caller-provided `id`), preferring that would be safer.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/interface/src/contexts/WebContextMenuContext.tsx` around lines 94 - 96, The key generation in renderItems (function renderItems(items: ContextMenuItem[])) uses the array index plus label/type which can collide when items are filtered or reordered; change key creation to prefer a stable identifier on the item (e.g., item.id or item.keybindId) and only fall back to a deterministic combination (e.g., `${item.id ?? item.keybindId ?? item.type ?? item.label ?? index}`) so that React keys remain stable across reorders and avoid accidental remounts/submenu state leakage; update the key assignment where const key = ... is declared and ensure ContextMenuItem typings/documentation reflect the preferred id field if required.

core/src/volume/fs/zfs.rs (1)
428-449: Pool-root capacity override: verify behavior under reservations/quotas.
`used + available` on the pool-root dataset is a good approximation of pool capacity, but note that ZFS `available` on the root reflects usable space after accounting for child reservations, refreservations, and quotas. On pools with large child reservations, the reported `total_capacity` will be lower than the raw pool size (and lower than what `zpool list` would show for `SIZE`). That's probably acceptable (it reflects true usable capacity for the user), but worth capturing in the doc comment so future readers don't try to reconcile it with `zpool list SIZE`.

Also consider guarding against the pathological case where `used + available == 0` (e.g. malformed `zfs list` rows parsed as zero) — today that would silently zero out `volume.total_capacity`. A simple `if pool_total > 0` around the assignment would be defensive.

🛡️ Suggested guard

```diff
-	if dataset_info.name == dataset_info.pool_name {
-		let pool_total = dataset_info
-			.used_bytes
-			.saturating_add(dataset_info.available_bytes);
-		debug!(
-			"ZFS pool root '{}' at {}: overriding total_capacity {} → {} (used={}, avail={})",
-			dataset_info.pool_name,
-			mount_point,
-			volume.total_capacity,
-			pool_total,
-			dataset_info.used_bytes,
-			dataset_info.available_bytes,
-		);
-		volume.total_capacity = pool_total;
-		volume.available_space = dataset_info.available_bytes;
-	}
+	if dataset_info.name == dataset_info.pool_name {
+		let pool_total = dataset_info
+			.used_bytes
+			.saturating_add(dataset_info.available_bytes);
+		if pool_total > 0 {
+			debug!(
+				"ZFS pool root '{}' at {}: overriding total_capacity {} → {} (used={}, avail={})",
+				dataset_info.pool_name,
+				mount_point,
+				volume.total_capacity,
+				pool_total,
+				dataset_info.used_bytes,
+				dataset_info.available_bytes,
+			);
+			volume.total_capacity = pool_total;
+			volume.available_space = dataset_info.available_bytes;
+		}
+	}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/src/volume/fs/zfs.rs` around lines 428 - 449, The pool-root override currently sets volume.total_capacity = pool_total (computed as dataset_info.used_bytes + dataset_info.available_bytes) which can silently zero the capacity if the parsed values are malformed and also doesn't call out that available_bytes reflects usable space after quotas/reservations; update the block around the dataset_info.name == dataset_info.pool_name check to (1) add a brief doc comment explaining that pool_total is usable capacity post child reservations/quotas (so it may differ from zpool SIZE), and (2) guard the assignment with an explicit check such as only assigning volume.total_capacity and volume.available_space when pool_total > 0 (or otherwise treat zero as a parse error), leaving the existing df-derived values intact on zero to avoid suppressing valid capacity. Ensure references to dataset_info.used_bytes, dataset_info.available_bytes, pool_total, and volume.total_capacity are used in the new check and comment.

docs/app/global.css (1)
5-5: Stylelint false positive on `@source`.

`@source` is a Tailwind v4 at-rule used to register additional content globs; the stylelint `scss/at-rule-no-unknown` hit is a false positive since the project is using Tailwind v4 (`@import 'tailwindcss'`). Consider adding `@source` to the stylelint `ignoreAtRules` list to silence this if stylelint runs in CI.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/app/global.css` at line 5, Stylelint is flagging the Tailwind v4 at-rule "@source" in docs/app/global.css as unknown; update the stylelint configuration (the scss/at-rule-no-unknown rule's ignoreAtRules list) to include "source" (or "@source") so the linter ignores this Tailwind v4 at-rule; reference the at-rule "@source" and the stylelint rule "scss/at-rule-no-unknown" when making the change.

core/src/volume/utils.rs (1)
169-182: Shared helper looks good; consider documenting `/home` hiding behavior.

`should_hide_by_mount_path` delegates to `is_system_mount_point`, which treats `/home` as a system mount and hides it. On many Linux distros `/home` is a dedicated partition holding all user data, so hiding it by default may surprise users running Spacedrive on desktop Linux (as opposed to TrueNAS, which this filter set is clearly tuned for). Not introduced by this PR — the `/home` entry predates it — but since this PR is promoting the predicate to a shared, cross-call-site utility, it's worth confirming the intent and/or documenting it on `is_system_mount_point`.

Tests at Line 364-400 correctly pin down the TrueNAS cases; consider adding a comment next to the `/home` assertion explaining why it's classified as system.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/src/volume/utils.rs` around lines 169 - 182, The shared helper should_hide_by_mount_path delegates to is_system_mount_point which classifies /home as a system mount; update the code and tests to document that intent: add a brief doc comment on is_system_mount_point explaining why /home is treated as a system mount (e.g., TrueNAS/engineered partition use-cases vs desktop Linux ambiguity) and add an inline comment next to the /home assertion in the tests (the cases around lines 364–400) clarifying the rationale so callers of should_hide_by_mount_path understand the deliberate behavior rather than a bug.

core/src/service/network/core/mod.rs (1)
251-283: Avoid gating the mDNS fallback on error text.

Line 260 only retries DNS-only mode when Iroh's error string contains `"mdns"`. If the bind failure is reported as a lower-level socket error like "address already in use", startup will still fail even though the DNS-only retry could recover. Consider retrying once without mDNS for the first endpoint-build failure, then return a combined error if the fallback also fails.

Suggested retry shape

```diff
-	Err(e) => {
-		let err_str = e.to_string().to_lowercase();
-		if err_str.contains("mdns") {
-			self.logger
-				.warn(&format!(
-					"mDNS discovery unavailable ({}); retrying with pkarr + DNS only. \
-					Local-network auto-discovery is disabled on this host, but remote \
-					pairing via node ID will still work.",
-					e
-				))
-				.await;
-			let ep = build_endpoint(false).await.map_err(|e| {
-				NetworkingError::Transport(format!("Failed to create endpoint: {}", e))
-			})?;
-			self.logger
-				.info("Endpoint bound successfully without mDNS (pkarr + DNS only)")
-				.await;
-			ep
-		} else {
-			return Err(NetworkingError::Transport(format!(
-				"Failed to create endpoint: {}",
-				e
-			)));
-		}
-	}
+	Err(with_mdns_err) => {
+		self.logger
+			.warn(&format!(
+				"Endpoint creation with mDNS failed ({}); retrying with pkarr + DNS only. \
+				Local-network auto-discovery may be disabled on this host, but remote \
+				pairing via node ID can still work.",
+				with_mdns_err
+			))
+			.await;
+
+		match build_endpoint(false).await {
+			Ok(ep) => {
+				self.logger
+					.info("Endpoint bound successfully without mDNS (pkarr + DNS only)")
+					.await;
+				ep
+			}
+			Err(without_mdns_err) => {
+				return Err(NetworkingError::Transport(format!(
+					"Failed to create endpoint with mDNS: {}; retry without mDNS also failed: {}",
+					with_mdns_err, without_mdns_err
+				)));
+			}
+		}
+	}
```

Please verify against the Iroh version in this repo that mDNS bind conflicts always include `"mdns"` in the propagated error before keeping the string check.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/src/service/network/core/mod.rs` around lines 251 - 283, The current logic only retries with DNS-only if the error string from build_endpoint(true).await contains "mdns"; change it to always attempt a single fallback call to build_endpoint(false).await when the initial build_endpoint(true) fails, logging the original error via self.logger.warn and then trying the fallback; if the fallback succeeds log success via self.logger.info and return the endpoint, otherwise return a NetworkingError::Transport that combines both the original and fallback errors for clarity. Update the error mapping around build_endpoint(false) and the early return to include both errors, and as part of this change verify against the repo's Iroh version whether mDNS bind failures always contain "mdns" before removing the original substring check so you don't lose any special-case behavior.

core/src/ops/redundancy/mod.rs (1)
1-5: Expand the module docs to match the Rust docs standard.

Line 1 lacks the required `#` title and example, and the prose mostly lists contents instead of explaining why this ops module exists.

♻️ Proposed documentation update

````diff
-//! Redundancy awareness operations
+//! # Redundancy awareness operations
 //!
-//! Provides queries for understanding data redundancy across volumes:
+//! This module exists so clients can inspect content replication across volumes
+//! without coupling UI or CLI code to storage internals.
+//!
+//! ```rust,ignore
+//! use sd_core::ops::redundancy::summary::RedundancySummaryInput;
+//!
+//! let input = RedundancySummaryInput { volume_uuids: None };
+//! ```
+//!
+//! Provides queries for understanding data redundancy across volumes:
 //! - Summary statistics (per-volume at-risk vs redundant bytes)
 //! - Integration with search filters for file-level redundancy queries
````

As per coding guidelines, "Module documentation should explain WHY the module exists, use prose with one code example, and include a title with `#`."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/src/ops/redundancy/mod.rs` around lines 1 - 5, Update the module-level docs for ops::redundancy to follow Rust doc standards: add a top-level title line starting with `#` (e.g., `# Redundancy operations`), replace the list-only prose with a short paragraph explaining why the module exists (what problem it solves and when to use it), and include a small code example demonstrating usage (referencing RedundancySummaryInput and the summary functions/structs such as summary::RedundancySummaryInput or summary::compute_summary) so readers can quickly see how to call it; keep the example concise and ensure the module doc includes both prose and the code snippet.

core/src/ops/redundancy/summary/input.rs (1)
1-1: Add the required module-doc title and example.

Line 1 is a module doc, but it does not include the required `#` title, rationale, or example.

♻️ Proposed documentation update

````diff
-//! Input types for redundancy summary query
+//! # Redundancy summary input types
+//!
+//! Defines the serialized query contract for scoping redundancy summaries by
+//! volume while keeping generated client types aligned with the core operation.
+//!
+//! ```rust,ignore
+//! let input = RedundancySummaryInput { volume_uuids: None };
+//! ```
````

As per coding guidelines, "Module documentation should explain WHY the module exists, use prose with one code example, and include a title with `#`."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/src/ops/redundancy/summary/input.rs` at line 1, Add a proper module doc comment header and example: replace the current single-line doc with a multi-line doc including a title (e.g. "# Redundancy summary input"), a short rationale explaining why this module exists and what RedundancySummaryInput represents, and a fenced-code example demonstrating basic construction (for example showing creating a RedundancySummaryInput with volume_uuids: None). Ensure the doc uses prose to explain intent and includes the code example using triple-backticks so rustdoc renders it.

core/src/ops/libraries/info/query.rs (1)
11-11: Nit: `use tracing;` is redundant.

The `tracing::info!`/`tracing::warn!`/`tracing::debug!` invocations below resolve through the crate root directly; this bare `use` has no effect.

♻️ Proposed cleanup

```diff
-use tracing;
 use uuid::Uuid;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/src/ops/libraries/info/query.rs` at line 11, Remove the redundant bare import "use tracing;" from the module; the macros like tracing::info!, tracing::warn!, and tracing::debug! (used in this file, e.g. in functions handling query info) resolve via the crate root and the bare use has no effect—delete that line to clean up imports and run cargo check to ensure no other unused imports remain.

docs/app/llms-full.txt/route.ts (1)
5-10: Optional: set an explicit `Content-Type` header.

The default for `new Response(string)` is `text/plain;charset=UTF-8`, which works, but tooling and crawlers that consume `llms-full.txt` generally expect an explicit text/markdown or text/plain header. Also worth noting: for very large doc trees, `Promise.all` holds every page's generated text in memory at once before joining — not an issue at 58 pages, but consider streaming if the corpus grows significantly.

♻️ Proposed tweak

```diff
-	return new Response(scanned.join('\n\n'));
+	return new Response(scanned.join('\n\n'), {
+		headers: { 'Content-Type': 'text/plain; charset=utf-8' },
+	});
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/app/llms-full.txt/route.ts` around lines 5 - 10, The GET handler currently returns new Response(scanned.join('\n\n')) without an explicit Content-Type; update the export async function GET() to build the Response with headers including "Content-Type": "text/markdown; charset=utf-8" (or "text/plain; charset=utf-8") so tooling/crawlers parse llms-full.txt correctly; locate the Promise.all(scan) usage (scan = source.getPages().map(getLLMText)) and keep the same await logic but construct the Response with the headers, and consider switching to a streaming approach that yields results from source.getPages().map(getLLMText) if you later need to avoid holding scanned in memory for large corpora.

build-server.sh (1)
7-21: Make the TrueNAS paths configurable.

The script is useful, but committing host-specific paths at repo root means other TrueNAS installs must edit tracked source. Keep these defaults, but allow environment overrides.

♻️ Proposed refactor

```diff
-set -e
-SR=/mnt/pool/dev-tools/sysroot
+set -e
+
+: "${SPACEDRIVE_TOOLS:=/mnt/pool/dev-tools}"
+: "${SPACEDRIVE_ROOT:=/mnt/pool/spacedrive}"
+: "${SPACEDRIVE_BUILD_JOBS:=10}"
+
+SR="$SPACEDRIVE_TOOLS/sysroot"
 export BINDGEN_EXTRA_CLANG_ARGS="-I$SR/usr/lib/gcc/x86_64-linux-gnu/12/include -I$SR/usr/include -I$SR/usr/include/x86_64-linux-gnu"
-export PATH="/mnt/pool/dev-tools:/mnt/pool/dev-tools/bin:/mnt/pool/dev-tools/sysroot/usr/bin:$PATH"
-export CC=/mnt/pool/dev-tools/cc
-export CXX="/mnt/pool/dev-tools/c++"
-export AR=/mnt/pool/dev-tools/ar
+export PATH="$SPACEDRIVE_TOOLS:$SPACEDRIVE_TOOLS/bin:$SPACEDRIVE_TOOLS/sysroot/usr/bin:$PATH"
+export CC="$SPACEDRIVE_TOOLS/cc"
+export CXX="$SPACEDRIVE_TOOLS/c++"
+export AR="$SPACEDRIVE_TOOLS/ar"
 export C_INCLUDE_PATH="$SR/usr/include:$SR/usr/include/x86_64-linux-gnu:$SR/usr/lib/gcc/x86_64-linux-gnu/12/include"
 export CPLUS_INCLUDE_PATH="$C_INCLUDE_PATH"
 export OPENSSL_INCLUDE_DIR="$SR/usr/include"
 export OPENSSL_LIB_DIR="$SR/usr/lib/x86_64-linux-gnu"
-cd /mnt/pool/spacedrive
+cd "$SPACEDRIVE_ROOT"
 cargo build --release --bin sd-server --bin sd-cli \
 	--features sd-core/heif,sd-core/ffmpeg \
-	-j10 "$@"
+	-j"$SPACEDRIVE_BUILD_JOBS" "$@"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@build-server.sh` around lines 7 - 21, Make the TrueNAS-specific paths and tool variables configurable by using environment-variable defaults instead of hard-coded values: replace the literal SR, BINDGEN_EXTRA_CLANG_ARGS, PATH, CC, CXX, AR, C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, OPENSSL_INCLUDE_DIR and OPENSSL_LIB_DIR assignments with parameter-expanded defaults (e.g. SR=${SR:-/mnt/pool/dev-tools/sysroot}) so callers can override them externally, ensure the PATH modification prepends the tools directories rather than clobbering PATH, and expose the cargo parallelism flag (replace -j10 with -j${JOBS:-10}) so JOBS can be set by the environment; keep the same feature list and cargo invocation (cargo build --release --bin sd-server --bin sd-cli --features sd-core/heif,sd-core/ffmpeg "$@") but source values from the new env-defaulted variables.

core/src/ops/redundancy/summary/mod.rs (1)
1-1: Expand the module documentation.

This public ops module currently has only a label. Please add a titled module doc that explains why the redundancy summary query exists and includes a minimal usage example.

♻️ Proposed module-doc shape

````diff
-//! Redundancy summary query
+//! # Redundancy Summary Query
+//!
+//! Provides the public query surface for reporting how content is replicated
+//! across known volumes. This lets clients identify at-risk content without
+//! duplicating volume-presence aggregation logic.
+//!
+//! ```rust,ignore
+//! let summary = api.library_query("redundancy.summary", input).await?;
+//! ```
````

Based on learnings, "Module documentation should explain WHY the module exists, use prose with one code example, and include a title with `#`."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@core/src/ops/redundancy/summary/mod.rs` at line 1, Replace the lone module label with a full titled module doc that explains why the redundancy summary query exists (what problem it solves and when to use it), written in prose and beginning with a Markdown title (e.g. "# Redundancy summary"), and include a minimal usage example showing how to call the query (for example using api.library_query("redundancy.summary", input).await?) so readers can copy/paste; update the comment block at the top of mod.rs for this ops module (the redundancy summary ops module) accordingly.

docs/components/flow-diagram.tsx (1)
13-15: Use the `fd-primary` theme token instead of hard-coded hex values.

The docs theme defines primary blue as `--color-fd-primary` and other components like `button.tsx` already use `fd-primary` classes. Replacing the repeated `#36A3FF` hex values with this token keeps the component theme-aligned and maintainable.

♻️ Proposed token-based cleanup

```diff
-	<div className="w-full max-w-2xl rounded-lg border border-[#36A3FF]/20 bg-gradient-to-br from-[#36A3FF]/5 to-transparent p-6 shadow-sm backdrop-blur-sm transition-all hover:border-[#36A3FF]/40 hover:shadow-md">
+	<div className="w-full max-w-2xl rounded-lg border border-fd-primary/20 bg-gradient-to-br from-fd-primary/5 to-transparent p-6 shadow-sm backdrop-blur-sm transition-all hover:border-fd-primary/40 hover:shadow-md">
 		<div className="flex items-start gap-4">
-			<div className="flex h-8 w-8 shrink-0 items-center justify-center rounded-full bg-[#36A3FF]/20 text-sm font-semibold text-[#36A3FF]">
+			<div className="flex h-8 w-8 shrink-0 items-center justify-center rounded-full bg-fd-primary/20 text-sm font-semibold text-fd-primary">
@@
-			className="rounded-md border border-[#36A3FF]/30 bg-[#36A3FF]/10 px-2.5 py-1 text-xs text-fd-muted-foreground"
+			className="rounded-md border border-fd-primary/30 bg-fd-primary/10 px-2.5 py-1 text-xs text-fd-muted-foreground"
@@
-			<span className="font-semibold text-[#36A3FF]">
+			<span className="font-semibold text-fd-primary">
@@
-		<div className="h-full w-full bg-gradient-to-b from-[#36A3FF]/40 via-[#36A3FF]/20 to-[#36A3FF]/40" />
-		<div className="absolute h-2 w-2 rounded-full bg-[#36A3FF]/60" />
+		<div className="h-full w-full bg-gradient-to-b from-fd-primary/40 via-fd-primary/20 to-fd-primary/40" />
+		<div className="absolute h-2 w-2 rounded-full bg-fd-primary/60" />
```

Also applies to lines 32, 49, 61–62.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/components/flow-diagram.tsx` around lines 13 - 15, Replace hard-coded `#36A3FF` color tokens in the FlowDiagram JSX className strings with the project theme token variants (e.g., text-fd-primary, bg-fd-primary/20, border-fd-primary/20, hover:border-fd-primary/40, from-fd-primary/5) so the component uses --color-fd-primary consistently; update the container div with gradient/border classes and the rounded badge div (and the other occurrences referenced at lines 32, 49, 61–62) to use these fd-primary utility classes instead of literal hex values.

packages/interface/src/routes/redundancy/index.tsx (1)
32-35: Avoid memoizing this cheap derived value.

`scorePercent` is a simple multiplication/rounding expression, so `useMemo` adds unnecessary hook overhead and dependency surface.

♻️ Proposed simplification

- const scorePercent = useMemo(() => {
- 	if (!data) return 0;
- 	return Math.round(data.library_totals.replication_score * 100);
- }, [data]);
+ const scorePercent = data
+ 	? Math.round(data.library_totals.replication_score * 100)
+ 	: 0;

As per coding guidelines, “Use useMemo only when actually needed for expensive computations, not for simple values like string concatenation”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/interface/src/routes/redundancy/index.tsx` around lines 32 - 35, The value scorePercent is an inexpensive derived value currently wrapped in useMemo; remove the useMemo wrapper and compute it directly to reduce hook overhead—replace the useMemo block that references data and data.library_totals.replication_score with a plain constant assignment (e.g., const scorePercent = data ? Math.round(data.library_totals.replication_score * 100) : 0) so the component uses a simple expression instead of the useMemo hook.

packages/interface/src/routes/redundancy/compare.tsx (2)
105-113: Move filtered-mode updates out of this effect.

This effect updates Explorer context state after render whenever local picker state changes. Prefer calling `enterFilteredMode`/`exitFilteredMode` from the picker and mode event handlers with the next values so the Explorer mode is updated in the same interaction pass. As per coding guidelines, "Never use Effects to update parent component state - call callback in event handler instead for same render pass."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/interface/src/routes/redundancy/compare.tsx` around lines 105 - 113, The effect that calls enterFilteredMode/exitFilteredMode when filters or label change should be removed and those calls moved into the picker and mode event handlers so Explorer context updates happen during the same interaction; locate the useEffect block that references enterFilteredMode, exitFilteredMode, filters, and label and delete it, then invoke enterFilteredMode(filters, label) from the picker selection/change handler(s) when both values are set and invoke exitFilteredMode() from the handler that clears a picker or toggles mode off (or from the same handlers when the next values indicate no filter), ensuring the callbacks receive the next values instead of relying on a post-render effect.
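As a framework-free illustration of that guideline, the next filter state can be computed and pushed to the context callback inside the event handler itself. This is a hedged sketch, not the component's actual code: `enterFilteredMode`/`exitFilteredMode` mirror the Explorer context callbacks named in the finding, while `makeVolumePickerHandler` and its argument shape are hypothetical.

```typescript
// Hypothetical sketch: compute the next Explorer state in the event
// handler and call the context callback directly, instead of mirroring
// picker state locally and syncing it via a post-render useEffect.
type Filters = { on_volumes: string[] };

function makeVolumePickerHandler(
	enterFilteredMode: (filters: Filters, label: string) => void,
	exitFilteredMode: () => void,
) {
	// Invoked by the picker's onChange with the next selection.
	return function onPickVolumes(a: string | null, b: string | null) {
		if (a && b) {
			// Same interaction pass — no effect needed to propagate state.
			enterFilteredMode({ on_volumes: [a, b] }, `${a} vs ${b}`);
		} else {
			// Clearing a picker exits filtered mode in the same pass.
			exitFilteredMode();
		}
	};
}
```

Because the handler receives the next values as arguments, there is no render in between where the Explorer context and the local picker state disagree.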
241-255: Use a props interface and `clsx` for `ModeButton`.

The inline prop type and template-literal conditional class name diverge from the interface/component conventions.
♻️ Proposed refactor
+interface ModeButtonProps {
+	active: boolean;
+	onClick: () => void;
+	label: string;
+}
+
 function ModeButton({
 	active,
 	onClick,
 	label,
-}: {
-	active: boolean;
-	onClick: () => void;
-	label: string;
-}) {
+}: ModeButtonProps) {
 	return (
 		<button
 			onClick={onClick}
-			className={`flex-1 rounded-md px-3 py-1.5 text-xs font-medium transition-colors ${
-				active ? "bg-accent text-white" : "text-ink-dull hover:text-ink"
-			}`}
+			className={clsx(
+				"flex-1 rounded-md px-3 py-1.5 text-xs font-medium transition-colors",
+				active ? "bg-accent text-white" : "text-ink-dull hover:text-ink",
+			)}
 		>

Also add the `clsx` import with the external imports.

As per coding guidelines, "Use explicit TypeScript interfaces for component props instead of implicit types" and "Use clsx utility for conditional className application instead of manual ternaries."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/interface/src/routes/redundancy/compare.tsx` around lines 241 - 255, Replace the inline prop typing and template-literal class toggle in ModeButton with an explicit props interface and use clsx for conditional classes: add an interface (e.g., ModeButtonProps) describing active, onClick, label; import clsx at the top alongside other external imports; update the ModeButton signature to accept ModeButtonProps and compute className using clsx to apply "bg-accent text-white" when active and the fallback classes otherwise. Ensure no other behavior changes.

core/src/ops/redundancy/summary/query.rs (1)
1-1: Expand the module docs beyond the one-line summary.

This new core module should explain why the redundancy summary exists, what it counts, and include a small example of the output shape or scoring semantics.

As per coding guidelines, "Module documentation should explain WHY the module exists, use prose with one code example, and include a title with #."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/src/ops/redundancy/summary/query.rs` at line 1, Update the module-level doc comment for core::ops::redundancy::summary::query to a multi-line Rust doc (//! ) with a heading (e.g. "# Redundancy summary query"), explain why the module exists (purpose of computing redundancy metrics), describe exactly what is counted (e.g. duplicate entries, unique keys, per-shard or per-partition counts, scoring semantics such as higher score = more redundancy or lower score = better), and include a short code/example block showing the output shape (struct/JSON example) and a one-line note about how scores are computed; keep it prose-focused, use the module name (redundancy::summary::query) in the text, and ensure the doc follows the project's style guidelines.
if env::var_os("SD_SKIP_WEB_BUILD").is_some() {
	println!("cargo:warning=SD_SKIP_WEB_BUILD set — using existing apps/web/dist");
	return;
}

// If bun isn't available (e.g., Docker Rust build stage), the caller is
// expected to have prebuilt apps/web/dist. Skip silently.
if Command::new("bun").arg("--version").output().is_err() {
	println!("cargo:warning=bun not found on PATH — using existing apps/web/dist");
	return;
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Find references to the web dist assets consumed by the server build/runtime.
rg -n -C3 'apps/web/dist|web/dist|include_dir|dist' --iglob '*.rs' --iglob 'Dockerfile' --iglob '*.toml'

Repository: spacedriveapp/spacedrive
Length of output: 14340
Add validation that apps/web/dist exists before skipping web build.
Both early-return paths (when SD_SKIP_WEB_BUILD is set or bun is unavailable) claim they will use existing apps/web/dist, but neither validates it exists. The embedded web assets are consumed via the rust-embed macro in apps/server/src/main.rs (#[folder = "../web/dist/"]), which will fail at compile time if the directory is missing. A check in build.rs would catch this issue earlier with a clearer error message.
🛡️ Proposed guard
+ let manifest_dir = PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap());
+ let repo_root = manifest_dir
+ .parent()
+ .and_then(Path::parent)
+ .expect("apps/server is two levels below the repo root")
+ .to_path_buf();
+ let web_dir = repo_root.join("apps/web");
+ let web_dist = web_dir.join("dist");
+
if env::var_os("SD_SKIP_WEB_BUILD").is_some() {
+ assert_web_dist_exists(&web_dist, "SD_SKIP_WEB_BUILD is set");
println!("cargo:warning=SD_SKIP_WEB_BUILD set — using existing apps/web/dist");
return;
}
// If bun isn't available (e.g., Docker Rust build stage), the caller is
// expected to have prebuilt apps/web/dist. Skip silently.
if Command::new("bun").arg("--version").output().is_err() {
+ assert_web_dist_exists(&web_dist, "bun was not found on PATH");
println!("cargo:warning=bun not found on PATH — using existing apps/web/dist");
return;
}

fn assert_web_dist_exists(web_dist: &Path, reason: &str) {
if !web_dist.exists() {
panic!(
"{reason}, but {} does not exist. Build the web UI first or install bun.",
web_dist.display()
);
}
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/build.rs` around lines 9 - 19, Add a pre-check that verifies the
apps/web/dist directory exists before early-returning in build.rs: create a
helper like assert_web_dist_exists(web_dist: &Path, reason: &str) and call it in
both branches where you currently return early (the SD_SKIP_WEB_BUILD branch and
the bun --version failure branch) to panic with a clear message if web_dist
(../web/dist) is missing; reference the existing checks
(env::var_os("SD_SKIP_WEB_BUILD") and
Command::new("bun").arg("--version").output().is_err()) and ensure the panic
message mentions building the web UI or installing bun and references the
rust-embed folder ("../web/dist/") so the error is informative.
// Invalidate the build script when any UI source or relevant config changes.
// Cargo will cache this build script's output otherwise, so Rust-only changes
// won't pay the cost of rebuilding the web bundle.
watch_dir(&web_dir.join("src"));
watch_dir(&repo_root.join("packages/interface/src"));
watch_dir(&repo_root.join("packages/ts-client/src"));
for path in [
	web_dir.join("index.html"),
	web_dir.join("vite.config.ts"),
	web_dir.join("package.json"),
	web_dir.join("tsconfig.json"),
	repo_root.join("packages/interface/package.json"),
	repo_root.join("packages/ts-client/package.json"),
] {
	if path.exists() {
		rerun(&path);
	}
}
🧩 Analysis chain
🌐 Web query:
Cargo build script rerun-if-changed directory added file behavior
💡 Result:
When using cargo:rerun-if-changed=some_directory/ in a Cargo build script, Cargo scans the entire directory (and subdirectories) recursively for any modifications, including newly added files. This triggers a rerun of the build script if a new file is added to the directory. This behavior was changed in Cargo 1.50.0 (December 2020) via PR #8973, which updated Cargo to recursively check mtimes of all files in the directory instead of just the directory's own mtime. Prior to this, only directory mtime changes (which reliably detect additions/deletions on some platforms but not content changes) would trigger reruns; content changes or timestamps inside required listing individual files. Official Cargo documentation confirms: "If the path points to a directory, it will scan the entire directory for any modifications." Adding a file updates the directory mtime and adds a new file mtime newer than the cached build timestamp, causing a rerun. Sources: Official docs state directory scanning detects modifications including additions; historical issue #2599 shows new files triggered even pre-change; PR #8973 implemented recursive scan.
Citations:
- 1: https://doc.rust-lang.org/1.59.0/cargo/reference/build-scripts.html
- 2: https://doc.rust-lang.org/1.62.1/cargo/reference/build-scripts.html
- 3: Check if rerun-if-changed points to a directory. rust-lang/cargo#8973
- 4: rerun-if-changed on directory does not rerun on new timestamp or file contents rust-lang/cargo#2599
🏁 Script executed:
# Find and examine the Cargo version and build.rs implementation
git ls-files | grep -E '(Cargo.toml|build\.rs|\.cargo/config)' | head -20
Length of output: 710
🏁 Script executed:
# Look for the build.rs file in apps/server and examine the watch_dir function
find . -path './apps/server/build.rs' -type f -exec wc -l {} \;
Length of output: 94
🏁 Script executed:
# Read the build.rs file to understand the current watch_dir implementation
cat -n ./apps/server/build.rs | head -100
Length of output: 3560
🏁 Script executed:
# Check the root Cargo.toml for Cargo version requirement
head -30 Cargo.toml
Length of output: 802
🏁 Script executed:
# Search for any existing workspace-level file watching
rg "package\.json|bun\.lock|tsconfig\.json" apps/server/build.rs apps/tauri/src-tauri/build.rs core/build.rs 2>/dev/null || echo "Search complete"
Length of output: 322
🏁 Script executed:
# Check if workspace root files exist
ls -la | grep -E "(package\.json|bun\.lock|tsconfig\.json)"
Length of output: 260
Register directories in build script to detect new UI file additions.
The watch_dir function only registers individual files that exist during the build-script run, so adding or removing a UI source file won't trigger a rebuild if the directory itself isn't watched. Register the directory itself before recursing into it, and add the workspace-level package/lock/config files since they also affect the web bundle compilation.
🔁 Proposed invalidation fix
fn watch_dir(dir: &Path) {
if !dir.exists() {
return;
}
+ rerun(dir);
let entries = match std::fs::read_dir(dir) {
Ok(e) => e,
Err(_) => return,
};

for path in [
+ repo_root.join("package.json"),
+ repo_root.join("bun.lockb"),
+ repo_root.join("tsconfig.json"),
web_dir.join("index.html"),
web_dir.join("vite.config.ts"),
web_dir.join("package.json"),🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/build.rs` around lines 38 - 55, The build script currently calls
watch_dir on individual paths and rerun on files, but doesn't register the
parent directories themselves so adding/removing files won't trigger rebuilds;
update the logic around watch_dir, web_dir, repo_root and the existing path list
so you call watch_dir on the directory roots (e.g., web_dir,
web_dir.join("src"), repo_root.join("packages/interface/src"),
repo_root.join("packages/ts-client/src")) before recursing and also include
workspace-level files (workspace package.json, pnpm-lock.yaml or yarn.lock,
etc.) in the array passed to rerun; ensure rerun(&path) is still called for each
existing file and that directory watches are registered for the directories
mentioned to detect new file additions/removals.
| .or_else(|_| { | ||
| dirs::home_dir() | ||
| .map(|h| h.join(".spacedrive")) | ||
| .ok_or(()) | ||
| }) | ||
| .unwrap_or_else(|_| { | ||
| warn!("Could not resolve home directory; falling back to tempdir"); | ||
| let temp = tempfile::tempdir().expect("Failed to create temp dir"); | ||
| temp.path().to_path_buf() | ||
| }) |
🧩 Analysis chain
🌐 Web query:
Rust std Result::or_else signature closure return same error type tempfile TempDir into_path drop semantics
💡 Result:
The Rust standard library Result<T, E>::or_else method has the signature: pub fn or_else(self, op: F) -> Result<T, F::Error> where F: FnOnce(E) -> Result<T, F::Error>, The closure op takes the original error E and must return a Result<T, F::Error>, where the Ok type T matches the original (preserving success value type) but the Err type can differ (allowing error recovery or conversion). The returned Result has error type F::Error. This design enables fallback/recovery logic while maintaining type safety; the closure's return error type becomes the method's output error type. If chaining multiple or_else, the final error type is determined by the last closure. tempfile::TempDir creates a temporary directory automatically deleted on drop (via Drop impl, which recursively removes contents; errors silently ignored). TempDir::into_path(self) -> PathBuf (or keep in recent versions) consumes self, returns the PathBuf, and disables automatic deletion—the directory persists until manually removed (e.g., fs::remove_dir_all). This is used to "persist" the temp dir beyond the TempDir scope. Calling into_path prevents the drop semantics from running. These are often combined in error-handling patterns: e.g., TempDir::new.or_else(|_| fallback)?; let path = dir.into_path;—but or_else requires the fallback closure to return Result<TempDir, SameE> (cannot directly return PathBuf due to type mismatch). To handle io::Error consistently, use map_err or convert errors. Official sources: Rust std docs confirm or_else signature/examples (1.0+ stable) [1,2,14]; tempfile docs (v3+) detail TempDir drop/into_path [7,17,61].
Citations:
- 1: https://doc.rust-lang.org/std/result/enum.Result.html
- 2: https://doc.rust-lang.org/stable/std/result/enum.Result.html
- 3: https://docs.rs/tempfile/3.3.0/tempfile/struct.TempDir.html
- 4: https://docs.rust-lang.org/std/result/enum.Result.html
- 5: https://docs.rs/tempfile/~3/
- 6: https://docs.rs/tempfile/latest/tempfile/struct.TempDir.html
🏁 Script executed:
cat -n apps/server/src/main.rs | sed -n '305,325p'
Length of output: 954
🏁 Script executed:
# Check the broader context to see how base_data_dir is used
cat -n apps/server/src/main.rs | sed -n '295,335p'
Length of output: 1737
🏁 Script executed:
# Check what std::env::var returns
cat -n apps/server/src/main.rs | head -20
# Also look for imports to see what's in scope
Length of output: 771
🏁 Script executed:
# Try to build just the server crate to see if there are compilation errors
cd apps/server && cargo check 2>&1 | head -50
Length of output: 6884
Fix the TempDir drop issue before this can compile.
Line 318 creates a TempDir that is immediately dropped when the closure exits, deleting the directory before base_data_dir is used. The returned PathBuf points to a deleted directory. Use into_path() instead of path().to_path_buf() to consume the TempDir and prevent automatic deletion.
🐛 Proposed fix
std::env::var("DATA_DIR")
.map(PathBuf::from)
- .or_else(|_| {
- dirs::home_dir()
- .map(|h| h.join(".spacedrive"))
- .ok_or(())
- })
.unwrap_or_else(|_| {
- warn!("Could not resolve home directory; falling back to tempdir");
- let temp = tempfile::tempdir().expect("Failed to create temp dir");
- temp.path().to_path_buf()
+ dirs::home_dir().map(|h| h.join(".spacedrive")).unwrap_or_else(|| {
+ warn!("Could not resolve home directory; falling back to tempdir");
+ let temp = tempfile::tempdir().expect("Failed to create temp dir");
+ temp.into_path()
+ })
})

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-.or_else(|_| {
-	dirs::home_dir()
-		.map(|h| h.join(".spacedrive"))
-		.ok_or(())
-})
-.unwrap_or_else(|_| {
-	warn!("Could not resolve home directory; falling back to tempdir");
-	let temp = tempfile::tempdir().expect("Failed to create temp dir");
-	temp.path().to_path_buf()
-})
+.unwrap_or_else(|_| {
+	dirs::home_dir().map(|h| h.join(".spacedrive")).unwrap_or_else(|| {
+		warn!("Could not resolve home directory; falling back to tempdir");
+		let temp = tempfile::tempdir().expect("Failed to create temp dir");
+		temp.into_path()
+	})
+})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/server/src/main.rs` around lines 310 - 319, The closure that falls back
to a temp dir currently creates a tempfile::TempDir with tempfile::tempdir() and
calls temp.path().to_path_buf(), which drops the TempDir and deletes the
directory, leaving base_data_dir pointing to a removed path; change the code to
consume the TempDir by calling into_path() on the TempDir returned by
tempfile::tempdir() so the directory is not deleted on drop (i.e., replace the
temp.path().to_path_buf() usage with temp.into_path()), preserving the directory
for subsequent use by the code that relies on base_data_dir.
if !already_initialized || !pairing_registered {
	if already_initialized && !pairing_registered {
		logger
			.warn("Networking was initialized but protocol handlers are missing; re-registering")
			.await;
	} else {
		logger.info("Registering protocol handlers...").await;
	}
	self.register_default_protocols(&networking_service).await?;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Inspect the protocol registry implementation and call sites.
# Expectation: determine whether register_handler is duplicate-safe.
rg -n -C4 'fn register_handler|pub fn register_handler|async fn register_handler|register_handler\s*\(' --type rust
Length of output: 2834
🏁 Script executed:
# Read the relevant section of core/src/lib.rs to see the exact code being reviewed
sed -n '520,560p' core/src/lib.rs
Length of output: 1659
🏁 Script executed:
# Find and read the register_default_protocols function
rg -n 'fn register_default_protocols|async fn register_default_protocols' --type rust -A 30
Length of output: 1738
🏁 Script executed:
# Find the register_default_protocol_handlers function
rg -n 'fn register_default_protocol_handlers|pub fn register_default_protocol_handlers|async fn register_default_protocol_handlers' --type rust -A 50
Length of output: 2692
🏁 Script executed:
# Continue reading register_default_protocol_handlers from line 696 onwards
sed -n '696,800p' core/src/lib.rs
Length of output: 3857
Make the recovery registration path idempotent or validate all handlers before re-registering.
The recovery logic checks only the pairing handler to determine whether re-registration is needed, then calls register_default_protocols() which blindly re-registers all five handlers (pairing, messaging, file transfer, job activity, sync multiplexer). The register_handler() method is not idempotent and errors if a handler is already registered. If networking was partially initialized (e.g., pairing handler failed but messaging succeeded), the recovery path will fail when attempting to re-register the existing messaging handler. Either validate that all handlers are missing before re-registering, or modify register_default_protocol_handlers() to check for and skip already-registered handlers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/src/lib.rs` around lines 536 - 544, The recovery path checks only
already_initialized and pairing_registered then unconditionally calls
register_default_protocols, but register_handler is not idempotent and will
error if some handlers (e.g., messaging) are already registered; fix by either
(A) before calling register_default_protocols, validate that all five handlers
are missing (check each handler's presence) and only call when none exist, or
(B) make register_default_protocols/idempotent by changing its logic to query
each handler's registration status and skip calling register_handler for
handlers that are already registered (i.e., perform existence checks inside
register_default_protocols rather than blindly re-registering).
// Create space-level items (Overview, Recents, Favorites, File Kinds,
// Sources, Redundancy) - these appear outside groups
let space_items = vec![
	(ItemType::Overview, "Overview", 0),
	(ItemType::Recents, "Recents", 1),
	(ItemType::Favorites, "Favorites", 2),
	(ItemType::FileKinds, "File Kinds", 3),
	(ItemType::Sources, "Sources", 4),
	(ItemType::Redundancy, "Redundancy", 5),
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Show all call sites for default-space initialization.
rg -nP '\bcreate_default_space\s*\(' -C 4
# Show all Redundancy default-item insertions/usages.
rg -nP 'ItemType::Redundancy|Redundancy",\s*5|space_item.*Redundancy|deterministic_library_default_uuid\(.*"space_item".*"Redundancy"' -C 4
# Search migration-like files for a backfill that inserts missing default space items.
fd -i 'migration|migrat' -e rs -x rg -nP 'space_item|Redundancy|Sources|default space' -C 3 {}
Length of output: 3989
🏁 Script executed:
# View the open_library implementation to see if it has any backfill/ensure logic
rg -nP '\basync fn open_library\s*\(' -A 50 core/src/library/manager.rs | head -80
Length of output: 1714
🏁 Script executed:
# Search for any other migrations that might insert or ensure default space items
fd -i 'migration' -e rs -path '*/migrations/*' | head -20
Length of output: 236
🏁 Script executed:
# Search for any "ensure" or "upsert" logic related to space items or defaults
rg -nP 'ensure.*default|upsert.*space_item|insert.*if.*not.*exist|on_conflict' core/src/library/manager.rs -C 3
Length of output: 2022
🏁 Script executed:
# Get the complete open_library function
rg -nP '\basync fn open_library\s*\(' -A 200 core/src/library/manager.rs | grep -E '^\d+.*async fn|^\d+.*\}' | head -20
Length of output: 50
🏁 Script executed:
# Count lines in open_library to know how much to read
rg -nP '\basync fn open_library\s*\(' -A 300 core/src/library/manager.rs | tail -50
Length of output: 1645
🏁 Script executed:
# Search for migration files
find . -path '*/migrations/*.rs' -type f 2>/dev/null | head -20
Length of output: 50
🏁 Script executed:
# Search for any backfill or "ensure defaults" in the codebase
rg -nP 'backfill|ensure.*default|missing.*default' -i core/src --type rs -C 3
Length of output: 94
🏁 Script executed:
# Find all migrations (use simpler find syntax)
find . -name '*migration*' -type f | grep '\.rs$' | head -15
Length of output: 213
🏁 Script executed:
# Get the exact line range of open_library function
rg -nP '\basync fn open_library\s*\(' core/src/library/manager.rs
Length of output: 99
🏁 Script executed:
# Check if create_default_space is ever called from open_library
rg -nP 'create_default_space' core/src/library/manager.rs
Length of output: 250
🏁 Script executed:
# Look at the database initialization logic to see if defaults are ensured
rg -nP 'run_migrations|migrate|init.*db' core/src/library/manager.rs -C 5
Length of output: 1017
🏁 Script executed:
# Read the migration config file to understand migration structure
cat core/src/config/migration.rs | head -100
Length of output: 606
🏁 Script executed:
# Search in core source for any existing migration that handles space_item defaults
rg -nP 'space_item|Redundancy|Sources' core/src/config/migration.rs -C 5
Length of output: 50
🏁 Script executed:
# Check the Database::migrate() implementation
rg -nP 'pub.*async.*fn migrate|fn migrate' core/src --type-add 'rs:*.rs' -A 20 | head -80
Length of output: 4054
🏁 Script executed:
# Look for any logic that ensures default space items when opening an existing library
rg -nP 'existing.*library|open.*library' core/src/config/migration.rs -C 10
Length of output: 50
🏁 Script executed:
# Find where migrations are defined (SeaORM migrations directory)
find . -path '*/migration*' -name '*.rs' -type f | grep -v test | grep -v example
Length of output: 2953
🏁 Script executed:
# Check if there's a migrations directory with actual migration files
find . -type d -name migrations | head -10
Length of output: 50
🏁 Script executed:
# Look at the actual SeaORM migration structure
find . -path '*/infra/db*' -type f -name '*.rs' | head -20
Length of output: 957
🏁 Script executed:
# Search for the most recent migration that modifies space_items
rg -nP 'fn up.*space_item|DELETE FROM space_items|INSERT.*space_item' . --type rs -C 5 | head -100
Length of output: 94
🏁 Script executed:
# Search for migrations that modify space_items (without type flag)
rg -nP 'space_item|Redundancy' core/src/infra/db/migration/ -C 3 | head -150
Length of output: 14006
🏁 Script executed:
# Check the most recent migration file to understand the structure
cat core/src/infra/db/migration/m20260414_000001_add_redundancy_indexes.rs
Length of output: 1203
🏁 Script executed:
# Look for space_items entity definition and any backfill logic
rg -nP 'space_item' core/src/infra/db/entities/ -C 5
Length of output: 9858
Add a migration to backfill missing default space items for existing libraries.
New libraries get Sources and Redundancy defaults via create_default_space() (called in new-library paths at lines 356, 421), but existing libraries opened through open_library() receive only schema changes from db.migrate() and won't get these new items. Migrations m20260414 onwards contain no backfill for missing defaults—they only adjust schema and indexes.
Since create_default_space() uses idempotent upserts (.on_conflict() patterns at lines 1217, 1275+), add a migration that inserts missing defaults only if they don't already exist by UUID, preserving user customization.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/src/library/manager.rs` around lines 1242 - 1250, Existing libraries
opened via open_library() only run db.migrate() and miss the new default space
items (Sources, Redundancy) that create_default_space() adds for new libraries;
add a new migration (after m20260414) that backfills the missing default space
items by performing idempotent inserts/upserts for the Sources and Redundancy
ItemType using the same .on_conflict() upsert pattern used in
create_default_space(), keyed by the canonical UUIDs so it only inserts when the
item UUID is not present and thus preserves any user-customized items.
const items = useMemo(() => {
	const pageUrl =
		typeof window !== 'undefined' ? window.location.href : 'loading';
	const q = `Read ${pageUrl}, I want to ask questions about it.`;

	return [
		{
			title: 'Open in GitHub',
			href: githubUrl,
			icon: (
				<svg fill="currentColor" role="img" viewBox="0 0 24 24">
					<title>GitHub</title>
					<path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12" />
				</svg>
			),
		},
		{
			title: 'View as Markdown',
			href: markdownUrl,
			icon: <TextIcon />,
		},
		{
			title: 'Open in ChatGPT',
			href: `https://chatgpt.com/?${new URLSearchParams({
				hints: 'search',
				q,
			})}`,
			icon: (
				<svg
					role="img"
					viewBox="0 0 24 24"
					fill="currentColor"
					xmlns="http://www.w3.org/2000/svg"
				>
					<title>OpenAI</title>
					<path d="M22.2819 9.8211a5.9847 5.9847 0 0 0-.5157-4.9108 6.0462 6.0462 0 0 0-6.5098-2.9A6.0651 6.0651 0 0 0 4.9807 4.1818a5.9847 5.9847 0 0 0-3.9977 2.9 6.0462 6.0462 0 0 0 .7427 7.0966 5.98 5.98 0 0 0 .511 4.9107 6.051 6.051 0 0 0 6.5146 2.9001A5.9847 5.9847 0 0 0 13.2599 24a6.0557 6.0557 0 0 0 5.7718-4.2058 5.9894 5.9894 0 0 0 3.9977-2.9001 6.0557 6.0557 0 0 0-.7475-7.0729zm-9.022 12.6081a4.4755 4.4755 0 0 1-2.8764-1.0408l.1419-.0804 4.7783-2.7582a.7948.7948 0 0 0 .3927-.6813v-6.7369l2.02 1.1686a.071.071 0 0 1 .038.052v5.5826a4.504 4.504 0 0 1-4.4945 4.4944zm-9.6607-4.1254a4.4708 4.4708 0 0 1-.5346-3.0137l.142.0852 4.783 2.7582a.7712.7712 0 0 0 .7806 0l5.8428-3.3685v2.3324a.0804.0804 0 0 1-.0332.0615L9.74 19.9502a4.4992 4.4992 0 0 1-6.1408-1.6464zM2.3408 7.8956a4.485 4.485 0 0 1 2.3655-1.9728V11.6a.7664.7664 0 0 0 .3879.6765l5.8144 3.3543-2.0201 1.1685a.0757.0757 0 0 1-.071 0l-4.8303-2.7865A4.504 4.504 0 0 1 2.3408 7.872zm16.5963 3.8558L13.1038 8.364 15.1192 7.2a.0757.0757 0 0 1 .071 0l4.8303 2.7913a4.4944 4.4944 0 0 1-.6765 8.1042v-5.6772a.79.79 0 0 0-.407-.667zm2.0107-3.0231l-.142-.0852-4.7735-2.7818a.7759.7759 0 0 0-.7854 0L9.409 9.2297V6.8974a.0662.0662 0 0 1 .0284-.0615l4.8303-2.7866a4.4992 4.4992 0 0 1 6.6802 4.66zM8.3065 12.863l-2.02-1.1638a.0804.0804 0 0 1-.038-.0567V6.0742a4.4992 4.4992 0 0 1 7.3757-3.4537l-.142.0805L8.704 5.459a.7948.7948 0 0 0-.3927.6813zm1.0976-2.3654l2.602-1.4998 2.6069 1.4998v2.9994l-2.5974 1.4997-2.6067-1.4997Z" />
				</svg>
			),
		},
		{
			title: 'Open in Claude',
			href: `https://claude.ai/new?${new URLSearchParams({
				q,
			})}`,
			icon: (
				<svg
					fill="currentColor"
					role="img"
					viewBox="0 0 24 24"
					xmlns="http://www.w3.org/2000/svg"
				>
					<title>Anthropic</title>
					<path d="M17.3041 3.541h-3.6718l6.696 16.918H24Zm-10.6082 0L0 20.459h3.7442l1.3693-3.5527h7.0052l1.3693 3.5528h3.7442L10.5363 3.5409Zm-.3712 10.2232 2.2914-5.9456 2.2914 5.9456Z" />
				</svg>
			),
		},
		{
			title: 'Open in Cursor',
			href: `https://cursor.com/link/prompt?${new URLSearchParams({
				text: q,
			})}`,
			icon: (
				<svg
					fill="currentColor"
					role="img"
					viewBox="0 0 24 24"
					xmlns="http://www.w3.org/2000/svg"
				>
					<title>Cursor</title>
					<path d="M11.503.131 1.891 5.678a.84.84 0 0 0-.42.726v11.188c0 .3.162.575.42.724l9.609 5.55a1 1 0 0 0 .998 0l9.61-5.55a.84.84 0 0 0 .42-.724V6.404a.84.84 0 0 0-.42-.726L12.497.131a1.01 1.01 0 0 0-.996 0M2.657 6.338h18.55c.263 0 .43.287.297.515L12.23 22.918c-.062.107-.229.064-.229-.06V12.335a.59.59 0 0 0-.295-.51l-9.11-5.257c-.109-.063-.064-.23.061-.23" />
				</svg>
			),
		},
	];
}, [githubUrl, markdownUrl]);
🧩 Analysis chain

🏁 Script executed:

```shell
# First, find the file
fd -t f "page-actions.tsx"
```

Repository: spacedriveapp/spacedrive

🏁 Script executed:

```shell
# If found, let's read the file to see the actual implementation
cat -n docs/components/ai/page-actions.tsx
```

🏁 Script executed:

```shell
# Also check if there are any related test files or documentation about hydration
fd -t f "page-actions" --type f
```
**Fix hydration mismatch for external-link `href` attributes by deferring `window.location.href` access to after mount.**

The links to ChatGPT, Claude, and Cursor will have mismatched `href` attributes between server and client renders: the server uses the fallback `'loading'` value while the client uses the actual `window.location.href`, causing hydration warnings.

Use state and an effect to defer URL access until after the component mounts, ensuring consistent rendering, and add `pageUrl` to the dependency array:
Implementation

```diff
-import { type ComponentProps, useMemo, useState } from 'react';
+import { type ComponentProps, useEffect, useMemo, useState } from 'react';
@@
 }) {
+	const [pageUrl, setPageUrl] = useState<string | null>(null);
+
+	useEffect(() => {
+		setPageUrl(window.location.href);
+	}, []);
+
 	const items = useMemo(() => {
-		const pageUrl =
-			typeof window !== 'undefined' ? window.location.href : 'loading';
+		if (!pageUrl) return [];
 		const q = `Read ${pageUrl}, I want to ask questions about it.`;
@@
-	}, [githubUrl, markdownUrl]);
+	}, [githubUrl, markdownUrl, pageUrl]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/components/ai/page-actions.tsx` around lines 70 - 144, The hydration
mismatch comes from reading window.location.href during render inside the
useMemo that builds items (variables pageUrl and q); change to initialize a
pageUrl state (e.g., const [pageUrl, setPageUrl] = useState('loading')) and set
it inside useEffect to window.location.href after mount, then use that state
when computing q and building items in the useMemo; finally add pageUrl to the
useMemo dependency array so items (and the ChatGPT/Claude/Cursor hrefs) are
stable between server and client.
`is_system_mount_point(path)` matches Linux OS paths:

- Exact: `/`, `/usr`, `/var`, `/etc`, `/opt`, `/srv`, `/root`, `/boot`, `/home`, `/run`, `/dev`, `/proc`, `/sys`, `/tmp`, `/audit`, `/data`, `/conf`, `/mnt`, `/lost+found`.
- Prefixes: `/boot/`, `/sys/`, `/proc/`, `/dev/`, `/run/`, `/var/log`, `/var/db/`, `/var/lib/systemd`, `/var/local/`, `/var/cache/`.

The exact-match list includes TrueNAS Scale's split-root datasets (it mounts `/usr`, `/var`, `/etc` as separate ZFS datasets for atomic OS updates).

`is_nested_app_mount(path)` matches container/app mounts:

- Anything under `ix-applications/` or `.ix-apps/` (TrueNAS apps — one app creates dozens of datasets).
- `docker/overlay2/`, `containerd/`, `kubelet/`, `snap/`.
- `.snapshots/`, `.zfs/snapshot/` (ZFS snapshot browsing mounts).

`should_hide_by_mount_path(path)` is the combined check. It's applied at:

1. **Detection** — so newly-discovered volumes get `is_user_visible = false` persistently.
2. **Volume list query** (`core/src/ops/volumes/list/query.rs`) — retroactively for tracked volumes whose DB rows predate these filters.
3. **Stats calculation** (`core/src/library/mod.rs`) — so `total_capacity` and `available_capacity` exclude hidden volumes even if the DB flag is stale.
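Taken together, the three checks compose into a single path predicate. A minimal TypeScript sketch of the matching logic described above (illustrative only: the real implementation is Rust in `core/src/volume/utils.rs`, and the match lists here are abridged):

```typescript
// Illustrative sketch, not the real code: the actual helpers are Rust in
// core/src/volume/utils.rs, and these match lists are abridged.
const SYSTEM_EXACT = new Set([
	"/", "/usr", "/var", "/etc", "/boot", "/home", "/proc", "/sys", "/tmp",
]);
const SYSTEM_PREFIXES = [
	"/boot/", "/sys/", "/proc/", "/dev/", "/run/", "/var/log", "/var/cache/",
];
const NESTED_APP_MARKERS = [
	"ix-applications/", ".ix-apps/", "docker/overlay2/", "containerd/",
	"kubelet/", "snap/", ".snapshots/", ".zfs/snapshot/",
];

// Exact OS paths plus prefix matches (e.g. /var/log, /proc/...).
function isSystemMountPoint(path: string): boolean {
	return SYSTEM_EXACT.has(path) || SYSTEM_PREFIXES.some((p) => path.startsWith(p));
}

// Container/app datasets can mount anywhere under a pool, so this is a
// substring match rather than a prefix match.
function isNestedAppMount(path: string): boolean {
	return NESTED_APP_MARKERS.some((m) => path.includes(m));
}

// The combined check used by detection, the list query, and stats.
function shouldHideByMountPath(path: string): boolean {
	return isSystemMountPoint(path) || isNestedAppMount(path);
}

console.log(shouldHideByMountPath("/var/log")); // true  (system prefix)
console.log(shouldHideByMountPath("/mnt/tank/media")); // false (user pool dataset)
console.log(shouldHideByMountPath("/mnt/.ix-apps/app1")); // true  (nested app mount)
```

The substring match for nested app mounts is deliberately looser than the prefix match for system paths, since TrueNAS apps and Docker overlays appear under arbitrary pool roots.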
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify how /home is classified and where the visibility filter affects persisted volumes and stats.
rg -n -C4 'is_system_mount_point|"/home"|should_hide_by_mount_path|auto_track_eligible|is_user_visible' core/src/volume core/src/ops/volumes core/src/library
```

🏁 Script executed:

```shell
git log --oneline -S "/home" -- core/src/volume/utils.rs | head -20
```
**Remove `/home` from the system mount list to avoid hiding user data.**

`/home` is classified as `VolumeType::UserData` (line 115, classification.rs) but also appears in the `is_system_mount_point()` exact-match list (line 118, utils.rs), so it is hidden despite being user data. This contradicts the volume classification and hides the primary user-data volume from detection, listing, and capacity calculations.

Remove `/home` from the exact-match list in `is_system_mount_point()`, or special-case it in `should_hide_by_mount_path()` so it remains visible for user tracking.
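The second option (special-casing rather than delisting) can be sketched as follows. TypeScript for illustration only; the names mirror the Rust helpers in `core/src/volume/utils.rs`, and the abridged list is an assumption:

```typescript
// Hypothetical sketch of the proposed special case: keep /home in the
// exact-match list but exempt it from hiding because it is
// VolumeType::UserData. Not the actual implementation.
const SYSTEM_EXACT = new Set(["/", "/usr", "/var", "/etc", "/home", "/boot"]);

function isSystemMountPoint(path: string): boolean {
	return SYSTEM_EXACT.has(path);
}

function shouldHideByMountPath(path: string): boolean {
	// Proposed exception: /home holds user data, so it must stay visible
	// for tracking and capacity accounting.
	if (path === "/home") return false;
	return isSystemMountPoint(path);
}

console.log(shouldHideByMountPath("/home")); // false (stays visible)
console.log(shouldHideByMountPath("/usr")); // true (still hidden)
```

Keeping the exception in `shouldHideByMountPath` rather than the exact list preserves any other callers that still want `/home` treated as an OS path.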
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/content/docs/core/filesystems.mdx` around lines 130 - 144, The system
mount list currently hides /home despite classification.rs marking it as
VolumeType::UserData; to fix, update is_system_mount_point (in utils.rs) to
remove '/home' from the exact-match set so it is no longer treated as a system
mount, and ensure should_hide_by_mount_path does not treat '/home' as hidden (or
add an explicit exception in should_hide_by_mount_path to allow paths equal to
'/home' to be visible); reference the functions is_system_mount_point and
should_hide_by_mount_path and the VolumeType::UserData classification to locate
the relevant logic and make a single change so /home remains discoverable and
counted in capacity calculations.
```js
turbopack: {
	root: import.meta.dirname,
},
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Description: Inspect engine/runtime declarations and other `import.meta.dirname` usage.
# Expected: Either docs deployment is pinned to a compatible Node/Bun runtime, or the config avoids `import.meta.dirname`.
rg -n -C3 '"engines"|"node"|import\.meta\.dirname' -g 'package.json' -g '.nvmrc' -g 'next.config.mjs'
```

🏁 Script executed:

```shell
# Check for docs-specific package.json or runtime pinning files
find docs -maxdepth 2 -type f \( -name 'package.json' -o -name '.nvmrc' -o -name '.node-version' \) -exec cat -n {} +
```

🏁 Script executed:

```shell
# Check for CI/CD configuration specifying Node version for docs
fd -e 'yml' -e 'yaml' -e 'json' --search-path '.github/workflows' -e '(workflows|ci)' -x grep -l 'node.*version\|docs.*build\|next.*build' {} +
```

🏁 Script executed:

```shell
# Check if there's any environment or config file specifying Node version for docs
git ls-files | grep -E '(\.nvmrc|\.node-version|\.env|vercel\.json|netlify\.toml|dockerfile)' | head -20
```

🏁 Script executed:

```shell
cat -n .node-version .nvmrc
```
**The docs build can break on Node 20.x — `import.meta.dirname` is only available from Node 20.11.0 / 21.2.0.**

The repository's `.nvmrc` and `.node-version` files pin Node to 20.x. On releases before 20.11.0, `import.meta.dirname` evaluates to `undefined` (unknown `import.meta` properties do not throw), so `turbopack.root` is silently unset and the build misbehaves. Use `fileURLToPath`/`dirname` for compatibility across the pinned major version.
🛠️ Portable alternative

```diff
+import { dirname } from 'node:path';
+import { fileURLToPath } from 'node:url';
 import { createMDX } from 'fumadocs-mdx/next';

 const withMDX = createMDX();

+const root = dirname(fileURLToPath(import.meta.url));
@@
 	turbopack: {
-		root: import.meta.dirname,
+		root,
 	},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/next.config.mjs` around lines 8 - 10, The docs config uses
import.meta.dirname under the turbopack config which fails on Node 20.x; replace
import.meta.dirname with a Node-20-compatible resolution using fileURLToPath and
dirname (imported from 'url' and 'path') to compute the directory, then assign
that value to the turbopack.root instead of import.meta.dirname so turbopack,
turbopack.root, and the surrounding next.config.mjs code works on the pinned
Node version.
```tsx
import { RedundancyDashboard } from "./routes/redundancy";
import { AtRiskFiles } from "./routes/redundancy/at-risk";
import { CompareVolumes } from "./routes/redundancy/compare";
```
🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Description: Check whether redundancy routes are eager imports and whether ShellLayout wraps routed content in Suspense.
rg -nP --type=tsx -C3 'from "\./routes/redundancy|React\.lazy|lazy\s*\(|<Suspense\b' packages/interface/src/router.tsx
fd -i '^ShellLayout\.(tsx|ts)$' packages/interface/src -x sh -c '
	echo "---- $1"
	rg -nP -C3 "<Outlet\b|<Suspense\b" "$1"
' sh {}
```

🏁 Script executed:

```shell
#!/bin/bash
# Check lines 4-6 of router.tsx for the eager imports
echo "=== Router.tsx lines 1-20 (checking imports) ==="
sed -n '1,20p' packages/interface/src/router.tsx
echo ""
echo "=== Router.tsx lines 84-110 (checking route elements) ==="
sed -n '84,110p' packages/interface/src/router.tsx
echo ""
echo "=== ShellLayout.tsx full Outlet context ==="
# Find the Outlet and show context
rg -n -B5 -A5 'Outlet' packages/interface/src/ShellLayout.tsx
```
**Lazy-load the new redundancy route pages.**

The `RedundancyDashboard`, `AtRiskFiles`, and `CompareVolumes` components are eagerly imported, increasing the initial explorer bundle size. Per guidelines, use `React.lazy` and wrap the route elements in `Suspense` for code splitting.
♻️ Example refactor

```diff
+import { lazy, Suspense } from "react";
 import { createBrowserRouter, Navigate, Outlet } from "react-router-dom";
 import { Overview } from "./routes/overview";
 import { ExplorerView } from "./routes/explorer";
-import { RedundancyDashboard } from "./routes/redundancy";
-import { AtRiskFiles } from "./routes/redundancy/at-risk";
-import { CompareVolumes } from "./routes/redundancy/compare";
 import { ShellLayout } from "./ShellLayout";
+const RedundancyDashboard = lazy(() =>
+	import("./routes/redundancy").then((module) => ({
+		default: module.RedundancyDashboard,
+	}))
+);
+const AtRiskFiles = lazy(() =>
+	import("./routes/redundancy/at-risk").then((module) => ({
+		default: module.AtRiskFiles,
+	}))
+);
+const CompareVolumes = lazy(() =>
+	import("./routes/redundancy/compare").then((module) => ({
+		default: module.CompareVolumes,
+	}))
+);
+
 {
 	path: "redundancy",
 	children: [
 		{
 			index: true,
-			element: <RedundancyDashboard />,
+			element: (
+				<Suspense fallback={null}>
+					<RedundancyDashboard />
+				</Suspense>
+			),
 		},
 		{
 			path: "at-risk",
-			element: <AtRiskFiles />,
+			element: (
+				<Suspense fallback={null}>
+					<AtRiskFiles />
+				</Suspense>
+			),
 		},
 		{
 			path: "compare",
-			element: <CompareVolumes />,
+			element: (
+				<Suspense fallback={null}>
+					<CompareVolumes />
+				</Suspense>
+			),
 		},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/interface/src/router.tsx` around lines 4 - 6, Replace the eager
imports of RedundancyDashboard, AtRiskFiles, and CompareVolumes with React.lazy
imports and wrap their route elements in a Suspense boundary with a fallback;
specifically, change the import statements for RedundancyDashboard, AtRiskFiles,
and CompareVolumes to use React.lazy(...) and then ensure the route rendering
(the JSX Route elements or the component props that reference
RedundancyDashboard, AtRiskFiles, CompareVolumes in router.tsx) is wrapped in a
Suspense component (import Suspense from React) so those pages are code-split
and loaded on demand.
| <div className="flex h-3 w-full overflow-hidden rounded-full bg-app-box"> | ||
| {/* Redundant segment (safe) */} | ||
| {redundantPercent > 0 && ( | ||
| <motion.div | ||
| className="h-full cursor-pointer bg-status-success/70 hover:bg-status-success/90" | ||
| initial={{ width: 0 }} | ||
| animate={{ width: `${redundantPercent}%` }} | ||
| transition={{ | ||
| duration: 0.6, | ||
| ease: [0.25, 1, 0.5, 1], | ||
| }} | ||
| title={`Redundant: ${formatBytes(redundantBytes)} (${redundantFileCount} files) — safely backed up on other volumes`} | ||
| onClick={() => | ||
| navigate( | ||
| `/redundancy/at-risk?volume=${volumeUuid}&at_risk=false`, | ||
| ) | ||
| } | ||
| /> | ||
| )} | ||
| {/* At-risk segment */} | ||
| {atRiskPercent > 0 && ( | ||
| <motion.div | ||
| className="h-full cursor-pointer bg-status-warning/70 hover:bg-status-warning/90" | ||
| initial={{ width: 0 }} | ||
| animate={{ width: `${atRiskPercent}%` }} | ||
| transition={{ | ||
| duration: 0.6, | ||
| ease: [0.25, 1, 0.5, 1], | ||
| delay: 0.1, | ||
| }} | ||
| title={`At risk: ${formatBytes(atRiskBytes)} (${atRiskFileCount} files) — only copy, not backed up`} | ||
| onClick={() => | ||
| navigate( | ||
| `/redundancy/at-risk?volume=${volumeUuid}&at_risk=true`, | ||
| ) | ||
| } | ||
| /> | ||
| )} |
🧩 Analysis chain

🏁 Script executed:

```shell
cat -n packages/interface/src/routes/redundancy/components/RedundancyVolumeBar.tsx
```

🏁 Script executed:

```shell
# Check if motion.button is used elsewhere in the codebase
rg "motion\.button" packages/interface --type tsx --type ts -A 2 | head -40
```

🏁 Script executed:

```shell
# Check if there are other clickable divs in the redundancy feature that might have same issue
fd "redundancy" packages/interface --type f -name "*.tsx" | head -20
```

🏁 Script executed:

```shell
# Check if motion.button is used elsewhere in the codebase
rg "motion\.button" packages/interface -A 2 | head -50
```

🏁 Script executed:

```shell
# Check redundancy-related files
find packages/interface -path "*redundancy*" -type f -name "*.tsx"
```

🏁 Script executed:

```shell
# Check the motion.button usage in Spacedrop.tsx to see the pattern
rg "motion\.button" packages/interface/src/windows/Spacedrop.tsx -B 2 -A 8
```

🏁 Script executed:

```shell
# Check if other redundancy components have similar clickable div issues
rg "motion\.div" packages/interface/src/routes/redundancy -B 1 -A 5
```
**Replace the clickable segments with keyboard-accessible button elements.**

The redundant and at-risk segments are `motion.div` elements with `onClick` handlers, making them inaccessible to keyboard users, who cannot focus or activate them with Tab/Enter/Space. Use `motion.button` with an accessible label and focus styles instead.
Proposed fix

```diff
-					<motion.div
-						className="h-full cursor-pointer bg-status-success/70 hover:bg-status-success/90"
+					<motion.button
+						type="button"
+						aria-label={`Show redundant files on ${displayName || "Unknown Volume"}`}
+						className="h-full cursor-pointer p-0 bg-status-success/70 border-0 hover:bg-status-success/90 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-accent"
 						initial={{ width: 0 }}
 						animate={{ width: `${redundantPercent}%` }}
 						transition={{
 							duration: 0.6,
 							ease: [0.25, 1, 0.5, 1],
 						}}
-						title={`Redundant: ${formatBytes(redundantBytes)} (${redundantFileCount} files) — safely backed up on other volumes`}
 						onClick={() =>
 							navigate(
 								`/redundancy/at-risk?volume=${volumeUuid}&at_risk=false`,
 							)
 						}
 					/>
```

Apply the same change to the at-risk segment (lines 79–94), replacing `motion.div` with `motion.button` and updating the `aria-label` accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/interface/src/routes/redundancy/components/RedundancyVolumeBar.tsx`
around lines 58 - 95, The redundant and at-risk segments currently render as
clickable motion.div elements (conditional on redundantPercent and
atRiskPercent) which are not keyboard-accessible; replace each motion.div with
motion.button (e.g., the redundant segment where motion.div is used and the
at-risk segment where motion.div is used) keep the same className,
initial/animate/transition props, title and onClick behavior but add
type="button", an appropriate aria-label (e.g., aria-label={`Redundant files:
${redundantFileCount} (${formatBytes(redundantBytes)})`} and similar for
at-risk), and ensure focus styles are present (retain or add focus-visible
outline classes) so keyboard users can tab to and activate the buttons; no other
logic changes to redundantPercent, atRiskPercent, volumeUuid, atRiskFileCount
etc. should be required.


Summary

- Replaces `docs/` with a self-hosted Next.js 16 + Fumadocs 16.7 app. `mint.json`, `docs.json`, `custom.css` deleted.
- `/llms.txt` + `/llms-full.txt` + `/:path.mdx` LLM export routes, edit-on-GitHub, `/` → `/overview/introduction` redirect.
- `#36A3FF` as `--color-fd-primary` on the neutral preset. Deployable at `docs.spacedrive.com`.

What's in the tree

Deferred

- `lastUpdate` timestamps — fumadocs-mdx@14 doesn't surface git-derived modified times without a custom plugin. Removed rather than faked.
- `fumadocs-openapi` integration for API reference — no OpenAPI spec exists yet. `core/api.mdx` ported as-is (turned out to have no `<ResponseField>` in the source).
- Broken links (`/core/libraries` for `library`, `/core/sync`, `/core/security`, `/core/search`, `/core/performance`, `/core/normalized_cache`, `/core/transactions`) — pre-existing, not regressions.

Test plan

- `cd docs && bun install && bun run build` completes clean (179 routes generated in local build)
- `bun run types:check` passes
- `bun run dev` and smoke test:
  - `/` 307s to `/overview/introduction`
  - main
  - `/llms.txt`, `/llms-full.txt`, `/overview/introduction.mdx` return plaintext/markdown
  - `/og/docs/overview/introduction/image.png` renders a branded 1200x630 PNG
- Point `docs.spacedrive.com` at the deployment; set `v2.spacedrive.com` → `docs.spacedrive.com` redirect at the DNS layer

🤖 Generated with Claude Code
Note
This PR migrates the entire documentation site from Mintlify to a self-hosted Fumadocs setup. The 163 changed files represent a complete rewrite of the docs infrastructure and content (6151 additions, 1875 deletions). All 58 documentation pages have been converted to use native Fumadocs components instead of Mintlify-specific markup, resulting in a fully self-contained Next.js application with integrated search, LLM export routes, and AI-powered page actions.
Written by Tembo for commit c471b5a. This will update automatically on new commits.