Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Adds opt-in "native" execution for Node-style scripts and local bins (CLI flags and HNI_NATIVE env), integrates native resolution into the resolver and runner with fallback reasons, replaces Serde/confy storage with plain-text storage, relocates benchmarking into a new benchmark/ suite, and runs CI tests twice for native/delegated modes.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI Parser
    participant Config as Config Loader
    participant Resolver as Resolver
    participant Runner as Runner
    participant NativeExec as Native Executor
    participant ExternalExec as External Executor
    CLI->>Config: Load config (HNI_NATIVE / nativemode)
    CLI->>Resolver: Provide parsed flags (native_override)
    Resolver->>Resolver: Determine effective native_mode
    alt native requested
        Resolver->>Resolver: call attempt_nr / attempt_nlx
        alt Eligible
            Resolver->>Runner: Return ResolvedExecution (strategy: Native)
            Runner->>NativeExec: run_script / run_local_bin
            NativeExec-->>Runner: exit code
        else Ineligible
            Resolver->>Resolver: set native_fallback_reason
            Resolver->>Runner: Return ResolvedExecution (External)
            Runner->>ExternalExec: spawn delegated command
            ExternalExec-->>Runner: exit code
        end
    else native disabled
        Resolver->>Runner: Return ResolvedExecution (External)
        Runner->>ExternalExec: spawn delegated command
        ExternalExec-->>Runner: exit code
    end
    Runner->>CLI: process exit code
```

```mermaid
sequenceDiagram
    participant Resolver as Resolver
    participant NativeAttempt as Native Attempt
    participant PackageUtil as Package Utils
    Resolver->>NativeAttempt: attempt_nr(pm, args, ctx, if_present?)
    NativeAttempt->>PackageUtil: find_nearest_package(cwd)
    alt package.json found
        PackageUtil->>NativeAttempt: return NearestPackage
        NativeAttempt->>NativeAttempt: inspect scripts, pre/post, env constraints
        alt eligible
            NativeAttempt->>Resolver: Eligible(NativeExecution::RunScript)
        else ineligible
            NativeAttempt->>Resolver: Ineligible(reason)
        end
    else no package.json
        NativeAttempt->>Resolver: Ineligible(no package)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 10
🧹 Nitpick comments (5)
benchmark/profile.sh (1)
58-60: Add an explicit `pnpm` prerequisite check for delegated profiling.

The delegated case at line 58 depends on `pnpm` (via the `nr` command), but the script currently only validates `cargo` and `cargo flamegraph`. Without a precheck, failures occur deeper in execution when `cargo flamegraph` attempts to run the package manager command. An explicit check provides clearer diagnostics.

🔧 Suggested patch

```diff
 if ! cargo flamegraph --help >/dev/null 2>&1; then
   echo "cargo flamegraph is required. Install it with: cargo install flamegraph" >&2
   exit 1
 fi
+
+if ! command -v pnpm >/dev/null 2>&1; then
+  echo "pnpm is required for delegated benchmark profiling" >&2
+  exit 1
+fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/profile.sh` around lines 58 - 60, Add an explicit check that pnpm is available before running the delegated profiling case; update the preflight validation (the same block that checks for cargo and cargo flamegraph) to test for pnpm (e.g., via command -v pnpm or similar) and exit with a clear error if missing so the delegated case (profile_case delegated-pnpm-noop / nr invocation) fails fast with a helpful message; ensure the check runs before any calls to profile_case delegated-pnpm-* or uses of nr.

package.json (1)
58-62: Shell script invocations may not work on Windows.

The `bench*` scripts invoke shell scripts directly (`./benchmark/run.sh`), which won't execute on Windows without WSL or Git Bash. If Windows developer support is needed, consider using `node ./benchmark/run.mjs` or adding a cross-platform wrapper.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 58 - 62, The bench scripts (package.json entries "bench", "bench:compare", "bench:native", "bench:runtime", "bench:profile") call shell scripts like ./benchmark/run.sh which won't run on Windows; update those script entries to invoke a cross-platform runner (for example use node ./benchmark/run.mjs or a small JS wrapper that spawns the appropriate shell on POSIX and a .cmd on Windows) or add a cross-platform wrapper package (like shelljs or a npm script that dispatches), ensuring the commands for "bench", "bench:compare", "bench:native", "bench:runtime", and "bench:profile" are replaced with node-based or platform-agnostic invocations.

tests/cli_contract.rs (1)
110-110: Consider setting execute permissions on mock binary for Unix.

The empty `vite` binary file is created without execute permissions. While this may work for resolution/debug tests, native execution tests on Unix typically require the binary to be executable.

💡 Suggested addition for Unix compatibility

```diff
 fs::write(project.join("node_modules").join(".bin").join("vite"), "").unwrap();
+
+#[cfg(unix)]
+{
+    use std::os::unix::fs::PermissionsExt;
+    let vite_path = project.join("node_modules").join(".bin").join("vite");
+    let mut perms = fs::metadata(&vite_path).unwrap().permissions();
+    perms.set_mode(0o755);
+    fs::set_permissions(&vite_path, perms).unwrap();
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli_contract.rs` at line 110, The test creates a mock `vite` file with fs::write(project.join("node_modules").join(".bin").join("vite"), "").unwrap() but doesn't mark it executable, which breaks native-execution tests on Unix; after creating the file set execute permissions (e.g., via std::fs::set_permissions on the Path returned by project.join(...).join("vite") or using std::os::unix::fs::PermissionsExt to set mode 0o755) so the mock binary is executable on Unix while preserving behavior on Windows.

src/core/resolve/build.rs (1)
103-105: Verify the `--` stripping doesn't break edge cases.

The code removes `--` when it's exactly the second argument. This seems intended to normalize `nr script -- args` to `nr script args` for native execution. However, if a user runs `nr -- --help`, the `--` at position 1 (index 0) won't be stripped, which is correct since `args[0]` would be `--`, not `args[1]`.

Consider adding a comment explaining the intent of this normalization for future maintainers.
📝 Suggested comment
```diff
+// Strip explicit `--` delimiter between script name and forwarded args.
+// Native execution doesn't need this separator since args are passed directly.
 if args.get(1).is_some_and(|arg| arg == "--") {
     args.remove(1);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/resolve/build.rs` around lines 103 - 105, Add a short comment above the args.get(1).is_some_and(|arg| arg == "--") { args.remove(1); } block explaining the normalization intent: that when running through the native executor we want to strip a literal "--" only when it's the second argument (args[1]) to transform invocations like `nr script -- args...` into `nr script args...`, and that we intentionally do not remove a leading "--" at args[0] (e.g., `nr -- --help`) to avoid changing top-level CLI semantics; reference the symbols args.get, is_some_and, and args.remove in the comment for clarity.

benchmark/run.mjs (1)
207-214: Avoid routing fixture `node` calls through the wrapper.

These fixtures execute `node -e ...`, but `aliasDir` is prepended to `PATH` with `node -> hni`. Every child script now goes through the wrapper, which contaminates the timings. Leave `node` unaliased and invoke the multicall entrypoint explicitly for the `node run/exec` benchmark cases.

Suggested direction

```diff
 function prepareAliasDir(tempRoot, ourBin) {
   const aliasDir = path.join(tempRoot, 'bin')
   ensureDir(aliasDir)
-  for (const name of ['hni', 'ni', 'nr', 'nlx', 'node']) {
+  for (const name of ['hni', 'ni', 'nr', 'nlx']) {
     createAlias(ourBin, aliasBinPath(aliasDir, name))
   }
   return aliasDir
 }
```

Then materialize the `node` cases through the normal entrypoint instead of PATH shadowing, e.g.:

```diff
- { name: 'native', bin: 'node', args: ['-C', fixturePaths[pm.fixtureKey], 'run', 'noop'], env: { HNI_NATIVE: 'true' } }
+ { name: 'native', bin: 'hni', args: ['node', '-C', fixturePaths[pm.fixtureKey], 'run', 'noop'], env: { HNI_NATIVE: 'true' } }
```

Also applies to: 867-872, 1032-1034
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/run.mjs` around lines 207 - 214, The benchmark scripts currently use plain "node -e ..." values (e.g. scripts noop, build, dev, args, prehooks, hooks, posthooks) which get routed through the aliased PATH (node -> hni); change those entries so they call the real Node binary explicitly (use process.execPath or an absolute node path) instead of relying on PATH aliasing, and for the node run/exec benchmark cases invoke the multicall entrypoint directly rather than relying on PATH shadowing so timings aren’t contaminated by the wrapper.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmark/README.md`:
- Line 5: Replace the hardcoded absolute path in the README sentence with a
repository-relative link; update the text that currently references
"/Users/atlantic/Developer/nirs/benchmark/old" to use a relative markdown link
like "[`benchmark/old`](benchmark/old)" so it works for all developers and
points to the preserved benchmark trees, editing the content in
benchmark/README.md where that sentence appears.
- Around line 75-84: The README contains multiple hardcoded absolute paths like
"/Users/atlantic/Developer/nirs/benchmark/..." used in the Output list and
profiles line; update each occurrence to use relative repository paths (e.g.,
"benchmark/profiles/", "benchmark/results/", "benchmark/results/raw/<track>/",
"benchmark/LATEST.md", "benchmark/HISTORY.md") so links work for other machines
and CI; edit the README.md entries that show those absolute paths and replace
them with the corresponding relative paths and Markdown links.
In `@benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md`:
- Line 13: The artifacts links are using an incorrect path; update the
"Artifacts:" line that currently links to
[native-2026-03-20T00-08-06-117Z.json](results/native-2026-03-20T00-08-06-117Z.json)
and
[native-2026-03-20T00-08-06-117Z.md](results/native-2026-03-20T00-08-06-117Z.md)
to be relative to the current results/ directory (e.g., change to
./native-2026-03-20T00-08-06-117Z.json and ./native-2026-03-20T00-08-06-117Z.md
or remove the leading "results/" prefix) so the links resolve correctly from
within the results folder.
- Line 5: The relative link in the markdown ("Combined JSON:
[benchmark-2026-03-20T00-08-06-117Z.json](results/benchmark-2026-03-20T00-08-06-117Z.json)")
is incorrect because this MD file already lives in the results/ folder; remove
the duplicated directory segment by changing the link target to the bare
filename (e.g. "benchmark-2026-03-20T00-08-06-117Z.json" or
"./benchmark-2026-03-20T00-08-06-117Z.json") so it resolves to the correct file.
In `@README.md`:
- Around line 335-355: Replace the hardcoded absolute local paths in README.md
by using relative repository paths: change the link text instances referencing
`/Users/atlantic/Developer/nirs/benchmark` to `benchmark/` (for the active
suite), and change `/Users/atlantic/Developer/nirs/benchmark/LATEST.md` and
`/Users/atlantic/Developer/nirs/benchmark/HISTORY.md` to `benchmark/LATEST.md`
and `benchmark/HISTORY.md` respectively; update the three affected link
occurrences (the lines showing [`benchmark/`], [`benchmark/LATEST.md`], and
[`benchmark/HISTORY.md`]) so they point to relative paths that work for other
developers and do not expose local filesystem structure.
In `@src/core/native.rs`:
- Around line 329-345: The file fails rustfmt; run rustfmt (or cargo fmt) and
reformat the function merged_path_with_bins (and surrounding native.rs) so it
matches the project's formatting rules—ensure expression chaining with let-else
and the if-let conditions (involving resolve_real_node_path and
path_with_real_node_priority) are formatted per rustfmt, and that
join_paths_string and other symbols keep proper indentation and spacing; commit
the formatted native.rs so cargo fmt --check passes.
- Around line 69-109: The code currently queues the pre hook unconditionally
before verifying the main script, causing pre{script_name} to run even when the
main script is missing and has_if_present is true; move the pre{script_name}
lookup/unsupported_script_reason check and its NativeScriptStep push into the
Some(script) branch (where the main script is found) so pre is only added when
the main script exists, keeping the existing use of scripts.get(...),
unsupported_script_reason(...), NativeScriptStep construction, forward_args
semantics, and the existing post{script_name} handling inside the Some(script)
branch; remove the initial standalone pre block that runs before matching on
scripts.get(&script_name).
In `@src/platform/node.rs`:
- Around line 18-20: The code currently caches None/errors in REAL_NODE_PATH via
resolve_real_node_path_uncached().ok().flatten(), which prevents recovery when
the environment (HNI_REAL_NODE) is later set; change the logic so you only cache
successful resolutions and re-check the environment on each call: in the
function that currently calls REAL_NODE_PATH.get_or_init(...) first inspect
std::env::var_os("HNI_REAL_NODE") and attempt resolve_real_node_path_uncached()
unconditionally if the env var is present, and only call REAL_NODE_PATH.set(...)
/ get_or_init with a PathBuf when resolve_real_node_path_uncached() returns
Ok(Some(path)) (do not cache None or swallow Err); alternatively keep
REAL_NODE_PATH as OnceLock<PathBuf> (not Option) and only initialize it on
success from resolve_real_node_path_uncached(), leaving unresolved cases to
retry later. Ensure resolve errors are propagated or logged instead of being
discarded.
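The cache-only-on-success pattern described above can be sketched in isolation. This is a hypothetical stand-in, not the repository's actual code: the `lookup` parameter replaces the real `HNI_REAL_NODE` / PATH resolution so the retry behavior is visible.

```rust
use std::path::PathBuf;
use std::sync::OnceLock;

static REAL_NODE_PATH: OnceLock<PathBuf> = OnceLock::new();

/// Cache only successful resolutions; a miss is retried on the next call,
/// so a later environment change can still be picked up.
fn real_node_path(lookup: Option<&str>) -> Option<PathBuf> {
    if let Some(cached) = REAL_NODE_PATH.get() {
        return Some(cached.clone());
    }
    let resolved = PathBuf::from(lookup?);
    let _ = REAL_NODE_PATH.set(resolved.clone());
    Some(resolved)
}

fn main() {
    // First attempt fails; nothing is cached.
    assert!(real_node_path(None).is_none());
    // The "environment" changes later; resolution now succeeds and is cached.
    assert_eq!(
        real_node_path(Some("/usr/local/bin/node")),
        Some(PathBuf::from("/usr/local/bin/node"))
    );
    // Subsequent calls hit the cache even without a lookup value.
    assert_eq!(
        real_node_path(None),
        Some(PathBuf::from("/usr/local/bin/node"))
    );
}
```

Caching `Option<PathBuf>` (as the flagged code does) freezes the first `None` forever; caching only the success case keeps the unresolved path retryable.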
In `@tests/native_execution.rs`:
- Around line 7-11: The tests in tests/native_execution.rs are incorrectly using
runtime cfg!(windows) guards while calling the unix-only make_executable() and
POSIX shell snippets, which still cause Windows compile failures; add a
compile-time module-level gate by inserting #![cfg(unix)] at the top of this
file and remove the cfg!(windows) early-return checks inside the test functions
(e.g., native_nr_runs_hooks_from_nearest_package_and_forwards_args and the other
tests around the make_executable() calls) so the file is excluded from non-Unix
builds and the unix-only make_executable() and shell usage no longer cause
compile errors.
---
Nitpick comments:
In `@benchmark/profile.sh`:
- Around line 58-60: Add an explicit check that pnpm is available before running
the delegated profiling case; update the preflight validation (the same block
that checks for cargo and cargo flamegraph) to test for pnpm (e.g., via command
-v pnpm or similar) and exit with a clear error if missing so the delegated case
(profile_case delegated-pnpm-noop / nr invocation) fails fast with a helpful
message; ensure the check runs before any calls to profile_case delegated-pnpm-*
or uses of nr.
In `@benchmark/run.mjs`:
- Around line 207-214: The benchmark scripts currently use plain "node -e ..."
values (e.g. scripts noop, build, dev, args, prehooks, hooks, posthooks) which
get routed through the aliased PATH (node -> hni); change those entries so they
call the real Node binary explicitly (use process.execPath or an absolute node
path) instead of relying on PATH aliasing, and for the node run/exec benchmark
cases invoke the multicall entrypoint directly rather than relying on PATH
shadowing so timings aren’t contaminated by the wrapper.
In `@package.json`:
- Around line 58-62: The bench scripts (package.json entries "bench",
"bench:compare", "bench:native", "bench:runtime", "bench:profile") call shell
scripts like ./benchmark/run.sh which won’t run on Windows; update those script
entries to invoke a cross-platform runner (for example use node
./benchmark/run.mjs or a small JS wrapper that spawns the appropriate shell on
POSIX and a .cmd on Windows) or add a cross-platform wrapper package (like
shelljs or a npm script that dispatches), ensuring the commands for "bench",
"bench:compare", "bench:native", "bench:runtime", and "bench:profile" are
replaced with node-based or platform-agnostic invocations.
In `@src/core/resolve/build.rs`:
- Around line 103-105: Add a short comment above the
args.get(1).is_some_and(|arg| arg == "--") { args.remove(1); } block explaining
the normalization intent: that when running through the native executor we want
to strip a literal "--" only when it's the second argument (args[1]) to
transform invocations like `nr script -- args...` into `nr script args...`, and
that we intentionally do not remove a leading "--" at args[0] (e.g., `nr --
--help`) to avoid changing top-level CLI semantics; reference the symbols
args.get, is_some_and, and args.remove in the comment for clarity.
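The intended `--` normalization can be demonstrated with a standalone sketch (hypothetical helper name, assuming the same `args.get`/`args.remove` logic as the snippet above):

```rust
/// Strip a literal "--" only when it is the second argument, so
/// `nr script -- args...` becomes `nr script args...`, while a leading
/// `--` (as in `nr -- --help`) is left untouched.
fn strip_script_separator(args: &mut Vec<String>) {
    if args.get(1).is_some_and(|arg| arg == "--") {
        args.remove(1);
    }
}

fn main() {
    let mut args: Vec<String> = ["build", "--", "--watch"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    strip_script_separator(&mut args);
    assert_eq!(args, ["build", "--watch"]);

    // A leading "--" sits at index 0, not index 1, so it survives.
    let mut args: Vec<String> = ["--", "--help"].iter().map(|s| s.to_string()).collect();
    strip_script_separator(&mut args);
    assert_eq!(args, ["--", "--help"]);
}
```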
In `@tests/cli_contract.rs`:
- Line 110: The test creates a mock `vite` file with
fs::write(project.join("node_modules").join(".bin").join("vite"), "").unwrap()
but doesn't mark it executable, which breaks native-execution tests on Unix;
after creating the file set execute permissions (e.g., via
std::fs::set_permissions on the Path returned by project.join(...).join("vite")
or using std::os::unix::fs::PermissionsExt to set mode 0o755) so the mock binary
is executable on Unix while preserving behavior on Windows.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7afd449e-ed28-458a-ae05-ef4425086fcb
📒 Files selected for processing (42)
- .github/workflows/ci.yml
- .gitignore
- README.md
- benchmark/HISTORY.md
- benchmark/LATEST.md
- benchmark/README.md
- benchmark/profile.sh
- benchmark/profiles/.gitkeep
- benchmark/results/.gitkeep
- benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md
- benchmark/results/native-2026-03-20T00-08-06-117Z.md
- benchmark/run.mjs
- benchmark/run.sh
- benchmarks/README.md
- benchmarks/run.mjs
- package.json
- scripts/test-modes.mjs
- src/app/cli.rs
- src/app/dispatch.rs
- src/app/doctor.rs
- src/app/help.rs
- src/core/batch.rs
- src/core/config.rs
- src/core/mod.rs
- src/core/native.rs
- src/core/package.rs
- src/core/pkg_json.rs
- src/core/resolve/build.rs
- src/core/runner.rs
- src/core/storage.rs
- src/core/types.rs
- src/features/interactive/nr_scripts.rs
- src/features/nr.rs
- src/platform/node.rs
- tests/cli_contract.rs
- tests/config_detect.rs
- tests/dispatch_multicall.rs
- tests/native_execution.rs
- tests/native_package_managers.rs
- tests/nr_interactive_behavior.rs
- tests/passthrough.rs
- tests/resolve_matrix.rs
💤 Files with no reviewable changes (2)
- benchmarks/README.md
- benchmarks/run.mjs
…ng capabilities. Update `.gitignore` to exclude benchmark artifacts, modify `package.json` scripts for new benchmark commands, and add detailed documentation in `README.md`. Introduce new files for benchmark history and latest results, along with a profiling script. Update CI workflow to support testing in both delegated and native modes. Enhance CLI with native execution options and update configuration handling for native mode.
9dc8668 to
51f7e00
Compare
Actionable comments posted: 8
♻️ Duplicate comments (3)
src/core/package.rs (1)
53-63: ⚠️ Potential issue | 🟠 Major

Reject path-like `bin_name` values before joining.

`resolve_local_bin()` still accepts absolute paths and values like `../foo`, so `Path::join()` can escape `node_modules/.bin`. `exists()` also returns true for directories. Restrict `bin_name` to a single path component and only return files.

Suggested fix
```diff
 pub fn resolve_local_bin(bin_name: &str, bin_dirs: &[PathBuf]) -> Option<PathBuf> {
+    let bin_path = Path::new(bin_name);
+    if bin_path.is_absolute() || bin_path.components().count() != 1 {
+        return None;
+    }
+
     #[cfg(windows)]
     const SUFFIXES: &[&str] = &["", ".cmd", ".exe", ".bat", ".ps1"];
     #[cfg(not(windows))]
     const SUFFIXES: &[&str] = &[""];
@@
         for suffix in SUFFIXES {
             let candidate = dir.join(format!("{bin_name}{suffix}"));
-            if candidate.exists() {
+            if candidate.is_file() {
                 return Some(candidate);
             }
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/package.rs` around lines 53 - 63, In resolve_local_bin, reject path-like bin_name values before joining by rejecting any name whose Path::new(bin_name) has components other than a single Normal component (i.e., no RootDir, ParentDir, or multiple components) so values like "/abs", "..", or "a/b" return None; then when iterating bin_dirs and building candidate (the dir.join(...) using SUFFIXES), only return Some(candidate) if candidate.exists() && candidate.is_file() to avoid returning directories. Use these checks at the start of resolve_local_bin and right before returning Some.

tests/native_execution.rs (1)
7-11: ⚠️ Potential issue | 🔴 Critical

Gate this test module with `#[cfg(unix)]` instead of `cfg!(windows)`.

The early returns are runtime-only. The bodies still compile on Windows, and the calls to `make_executable()` at lines 68 and 149 fail there because that helper only exists behind `#[cfg(unix)]`.

Suggested fix

```diff
+#![cfg(unix)]
+
 use std::fs;
@@
-    if cfg!(windows) {
-        return;
-    }
```

```bash
#!/bin/bash
rg -n -C2 "cfg!\(windows\)|#\[cfg\(unix\)\]|make_executable" tests/native_execution.rs
```

Also applies to: 52-56, 90-94, 129-133, 190-197
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/native_execution.rs` around lines 7 - 11, The tests in tests/native_execution.rs are only valid on Unix but currently use runtime checks (cfg!(windows)) and early returns; change the module/test gating to compile-time by adding #[cfg(unix)] to the test functions or enclosing module so Windows-only helpers (like make_executable) are not compiled on Windows; remove the runtime if cfg!(windows) { return; } blocks from the tests (e.g., native_nr_runs_hooks_from_nearest_package_and_forwards_args and the other affected tests around the make_executable calls) so the tests are skipped at compile time instead of returning at runtime.

src/core/native.rs (1)
66-106: ⚠️ Potential issue | 🟠 Major

Don't queue `pre<script>` before you've confirmed the main script exists.

Line 66 pushes the pre-hook before the `scripts.get(&script_name)` match. With `--if-present`, a missing main script still leaves `steps` non-empty, so `run_script()` executes the pre-hook instead of cleanly no-oping.

Suggested fix
```diff
-    if let Some(pre) = scripts.get(&format!("pre{script_name}")) {
-        if let Some(reason) = unsupported_script_reason(&format!("pre{script_name}"), pre) {
-            return Ok(NativeAttempt::Ineligible(reason));
-        }
-        steps.push(NativeScriptStep {
-            event_name: format!("pre{script_name}"),
-            command: pre.clone(),
-            forward_args: false,
-        });
-    }
-
     match scripts.get(&script_name) {
         Some(script) => {
+            if let Some(pre) = scripts.get(&format!("pre{script_name}")) {
+                if let Some(reason) =
+                    unsupported_script_reason(&format!("pre{script_name}"), pre)
+                {
+                    return Ok(NativeAttempt::Ineligible(reason));
+                }
+                steps.push(NativeScriptStep {
+                    event_name: format!("pre{script_name}"),
+                    command: pre.clone(),
+                    forward_args: false,
+                });
+            }
+
             if let Some(reason) = unsupported_script_reason(&script_name, script) {
                 return Ok(NativeAttempt::Ineligible(reason));
             }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/native.rs` around lines 66 - 106, The pre-hook is being pushed into steps before confirming the main script exists, so when has_if_present is true a missing main script still leaves steps non-empty and run_script() runs the pre-hook; move the pre{script_name} lookup and its unsupported_script_reason check inside the Some(script) arm (after confirming scripts.get(&script_name) is Some) and only push the NativeScriptStep for pre{script_name} there, keeping the current behavior for post{script_name} and preserving use of formats, NativeScriptStep, scripts, has_if_present, and returning NativeAttempt::Ineligible on unsupported cases.
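The hook-ordering fix can be illustrated with a minimal sketch, assuming a plain map of scripts and a simplified `Step` type (names are hypothetical, not the crate's real types):

```rust
use std::collections::BTreeMap;

struct Step {
    name: String,
    forward_args: bool,
}

/// Queue pre/main/post hooks only once the main script is known to exist,
/// so a missing script (e.g. with `--if-present`) is a clean no-op.
fn build_steps(scripts: &BTreeMap<String, String>, script: &str) -> Vec<Step> {
    let mut steps = Vec::new();
    if scripts.contains_key(script) {
        if scripts.contains_key(&format!("pre{script}")) {
            steps.push(Step { name: format!("pre{script}"), forward_args: false });
        }
        steps.push(Step { name: script.to_string(), forward_args: true });
        if scripts.contains_key(&format!("post{script}")) {
            steps.push(Step { name: format!("post{script}"), forward_args: false });
        }
    }
    steps
}

fn main() {
    let mut scripts = BTreeMap::new();
    scripts.insert("prebuild".to_string(), "echo pre".to_string());
    // "build" itself is missing: no steps at all, so the pre-hook never runs.
    assert!(build_steps(&scripts, "build").is_empty());

    scripts.insert("build".to_string(), "echo build".to_string());
    let steps = build_steps(&scripts, "build");
    assert_eq!(steps.len(), 2);
    assert_eq!(steps[0].name, "prebuild");
    assert!(!steps[0].forward_args);
    assert!(steps[1].forward_args);
}
```

Building the whole step list inside the "main script exists" branch makes the `--if-present` no-op fall out structurally rather than needing a separate check.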
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmark/profile.sh`:
- Around line 4-6: profile_case() currently runs cargo flamegraph from the
caller's CWD causing failures locating Cargo.toml; update the cargo flamegraph
invocation inside profile_case to run from the computed REPO_ROOT (either by
prefixing with a cd "$REPO_ROOT" && or by using cargo's --manifest-path pointing
at "$REPO_ROOT/Cargo.toml") so that SCRIPT_DIR and REPO_ROOT variables are used
to ensure cargo runs from the repository root.
In `@benchmark/run.mjs`:
- Around line 40-46: The Bun fixture entries currently list only 'bun' in their
requiredBins, but the runner calls node (node -e "") so add 'node' to the
requiredBins array for the Bun fixture(s) (the objects where id: 'bun' / label:
'bun' are defined) so availableBinaries() will probe for node as well; update
every Bun fixture instance in this file (the other similar bun entries) to
include 'node' in requiredBins to ensure those cases are skipped on Node-less
environments.
- Around line 117-127: Add an early Windows guard in main() so the script fails
fast on Windows: detect Windows via process.platform === 'win32' (or
require('os').platform()) at the start of main(), log a clear error mentioning
Windows is unsupported and that POSIX shell and Unix binaries are required, then
exit with non-zero status (process.exit(1)). This prevents later calls that
assume POSIX shell or Unix binaries (e.g., ensureBinary which uses
spawnSync('sh', ...), any POSIX quoting blocks, and usage of Unix-only
paths/binaries referenced alongside aliasBinPath and createAlias) from running
on Windows; keep the rest of the code unchanged.
- Around line 276-280: The installAntfuNi function currently installs whatever
`@antfu/ni` is latest, making benchmarks unreproducible; update installAntfuNi to
pin or record the version: add a NI_VERSION constant (or read from env/benchmark
config) and use it in the run call (install '@antfu/ni@' + NI_VERSION) to
guarantee a deterministic package, and/or capture the resolved version after
install and include it in the benchmark artifact payload so the exact `@antfu/ni`
version used is recorded; refer to the installAntfuNi function and the run(...)
invocation to implement these changes.
In `@docs/native-compat.md`:
- Line 29: Update the docs table row that currently references the wrong config
key name "nativeMode" to use the actual config key "native_mode"; locate the
table entry mentioning `np`, `ns` and replace the `nativeMode` text with
`native_mode` so the documentation points to the correct setting.
In `@README.md`:
- Around line 363-389: Add a short actionable note to the native-mode section
explaining how to enable/disable native mode: mention the CLI flags --native and
--no-native, the environment variable HNI_NATIVE=true|false, and the
corresponding config setting (e.g., native: true/false) so readers can reproduce
the benchmarks; update the paragraph that references HNI_NATIVE to include these
three ways to opt in/out and a brief example of usage (CLI flag, env var, and
config key) for clarity.
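A minimal sketch of the opt-in resolution, assuming flag-over-env-over-config precedence (the actual precedence in the CLI may differ):

```rust
/// Effective native mode: an explicit CLI flag (--native / --no-native)
/// wins, then the HNI_NATIVE env var, then the persisted config value.
/// Precedence order is an assumption for illustration.
fn effective_native_mode(cli_flag: Option<bool>, env_var: Option<bool>, config: bool) -> bool {
    cli_flag.or(env_var).unwrap_or(config)
}

fn main() {
    // --no-native overrides HNI_NATIVE=true.
    assert!(!effective_native_mode(Some(false), Some(true), true));
    // HNI_NATIVE=true overrides a config default of false.
    assert!(effective_native_mode(None, Some(true), false));
    // Nothing set: fall back to config.
    assert!(effective_native_mode(None, None, true));
}
```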
In `@src/core/native.rs`:
- Around line 55-60: Both native-path decision points must respect Yarn Berry
PnP by detecting .pnp.* in the package root or any ancestor and returning
NativeAttempt::Ineligible with the existing PnP message; update attempt_nr()
(which currently only checks pkg.root/.pnp.*) and add the same guard to
attempt_nlx() to prevent native execution for PackageManager::YarnBerry when PnP
is present. Modify has_yarn_pnp_loader() to walk parent directories from
pkg.root up to the filesystem root searching for .pnp.* files, and use that
updated helper in both attempt_nr() and attempt_nlx() where
PackageManager::YarnBerry and empty bin_paths would trigger native handling so
they instead return the same Ineligible result.
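The ancestor walk can be sketched as follows (hypothetical helper; the real `has_yarn_pnp_loader()` may check a different set of loader filenames):

```rust
use std::fs;
use std::path::Path;

/// Walk from `start` up through its ancestors looking for a Yarn PnP
/// loader file (.pnp.cjs / .pnp.mjs / .pnp.js; extensions assumed).
fn has_yarn_pnp_loader(start: &Path) -> bool {
    start.ancestors().any(|dir| {
        ["cjs", "mjs", "js"]
            .iter()
            .any(|ext| dir.join(format!(".pnp.{ext}")).is_file())
    })
}

fn main() {
    let root = std::env::temp_dir().join("pnp-walk-demo");
    let nested = root.join("packages").join("app");
    fs::create_dir_all(&nested).unwrap();
    let marker = root.join(".pnp.cjs");
    // A .pnp.cjs two levels above `nested` is found by the ancestor walk.
    fs::write(&marker, "").unwrap();
    assert!(has_yarn_pnp_loader(&nested));
    fs::remove_file(&marker).unwrap();
}
```

`Path::ancestors()` already yields the path itself followed by every parent up to the root, which is exactly the walk the comment asks for.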
---
Duplicate comments:
In `@src/core/native.rs`:
- Around line 66-106: The pre-hook is being pushed into steps before confirming
the main script exists, so when has_if_present is true a missing main script
still leaves steps non-empty and run_script() runs the pre-hook; move the
pre{script_name} lookup and its unsupported_script_reason check inside the
Some(script) arm (after confirming scripts.get(&script_name) is Some) and only
push the NativeScriptStep for pre{script_name} there, keeping the current
behavior for post{script_name} and preserving use of formats, NativeScriptStep,
scripts, has_if_present, and returning NativeAttempt::Ineligible on unsupported
cases.
In `@src/core/package.rs`:
- Around line 53-63: In resolve_local_bin, reject path-like bin_name values
before joining by rejecting any name whose Path::new(bin_name) has components
other than a single Normal component (i.e., no RootDir, ParentDir, or multiple
components) so values like "/abs", "..", or "a/b" return None; then when
iterating bin_dirs and building candidate (the dir.join(...) using SUFFIXES),
only return Some(candidate) if candidate.exists() && candidate.is_file() to
avoid returning directories. Use these checks at the start of resolve_local_bin
and right before returning Some.
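The single-component guard can be expressed as a standalone predicate, a sketch with a hypothetical name rather than the crate's actual API:

```rust
use std::path::{Component, Path};

/// Accept only bin names that are a single Normal path component,
/// rejecting absolute paths and anything containing separators or "..".
fn is_safe_bin_name(bin_name: &str) -> bool {
    let mut components = Path::new(bin_name).components();
    matches!(
        (components.next(), components.next()),
        (Some(Component::Normal(_)), None)
    )
}

fn main() {
    assert!(is_safe_bin_name("vite"));
    assert!(is_safe_bin_name("tsc.cmd"));
    assert!(!is_safe_bin_name("../vite"));
    assert!(!is_safe_bin_name("/usr/bin/vite"));
    assert!(!is_safe_bin_name("a/b"));
    assert!(!is_safe_bin_name(".."));
    assert!(!is_safe_bin_name(""));
}
```

Checking for exactly one `Component::Normal` rejects `RootDir`, `ParentDir`, and multi-segment values in a single pattern, instead of enumerating bad prefixes by hand.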
In `@tests/native_execution.rs`:
- Around line 7-11: The tests in tests/native_execution.rs are only valid on
Unix but currently use runtime checks (cfg!(windows)) and early returns; change
the module/test gating to compile-time by adding #[cfg(unix)] to the test
functions or enclosing module so Windows-only helpers (like make_executable) are
not compiled on Windows; remove the runtime if cfg!(windows) { return; } blocks
from the tests (e.g.,
native_nr_runs_hooks_from_nearest_package_and_forwards_args and the other
affected tests around the make_executable calls) so the tests are skipped at
compile time instead of returning at runtime.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 53edc29a-e1c6-4c85-b477-a07a116aa387
📒 Files selected for processing (43)
- .github/workflows/ci.yml
- .gitignore
- README.md
- benchmark/HISTORY.md
- benchmark/LATEST.md
- benchmark/README.md
- benchmark/profile.sh
- benchmark/profiles/.gitkeep
- benchmark/results/.gitkeep
- benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md
- benchmark/results/native-2026-03-20T00-08-06-117Z.md
- benchmark/run.mjs
- benchmark/run.sh
- benchmarks/README.md
- benchmarks/run.mjs
- docs/native-compat.md
- package.json
- scripts/test-modes.mjs
- src/app/cli.rs
- src/app/dispatch.rs
- src/app/doctor.rs
- src/app/help.rs
- src/core/batch.rs
- src/core/config.rs
- src/core/mod.rs
- src/core/native.rs
- src/core/package.rs
- src/core/pkg_json.rs
- src/core/resolve/build.rs
- src/core/runner.rs
- src/core/storage.rs
- src/core/types.rs
- src/features/interactive/nr_scripts.rs
- src/features/nr.rs
- src/platform/node.rs
- tests/cli_contract.rs
- tests/config_detect.rs
- tests/dispatch_multicall.rs
- tests/native_execution.rs
- tests/native_package_managers.rs
- tests/nr_interactive_behavior.rs
- tests/passthrough.rs
- tests/resolve_matrix.rs
💤 Files with no reviewable changes (2)
- benchmarks/README.md
- benchmarks/run.mjs
✅ Files skipped from review due to trivial changes (11)
- benchmark/HISTORY.md
- src/app/doctor.rs
- benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md
- src/core/mod.rs
- .github/workflows/ci.yml
- tests/passthrough.rs
- tests/nr_interactive_behavior.rs
- benchmark/LATEST.md
- tests/config_detect.rs
- src/features/interactive/nr_scripts.rs
- tests/dispatch_multicall.rs
🚧 Files skipped from review as they are similar to previous changes (13)
- .gitignore
- src/app/help.rs
- src/features/nr.rs
- src/core/batch.rs
- src/core/runner.rs
- scripts/test-modes.mjs
- tests/cli_contract.rs
- package.json
- src/platform/node.rs
- src/core/pkg_json.rs
- src/core/storage.rs
- tests/resolve_matrix.rs
- src/core/resolve/build.rs
```shell
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "$SCRIPT_DIR/.." && pwd)"
RESULT_DIR="$REPO_ROOT/benchmark/profiles"
```
Run cargo flamegraph from REPO_ROOT.
profile_case() currently inherits the caller's working directory, so invoking this script from outside the repo fails once cargo flamegraph tries to locate Cargo.toml. The script already computes REPO_ROOT; use it for the cargo invocation as well.
Suggested fix
```diff
 profile_case() {
   local name="$1"
   shift
   local output="$RESULT_DIR/$(timestamp)-$name.svg"
   echo "[benchmark] flamegraph: $name"
-  cargo flamegraph --bin hni --output "$output" -- "$@"
+  (
+    cd "$REPO_ROOT"
+    cargo flamegraph --bin hni --output "$output" -- "$@"
+  )
   echo "[benchmark] wrote $output"
 }
```
Also applies to: 46-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/profile.sh` around lines 4 - 6, profile_case() currently runs cargo
flamegraph from the caller's CWD causing failures locating Cargo.toml; update
the cargo flamegraph invocation inside profile_case to run from the computed
REPO_ROOT (either by prefixing with a cd "$REPO_ROOT" && or by using cargo's
--manifest-path pointing at "$REPO_ROOT/Cargo.toml") so that SCRIPT_DIR and
REPO_ROOT variables are used to ensure cargo runs from the repository root.
```js
  id: 'bun',
  label: 'bun',
  fixtureKey: 'bun',
  packageManager: 'bun@1.3.5',
  lockfile: 'bun.lockb',
  requiredBins: ['bun'],
},
```
Add node to the Bun fixture prerequisites.
The Bun fixture scripts still execute node -e "", but the Bun entry only requires bun and availableBinaries() never probes node. On Bun-only machines those cases will be marked runnable and then fail inside hyperfine instead of being skipped.
Suggested fix
```diff
 {
   id: 'bun',
   label: 'bun',
   fixtureKey: 'bun',
   packageManager: 'bun@1.3.5',
   lockfile: 'bun.lockb',
-  requiredBins: ['bun'],
+  requiredBins: ['bun', 'node'],
 },
```

```diff
-  for (const name of ['npm', 'npx', 'pnpm', 'yarn', 'bun', 'deno']) {
+  for (const name of ['node', 'npm', 'npx', 'pnpm', 'yarn', 'bun', 'deno']) {
```
Also applies to: 197-223, 266-274
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/run.mjs` around lines 40 - 46, The Bun fixture entries currently
list only 'bun' in their requiredBins, but the runner calls node (node -e "") so
add 'node' to the requiredBins array for the Bun fixture(s) (the objects where
id: 'bun' / label: 'bun' are defined) so availableBinaries() will probe for node
as well; update every Bun fixture instance in this file (the other similar bun
entries) to include 'node' in requiredBins to ensure those cases are skipped on
Node-less environments.
```js
function ensureBinary(name, installHint = '') {
  const result = spawnSync('sh', ['-c', `command -v ${name}`], {
    encoding: 'utf8',
  })
  const value = result.stdout.trim()
  if (value) {
    return value
  }
  const suffix = installHint ? ` (${installHint})` : ''
  throw new Error(`required binary not found: ${name}${suffix}`)
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's check if the file exists and explore its structure
fd -t f "run.mjs" benchmark/

# Get the line count to understand file size
wc -l benchmark/run.mjs

# Read the file to examine the specific functions mentioned
cat -n benchmark/run.mjs | head -300

# Search for all spawnSync calls and process.platform references
rg "spawnSync|process.platform|target/release" benchmark/run.mjs -n

# Check lines around 992-1002
sed -n '985,1010p' benchmark/run.mjs

# Search for target/release references
rg "target.*release|release.*hni" benchmark/run.mjs -n -B2 -A2

# Find the main function
rg "^async function main|^function main" benchmark/run.mjs -n -A30

# Check if there's already a Windows platform check early in main
sed -n '1100,1137p' benchmark/run.mjs
```

Repository: happytoolin/hni
Add early Windows guard in main() to fail fast.
The benchmark runner uses spawnSync('sh', ...) (lines 118, 270), POSIX shell quoting (lines 129–155), and assumes Unix binary paths like target/release/hni (line 1002). These hardcoded assumptions will cause failures on Windows where the POSIX shell does not exist, Windows cmd/PowerShell use different quoting syntax, and binaries need .exe extensions.
The codebase already demonstrates Windows awareness elsewhere (e.g., aliasBinPath, createAlias at lines 255–264), so full support is possible. Until then, guard early in main() to produce a clear, immediate error:
Suggested minimal guard
```diff
 function main() {
+  if (process.platform === 'win32') {
+    throw new Error('benchmark runner currently requires a POSIX shell')
+  }
+
   const args = parseArgs(process.argv.slice(2))
```
Also applies to: lines 129–155, 266–274, 992–1002
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/run.mjs` around lines 117 - 127, Add an early Windows guard in
main() so the script fails fast on Windows: detect Windows via process.platform
=== 'win32' (or require('os').platform()) at the start of main(), log a clear
error mentioning Windows is unsupported and that POSIX shell and Unix binaries
are required, then exit with non-zero status (process.exit(1)). This prevents
later calls that assume POSIX shell or Unix binaries (e.g., ensureBinary which
uses spawnSync('sh', ...), any POSIX quoting blocks, and usage of Unix-only
paths/binaries referenced alongside aliasBinPath and createAlias) from running
on Windows; keep the rest of the code unchanged.
```js
function installAntfuNi(repoRoot, cacheDir) {
  ensureDir(cacheDir)
  process.stdout.write('Installing/updating @antfu/ni in benchmark cache...\n')
  run('npm', ['i', '-g', '@antfu/ni', '--prefix', cacheDir], { cwd: repoRoot })
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's check the exact content at lines 276-280 in benchmark/run.mjs
if [ -f benchmark/run.mjs ]; then
  echo "=== Lines 276-280 of benchmark/run.mjs ==="
  sed -n '276,280p' benchmark/run.mjs
  echo ""
  echo "=== Context: lines 270-285 ==="
  sed -n '270,285p' benchmark/run.mjs
fi

# Check if benchmark/HISTORY.md exists
if [ -f benchmark/HISTORY.md ]; then
  echo "=== benchmark/HISTORY.md exists ==="
  wc -l benchmark/HISTORY.md
  echo ""
  echo "=== First 50 lines ==="
  head -50 benchmark/HISTORY.md
fi

# Search for any package.json or version specs in benchmark directory
find benchmark -type f \( -name "package.json" -o -name "*.json" -o -name "*.mjs" -o -name "*.js" \) | head -20

# Search for any version pinning or specification related to `@antfu/ni`
rg -i "@antfu/ni" --type js --type mjs -B 2 -A 2

# Check the benchmark directory structure
ls -la benchmark/

# Search for `@antfu/ni` mentions more broadly
rg "@antfu/ni" -B 3 -A 3

# Look at the run.mjs file structure - get line count and search for key terms
wc -l benchmark/run.mjs
echo "==="
# Search for version-related code
rg -i "version\|@antfu/ni\|npm.*@antfu" benchmark/run.mjs -A 2 -B 2

# Check what's in the benchmark result files to see if versions are recorded
if [ -d benchmark/results ]; then
  echo "=== Benchmark results directory ==="
  ls -la benchmark/results/
  echo ""
  # Check one of the JSON files to see what's recorded
  if [ -f benchmark/results/benchmark-2026-03-20T00-08-06-117Z.json ]; then
    echo "=== Sample JSON result structure ==="
    head -100 benchmark/results/benchmark-2026-03-20T00-08-06-117Z.json
  fi
fi

# Check if there's any package.json at root or in benchmark that might specify versions
if [ -f package.json ]; then
  echo "=== Root package.json ==="
  cat package.json | head -50
fi

# Check one of the benchmark result files to see what data is captured
if [ -f "benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md" ]; then
  echo "=== Benchmark result file content ==="
  cat "benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md"
fi

# Check the full installAntfuNi function and surrounding code to understand the broader context
sed -n '270,320p' benchmark/run.mjs

# Search for any code that might record or check versions after installation
rg "installed|resolved|version" benchmark/run.mjs -i -B 2 -A 2

# Check if there are any npm lock files or caching mechanisms
find . -maxdepth 2 -type f \( -name "package-lock.json" -o -name "npm-shrinkwrap.json" -o -name ".npmrc" \) 2>/dev/null

# Check if there's code that queries or records the installed `@antfu/ni` version
rg -i "antfu.*ni.*version\|version.*antfu.*ni" benchmark/run.mjs -B 3 -A 3

# Look at how the benchmark execution captures data - check for any metadata recording
sed -n '1,100p' benchmark/run.mjs | head -50

# Check if there's a recent git history showing changes to this function
git log --oneline -20 -- benchmark/run.mjs 2>/dev/null || echo "Git log not available or no history"

# Search for how the benchmark commands are executed and if any environment info is captured
rg "hyperfine|spawnSync" benchmark/run.mjs -B 2 -A 2 | head -80
```

Repository: happytoolin/hni
Pin or record the @antfu/ni version in benchmarks.
The compare track installs whatever @antfu/ni version is latest, making benchmark comparisons unreproducible. Other package managers have pinned versions (npm@10.0.0, pnpm@9.0.0, yarn@1.22.0, bun@1.3.5), but @antfu/ni does not. Either add a version specifier to the install command (e.g., @antfu/ni@0.x.y) or include the resolved version in the benchmark artifact payload.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/run.mjs` around lines 276 - 280, The installAntfuNi function
currently installs whatever `@antfu/ni` is latest, making benchmarks
unreproducible; update installAntfuNi to pin or record the version: add a
NI_VERSION constant (or read from env/benchmark config) and use it in the run
call (install '@antfu/ni@' + NI_VERSION) to guarantee a deterministic package,
and/or capture the resolved version after install and include it in the
benchmark artifact payload so the exact `@antfu/ni` version used is recorded;
refer to the installAntfuNi function and the run(...) invocation to implement
these changes.
```markdown
| `nlx` | Yes, local bins only | Uses native execution only when a local executable can be resolved confidently. |
| `node exec` / `node x` / `node dlx` | Yes, local bins only | Inherits the same local-bin behavior as `nlx`. |
| `ni`, `nci`, `nu`, `nun`, `na` | No | These remain delegated to the detected package manager. |
| `np`, `ns` | Already native | These are not controlled by `nativeMode`. |
```
Use the actual config key name here.
The PR introduces native_mode, but this row says nativeMode. That points readers at the wrong setting.
Suggested fix
```diff
-| `np`, `ns` | Already native | These are not controlled by `nativeMode`. |
+| `np`, `ns` | Already native | These are not controlled by `native_mode`. |
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/native-compat.md` at line 29, Update the docs table row that currently
references the wrong config key name "nativeMode" to use the actual config key
"native_mode"; locate the table entry mentioning `np`, `ns` and replace the
`nativeMode` text with `native_mode` so the documentation points to the correct
setting.
Native mode is where `hni` shows the clearest gains in the current snapshot:

| Case | Delegated | Native | Gain |
| --- | ---: | ---: | ---: |
| `nr noop (npm)` | 372.01 ms | 44.28 ms | 8.40x |
| `nr noop (pnpm)` | 438.42 ms | 36.04 ms | 12.16x |
| `node run noop (pnpm)` | 594.08 ms | 34.30 ms | 17.32x |
| `nlx hello --flag (npm local bin)` | 276.90 ms | 6.78 ms | 40.87x |

Overall native-mode geometric mean in the current tracked snapshot: `5.02x`.

The runtime track keeps `bun` and `deno` separate from the Antfu comparison and focuses on a small fair set of task-style commands:

| Case | `hni` | `bun` | `deno` |
| --- | ---: | ---: | ---: |
| `task noop` | 35.50 ms | 42.98 ms | 33.00 ms |
| `task hooks` | 102.40 ms | 120.64 ms | 40.49 ms |

For CLI/startup-oriented comparisons against Antfu's `ni`, the results are closer and depend heavily on the exact command shape. The `compare` track is intentionally small and should be read as a lightweight sanity check, not a broad claim about every workflow.

### Methodology

- All CLI timing uses `hyperfine`.
- `hni` is measured with the release binary from `target/release/hni`.
- `compare` benchmarks `@antfu/ni` against `hni` on a very small CLI-focused set.
- `native` benchmarks `HNI_NATIVE=false` vs `HNI_NATIVE=true`.
- `runtime` benchmarks `hni`, `bun`, and `deno` separately from the Antfu comparison to keep the chart fair.
Mention how users actually turn native mode on.
This new section explains the native-mode wins and references HNI_NATIVE, but it still doesn't tell readers how to opt into the feature in normal usage. Add a short pointer to --native, --no-native, and the config/env settings so the benchmark section is actionable.
Suggested fix
```diff
 Native mode is where `hni` shows the clearest gains in the current snapshot:
+
+You can enable it per command with `--native`, disable it with `--no-native`,
+or set `HNI_NATIVE=true` / `nativeMode=true` as the default.

 | Case | Delegated | Native | Gain |
 | --- | ---: | ---: | ---: |
```
🧰 Tools
🪛 LanguageTool
[style] ~387-~387: As an alternative to the over-used intensifier 'very', consider replacing this phrase.
Context: "...benchmarks `@antfu/ni` against `hni` on a very small CLI-focused set. - `native` benchmarks ..."
(EN_WEAK_ADJECTIVE)

[style] ~389-~389: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: "...`false` vs `HNI_NATIVE=true`. - `runtime` benchmarks `hni`, `bun`, and `deno` separately fro..."
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 363 - 389, Add a short actionable note to the
native-mode section explaining how to enable/disable native mode: mention the
CLI flags --native and --no-native, the environment variable
HNI_NATIVE=true|false, and the corresponding config setting (e.g., native:
true/false) so readers can reproduce the benchmarks; update the paragraph that
references HNI_NATIVE to include these three ways to opt in/out and a brief
example of usage (CLI flag, env var, and config key) for clarity.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/resolve/build.rs (1)
95-107: ⚠️ Potential issue | 🟠 Major

Default `nr` to `start` after stripping control flags.

`nr --if-present` now reaches `build_exec()`/`node --run` with no script name because `"start"` is injected before `--if-present` is removed. Native mode still defaults to `"start"` via `attempt_nr()`, so the same CLI resolves differently depending on execution mode.

💡 Suggested fix

```diff
-    if args.is_empty() {
-        args.push("start".to_string());
-    }
-
     let has_if_present = args.iter().any(|a| a == "--if-present");
     if has_if_present {
         args = exclude_flag(args, "--if-present");
     }
+
+    if args.is_empty() {
+        args.push("start".to_string());
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/resolve/build.rs` around lines 95-107, the default "start" is being injected before control flags are stripped, causing `--if-present` to be removed after insertion so `build_exec()`/`node --run` can end up with no script; to fix, first detect and remove control flags (use `has_if_present` and call `exclude_flag(args, "--if-present")` when true), remove the separator (`normalized_args.get(1)... remove(1)`) on the canonical args, then only after these mutations check `args.is_empty()` and push `"start"`; also ensure `normalized_args` is created/updated (clone) after you mutate `args` so both `args` and `normalized_args` reflect the stripped flags before applying the default.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmark/run.mjs`:
- Around line 602-620: The code assumes
payload.summary.geometric_mean_relative_to_first has keys and payload.results is
non-empty; if either is empty competitor becomes undefined and accessing
row.participants[competitor] will crash. Update the block that computes
competitor and iterates payload.results to first defensively check that
payload.results.length > 0 and
Object.keys(payload.summary.geometric_mean_relative_to_first || {}).length > 0
(or that competitor is defined) before building the header and looping; if the
checks fail, skip this section or emit a safe message/placeholder, and avoid
accessing row.participants[competitor] or row.relative_to_first_mean[competitor]
when competitor is undefined. Ensure you reference payload, payload.results,
payload.summary.geometric_mean_relative_to_first, competitor, baselineLabel,
row.participants, and row.relative_to_first_mean when making the fix.
- Around line 666-698: In trackTable, the computed competitor can be undefined
when payload.summary.geometric_mean_relative_to_first is empty, causing
row.participants[competitor] to throw in the loop; update trackTable to guard
against a missing competitor by deriving a safe fallback (e.g., use baseline or
a string like 'other') and/or check row.participants[competitor] before
accessing it, and ensure the table line uses that fallback or skips the relative
column when the competitor is absent (update references inside trackTable where
competitor is read and where row.participants[competitor] and
row.relative_to_first_mean[competitor] are used).
In `@src/core/package.rs`:
- Around line 75-85: The resolver resolve_declared_package_bin currently accepts
raw manifest bin paths that may be absolute or escape the package root; update
it to reject absolute bin values and ensure the resolved candidate lives under
the package directory before returning it. Specifically, in
resolve_declared_package_bin (and where manifest.bin_command_path is used):
skip/continue when the returned relative path is absolute; after computing
candidate = dir.join(relative), canonicalize both candidate and dir (handling
errors by skipping) and only consider candidate if
candidate_canon.starts_with(dir_canon); then perform the existing file-existence
check and return only when the candidate is inside the package root.
In `@src/core/resolve/build.rs`:
- Around line 133-142: The branch that builds node_args when
ctx.config.run_agent == RunAgent::Node currently unconditionally uses the
"--run" flag which fails on Node <22; add a Node.js capability/version check
before adding "--run" (e.g., call a helper like detect_node_version or spawn
`node --version`) and only push "--run" into node_args if the detected semantic
version is >=22.0.0; if the version is older, build a compatible invocation
(remove "--run" and use the existing script/path invocation instead) and return
the ResolvedExecution::external as before. Alternatively, if you prefer not to
runtime-check, update package.json engines to require Node >=22 and guard the
code with that policy, but do not leave the unconditional "--run" in the
RunAgent::Node branch.
---
Outside diff comments:
In `@src/core/resolve/build.rs`:
- Around line 95-107: The default "start" is being injected before control flags
are stripped, causing `--if-present` to be removed after insertion so
`build_exec()`/`node --run` can end up with no script; to fix, first detect and
remove control flags (use `has_if_present` and call `exclude_flag(args,
"--if-present")` when true), remove the separator (`normalized_args.get(1)...
remove(1)`) on the canonical args, then only after these mutations check
`args.is_empty()` and push `"start"`; also ensure `normalized_args` is
created/updated (clone) after you mutate `args` so both `args` and
`normalized_args` reflect the stripped flags before applying the default.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c1e25a78-e2f1-47e4-8501-62748c9a1eea
📒 Files selected for processing (9)
- benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md
- benchmark/run.mjs
- src/core/native.rs
- src/core/package.rs
- src/core/resolve/build.rs
- src/platform/node.rs
- tests/native_execution.rs
- tests/native_package_managers.rs
- tests/resolve_matrix.rs
✅ Files skipped from review due to trivial changes (1)
- benchmark/results/benchmark-2026-03-20T00-08-06-117Z.md
🚧 Files skipped from review as they are similar to previous changes (2)
- src/platform/node.rs
- src/core/native.rs
```js
  } else {
    const competitor = Object.keys(payload.summary.geometric_mean_relative_to_first)[0]
    const baselineLabel = payload.results[0] ? payload.results[0].baseline : 'baseline'
    lines.push(
      'case'.padEnd(34) +
        `${baselineLabel} (ms)`.padStart(16) +
        `${competitor} (ms)`.padStart(16) +
        'relative'.padStart(12),
    )
    lines.push('-'.repeat(78))
    for (const row of payload.results) {
      lines.push(
        row.case.padEnd(34) +
          row.participants[baselineLabel].mean.toFixed(2).padStart(16) +
          row.participants[competitor].mean.toFixed(2).padStart(16) +
          `${row.relative_to_first_mean[competitor].toFixed(2)}x`.padStart(12),
      )
    }
  }
```
Potential crash when all cases are skipped or no relative data exists.
If payload.results is empty or geometric_mean_relative_to_first has no entries, competitor will be undefined (line 603), causing row.participants[competitor].mean (line 616) to throw when iterating results.
Suggested defensive check
```diff
   } else {
     const competitor = Object.keys(payload.summary.geometric_mean_relative_to_first)[0]
+    if (!competitor || payload.results.length === 0) {
+      lines.push('No benchmark data to display.')
+      process.stdout.write(lines.join('\n'))
+      return
+    }
     const baselineLabel = payload.results[0] ? payload.results[0].baseline : 'baseline'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/run.mjs` around lines 602 - 620, The code assumes
payload.summary.geometric_mean_relative_to_first has keys and payload.results is
non-empty; if either is empty competitor becomes undefined and accessing
row.participants[competitor] will crash. Update the block that computes
competitor and iterates payload.results to first defensively check that
payload.results.length > 0 and
Object.keys(payload.summary.geometric_mean_relative_to_first || {}).length > 0
(or that competitor is defined) before building the header and looping; if the
checks fail, skip this section or emit a safe message/placeholder, and avoid
accessing row.participants[competitor] or row.relative_to_first_mean[competitor]
when competitor is undefined. Ensure you reference payload, payload.results,
payload.summary.geometric_mean_relative_to_first, competitor, baselineLabel,
row.participants, and row.relative_to_first_mean when making the fix.
```js
function trackTable(payload) {
  if (payload.track === 'runtime') {
    const lines = [
      '| Case | hni | bun | deno |',
      '| --- | ---: | ---: | ---: |',
    ]

    for (const row of payload.results) {
      lines.push(
        `| ${row.case} | ${formatMs(row.participants.hni.mean)} | ${formatMs(row.participants.bun.mean)} | ${formatMs(row.participants.deno.mean)} |`,
      )
    }

    return lines.join('\n')
  }

  const baseline = payload.results[0]?.baseline ?? 'baseline'
  const competitor = Object.keys(payload.summary.geometric_mean_relative_to_first)[0] ?? 'other'
  const lines = [
    `| Case | ${baseline} | ${competitor} | Relative |`,
    '| --- | ---: | ---: | ---: |',
  ]

  for (const row of payload.results) {
    lines.push(
      `| ${row.case} | ${formatMs(row.participants[baseline].mean)} | ${formatMs(
        row.participants[competitor].mean,
      )} | ${formatRatio(row.relative_to_first_mean[competitor])} |`,
    )
  }

  return lines.join('\n')
}
```
Same edge case: competitor could be undefined in trackTable.
Similar to printTrackSummary, if geometric_mean_relative_to_first is empty, line 683 sets competitor to undefined, and line 691-693 will fail when accessing row.participants[competitor].
Suggested fix
```diff
   const baseline = payload.results[0]?.baseline ?? 'baseline'
   const competitor = Object.keys(payload.summary.geometric_mean_relative_to_first)[0] ?? 'other'
+  if (payload.results.length === 0) {
+    return '*No benchmark results.*'
+  }
   const lines = [
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@benchmark/run.mjs` around lines 666 - 698, In trackTable, the computed
competitor can be undefined when
payload.summary.geometric_mean_relative_to_first is empty, causing
row.participants[competitor] to throw in the loop; update trackTable to guard
against a missing competitor by deriving a safe fallback (e.g., use baseline or
a string like 'other') and/or check row.participants[competitor] before
accessing it, and ensure the table line uses that fallback or skips the relative
column when the competitor is absent (update references inside trackTable where
competitor is read and where row.participants[competitor] and
row.relative_to_first_mean[competitor] are used).
```rust
pub fn resolve_declared_package_bin(cwd: &Path, bin_name: &str) -> HniResult<Option<PathBuf>> {
    for dir in cwd.ancestors() {
        let Some(manifest) = read_package_json(dir)? else {
            continue;
        };
        let Some(relative) = manifest.bin_command_path(bin_name) else {
            continue;
        };
        let candidate = dir.join(relative);
        if candidate.is_file() {
            return Ok(Some(candidate));
```
Constrain declared bin targets to the package root.
Unlike resolve_local_bin(), this path accepts raw package.json bin values. A manifest can currently point hello at /usr/bin/env or ../../tool, and native nlx hello will return that file outside the package tree.
🔒 Suggested fix
```diff
 pub fn resolve_declared_package_bin(cwd: &Path, bin_name: &str) -> HniResult<Option<PathBuf>> {
     for dir in cwd.ancestors() {
         let Some(manifest) = read_package_json(dir)? else {
             continue;
         };
         let Some(relative) = manifest.bin_command_path(bin_name) else {
             continue;
         };
-        let candidate = dir.join(relative);
+        let relative = Path::new(relative);
+        if relative.is_absolute()
+            || relative
+                .components()
+                .any(|component| !matches!(component, Component::Normal(_)))
+        {
+            continue;
+        }
+        let candidate = dir.join(relative);
         if candidate.is_file() {
             return Ok(Some(candidate));
         }
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/package.rs` around lines 75 - 85, The resolver
resolve_declared_package_bin currently accepts raw manifest bin paths that may
be absolute or escape the package root; update it to reject absolute bin values
and ensure the resolved candidate lives under the package directory before
returning it. Specifically, in resolve_declared_package_bin (and where
manifest.bin_command_path is used): skip/continue when the returned relative
path is absolute; after computing candidate = dir.join(relative), canonicalize
both candidate and dir (handling errors by skipping) and only consider candidate
if candidate_canon.starts_with(dir_canon); then perform the existing
file-existence check and return only when the candidate is inside the package
root.
```diff
     if ctx.config.run_agent == RunAgent::Node {
         let mut node_args = vec!["--run".to_string()];
         node_args.extend(args);

-        return Ok(ResolvedExecution {
-            program: "node".to_string(),
-            args: node_args,
-            cwd: ctx.cwd.clone(),
-            passthrough: true,
-        });
+        return Ok(ResolvedExecution::external(
+            "node",
+            node_args,
+            ctx.cwd.clone(),
+            true,
+        ));
```
🧩 Analysis chain
🌐 Web query:
Which Node.js versions support the node --run CLI flag, and is it available in the Node version used by this project's CI?
💡 Result:
The node --run CLI flag was introduced experimentally in Node.js 22.0.0 (April 2024) as a faster alternative to npm run for executing scripts defined in package.json. It became stable in September 2024 via PR #53763, likely included in Node.js 22.6.0 or later (exact version requires checking CHANGELOG_V22.md#22.6.0 or similar). It is not available in Node.js 20.x or earlier, as confirmed by absence in v20.20.1 and v19.0.0 CLI docs. Regarding the project's CI: No specific project context (e.g., package.json engines field or .github/workflows config) is provided in the query or accessible via search. Common CI setups (GitHub Actions, etc.) often use Node 20 LTS by default unless specified otherwise. Without project details, cannot confirm the exact version used—check your repository's CI configuration (e.g., uses: actions/setup-node@v4 with node-version).
Citations:
- 1: https://nodejs.org/docs/latest/api/cli.html
- 2: https://nodejs.org/dist/latest-v24.x/docs/api/cli.html
- 3: https://nodejs.org/api/cli.html
- 4: https://nodejs.org/en/blog/announcements/v22-release-announce
- 5: src: mark `node --run` as stable (nodejs/node#53763)
- 6: https://nodejs.org/dist/latest-v20.x/docs/api/cli.html
- 7: https://nodejs.org/download/release/v19.0.0/docs/api/cli.html
- 8: cli: implement `node --run <script-in-package-json>` (nodejs/node#52190)
🏁 Script executed:

```shell
# Find CI configuration files
find . -type f \( -name "*.yml" -o -name "*.yaml" \) | grep -E "(workflow|github|ci)" | head -20
```

Repository: happytoolin/hni
Length of output: 117

🏁 Script executed:

```shell
# Check for node version specifications
find . -type f \( -name "package.json" -o -name ".nvmrc" -o -name ".node-version" \) | head -10
```

Repository: happytoolin/hni
Length of output: 359

🏁 Script executed:

```shell
# Examine the specific code in build.rs
sed -n '120,150p' src/core/resolve/build.rs
```

Repository: happytoolin/hni
Length of output: 796

🏁 Script executed:

```shell
cat ./.github/workflows/ci.yml
```

Repository: happytoolin/hni
Length of output: 3006

🏁 Script executed:

```shell
cat package.json | grep -A 5 -B 5 "engines"
```

Repository: happytoolin/hni
Length of output: 337

🏁 Script executed:

```shell
# Search for any version checks or capability detection for Node
rg "node.*version|--run|RunAgent::Node" --type rust -C 3 -i
```

Repository: happytoolin/hni
Length of output: 3826
Add a Node.js version check before using `node --run`.

The `--run` flag is only available in Node.js 22.0.0+, but the project's `engines` field specifies `>=18`, allowing Node 18 and 20, where this flag doesn't exist. Without a version check, users on older Node versions will encounter "bad option: --run" errors when `runAgent=node` is configured. Either gate this code path on a Node version capability check, or update the minimum required Node version in package.json.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/resolve/build.rs` around lines 133 - 142, the branch that builds
node_args when ctx.config.run_agent == RunAgent::Node currently unconditionally
uses the "--run" flag which fails on Node <22; add a Node.js capability/version
check before adding "--run" (e.g., call a helper like detect_node_version or
spawn `node --version`) and only push "--run" into node_args if the detected
semantic version is >=22.0.0; if the version is older, build a compatible
invocation (remove "--run" and use the existing script/path invocation instead)
and return the ResolvedExecution::external as before. Alternatively, if you
prefer not to runtime-check, update package.json engines to require Node >=22
and guard the code with that policy, but do not leave the unconditional "--run"
in the RunAgent::Node branch.
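A version probe along the lines suggested above could be sketched as follows (the helper names are assumptions for illustration, not the project's actual API):

```rust
use std::process::Command;

// Parse the major version out of `node --version` output such as "v22.6.0".
fn parse_node_major(version_output: &str) -> Option<u32> {
    version_output
        .trim()
        .trim_start_matches('v')
        .split('.')
        .next()?
        .parse()
        .ok()
}

// Hypothetical helper (not the project's actual API): probe the `node`
// binary on PATH and report its major version, or None on any failure.
fn detect_node_major_version() -> Option<u32> {
    let output = Command::new("node").arg("--version").output().ok()?;
    parse_node_major(&String::from_utf8(output.stdout).ok()?)
}

fn main() {
    // Gate the `--run` code path: the flag only exists on Node >= 22.
    let supports_run = detect_node_major_version().map_or(false, |major| major >= 22);
    println!("node --run supported: {supports_run}");
}
```

Treating every failure (missing binary, unparsable output) as "no `--run` support" keeps the fallback path safe by default.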
🧹 Nitpick comments (2)
src/core/native.rs (1)

425-450: Consider logging symlink resolution failures.

The function silently returns the current path when `symlink_metadata` fails (line 431) or when the loop limit is reached. While this fallback is safe, it may hide filesystem issues. Consider adding debug-level logging for troubleshooting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/native.rs` around lines 425 - 450, in resolve_bin_source_path, add debug-level logs when fs::symlink_metadata fails and when the symlink-resolution loop reaches the iteration limit (currently 0..8) so callers can troubleshoot filesystem issues; specifically, log the path being resolved and the error from symlink_metadata before returning Ok(current), and log a debug message indicating the loop exhausted its max hops and the final path before returning the canonicalized/current PathBuf. Use the existing logging facility (e.g., log::debug or the crate's logger) and keep the behavior/fallbacks unchanged.

.claude/skills/desloppify/SKILL.md (1)

1-325: File appears unrelated to the PR objectives.

This skill specification for "desloppify" (a codebase health scanner) doesn't seem connected to the native script runner feature described in the PR objectives. Consider whether this belongs in a separate PR to maintain focused, reviewable changes.
Regarding static analysis hints: the fenced code block at line 171 is intentionally language-less since it contains a system prompt template, not executable code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/skills/desloppify/SKILL.md around lines 1 - 325, The "desloppify" skill specification (SKILL.md) is unrelated to the native script runner feature in this PR; remove it from this change set by unstaging/omitting the SKILL.md commit or move it to a separate branch/PR (or create a new PR containing only the desloppify changes), and update this PR description to state that SKILL.md was intentionally excluded or moved; if you must keep it here, add a short note in the PR explaining the unrelated inclusion and that the fenced block containing the system prompt template is intentionally language-less (it's a prompt, not executable code).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eb31ee6b-209e-472d-bd17-c93914e71c80
⛔ Files ignored due to path filters (1)
`scorecard.png` is excluded by `!**/*.png`
📒 Files selected for processing (9)
- .claude/skills/desloppify/SKILL.md
- .gitignore
- src/app/cli.rs
- src/app/completion.rs
- src/app/dispatch.rs
- src/app/doctor.rs
- src/app/error_report.rs
- src/core/native.rs
- src/lib.rs
✅ Files skipped from review due to trivial changes (2)
- src/lib.rs
- src/app/completion.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- .gitignore
- src/app/doctor.rs
…improved argument handling. Update tests to reflect changes in argument formatting and add new regression tests for package manager behavior.
…tions. Introduce GitHub Actions workflow for benchmarking, including artifact management and reporting. Update benchmark history and latest results to reflect recent runs.
Benchmark Results

Generated: …

Compare
- Overall: …

Native
- Overall: native vs delegated geometric mean …
- Top wins: …
- Cases where native was slower: …

Runtime
- Relative to …

Artifacts were uploaded from …
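The "geometric mean" the benchmark summary above refers to is the n-th root of the product of per-case speedup ratios; a minimal sketch of the aggregation (assuming ratios of delegated time over native time, which is how such reports are usually computed):

```rust
// Combine per-case speedup ratios via a geometric mean: the n-th root of
// their product, computed in log space for numerical stability.
fn geometric_mean(ratios: &[f64]) -> f64 {
    let log_sum: f64 = ratios.iter().map(|r| r.ln()).sum();
    (log_sum / ratios.len() as f64).exp()
}

fn main() {
    // Two cases, 2x and 8x faster, aggregate to a 4x geometric-mean speedup.
    println!("{}", geometric_mean(&[2.0, 8.0]));
}
```

Unlike an arithmetic mean, this aggregation is not dominated by a single large outlier win, which is why benchmark suites favor it.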
…e direct benchmarking instructions. Enhance benchmark suite with new 'direct' track for comparing package manager commands against 'hni --native'. Update benchmark history and latest results to reflect new metrics and remove outdated reports.
Summary

- `--native` flag to CLI to enable native execution mode

Implementation

- Native executor (`src/core/native.rs`): Direct shell execution of npm scripts with proper environment handling, lifecycle hooks (pre/post), and PATH management
- Benchmark suite (`benchmark/`): Performance tracking with historical comparisons, profiling support, and automated benchmark runs
- `native_mode` config option and `--native` CLI flag

Benefits

Test plan
Summary by CodeRabbit
New Features
Tests
Documentation
Chores