feat: mcp servers #7220
Conversation
- Replace `Vec::new()` + `push` pattern with the `vec![]` macro
- Use `next_back()` instead of `last()` on `DoubleEndedIterator`
- Replace single-pattern `match` with `if let`
- Use range `contains` instead of manual bounds checking
- Add `allow` annotations for the factory method naming pattern

- Add comprehensive gRPC method discovery with 37+ base node and 25+ wallet methods
- JSON Schema generation for all gRPC methods with validation support
- Method categorization (blockchain, mining, network, mempool, balance, transaction, etc.)
- Base node gRPC configuration parser with method filtering and wildcards
- Support for control operation classification and streaming detection
- Foundation for automated MCP tool generation from gRPC definitions

- Add 20+ base node tools covering all major gRPC methods
- Blockchain tools: headers, blocks, tip info, sync status, network state
- Mempool tools: stats, transactions, analysis, transaction state
- Network tools: status, peers, diagnostics, node identity
- Mining tools: templates, analysis, coinbase support
- Enhanced existing mining tool with better parameter validation
- Organized tools by functional category for maintainability
- All tools include proper error handling and detailed responses

- Add 20+ wallet tools covering all major wallet gRPC methods
- Balance tools: balance info, analysis, monitoring, unspent amounts
- Transaction tools: info, history, transfers, coin splits, analysis
- Address tools: generation, validation, format conversion, payment IDs
- Atomic swap tools: SHA swaps, HTLC claims, refunds, status
- Enhanced existing tools with better validation and detailed responses
- Organized tools by functional category for maintainability
- All tools include proper error handling and user-friendly outputs

- Created `GrpcExecutor` for unified gRPC method execution
- Added `NodeGrpcClient` and `WalletGrpcClient` traits for type-safe operations
- Implemented `NodeGrpcClientImpl` and `WalletGrpcClientImpl` concrete clients
- Updated `AutoDiscoveryRegistry` to support real gRPC execution
- Added real execution logic to `DynamicGrpcTool`
- Added dependencies for minotari gRPC clients
- Framework foundation complete, compilation errors to fix next

Remove unnecessary blank lines in `tool_macros.rs`

- Add `else` blocks to fix else-if-without-else warnings
- Use `drop()` instead of `let _` for cleaner resource handling
- Add `allow(clippy::too_many_lines)` for data definition functions
- Fix `clone_on_copy` warning by removing an unnecessary `clone()`
- Improve code consistency and clarity

Add lint configuration to allow certain style-focused lints that don't affect functionality:

- `too_many_lines`: Functions are complex by nature for MCP tools
- `if_not_else`: Sometimes clearer than the suggested alternative
- `cast_possible_truncation`: Controlled casting for protocol types
- `cast_possible_wrap`: Blockchain values handled appropriately
- `struct_excessive_bools`: CLI args naturally have many boolean flags
- `else_if_without_else`: Not always needed for validation logic
## Walkthrough
This change introduces the foundational infrastructure for the Minotari Model Context Protocol (MCP) servers, primarily by adding the `minotari_mcp_common` library crate and the `minotari_mcp_node` application. The update includes comprehensive implementations for configuration, error handling, security, tool/resource/prompt registries, gRPC integration, process management, health monitoring, CLI, and extensive documentation. Node-specific tools and resources are also implemented, enabling secure, local-only AI access to Tari node operations.
## Changes
| File(s) / Path(s) | Change Summary |
|----------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Cargo.toml | Added MCP application directories to workspace members and set up workspace dependency for `minotari_mcp_common`. |
| README.md | Updated documentation to include MCP servers, build/run instructions, AI integration, security features, and links to further docs. |
| applications/minotari_mcp_common/** (all new) | Added new library crate implementing MCP server infrastructure: configuration, error handling, security, tool/resource/prompt registries, gRPC integration, process management, health monitoring, macros, and documentation. |
| applications/minotari_mcp_node/** (all new) | Added new node MCP server application: CLI, configuration, gRPC config parser, main entrypoint, server, node-specific tools, resources, prompts, and documentation. |
## Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant Agent as User/AI Agent
    participant Server as MCP Server (Node)
    participant Registry as Tool/Resource/Prompt Registry
    participant Security as SecurityContext
    participant Client as BaseNode gRPC Client
    participant Node as Tari Base Node
    Agent->>Server: JSON-RPC request (e.g., call tool)
    Server->>Security: Check permissions, rate limit, audit log
    alt Permission granted
        Server->>Registry: Resolve tool/resource/prompt
        Registry->>Client: Prepare and execute gRPC call
        Client->>Node: gRPC request
        Node-->>Client: gRPC response
        Client-->>Registry: Response
        Registry-->>Server: Result (JSON)
        Server-->>Agent: JSON-RPC response
    else Permission denied / error
        Server-->>Agent: JSON-RPC error
    end
```

Estimated code review effort: 4 (60–120 minutes) – The changes add a large new library crate and a new application with many modules, complex features, and integrations requiring thorough review.

Possibly related PRs
Actionable comments posted: 45
♻️ Duplicate comments (1)
applications/minotari_mcp_common/Cargo.toml (1)
13-30: Same unpublished dependency versions as node crate

The `tokio 1.44`/`tonic 0.13.1` combo here will break the build for the common crate too. Fix as per previous comment so the workspace compiles on CI.
🧹 Nitpick comments (51)
applications/minotari_mcp_node/src/resources/transaction_info.rs (2)
13-15: Remove the `#[allow(dead_code)]` once the client is used

The attribute is only needed while the field is unused. It's easy to forget and leave the suppression in place once the `read()` method starts querying the node.

62-68: Add a basic connectivity check to surface node errors early

All other resources test the gRPC connection (e.g. `get_version`) before returning data. Doing the same here keeps behaviour consistent and gives callers immediate feedback when the base node is unavailable.

applications/minotari_mcp_node/src/resources/sync_progress.rs (1)
41-49: `mut` not required – remove to silence clippy

The cloned `client` isn't mutated after creation. Drop the `mut` to avoid an unnecessary `warn(clippy::needless_mut)` lint.

```diff
- let mut client = self.grpc_client.as_ref().clone();
+ let client = self.grpc_client.as_ref().clone();
```

applications/minotari_mcp_node/src/resources/mempool_stats.rs (2)
40-49: Eliminate needless `mut` on the cloned client

Same pattern as other resources – the variable is never mutated.

```diff
- let mut client = self.grpc_client.as_ref().clone();
+ let client = self.grpc_client.as_ref().clone();
```
51-56: Consider returning `null` fields instead of hard-coded zeros

Hard-coding `0` may mislead consumers into thinking the mempool is empty rather than "data unavailable". Using `null` (or omitting the keys) better signals the placeholder status.

applications/minotari_mcp_node/src/resources/chain_metadata.rs (2)
40-48: Remove unnecessary `mut` on `client`

The client isn't mutated.

```diff
- let mut client = self.grpc_client.as_ref().clone();
+ let client = self.grpc_client.as_ref().clone();
```
51-61: Avoid duplicating fields at two levels of the JSON

`best_block_height`, `accumulated_difficulty`, and `pruned_height` appear both at the root and inside `chain_metadata`. Unless consumers need both, keeping a single canonical location prevents divergence.

applications/minotari_mcp_node/src/resources/network_difficulty.rs (2)
40-49: Drop redundant `mut` on cloned client

Consistent with earlier comments.

```diff
- let mut client = self.grpc_client.as_ref().clone();
+ let client = self.grpc_client.as_ref().clone();
```
52-57: Return numeric difficulty as well as hex

Many consumers expect the raw `u128`/integer difficulty value for comparison. Include both representations to save clients the conversion.

```diff
 Ok(serde_json::json!({
     "accumulated_difficulty": hex::encode(&metadata.accumulated_difficulty),
+    "accumulated_difficulty_numeric": metadata.accumulated_difficulty,
     "height": metadata.best_block_height,
     "timestamp": metadata.timestamp,
     "message": "Network difficulty information"
 }))
```

applications/minotari_mcp_node/src/resources/block_info.rs (1)
58-64: Stub implementation missing TODO tag

Add an explicit `TODO:` inside `read()` so `cargo-todo` / IDE inspections can surface the needed follow-up.

applications/minotari_mcp_common/src/tool_macros.rs (1)
78-96: `required` array uses literal duplication – a DRY helper would avoid copy-paste

Several schema arms repeat the `"required": [...]` stanza. Consider a small helper macro or function to append required fields, reducing maintenance overhead if the schema spec changes.

README.md (1)
842-883: Minor formatting & spelling fixes in the new MCP section
- Bullet list after line 848 lacks an empty line above, breaking markdown rendering.
- "Serviecs" earlier in the doc should be "Services".
- Please verify the external link `modelcontextprotocol.io` – it 404s at the time of writing.

```diff
-Applications
-- **`minotari_mcp_wallet`**: ...
+Applications
+
+* **`minotari_mcp_wallet`** – Provides secure wallet operations for AI agents
+* **`minotari_mcp_node`** – Enables AI agents to query blockchain and node information
+* **`minotari_mcp_common`** – Shared infrastructure for building secure MCP servers
```

🧰 Tools
🪛 LanguageTool
[uncategorized] ~848-~848: Loose punctuation mark.
Context: ...pplications - `minotari_mcp_wallet`: Provides secure wallet operations for A... (UNLIKELY_OPENING_PUNCTUATION)
applications/minotari_mcp_common/README.md (2)
34-50: Specify a language for the fenced block

markdownlint flags this block (MD040, fenced-code-language) because it lacks a language identifier. Add e.g. `text` after the opening fence (`` ```text ``) to silence the warning and enable proper rendering.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
34-34: Fenced code blocks should have a language specified (MD040, fenced-code-language)
21-23: Hyphenate compound adjective

"per-client request rate limiting" → "per-client request-rate limiting", or simply "per-client rate limiting", to satisfy the grammar check and improve clarity.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~21-~21: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...ting**: Configurable per-client request rate limiting - Audit logging: Comprehensive logg...(EN_COMPOUND_ADJECTIVE_INTERNAL)
applications/minotari_mcp_node/src/tools/mod.rs (1)
89-96: Gate control-only tools behind `cfg` to avoid unused code

`submit_block` and `submit_transaction` (and the commented-out peer-ban tools) are compiled even when `control_enabled == false`, incurring extra compile time and binary size. Consider:

```rust
#[cfg(feature = "control")]
mod submit_block;
#[cfg(feature = "control")]
mod submit_transaction;
```

and enable the feature when the flag is true.
applications/minotari_mcp_common/src/stdio_transport.rs (1)
279-282: Leading-zero fix may corrupt string literals

The regex `\b0+(\d+)\b` runs on the full JSON text and can replace digits inside quoted strings (e.g., phone numbers).
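To illustrate the quote-aware alternative, here is a std-only sketch (not the crate's actual code) that skips redundant leading zeros only when outside string literals; it deliberately does not handle escaped-backslash sequences:

```rust
/// Strip redundant leading zeros from numeric tokens, but never touch
/// characters inside JSON string literals. Illustrative sketch only.
fn strip_leading_zeros(json: &str) -> String {
    let mut out = String::with_capacity(json.len());
    let mut in_string = false;
    let mut prev = '\0';
    let mut chars = json.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '"' && prev != '\\' {
            in_string = !in_string; // track whether we are inside a string literal
        }
        if !in_string
            && c == '0'
            && !prev.is_ascii_digit()
            && prev != '.'
            && matches!(chars.peek(), Some(d) if d.is_ascii_digit())
        {
            continue; // drop the redundant zero; keep `prev` so runs of zeros collapse
        }
        out.push(c);
        prev = c;
    }
    out
}

fn main() {
    // The bare number is normalised; the quoted "phone number" survives intact.
    assert_eq!(strip_leading_zeros(r#"{"height": 007}"#), r#"{"height": 7}"#);
    assert_eq!(strip_leading_zeros(r#"{"phone": "007"}"#), r#"{"phone": "007"}"#);
    assert_eq!(strip_leading_zeros("[10, 0.5]"), "[10, 0.5]");
    println!("ok");
}
```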
Limit the substitution to numeric tokens outside string contexts, or drop this heuristic entirely, to avoid data corruption.

applications/minotari_mcp_node/src/main.rs (1)
38-43: Avoid reopening the log file for every entry

`OpenOptions::open` is called on each log record, causing many syscalls and potential contention.
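A std-only sketch of that pattern (struct and file names here are illustrative, not the crate's actual types):

```rust
use std::fs::{File, OpenOptions};
use std::io::Write;
use std::path::Path;
use std::sync::Mutex;

// Hypothetical sketch: open the log file once and share the handle
// behind a Mutex, instead of reopening it on every record.
struct FileLogger {
    file: Mutex<File>,
}

impl FileLogger {
    fn new(path: &Path) -> std::io::Result<Self> {
        let file = OpenOptions::new().create(true).append(true).open(path)?;
        Ok(Self { file: Mutex::new(file) })
    }

    fn log(&self, line: &str) -> std::io::Result<()> {
        let mut f = self.file.lock().unwrap();
        writeln!(f, "{line}")
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("mcp_node_sketch.log");
    let logger = FileLogger::new(&path)?;
    logger.log("server started")?;
    logger.log("tool registered")?;
    assert!(std::fs::read_to_string(&path)?.contains("server started"));
    Ok(())
}
```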
Keep a single `std::fs::File` inside `FileLogger` and write via a `Mutex<File>` instead.

applications/minotari_mcp_node/src/config.rs (1)
90-94: Clarify “all methods allowed” semantics

Returning an empty `HashSet` to signify "no restriction" is easy to misinterpret. Recommend either:

- Return `None`/`Option<HashSet<…>>`, where `None` == unrestricted, or
- Add a doc comment & rename (e.g. `restricted_methods()`).

This eliminates accidental misuse in callers that expect “empty == deny all”.
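A hedged sketch of the `Option`-based variant (type and method names are illustrative, not the crate's API):

```rust
use std::collections::HashSet;

// Hypothetical sketch: `None` means "no restriction", while `Some(set)`
// restricts calls to the listed methods — no ambiguous empty set.
struct MethodFilter {
    allowed: Option<HashSet<String>>,
}

impl MethodFilter {
    fn is_allowed(&self, method: &str) -> bool {
        match &self.allowed {
            None => true,                      // unrestricted
            Some(set) => set.contains(method), // explicit allow-list
        }
    }
}

fn main() {
    let open = MethodFilter { allowed: None };
    assert!(open.is_allowed("get_blocks"));

    let restricted = MethodFilter {
        allowed: Some(["get_tip_info".to_string()].into_iter().collect()),
    };
    assert!(restricted.is_allowed("get_tip_info"));
    assert!(!restricted.is_allowed("submit_block"));
    println!("ok");
}
```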
applications/minotari_mcp_common/src/conversion_registry_factory.rs (1)
40-47: Explicit TODO for Miner/Proxy support
`ServerType::{Miner, Proxy}` silently fall back to the node registry. This can mask future gaps; callers may assume miner-specific converters exist.

Add a placeholder (or `unimplemented!()` while behind a feature flag) so missing converters are obvious during integration tests.

applications/minotari_mcp_node/README.md (1)
56-64: Tighten environment-variable list formatting

The bullet list currently renders as a single paragraph because of the missing blank line before it. Insert a blank line or use a markdown table to improve readability.

🧰 Tools
🪛 LanguageTool
[uncategorized] ~56-~56: Loose punctuation mark.
Context: ...Variables -MINOTARI_NODE_MCP_ENABLED: Enable MCP server - `MINOTARI_NODE_MCP_...(UNLIKELY_OPENING_PUNCTUATION)
applications/minotari_mcp_node/src/tools/ban_peer.rs (1)
84-88: Guard against fractional-hour precision loss

Casting `f64` → `u64` truncates fractions (e.g. 0.5 h → 1800 s is exact, but 0.5001 h → 1800.36 s silently becomes 1800 s). If sub-second accuracy isn't needed, document the intentional truncation; otherwise `round()` before the cast:

```rust
let duration_secs = (duration_hours * 3600.0).round() as u64;
```

applications/minotari_mcp_common/src/protobuf_reflector_simple.rs (1)
30-43: Unused `descriptor_set` parameter lowers clarity

`new` accepts `descriptor_set` but never uses it, which may confuse readers and future maintainers expecting parsing logic. Either consume the bytes (even if only to validate they're non-empty) or drop the parameter for now.

```diff
-pub fn new(_descriptor_set: &[u8]) -> McpResult<Self> {
+pub fn new() -> McpResult<Self> {
```

applications/minotari_mcp_common/src/resources.rs (1)
125-130: Resolved URI is built but silently dropped
`read_resource` computes `_resolved_uri` and then ignores it. Either use it (e.g., pass it to `read()`) or remove the dead assignment to avoid misleading future edits.

applications/minotari_mcp_common/src/method_implementations.rs (1)
70-88: Duplicate height-array checks – keep validation single-sourced
`convert` re-parses and type-checks `heights`; `validate` repeats the same loop, doubling maintenance work.

Refactor so `convert` calls `validate` first (cheap) and then performs the straightforward extraction, or extract the common logic into a helper.

applications/minotari_mcp_node/src/cli.rs (1)
158-212: Two divergent code-paths build node arguments
`generate_node_args` hand-assembles the CLI while `extract_node_args` delegates to `NodeArgumentBuilder`. Keeping both will drift over time.

Prefer a single source (e.g., wrap `NodeArgumentBuilder` inside `generate_node_args` and delete the manual version) to avoid mismatched behaviour.

applications/minotari_mcp_common/src/tools.rs (1)
58-63: Duplicate tool registration not guarded
`register` blindly inserts into the `HashMap`, silently overwriting an existing tool with the same name. Add a check to return an error (or at least log a warning) when the name already exists, to avoid accidental shadowing.
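A minimal sketch of such a guard using the `HashMap` entry API (the `String` payload stands in for the real tool type; names are illustrative):

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Hypothetical registry that rejects duplicate tool names instead of
// silently overwriting the previous registration.
struct ToolRegistry {
    tools: HashMap<String, String>,
}

impl ToolRegistry {
    fn register(&mut self, name: &str, tool: String) -> Result<(), String> {
        match self.tools.entry(name.to_string()) {
            Entry::Occupied(_) => Err(format!("tool '{name}' is already registered")),
            Entry::Vacant(slot) => {
                slot.insert(tool);
                Ok(())
            },
        }
    }
}

fn main() {
    let mut reg = ToolRegistry { tools: HashMap::new() };
    assert!(reg.register("get_tip_info", "tip tool".into()).is_ok());
    // Second registration under the same name is rejected, not shadowed.
    assert!(reg.register("get_tip_info", "impostor".into()).is_err());
    println!("ok");
}
```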
applications/minotari_mcp_common/src/transport.rs (1)
170-188: Avoid the extra allocation and shadowing when trimming input.
`let line = line.trim();` shadows the `String` with a `&str`, triggering a re-allocation on every loop. Just operate on the original buffer:

```rust
if line.trim().is_empty() {
    continue;
}
let slice = line.trim_end();
...
```

applications/minotari_mcp_common/src/grpc_error_mapper.rs (1)
130-142: Mapping `Code::Ok` to an error context is misleading

A gRPC `OK` status should never be treated as an error; carrying it through the error mapper is confusing and wastes allocations. Consider returning early or skipping this variant.

applications/minotari_mcp_common/src/process_manager.rs (1)
246-257: Double-kill may panic on some platforms

After the first `kill()` succeeds the child is already dead; a second kill can error with `ESRCH`. Remove the redundant call or check `try_wait()` first.

applications/minotari_mcp_common/src/prompts.rs (1)
134-171: Macro leaks `SimplePrompt` symbol outside caller scope

Declare it with `#[allow(non_camel_case_types)]` and wrap it in an inner module, or use `macro_rules!` hygiene (`struct $crate::__private::SimplePrompt`). This prevents name clashes when the macro is invoked multiple times in one file.

applications/minotari_mcp_common/src/input_sanitizer.rs (1)
124-126: Simplify the complex character filter for better readability

The character filter condition is complex and hard to understand. Consider extracting it to a helper function.

```diff
-        sanitized = sanitized
-            .chars()
-            .filter(|&c| c == '\n' || c == '\t' || c == '\r' || (c >= ' ' && c != '\u{007F}') || c >= '\u{0080}')
-            .collect();
+        sanitized = sanitized
-            .chars()
-            .filter(|&c| is_allowed_char(c))
-            .collect();
```

Add this helper function:

```rust
/// Check if a character should be allowed in sanitized strings
fn is_allowed_char(c: char) -> bool {
    match c {
        '\n' | '\t' | '\r' => true,      // Allow specific whitespace
        '\u{0020}'..='\u{007E}' => true, // Printable ASCII (excluding DEL)
        '\u{0080}'.. => true,            // Allow all non-ASCII Unicode
        _ => false,                      // Reject other control characters
    }
}
```

applications/minotari_mcp_common/src/parameter_converter.rs (2)
60-61: Remove unused type alias

The `DynConverter` type alias is defined but never used in this file.

```diff
-/// Dynamic converter function type for trait object storage
-pub type DynConverter = dyn Fn(Value) -> Result<Box<dyn prost::Message + Send>, ConversionError> + Send + Sync;
```
82-85: Consider performance implications of blocking async operations

Using `block_in_place` with `block_on` to run async code synchronously can impact performance in high-throughput scenarios as it blocks the thread. Consider either:

- Making the registry methods async
- Requiring converters to provide both sync and async versions
- Documenting the performance implications

Alternative approach:

```rust
// Store both sync and async converters
pub struct ConversionRegistry {
    sync_converters: HashMap<String, Box<dyn Fn(Value) -> Result<Box<dyn prost::Message + Send>, ConversionError> + Send + Sync>>,
    async_converters: HashMap<String, Box<dyn ParameterConverter>>,
}
```
221-234: Consider making the valid services list configurable

The hardcoded list of valid services limits extensibility. As new services are added to Tari, this code would need to be updated. Consider making this configurable:

```diff
 fn validate_service_name(&self, service: &str) -> Result<(), GrpcConfigError> {
-    const VALID_SERVICES: &[&str] = &["BaseNode", "Wallet"];
+    // Could be passed in via config or determined from registered services
+    let valid_services = self.get_valid_services();

-    if !VALID_SERVICES.contains(&service) {
+    if !valid_services.contains(&service) {
         return Err(GrpcConfigError::UnknownService(format!(
             "Unknown service: {}. Valid services: {}",
             service,
-            VALID_SERVICES.join(", ")
+            valid_services.join(", ")
         )));
     }
     Ok(())
 }
```

applications/minotari_mcp_common/src/response_converter.rs (2)
78-89: Converter registration silently overwrites previous entries
`register_converter` inserts every method into the same `HashMap`; if two converters register the same method, later registrations silently replace earlier ones. Consider either:

- checking/explaining why duplicates cannot happen, or
- logging or returning an error when a duplicate registration occurs, to avoid accidental shadowing.
90-107: No usage metrics for fallback hits

When `convert_response` falls back because a method-specific converter is missing, you lose visibility – nothing distinguishes a "handled by fallback" case from an intended generic call. Emit a `debug!`/`warn!` (or expose a counter) before returning to make these cases observable.

applications/minotari_mcp_common/src/auto_registry.rs (2)
365-386: Heuristic risk classification is fragile

String-contains checks such as `name_lower.contains("send")` can mis-classify e.g. `resend_*` methods. Regex anchors (e.g. `^send_|_send$`) or an explicit allow-list per category would avoid false positives.
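For example, an explicit allow-list combined with anchored prefix/suffix checks avoids the substring false positive without a regex dependency (method names here are illustrative):

```rust
// Hypothetical sketch: classify "send" methods via an allow-list plus
// anchored prefix/suffix checks, so "resend_transaction" is not caught.
fn is_send_method(name: &str) -> bool {
    const SEND_METHODS: &[&str] = &["send_transaction", "send_sha_atomic_swap"];
    SEND_METHODS.contains(&name)
        || name.starts_with("send_")
        || name.ends_with("_send")
}

fn main() {
    assert!(is_send_method("send_transaction"));
    assert!(is_send_method("send_to_address"));
    // A substring check would flag this; anchored checks do not.
    assert!(!is_send_method("resend_transaction"));
    println!("ok");
}
```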
460-485: Two read-locks taken independently – race to inconsistent view
`get_healthy_tools` acquires one lock on `tools_cache` and another on `tool_status`; a status change between the two can yield an enabled tool filtered out, or vice versa. Grabbing both under a single read lock (or cloning the status first) would avoid this minor TOCTOU window.

applications/minotari_mcp_common/src/cli_integration.rs (1)
311-334: Address validation too permissive

`validate_grpc_address` returns true for `http://:` (host missing) and for any string containing a single `:`.
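For illustration, even a std-only parse that requires a non-empty host and a numeric port would reject those cases (a sketch under that assumption, not the crate's implementation):

```rust
// Hypothetical stricter check: strip an optional scheme, then require
// a non-empty host and a port that parses as u16.
fn validate_grpc_address(addr: &str) -> bool {
    let rest = addr
        .strip_prefix("http://")
        .or_else(|| addr.strip_prefix("https://"))
        .unwrap_or(addr);
    match rest.rsplit_once(':') {
        Some((host, port)) => !host.is_empty() && port.parse::<u16>().is_ok(),
        None => false,
    }
}

fn main() {
    assert!(validate_grpc_address("http://127.0.0.1:18142"));
    assert!(!validate_grpc_address("http://:18142")); // missing host
    assert!(!validate_grpc_address("localhost"));     // missing port
    assert!(!validate_grpc_address("host:notaport")); // non-numeric port
    println!("ok");
}
```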
Tightening the check further with a regex or `url::Url` parsing would avoid producing invalid endpoints.

applications/minotari_mcp_common/src/grpc_discovery.rs (1)
681-693: Large number of identical `json!` allocations

All schema helpers create a fresh `serde_json::Value` on every call. The functions are used as constants – they never mutate – so every `GrpcMethodInfo` built incurs extra heap work and a parse step.

Using `once_cell::sync::Lazy` (or `const`, once `serde_json::json!` becomes const-eval) will:

- Avoid repeated allocations
- Cut compile-time code size (the macro expands to a lot of code)

Example:

```rust
use once_cell::sync::Lazy;

static EMPTY_SCHEMA: Lazy<Value> = Lazy::new(|| {
    serde_json::json!({
        "type": "object",
        "properties": {},
        "additionalProperties": false
    })
});

fn empty_schema() -> Value {
    EMPTY_SCHEMA.clone()
}
```

This pattern keeps the ergonomics while being far lighter at runtime.

applications/minotari_mcp_common/src/error.rs (1)
applications/minotari_mcp_common/src/error.rs (1)
112-125: JSON-RPC error codes are overly reused

Almost every variant maps to `-32603` ("Internal error"). Down-stream clients lose the ability to distinguish authz vs rate-limit vs server errors. Consider:

- `PermissionDenied`/`AuthenticationFailed` → `-32001`
- `RateLimitExceeded` → `-32010`
- `ServerError`/`TransportError` → keep `-32603`
- `ToolExecutionFailed` → `-32002`

Using the reserved server error range (`-32000 … -32099`) preserves JSON-RPC compliance while conveying intent.

applications/minotari_mcp_common/src/health_checker.rs (1)
282-344: Monitoring loop never exits on channel loss

If the gRPC channel disappears, the task logs a warning and breaks, yet the `JoinHandle` remains in `monitoring_tasks`. Remove the stale entry to prevent memory leaks and a false "still-monitored" state:

```rust
// after break;
{
    let mut tasks = monitoring_tasks.write().unwrap();
    tasks.remove(&service_name_clone);
}
```

applications/minotari_mcp_common/src/connection_manager.rs (2)
52-77: Unused configuration fields
`failure_window`, `keep_alive_interval`, and `keep_alive_timeout` are defined but never referenced. Dead fields complicate maintenance and mislead readers; either wire them into the pool/breaker logic or drop them until needed.
485-497: Connection selection lacks load-balancing

`first()` always returns the same channel, creating an unintended hotspot and defeating the pooling benefits. Round-robin or random selection is trivial:

```diff
-        if let Some(conn) = conn_list.first() {
+        if let Some(conn) = conn_list.choose(&mut rand::thread_rng()) {
```
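If pulling in `rand` is undesirable, an atomic index counter gives round-robin selection; a minimal sketch with a placeholder connection type (the real pool holds gRPC channels):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical round-robin selector over a pooled connection list.
struct Pool {
    conns: Vec<&'static str>, // stands in for the real channel type
    next: AtomicUsize,
}

impl Pool {
    fn pick(&self) -> Option<&&'static str> {
        if self.conns.is_empty() {
            return None;
        }
        // Wrapping counter modulo pool size spreads load across channels.
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.conns.len();
        self.conns.get(i)
    }
}

fn main() {
    let pool = Pool { conns: vec!["a", "b", "c"], next: AtomicUsize::new(0) };
    assert_eq!(pool.pick(), Some(&"a"));
    assert_eq!(pool.pick(), Some(&"b"));
    assert_eq!(pool.pick(), Some(&"c"));
    assert_eq!(pool.pick(), Some(&"a")); // wraps around
    println!("ok");
}
```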
Round-robin or random selection is trivial:- if let Some(conn) = conn_list.first() { + if let Some(conn) = conn_list.choose(&mut rand::thread_rng()) {(
randcrate or simple index counter).applications/minotari_mcp_common/src/grpc_executor.rs (1)
445-454: Health check might mask missing service entry
`get_all_health_status().get(service_name)` returns `None` when the service hasn't been registered. Treating this the same as "healthy" can hide configuration errors. Consider:

```rust
let health_opt = conn_manager.get_all_health_status().get(service_name);
match health_opt {
    Some(h) if !h.is_healthy() => { … warn or fail … },
    None => log::warn!("Health status for {} not found", service_name),
    _ => {},
}
```

applications/minotari_mcp_common/src/process_launcher.rs (3)
92-98: Conflicting timeout sources
`HealthMonitor::with_timeout(Duration::from_secs(10))` hard-codes a 10 s per-check timeout, yet `health_check_config.max_wait_time` below dictates how long we wait overall. This divergence makes tuning tricky and unexpected for operators. Consider deriving the per-check timeout from `health_check_config` (or exposing both explicitly).
341-345: Lock contention: use a read lock
`is_running()` acquires a `write()` lock although it only reads state, blocking concurrent readers unnecessarily. Replace it with `read()` for better concurrency.
270-279: Executable inference is brittle

Heuristics based on substring search (`"wallet"`/`"node"`) can misclassify (e.g. a custom arg containing "node"). Accept an explicit enum, or pass the executable-finder result from the caller, to avoid surprises.

applications/minotari_mcp_common/src/lib.rs (1)
40-84: Extremely wide re-export surface

Re-exporting almost every internal module at the crate root couples downstream code to the internal structure and makes future refactors breaking. Consider grouping into sub-modules (e.g. `prelude::*`) and exposing only stable, intentional APIs.

applications/minotari_mcp_node/src/prompts/mod.rs (1)
15-19: Consider promoting the registry to a lazy static to avoid repeated allocations.
`NodePromptRegistry::new` constructs the same `PromptRegistry` on every call. If this factory is invoked frequently, use `once_cell::sync::Lazy` (or similar) to cache the registry. This is a micro-optimisation, but it eliminates repeated string allocations and hashing.
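Since Rust 1.70 the standard library's `std::sync::OnceLock` can do this without the `once_cell` dependency; a sketch with illustrative prompt entries (not the crate's actual registry type):

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Hypothetical sketch: build the registry once and hand out a shared
// reference on every subsequent call.
fn prompt_registry() -> &'static HashMap<&'static str, &'static str> {
    static REGISTRY: OnceLock<HashMap<&'static str, &'static str>> = OnceLock::new();
    REGISTRY.get_or_init(|| {
        let mut m = HashMap::new();
        m.insert("analyze_chain", "Analyze current chain state"); // illustrative entries
        m.insert("diagnose_sync", "Diagnose sync issues");
        m
    })
}

fn main() {
    // Repeated calls return the same cached instance, not a fresh allocation.
    assert!(std::ptr::eq(prompt_registry(), prompt_registry()));
    assert_eq!(prompt_registry().len(), 2);
    println!("ok");
}
```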
applications/minotari_mcp_common/src/config.rs (1)
102-115: Minor: derive `Default` + `Eq`/`PartialEq` for testability

You already implement `Default` manually; deriving it (along with `Eq`/`PartialEq`) improves unit-test ergonomics and removes boilerplate. No code change is strictly required, but worth considering.
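A minimal sketch of the derived form (field names are illustrative, not the actual config struct):

```rust
// Hypothetical config struct: deriving the traits replaces a manual
// `impl Default` and makes equality assertions in tests one-liners.
#[derive(Debug, Default, PartialEq, Eq)]
struct SecurityConfig {
    rate_limit_per_minute: u32,
    audit_logging: bool,
}

fn main() {
    // Derived Default yields zero/false fields; PartialEq lets tests compare directly.
    assert_eq!(
        SecurityConfig::default(),
        SecurityConfig { rate_limit_per_minute: 0, audit_logging: false }
    );
    println!("ok");
}
```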
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (61)
- `Cargo.toml` (2 hunks)
- `README.md` (8 hunks)
- `applications/minotari_console_wallet/src/grpc/wallet_grpc_server.rs` (1 hunks)
- `applications/minotari_mcp_common/Cargo.toml` (1 hunks)
- `applications/minotari_mcp_common/README.md` (1 hunks)
- `applications/minotari_mcp_common/src/auto_registry.rs` (1 hunks)
- `applications/minotari_mcp_common/src/cli_integration.rs` (1 hunks)
- `applications/minotari_mcp_common/src/config.rs` (1 hunks)
- `applications/minotari_mcp_common/src/connection_manager.rs` (1 hunks)
- `applications/minotari_mcp_common/src/conversion_registry_factory.rs` (1 hunks)
- `applications/minotari_mcp_common/src/error.rs` (1 hunks)
- `applications/minotari_mcp_common/src/executable_finder.rs` (1 hunks)
- `applications/minotari_mcp_common/src/grpc_client_implementations.rs` (1 hunks)
- `applications/minotari_mcp_common/src/grpc_discovery.rs` (1 hunks)
- `applications/minotari_mcp_common/src/grpc_error_mapper.rs` (1 hunks)
- `applications/minotari_mcp_common/src/grpc_executor.rs` (1 hunks)
- `applications/minotari_mcp_common/src/health_checker.rs` (1 hunks)
- `applications/minotari_mcp_common/src/health_monitor.rs` (1 hunks)
- `applications/minotari_mcp_common/src/input_sanitizer.rs` (1 hunks)
- `applications/minotari_mcp_common/src/lib.rs` (1 hunks)
- `applications/minotari_mcp_common/src/method_implementations.rs` (1 hunks)
- `applications/minotari_mcp_common/src/parameter_converter.rs` (1 hunks)
- `applications/minotari_mcp_common/src/process_launcher.rs` (1 hunks)
- `applications/minotari_mcp_common/src/process_manager.rs` (1 hunks)
- `applications/minotari_mcp_common/src/prompts.rs` (1 hunks)
- `applications/minotari_mcp_common/src/protobuf_integration.rs` (1 hunks)
- `applications/minotari_mcp_common/src/protobuf_reflector.rs` (1 hunks)
- `applications/minotari_mcp_common/src/protobuf_reflector_simple.rs` (1 hunks)
- `applications/minotari_mcp_common/src/resources.rs` (1 hunks)
- `applications/minotari_mcp_common/src/response_converter.rs` (1 hunks)
- `applications/minotari_mcp_common/src/schema_generator.rs` (1 hunks)
- `applications/minotari_mcp_common/src/security.rs` (1 hunks)
- `applications/minotari_mcp_common/src/server.rs` (1 hunks)
- `applications/minotari_mcp_common/src/startup_diagnostics.rs` (1 hunks)
- `applications/minotari_mcp_common/src/stdio_transport.rs` (1 hunks)
- `applications/minotari_mcp_common/src/tool_macros.rs` (1 hunks)
- `applications/minotari_mcp_common/src/tool_metadata.rs` (1 hunks)
- `applications/minotari_mcp_common/src/tools.rs` (1 hunks)
- `applications/minotari_mcp_common/src/transport.rs` (1 hunks)
- `applications/minotari_mcp_node/Cargo.toml` (1 hunks)
- `applications/minotari_mcp_node/README.md` (1 hunks)
- `applications/minotari_mcp_node/src/cli.rs` (1 hunks)
- `applications/minotari_mcp_node/src/config.rs` (1 hunks)
- `applications/minotari_mcp_node/src/grpc_config_parser.rs` (1 hunks)
- `applications/minotari_mcp_node/src/main.rs` (1 hunks)
- `applications/minotari_mcp_node/src/prompts/mod.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/block_info.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/chain_metadata.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/mempool_stats.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/mod.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/network_difficulty.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/network_status.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/peer_list.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/sync_progress.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/transaction_info.rs` (1 hunks)
- `applications/minotari_mcp_node/src/server.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/ban_peer.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/blockchain_tools.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/mempool_tools.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/mining_tools.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/mod.rs` (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (9)
applications/minotari_mcp_node/src/resources/peer_list.rs (5)
applications/minotari_mcp_common/src/resources.rs (6)
- `new` (62-67), `uri` (15-15), `name` (18-18), `description` (21-21), `mime_type` (24-24), `read` (27-27)

applications/minotari_mcp_node/src/prompts/mod.rs (1)
- `new` (18-127)

applications/minotari_mcp_node/src/resources/network_status.rs (5)
- `uri` (24-26), `name` (28-30), `description` (32-34), `mime_type` (36-38), `read` (40-58)

applications/minotari_mcp_node/src/resources/sync_progress.rs (5)
- `uri` (24-26), `name` (28-30), `description` (32-34), `mime_type` (36-38), `read` (40-56)

applications/minotari_mcp_common/src/error.rs (1)
- `resource_access_failed` (81-83)
applications/minotari_mcp_node/src/resources/transaction_info.rs (9)
applications/minotari_mcp_common/src/resources.rs (8)
- `new` (62-67), `uri` (15-15), `name` (18-18), `description` (21-21), `mime_type` (24-24), `supports_templating` (30-32), `resolve_template` (35-41), `read` (27-27)

applications/minotari_mcp_node/src/resources/block_info.rs (8)
- `new` (18-20), `uri` (25-27), `name` (29-31), `description` (33-35), `mime_type` (37-39), `supports_templating` (41-43), `resolve_template` (45-56), `read` (58-64)

applications/minotari_mcp_node/src/resources/chain_metadata.rs (6)
- `new` (17-19), `uri` (24-26), `name` (28-30), `description` (32-34), `mime_type` (36-38), `read` (40-70)

applications/minotari_mcp_node/src/resources/network_difficulty.rs (6)
- `new` (17-19), `uri` (24-26), `name` (28-30), `description` (32-34), `mime_type` (36-38), `read` (40-65)

applications/minotari_mcp_node/src/resources/network_status.rs (5)
- `uri` (24-26), `name` (28-30), `description` (32-34), `mime_type` (36-38), `read` (40-58)

applications/minotari_mcp_node/src/resources/sync_progress.rs (5)
- `uri` (24-26), `name` (28-30), `description` (32-34), `mime_type` (36-38), `read` (40-56)

applications/minotari_mcp_node/src/resources/peer_list.rs (3)
- `uri` (24-26), `mime_type` (36-38), `read` (40-56)

applications/minotari_mcp_wallet/src/resources/transaction_history.rs (3)
- `uri` (23-25), `mime_type` (35-37), `read` (39-122)

applications/minotari_mcp_common/src/error.rs (1)
- `invalid_request` (61-63)
applications/minotari_mcp_node/src/resources/network_status.rs (7)
applications/minotari_mcp_node/src/prompts/mod.rs (1)
- `new` (18-127)

applications/minotari_mcp_node/src/resources/network_difficulty.rs (6)
- `new` (17-19), `uri` (24-26), `name` (28-30), `description` (32-34), `mime_type` (36-38), `read` (40-65)

applications/minotari_mcp_common/src/resources.rs (5)
- `uri` (15-15), `name` (18-18), `description` (21-21), `mime_type` (24-24), `read` (27-27)

applications/minotari_mcp_node/src/resources/mempool_stats.rs (3)
- `uri` (24-26), `mime_type` (36-38), `read` (40-57)

applications/minotari_mcp_node/src/resources/sync_progress.rs (3)
- `uri` (24-26), `mime_type` (36-38), `read` (40-56)

applications/minotari_mcp_node/src/resources/peer_list.rs (3)
- `uri` (24-26), `mime_type` (36-38), `read` (40-56)

applications/minotari_mcp_common/src/error.rs (1)
- `resource_access_failed` (81-83)
applications/minotari_mcp_node/src/resources/network_difficulty.rs (6)
applications/minotari_mcp_node/src/resources/chain_metadata.rs (6)
new(17-19)uri(24-26)name(28-30)description(32-34)mime_type(36-38)read(40-70)applications/minotari_mcp_node/src/tools/mod.rs (1)
new(54-98)applications/minotari_mcp_common/src/resources.rs (3)
uri(15-15)mime_type(24-24)read(27-27)applications/minotari_mcp_node/src/resources/network_status.rs (5)
uri(24-26)name(28-30)description(32-34)mime_type(36-38)read(40-58)applications/minotari_mcp_node/src/resources/peer_list.rs (3)
uri(24-26)mime_type(36-38)read(40-56)applications/minotari_mcp_common/src/error.rs (1)
resource_access_failed(81-83)
applications/minotari_mcp_node/src/resources/mod.rs (5)
applications/minotari_mcp_common/src/resources.rs (1)
new(62-67)applications/minotari_mcp_node/src/prompts/mod.rs (1)
new(18-127)applications/minotari_mcp_node/src/tools/mod.rs (1)
new(54-98)applications/minotari_mcp_node/src/server.rs (2)
new(48-50)new(85-178)applications/minotari_mcp_wallet/src/resources/mod.rs (1)
new(18-25)
applications/minotari_mcp_common/src/transport.rs (3)
applications/minotari_mcp_common/src/stdio_transport.rs (3)
serde_json(191-191)serde_json(199-199)handle_message(320-326)applications/minotari_mcp_common/src/server.rs (2)
handle_message(178-193)new(57-64)applications/minotari_mcp_common/src/error.rs (1)
invalid_request(61-63)
applications/minotari_mcp_common/src/parameter_converter.rs (2)
applications/minotari_mcp_common/src/error.rs (1)
tool_execution_failed(73-75)applications/minotari_mcp_common/src/method_implementations.rs (15)
method_name(18-20)method_name(38-40)method_name(96-98)method_name(114-116)method_name(153-155)convert(22-25)convert(42-68)convert(100-102)convert(118-120)convert(157-159)validate(27-30)validate(70-88)validate(104-106)validate(122-124)validate(161-163)
applications/minotari_mcp_common/src/process_launcher.rs (6)
applications/minotari_mcp_common/src/cli_integration.rs (7)
new(39-48)new(112-118)new(213-219)build(80-89)build(165-202)build(268-303)default(357-359)applications/minotari_mcp_common/src/executable_finder.rs (3)
new(24-30)find_wallet(295-297)find_node(290-292)applications/minotari_mcp_common/src/health_monitor.rs (1)
new(50-57)applications/minotari_mcp_node/src/server.rs (4)
new(48-50)new(85-178)stop(188-201)is_running(205-207)applications/minotari_mcp_wallet/src/server.rs (2)
new(38-71)stop(81-94)applications/minotari_mcp_common/src/error.rs (2)
server_error(85-87)config_error(77-79)
applications/minotari_mcp_common/src/error.rs (4)
applications/minotari_mcp_common/src/stdio_transport.rs (2)
serde_json(191-191)serde_json(199-199)applications/minotari_mcp_common/src/transport.rs (1)
serde_json(193-193)applications/minotari_mcp_common/src/parameter_converter.rs (1)
from(42-44)applications/minotari_mcp_common/src/grpc_client_implementations.rs (1)
from(24-26)
🪛 LanguageTool
applications/minotari_mcp_node/README.md
[uncategorized] ~56-~56: Loose punctuation mark.
Context: ...Variables - MINOTARI_NODE_MCP_ENABLED: Enable MCP server - `MINOTARI_NODE_MCP_...
(UNLIKELY_OPENING_PUNCTUATION)
[grammar] ~200-~200: You’ve repeated a verb. Did you mean to only write one of them?
Context: ...information. ## Use Cases ### Network Monitoring - Monitor node connectivity and peer health - Tra...
(REPEATED_VERBS)
applications/minotari_mcp_common/README.md
[uncategorized] ~21-~21: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...ting**: Configurable per-client request rate limiting - Audit logging: Comprehensive logg...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
[uncategorized] ~325-~325: Loose punctuation mark.
Context: ...ri_mcp_wallet](../minotari_mcp_wallet/): Wallet MCP server - [minotari_mcp_node...
(UNLIKELY_OPENING_PUNCTUATION)
README.md
[uncategorized] ~848-~848: Loose punctuation mark.
Context: ...pplications - minotari_mcp_wallet: Provides secure wallet operations for A...
(UNLIKELY_OPENING_PUNCTUATION)
🪛 markdownlint-cli2 (0.17.2)
applications/minotari_mcp_common/README.md
34-34: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: cargo check with stable
- GitHub Check: Cucumber tests / FFI
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (12)
applications/minotari_console_wallet/src/grpc/wallet_grpc_server.rs (1)
1233-1237: Details
❌ Incorrect review comment
Bit-shift risks remain after removing the `.clone()`

Good call dropping the superfluous clone, but note that `1 << (tx.status as u32)` will panic in debug builds if `tx.status` ≥ 32 (and silently masks the shift amount in release). Unless the `TransactionStatus` enum is guaranteed to stay < 32, wrap the shift with `checked_shl` or upgrade to a wider integer to prevent accidental crashes.

```diff
-.filter(|tx| req.status_bitflag == 0 || (req.status_bitflag & (1 << (tx.status as u32))) != 0)
+.filter(|tx| {
+    req.status_bitflag == 0
+        || tx.status
+            .try_into()
+            .ok()
+            .and_then(|v: u32| 1u128.checked_shl(v))
+            .map(|mask| (req.status_bitflag as u128 & mask) != 0)
+            .unwrap_or(false)
+})
```
🏁 Script executed:

```bash
#!/bin/bash
# Locate TransactionStatus enum definition to inspect variant values
rg -n "enum TransactionStatus" -A 20
```

Length of output: 5656
No panic risk: `tx.status` values are 0–8, so shifts stay within bounds

After inspecting `TransactionStatus` (variants 0 through 8) and noting that `req.status_bitflag` is a `u64`, the expression `req.status_bitflag == 0 || (req.status_bitflag & (1 << (tx.status as u32))) != 0` always shifts by less than 64 bits and will never panic. The existing code is safe as-is; no `checked_shl` wrap or wider integer is required.

Likely an incorrect or invalid review comment.
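As an aside, the defensive pattern the bot proposed is easy to demonstrate in isolation. `status_matches` below is a hypothetical standalone helper, not code from this PR; it shows how `checked_shl` turns an out-of-range shift into a clean "no match" instead of a debug-build panic:

```rust
fn status_matches(bitflag: u64, status: u32) -> bool {
    // Bit 0 of `bitflag` selects status 0, bit 1 selects status 1, and so on.
    // `checked_shl` returns None instead of panicking when `status` >= 64.
    bitflag == 0
        || 1u64
            .checked_shl(status)
            .map(|mask| bitflag & mask != 0)
            .unwrap_or(false)
}

fn main() {
    assert!(status_matches(0, 42)); // empty filter matches everything
    assert!(status_matches(0b100, 2)); // bit 2 selects status 2
    assert!(!status_matches(0b100, 3));
    assert!(!status_matches(0b100, 64)); // out-of-range shift: no panic, no match
    println!("ok");
}
```

With the real enum capped at variant 8 this guard is not strictly needed, which is exactly the resolution above.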
Cargo.toml (1)
41-44: Workspace members registered correctly

The new MCP crates are wired into the workspace; no issues spotted here.
applications/minotari_mcp_common/src/tool_macros.rs (1)
187-197: Dead-code notice contradicts intent – remove or gate behind feature

The comment says "macros are no longer used in production code", yet the file is shipped and exported. If truly obsolete, delete the file; otherwise drop the disclaimer to avoid confusing future maintainers.

applications/minotari_mcp_node/src/resources/mod.rs (1)
43-46: Templated resources may need pattern registration
`BlockInfoResource` and `TransactionInfoResource` support templating, but they're registered the same way as static resources. Ensure `ResourceRegistry::register()` handles URI patterns; if not, they will shadow each other and template resolution will fail.

applications/minotari_mcp_common/src/grpc_client_implementations.rs (1)
49-53: ⚠️ Potential issue

Replace panic with error return in deprecated method.
Panicking in a deprecated method is too harsh and could crash the application unexpectedly. Consider returning an error instead.
```diff
 #[deprecated(note = "Use new() with real client")]
 pub fn new_placeholder() -> Self {
-    // This will panic if called - placeholder is deprecated
-    panic!("Placeholder client deprecated - use real client")
+    // Return a dummy client that will fail on use
+    Self {
+        client: BaseNodeGrpcClient::new(Channel::from_static("http://[::]:1").connect_lazy()),
+        conversion_registry: Arc::new(ConversionRegistry::new()),
+    }
 }
```

Likely an incorrect or invalid review comment.
applications/minotari_mcp_common/src/input_sanitizer.rs (1)
205-207: Consider additional validation for `/tmp/` paths.

While allowing `/tmp/` paths is common for temporary files, it could still pose security risks if not properly controlled. Consider adding additional validation such as:

- Ensuring the path doesn't escape `/tmp/` via symlinks
- Validating the path doesn't contain special characters
- Adding a configurable whitelist of allowed subdirectories under `/tmp/`

applications/minotari_mcp_common/src/health_monitor.rs (1)
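Relatedly, the symlink-escape check suggested for `/tmp/` paths above can be sketched with `std::fs::canonicalize`. `is_safe_tmp_path` is a hypothetical helper, not the PR's `input_sanitizer` code, and assumes the path already exists (callers validating not-yet-created files would canonicalize the parent instead):

```rust
use std::path::Path;

/// Resolve symlinks and `..` components, then verify the result still lives
/// under /tmp/. Note: on macOS /tmp canonicalizes to /private/tmp, so a real
/// implementation would canonicalize the allowed root too.
fn is_safe_tmp_path(p: &Path) -> bool {
    match std::fs::canonicalize(p) {
        Ok(real) => real.starts_with("/tmp"),
        Err(_) => false, // nonexistent or unreadable: reject
    }
}

fn main() {
    // A path that merely *starts* with "/tmp" but escapes via `..` is rejected,
    // because canonicalize resolves it to its real location first.
    println!(
        "/tmp/../etc/passwd safe? {}",
        is_safe_tmp_path(Path::new("/tmp/../etc/passwd"))
    );
}
```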
149-151: Channel creation approach is acceptable for health checks.

While creating and immediately dropping a channel is not the most efficient approach, it's acceptable for health monitoring where the frequency is typically low. The comment acknowledges this limitation.
applications/minotari_mcp_node/src/grpc_config_parser.rs (1)
1-472: Well-structured configuration parser with excellent test coverage.

The implementation is comprehensive, handling various method specification formats and wildcards, and providing useful presets. The test coverage is thorough.
applications/minotari_mcp_node/src/tools/blockchain_tools.rs (1)
583-593: Likely wrong protobuf request type
`GetTokensInCirculationTool` builds a `GetBlocksRequest` but calls `get_tokens_in_circulation`. Unless the gRPC service really expects `GetBlocksRequest`, this will not compile or will fail at runtime. Double-check the generated proto; there usually is a dedicated `GetTokensInCirculationRequest`.

applications/minotari_mcp_node/src/server.rs (1)
215-223: gRPC endpoint lacks explicit connect timeout & scheme validation
`Endpoint::from_shared(endpoint_url)` will happily accept a URL without a scheme (`127.0.0.1:18142`) and then fail later at dial time. Also, `timeout()` sets per-RPC deadlines, not the TCP/TLS handshake timeout.

```diff
 let endpoint = Endpoint::from_shared(endpoint_url)
     .map_err(|e| McpError::config_error(format!("Invalid gRPC endpoint: {}", e)))?
-    .timeout(Duration::from_secs(config.node_grpc.timeout_secs));
+    .timeout(Duration::from_secs(config.node_grpc.timeout_secs)) // per-RPC
+    .connect_timeout(Duration::from_secs(config.node_grpc.timeout_secs)) // handshake
+    .tcp_nodelay(true);
```

You may also want to prepend `http://` if the user omitted the scheme, and fail fast otherwise.

applications/minotari_mcp_common/src/process_launcher.rs (1)
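Returning to the endpoint-scheme point above: the suggested normalization can be sketched with a small string check. `normalize_endpoint` is a hypothetical helper, not the PR's `server.rs` code:

```rust
/// Prepend "http://" when the user-supplied gRPC endpoint has no scheme.
/// A real implementation might instead reject schemeless URLs outright.
fn normalize_endpoint(url: &str) -> String {
    if url.contains("://") {
        url.to_string()
    } else {
        format!("http://{}", url)
    }
}

fn main() {
    assert_eq!(normalize_endpoint("127.0.0.1:18142"), "http://127.0.0.1:18142");
    assert_eq!(normalize_endpoint("https://node:18142"), "https://node:18142");
    println!("ok");
}
```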
437-448: `convert_to_multiaddr` lacks IPv6 / unix-socket support

The helper assumes IPv4 `ip:port` and blindly splits on the first `:`, so `[::1]:18142` or `/ip6/` multiaddrs break. Consider using `multiaddr` crate parsing, or at least detect IPv6 brackets before splitting.

applications/minotari_mcp_node/src/prompts/mod.rs (1)
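The IPv6-aware split suggested for `convert_to_multiaddr` above can be sketched as follows. `split_host_port` is a hypothetical helper, not the PR's code; the key change is splitting on the *last* colon so the colons inside an IPv6 literal survive:

```rust
/// Split "host:port" while tolerating bracketed IPv6 literals like "[::1]:18142".
fn split_host_port(addr: &str) -> Option<(&str, u16)> {
    // rsplit_once takes the *last* ':' so IPv6 colons inside brackets survive.
    let (host, port) = addr.rsplit_once(':')?;
    let port: u16 = port.parse().ok()?;
    // Strip the brackets from IPv6 literals.
    let host = host
        .strip_prefix('[')
        .and_then(|h| h.strip_suffix(']'))
        .unwrap_or(host);
    Some((host, port))
}

fn main() {
    assert_eq!(split_host_port("127.0.0.1:18142"), Some(("127.0.0.1", 18142)));
    assert_eq!(split_host_port("[::1]:18142"), Some(("::1", 18142)));
    assert_eq!(split_host_port("no-port"), None);
    println!("ok");
}
```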
21-48: Verify that every referenced resource/tool is actually registered.

The prompts rely on resources such as `chain_metadata`, `network_status`, `sync_progress`, `mempool_stats`, `peer_list`, and `network_difficulty`. A typo or a missing registry entry will surface only at runtime and will cripple the assistant's response. Please double-check the resource and tool registries for matching identifiers (case-sensitive). If you're willing, an automated compile-time check could be added to the macro to avoid silent drift.

Also applies to: 50-74, 75-98, 100-124
```toml
tokio = { version = "1.44", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
thiserror = "1.0"
clap = { version = "3.2", features = ["derive", "env"] }
log = "0.4"
async-trait = "0.1"
tonic = { version = "0.13.1", default-features = false, features = ["codegen", "transport", "tls-ring"] }
uuid = { version = "1.0", features = ["v4"] }
```
Crate versions point to non-existent releases – build will fail
tokio = "1.44" and tonic = "0.13.1" are not on crates.io at time of writing (latest ≈ 1.37 and 0.11).
Cargo will error out for the whole workspace.
```diff
-tokio = { version = "1.44", features = ["full"] }
-tonic = { version = "0.13.1", default-features = false, features = ["codegen", "transport", "tls-ring"] }
+tokio = { version = "1.37", features = ["full"] } # align with workspace baseline
+tonic = { version = "0.11", default-features = false, features = ["codegen", "transport", "tls-ring"] }
```

Please align these with published versions (and update minotari_mcp_common accordingly) or switch to `workspace = true` if you already have pinned versions at the root.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-tokio = { version = "1.44", features = ["full"] }
+tokio = { version = "1.37", features = ["full"] } # align with workspace baseline
 serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 anyhow = "1.0"
 thiserror = "1.0"
 clap = { version = "3.2", features = ["derive", "env"] }
 log = "0.4"
 async-trait = "0.1"
-tonic = { version = "0.13.1", default-features = false, features = ["codegen", "transport", "tls-ring"] }
+tonic = { version = "0.11", default-features = false, features = ["codegen", "transport", "tls-ring"] }
 uuid = { version = "1.0", features = ["v4"] }
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/Cargo.toml between lines 31 and 40, the
versions specified for the tokio and tonic crates (1.44 and 0.13.1) do not exist
on crates.io, causing build failures. To fix this, update these dependencies to
the latest published versions available on crates.io (e.g., tokio around 1.37
and tonic around 0.11) or configure the workspace to use a shared version by
setting workspace = true and managing versions centrally in the root Cargo.toml.
Also, ensure minotari_mcp_common is updated to match these versions if
applicable.
```rust
// TODO: Replace with actual peer list calls when available
Ok(serde_json::json!({
    "peers": [],
    "message": "Peer list information - placeholder implementation"
}))
```
🛠️ Refactor suggestion
Resource still returns a placeholder – hook up real peer data
`/peer_list` currently always answers with an empty array. `minotari_app_grpc::tari_rpc::wallet_server::Wallet::list_connected_peers` (or the base-node equivalent) is available; call it and transform the response so prompts relying on this resource get meaningful data.
Also consider caching the result for a few seconds to avoid hammering the node.
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/resources/peer_list.rs around lines 51 to
55, the peer list endpoint currently returns a static empty array as a
placeholder. Replace this placeholder by calling
minotari_app_grpc::tari_rpc::wallet_server::Wallet::list_connected_peers (or the
base-node equivalent) to fetch the actual connected peers. Transform the
response into the expected JSON format and implement caching for a few seconds
to reduce load on the node.
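The few-seconds caching suggested above can be sketched as a small TTL wrapper. The names here are illustrative, not from the PR; a real version around an async gRPC call would use an async-aware lock and an async `fetch`:

```rust
use std::time::{Duration, Instant};

/// Minimal time-based cache for an expensive lookup (e.g. a peer-list RPC).
struct TimedCache<T> {
    value: Option<(Instant, T)>,
    ttl: Duration,
}

impl<T: Clone> TimedCache<T> {
    fn new(ttl: Duration) -> Self {
        Self { value: None, ttl }
    }

    /// Return the cached value if still fresh, otherwise recompute via `fetch`.
    fn get_or_fetch(&mut self, fetch: impl FnOnce() -> T) -> T {
        if let Some((at, v)) = &self.value {
            if at.elapsed() < self.ttl {
                return v.clone();
            }
        }
        let v = fetch();
        self.value = Some((Instant::now(), v.clone()));
        v
    }
}

fn main() {
    let mut cache = TimedCache::new(Duration::from_secs(5));
    let a = cache.get_or_fetch(|| vec!["peer-1".to_string()]);
    // A second call within the TTL returns the cached value without refetching.
    let b = cache.get_or_fetch(|| panic!("should not refetch"));
    assert_eq!(a, b);
    println!("ok");
}
```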
```rust
fn resolve_template(&self, params: &HashMap<String, String>) -> McpResult<String> {
    let hash = params
        .get("hash")
        .ok_or_else(|| McpError::invalid_request("Missing hash parameter"))?;

    // Validate hash is hex
    if !hash.chars().all(|c| c.is_ascii_hexdigit()) {
        return Err(McpError::invalid_request("Hash must be hexadecimal"));
    }

    if hash.is_empty() {
        return Err(McpError::invalid_request("Hash cannot be empty"));
    }
```
🛠️ Refactor suggestion
Tighten hash validation and avoid false-positives
- `hash.chars().all(|c| c.is_ascii_hexdigit())` accepts the string `""` because the predicate is vacuously true. Although the subsequent `is_empty()` check catches this, flipping the order makes the intent clearer.
- The check also accepts the `"0x"` prefix and odd-length hashes. If the node expects a 32-byte (64-char) hex digest without the prefix, enforcing length early prevents malformed requests reaching lower layers.
```diff
-    // Validate hash is hex
-    if !hash.chars().all(|c| c.is_ascii_hexdigit()) {
-        return Err(McpError::invalid_request("Hash must be hexadecimal"));
-    }
-
-    if hash.is_empty() {
-        return Err(McpError::invalid_request("Hash cannot be empty"));
-    }
+    if hash.is_empty() {
+        return Err(McpError::invalid_request("Hash cannot be empty"));
+    }
+
+    // Require exactly 64 hex chars (32-byte digest) – adjust if the spec differs
+    if hash.len() != 64 || !hash.chars().all(|c| c.is_ascii_hexdigit()) {
+        return Err(McpError::invalid_request("Hash must be a 64-character hexadecimal string"));
+    }
```

🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/resources/transaction_info.rs around lines
45 to 57, the current validation checks the hash for hexadecimal characters
before checking if it is empty, which can cause confusion since an empty string
passes the hex check vacuously. Reorder the checks to first verify that the hash
is not empty. Additionally, add validation to reject hashes with a "0x" prefix
and enforce that the hash length is exactly 64 characters to ensure it is a
valid 32-byte hex digest. This will prevent malformed hashes from proceeding
further.
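Extracted into a standalone (hypothetical) helper, the tightened check is a one-liner; the length test also subsumes the emptiness test:

```rust
/// A valid block/tx hash here is exactly 64 hex characters, no "0x" prefix.
/// Hypothetical helper mirroring the suggested validation, not the PR's code.
fn is_valid_hash(hash: &str) -> bool {
    hash.len() == 64 && hash.chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    assert!(!is_valid_hash("")); // empty: rejected by the length check
    assert!(!is_valid_hash("0xdeadbeef")); // prefix and wrong length
    assert!(is_valid_hash(&"ab".repeat(32))); // 64 hex chars: ok
    assert!(!is_valid_hash(&"zz".repeat(32))); // right length, not hex
    println!("ok");
}
```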
```rust
Ok(serde_json::json!({
    "status": "online",
    "node_version": version.value,
    "network": "unknown",
    "message": "Network status information - placeholder implementation"
}))
```
🛠️ Refactor suggestion
Placeholder returns hard-coded status → risk of misleading callers
"status": "online" and "network": "unknown" are always returned, even when the node is out of sync or on a testnet.
Consider:
- Translate the connectivity outcome into `online`/`offline`.
- Populate `network` from `get_version().network` once available, or omit the field until then.
Failing to do so will give an AI agent a false sense of liveness.
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/resources/network_status.rs around lines
52 to 57, the JSON response always returns a hard-coded "status": "online" and
"network": "unknown", which can mislead callers about the actual node state.
Update the code to determine the real connectivity status and set "status" to
"online" or "offline" accordingly. Also, retrieve the actual network value from
get_version().network if available, or omit the "network" field until it can be
accurately populated.
```rust
    "timestamp": 0, // Timestamp not available in NewBlockHeaderTemplate
},
"body": {
    "inputs": template.body.as_ref().map(|b| b.inputs.len()).unwrap_or(0),
```
🛠️ Refactor suggestion
Avoid placeholder timestamps
Hard-coding "timestamp": 0 leaks a meaningless value into the API and may mislead downstream logic that expects a Unix epoch. Either:
- Expose the actual header timestamp when `NewBlockHeaderTemplate` is extended, or
- Omit the field (`serde_json::Value::Null`) until the data is available.
Returning 0 silently is almost always the wrong default for a timestamp.
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/tools/mining_tools.rs around lines 107 to
110, the code sets the "timestamp" field to a hardcoded 0, which is misleading
as it does not represent a valid timestamp. To fix this, either update the
NewBlockHeaderTemplate to include the actual timestamp and use it here, or if
the timestamp is not yet available, set the field to serde_json::Value::Null to
omit it from the output instead of using 0.
```rust
let algo = params.get("algo").and_then(|v| v.as_u64()).ok_or_else(|| {
    McpError::invalid_request("algo is required (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)".to_string())
})?;

let max_weight = params.get("max_weight").and_then(|v| v.as_u64()).unwrap_or(19500); // Default max weight for blocks

let pow_algo_val = match algo {
    0 => PowAlgos::Randomxm.into(), // RANDOMXM
    1 => PowAlgos::Sha3x.into(),    // SHA3X
    2 => PowAlgos::Randomxt.into(), // RANDOMXT
    _ => {
        return Err(McpError::invalid_request(
            "Invalid algo: must be 0 (RANDOMXM), 1 (SHA3X), or 2 (RANDOMXT)".to_string(),
        ))
```
Inconsistent algorithm mapping string → enum → error text
The mapping table (0 => Randomxm, 1 => Sha3x, 2 => Randomxt) is correct, but the error message at line 65 describes a different order (0=SHA3X, 1=RANDOMXM …). Pick one convention and use it in all:
```diff
-McpError::invalid_request("algo is required (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)".to_string())
+McpError::invalid_request("algo is required (0=RANDOMXM, 1=SHA3X, 2=RANDOMXT)".to_string())
```

Leaving it as-is will confuse API consumers and testers.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 let algo = params.get("algo").and_then(|v| v.as_u64()).ok_or_else(|| {
-    McpError::invalid_request("algo is required (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)".to_string())
+    McpError::invalid_request("algo is required (0=RANDOMXM, 1=SHA3X, 2=RANDOMXT)".to_string())
 })?;

 let max_weight = params.get("max_weight").and_then(|v| v.as_u64()).unwrap_or(19500); // Default max weight for blocks

 let pow_algo_val = match algo {
     0 => PowAlgos::Randomxm.into(), // RANDOMXM
     1 => PowAlgos::Sha3x.into(),    // SHA3X
     2 => PowAlgos::Randomxt.into(), // RANDOMXT
     _ => {
         return Err(McpError::invalid_request(
             "Invalid algo: must be 0 (RANDOMXM), 1 (SHA3X), or 2 (RANDOMXT)".to_string(),
         ))
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/tools/mining_tools.rs around lines 64 to
77, the numeric to algorithm enum mapping and the error message describing the
valid values are inconsistent. Update the error message at line 65 to match the
actual mapping used in the code: 0 should correspond to RANDOMXM, 1 to SHA3X,
and 2 to RANDOMXT. Ensure the error message clearly reflects this order to avoid
confusion for API consumers.
```rust
use std::{
    collections::HashMap,
    sync::{Arc, RwLock},
    time::{Duration, Instant},
};
```
🛠️ Refactor suggestion
Blocking locks inside async code
The module relies on std::sync::RwLock, but all access happens inside async tasks.
std locks block the entire Tokio worker thread on contention; use tokio::sync::RwLock (or parking_lot) to avoid priority-inversion and runtime warnings:
```diff
-use std::sync::{Arc, RwLock};
+use tokio::sync::RwLock;
```

Refactor the handful of `write().unwrap()` / `read().unwrap()` calls accordingly.

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/health_checker.rs around lines 6 to 10,
replace the use of std::sync::RwLock with tokio::sync::RwLock to avoid blocking
the Tokio async runtime. Update all instances of write().unwrap() and
read().unwrap() to use the async lock acquisition methods provided by
tokio::sync::RwLock, such as write().await and read().await, refactoring the
code to be async where necessary.
```rust
fn basic_parameter_validation(&self, params: &Value) -> McpResult<()> {
    // Basic validation: ensure it's an object
    if !params.is_object() {
        return Err(McpError::invalid_request("Parameters must be an object"));
    }
```
Compilation error – invalid_request expects a String
invalid_request takes a String, but a &str is passed:
```diff
-return Err(McpError::invalid_request("Parameters must be an object"));
+return Err(McpError::invalid_request("Parameters must be an object".to_string()));
```

This breaks the build.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/protobuf_integration.rs around lines 200
to 204, the call to McpError::invalid_request currently passes a &str, but the
function expects a String. Fix this by converting the string literal to a String
using .to_string() or String::from() before passing it to invalid_request.
```rust
let output_buffer = self.output_buffer.clone();
tokio::spawn(async move {
    let reader = BufReader::new(stdout);
    let mut lines = reader.lines();
    while let Ok(Some(line)) = lines.next_line().await {
        log::debug!("STDOUT: {}", line);
        output_buffer.write().await.push(format!("STDOUT: {}", line));
    }
    log::debug!("STDOUT capture ended");
```
Unbounded in-memory log capture
Every stdout/stderr line is pushed onto `output_buffer`, which is never trimmed. A chatty process can exhaust memory or slow the node. Keep only the last n KB/lines, or flush to disk.
Also applies to: 185-191
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_launcher.rs around lines 171 to
179 and 185 to 191, the output_buffer accumulates all stdout/stderr lines
without limit, risking excessive memory use. Modify the code to limit the size
of output_buffer by keeping only the last n lines or n KB of logs. Implement
logic to trim or discard older entries when the buffer exceeds this limit to
prevent unbounded memory growth.
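The bounded buffer this review asks for can be sketched with a `VecDeque`: push to the back, evict from the front once the cap is hit. The names are illustrative, not the PR's code:

```rust
use std::collections::VecDeque;

/// Bounded log capture: keeps only the newest `cap` lines.
struct BoundedLog {
    lines: VecDeque<String>,
    cap: usize,
}

impl BoundedLog {
    fn new(cap: usize) -> Self {
        Self { lines: VecDeque::with_capacity(cap), cap }
    }

    fn push(&mut self, line: String) {
        if self.lines.len() == self.cap {
            self.lines.pop_front(); // evict the oldest line, O(1) on VecDeque
        }
        self.lines.push_back(line);
    }
}

fn main() {
    let mut log = BoundedLog::new(3);
    for i in 0..10 {
        log.push(format!("line {}", i));
    }
    // Only the 3 newest lines survive.
    assert_eq!(log.lines.len(), 3);
    assert_eq!(log.lines.front().map(String::as_str), Some("line 7"));
    println!("ok");
}
```

Using `VecDeque` instead of `Vec::remove(0)` keeps eviction constant-time rather than shifting the whole buffer on every overflow.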
```rust
pub startup_timeout: Duration,
/// Health check configuration
pub health_check_config: HealthCheckConfig,
```
🛠️ Refactor suggestion
startup_timeout is never referenced
LaunchConfig.startup_timeout is set by the builder but never consulted by ProcessLauncher. Either wire it into the launch/health-wait logic or drop the field to avoid misleading configuration knobs.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_launcher.rs around lines 34 to
36, the startup_timeout field in LaunchConfig is set but never used in
ProcessLauncher. To fix this, either integrate startup_timeout into the process
launch or health check waiting logic to control timeout behavior, or remove the
startup_timeout field entirely if it is not needed to avoid confusion from
unused configuration options.
```rust
// Try graceful shutdown first (SIGTERM)
match child.kill().await {
    Ok(_) => log::info!("Sent SIGTERM to process"),
    Err(e) => log::warn!("Failed to send SIGTERM: {}", e),
}
```
“Graceful” shutdown actually sends SIGKILL
Child::kill() maps to SIGKILL on Unix, yet the log states “Sent SIGTERM”. This prevents the process from performing cleanup and makes the two-phase TERM/KILL logic moot (second kill is redundant). Use nix::sys::signal::kill(pid, SIGTERM) (or platform abstraction) first, then fall back to SIGKILL.
Also applies to: 317-324
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_launcher.rs around lines 289 to
293, the code calls Child::kill() which sends SIGKILL but logs it as SIGTERM,
causing confusion and bypassing graceful shutdown. Replace the call to
Child::kill() with sending SIGTERM explicitly using nix::sys::signal::kill(pid,
SIGTERM) or a platform abstraction, then only call Child::kill() to send SIGKILL
if the process does not terminate after SIGTERM. Apply the same fix to lines
317-324.
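A two-phase TERM-then-KILL flow can be sketched without extra crates by shelling out to the Unix `kill` utility (Unix-only; `stop_gracefully` is a hypothetical helper, not the PR's code, which would more likely use the `nix` crate as suggested):

```rust
use std::process::{Child, Command};
use std::{thread, time::Duration};

/// Two-phase shutdown: SIGTERM first so the process can clean up,
/// then SIGKILL only if it is still alive after the grace period.
fn stop_gracefully(child: &mut Child, grace: Duration) -> std::io::Result<()> {
    let pid = child.id().to_string();
    // Phase 1: polite request via /bin/kill; lets cleanup handlers run.
    Command::new("kill").args(["-TERM", pid.as_str()]).status()?;
    thread::sleep(grace);
    // Phase 2: force-kill (std's Child::kill sends SIGKILL on Unix).
    if child.try_wait()?.is_none() {
        child.kill()?;
    }
    child.wait()?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    let mut child = Command::new("sleep").arg("30").spawn()?;
    stop_gracefully(&mut child, Duration::from_millis(200))?;
    println!("stopped");
    Ok(())
}
```

In production code the grace-period sleep would be an async wait with a timeout rather than a blocking `thread::sleep`.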
```rust
/// Get audit log path with defaults
pub fn get_audit_log_path(&self) -> Option<PathBuf> {
    if self.audit_logging {
        self.audit_log_path.clone().or_else(|| {
            // Default audit log path
            Some(PathBuf::from("logs/mcp_audit.log"))
        })
```
🛠️ Refactor suggestion
Ensure the audit-log directory exists or return an error early.
get_audit_log_path silently returns a path that may be unwritable at runtime, leading to I/O errors on the first logged event.
Either:
- Create the parent directory here, or
- Extend `validate()` to check writability.
Fail fast rather than at the first log write.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/config.rs around lines 170 to 176, the
get_audit_log_path method returns a default audit log path without ensuring the
directory exists or is writable, which can cause runtime I/O errors. To fix
this, modify the method to check if the parent directory of the audit log path
exists and create it if necessary, returning an error early if creation fails.
Alternatively, add a writability check for the audit log path in the validate()
method to fail fast before runtime logging attempts.
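A fail-fast sketch of the first option, creating the parent directory up front (`audit_log_path_checked` is a hypothetical free function mirroring the suggestion; the real method lives in `config.rs`):

```rust
use std::fs;
use std::path::PathBuf;

/// Resolve the audit log path and ensure its parent directory exists,
/// so the first log write cannot fail with ENOENT.
fn audit_log_path_checked(configured: Option<PathBuf>) -> std::io::Result<PathBuf> {
    let path = configured.unwrap_or_else(|| PathBuf::from("logs/mcp_audit.log"));
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?; // error surfaces here, not at first write
    }
    Ok(path)
}

fn main() -> std::io::Result<()> {
    let tmp = std::env::temp_dir().join("mcp_audit_demo").join("audit.log");
    let p = audit_log_path_checked(Some(tmp))?;
    assert!(p.parent().unwrap().exists());
    println!("ok");
    Ok(())
}
```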
Test Results (CI): 3 files, 132 suites, 1h 0m 53s ⏱️

For more details on these failures, see this check. Results for commit 088fd74.

♻️ This comment has been updated with latest results.
Actionable comments posted: 0
♻️ Duplicate comments (4)
applications/minotari_mcp_common/src/process_launcher.rs (4)
170-179: Unbounded memory growth in `output_buffer` still not addressed

Each captured stdout line is appended to an ever-growing `Vec<String>` with no eviction strategy. A long-running or chatty node can exhaust memory. This is the exact issue flagged in the previous review and remains unresolved.
34-34: `startup_timeout` is dead code

The field is defined but never consulted anywhere in the launcher logic, exactly as noted in the previous review. Either integrate it into the launch/health-wait flow or drop it to avoid a misleading knob.
289-293: “Graceful” shutdown still sends SIGKILL while claiming SIGTERM
`Child::kill()` maps to SIGKILL on Unix, yet the log says "Sent SIGTERM". The graceful/force-kill two-phase logic therefore never actually sends SIGTERM, and the second kill is redundant.
170-191: ⚠️ Potential issue

Limit captured output to a bounded ring buffer
If you keep in-memory capture, trim the buffer after each push:
```diff
-output_buffer.write().await.push(format!("STDOUT: {}", line));
+{
+    const MAX_LINES: usize = 1_000;
+    let mut buf = output_buffer.write().await;
+    buf.push(format!("STDOUT: {}", line));
+    if buf.len() > MAX_LINES {
+        buf.remove(0); // discard oldest
+    }
+}
```

Apply the same pattern to the stderr block.
🧹 Nitpick comments (2)
applications/minotari_mcp_common/src/process_launcher.rs (2)
341-345: Unnecessary exclusive lock; use `read()` instead of `write()`

`is_running()` acquires a write lock just to call `try_wait()`. A read lock suffices and avoids blocking other writers:

```diff
-pub async fn is_running(&self) -> bool {
-    if let Some(child) = self.process.write().await.as_mut() {
+pub async fn is_running(&self) -> bool {
+    if let Some(child) = self.process.read().await.as_ref() {
```
218-222: Duplicate `Running` status notifications

`ProcessLaunchStatus::Running` is emitted once at line 218 and again at line 248. Remove the second send to avoid redundant updates.

Also applies to: 248-249
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
applications/minotari_mcp_common/src/process_launcher.rs (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: ci
- GitHub Check: cargo check with stable
Test Results (Integration tests): 2 files +2, 10 suites +10, 2h 10m 52s ⏱️ +2h 10m 52s

For more details on these failures, see this check. Results for commit 088fd74. ± Comparison against base commit 7ab0cd5.
Actionable comments posted: 0
♻️ Duplicate comments (3)
applications/minotari_mcp_common/src/process_launcher.rs (3)
34-36: Remove unused `startup_timeout` field
The `startup_timeout` field is configured via the builder but never used in the actual process launching logic. This creates confusion about what timeouts are actually applied.
171-181: Implement bounded output buffer to prevent memory exhaustion
The output buffer accumulates all stdout/stderr lines indefinitely, which could lead to memory exhaustion with verbose processes.
Consider implementing a circular buffer or limiting the buffer size:
```diff
 // In the struct definition
-    output_buffer: Arc<RwLock<Vec<String>>>,
+    output_buffer: Arc<RwLock<std::collections::VecDeque<String>>>,
+    max_buffer_size: usize, // Add this field

 // In the output capture tasks
 while let Ok(Some(line)) = lines.next_line().await {
     log::debug!("STDOUT: {}", line);
-    output_buffer.write().await.push(format!("STDOUT: {}", line));
+    let mut buffer = output_buffer.write().await;
+    buffer.push_back(format!("STDOUT: {}", line));
+    if buffer.len() > max_buffer_size {
+        buffer.pop_front();
+    }
 }
```
Also applies to: 185-191
291-295: Fix graceful shutdown to actually send SIGTERM before SIGKILL
The current implementation calls `child.kill()`, which sends SIGKILL immediately, making the "graceful shutdown" logic ineffective. The log message incorrectly claims to send SIGTERM. For proper graceful shutdown, you need to send SIGTERM first, then SIGKILL if the process doesn't terminate. Consider using a crate like `nix` for proper signal handling, or implementing platform-specific signal sending.

```diff
 // Add nix dependency for proper signal handling
 use nix::sys::signal::{self, Signal};
 use nix::unistd::Pid;

 // In the stop method
-match child.kill().await {
-    Ok(_) => log::info!("Sent SIGTERM to process"),
-    Err(e) => log::warn!("Failed to send SIGTERM: {}", e),
-}
+if let Some(pid) = child.id() {
+    match signal::kill(Pid::from_raw(pid as i32), Signal::SIGTERM) {
+        Ok(_) => log::info!("Sent SIGTERM to process"),
+        Err(e) => log::warn!("Failed to send SIGTERM: {}", e),
+    }
+}
```
Also applies to: 317-324
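A crate-free Unix sketch of the same two-phase pattern (the function name and grace period are hypothetical): it shells out to the system `kill` binary to deliver SIGTERM, since `std`'s `Child::kill` only sends SIGKILL, then force-kills if the child outlives the grace period.

```rust
use std::process::{Child, Command};
use std::time::{Duration, Instant};

/// Hypothetical two-phase shutdown: SIGTERM first, SIGKILL after `grace`.
fn graceful_stop(child: &mut Child, grace: Duration) -> std::io::Result<()> {
    // Phase 1: ask politely with SIGTERM (via the system `kill` binary,
    // because std::process::Child::kill sends SIGKILL on Unix).
    Command::new("kill")
        .args(["-TERM", &child.id().to_string()])
        .status()?;

    let deadline = Instant::now() + grace;
    while Instant::now() < deadline {
        if child.try_wait()?.is_some() {
            return Ok(()); // exited cleanly within the grace period
        }
        std::thread::sleep(Duration::from_millis(50));
    }

    // Phase 2: force-kill (SIGKILL) and reap if still alive.
    child.kill()?;
    child.wait()?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    let mut child = Command::new("sleep").arg("30").spawn()?;
    graceful_stop(&mut child, Duration::from_secs(2))?;
    println!("child stopped");
    Ok(())
}
```

In async code the same shape works with `tokio::process::Child` and `tokio::time::sleep`; the `nix`-based version in the comment above is the more direct route once the dependency is acceptable.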
🧹 Nitpick comments (5)
applications/minotari_mcp_common/src/process_launcher.rs (5)
114-252: Consider extracting complex launch logic into smaller methods
The `launch` method is quite long and handles multiple responsibilities. Consider extracting subprocess tasks like output capture, immediate crash detection, and health waiting into separate methods for better maintainability. Example refactoring:

```rust
pub async fn launch(&self) -> McpResult<LaunchResult> {
    drop(self.status_tx.send(ProcessLaunchStatus::Starting));
    let executable_path = self.resolve_executable_path().await?;
    let mut child = self.spawn_process(&executable_path).await?;
    let pid = child.id();
    self.start_output_capture(&mut child).await;
    *self.process.write().await = Some(child);
    self.check_immediate_crash(pid).await?;
    self.wait_for_health().await?;
    let result = LaunchResult {
        process_id: self.launch_id,
        executable_path,
        pid,
        grpc_endpoint: self.config.health_check_config.grpc_endpoint.clone(),
        status: ProcessLaunchStatus::Running,
    };
    drop(self.status_tx.send(ProcessLaunchStatus::Running));
    Ok(result)
}
```
272-282: Improve executable discovery logic
The current discovery logic is quite simplistic, checking for "wallet" or "node" strings in arguments. This could lead to false positives or incorrect detection.
Consider a more robust approach:
```diff
 async fn discover_executable(&self) -> McpResult<PathBuf> {
-    // Try to determine executable type from arguments
-    if self.config.args.iter().any(|arg| arg.contains("wallet")) {
-        TariExecutables::find_wallet()
-    } else if self.config.args.iter().any(|arg| arg.contains("node")) {
-        TariExecutables::find_node()
-    } else {
-        // Default to node if we can't determine
-        TariExecutables::find_node()
-    }
+    // Look for specific argument patterns
+    let has_wallet_flag = self.config.args.iter().any(|arg|
+        arg == "--wallet" || arg == "wallet"
+    );
+    let has_node_flag = self.config.args.iter().any(|arg|
+        arg == "--node" || arg == "node"
+    );
+
+    match (has_wallet_flag, has_node_flag) {
+        (true, false) => TariExecutables::find_wallet(),
+        (false, true) | (false, false) => TariExecutables::find_node(),
+        (true, true) => Err(McpError::config_error("Ambiguous executable type: both wallet and node specified")),
+    }
 }
```
438-450: Enhance multiaddr conversion with better validation
The current implementation does basic string manipulation but lacks proper validation of IP addresses and port numbers.
Consider adding validation:
```diff
 fn convert_to_multiaddr(address: &str) -> String {
     if address.starts_with("/ip4/") {
         // Already in multiaddr format
         address.to_string()
     } else if let Some((ip, port)) = address.split_once(':') {
+        // Validate IP address format
+        if std::net::Ipv4Addr::from_str(ip).is_err() {
+            log::warn!("Invalid IP address format: {}", ip);
+        }
+        // Validate port number
+        if port.parse::<u16>().is_err() {
+            log::warn!("Invalid port number: {}", port);
+        }
         // Convert IP:PORT to /ip4/IP/tcp/PORT
         format!("/ip4/{}/tcp/{}", ip, port)
     } else {
+        // Validate port number
+        if address.parse::<u16>().is_err() {
+            log::warn!("Invalid port number: {}", address);
+        }
         // Assume it's just a port, use localhost
         format!("/ip4/127.0.0.1/tcp/{}", address)
     }
 }
```
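A self-contained sketch of the validated conversion, rejecting bad input rather than only logging it (the fixed fallback address is an assumption for illustration, not from the PR):

```rust
use std::net::Ipv4Addr;
use std::str::FromStr;

/// Illustrative validated conversion of "IP:PORT", "PORT", or an existing
/// multiaddr into /ip4/IP/tcp/PORT form, with a fixed fallback on bad input.
fn convert_to_multiaddr(address: &str) -> String {
    const FALLBACK: &str = "/ip4/127.0.0.1/tcp/18142"; // hypothetical default
    if address.starts_with("/ip4/") {
        return address.to_string(); // already a multiaddr
    }
    if let Some((ip, port)) = address.split_once(':') {
        // Both halves must parse: a real IPv4 address and a u16 port.
        if Ipv4Addr::from_str(ip).is_ok() && port.parse::<u16>().is_ok() {
            return format!("/ip4/{}/tcp/{}", ip, port);
        }
        return FALLBACK.to_string();
    }
    // Bare value: accept only a valid u16 port, assume localhost.
    if address.parse::<u16>().is_ok() {
        return format!("/ip4/127.0.0.1/tcp/{}", address);
    }
    FALLBACK.to_string()
}

fn main() {
    assert_eq!(convert_to_multiaddr("127.0.0.1:8080"), "/ip4/127.0.0.1/tcp/8080");
    assert_eq!(convert_to_multiaddr("8080"), "/ip4/127.0.0.1/tcp/8080");
    assert_eq!(convert_to_multiaddr("not-an-address"), "/ip4/127.0.0.1/tcp/18142");
}
```

Note `u16` parsing rejects both non-numeric input and out-of-range ports (e.g. 70000) in one check.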
473-477: Consider making gRPC allowed methods configurable
The hardcoded list of allowed gRPC methods might need to be customizable for different deployment scenarios or security requirements.
Consider adding this to the launch configuration:
```rust
pub struct LaunchConfig {
    // ... existing fields
    pub grpc_allowed_methods: Option<Vec<String>>,
}

// In the launch_node method
let allowed_methods = additional_args.iter()
    .find(|arg| arg.starts_with("--grpc-allowed-methods="))
    .map(|arg| arg.strip_prefix("--grpc-allowed-methods=").unwrap_or(""))
    .unwrap_or("get_version,get_tip_info,get_sync_info,get_network_status,get_peers,get_header_by_hash,get_blocks,get_network_difficulty,get_tokens_in_circulation,get_mempool_stats,get_mempool_transactions,get_new_block_template,get_new_block_template_with_coinbases,submit_transaction,submit_block");
```
558-594: Add more comprehensive tests
The current tests only verify basic configuration building. Consider adding tests for error conditions, process lifecycle, and edge cases.
Add tests for:
- Invalid health check configurations
- Multiaddr conversion edge cases
- Executable discovery logic
- Error handling scenarios
```rust
#[test]
fn test_multiaddr_conversion() {
    assert_eq!(
        TariProcessLauncher::convert_to_multiaddr("127.0.0.1:8080"),
        "/ip4/127.0.0.1/tcp/8080"
    );
    assert_eq!(
        TariProcessLauncher::convert_to_multiaddr("8080"),
        "/ip4/127.0.0.1/tcp/8080"
    );
    assert_eq!(
        TariProcessLauncher::convert_to_multiaddr("/ip4/127.0.0.1/tcp/8080"),
        "/ip4/127.0.0.1/tcp/8080"
    );
}

#[test]
fn test_build_without_health_check() {
    let result = LaunchConfigBuilder::new().build();
    assert!(result.is_err());
    assert!(result.unwrap_err().to_string().contains("Health check configuration is required"));
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `applications/minotari_mcp_common/src/process_launcher.rs` (1 hunks)
- `applications/minotari_mcp_common/src/process_manager.rs` (1 hunks)
- `applications/minotari_mcp_node/src/main.rs` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- applications/minotari_mcp_node/src/main.rs
- applications/minotari_mcp_common/src/process_manager.rs
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: MCozhusheck
PR: tari-project/tari#7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
applications/minotari_mcp_common/src/process_launcher.rs (4)
Learnt from: hansieodendaal
PR: tari-project/tari#7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: tari-project/tari#7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of `tokio::task::spawn(async move {` with `tokio::task::spawn_blocking().await` in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: SWvheerden
PR: tari-project/tari#6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: SWvheerden
PR: tari-project/tari#6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: cargo check with stable
- GitHub Check: ci
- GitHub Check: ledger build tests
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: Cucumber tests / FFI
🔇 Additional comments (1)
applications/minotari_mcp_common/src/process_launcher.rs (1)
343-349: Potential race condition in `is_running` method — retracted
The method takes a write lock when it only needs to read the process status, which initially suggested switching to a read lock:

```diff
 pub async fn is_running(&self) -> bool {
-    if let Some(child) = self.process.write().await.as_mut() {
+    if let Some(child) = self.process.read().await.as_ref() {
         child.try_wait().map(|status| status.is_none()).unwrap_or(false)
     } else {
         false
     }
 }
```

However, this won't work: `try_wait()` requires a mutable reference to the `Child` process, so a write lock is required. The current implementation is correct. The method name may be misleading in suggesting a read-only operation, but the underlying system call requires mutability.
SWvheerden
left a comment
quick review says this is fine.
This is a standalone app so it won't impact existing stuff
Actionable comments posted: 9
♻️ Duplicate comments (8)
applications/minotari_mcp_node/src/tools/mempool_tools.rs (1)
107-109: Input limit validation missing at runtime
The existing review comment correctly identifies that while the JSON schema defines a maximum limit of 1000, the runtime code doesn't enforce this constraint, potentially allowing excessive resource usage.
The fix should clamp the limit value:
```diff
-let limit = params.get("limit").and_then(|v| v.as_u64()).unwrap_or(100);
+let limit = params.get("limit").and_then(|v| v.as_u64()).unwrap_or(100).min(1000);
```
applications/minotari_mcp_common/src/process_launcher.rs (5)
34-36: `startup_timeout` field is configured but never used
The `startup_timeout` field is set by the builder but never consulted by `ProcessLauncher`. Either integrate it into the launch/health-wait logic or remove it to avoid confusion.
174-182: Unbounded in-memory log capture
Every stdout/stderr line is pushed onto `output_buffer`, which is never trimmed. A chatty process can exhaust memory or slow the node. Keep only the last n KB/lines, or flush to disk. Also applies to: 185-193
292-296: "Graceful" shutdown actually sends SIGKILL
`Child::kill()` maps to `SIGKILL` on Unix, yet the log states "Sent SIGTERM". This prevents the process from performing cleanup and makes the two-phase TERM/KILL logic moot. Use `nix::sys::signal::kill(pid, SIGTERM)` first, then fall back to `SIGKILL`. Also applies to: 317-324
496-501: Inconsistent timeout configuration
The `TariProcessLauncher::launch_node` method sets `startup_timeout` to 120 seconds and `max_wait_time` to 90 seconds, but since `startup_timeout` is unused, this creates confusing configuration. Either use both timeouts appropriately or align them. Remove the unused `startup_timeout` or implement its usage:

```diff
 let config = LaunchConfigBuilder::new()
     .with_executable(TariExecutables::find_node()?)
     .with_args(args)
-    .with_startup_timeout(Duration::from_secs(120))
     .with_health_check(health_config)
     .build()?;
```
542-547: Inconsistent timeout configuration
Similar to the node launcher, the wallet launcher has misaligned timeout values (200s startup vs 180s health check max wait), creating confusion since `startup_timeout` is unused. Remove the unused `startup_timeout`:

```diff
 let config = LaunchConfigBuilder::new()
     .with_executable(TariExecutables::find_wallet()?)
     .with_args(args)
-    .with_startup_timeout(Duration::from_secs(200))
     .with_health_check(health_config)
     .build()?;
```
applications/minotari_mcp_node/src/tools/mining_tools.rs (2)
64-77: Fix inconsistent algorithm mapping documentation
The algorithm mapping in the error message doesn't match the actual implementation. The error message claims `0=SHA3X, 1=RANDOMXM, 2=RANDOMXT`, but the code maps `0 => RANDOMXM, 1 => SHA3X, 2 => RANDOMXT`.

```diff
-McpError::invalid_request("algo is required (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)".to_string())
+McpError::invalid_request("algo is required (0=RANDOMXM, 1=SHA3X, 2=RANDOMXT)".to_string())
```
107-110: Avoid hardcoded timestamp placeholders
Setting `"timestamp": 0` creates misleading API output that could confuse consumers expecting valid Unix timestamps. Consider using `serde_json::Value::Null` to omit the field when the actual timestamp isn't available:

```diff
-"timestamp": 0, // Timestamp not available in NewBlockHeaderTemplate
+"timestamp": serde_json::Value::Null, // Timestamp not available in NewBlockHeaderTemplate
```
🧹 Nitpick comments (6)
applications/minotari_mcp_node/src/tools/mempool_tools.rs (2)
368-368: Unnecessary clippy allow directive
The `#[allow(clippy::cast_possible_truncation)]` directive appears to be misplaced or unnecessary, as there are no obvious truncating casts in the immediate vicinity. Remove the unnecessary directive or move it to the actual location where truncation occurs:

```diff
-    #[allow(clippy::cast_possible_truncation)]
```
381-385: Potential integer overflow in fee calculation
The fee-per-gram calculation could lose precision when converting large u64 values to f64 and back, especially with high transaction volumes. Consider using checked arithmetic or a more robust calculation:

```diff
 "fee_per_gram_avg": if total_weight > 0 {
-    (total_fees as f64 / total_weight as f64).round() as u64
+    total_fees.saturating_div(total_weight as u64)
 } else {
     0
 },
```
applications/minotari_mcp_common/src/process_launcher.rs (1)
474-478: Long hardcoded gRPC method allow-list could become outdated
The hardcoded list of allowed gRPC methods may become stale as new methods are added or existing ones are modified in the base node. Consider making this configurable or using a more maintainable approach.
Would you like me to propose a configuration-based approach or a more dynamic method discovery mechanism for the gRPC allow-list?
applications/minotari_mcp_node/src/tools/mining_tools.rs (3)
384-385: Remove incorrect dead_code attribute
The `#[allow(dead_code)]` attribute claims the field is used in the `execute()` method at lines 337-340, but those lines don't exist in this tool and the field is never actually used.

```diff
-#[allow(dead_code)] // False positive: field is used in execute() method line 337-340
-grpc_client: BaseNodeGrpcClient<Channel>,
+_grpc_client: BaseNodeGrpcClient<Channel>,
```
421-429: Incomplete tool implementation
This tool always returns an error and serves no functional purpose. Consider whether this placeholder adds value. If this tool won't be implemented soon, consider removing it from the module to avoid API clutter. If keeping it as a placeholder, simplify the error message:

```diff
-return Err(McpError::invalid_request(
-    "This tool requires a complete block template and coinbase configuration. Use \
-     get_new_block_template_with_coinbases for most use cases."
-        .to_string(),
-));
+Err(McpError::invalid_request(
+    "Tool not yet implemented. Use get_new_block_template_with_coinbases instead.".to_string(),
+))
```
537-541: Potential division by zero in profitability calculation
While there's a check for `miner_data.target_difficulty > 0`, the division could still result in infinity or NaN if the difficulty is very small. Consider using a more robust calculation:

```diff
 "profitability_score": if miner_data.target_difficulty > 0 {
-    ((miner_data.reward + miner_data.total_fees) as f64 / miner_data.target_difficulty as f64 * 1000000.0).round()
+    let score = (miner_data.reward + miner_data.total_fees) as f64 / miner_data.target_difficulty as f64 * 1000000.0;
+    if score.is_finite() { score.round() } else { 0.0 }
 } else {
     0.0
 },
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- `Cargo.toml` (2 hunks)
- `README.md` (8 hunks)
- `applications/minotari_mcp_common/src/process_launcher.rs` (1 hunks)
- `applications/minotari_mcp_node/src/cli.rs` (1 hunks)
- `applications/minotari_mcp_node/src/main.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/network_status.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/blockchain_tools.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/mempool_tools.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/mining_tools.rs` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- Cargo.toml
- applications/minotari_mcp_node/src/main.rs
- README.md
- applications/minotari_mcp_node/src/resources/network_status.rs
- applications/minotari_mcp_node/src/cli.rs
- applications/minotari_mcp_node/src/tools/blockchain_tools.rs
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: MCozhusheck
PR: tari-project/tari#7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
applications/minotari_mcp_node/src/tools/mining_tools.rs (4)
Learnt from: hansieodendaal
PR: #7266
File: base_layer/core/src/chain_storage/blockchain_database.rs:457-461
Timestamp: 2025-06-26T13:18:55.898Z
Learning: The MinedInfo struct in the Tari blockchain database contains two optional fields: input: Option<InputMinedInfo> and output: Option<OutputMinedInfo>. This design allows methods returning MinedInfo to always return a valid struct rather than wrapping it in Option, with "not found" scenarios represented through the internal optional fields.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #6963
File: common_sqlite/src/error.rs:88-92
Timestamp: 2025-05-23T07:49:57.349Z
Learning: In the StorageError enum in common_sqlite/src/error.rs, the HexError variant should keep the manual From implementation rather than using #[from] attribute, as it stores a String representation of the error rather than the HexError type itself.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
applications/minotari_mcp_node/src/tools/mempool_tools.rs (2)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
applications/minotari_mcp_common/src/process_launcher.rs (4)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
🧬 Code Graph Analysis (1)
applications/minotari_mcp_node/src/tools/mining_tools.rs (4)
applications/minotari_mcp_node/src/tools/mempool_tools.rs (16)
`new` (19-21, 73-75, 183-185, 272-274), `name` (26-28, 80-82, 190-192, 279-281), `description` (30-32, 84-86, 194-196, 283-285), `execute` (46-63, 107-173, 215-262, 299-412)
applications/minotari_mcp_node/src/tools/blockchain_tools.rs (28)
`new` (26-28, 127-129, 202-204, 301-303, 374-376, 436-438, 535-537), `name` (33-35, 134-136, 209-211, 308-310, 381-383, 443-445, 542-544), `description` (37-39, 138-140, 213-215, 312-314, 385-387, 447-449, 546-548), `execute` (65-117, 159-192, 238-291, 328-364, 401-426, 479-525, 571-616)
applications/minotari_mcp_node/src/tools/network_tools.rs (20)
`new` (19-21, 101-103, 225-227, 363-365, 421-423), `name` (26-28, 108-110, 232-234, 370-372, 428-430), `description` (30-32, 112-114, 236-238, 374-376, 432-434), `execute` (46-91, 128-215, 259-353, 390-411, 449-567)
applications/minotari_mcp_common/src/error.rs (2)
`invalid_request` (61-63), `tool_execution_failed` (73-75)
🔇 Additional comments (4)
applications/minotari_mcp_node/src/tools/mempool_tools.rs (3)
46-64: Well-structured mempool stats implementation
Clean implementation that properly handles the gRPC request/response cycle with appropriate error handling and response formatting.

132-161: Efficient transaction streaming with proper data extraction
Good implementation of streaming transaction data with proper null-safety checks and structured response formatting. The hex encoding for the offset and the fee calculation logic are correctly implemented.

394-402: Division by zero risk in fee rate calculation
The p75 weight could be zero, leading to a division-by-zero panic on line 401. Add a proper zero-check to prevent the panic:

```diff
 let p75_fee = *fee_distribution.get(fee_distribution.len() * 3 / 4).unwrap_or(&0);
 let p75_weight = *weight_distribution.get(weight_distribution.len() * 3 / 4).unwrap_or(&1);
-if p75_weight > 0 { p75_fee / p75_weight as u64 } else { 25 }
+if p75_weight > 0 {
+    p75_fee.saturating_div(p75_weight as u64).max(1)
+} else {
+    25
+}
```
⛔ Skipped due to learnings
Learnt from: hansieodendaal
PR: tari-project/tari#6974
File: base_layer/wallet/src/output_manager_service/service.rs:2026-2031
Timestamp: 2025-04-23T05:56:30.985Z
Learning: When setting the fee parameter in the `output_to_self` method in the Tari wallet, use an accurate fee estimate instead of `fee_per_gram`. The `input_selection.as_final_fee()` method provides a good initial estimate, and the final fee can be obtained later from `stp.get_fee_amount()`.
applications/minotari_mcp_node/src/tools/mining_tools.rs (1)
518-522: Potential division by zero
The code checks `if hash_rate > 0` but still performs division without handling the case where `miner_data.target_difficulty` could be zero. Add proper zero checks for both values:

```diff
 // Calculate estimated time to find block
-let time_to_block = if hash_rate > 0 {
+let time_to_block = if hash_rate > 0 && miner_data.target_difficulty > 0 {
     miner_data.target_difficulty / hash_rate
 } else {
     0
 };
```
Likely an incorrect or invalid review comment.
```rust
let health_monitor = Some(
    HealthMonitor::new(
        "launched_process".to_string(),
        config.health_check_config.grpc_endpoint.clone(),
    )
    .with_timeout(Duration::from_secs(10)),
);
```
Hard-coded timeout overrides configuration
The health monitor is created with a hard-coded 10-second timeout, ignoring the configured health_check_config.max_wait_time. This creates inconsistency between the configuration and actual behavior.
```diff
-let health_monitor = Some(
-    HealthMonitor::new(
-        "launched_process".to_string(),
-        config.health_check_config.grpc_endpoint.clone(),
-    )
-    .with_timeout(Duration::from_secs(10)),
-);
+let health_monitor = Some(
+    HealthMonitor::new(
+        "launched_process".to_string(),
+        config.health_check_config.grpc_endpoint.clone(),
+    )
+    .with_timeout(config.health_check_config.max_wait_time),
+);
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let health_monitor = Some(
    HealthMonitor::new(
        "launched_process".to_string(),
        config.health_check_config.grpc_endpoint.clone(),
    )
    .with_timeout(config.health_check_config.max_wait_time),
);
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_launcher.rs around lines 92 to
98, the HealthMonitor is initialized with a hard-coded 10-second timeout instead
of using the configured max_wait_time from health_check_config. Replace the
fixed Duration::from_secs(10) with a Duration derived from
config.health_check_config.max_wait_time to ensure the timeout respects the
configuration settings.
```rust
tokio::time::sleep(Duration::from_millis(100)).await;
if !self.is_running().await {
```
There was a problem hiding this comment.
🛠️ Refactor suggestion
Race condition in immediate crash detection
A 100ms sleep is insufficient to reliably detect process crashes, especially on slow systems or under load. Consider using a longer delay or monitoring process exit status asynchronously.
```diff
-tokio::time::sleep(Duration::from_millis(100)).await;
+tokio::time::sleep(Duration::from_millis(500)).await;
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
tokio::time::sleep(Duration::from_millis(500)).await;
if !self.is_running().await {
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_launcher.rs around lines 200 to
201, the 100ms sleep before checking if the process is running is too short and
can cause race conditions in crash detection. Increase the delay duration to a
longer interval to allow the process enough time to start or crash, or refactor
the code to monitor the process exit status asynchronously instead of relying on
a fixed sleep.
```rust
pub async fn is_running(&self) -> bool {
    if let Some(child) = self.process.write().await.as_mut() {
        child.try_wait().map(|status| status.is_none()).unwrap_or(false)
    } else {
        false
    }
}
```
🛠️ Refactor suggestion
Unnecessary write lock for read-only operation
The is_running() method only needs to check process status but takes a write lock unnecessarily. This could block other operations that need the write lock.
```diff
-pub async fn is_running(&self) -> bool {
-    if let Some(child) = self.process.write().await.as_mut() {
-        child.try_wait().map(|status| status.is_none()).unwrap_or(false)
-    } else {
-        false
-    }
-}
+pub async fn is_running(&self) -> bool {
+    if let Some(child) = self.process.read().await.as_ref() {
+        // Note: try_wait() requires &mut self, so we need to upgrade to write lock
+        // This is unavoidable due to tokio::process::Child API design
+        drop(child);
+        if let Some(child) = self.process.write().await.as_mut() {
+            child.try_wait().map(|status| status.is_none()).unwrap_or(false)
+        } else {
+            false
+        }
+    } else {
+        false
+    }
+}
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_launcher.rs around lines 344 to
350, the is_running() method acquires a write lock on self.process even though
it only needs to observe process status. Note that tokio::process::Child::try_wait()
takes &mut self, so a plain read lock is not sufficient on its own; either keep
the write lock but hold it as briefly as possible, or track liveness out-of-band
(for example, a flag updated by a task that awaits the child's exit) so that
is_running() can use a cheap read and never blocks writers.
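One way to avoid taking any lock in `is_running()` at all is to have a single watcher task await the child's exit and publish liveness through an `AtomicBool`, so status checks never touch the `Child` handle. The sketch below uses std types only; the `ProcessHandle` name and the watcher wiring are illustrative assumptions, not the PR's code:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical handle: the real struct wraps a tokio::process::Child; here we
// only model the liveness flag a watcher task would maintain.
struct ProcessHandle {
    running: Arc<AtomicBool>,
}

impl ProcessHandle {
    // Returns the handle plus the flag a watcher task would clear on exit.
    fn new() -> (Self, Arc<AtomicBool>) {
        let flag = Arc::new(AtomicBool::new(true));
        (Self { running: flag.clone() }, flag)
    }

    // Lock-free status check: no read or write lock needed.
    fn is_running(&self) -> bool {
        self.running.load(Ordering::Acquire)
    }
}

fn main() {
    let (handle, watcher_flag) = ProcessHandle::new();
    assert!(handle.is_running());

    // A watcher task would do this after `child.wait().await` returns:
    watcher_flag.store(false, Ordering::Release);
    assert!(!handle.is_running());
    println!("ok");
}
```

This sidesteps the `try_wait()` requires `&mut self` problem entirely, at the cost of one extra spawned task per child.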
```rust
fn convert_to_multiaddr(address: &str) -> String {
    if address.starts_with("/ip4/") {
        // Already in multiaddr format
        address.to_string()
    } else if let Some((ip, port)) = address.split_once(':') {
        // Convert IP:PORT to /ip4/IP/tcp/PORT
        format!("/ip4/{}/tcp/{}", ip, port)
    } else {
        // Assume it's just a port, use localhost
        format!("/ip4/127.0.0.1/tcp/{}", address)
    }
}
```
🛠️ Refactor suggestion
Multiaddr conversion logic is incomplete
The function assumes IP:PORT format but doesn't validate the input. Invalid formats (missing colon, non-numeric port) will produce malformed multiaddrs that could cause runtime failures.
```diff
 fn convert_to_multiaddr(address: &str) -> String {
     if address.starts_with("/ip4/") {
         // Already in multiaddr format
         address.to_string()
     } else if let Some((ip, port)) = address.split_once(':') {
-        // Convert IP:PORT to /ip4/IP/tcp/PORT
-        format!("/ip4/{}/tcp/{}", ip, port)
+        // Validate port is numeric
+        if port.parse::<u16>().is_ok() {
+            format!("/ip4/{}/tcp/{}", ip, port)
+        } else {
+            // Invalid port, treat as single port number
+            format!("/ip4/127.0.0.1/tcp/{}", address)
+        }
     } else {
-        // Assume it's just a port, use localhost
-        format!("/ip4/127.0.0.1/tcp/{}", address)
+        // Validate it's a numeric port
+        if address.parse::<u16>().is_ok() {
+            format!("/ip4/127.0.0.1/tcp/{}", address)
+        } else {
+            // Invalid format, default to a safe fallback
+            "/ip4/127.0.0.1/tcp/18142".to_string()
+        }
     }
 }
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_launcher.rs around lines 440 to
451, the convert_to_multiaddr function lacks validation for the IP:PORT input
format, which can lead to malformed multiaddrs if the input is invalid. Update
the function to validate that the IP is a valid IPv4 address and the port is a
numeric value before formatting. If validation fails, handle the error
gracefully, such as returning a Result type or a default/fallback multiaddr, to
prevent runtime failures.
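The prompt's stronger option — validating the IP as well and surfacing failure to the caller — can be sketched with std-only parsing. Returning `Option` is an assumption for the sketch (the PR's function returns `String` with a fallback), and the caller decides what fallback to apply on `None`:

```rust
use std::net::Ipv4Addr;

// Sketch, not the PR's code: reject malformed input instead of emitting a
// malformed multiaddr.
fn convert_to_multiaddr(address: &str) -> Option<String> {
    if address.starts_with("/ip4/") {
        // Already in multiaddr format
        return Some(address.to_string());
    }
    if let Some((ip, port)) = address.split_once(':') {
        // Both halves must parse: a real IPv4 address and a u16 port.
        if ip.parse::<Ipv4Addr>().is_ok() && port.parse::<u16>().is_ok() {
            return Some(format!("/ip4/{}/tcp/{}", ip, port));
        }
        return None;
    }
    // Bare port: bind to localhost.
    address
        .parse::<u16>()
        .ok()
        .map(|port| format!("/ip4/127.0.0.1/tcp/{}", port))
}

fn main() {
    assert_eq!(
        convert_to_multiaddr("127.0.0.1:18142").as_deref(),
        Some("/ip4/127.0.0.1/tcp/18142")
    );
    assert_eq!(
        convert_to_multiaddr("9000").as_deref(),
        Some("/ip4/127.0.0.1/tcp/9000")
    );
    assert_eq!(convert_to_multiaddr("nonsense:port"), None);
    println!("ok");
}
```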
```rust
async fn execute(&self, params: Value) -> McpResult<Value> {
    let excess_sig_hex = get_required_string_param(&params, "excess_signature")?;

    // Parse excess signature - this would need to be properly structured
    // For now, we'll assume it's provided in the correct format
    let excess_sig_bytes = hex::decode(&excess_sig_hex)
        .map_err(|e| McpError::invalid_request(format!("Invalid hex excess signature: {}", e)))?;

    // Create signature object - this is a simplified version
    // In reality, we'd need to properly parse the signature components
    let signature = Signature {
        public_nonce: excess_sig_bytes.clone(),
        signature: excess_sig_bytes,
    };
```
Improper signature construction creates invalid cryptographic object
The signature construction reuses the same bytes for both public_nonce and signature fields, which creates an invalid cryptographic signature that won't work with the transaction state API.
This approach will likely cause the gRPC call to fail or return incorrect results. Consider either:
- Properly parsing the excess signature into its components, or
- Using a different API that accepts raw signature bytes
```diff
-// Create signature object - this is a simplified version
-// In reality, we'd need to properly parse the signature components
-let signature = Signature {
-    public_nonce: excess_sig_bytes.clone(),
-    signature: excess_sig_bytes,
-};
+// TODO: Properly parse signature into public_nonce and signature components
+// This current implementation is invalid and needs to be fixed
+return Err(McpError::invalid_request(
+    "Signature parsing not yet implemented - need to properly split excess_sig_bytes".to_string(),
+));
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/tools/mempool_tools.rs between lines 215
and 228, the signature is incorrectly constructed by assigning the same byte
array to both public_nonce and signature fields, resulting in an invalid
cryptographic object. To fix this, properly parse the excess signature hex
string into its distinct components for public_nonce and signature according to
the expected signature structure, or alternatively, use an API that accepts the
raw signature bytes directly without splitting them.
```rust
async fn execute(&self, _params: Value) -> McpResult<Value> {
    // This is a simplified implementation - in reality, we'd need to properly
    // construct the NewBlockTemplate from the parameters
    return Err(McpError::invalid_request(
        "This tool requires a complete block template structure. Use get_new_block_template first.".to_string(),
    ));

    // TODO: Implement proper template parsing when needed
    // let template = parse_block_template(&params)?;
    // let request = Request::new(template);
    // let response = self.grpc_client.clone().get_new_block(request).await?;
}
```
🛠️ Refactor suggestion
Incomplete tool implementation creates dead code
This tool always returns an error and has unused parameters and commented-out code, making it effectively non-functional.
Either implement the functionality or remove this tool from the module. If keeping as a placeholder, consider:
```diff
 async fn execute(&self, _params: Value) -> McpResult<Value> {
-    // This is a simplified implementation - in reality, we'd need to properly
-    // construct the NewBlockTemplate from the parameters
     return Err(McpError::invalid_request(
-        "This tool requires a complete block template structure. Use get_new_block_template first.".to_string(),
+        "Tool not yet implemented. Use get_new_block_template for template retrieval.".to_string(),
     ));
-
-    // TODO: Implement proper template parsing when needed
-    // let template = parse_block_template(&params)?;
-    // let request = Request::new(template);
-    // let response = self.grpc_client.clone().get_new_block(request).await?;
 }
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/tools/mining_tools.rs around lines 176 to
187, the execute function is incomplete and always returns an error, leaving
unused parameters and commented-out code that create dead code. To fix this,
either fully implement the function to parse the parameters and call the gRPC
client as intended, or remove the function and related tool from the module. If
you want to keep it as a placeholder, clearly mark it as unimplemented and
remove unused parameters and commented code to avoid confusion.
```rust
            "type": "number",
            "description": "Mining algorithm (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)"
        },
        "max_weight": {
            "type": "number",
            "description": "Maximum block weight (optional, default 19500)",
            "minimum": 1
        },
        "coinbases": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "address": {
                        "type": "string",
                        "description": "Coinbase recipient address"
                    },
                    "value": {
                        "type": "number",
                        "description": "Coinbase value"
                    },
                    "stealth_payment": {
                        "type": "boolean",
                        "description": "Whether to use stealth payment (optional)"
                    },
                    "revealed_value_proof": {
                        "type": "boolean",
                        "description": "Whether to include revealed value proof (optional)"
                    },
                    "coinbase_extra": {
                        "type": "string",
                        "description": "Extra coinbase data (optional)"
                    }
                },
                "required": ["address", "value"]
            },
            "description": "Array of coinbase outputs"
        }
    },
    "required": ["algo", "coinbases"]
    })
}

async fn execute(&self, params: Value) -> McpResult<Value> {
    let algo = params.get("algo").and_then(|v| v.as_u64()).ok_or_else(|| {
        McpError::invalid_request("algo is required (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)".to_string())
    })?;

    let max_weight = params.get("max_weight").and_then(|v| v.as_u64()).unwrap_or(19500);

    let pow_algo_val = match algo {
        0 => PowAlgos::Sha3x.into(),    // SHA3X
        1 => PowAlgos::Randomxm.into(), // RANDOMXM
        2 => PowAlgos::Randomxt.into(), // RANDOMXT
        _ => {
            return Err(McpError::invalid_request(
                "Invalid algo: must be 0 (SHA3X), 1 (RANDOMXM), or 2 (RANDOMXT)".to_string(),
```
Inconsistent algorithm mapping across tools
This tool uses a different algorithm mapping (0 => SHA3X, 1 => RANDOMXM, 2 => RANDOMXT) than the first tool (0 => RANDOMXM, 1 => SHA3X, 2 => RANDOMXT). This creates API inconsistency.
Standardize the algorithm mapping across all tools. Update this tool to match the first tool's mapping:
```diff
 let pow_algo_val = match algo {
-    0 => PowAlgos::Sha3x.into(),    // SHA3X
-    1 => PowAlgos::Randomxm.into(), // RANDOMXM
+    0 => PowAlgos::Randomxm.into(), // RANDOMXM
+    1 => PowAlgos::Sha3x.into(),    // SHA3X
     2 => PowAlgos::Randomxt.into(), // RANDOMXT
```

Also update the error messages and the input schema description to match:

```diff
-McpError::invalid_request("algo is required (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)".to_string())
+McpError::invalid_request("algo is required (0=RANDOMXM, 1=SHA3X, 2=RANDOMXT)".to_string())
```

```diff
-"Invalid algo: must be 0 (SHA3X), 1 (RANDOMXM), or 2 (RANDOMXT)".to_string(),
+"Invalid algo: must be 0 (RANDOMXM), 1 (SHA3X), or 2 (RANDOMXT)".to_string(),
```

```diff
-"description": "Mining algorithm (0=SHA3X, 1=RANDOMXM, 2=RANDOMXT)"
+"description": "Mining algorithm (0=RANDOMXM, 1=SHA3X, 2=RANDOMXT)"
```
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/tools/mining_tools.rs between lines 221
and 277, the algorithm mapping for "algo" is inconsistent with the first tool,
currently mapping 0 to SHA3X, 1 to RANDOMXM, and 2 to RANDOMXT. Update the
mapping to match the first tool by mapping 0 to RANDOMXM, 1 to SHA3X, and 2 to
RANDOMXT. Also revise the error message and the input schema description to
reflect this corrected mapping for consistency across tools.
```rust
let pow_algo_val = match algo {
    0 => PowAlgos::Sha3x.into(),    // SHA3X
    1 => PowAlgos::Randomxm.into(), // RANDOMXM
    2 => PowAlgos::Randomxt.into(), // RANDOMXT
    _ => continue,
};
```
Inconsistent algorithm mapping in analysis tool
This tool uses yet another mapping (0 => SHA3X, 1 => RANDOMXM, 2 => RANDOMXT) that differs from the first tool, creating further API inconsistency.
Update this tool to use the same mapping as the first tool for consistency:
```diff
 let pow_algo_val = match algo {
-    0 => PowAlgos::Sha3x.into(),    // SHA3X
-    1 => PowAlgos::Randomxm.into(), // RANDOMXM
+    0 => PowAlgos::Randomxm.into(), // RANDOMXM
+    1 => PowAlgos::Sha3x.into(),    // SHA3X
     2 => PowAlgos::Randomxt.into(), // RANDOMXT
     _ => continue,
 };
```

Also update the input schema and algorithm name mappings for consistency.
🤖 Prompt for AI Agents
In applications/minotari_mcp_node/src/tools/mining_tools.rs around lines 489 to
494, the algorithm mapping used in the match statement is inconsistent with the
first tool, causing API inconsistency. Update the match arms to use the same
algorithm-to-value mapping as the first tool, ensuring the numeric values
correspond to the correct PowAlgos variants. Additionally, revise the input
schema and any algorithm name mappings in this file to align with the first
tool's definitions for consistency across the codebase.
```rust
let algo_name = match algo {
    0 => "SHA3X",
    1 => "MONERO",
    2 => "TARI",
    _ => "UNKNOWN",
};
```
🛠️ Refactor suggestion
Inconsistent algorithm name mapping
The algorithm names used here (SHA3X, MONERO, TARI) don't match the names used elsewhere in the file (SHA3X, RANDOMX_M, RANDOMX_T).
Standardize algorithm names across all tools. Use the same naming convention:
```diff
 let algo_name = match algo {
     0 => "SHA3X",
-    1 => "MONERO",
-    2 => "TARI",
+    1 => "RANDOMX_M",
+    2 => "RANDOMX_T",
     _ => "UNKNOWN",
 };
```

And update the response mapping at lines 592-597 to match:

```diff
 "algo_name": match requested_algo {
     0 => "SHA3X",
-    1 => "MONERO",
-    2 => "TARI",
+    1 => "RANDOMX_M",
+    2 => "RANDOMX_T",
     _ => "UNKNOWN",
 }
```
In applications/minotari_mcp_node/src/tools/mining_tools.rs around lines 510 to
515, the algorithm name mapping uses inconsistent names ("SHA3X", "MONERO",
"TARI") compared to other parts of the file ("SHA3X", "RANDOMX_M", "RANDOMX_T").
Update the match arms to use the standardized names "SHA3X", "RANDOMX_M", and
"RANDOMX_T" respectively. Also, ensure that the response mapping at lines
592-597 is updated to use these same standardized names for consistency.
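All three mapping inconsistencies flagged above come from each tool repeating its own `match` on the raw code. A single shared enum removes the drift. The sketch below follows the first tool's code ordering as the review recommends (0=RANDOMX_M, 1=SHA3X, 2=RANDOMX_T); the enum and method names are illustrative, not from the PR:

```rust
// Sketch of a single source of truth for mining algorithm codes and names,
// so every tool's schema, error message, and match arm stays consistent.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum MiningAlgo {
    RandomXM, // code 0
    Sha3x,    // code 1
    RandomXT, // code 2
}

impl MiningAlgo {
    fn from_code(code: u64) -> Option<Self> {
        match code {
            0 => Some(Self::RandomXM),
            1 => Some(Self::Sha3x),
            2 => Some(Self::RandomXT),
            _ => None,
        }
    }

    fn name(self) -> &'static str {
        match self {
            Self::RandomXM => "RANDOMX_M",
            Self::Sha3x => "SHA3X",
            Self::RandomXT => "RANDOMX_T",
        }
    }
}

fn main() {
    assert_eq!(MiningAlgo::from_code(0), Some(MiningAlgo::RandomXM));
    assert_eq!(MiningAlgo::from_code(1).unwrap().name(), "SHA3X");
    assert!(MiningAlgo::from_code(3).is_none());
    println!("ok");
}
```

Each tool would then map `MiningAlgo` to the generated `PowAlgos` variant in one place, and build its schema description string from `name()`.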
Actionable comments posted: 4
♻️ Duplicate comments (32)
applications/minotari_mcp_common/src/executable_finder.rs (1)

214-240: Avoid executing untrusted binaries during verification

Running an unknown executable just to call `--version` can be abused if a malicious binary named `minotari_node` is present earlier in the search path. Options:

- Skip execution; rely on file name/path & optional checksum.
- Sandbox the call (e.g. spawn with `seccomp`/`create_process` low-priv).
- Make verification opt-in behind an env flag.

At minimum, document the security implications in the function comment.
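The first option can be approximated without any hashing by trusting location rather than behaviour. A minimal std-only sketch — the directory list and helper name are illustrative, not from the PR:

```rust
use std::path::{Path, PathBuf};

// Accept a candidate executable only if it lives under an explicitly trusted
// directory, instead of running `--version` on whatever binary happens to
// shadow `minotari_node` earlier on PATH. Comparison is lexical, by path
// components, so no filesystem access (or process execution) is needed.
fn is_trusted_location(candidate: &Path, trusted_dirs: &[PathBuf]) -> bool {
    trusted_dirs.iter().any(|dir| candidate.starts_with(dir))
}

fn main() {
    let trusted = vec![PathBuf::from("/opt/tari/bin")];
    assert!(is_trusted_location(
        Path::new("/opt/tari/bin/minotari_node"),
        &trusted
    ));
    assert!(!is_trusted_location(Path::new("/tmp/minotari_node"), &trusted));
    println!("ok");
}
```

A checksum check against a known-good digest would be stronger still, but requires distributing the expected hashes.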
applications/minotari_mcp_common/src/security.rs (2)
180-201: Session stats reset on every request

`create_or_update_session` always creates a fresh `Session` with `request_count = 1`, even when a session already exists, discarding history and preventing accurate auditing.

Quick fix:

```diff
-let session = Session {
-    id: session_id,
-    client_ip,
-    created_at: now,
-    last_activity: now,
-    request_count: 1,
-};
-self.sessions.insert(session_id.to_string(), session);
+let entry = self.sessions.entry(session_id.to_string()).or_insert(Session {
+    id: session_id,
+    client_ip,
+    created_at: now,
+    last_activity: now,
+    request_count: 0,
+});
+entry.last_activity = now;
+entry.request_count += 1;
```

237-243: `audit_logger.enabled` ignores the `mcp_audit_logging` flag

`enabled` is true only if a path is supplied, silently disabling audit logging when the CLI sets `--mcp-audit-logging` without a file path. Consider binding `enabled` to a boolean flag and using the optional path solely for file write-out.

applications/minotari_mcp_common/src/server.rs (3)
168-177: `stop` never signals the transport

`stop()` only flips the flag; the `StdioTransport` keeps running and the task remains alive. Store a shutdown handle from the transport or expose a cancellation channel so `stop()` truly terminates the server.

179-183: `is_running` returns the wrong value

The method always mirrors `config.should_start_server()` instead of the live `running` flag, so callers cannot know if the server really started/stopped.

Consider:

```diff
-fn is_running(&self) -> bool {
-    self.config.should_start_server()
-}
+fn is_running(&self) -> bool {
+    // Use an atomic or blocking read; simplest fix:
+    futures::executor::block_on(self.running.read()).clone()
+}
```

Or swap `RwLock<bool>` for an `AtomicBool` to avoid blocking.

250-281: Compile error – `sanitized_arguments` moved twice

`sanitized_arguments` is moved into the `json!` macro (lines 263-266) and then moved again when passed to `execute_tool` (lines 278-280). The second move will not compile.

```diff
-let request_data = json!({
-    "tool": params.name,
-    "arguments": sanitized_arguments
-});
+let request_data = json!({
+    "tool": params.name,
+    "arguments": sanitized_arguments.clone()
+});
```

applications/minotari_mcp_common/src/resources.rs (1)
160-164: Templated-URI pattern is computed incorrectly

`extract_pattern` turns `block/{height}` into `block/*height`, so later `matches_pattern()` will never succeed. Replace every `{…}` segment with a single `*`:

```diff
-fn extract_pattern(&self, uri: &str) -> String {
-    uri.replace('{', "*").replace('}', "")
-}
+fn extract_pattern(&self, uri: &str) -> String {
+    uri.split('/')
+        .map(|part| if part.starts_with('{') && part.ends_with('}') { "*" } else { part })
+        .collect::<Vec<_>>()
+        .join("/")
+}
```

applications/minotari_mcp_common/src/protobuf_reflector.rs (1)
306-343: Compilation failure – undefined `field`

`apply_field_validation` is declared with parameter `_field`, but the body references `field` repeatedly, producing an unresolved identifier error.

```diff
-fn apply_field_validation(&self, _field: &FieldDescriptor, schema: &mut Schema) {
+fn apply_field_validation(&self, field: &FieldDescriptor, schema: &mut Schema) {
```

applications/minotari_mcp_common/src/startup_diagnostics.rs (1)
522-545: `which` is nix-only – prefer `cargo --version` or `command -v`

Running `which` will fail on Windows, incorrectly flagging the build environment. Replace with a cross-platform check:

```diff
-match Command::new("which").arg("cargo").output() {
+match Command::new("cargo").arg("--version").output() {
```

applications/minotari_mcp_common/src/transport.rs (2)
110-118: Include the mandatory `jsonrpc: "2.0"` field and echo back the original `id`

JSON-RPC 2.0 responses MUST contain `{"jsonrpc":"2.0","id":<same-as-request>, ...}`. Currently the struct omits the field and, on error paths, hard-codes `id: null`, breaking client correlation and the spec.

```diff
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct McpResponse {
+    #[serde(default = "default_jsonrpc")]
+    pub jsonrpc: String,
     pub id: Value,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub result: Option<Value>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub error: Option<McpErrorResponse>,
 }
+
+fn default_jsonrpc() -> String {
+    "2.0".to_string()
+}
```

And preserve the request ID in error responses:

```diff
-id: Value::Null,
+id: message.id().cloned().unwrap_or(Value::Null),
```

Also applies to: 230-243

214-246: Extract request ID before handling to properly echo it in error responses

The current implementation loses access to the request ID when handling fails, resulting in `id: null` in error responses.

```diff
 let response = match serde_json::from_str::<McpMessage>(line) {
     Ok(message) => {
+        // Extract ID for error responses
+        let request_id = match &message {
+            McpMessage::ListTools { id } |
+            McpMessage::CallTool { id, .. } |
+            McpMessage::ListResources { id } |
+            McpMessage::ReadResource { id, .. } |
+            McpMessage::ListPrompts { id } |
+            McpMessage::GetPrompt { id, .. } |
+            McpMessage::Ping { id } |
+            McpMessage::Initialize { id, .. } => id.clone(),
+        };
         // Handle the message
-        handler.handle_message(message).await
+        handler.handle_message(message).await.or_else(|e| {
+            // Create error response with proper ID
+            Ok(McpResponse {
+                id: request_id,
+                result: None,
+                error: Some(McpErrorResponse {
+                    code: match &e {
+                        McpError::PermissionDenied(_) => -32000,
+                        McpError::InvalidRequest(_) => -32600,
+                        McpError::ToolNotFound(_) | McpError::ResourceNotFound(_) => -32601,
+                        _ => -32603,
+                    },
+                    message: e.to_string(),
+                    data: None,
+                }),
+            })
+        })
     },
```

applications/minotari_mcp_common/src/process_manager.rs (1)
74-78: `_shutdown_tx` is dropped – supervisor stops instantly

Because the sender is discarded, `shutdown_rx.recv().await` returns `None` immediately, so the supervisor reports `Running` and then exits. Return the sender from `new` (or store it) so callers can signal shutdown.

```diff
-let (_shutdown_tx, shutdown_rx) = mpsc::unbounded_channel();
+let (shutdown_tx, shutdown_rx) = mpsc::unbounded_channel();
 Ok((
     Self {
         process_id: Uuid::new_v4(),
         process_type,
         executable_path,
         args,
         port,
         child: RwLock::new(None),
         status_tx,
+        shutdown_tx: shutdown_tx.clone(),
         shutdown_rx: RwLock::new(Some(shutdown_rx)),
         max_restart_attempts: 3,
         restart_delay_secs: 5,
     },
     status_rx,
+    shutdown_tx,
 ))
```

Also update the struct definition to include the shutdown sender:

```diff
 pub struct ProcessSupervisor {
     process_id: Uuid,
     process_type: ProcessType,
     executable_path: String,
     args: Vec<String>,
     port: u16,
     child: RwLock<Option<Child>>,
     status_tx: mpsc::UnboundedSender<ProcessStatus>,
+    shutdown_tx: mpsc::UnboundedSender<()>,
     shutdown_rx: RwLock<Option<mpsc::UnboundedReceiver<()>>>,
     max_restart_attempts: u32,
     restart_delay_secs: u64,
 }
```

Also applies to: 81-108
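The failure mode is easy to reproduce with std's `mpsc` (standing in for tokio's channels; the helper name is illustrative): dropping the sender disconnects the receiver, so the first `recv()` fails immediately — exactly why the supervisor exits at once:

```rust
use std::sync::mpsc;

// Bug shape: the sender is dropped as soon as this function returns, so the
// receiver is disconnected before anyone can signal shutdown.
fn make_disconnected_receiver() -> mpsc::Receiver<()> {
    let (_shutdown_tx, shutdown_rx) = mpsc::channel::<()>();
    shutdown_rx // `_shutdown_tx` dropped here
}

fn main() {
    // recv() errors immediately - a supervisor looping on this channel would
    // treat it as a shutdown signal and exit right away.
    assert!(make_disconnected_receiver().recv().is_err());

    // Fix shape: keep the sender alive (e.g. return it from the constructor)
    // so shutdown only happens when it is explicitly signalled.
    let (shutdown_tx, shutdown_rx) = mpsc::channel::<()>();
    shutdown_tx.send(()).unwrap(); // deliberate shutdown signal
    assert!(shutdown_rx.recv().is_ok());
    println!("ok");
}
```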
applications/minotari_mcp_common/src/grpc_error_mapper.rs (1)
33-33: Incorrect import path for McpError

The import path `use crate::McpError;` is incorrect. Based on the crate structure, `McpError` is defined in the `error` module.

```diff
-use crate::McpError;
+use crate::error::McpError;
```

applications/minotari_mcp_common/src/health_monitor.rs (1)
97-104: Use proper URL parsing instead of string splitting

The current string splitting approach is fragile and won't handle all URL formats correctly (e.g., URLs with authentication, IPv6 addresses, etc.).

```diff
-// Extract host and port from endpoint
-let endpoint_parts: Vec<&str> = self.endpoint.splitn(2, "://").collect();
-let address = if endpoint_parts.len() == 2 {
-    endpoint_parts[1] // Remove scheme (http://, https://)
-} else {
-    &self.endpoint
-};
+// Parse the endpoint URL properly
+let address = if let Ok(url) = url::Url::parse(&self.endpoint) {
+    match (url.host_str(), url.port_or_known_default()) {
+        (Some(host), Some(port)) => format!("{}:{}", host, port),
+        _ => return HealthCheckResult {
+            status: HealthStatus::Unhealthy,
+            last_check: Instant::now(),
+            response_time: None,
+            error_message: Some("Invalid endpoint URL format".to_string()),
+            consecutive_failures: 1,
+        },
+    }
+} else {
+    // Assume it's already in host:port format
+    self.endpoint.clone()
+};
```

You'll need to add `url` as a dependency in `Cargo.toml`.

applications/minotari_mcp_common/src/grpc_client_implementations.rs (4)
81-91: Use the converted protobuf request in method implementations

The `_proto_request` variable is created but never used. The conversion logic should be utilized to pass the properly typed protobuf message to the actual gRPC methods. For example, in the `GetBlocks` case, you should use the converted request:

```diff
 "GetBlocks" => {
-    // For GetBlocks, we'll use the converted parameters in a real implementation
-    // For now, delegate to get_tip_info as placeholder
-    self.get_tip_info().await
+    let proto_request = self.conversion_registry.convert(method_name, parameters)?;
+    // TODO: Implement actual GetBlocks call using proto_request
+    self.get_blocks_placeholder().await
 },
```

72-74: Replace panic with error return in deprecated method

Panicking is too harsh for deprecation. The deprecated method should return an appropriate error instead of panicking so callers can handle it gracefully.

```diff
 #[deprecated(note = "Use new() with real client")]
 pub fn new_placeholder() -> Self {
-    // This will panic if called - placeholder is deprecated
-    panic!("Placeholder client deprecated - use real client")
+    // Return a dummy client that will fail on use
+    Self {
+        client: BaseNodeGrpcClient::new(Channel::from_static("http://[::]:1").connect_lazy()),
+        conversion_registry: Arc::new(ConversionRegistry::new()),
+    }
 }
```

247-248: Use the converted protobuf request for wallet methods

Same issue as the node client - the conversion result is not being used.

238-241: Replace panic with error return in wallet deprecated method

Same issue as the node client - panicking is too harsh for deprecation.

```diff
 #[deprecated(note = "Use new() with real client")]
 pub fn new_placeholder() -> Self {
-    // This will panic if called - placeholder is deprecated
-    panic!("Placeholder wallet client deprecated - use real client")
+    // Return a dummy client that will fail on use
+    Self {
+        _client: TariWalletGrpcClient::new(Channel::from_static("http://[::]:1").connect_lazy()),
+        conversion_registry: Arc::new(ConversionRegistry::new()),
+    }
 }
```

applications/minotari_mcp_common/src/schema_generator.rs (1)
287-310: Loss of precision for 64-bit integers

`value.as_f64()` converts JSON numbers to f64 before min/max checks; values larger than 2^53 lose precision and comparisons become wrong. Parse via `as_u64()`/`as_i64()` first and fall back to f64 only when necessary.

applications/minotari_mcp_common/src/cli_integration.rs (1)
306-311: Same redundancy issue as NodeArgumentBuilder

The log config handling has the same redundancy issue - it could be moved inside the main `include_base_args` block at line 294 for consistency and clarity.

applications/minotari_mcp_common/src/auto_registry.rs (2)
262-270: ToolStatus statistics never updated in tool executionAs noted in previous reviews, the
ToolStatusfields (usage_count,error_count,last_used) are initialized here butDynamicGrpcTool::executedoesn't callrecord_usage()to update these statistics, making them meaningless.
636-678: Missing usage statistics recording in execute method

The `execute` method doesn't call `AutoDiscoveryRegistry::record_usage()` to update tool statistics, making the usage tracking infrastructure ineffective. The method should record both successful and failed executions.

However, there's an architectural issue: `DynamicGrpcTool` doesn't have a reference to the `AutoDiscoveryRegistry` to call `record_usage()`. Consider adding a callback or weak reference to enable usage recording.

```diff
 // In DynamicGrpcTool struct, add:
+usage_recorder: Option<Arc<dyn Fn(&str, bool, Option<String>) + Send + Sync>>,

 // In execute method, after determining success/failure:
+if let Some(ref recorder) = self.usage_recorder {
+    recorder(&self.metadata.name, result.is_ok(),
+        result.as_ref().err().map(|e| e.to_string()));
+}
```

applications/minotari_mcp_common/src/grpc_discovery.rs (1)
149-155: Address the method name mismatch issue in `restrict_methods`.

As noted in the past review, the `restrict_methods` function expects full method paths (e.g., `"tari.rpc.Wallet/GetBalance"`), but callers typically provide only short method names (e.g., `"GetBalance"`). This mismatch will cause methods to remain unrestricted when they should be restricted.

The filtering logic in `available_methods()`, `control_methods()`, and `readonly_methods()` uses `full_name` for lookup, but the restriction is likely configured using short names.

Consider normalizing on short method names or accepting both formats:

```diff
-    pub fn restrict_methods(&mut self, method_names: &[String]) {
-        for name in method_names {
-            self.restricted_methods.insert(name.clone());
-        }
+    pub fn restrict_methods<S: AsRef<str>>(&mut self, method_names: &[S]) {
+        for name in method_names {
+            let name = name.as_ref();
+            // Extract short name from full path or use as-is
+            let short_name = name.split('/').last().unwrap_or(name);
+            self.restricted_methods.insert(short_name.to_lowercase());
+        }
     }
```

And update the filtering methods to extract short names before lookup.
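One way to make both sides agree is to canonicalize to a short lowercase key on both the restrict and lookup paths; a self-contained sketch (struct and function names are illustrative, not the actual grpc_discovery API):

```rust
use std::collections::HashSet;

// Hedged sketch: normalize full paths ("tari.rpc.Wallet/GetBalance") and
// short names ("GetBalance") to one canonical key, so restriction and
// lookup match regardless of which form the caller supplied.
fn short_key(name: &str) -> String {
    // Take the segment after the last '/', or the whole name if there is none
    name.rsplit('/').next().unwrap_or(name).to_lowercase()
}

struct Restrictions {
    restricted: HashSet<String>,
}

impl Restrictions {
    fn restrict(&mut self, names: &[&str]) {
        for name in names {
            self.restricted.insert(short_key(name));
        }
    }

    fn is_restricted(&self, full_name: &str) -> bool {
        self.restricted.contains(&short_key(full_name))
    }
}

fn main() {
    let mut r = Restrictions { restricted: HashSet::new() };
    r.restrict(&["GetBalance"]); // configured with a short name
    // Lookup with the full path still matches
    assert!(r.is_restricted("tari.rpc.Wallet/GetBalance"));
    assert!(!r.is_restricted("tari.rpc.Wallet/GetVersion"));
}
```

One caveat with this normalization: short names could collide across services (Wallet and BaseNode both exposing the same method name), so the real code may prefer storing both forms rather than collapsing to the short one.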
applications/minotari_mcp_common/src/health_checker.rs (1)
28-32: Use tokio::sync::RwLock instead of std::sync::RwLock in async code.

The module uses `std::sync::RwLock`, but all operations happen in async tasks. Blocking locks can block the entire Tokio worker thread on contention and cause priority inversion.
221-225: Fix string type mismatch for McpError::invalid_request.

The call to `McpError::invalid_request` passes a `&str` but expects a `String`.

```diff
-return Err(McpError::invalid_request("Parameters must be an object"));
+return Err(McpError::invalid_request("Parameters must be an object".to_string()));
```

applications/minotari_mcp_common/src/process_launcher.rs (7)
92-98: Hard-coded timeout overrides configuration

The health monitor is created with a hard-coded 10-second timeout, ignoring the configured `health_check_config.max_wait_time`. This creates inconsistency between the configuration and actual behavior.

```diff
-let health_monitor = Some(
-    HealthMonitor::new(
-        "launched_process".to_string(),
-        config.health_check_config.grpc_endpoint.clone(),
-    )
-    .with_timeout(Duration::from_secs(10)),
-);
+let health_monitor = Some(
+    HealthMonitor::new(
+        "launched_process".to_string(),
+        config.health_check_config.grpc_endpoint.clone(),
+    )
+    .with_timeout(config.health_check_config.max_wait_time),
+);
```
34-36: `startup_timeout` is never referenced

`LaunchConfig.startup_timeout` is set by the builder but never consulted by `ProcessLauncher`. Either wire it into the launch/health-wait logic or drop the field to avoid misleading configuration knobs.
198-221: Unbounded in-memory log capture

Every stdout/stderr line is pushed onto `output_buffer`, which is never trimmed. A chatty process can exhaust memory or slow the node. Keep only the last n KB/lines, or flush to disk.
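A fixed-capacity line buffer is one simple way to cap this; a hedged sketch (struct and field names are illustrative, standing in for the launcher's `output_buffer`):

```rust
use std::collections::VecDeque;

// Hedged sketch: keep only the most recent `cap` lines, evicting the oldest,
// in place of an unbounded Vec push.
struct BoundedLog {
    lines: VecDeque<String>,
    cap: usize,
}

impl BoundedLog {
    fn new(cap: usize) -> Self {
        Self { lines: VecDeque::with_capacity(cap), cap }
    }

    fn push(&mut self, line: String) {
        if self.lines.len() == self.cap {
            self.lines.pop_front(); // evict the oldest line first
        }
        self.lines.push_back(line);
    }
}

fn main() {
    let mut log = BoundedLog::new(2);
    log.push("a".into());
    log.push("b".into());
    log.push("c".into()); // evicts "a"
    assert_eq!(log.lines.iter().map(String::as_str).collect::<Vec<_>>(), vec!["b", "c"]);
}
```

A byte-size cap or periodic flush to a log file would work equally well; the key property is that memory use no longer grows with process chattiness.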
224-225: Race condition in immediate crash detection

A 100ms sleep is insufficient to reliably detect process crashes, especially on slow systems or under load. Consider using a longer delay or monitoring process exit status asynchronously.

```diff
-tokio::time::sleep(Duration::from_millis(100)).await;
+tokio::time::sleep(Duration::from_millis(500)).await;
```
316-320: "Graceful" shutdown actually sends SIGKILL
`Child::kill()` maps to `SIGKILL` on Unix, yet the log states "Sent SIGTERM". This prevents the process from performing cleanup and makes the two-phase TERM/KILL logic moot (the second kill is redundant). Use `nix::sys::signal::kill(pid, SIGTERM)` (or a platform abstraction) first, then fall back to `SIGKILL`.
368-374: Unnecessary write lock for read-only operation

The `is_running()` method only needs to check process status but takes a write lock unnecessarily. This could block other operations that need the write lock.
464-475: Multiaddr conversion logic is incomplete

The function assumes IP:PORT format but doesn't validate the input. Invalid formats (missing colon, non-numeric port) will produce malformed multiaddrs that could cause runtime failures.
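Parsing through `std::net::SocketAddr` rejects malformed input up front; a hedged sketch (the function name is illustrative, and the `/ip4|ip6/<addr>/tcp/<port>` layout follows the usual multiaddr convention):

```rust
use std::net::{IpAddr, SocketAddr};

// Hedged sketch: validate the IP:PORT endpoint before building a multiaddr
// string, instead of assuming the format is well-formed.
fn to_multiaddr(endpoint: &str) -> Option<String> {
    // SocketAddr parsing rejects missing colons, bad ports, and bad IPs
    let sock: SocketAddr = endpoint.parse().ok()?;
    let proto = match sock.ip() {
        IpAddr::V4(_) => "ip4",
        IpAddr::V6(_) => "ip6",
    };
    Some(format!("/{}/{}/tcp/{}", proto, sock.ip(), sock.port()))
}

fn main() {
    assert_eq!(
        to_multiaddr("127.0.0.1:18142"),
        Some("/ip4/127.0.0.1/tcp/18142".to_string())
    );
    assert_eq!(to_multiaddr("127.0.0.1"), None); // missing port
    assert_eq!(to_multiaddr("host:notaport"), None); // non-numeric port
}
```

Note this only covers numeric IPs; if hostnames must be accepted, a separate `/dns4` branch (and actual resolution policy) would be needed.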
🧹 Nitpick comments (12)
applications/minotari_mcp_common/src/input_sanitizer.rs (1)
227-229: Consider more restrictive path validation for enhanced security

While allowing `/tmp/` paths, consider also validating that paths don't escape the intended working directory using path canonicalization.

```diff
 if sanitized_path.starts_with('/') && !sanitized_path.starts_with("/tmp/") {
     return Err(McpError::invalid_request("Absolute paths not allowed (except /tmp/)"));
 }
+
+// Validate /tmp/ paths don't escape via symlinks
+if sanitized_path.starts_with("/tmp/") {
+    if let Ok(canonical) = std::fs::canonicalize(&sanitized_path) {
+        if !canonical.starts_with("/tmp/") {
+            return Err(McpError::invalid_request("Path escapes /tmp/ via symlinks"));
+        }
+    }
+}
```

applications/minotari_mcp_common/src/process_manager.rs (1)
269-280: Implement proper graceful shutdown with SIGTERM before SIGKILL

The current implementation immediately uses `kill()`, which sends SIGKILL, not allowing graceful shutdown. Consider sending SIGTERM first. On Unix systems, you could use:

```rust
#[cfg(unix)]
{
    use nix::sys::signal::{self, Signal};
    use nix::unistd::Pid;

    if let Some(pid) = child.id() {
        // Try SIGTERM first for graceful shutdown
        signal::kill(Pid::from_raw(pid as i32), Signal::SIGTERM).ok();

        // Wait for graceful shutdown
        match tokio::time::timeout(Duration::from_secs(5), child.wait()).await {
            Ok(Ok(_)) => return Ok(()),
            _ => {
                // Force kill if still running
                child.kill().await.ok();
            },
        }
    }
}
```

applications/minotari_mcp_common/src/cli_integration.rs (2)
204-210: Minor redundancy in log config handling

The log config section checks `include_base_args` again, but this check is redundant since it's already within the base arguments section. Consider moving this inside the existing `if self.include_base_args` block at line 191.

```diff
 // Add base arguments only if requested
 if self.include_base_args {
     args.push("--base-path".to_string());
     args.push(self.config.base_path);
     args.push("--config".to_string());
     args.push(self.config.config_path);

     // Add network if specified
     if let Some(network) = self.config.network {
         args.push("--network".to_string());
         args.push(network);
     }
+
+    // Add log config if specified
+    if let Some(log_config) = self.config.log_config {
+        args.push("--log-config".to_string());
+        args.push(log_config.to_string_lossy().to_string());
+    }
 }
-
-// Add log config if specified
-if self.include_base_args {
-    if let Some(log_config) = self.config.log_config {
-        args.push("--log-config".to_string());
-        args.push(log_config.to_string_lossy().to_string());
-    }
-}
```
358-360: Potential performance issue with port scanning

The `find_available_port` function could be slow if `base_port` is high, as it iterates through all possible ports up to `u16::MAX`. Consider adding a reasonable upper limit or timeout.

```diff
 pub fn find_available_port(base_port: u16) -> Option<u16> {
-    (base_port..=u16::MAX).find(|&port| std::net::TcpListener::bind(("127.0.0.1", port)).is_ok())
+    let max_port = base_port.saturating_add(1000);
+    (base_port..=max_port).find(|&port| std::net::TcpListener::bind(("127.0.0.1", port)).is_ok())
 }
```

applications/minotari_mcp_common/src/auto_registry.rs (1)
318-318: Parameter extraction not implemented

The TODO comment indicates parameter extraction from schema is not implemented, which means tools won't have proper parameter documentation for users.
Would you like me to help implement the parameter extraction from JSON schema, or should this be tracked as a separate issue?
applications/minotari_mcp_common/src/grpc_discovery.rs (2)
125-131: Consider performance optimization for method filtering.

The `available_methods()` function performs a linear scan and string comparison for each method. For large numbers of methods, consider using a more efficient lookup structure.

```diff
-    pub fn available_methods(&self) -> Vec<&GrpcMethodInfo> {
-        self.methods
-            .iter()
-            .filter(|m| !self.restricted_methods.contains(&m.full_name))
-            .collect()
-    }
```

Consider storing method indices or using a different data structure if performance becomes an issue with hundreds of methods.
706-712: Validate JSON schema generation approach.

The schema generation functions return hardcoded
`serde_json::Value` objects. While functional, this approach has maintenance overhead as schemas need manual updates when protobuf definitions change.

Consider whether automatic schema generation from protobuf definitions would be more maintainable in the long term, though the current approach provides explicit control over the schemas exposed to MCP clients.

applications/minotari_mcp_common/src/error.rs (1)
applications/minotari_mcp_common/src/error.rs (1)
111-121: Improve semantic clarity of connection error constructors.The
connection_failed,service_unavailable, andservice_errormethods all wrap their messages inToolExecutionFailedvariants, which may confuse error handling logic that checks for specific error types.Consider either:
- Creating dedicated enum variants for these error types, or
- Using more specific existing variants like `TransportError` or `ServerError`

```diff
 pub fn connection_failed(msg: impl Into<String>) -> Self {
-    Self::ToolExecutionFailed(format!("Connection failed: {}", msg.into()))
+    Self::TransportError(format!("Connection failed: {}", msg.into()))
 }

 pub fn service_unavailable(msg: impl Into<String>) -> Self {
-    Self::ToolExecutionFailed(format!("Service unavailable: {}", msg.into()))
+    Self::ServerError(format!("Service unavailable: {}", msg.into()))
 }
```

applications/minotari_mcp_common/src/health_checker.rs (1)
266-301: Implement proper tonic_health integration.

The connectivity checks are currently placeholders that simulate health checks. Consider implementing actual tonic_health protocol integration for more accurate health monitoring.
```rust
// Example of proper tonic_health integration
use tonic_health::pb::health_client::HealthClient;
use tonic_health::pb::HealthCheckRequest;

async fn check_health_protocol(&self, channel: &Channel, service: &str) -> Result<(), Status> {
    let mut client = HealthClient::new(channel.clone());
    let request = HealthCheckRequest {
        service: service.to_string(),
    };
    client.check(request).await?;
    Ok(())
}
```

applications/minotari_mcp_common/src/protobuf_integration.rs (1)
188-218: Complete placeholder implementations.

Several methods return empty placeholders. Consider implementing these or creating tracking issues for future development.
Do you want me to generate implementation stubs for these methods or open tracking issues for:
- `extract_validation_rules`
- `generate_parameter_examples`
- `document_parameters`

applications/minotari_mcp_common/src/lib.rs (2)
58-58: Document reasoning for disabled module

The commented module `protobuf_integration` mentions "API compatibility fixes" but lacks detail about the specific issues or timeline for resolution.

```diff
-// pub mod protobuf_integration; // Disabled pending API compatibility fixes
+// TODO: Re-enable protobuf_integration module once API compatibility issues with
+// protobuf reflection are resolved (tracked in issue #XXXX)
+// pub mod protobuf_integration;
```
150-151: Inconsistent commenting style for disabled exports

The commented exports use inconsistent formatting and should align with the module comment style for clarity.

```diff
-// Export all macros for public use
-// pub use protobuf_integration::{ReflectiveAutoDiscovery, EnhancedToolMetadata, ToolDocumentation, OpenApiSpec}; //
-// Disabled pending API compatibility fixes
+// TODO: Re-enable protobuf_integration exports once module is restored
+// pub use protobuf_integration::{ReflectiveAutoDiscovery, EnhancedToolMetadata, ToolDocumentation, OpenApiSpec};
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (35)
- applications/minotari_mcp_common/src/auto_registry.rs (1 hunks)
- applications/minotari_mcp_common/src/cli_integration.rs (1 hunks)
- applications/minotari_mcp_common/src/config.rs (1 hunks)
- applications/minotari_mcp_common/src/connection_manager.rs (1 hunks)
- applications/minotari_mcp_common/src/conversion_registry_factory.rs (1 hunks)
- applications/minotari_mcp_common/src/error.rs (1 hunks)
- applications/minotari_mcp_common/src/executable_finder.rs (1 hunks)
- applications/minotari_mcp_common/src/grpc_client_implementations.rs (1 hunks)
- applications/minotari_mcp_common/src/grpc_discovery.rs (1 hunks)
- applications/minotari_mcp_common/src/grpc_error_mapper.rs (1 hunks)
- applications/minotari_mcp_common/src/grpc_executor.rs (1 hunks)
- applications/minotari_mcp_common/src/health_checker.rs (1 hunks)
- applications/minotari_mcp_common/src/health_monitor.rs (1 hunks)
- applications/minotari_mcp_common/src/input_sanitizer.rs (1 hunks)
- applications/minotari_mcp_common/src/lib.rs (1 hunks)
- applications/minotari_mcp_common/src/method_implementations.rs (1 hunks)
- applications/minotari_mcp_common/src/parameter_converter.rs (1 hunks)
- applications/minotari_mcp_common/src/process_launcher.rs (1 hunks)
- applications/minotari_mcp_common/src/process_manager.rs (1 hunks)
- applications/minotari_mcp_common/src/prompts.rs (1 hunks)
- applications/minotari_mcp_common/src/protobuf_integration.rs (1 hunks)
- applications/minotari_mcp_common/src/protobuf_reflector.rs (1 hunks)
- applications/minotari_mcp_common/src/protobuf_reflector_simple.rs (1 hunks)
- applications/minotari_mcp_common/src/resources.rs (1 hunks)
- applications/minotari_mcp_common/src/response_converter.rs (1 hunks)
- applications/minotari_mcp_common/src/schema_generator.rs (1 hunks)
- applications/minotari_mcp_common/src/security.rs (1 hunks)
- applications/minotari_mcp_common/src/server.rs (1 hunks)
- applications/minotari_mcp_common/src/startup_diagnostics.rs (1 hunks)
- applications/minotari_mcp_common/src/stdio_transport.rs (1 hunks)
- applications/minotari_mcp_common/src/tool_macros.rs (1 hunks)
- applications/minotari_mcp_common/src/tool_metadata.rs (1 hunks)
- applications/minotari_mcp_common/src/tools.rs (1 hunks)
- applications/minotari_mcp_common/src/transport.rs (1 hunks)
- applications/minotari_mcp_node/Cargo.toml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- applications/minotari_mcp_common/src/config.rs
🚧 Files skipped from review as they are similar to previous changes (9)
- applications/minotari_mcp_node/Cargo.toml
- applications/minotari_mcp_common/src/tool_macros.rs
- applications/minotari_mcp_common/src/stdio_transport.rs
- applications/minotari_mcp_common/src/conversion_registry_factory.rs
- applications/minotari_mcp_common/src/protobuf_reflector_simple.rs
- applications/minotari_mcp_common/src/tools.rs
- applications/minotari_mcp_common/src/prompts.rs
- applications/minotari_mcp_common/src/response_converter.rs
- applications/minotari_mcp_common/src/tool_metadata.rs
🧰 Additional context used
🧠 Learnings (21)
📓 Common learnings
Learnt from: MCozhusheck
PR: tari-project/tari#7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
applications/minotari_mcp_common/src/executable_finder.rs (3)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: leet4tari
PR: #7206
File: buildtools/docker_rig/tor.Dockerfile:0-0
Timestamp: 2025-06-18T08:09:27.943Z
Learning: In the Tari project's Docker builds (buildtools/docker_rig/tor.Dockerfile), APK package version constraints like tor>${TOR_VERSION} should remain unquoted despite general Alpine Linux documentation suggesting quoting. Adding quotes around the version constraint breaks the Docker build process in their specific build environment.
Learnt from: SWvheerden
PR: #7111
File: integration_tests/build.rs:7-12
Timestamp: 2025-06-02T07:28:49.228Z
Learning: In build scripts (build.rs files), using .unwrap() on environment variables is acceptable and preferred when you want the build to fail if the required environment variables are not set, rather than using fallbacks.
applications/minotari_mcp_common/src/startup_diagnostics.rs (3)
Learnt from: SWvheerden
PR: #7111
File: integration_tests/build.rs:7-12
Timestamp: 2025-06-02T07:28:49.228Z
Learning: In build scripts (build.rs files), using .unwrap() on environment variables is acceptable and preferred when you want the build to fail if the required environment variables are not set, rather than using fallbacks.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #7189
File: integration_tests/tests/steps/wallet_cli_steps.rs:464-467
Timestamp: 2025-06-07T06:49:32.174Z
Learning: In integration test code, using panic! for setup validation and prerequisite checking is acceptable and often preferred, as it provides immediate feedback when test scenarios are incorrectly configured rather than allowing tests to continue in invalid states.
applications/minotari_mcp_common/src/resources.rs (1)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
applications/minotari_mcp_common/src/grpc_error_mapper.rs (2)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #6963
File: common_sqlite/src/error.rs:88-92
Timestamp: 2025-05-23T07:49:57.349Z
Learning: In the StorageError enum in common_sqlite/src/error.rs, the HexError variant should keep the manual From implementation rather than using #[from] attribute, as it stores a String representation of the error rather than the HexError type itself.
applications/minotari_mcp_common/src/transport.rs (3)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: SolfataraEmit
PR: #6994
File: docs/src/API_GRPC_Explanation.md:236-244
Timestamp: 2025-04-28T11:41:15.722Z
Learning: The minotari_console_wallet reports a fourth field timelocked_balance in the terminal output for the getBalance method, which should be reflected in the documentation for accuracy. The wallet.proto file confirms that timelocked_balance is defined as the fourth field in the GetBalanceResponse message.
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
applications/minotari_mcp_common/src/server.rs (2)
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
applications/minotari_mcp_common/src/process_manager.rs (3)
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.
applications/minotari_mcp_common/src/input_sanitizer.rs (1)
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
applications/minotari_mcp_common/src/schema_generator.rs (1)
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
applications/minotari_mcp_common/src/grpc_discovery.rs (1)
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
applications/minotari_mcp_common/src/grpc_client_implementations.rs (5)
Learnt from: SWvheerden
PR: #7301
File: clients/rust/base_node_wallet_client/src/client/http.rs:70-106
Timestamp: 2025-07-11T06:07:36.367Z
Learning: In clients/rust/base_node_wallet_client/src/client/http.rs, the HTTP client server selection logic is intentionally designed to try the local API once and then fallback to the seed server permanently without retry mechanisms. This is the preferred behavior - they want to use local API if available, otherwise use seed server, without continuously retrying the local API.
Learnt from: SolfataraEmit
PR: #6994
File: docs/src/API_GRPC_Explanation.md:236-244
Timestamp: 2025-04-28T11:41:15.722Z
Learning: The minotari_console_wallet reports a fourth field timelocked_balance in the terminal output for the getBalance method, which should be reflected in the documentation for accuracy. The wallet.proto file confirms that timelocked_balance is defined as the fourth field in the GetBalanceResponse message.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
Learnt from: MCozhusheck
PR: #7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
applications/minotari_mcp_common/src/cli_integration.rs (2)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: SWvheerden
PR: #7111
File: integration_tests/build.rs:7-12
Timestamp: 2025-06-02T07:28:49.228Z
Learning: In build scripts (build.rs files), using .unwrap() on environment variables is acceptable and preferred when you want the build to fail if the required environment variables are not set, rather than using fallbacks.
applications/minotari_mcp_common/src/health_monitor.rs (3)
Learnt from: MCozhusheck
PR: #7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.
applications/minotari_mcp_common/src/parameter_converter.rs (1)
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
applications/minotari_mcp_common/src/error.rs (2)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #6963
File: common_sqlite/src/error.rs:88-92
Timestamp: 2025-05-23T07:49:57.349Z
Learning: In the StorageError enum in common_sqlite/src/error.rs, the HexError variant should keep the manual From implementation rather than using #[from] attribute, as it stores a String representation of the error rather than the HexError type itself.
applications/minotari_mcp_common/src/protobuf_integration.rs (2)
Learnt from: hansieodendaal
PR: #6963
File: common_sqlite/src/error.rs:88-92
Timestamp: 2025-05-23T07:49:57.349Z
Learning: In the StorageError enum in common_sqlite/src/error.rs, the HexError variant should keep the manual From implementation rather than using #[from] attribute, as it stores a String representation of the error rather than the HexError type itself.
Learnt from: hansieodendaal
PR: #7189
File: integration_tests/tests/steps/wallet_cli_steps.rs:464-467
Timestamp: 2025-06-07T06:49:32.174Z
Learning: In integration test code, using panic! for setup validation and prerequisite checking is acceptable and often preferred, as it provides immediate feedback when test scenarios are incorrectly configured rather than allowing tests to continue in invalid states.
applications/minotari_mcp_common/src/health_checker.rs (5)
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.
applications/minotari_mcp_common/src/process_launcher.rs (6)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
applications/minotari_mcp_common/src/connection_manager.rs (3)
Learnt from: hansieodendaal
PR: #6921
File: comms/core/src/connectivity/manager.rs:470-476
Timestamp: 2025-04-03T15:16:27.829Z
Learning: In the Tari codebase's connectivity manager, connections with uncertain state (like timed-out disconnects) are deliberately kept in the pool rather than forcibly removed. The connection pool refresh function runs on a regular interval, which handles any inconsistencies between the actual connection state and how it's tracked in the pool.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In comms/core/src/connectivity/manager.rs, the disconnect_if_unused call uses parameters (0, 0) intentionally to be conservative and avoid terminating connections that might be in use. This approach is used while troubleshooting inbound connection drop issues to eliminate the disconnect logic as a potential cause.
applications/minotari_mcp_common/src/grpc_executor.rs (1)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
🧬 Code Graph Analysis (5)
applications/minotari_mcp_common/src/resources.rs (5)
applications/minotari_mcp_node/src/resources/block_info.rs (7)
uri (25-27), name (29-31), description (33-35), mime_type (37-39), read (58-64), supports_templating (41-43), resolve_template (45-56)
applications/minotari_mcp_wallet/src/resources/transaction_info.rs (3)
uri (41-43), mime_type (53-55), read (97-99)
applications/minotari_mcp_wallet/src/resources/payment_info.rs (3)
uri (39-41), mime_type (51-53), read (128-130)
applications/minotari_mcp_common/src/error.rs (2)
invalid_request (83-85), resource_not_found (91-93)
applications/minotari_mcp_common/src/server.rs (1)
new (79-86)
applications/minotari_mcp_common/src/transport.rs (3)
applications/minotari_mcp_common/src/stdio_transport.rs (3)
serde_json (213-213), serde_json (221-221), handle_message (342-348)
applications/minotari_mcp_common/src/server.rs (2)
handle_message (200-215), new (79-86)
applications/minotari_mcp_common/src/error.rs (1)
invalid_request (83-85)
applications/minotari_mcp_common/src/auto_registry.rs (5)
applications/minotari_mcp_common/src/grpc_discovery.rs (3)
base_node_methods (170-411), wallet_methods (415-701), default (163-165)
applications/minotari_mcp_common/src/tool_metadata.rs (9)
new (307-314), new (469-492), category (504-507), from (175-190), tags (524-527), description (51-59), description (132-150), description (499-502), default (454-456)
applications/minotari_mcp_common/src/grpc_executor.rs (1)
is_healthy (550-552)
applications/minotari_mcp_common/src/grpc_error_mapper.rs (1)
default (482-484)
applications/minotari_mcp_common/src/schema_generator.rs (1)
default (498-501)
applications/minotari_mcp_common/src/health_monitor.rs (2)
applications/minotari_mcp_common/src/health_checker.rs (5)
new (204-211), check_connectivity (267-293), drop (442-453), check_health (175-175), check_health (380-394)
applications/minotari_mcp_common/src/error.rs (1)
server_error (107-109)
applications/minotari_mcp_common/src/error.rs (4)
applications/minotari_mcp_common/src/stdio_transport.rs (2)
serde_json (213-213), serde_json (221-221)
applications/minotari_mcp_common/src/transport.rs (1)
serde_json (215-215)
applications/minotari_mcp_common/src/grpc_client_implementations.rs (1)
from (46-48)
applications/minotari_mcp_common/src/parameter_converter.rs (1)
from (64-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ledger build tests
- GitHub Check: ci
🔇 Additional comments (60)
applications/minotari_mcp_common/src/executable_finder.rs (2)
67-112: Comprehensive executable discovery strategy
The multi-strategy approach for finding executables is well-designed, with clear fallbacks and informative error messages that help users troubleshoot missing executables.
171-211: Solid executable validation with proper permission checks
The validation includes appropriate existence, file type, and Unix permission checks. The decision to continue on verification failure with a warning is reasonable for robustness.
applications/minotari_mcp_common/src/security.rs (1)
101-178: Well-designed security permission checking
The permission check implementation properly enforces local-only access, applies rate limiting, validates permission levels, and includes comprehensive audit logging for both success and failure cases.
applications/minotari_mcp_common/src/server.rs (1)
78-124: Well-implemented builder pattern
The McpServerBuilder provides a clean fluent interface for constructing the server with proper validation in the build() method.
applications/minotari_mcp_common/src/resources.rs (2)
33-64: Well-designed resource trait with good defaults
The McpResource trait provides a clean async interface with sensible default implementations for templating support. The error handling using McpError types is appropriate.
214-257: Useful macro for creating static resources
The static_resource! macro provides a convenient way to create static JSON resources with proper async trait implementation and clean encapsulation.
applications/minotari_mcp_common/src/method_implementations.rs (3)
64-111: Robust parameter validation in GetBlocksConverter
The implementation properly handles optional parameters, validates array elements, and provides detailed error messages with element indices. The fallback to empty heights for latest blocks is a good design choice.
149-201: Good transition to registry-based converter registration
The move from factory functions to registry-based registration is well-designed. The deprecation warnings clearly guide users to the new approach while maintaining backward compatibility.
209-247: Comprehensive test coverage
The tests properly cover both success and failure scenarios, including parameter validation and edge cases like empty parameters defaulting to empty heights.
applications/minotari_mcp_common/src/input_sanitizer.rs (1)
329-347: Regex compilation approach is appropriate
Using expect() for compiling hardcoded regex patterns is acceptable since these patterns are known to be valid at compile time.
applications/minotari_mcp_common/src/grpc_error_mapper.rs (3)
35-124: Excellent error context design.
The ErrorContext struct and related enums provide comprehensive error classification with user-friendly messaging, severity levels, and actionable recommendations. The color coding system will enhance UI presentation.
125-485: Comprehensive and well-structured gRPC error mapping.
The GrpcErrorMapper provides thorough mapping of all gRPC status codes to detailed error contexts with user-friendly messages, actionable recommendations, and proper documentation links. The error ID generation and metadata extraction enhance debugging capabilities.
487-598: Well-designed utility functions and comprehensive tests.
The error_utils module provides context-specific error handling for wallet and node scenarios with appropriate severity levels and recommendations. The unit tests effectively verify error mapping, MCP error conversion, and response generation.
applications/minotari_mcp_common/src/parameter_converter.rs (4)
35-67: Excellent error handling design.
The ConversionError enum provides comprehensive error variants for all parameter conversion failure modes with detailed context. The From<ConversionError> implementation properly integrates with the MCP error system.
69-141: Clean trait design and effective registry implementation.
The ParameterConverter trait provides a clean async interface for method-specific conversion. The ConversionRegistry effectively manages converters with proper use of tokio::task::block_in_place for async/sync bridging.
143-255: Comprehensive parameter extraction utilities.
The JsonParameterExtractor trait provides excellent helper methods for extracting typed parameters from JSON with detailed error reporting. The implementation covers all common parameter types with both required and optional variants.
257-281: Adequate test coverage for parameter extraction.
The unit tests effectively verify the JsonParameterExtractor functionality, covering both successful parameter extraction and error cases for missing parameters.
applications/minotari_mcp_common/src/health_monitor.rs (3)
36-59: Well-designed health status representation.
The HealthStatus enum provides comprehensive service states, and HealthCheckResult captures detailed metadata including timing, errors, and failure counts for effective health monitoring.
130-234: Robust health checking with proper retry logic.
The gRPC health check implementation and wait_for_healthy method provide robust service monitoring with exponential backoff, appropriate timeouts, and comprehensive logging for debugging.
236-296: Service-specific configuration and adequate testing.
The ServiceHealthMonitors factory provides appropriate service-specific configurations with different timeouts for base node and wallet services. Unit tests effectively verify core functionality including error scenarios.
applications/minotari_mcp_common/src/grpc_client_implementations.rs (1)
113-387: Appropriate placeholder implementations with one real example.
The method implementations provide reasonable placeholder responses for development. The get_tip_info method demonstrates the proper pattern for real gRPC integration that should be followed for other methods as they're implemented.
applications/minotari_mcp_common/src/schema_generator.rs (3)
35-102: Excellent schema infrastructure design.
The SchemaError enum provides comprehensive error variants, and SchemaGenerator offers a clean interface for schema management and validation with proper integration with ServiceDiscovery.
113-216: Comprehensive schema generation with proper conversions.
The MCP tool schema generation and OpenAPI documentation provide excellent conversion from gRPC methods to user-friendly formats with proper metadata inclusion and comprehensive API documentation.
217-680: Thorough validation logic and comprehensive utilities.
The JSON schema validation logic is comprehensive, covering all data types with proper error reporting. The utility functions provide excellent reusable schema components, and the test coverage effectively verifies the core functionality.
applications/minotari_mcp_common/src/cli_integration.rs (5)
29-29: Good use of targeted clippy allow directive
The #![allow(clippy::vec_init_then_push)] is appropriately placed since the builder patterns in this module legitimately use this pattern for clarity and fluent API design.
34-112: Well-designed configuration and builder pattern
The LaunchCliConfig struct and CliConfigBuilder follow good Rust patterns with clear field documentation and sensible defaults. The builder pattern implementation is clean and follows standard conventions.
114-124: Good trait abstraction for CLI configuration
The CliConfigExtractor trait provides a clean abstraction for different CLI implementations to provide launch configurations and extract arguments. The method signatures are well-designed.
378-382: Standard Default implementation
Clean and appropriate Default implementation for the builder.
384-464: Comprehensive test coverage
Excellent test coverage for all the main functionality including builders, utilities, and validation functions. The test cases are well-structured and cover both positive and negative scenarios.
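The builder-with-validation shape praised above can be sketched in a few lines. This is a hypothetical, dependency-free illustration: the field names, the default port, and the error type are stand-ins, not the actual `LaunchCliConfig`/`CliConfigBuilder` API from the PR.

```rust
// Illustrative sketch of a builder that defers validation to build(),
// mirroring the pattern the review describes. Names and defaults are
// assumptions, not the real Tari API.
#[derive(Debug, Clone, PartialEq)]
struct LaunchCliConfig {
    executable: String,
    grpc_port: u16,
    extra_args: Vec<String>,
}

#[derive(Default)]
struct CliConfigBuilder {
    executable: Option<String>,
    grpc_port: Option<u16>,
    extra_args: Vec<String>,
}

impl CliConfigBuilder {
    fn executable(mut self, exe: &str) -> Self {
        self.executable = Some(exe.to_string());
        self
    }

    fn grpc_port(mut self, port: u16) -> Self {
        self.grpc_port = Some(port);
        self
    }

    fn arg(mut self, a: &str) -> Self {
        self.extra_args.push(a.to_string());
        self
    }

    // Validation happens at build time: missing required fields become
    // errors instead of panics mid-construction.
    fn build(self) -> Result<LaunchCliConfig, String> {
        Ok(LaunchCliConfig {
            executable: self.executable.ok_or("executable is required")?,
            grpc_port: self.grpc_port.unwrap_or(18142), // assumed default
            extra_args: self.extra_args,
        })
    }
}

fn main() {
    let cfg = CliConfigBuilder::default()
        .executable("minotari_node")
        .grpc_port(18142)
        .arg("--non-interactive")
        .build()
        .unwrap();
    println!("{} on port {}", cfg.executable, cfg.grpc_port);
    assert!(CliConfigBuilder::default().build().is_err());
}
```

The key design choice is that each setter consumes and returns `self`, keeping the call chain fluent, while `build()` is the single place where required-field checks live.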
applications/minotari_mcp_common/src/auto_registry.rs (5)
23-44: Well-documented module with appropriate imports
Good module documentation explaining the auto-discovery system purpose. The imports are comprehensive and well-organized.
46-138: Well-designed configuration structures
The configuration structs and enums are thoughtfully designed with comprehensive options for auto-discovery behavior, server types, tool overrides, and status tracking. The ServerType::name() method provides clean string representation.
139-211: Solid initialization and constructor design
The dual constructor pattern (with/without executor) provides good flexibility. The initialization method has a clear flow: discover → filter → generate → log. Error handling for failed tool generation is appropriate with warning logs.
476-576: Well-implemented registry query methods
The query methods provide comprehensive access to tools with proper health filtering and statistics aggregation. The get_healthy_tools method has robust health checking logic.
681-735: Reasonable defaults and basic test coverage
The AutoDiscoveryConfig::default() provides safe defaults (control disabled). The tests cover core functionality including the risk level determination heuristics. Test coverage could be expanded but covers essential functionality.
applications/minotari_mcp_common/src/grpc_discovery.rs (1)
2035-2036: Regex patterns for hex values are safe as implemented
These patterns are properly anchored (^…$), use a simple character class with a single quantifier, and do not introduce nested or ambiguous quantifiers, so they correctly validate hex-encoded strings without risk of catastrophic backtracking or unexpected matches. No changes required here.
applications/minotari_mcp_common/src/error.rs (3)
134-148: Validate JSON-RPC error code mappings.
The error code mapping follows the JSON-RPC 2.0 specification, but several different error types map to the same code (-32603). This is compliant but may reduce diagnostic precision.
The mapping correctly follows JSON-RPC 2.0 standards:
- -32700: Parse error (serialization)
- -32600: Invalid Request
- -32601: Method not found
- -32603: Internal error (covers most application errors)
This is a reasonable approach for MCP error reporting.
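The mapping summarized above is essentially one `match` over the error enum. The sketch below uses illustrative variant names (the actual `McpError` variants in the PR may differ); only the four JSON-RPC codes are taken from the review.

```rust
// Minimal sketch of the JSON-RPC 2.0 code mapping described above.
// Variant names are hypothetical stand-ins for the real McpError enum.
#[derive(Debug)]
enum McpError {
    Serialization(String),
    InvalidRequest(String),
    MethodNotFound(String),
    Internal(String),
}

fn json_rpc_code(err: &McpError) -> i32 {
    match err {
        McpError::Serialization(_) => -32700,  // Parse error
        McpError::InvalidRequest(_) => -32600, // Invalid Request
        McpError::MethodNotFound(_) => -32601, // Method not found
        McpError::Internal(_) => -32603,       // Internal error (catch-all)
    }
}

fn main() {
    assert_eq!(json_rpc_code(&McpError::Serialization("bad json".into())), -32700);
    assert_eq!(json_rpc_code(&McpError::MethodNotFound("tools/foo".into())), -32601);
    assert_eq!(json_rpc_code(&McpError::Internal("db".into())), -32603);
    println!("ok");
}
```

Collapsing many application errors into -32603 is what the review flags as the precision trade-off: clients can still branch on the code, but need the error message (or a structured `data` field) to distinguish internal failure modes.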
150-205: Excellent JSON-RPC error formatting with helpful context.
The to_json_rpc_error method provides rich error information including timestamps, error types, and context-specific help messages. This will greatly improve the debugging experience for MCP clients.
The structured error responses with helpful guidance (e.g., "Use 'tools/list' to see available tools") follow good API design practices and will improve developer experience.
69-69: Review automatic From implementations.
The #[from] attributes for serde_json::Error, std::io::Error, and anyhow::Error provide convenient error conversion, which aligns with the manual From implementations seen in the relevant code snippets.
This is consistent with the Tari codebase pattern for error handling and provides good ergonomics for error propagation.
Also applies to: 72-72, 75-75
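The distinction the review (and the earlier StorageError learning) draws between `#[from]` and a manual `From` impl can be shown with plain std types. This is a hedged sketch: `StorageError` and its variants are illustrative, and `ParseIntError` stands in for the real `HexError` type.

```rust
use std::fmt;

// Illustrative error enum showing both conversion styles discussed above.
#[derive(Debug)]
enum StorageError {
    // Stores only a String rendering of the source error, so a manual
    // From impl is needed (the HexError pattern from the review learnings).
    Hex(String),
    // Stores the source error itself; thiserror's #[from] would generate
    // exactly the impl written below.
    Io(std::io::Error),
}

impl From<std::num::ParseIntError> for StorageError {
    // ParseIntError is a stand-in for HexError: keep only its Display text.
    fn from(e: std::num::ParseIntError) -> Self {
        StorageError::Hex(e.to_string())
    }
}

impl From<std::io::Error> for StorageError {
    fn from(e: std::io::Error) -> Self {
        StorageError::Io(e)
    }
}

impl fmt::Display for StorageError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            StorageError::Hex(s) => write!(f, "hex error: {}", s),
            StorageError::Io(e) => write!(f, "io error: {}", e),
        }
    }
}

fn main() {
    let parse_err = "zz".parse::<u64>().unwrap_err();
    let err: StorageError = parse_err.into();
    assert!(matches!(err, StorageError::Hex(_)));
    println!("{}", err);
}
```

The trade-off: storing a `String` drops the source error's type (no downcasting, no `source()` chain) in exchange for `Send + Sync + 'static` simplicity, which is why the learning says to keep the manual impl rather than switch to `#[from]`.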
applications/minotari_mcp_common/src/health_checker.rs (5)
45-73: Well-designed health status enum.
The enum provides appropriate states for service health lifecycle with proper trait implementations.
75-142: Comprehensive health result implementation.
The struct provides detailed metrics tracking and sensible business logic for health assessment. The consecutive failure threshold of 3 for attention alerts is appropriate.
144-188: Well-designed configuration and trait interface.
The default configuration values are sensible and the trait provides a clean async interface for health checking operations.
304-376: Well-structured monitoring task implementation.
The monitoring task properly handles periodic health checks with appropriate logging and state management. The task spawning and interval handling are correctly implemented.
456-514: Comprehensive test coverage.
The tests effectively validate health result operations, configuration defaults, and basic checker functionality with clear assertions.
applications/minotari_mcp_common/src/protobuf_integration.rs (3)
42-90: Well-designed reflective auto-discovery integration.
The struct effectively combines auto-discovery registry with protobuf reflection capabilities, providing enhanced runtime schema generation with appropriate error handling.
135-186: Comprehensive OpenAPI specification generation.
The implementation provides thorough OpenAPI 3.0 spec generation with proper schema mapping and path generation. The structure supports good API documentation.
329-370: Adequate test coverage for current functionality.
The tests effectively validate reflective discovery creation and basic parameter validation behavior with clear assertions.
applications/minotari_mcp_common/src/grpc_executor.rs (6)
60-218: Excellent trait design and constructor patterns.
The NodeGrpcClient and WalletGrpcClient traits provide clean abstractions, and the constructor variants with/without health monitoring offer good flexibility for different deployment scenarios.
219-267: Well-implemented method execution with health awareness.
The implementation properly integrates health checking while maintaining service availability during health warnings. The dispatch logic based on server type is clean and appropriate.
270-323: Robust parameter handling and method dispatch.
The node method execution provides thorough parameter extraction with clear error messages for missing required parameters. The fallback to generic method execution is a good design choice.
325-413: Consistent and thorough wallet method implementation.
The wallet method execution mirrors the quality of the node implementation with proper handling of various parameter types, including arrays for UTXO operations. Error messaging is clear and helpful.
432-553: Comprehensive capability checking and status reporting.
The can_execute method properly matches server types with appropriate method categories. The status reporting provides detailed health and availability information with sensible readiness logic.
555-676: Comprehensive test coverage with integration testing.
The tests include a well-structured mock client, cover executor creation and status reporting, and include integration testing with the conversion registry. The test assertions are clear and thorough.
applications/minotari_mcp_common/src/connection_manager.rs (6)
47-150: Well-designed circuit breaker foundation.
The circuit breaker state management, configuration, and metrics provide a solid foundation following established circuit breaker patterns. The default configuration values are sensible for production use.
153-321: Robust circuit breaker implementation.
The circuit breaker correctly implements state transitions, timeout handling, and comprehensive metrics tracking. The async execution with proper error handling and logging provides good observability.
350-396: Well-implemented connection lifecycle management.
The ManagedConnection provides proper usage tracking, idle detection, and statistics collection. The thread-safe usage counting ensures accurate metrics in concurrent scenarios.
399-504: Excellent integration of connection management components.
The ConnectionManager effectively combines health monitoring, circuit breakers, and connection pooling into a cohesive system. The service addition and connection retrieval logic properly consider health status and circuit breaker states.
506-640: Robust connection pool management with maintenance.
The connection pool creation handles failures gracefully, and the background maintenance task provides effective idle connection cleanup. The statistics collection enables good operational visibility.
642-698: Good test coverage for circuit breaker functionality.
The tests effectively validate circuit breaker states, metrics calculations, and configuration defaults. The commented integration test with network access is appropriately documented for future reference.
applications/minotari_mcp_common/src/lib.rs (2)
27-57: Well-organized module structure
The module organization follows clear domain separation with comprehensive coverage of MCP infrastructure components. The alphabetical ordering enhances maintainability.
60-148: Comprehensive and well-structured public API
The re-exports provide a clean, unified API surface that appropriately exposes the necessary types and traits while maintaining clear boundaries between different functional areas.
async fn is_process_healthy(&self) -> bool {
    // For now, just check if we can connect to the port
    // In a full implementation, this would use gRPC health checks
    std::net::TcpStream::connect(format!("127.0.0.1:{}", self.port)).is_ok()
}
🛠️ Refactor suggestion
Use async TCP connection to avoid blocking the executor
The current implementation uses blocking std::net::TcpStream::connect which can block the async executor.
async fn is_process_healthy(&self) -> bool {
    // For now, just check if we can connect to the port
    // In a full implementation, this would use gRPC health checks
-   std::net::TcpStream::connect(format!("127.0.0.1:{}", self.port)).is_ok()
+   tokio::net::TcpStream::connect(format!("127.0.0.1:{}", self.port))
+       .await
+       .is_ok()
}

Also update line 315 in is_service_running:

pub async fn is_service_running(port: u16) -> bool {
-   std::net::TcpStream::connect(format!("127.0.0.1:{}", port)).is_ok()
+   tokio::net::TcpStream::connect(format!("127.0.0.1:{}", port))
+       .await
+       .is_ok()
}

🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/process_manager.rs around lines 237 to
241, replace the blocking std::net::TcpStream::connect call with an asynchronous
TCP connection method, such as tokio::net::TcpStream::connect, to avoid blocking
the async executor. Also, update the is_service_running function at line 315 to
use async TCP connection checks consistently.
fn test_schema_generation_nonexistent_message() {
    // Test with minimal valid descriptor set (would need actual protobuf bytes)
    // This is a placeholder for real test data
    let descriptor_bytes = vec![]; // Would contain actual FileDescriptorSet bytes

    if let Ok(mut reflector) = ProtobufReflector::new(&descriptor_bytes) {
        let result = reflector.generate_schema("NonexistentMessage");
        assert!(result.is_err());
    }
}
🛠️ Refactor suggestion
Add functional tests with real protobuf descriptors
The test uses empty descriptor bytes, making it ineffective at testing schema generation logic. Consider generating actual FileDescriptorSet bytes from a test proto file.
- let descriptor_bytes = vec![]; // Would contain actual FileDescriptorSet bytes
+ // Generate descriptor bytes from a test proto file or use hardcoded bytes
+ // Example: include_bytes!("../test_data/test.descriptor.bin")
+ let descriptor_bytes = generate_test_descriptor_bytes();

Consider adding a test proto file and compiling it to descriptors as part of the build process, or include pre-compiled descriptor bytes for testing.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/protobuf_reflector.rs around lines 360
to 369, the test uses empty descriptor bytes, which does not effectively test
schema generation. Fix this by adding a real test proto file, compiling it to
generate actual FileDescriptorSet bytes, and using those bytes in the test. This
can be done by including pre-compiled descriptor bytes or integrating proto
compilation into the build process to provide valid descriptor data for the
test.
// Check network configuration and ports
results.extend(self.check_network_configuration().await);

// Check running services
🛠️ Refactor suggestion
Handle channel send errors instead of silently dropping them
Using drop() to ignore send errors could hide important failures. Consider logging when sends fail:
- drop(self.status_tx.send(ProcessStatus::Running));
+ if let Err(e) = self.status_tx.send(ProcessStatus::Running) {
+ log::warn!("Failed to send status update: {:?}", e);
+ }Also applies to: 132-132, 137-137, 149-149, 162-162, 171-171, 177-177, 186-186
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/startup_diagnostics.rs at lines 125,
132, 137, 149, 162, 171, 177, and 186, the code currently uses drop() to ignore
errors from channel send operations, which can hide failures. Modify the code to
handle these send errors explicitly by checking the result of the send operation
and logging an error message if the send fails, instead of silently dropping the
error.
Actionable comments posted: 0
♻️ Duplicate comments (1)
applications/minotari_mcp_node/src/prompts/mod.rs (1)
25-32: Remove unused McpResult import to keep the file warning-free.
McpResult is not used anywhere in this file and should be removed to avoid compiler warnings.
Apply this diff to remove the unused import:

use minotari_mcp_common::{
    prompts::MessageRole,
    resource_message,
    simple_prompt,
    text_message,
-   McpResult,
    PromptRegistry,
};
🧹 Nitpick comments (2)
applications/minotari_mcp_node/src/resources/chain_metadata.rs (2)
21-24: Fix inconsistent file description.
The file description mentions "Common MCP infrastructure" but this file contains a node-specific resource implementation, not common infrastructure.
Apply this diff to correct the description:
-// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.//! Common MCP (Model Context Protocol)
-// infrastructure for Tari applications
+// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+//! Chain metadata resource
72-84: Consider simplifying the JSON structure to eliminate duplication.
The current JSON structure duplicates several fields between the root level and the nested chain_metadata object, which adds complexity without clear benefit.
Consider this simplified structure:

Ok(serde_json::json!({
    "best_block_height": metadata.best_block_height,
    "best_block_hash": hex::encode(&metadata.best_block_hash),
    "accumulated_difficulty": hex::encode(&metadata.accumulated_difficulty),
    "pruned_height": metadata.pruned_height,
-   "timestamp": metadata.timestamp,
-   "chain_metadata": {
-       "best_block_height": metadata.best_block_height,
-       "accumulated_difficulty": hex::encode(&metadata.accumulated_difficulty),
-       "pruned_height": metadata.pruned_height,
-   }
+   "timestamp": metadata.timestamp
}))
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (18)
- `Cargo.toml` (2 hunks)
- `applications/minotari_mcp_node/src/cli.rs` (1 hunks)
- `applications/minotari_mcp_node/src/config.rs` (1 hunks)
- `applications/minotari_mcp_node/src/grpc_config_parser.rs` (1 hunks)
- `applications/minotari_mcp_node/src/main.rs` (1 hunks)
- `applications/minotari_mcp_node/src/prompts/mod.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/block_info.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/chain_metadata.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/mempool_stats.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/mod.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/network_difficulty.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/network_status.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/peer_list.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/sync_progress.rs` (1 hunks)
- `applications/minotari_mcp_node/src/resources/transaction_info.rs` (1 hunks)
- `applications/minotari_mcp_node/src/server.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/ban_peer.rs` (1 hunks)
- `applications/minotari_mcp_node/src/tools/blockchain_tools.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- applications/minotari_mcp_node/src/resources/transaction_info.rs
- applications/minotari_mcp_node/src/resources/sync_progress.rs
- applications/minotari_mcp_node/src/resources/peer_list.rs
🚧 Files skipped from review as they are similar to previous changes (13)
- Cargo.toml
- applications/minotari_mcp_node/src/resources/mempool_stats.rs
- applications/minotari_mcp_node/src/resources/network_difficulty.rs
- applications/minotari_mcp_node/src/resources/mod.rs
- applications/minotari_mcp_node/src/resources/network_status.rs
- applications/minotari_mcp_node/src/main.rs
- applications/minotari_mcp_node/src/resources/block_info.rs
- applications/minotari_mcp_node/src/cli.rs
- applications/minotari_mcp_node/src/config.rs
- applications/minotari_mcp_node/src/grpc_config_parser.rs
- applications/minotari_mcp_node/src/tools/blockchain_tools.rs
- applications/minotari_mcp_node/src/tools/ban_peer.rs
- applications/minotari_mcp_node/src/server.rs
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: MCozhusheck
PR: tari-project/tari#7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
applications/minotari_mcp_node/src/prompts/mod.rs (1)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: cargo check with stable
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: ci
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: ledger build tests
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: Cucumber tests / FFI
🔇 Additional comments (5)
applications/minotari_mcp_node/src/resources/chain_metadata.rs (2)
25-32: LGTM! The imports are well-organized and all necessary for the implementation.
33-42: LGTM! Clean struct design with proper use of Arc for shared ownership and a straightforward constructor.
applications/minotari_mcp_node/src/prompts/mod.rs (3)
34-41: LGTM! Clean factory pattern with appropriate clippy allow attribute and good documentation.
43-147: LGTM! Excellent prompt definitions that provide clear, structured guidance for AI agents. Each prompt:
- Has well-defined system and user messages
- References appropriate resources for context
- Addresses specific node operational scenarios
- Follows consistent formatting and structure
The prompts effectively cover the main use cases for node diagnostics and management.
148-150: LGTM! Clean completion of the factory method.
Actionable comments posted: 5
♻️ Duplicate comments (10)
applications/minotari_mcp_common/src/process_manager.rs (3)
74-78: Critical shutdown mechanism flaw prevents proper process control.

The `_shutdown_tx` sender is immediately dropped, causing `shutdown_rx.recv().await` to return `None` instantly, making the supervisor exit prematurely without waiting for shutdown signals.
237-241: Replace blocking TCP connection with async equivalent.

The blocking `std::net::TcpStream::connect` can stall the async executor. Use `tokio::net::TcpStream::connect().await` instead.
289-316: Well-designed utility functions with one blocking operation issue.

The port management and command building utilities are well-implemented with proper error handling. However, line 315 uses blocking `std::net::TcpStream::connect`, which should be replaced with `tokio::net::TcpStream::connect().await` for consistency with async patterns.

applications/minotari_mcp_common/src/process_launcher.rs (7)
57-59: Unused configuration field creates misleading interface.

The `startup_timeout` field in `LaunchConfig` is never used by `ProcessLauncher`, making it a misleading configuration option.
115-121: Hard-coded timeout overrides user configuration.

The health monitor uses a fixed 10-second timeout instead of respecting `config.health_check_config.max_wait_time`, creating inconsistency between configuration and behavior.
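A minimal sketch of the fix, assuming a `HealthCheckConfig` holding a `max_wait_time: Duration` (names illustrative, not the PR's exact types): take the deadline from configuration, with a small floor so a zero value cannot disable waiting entirely.

```rust
use std::time::Duration;

// Illustrative stand-in for the launcher's health-check configuration.
struct HealthCheckConfig {
    max_wait_time: Duration,
}

// Use the configured timeout instead of a hard-coded 10-second constant,
// clamped to a 1-second floor as a defensive default.
fn health_deadline(cfg: &HealthCheckConfig) -> Duration {
    cfg.max_wait_time.max(Duration::from_secs(1))
}
```

The health monitor would then pass `health_deadline(&config.health_check_config)` wherever the 10-second constant currently appears.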
196-220: Unbounded output buffer risks memory exhaustion.

The output capture tasks continuously append to `output_buffer` without size limits, potentially causing memory issues with verbose processes.
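One way to bound the buffer, sketched with a `VecDeque` ring of captured lines (`BoundedBuffer` and its cap are illustrative names, not the PR's actual types): once the cap is reached, the oldest line is evicted so a verbose process cannot grow memory without bound.

```rust
use std::collections::VecDeque;

// Sketch of a bounded line buffer for captured process output.
struct BoundedBuffer {
    lines: VecDeque<String>,
    max_lines: usize,
}

impl BoundedBuffer {
    fn new(max_lines: usize) -> Self {
        Self { lines: VecDeque::with_capacity(max_lines), max_lines }
    }

    fn push(&mut self, line: String) {
        if self.max_lines > 0 && self.lines.len() >= self.max_lines {
            self.lines.pop_front(); // drop the oldest captured line
        }
        self.lines.push_back(line);
    }

    fn oldest(&self) -> Option<&String> {
        self.lines.front()
    }

    fn len(&self) -> usize {
        self.lines.len()
    }
}
```

The capture tasks would call `push` instead of appending directly, trading completeness of old output for a fixed memory ceiling.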
222-224: Race condition in immediate crash detection.

The 100ms delay before checking process status may be insufficient to reliably detect crashes, especially on loaded systems.
315-319: Incorrect signal handling prevents graceful shutdown.

`Child::kill()` sends SIGKILL immediately, but the log claims to send SIGTERM, bypassing the graceful shutdown opportunity.
367-373: Unnecessary write lock blocks concurrent operations.

The `is_running()` method uses a write lock when a read lock should suffice for checking process status.
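The read-lock point can be sketched as follows, using a cached status flag (`Launcher` and the `child_running` field are illustrative stand-ins; the real code guards a child-process handle): a pure status check only needs shared access.

```rust
use std::sync::RwLock;

// Illustrative stand-in: the running state is cached behind an RwLock.
struct Launcher {
    child_running: RwLock<bool>,
}

impl Launcher {
    // A read lock suffices for a status check; many callers can
    // check concurrently without blocking each other.
    fn is_running(&self) -> bool {
        *self.child_running.read().expect("lock poisoned")
    }
}
```

One caveat: if the check must call `Child::try_wait()`, which takes `&mut self`, a write lock is genuinely required; the read lock only works when the status is cached in a flag like this.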
463-474: Missing input validation in multiaddr conversion.

The `convert_to_multiaddr` function lacks validation for IP:PORT format, potentially creating malformed multiaddrs that could cause runtime failures.
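The missing validation could look like the following sketch, which parses the input with `std::net::SocketAddr` before formatting a multiaddr (the function name and `Option` return are illustrative, not the PR's actual signature):

```rust
use std::net::SocketAddr;

// Parse the input as a socket address first; malformed "IP:PORT" strings
// are rejected instead of producing a malformed multiaddr.
fn convert_to_multiaddr_checked(addr: &str) -> Option<String> {
    let sock: SocketAddr = addr.parse().ok()?;
    Some(match sock {
        SocketAddr::V4(v4) => format!("/ip4/{}/tcp/{}", v4.ip(), v4.port()),
        SocketAddr::V6(v6) => format!("/ip6/{}/tcp/{}", v6.ip(), v6.port()),
    })
}
```

This also gets IPv6 handling for free, since `SocketAddr` distinguishes the two address families.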
🧹 Nitpick comments (3)
applications/minotari_mcp_common/src/input_sanitizer.rs (3)
47-65: Consider extracting HTML entities into a constant or method.

The HTML entity mapping is duplicated between `Default::default()` and `from_config()`, which violates the DRY principle and makes maintenance harder.

```diff
+impl InputSanitizer {
+    fn default_html_entities() -> HashMap<String, String> {
+        let mut html_entities = HashMap::new();
+        html_entities.insert("&amp;".to_string(), "&".to_string());
+        html_entities.insert("&lt;".to_string(), "<".to_string());
+        html_entities.insert("&gt;".to_string(), ">".to_string());
+        html_entities.insert("&quot;".to_string(), "\"".to_string());
+        html_entities.insert("&#x27;".to_string(), "'".to_string());
+        html_entities.insert("&#39;".to_string(), "'".to_string());
+        html_entities.insert("&apos;".to_string(), "'".to_string());
+        html_entities.insert("&#x2F;".to_string(), "/".to_string());
+        html_entities.insert("&#47;".to_string(), "/".to_string());
+        html_entities
+    }
+}
+
 impl Default for InputSanitizer {
     fn default() -> Self {
-        let mut html_entities = HashMap::new();
-        html_entities.insert("&amp;".to_string(), "&".to_string());
-        html_entities.insert("&lt;".to_string(), "<".to_string());
-        html_entities.insert("&gt;".to_string(), ">".to_string());
-        html_entities.insert("&quot;".to_string(), "\"".to_string());
-        html_entities.insert("&#x27;".to_string(), "'".to_string());
-        html_entities.insert("&#39;".to_string(), "'".to_string());
-        html_entities.insert("&apos;".to_string(), "'".to_string());
-        html_entities.insert("&#x2F;".to_string(), "/".to_string());
-        html_entities.insert("&#47;".to_string(), "/".to_string());
         Self {
             max_string_length: 65536, // 64KB max string
             max_array_length: 1000,   // Max 1000 array elements
             max_object_depth: 32,     // Max 32 levels of nesting
-            allowed_html_entities: html_entities,
+            allowed_html_entities: Self::default_html_entities(),
         }
     }
 }
```
79-101: Apply the same refactoring to eliminate code duplication.

The same HTML entity mapping duplication exists in the `from_config` method.

```diff
 pub fn from_config(config: &InputSanitizationConfig) -> Self {
-    let mut html_entities = HashMap::new();
-
-    if config.clean_html_entities {
-        html_entities.insert("&amp;".to_string(), "&".to_string());
-        html_entities.insert("&lt;".to_string(), "<".to_string());
-        html_entities.insert("&gt;".to_string(), ">".to_string());
-        html_entities.insert("&quot;".to_string(), "\"".to_string());
-        html_entities.insert("&#x27;".to_string(), "'".to_string());
-        html_entities.insert("&#39;".to_string(), "'".to_string());
-        html_entities.insert("&apos;".to_string(), "'".to_string());
-        html_entities.insert("&#x2F;".to_string(), "/".to_string());
-        html_entities.insert("&#47;".to_string(), "/".to_string());
-    }
+    let html_entities = if config.clean_html_entities {
+        Self::default_html_entities()
+    } else {
+        HashMap::new()
+    };

     Self {
         max_string_length: config.max_string_length,
```
329-332: Optimize regex compilation for better performance.

The regex patterns are compiled on every call, which is inefficient. Consider using `lazy_static` or `once_cell` for compilation.

```diff
+use once_cell::sync::Lazy;
+
+static BASE58_PATTERN: Lazy<regex::Regex> = Lazy::new(|| {
+    regex::Regex::new(r"^[123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz]+$")
+        .expect("Valid regex pattern")
+});
+
 impl ValidationPatterns {
     /// Base58 pattern for Tari addresses
-    pub fn base58() -> regex::Regex {
-        regex::Regex::new(r"^[123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz]+$")
-            .expect("Valid regex pattern")
+    pub fn base58() -> &'static regex::Regex {
+        &BASE58_PATTERN
     }
```

Apply the same pattern to all other regex methods.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (4)
- `Cargo.lock` is excluded by `!**/*.lock`
- `applications/minotari_ledger_wallet/wallet/Cargo.lock` is excluded by `!**/*.lock`
- `clients/ffi_client/package-lock.json` is excluded by `!**/package-lock.json`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (5)
- `applications/minotari_ledger_wallet/comms/Cargo.toml` (1 hunks)
- `applications/minotari_ledger_wallet/wallet/Cargo.toml` (1 hunks)
- `applications/minotari_mcp_common/src/input_sanitizer.rs` (1 hunks)
- `applications/minotari_mcp_common/src/process_launcher.rs` (1 hunks)
- `applications/minotari_mcp_common/src/process_manager.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- applications/minotari_ledger_wallet/comms/Cargo.toml
- applications/minotari_ledger_wallet/wallet/Cargo.toml
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: MCozhusheck
PR: tari-project/tari#7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
applications/minotari_mcp_common/src/process_manager.rs (8)
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the fetch_peers_from_connection and collect_peer_stream functions rely on RPC streaming, and when the main connection is closed by another process, collect_peer_stream times out after STREAM_ITEM_TIMEOUT because it cannot detect that the peer can no longer respond, returning an empty vector of peers. This is why the connection state check is important for the retry logic.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In comms/core/src/connectivity/manager.rs, the disconnect_if_unused call uses parameters (0, 0) intentionally to be conservative and avoid terminating connections that might be in use. This approach is used while troubleshooting inbound connection drop issues to eliminate the disconnect logic as a potential cause.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
applications/minotari_mcp_common/src/input_sanitizer.rs (1)
Learnt from: ksrichard
PR: #7129
File: applications/minotari_node/src/http/handler/sync_utxos_by_block.rs:21-29
Timestamp: 2025-05-29T20:59:32.020Z
Learning: In the Tari codebase, request validation for RPC services is handled at the service layer using the serde_valid crate rather than at the HTTP handler level. For example, SyncUtxosByBlockRequest validates that limit is between 1-5 and page is minimum 0. HTTP handlers should delegate validation to the service layer rather than implementing their own validation.
applications/minotari_mcp_common/src/process_launcher.rs (6)
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
🧬 Code Graph Analysis (1)
applications/minotari_mcp_common/src/input_sanitizer.rs (10)
applications/minotari_mcp_common/src/stdio_transport.rs (3)
- `serde_json` (213-213), `serde_json` (221-221), `new` (49-51)

applications/minotari_mcp_common/src/config.rs (3)
- `default` (99-108), `default` (112-121), `default` (125-136)

applications/minotari_mcp_common/src/auto_registry.rs (3)
- `default` (682-691), `new` (141-157), `new` (599-611)

applications/minotari_mcp_common/src/connection_manager.rs (5)
- `default` (59-61), `default` (90-98), `default` (147-149), `default` (339-347), `new` (115-126)

applications/minotari_mcp_common/src/grpc_error_mapper.rs (1)
- `default` (482-484)

applications/minotari_mcp_common/src/parameter_converter.rs (1)
- `default` (138-140)

applications/minotari_mcp_common/src/schema_generator.rs (3)
- `default` (498-501), `value` (382-382), `value` (390-390)

applications/minotari_mcp_common/src/process_manager.rs (1)
- `new` (69-93)

applications/minotari_mcp_common/src/prompts.rs (1)
- `new` (97-101)

applications/minotari_mcp_common/src/error.rs (1)
- `invalid_request` (83-85)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: cargo check with stable
- GitHub Check: ledger build tests
- GitHub Check: ci
🔇 Additional comments (8)
applications/minotari_mcp_common/src/process_manager.rs (3)
101-188: Well-structured supervision logic with proper restart handling.

The supervision loop correctly implements health checking, restart attempts with backoff, and comprehensive error handling. The logic will work correctly once the shutdown mechanism issue (lines 74-78) is resolved.

191-283: Comprehensive process lifecycle management with proper error handling.

The process launching, monitoring, and shutdown methods demonstrate good practices:
- Proper error propagation and logging
- Health check waiting with reasonable timeouts
- Graceful shutdown with force kill fallback
- Appropriate child process management
318-421: Excellent command building utilities and appropriate test coverage.

The CLI argument construction functions demonstrate good understanding of Tari application interfaces, with proper handling of optional parameters and required flags. The test suite appropriately covers the core utility functions.
applications/minotari_mcp_common/src/process_launcher.rs (4)
137-276: Well-structured process launching with comprehensive error handling.

Despite the identified issues with output buffering and crash detection timing, the overall launch logic demonstrates good practices:
- Proper executable discovery
- Comprehensive command setup with environment and working directory
- Good error reporting and status updates
- Integration with health monitoring
390-456: Excellent builder pattern implementation with proper validation.

The `LaunchConfigBuilder` follows Rust conventions with clear method names, proper chaining, and appropriate validation in the `build()` method. The required health check configuration validation is particularly good.
476-574: Excellent Tari service launching utilities with appropriate configurations.

The `launch_node` and `launch_wallet` methods demonstrate deep understanding of Tari CLI interfaces:
- Proper gRPC configuration with comprehensive allowed methods for nodes
- Appropriate health check timeouts (90s for node, 180s for wallet)
- Good argument construction and parameter handling
- Proper multiaddr format usage
582-618: Appropriate test coverage for configuration and builder functionality.

The tests properly validate the builder pattern implementation and configuration construction, ensuring that values are correctly assigned and the API works as expected.
applications/minotari_mcp_common/src/input_sanitizer.rs (1)
350-463: LGTM! Comprehensive test coverage.

The test suite covers all major functionality including:
- HTML entity sanitization
- Depth limits
- String/array length limits
- Path validation (including security cases)
- URL validation (including malicious inputs)
- Numeric range validation (including edge cases like NaN/infinity)
- Pattern validation
The tests properly validate both positive and negative cases, which is excellent for security-critical code.
```rust
// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.//! Common MCP (Model Context Protocol)
// infrastructure for Tari applications
```
Fix the malformed copyright header.
There's a syntax error in the copyright header where the license text and module documentation are incorrectly merged.
-// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.//! Common MCP (Model Context Protocol)
-// infrastructure for Tari applications
+// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+//! Common MCP (Model Context Protocol) infrastructure for Tari applications

🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/input_sanitizer.rs around lines 21 to
22, the copyright header is malformed because the license text and module
documentation are merged incorrectly. Separate the license comment block from
the module documentation comment by ensuring the license text is properly
commented as a block comment and the module documentation starts with triple
slashes (///) on its own line to fix the syntax error.
```rust
sanitized = sanitized
    .chars()
    .filter(|&c| c == '\n' || c == '\t' || c == '\r' || (c >= ' ' && c != '\u{007F}') || c >= '\u{0080}')
    .collect();
```
💡 Verification agent
🧩 Analysis chain
Verify the character filtering logic for security completeness.
The control character filtering allows newlines, tabs, and carriage returns, but the logic may have edge cases. The condition c >= '\u{0080}' allows all Unicode characters above the ASCII range without validation.
🏁 Script executed:
#!/bin/bash
# Description: Verify if there are known security issues with allowing all high Unicode characters
# Expected: Check if the current filtering approach is sufficient for security
echo "Checking Unicode character ranges and potential security concerns..."
echo "Current filter allows:"
echo "- Newlines (\\n)"
echo "- Tabs (\\t)"
echo "- Carriage returns (\\r)"
echo "- Printable ASCII (>= space, != DEL)"
echo "- All Unicode above \\u{0080}"
echo ""
echo "Potentially problematic characters that would be allowed:"
python3 -c "
import unicodedata
# Check for potentially dangerous Unicode characters
dangerous_categories = ['Cf', 'Cc'] # Format controls, other controls
sample_chars = [
'\u200E', # Left-to-right mark
'\u200F', # Right-to-left mark
'\u202A', # Left-to-right embedding
'\u202B', # Right-to-left embedding
'\u202C', # Pop directional formatting
'\u202D', # Left-to-right override
'\u202E', # Right-to-left override
'\uFEFF', # Byte order mark
]
print('Potentially dangerous Unicode characters that current filter allows:')
for char in sample_chars:
category = unicodedata.category(char)
name = unicodedata.name(char, 'UNKNOWN')
print(f' \\u{ord(char):04X} ({category}): {name}')
"

Length of output: 1684
🏁 Script executed:
#!/bin/bash
echo "Listing potentially problematic Unicode characters allowed by current filter:"
python3 - << 'EOF'
import unicodedata
codes = [
0x200E, # Left-to-right mark
0x200F, # Right-to-left mark
0x202A, # Left-to-right embedding
0x202B, # Right-to-left embedding
0x202C, # Pop directional formatting
0x202D, # Left-to-right override
0x202E, # Right-to-left override
0xFEFF, # Byte order mark
]
for code in codes:
char = chr(code)
print(f"U+{code:04X}: category={unicodedata.category(char)}, name={unicodedata.name(char, 'UNKNOWN')}")
EOF

Length of output: 569
Disallow Unicode Format Control Characters in Input Sanitizer
The current filter in `applications/minotari_mcp_common/src/input_sanitizer.rs` (lines 145–148) allows every codepoint ≥ U+0080, including "Cf" (format control) characters such as U+200E (LTR mark), U+202E (RTL override), U+FEFF (BOM), etc. These can be abused for visual obfuscation or RLO attacks.
• File: applications/minotari_mcp_common/src/input_sanitizer.rs
Lines: 145–148
Suggested change—explicitly exclude Unicode format controls (category Cf), for example by using a Unicode-categories crate or manual blacklist:
- sanitized = sanitized
- .chars()
- .filter(|&c| c == '\n' || c == '\t' || c == '\r'
- || (c >= ' ' && c != '\u{007F}') || c >= '\u{0080}')
- .collect();
+ use unicode_categories::UnicodeCategories; // add to Cargo.toml
+
+ sanitized = sanitized
+ .chars()
+ .filter(|&c| {
+ // allow basic whitespace & printable ASCII
+ matches!(c, '\n' | '\t' | '\r')
+ || (c >= ' ' && c != '\u{007F}')
+ // for non-ASCII, drop format controls
+ && !(c.is_general_category_format())
+ })
+ .collect();

If you prefer not to pull in a crate, at minimum blacklist known ranges:
.filter(|&c| {
/* …same as above… */
&& !matches!(c,
'\u{200E}'|'\u{200F}'|
'\u{202A}'..='\u{202E}'|
'\u{FEFF}'
)
})

This ensures directional overrides and zero-width markers are stripped out.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/input_sanitizer.rs lines 145 to 148, the
current character filter allows all codepoints >= U+0080, including Unicode
format control characters (category Cf) like U+200E, U+202E, and U+FEFF, which
can be abused for visual obfuscation. To fix this, update the filter to
explicitly exclude these format control characters by adding a condition that
blacklists them, for example using a matches! macro to exclude '\u{200E}',
'\u{200F}', the range '\u{202A}' to '\u{202E}', and '\u{FEFF}', ensuring these
characters are filtered out from the sanitized input.
```rust
if obj.len() > self.max_array_length {
    return Err(McpError::invalid_request(format!(
        "Object too large ({} properties, max {})",
        obj.len(),
        self.max_array_length
    )));
```
🛠️ Refactor suggestion
Fix inconsistent object size validation.
The object size validation uses max_array_length instead of a dedicated object size limit, which is confusing and may not reflect the intended behavior.
Consider adding a dedicated max_object_size field or clearly document why max_array_length is reused for objects:
// Check object size limit
- if obj.len() > self.max_array_length {
+ if obj.len() > self.max_array_length { // Note: reuses array limit for simplicity
return Err(McpError::invalid_request(format!(
- "Object too large ({} properties, max {})",
+ "Object has too many properties ({}, max {})",
obj.len(),
self.max_array_length
)));Alternatively, add a dedicated field:
pub struct InputSanitizer {
max_string_length: usize,
max_array_length: usize,
+ max_object_properties: usize,
max_object_depth: usize,
allowed_html_entities: HashMap<String, String>,
}Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/input_sanitizer.rs around lines 179 to
184, the object size validation incorrectly uses the max_array_length field,
causing confusion and potential misbehavior. To fix this, introduce a dedicated
max_object_size field in the struct to represent the maximum allowed object
size, update the validation logic to use max_object_size instead of
max_array_length, and adjust any related documentation to clarify the
distinction between array length and object size limits.
```rust
if sanitized_path.contains("..") {
    return Err(McpError::invalid_request(
        "Path cannot contain '..' (directory traversal not allowed)",
    ));
}
```
🛠️ Refactor suggestion
Strengthen directory traversal protection.
The current check only looks for literal ".." strings, which can be bypassed with URL encoding, Unicode normalization, or other obfuscation techniques.
// Check for directory traversal attempts
- if sanitized_path.contains("..") {
+ if sanitized_path.contains("..") || sanitized_path.contains("%2e%2e") || sanitized_path.contains("%2E%2E") {
return Err(McpError::invalid_request(
- "Path cannot contain '..' (directory traversal not allowed)",
+ "Path contains directory traversal patterns",
));
}Additionally, consider using std::path::Path::canonicalize() for more robust path validation:
use std::path::Path;
// After basic checks, validate the canonical path
if let Ok(canonical) = Path::new(&sanitized_path).canonicalize() {
if !canonical.starts_with(std::env::current_dir()?) &&
!canonical.starts_with("/tmp/") {
return Err(McpError::invalid_request("Path resolves outside allowed directories"));
}
}🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/input_sanitizer.rs around lines 221 to
225, the directory traversal check only looks for the literal ".." string, which
can be bypassed by encoding or obfuscation. To fix this, enhance the validation
by using std::path::Path::canonicalize() to resolve the absolute path and verify
it stays within allowed directories like the current working directory or /tmp/.
Replace or supplement the existing check with this canonical path validation to
robustly prevent directory traversal attacks.
if sanitized_path.starts_with('/') && !sanitized_path.starts_with("/tmp/") {
    return Err(McpError::invalid_request("Absolute paths not allowed (except /tmp/)"));
}
🛠️ Refactor suggestion
Reconsider the blanket exception for /tmp/ paths.
Allowing any path under /tmp/ may be overly permissive and could lead to security issues, especially in shared hosting environments or containers.
Consider a more restrictive approach:
- if sanitized_path.starts_with('/') && !sanitized_path.starts_with("/tmp/") {
- return Err(McpError::invalid_request("Absolute paths not allowed (except /tmp/)"));
+ if sanitized_path.starts_with('/') {
+ // Only allow specific whitelisted absolute paths
+ let allowed_prefixes = ["/tmp/tari_mcp/", "/tmp/minotari/"];
+ if !allowed_prefixes.iter().any(|prefix| sanitized_path.starts_with(prefix)) {
+ return Err(McpError::invalid_request("Absolute path not in allowed directories"));
+ }
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- if sanitized_path.starts_with('/') && !sanitized_path.starts_with("/tmp/") {
-     return Err(McpError::invalid_request("Absolute paths not allowed (except /tmp/)"));
- }
+ if sanitized_path.starts_with('/') {
+     // Only allow specific whitelisted absolute paths
+     let allowed_prefixes = ["/tmp/tari_mcp/", "/tmp/minotari/"];
+     if !allowed_prefixes.iter().any(|prefix| sanitized_path.starts_with(prefix)) {
+         return Err(McpError::invalid_request("Absolute path not in allowed directories"));
+     }
+ }
🤖 Prompt for AI Agents
In applications/minotari_mcp_common/src/input_sanitizer.rs around lines 227 to
229, the current code allows any absolute path starting with /tmp/, which is too
permissive and may cause security risks. Modify the condition to restrict
allowed /tmp/ paths more narrowly, for example by allowing only specific
subdirectories or validating the path against a whitelist, instead of permitting
all paths under /tmp/.
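The traversal check and the prefix whitelist can be combined into one helper. This is an illustrative std-only sketch; the function name, signature, and prefix list are assumptions, not the code under review, and percent-encoded forms such as "%2e%2e" must be decoded before this structural check runs.

```rust
use std::path::{Component, Path};

/// Hypothetical helper: allow a path only if it contains no `..` components
/// and, when absolute, sits under one of the whitelisted prefixes.
fn is_path_allowed(path: &str, allowed_prefixes: &[&str]) -> bool {
    // Reject any `..` component, which catches "a/../b" as well as a bare "..".
    if Path::new(path)
        .components()
        .any(|c| matches!(c, Component::ParentDir))
    {
        return false;
    }
    // Absolute paths are only allowed under an explicit whitelist.
    if path.starts_with('/') {
        return allowed_prefixes.iter().any(|p| path.starts_with(p));
    }
    true
}

fn main() {
    let allowed = ["/tmp/tari_mcp/", "/tmp/minotari/"];
    assert!(!is_path_allowed("../etc/passwd", &allowed));
    assert!(!is_path_allowed("data/../../etc/passwd", &allowed));
    assert!(!is_path_allowed("/etc/passwd", &allowed));
    assert!(is_path_allowed("/tmp/tari_mcp/state.json", &allowed));
    assert!(is_path_allowed("logs/node.log", &allowed));
}
```

Checking path components rather than the raw string avoids false positives on names that merely contain dots (e.g. "file..txt"), while still catching embedded traversal segments.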
Actionable comments posted: 0
♻️ Duplicate comments (3)
applications/minotari_mcp_common/src/process_manager.rs (3)
74-78: Critical: `_shutdown_tx` is dropped - supervisor stops instantly. The shutdown sender is discarded, causing `shutdown_rx.recv().await` to return `None` immediately and the supervisor to exit prematurely. Return the sender from `new()` so callers can signal shutdown properly.
237-241: Use async TCP connection to avoid blocking the executor. The blocking `std::net::TcpStream::connect` can block the async executor. Use `tokio::net::TcpStream::connect().await` instead.
314-316: Use async TCP connection to avoid blocking the executor. The blocking `std::net::TcpStream::connect` can block the async executor. Use `tokio::net::TcpStream::connect().await` instead.
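The dropped-sender failure mode in the first comment can be reproduced with std channels (the real code uses tokio channels, but the semantics are analogous):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Buggy pattern: the sender is bound to `_tx` and silently dropped, so the
    // receiver unblocks immediately with an error instead of waiting.
    let rx = {
        let (_tx, rx) = mpsc::channel::<()>();
        rx // `_tx` is dropped at the end of this scope
    };
    assert!(rx.recv().is_err()); // the "supervisor" would exit instantly here

    // Fixed pattern: keep the sender alive and hand it to the caller, which
    // can then signal shutdown explicitly.
    let (shutdown_tx, shutdown_rx) = mpsc::channel::<()>();
    let supervisor = thread::spawn(move || shutdown_rx.recv().is_ok());
    shutdown_tx.send(()).expect("receiver still alive");
    assert!(supervisor.join().unwrap());
}
```

This is why returning the sender from `new()` matters: whoever constructs the supervisor must own the sender for the lifetime of the supervision loop.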
🧹 Nitpick comments (2)
applications/minotari_mcp_common/src/process_manager.rs (2)
395-411: Expand test coverage for utility functions. Consider adding tests for:
- `find_available_port()` with known available/unavailable ports
- `is_port_available()` functionality
- `build_node_command()` and `build_wallet_command()` argument generation
- Edge cases like invalid ports or missing executables
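A std-only sketch of the first two utilities and one way to test them (the signatures here are assumptions; the actual implementations live in process_manager.rs and may differ):

```rust
use std::net::TcpListener;

// Assumed shape: a port is "available" if we can bind a listener to it.
fn is_port_available(port: u16) -> bool {
    TcpListener::bind(("127.0.0.1", port)).is_ok()
}

// Assumed shape: scan upward from `start` for the first bindable port.
fn find_available_port(start: u16) -> Option<u16> {
    (start..=u16::MAX).find(|&p| is_port_available(p))
}

fn main() {
    // Binding port 0 asks the OS for a free port, giving a deterministic
    // "unavailable" fixture: the port is held open by `listener`.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let taken = listener.local_addr().unwrap().port();
    assert!(!is_port_available(taken));
    // Scanning from the ephemeral range should always find something free.
    assert!(find_available_port(49152).is_some());
}
```

Note the bind-then-probe trick: reserving a port via the OS avoids flaky tests that hardcode port numbers which may already be in use on CI.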
88-89: Consider making supervisor parameters configurable. The `max_restart_attempts`, `restart_delay_secs`, and `MAX_HEALTH_ATTEMPTS` values are hardcoded. Consider making these configurable for different deployment scenarios.
Also applies to: 208-208
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
applications/minotari_mcp_common/src/process_manager.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: MCozhusheck
PR: tari-project/tari#7262
File: applications/minotari_node/src/grpc/base_node_grpc_server.rs:533-536
Timestamp: 2025-06-30T06:54:18.170Z
Learning: In the Tari codebase, readiness status reporting is split between two gRPC servers: base_node_grpc_server's GetNetworkState method always reports READY when it can respond (indicating network-level readiness), while readiness_grpc_server handles granular readiness states like database initialization and migration progress via ReadinessStatusHandler.
applications/minotari_mcp_common/src/process_manager.rs (8)
Learnt from: hansieodendaal
PR: #7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of tokio::task::spawn(async move { with tokio::task::spawn_blocking().await in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the fetch_peers_from_connection and collect_peer_stream functions rely on RPC streaming, and when the main connection is closed by another process, collect_peer_stream times out after STREAM_ITEM_TIMEOUT because it cannot detect that the peer can no longer respond, returning an empty vector of peers. This is why the connection state check is important for the retry logic.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In comms/core/src/connectivity/manager.rs, the disconnect_if_unused call uses parameters (0, 0) intentionally to be conservative and avoid terminating connections that might be in use. This approach is used while troubleshooting inbound connection drop issues to eliminate the disconnect logic as a potential cause.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: cargo check with stable
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: ci
- GitHub Check: ledger build tests
Description
This feature introduces a comprehensive implementation of the Model Context Protocol (MCP), enabling secure AI agent integration with the Tari network. It adds three new crates to the workspace:
- minotari_mcp_common: A robust, shared framework for building MCP servers. It includes a security-first architecture with local-only binding, permission levels (read-only vs. control), rate limiting, audit logging, input sanitization, and utilities for auto-launching and managing dependent Tari processes.
- minotari_mcp_node: A new MCP server application that exposes read-only base node functionality to AI agents. This allows AI to query blockchain state, network status, and mempool information safely.
- minotari_mcp_wallet: A new MCP server application that provides AI agents with access to wallet operations. It supports both read-only queries (e.g., checking balance) and protected control operations (e.g., sending funds), with an optional user confirmation layer for added security.

Additionally, this PR includes:
- Code cleanups (a `.clone()` was removed from a `Copy` type).
- Removal of unused code (`internal_output_capture`) from the integration tests.
- Updates to the `README.md` and a new `docs/mcp/TARI_MCP_IMPLEMENTATION.md` file explaining the architecture and usage.

Motivation and Context
The primary motivation is to allow AI agents (like Claude, GPT-4, etc.) to securely and effectively interact with Tari blockchain functionality. Direct gRPC access is often too complex and risky for AI models. The Model Context Protocol provides a standardized, secure, and agent-friendly interface.
This implementation opens up new possibilities for building AI-powered applications on top of Tari, such as:
By providing a structured and secure access layer, we reduce the risk of AI agents making unintended state-changing operations while empowering them with rich data and functionality.
How Has This Been Tested?
- The `minotari_mcp_node` and `minotari_mcp_wallet` applications have been manually started and tested. This includes verifying their ability to connect to the underlying Tari base node and wallet via gRPC.
- Core components of the `minotari_mcp_common` framework, such as the security context, registries, and process launchers, have been tested during development.
- Start a server and interact with it (e.g., via curl). Send a `tools/list` request to one of the servers and verify it returns a list of available tools: {"jsonrpc":"2.0","method":"tools/list","id":1}
- Review the `README.md` and the detailed overview in `docs/mcp/TARI_MCP_IMPLEMENTATION.md` for correctness and clarity.
Summary by CodeRabbit