feat!: add burnt commitments index #7512
hansieodendaal wants to merge 7 commits into tari-project:development from
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

Adds burn-commitment support end-to-end: new CLI command, new BurntCommitmentInfo type, async/backend DB APIs and LMDB index with migration/rebuild status, transaction-builder and validation changes for burn outputs, tests, and assorted logging/error-wrapping adjustments.

Changes
Sequence Diagram(s)

sequenceDiagram
autonumber
actor User
participant CLI as minotari_node CLI
participant Ctx as CommandContext
participant AsyncDB as AsyncBlockchainDb
participant LMDB as LMDBDatabase
User->>CLI: get-burnt-commitments [--start --end | --all] [--output]
CLI->>Ctx: Command::GetBurntCommitments(args)
Ctx->>Ctx: validate/compute height range vs best height
Ctx->>AsyncDB: fetch_burnt_commitments_info(range)
AsyncDB->>LMDB: fetch_burnt_commitments_info(range) (blocking)
LMDB-->>AsyncDB: Vec<BurntCommitmentInfo>
AsyncDB-->>Ctx: Vec<BurntCommitmentInfo>
alt output_to_file
Ctx->>Ctx: write CSV (create dir, manage header)
Ctx->>User: print file path / success
else
Ctx->>User: render table to console
end
sequenceDiagram
autonumber
participant DB as BlockchainDatabase
participant RT as Tokio Runtime
participant BG as Background Task
participant LMDB as LMDBDatabase
DB->>RT: start() (runtime check)
RT-->>DB: runtime available
DB->>BG: spawn burn-index rebuild task
loop per height
BG->>LMDB: update_burn_commitments_index(height, header)
LMDB-->>BG: BurnCommitmentIndexRebuildStatus
end
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (3)
Force-pushed from 956a56b to b88ea50
- Added a burnt commitments index table to the blockchain database to ensure no duplicate burns can be made.
- Added a background migration task to rebuild the burnt commitments index table.
- Fixed a bug in TransactionBuilder where not all output types were evaluated for burnt commitments.
- Added a method to retrieve burnt commitments info from the blockchain database for all blocks or an optional inclusive range.
- Wired the retrieve-burnt-commitments-info method into the command line.
- Reduced `common_sqlite::sqlite_connection_pool` spam logs.
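The "no duplicate burns" guarantee described in the first bullet can be sketched with an in-memory stand-in for the LMDB index. `BurnIndex` and its `HashMap` backing are illustrative only, not the actual Tari types, which key commitments to block data in LMDB:

```rust
use std::collections::HashMap;

/// In-memory stand-in for the LMDB burnt-commitments index table.
#[derive(Default)]
struct BurnIndex {
    entries: HashMap<Vec<u8>, u64>, // commitment bytes -> height it was burnt at
}

impl BurnIndex {
    /// Reject a second burn of the same commitment, mirroring the
    /// "no duplicate burns" guarantee the new index table provides.
    fn insert(&mut self, commitment: Vec<u8>, height: u64) -> Result<(), String> {
        if let Some(existing) = self.entries.get(&commitment) {
            return Err(format!("commitment already burnt at height {existing}"));
        }
        self.entries.insert(commitment, height);
        Ok(())
    }
}
```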
Force-pushed from b88ea50 to e494fac
Actionable comments posted: 1
🧹 Nitpick comments (19)
common_sqlite/src/sqlite_connection_pool.rs (2)
128-131: Avoid misleading "Acquired" log on error; guard with is_ok().

Currently logs "Acquired ..." even if checkout failed. Only log on success.

Apply this diff:

```diff
-        let timing = start.elapsed();
-        if timing > Duration::from_millis(100) {
-            debug!(target: LOG_TARGET, "Acquired 'get_pooled_connection' from pool in {:.2?}", timing);
-        }
+        let timing = start.elapsed();
+        if connection.is_ok() && timing > Duration::from_millis(100) {
+            debug!(target: LOG_TARGET, "Acquired 'get_pooled_connection' from pool in {:.2?}", timing);
+        }
```
150-153: Same here: log only on successful acquisition.

Apply this diff:

```diff
-        let timing = start.elapsed();
-        if timing > Duration::from_millis(100) {
-            debug!(target: LOG_TARGET, "Acquired 'get_pooled_connection_timeout' from pool in {:.2?}", timing);
-        }
+        let timing = start.elapsed();
+        if connection.is_ok() && timing > Duration::from_millis(100) {
+            debug!(target: LOG_TARGET, "Acquired 'get_pooled_connection_timeout' from pool in {:.2?}", timing);
+        }
```

base_layer/transaction_components/src/validation/aggregate_body/aggregate_body_internal_validator.rs (1)
644-681: Suggest removing the debugging print statement.

The test logic is sound and properly validates that multiple burnt outputs with only a single burn kernel should fail validation. However, there's a debugging print statement on Line 676 that should be removed.

Apply this diff to remove the debugging statement:

```diff
-        // The check_total_burned function should fail
-        println!("Validate: {:?}", check_total_burned(&body));
-        assert!(check_total_burned(&body)
+        // The check_total_burned function should fail
+        assert!(check_total_burned(&body)
```

base_layer/core/src/chain_storage/tests/blockchain_database.rs (2)
701-701: Fix typo in comment

Change "form" to "from".

```diff
-    // Verify fetch burnt commitments info form a block that does not contain any returns empty
+    // Verify fetch burnt commitments info from a block that does not contain any returns empty
```
681-699: Optionally assert heights for stronger range validation

Also assert each returned item's header_height == last_block_height to confirm correct range filtering.
base_layer/transaction_components/src/transaction_components/transaction_kernel.rs (1)
254-254: Align terminology in doc comment

Use "burnt" to match type/feature naming elsewhere.

```diff
-/// Information about a burned commitment that can be stored in the database
+/// Information about a burnt commitment that can be stored in the database
```

base_layer/core/src/chain_storage/async_db.rs (1)
191-192: Align trace label with function name

Minor log label mismatch.

```diff
-    make_async_fn!(fetch_burnt_commitments_info(block_height_range: Option<RangeInclusive<u64>>) -> Vec<BurntCommitmentInfo>, "fetch_all_burnt_commitment_info");
+    make_async_fn!(fetch_burnt_commitments_info(block_height_range: Option<RangeInclusive<u64>>) -> Vec<BurntCommitmentInfo>, "fetch_burnt_commitments_info");
```

applications/minotari_node/src/commands/command/get_burnt_commitments.rs (3)
23-24: Remove unused import

After removing the sleep, Duration is unused.

```diff
-use std::{fs, fs::OpenOptions, io::Write, path::PathBuf, time::Duration};
+use std::{fs, fs::OpenOptions, io::Write, path::PathBuf};
```
148-156: Prefer Path::exists over fs::exists

Simpler and avoids handling a Result.

```diff
-        let file_path = if let Some(path) = output_directory.clone() {
-            if let Ok(true) = fs::exists(&path) {
+        let file_path = if let Some(path) = output_directory.clone() {
+            if path.exists() {
                 path.join(file_name)
             } else if fs::create_dir_all(&path).is_ok() {
                 path.join(file_name)
             } else {
                 PathBuf::from(file_name)
             }
         } else {
             PathBuf::from(file_name)
         };
```
159-187: Remove unnecessary sleep and handle open errors

Sleep is not needed after remove_file; add an explicit error if the file open fails.

```diff
         let _unused = fs::remove_file(&file_path);
-        tokio::time::sleep(Duration::from_secs(1)).await;
         let write_header = !file_path.exists();
         if let Ok(mut file) = OpenOptions::new().append(true).create(true).open(file_path.clone()) {
             let mut file_content = String::new();
             if write_header {
                 file_content.push_str("#,Commitment,Height,Header Hash,Kernel Hash,\n");
             }
             for (i, item) in burnt_commitments_info.iter().enumerate() {
                 file_content.push_str(&format!(
                     "{},{},{},{},{},\n",
                     i + 1,
                     item.commitment.to_hex(),
                     item.header_height,
                     item.header_hash.to_hex(),
                     item.kernel_hash.to_hex()
                 ));
             }
             match writeln!(file, "{file_content}") {
                 Ok(_) => {
                     println!("📝 Result written to file: {}", file_path.display());
                 },
                 Err(e) => {
                     println!("❌ Error writing result to file: {e}");
                 },
             }
+        } else {
+            println!("❌ Could not open file for writing: {}", file_path.display());
         }
```

applications/minotari_node/src/commands/command/mod.rs (2)
143-143: Align type name with command/module spelling ("burnt")

The variant is GetBurntCommitments, the module is get_burnt_commitments, but the args type is ArgsGetBurnCommitments (missing the "t"). For consistency and grep-ability, prefer ArgsGetBurntCommitments.

If the struct is already named without "t", keep this as-is for now and consider a follow-up renaming PR across the codebase.
243-253: Consider raising the timeout for GetBurntCommitments

Fetching all burnt commitments (default range) can scan the full chain and may exceed 30s on larger DBs. Consider moving this to the longer timeout bucket or making the command default to a bounded range unless explicitly asked for all.
base_layer/core/src/chain_storage/blockchain_backend.rs (1)
113-118: Doc comment mismatch ("burnt kernels")

The new method returns burnt commitment info, not "burnt kernels". Update the doc to avoid confusion.

```diff
-    /// Fetch all burnt kernels in all blocks
+    /// Fetch burnt-commitment info
+    ///
+    /// Returns all burnt-commitment entries for the given inclusive height range.
+    /// If `None`, the full range `0..=tip` is used. Errors on empty ranges or if end > tip.
```

base_layer/core/src/chain_storage/blockchain_database.rs (2)
555-623: Start from next height to avoid redundant work

Reprocessing last_rebuild_height is safe (idempotent) but unnecessary. Start at last_rebuild_height + 1 when present to save cycles on large chains.

```diff
-        let start_height = initial_status.last_rebuild_height.unwrap_or_default();
+        let start_height = initial_status
+            .last_rebuild_height
+            .map(|h| h.saturating_add(1))
+            .unwrap_or_default();
```
3110-3127: Copy-paste comment nit

Update the "accumulated data" mention to "burn commitment indexes" to match the function's purpose.

```diff
-        // Safety check to ensure we do not rebuild accumulated data for a height that has been reorged out.
+        // Safety check to ensure we do not rebuild burn commitment indexes for a height that has been reorged out.
```

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (4)
942-1000: Consider extracting constants for better readability.

The burn commitment insertion logic is correct, but consider extracting the table name string as a constant to avoid duplication.

Add a constant at the module level:

```diff
+const KERNEL_BURN_COMMITMENT_INDEX_NAME: &str = "kernel_burn_commitment_index";
```

Then use it throughout:

```diff
-            "kernel_burn_commitment_index",
+            KERNEL_BURN_COMMITMENT_INDEX_NAME,
```
979-989: Add tracing for duplicate detection during rebuild.

When a duplicate is found during the rebuild process, it would be helpful to log this for debugging purposes.

```diff
             if self.burn_commitment_exists_in_history_before_height_full_search(
                 txn,
                 header_hash,
                 burn_commitment,
                 last_rebuild_height,
             )? {
+                trace!(
+                    target: LOG_TARGET,
+                    "Burn commitment '{}' already exists in history before height",
+                    burn_commitment.to_hex()
+                );
                 return Err(ChainStorageError::KeyExists {
                     table_name: "kernel_burn_commitment_index",
                     key: burn_commitment.to_hex(),
                 });
             }
```
2885-2938: Add input validation for large ranges.

Consider adding a check to prevent excessively large ranges that could cause performance issues.

```diff
     fn fetch_burnt_commitments_info(
         &self,
         block_height_range: Option<RangeInclusive<u64>>,
     ) -> Result<Vec<BurntCommitmentInfo>, ChainStorageError> {
+        const MAX_RANGE_SIZE: u64 = 100_000; // Adjust based on requirements
         let txn = self.read_transaction()?;
         let mut result = Vec::new();
         let best_block_height = self.fetch_chain_metadata()?.best_block_height();
         let burnt_commitments_info_range = if let Some(range) = block_height_range {
             if range.is_empty() {
                 return Err(ChainStorageError::InvalidArguments {
                     message: "The provided block height range is empty".to_string(),
                     func: "fetch_burnt_commitments_info",
                     arg: "block_height_range",
                 });
             }
+            let range_size = range.end() - range.start() + 1;
+            if range_size > MAX_RANGE_SIZE {
+                return Err(ChainStorageError::InvalidArguments {
+                    message: format!(
+                        "The provided block height range is too large ({} blocks). Maximum allowed: {}",
+                        range_size, MAX_RANGE_SIZE
+                    ),
+                    func: "fetch_burnt_commitments_info",
+                    arg: "block_height_range",
+                });
+            }
             if range.end() > &best_block_height {
```
3271-3326: Add error logging for duplicate burn commitments.

When a duplicate is encountered during index rebuilding, log it at the appropriate level for better observability.

```diff
             match err {
                 ChainStorageError::KeyExists { .. } => {
-                    debug!(
+                    warn!(
                         target: LOG_TARGET,
-                        "Burn commitment index already had entry for '{}'",
+                        "Duplicate burn commitment found at height {}: '{}'",
+                        height,
                         burn_commitment.to_hex()
                     );
                 },
                 _ => return Err(err),
             }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- applications/minotari_node/src/commands/command/get_burnt_commitments.rs (1 hunks)
- applications/minotari_node/src/commands/command/mod.rs (4 hunks)
- base_layer/core/src/base_node/comms_interface/comms_response.rs (3 hunks)
- base_layer/core/src/chain_storage/async_db.rs (3 hunks)
- base_layer/core/src/chain_storage/blockchain_backend.rs (6 hunks)
- base_layer/core/src/chain_storage/blockchain_database.rs (8 hunks)
- base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (21 hunks)
- base_layer/core/src/chain_storage/lmdb_db/mod.rs (1 hunks)
- base_layer/core/src/chain_storage/mod.rs (1 hunks)
- base_layer/core/src/chain_storage/tests/blockchain_database.rs (2 hunks)
- base_layer/core/src/test_helpers/blockchain.rs (6 hunks)
- base_layer/transaction_components/src/test_helpers/test_helpers_functions.rs (3 hunks)
- base_layer/transaction_components/src/transaction_builder/builder.rs (3 hunks)
- base_layer/transaction_components/src/transaction_builder/error.rs (1 hunks)
- base_layer/transaction_components/src/transaction_components/mod.rs (1 hunks)
- base_layer/transaction_components/src/transaction_components/transaction_kernel.rs (2 hunks)
- base_layer/transaction_components/src/validation/aggregate_body/aggregate_body_internal_validator.rs (1 hunks)
- common_sqlite/src/sqlite_connection_pool.rs (3 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-11T15:37:17.354Z
Learnt from: hansieodendaal
PR: tari-project/tari#7387
File: base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs:2896-2927
Timestamp: 2025-08-11T15:37:17.354Z
Learning: The `update_accumulated_difficulty` method in `base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs` is migration-specific code that only runs during a once-off migration when existing ChainHeader data is present in LMDB. The consistency guarantees are upheld by the calling function `process_accumulated_data_for_height`, so additional validation within this method is redundant.
Applied to files:
- base_layer/core/src/test_helpers/blockchain.rs
- base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
📚 Learning: 2025-07-04T10:56:46.079Z
Learnt from: hansieodendaal
PR: tari-project/tari#7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of `tokio::task::spawn(async move {` with `tokio::task::spawn_blocking().await` in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Applied to files:
base_layer/core/src/chain_storage/blockchain_database.rs
📚 Learning: 2025-07-15T12:23:14.650Z
Learnt from: hansieodendaal
PR: tari-project/tari#7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Applied to files:
base_layer/core/src/chain_storage/blockchain_database.rs
🧬 Code graph analysis (8)
base_layer/transaction_components/src/validation/aggregate_body/aggregate_body_internal_validator.rs (2)
base_layer/transaction_components/src/key_manager/memory_key_manager.rs (1)
- create_memory_key_manager (68-70)

base_layer/transaction_components/src/test_helpers/test_helpers_functions.rs (4)
- create_utxo (812-880)
- default (217-228)
- create_test_kernel (799-809)
- new (106-139)
applications/minotari_node/src/commands/command/get_burnt_commitments.rs (2)
applications/minotari_node/src/commands/command/mod.rs (3)
- handle_command (165-165)
- handle_command (280-324)
- new (187-205)

applications/minotari_node/src/builder.rs (1)
- blockchain_db (154-156)
base_layer/core/src/chain_storage/async_db.rs (4)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
- fetch_burnt_commitments_info (828-834)

base_layer/core/src/chain_storage/blockchain_backend.rs (1)
- fetch_burnt_commitments_info (114-117)

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (1)
- fetch_burnt_commitments_info (2885-2938)

base_layer/core/src/test_helpers/blockchain.rs (1)
- fetch_burnt_commitments_info (304-312)
base_layer/core/src/test_helpers/blockchain.rs (3)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
- fetch_burnt_commitments_info (828-834)

base_layer/core/src/chain_storage/blockchain_backend.rs (3)
- fetch_burnt_commitments_info (114-117)
- fetch_burn_commitments_index_rebuild_status (173-175)
- update_burn_commitments_index (192-196)

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (3)
- fetch_burnt_commitments_info (2885-2938)
- fetch_burn_commitments_index_rebuild_status (3142-3156)
- update_burn_commitments_index (3271-3326)
base_layer/core/src/chain_storage/blockchain_backend.rs (3)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
- fetch_burnt_commitments_info (828-834)

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (3)
- fetch_burnt_commitments_info (2885-2938)
- fetch_burn_commitments_index_rebuild_status (3142-3156)
- update_burn_commitments_index (3271-3326)

base_layer/core/src/test_helpers/blockchain.rs (3)
- fetch_burnt_commitments_info (304-312)
- fetch_burn_commitments_index_rebuild_status (395-399)
- update_burn_commitments_index (426-435)
base_layer/core/src/chain_storage/blockchain_database.rs (3)
base_layer/core/src/test_helpers/blockchain.rs (3)
- db (203-205)
- db (840-842)
- fetch_burnt_commitments_info (304-312)

base_layer/core/src/chain_storage/blockchain_backend.rs (1)
- fetch_burnt_commitments_info (114-117)

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (1)
- fetch_burnt_commitments_info (2885-2938)
base_layer/core/src/chain_storage/tests/blockchain_database.rs (5)
base_layer/core/src/chain_storage/blockchain_database.rs (4)
- db (2073-2093)
- setup (4142-4157)
- clone (197-203)
- clone (3018-3027)

base_layer/transaction_key_manager/src/memory_db_key_manager.rs (1)
- create_memory_db_key_manager (89-91)

base_layer/transaction_components/src/transaction_components/wallet_output.rs (3)
- features (682-684)
- script_key_id (724-726)
- commitment (798-800)

base_layer/transaction_components/src/test_helpers/test_helpers_functions.rs (1)
- schema_to_transaction (882-895)

base_layer/core/src/test_helpers/mod.rs (1)
- default_coinbase_entities (100-112)
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (3)
base_layer/core/src/chain_storage/lmdb_db/lmdb.rs (5)
- lmdb_get (283-306)
- lmdb_insert (69-111)
- lmdb_exists (372-383)
- lmdb_delete (190-203)
- lmdb_replace (140-175)

base_layer/core/src/chain_storage/blockchain_database.rs (5)
- fetch_burnt_commitments_info (828-834)
- new (183-193)
- new (255-272)
- write (686-689)
- db (2073-2093)

base_layer/core/src/chain_storage/blockchain_backend.rs (4)
- fetch_burnt_commitments_info (114-117)
- fetch_burn_commitments_index_rebuild_status (173-175)
- update_burn_commitments_index (192-196)
- write (59-59)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: ci
- GitHub Check: cargo check with stable
- GitHub Check: ledger build tests
- GitHub Check: wasm build tests
- GitHub Check: Cucumber tests / FFI
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (45)
common_sqlite/src/sqlite_connection_pool.rs (1)
171-174: LGTM: conditional debug log only on success.

This block already avoids the "Acquired" message when no connection is obtained.
base_layer/core/src/chain_storage/lmdb_db/mod.rs (1)
44-44: LGTM!

The re-export of BurnCommitmentIndexRebuildStatus makes it accessible from the public API. This change is properly aligned with the broader PR objectives of exposing burn commitment functionality.
75-75: LGTM!

The re-export of BurntCommitmentInfo properly exposes the new data structure for public consumption, enabling other modules to work with burnt commitment information.
74-74: LGTM!

The re-export of BurnCommitmentIndexRebuildStatus through the chain_storage module's public API is consistent with the other status types and properly exposes the burn commitment index rebuild functionality.
43-43: LGTM!

The import of TxType is necessary for the burn transaction handling logic added in the test helper functions.
718-722: LGTM!

The burn handling logic correctly configures the transaction builder when a burned output is detected, setting the transaction type to Burn and kernel features to BURN_KERNEL, which is consistent with the burn commitment validation requirements.
747-751: LGTM!

The burn handling logic for the to_outputs path is consistent with the similar logic above, properly configuring burn commitment, transaction type, and kernel features when a burned UTXO is detected.
39-39: LGTM!

The import of OutputType is necessary for the burn verification logic that checks if an output is of type OutputType::Burn.
690-701: LGTM!

The verify_burn method properly enforces the constraint that only a single burn commitment is allowed per transaction. The logic correctly validates that all burned outputs share the same commitment, or records the commitment if it's the first one encountered.
724-726: LGTM!

The call to verify_burn for custom outputs ensures burn commitment validation is performed during transaction building.
729-730: LGTM!

The call to verify_burn for recipient outputs ensures consistent burn commitment validation across all output types.
735-741: LGTM!

The validation logic correctly enforces the relationship between transaction type and burn commitment:

- If TxType::Burn is set, a burn commitment must be present
- If a burn commitment exists, the transaction type must be TxType::Burn

This ensures consistency between the transaction type and its actual contents.
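A minimal sketch of the single-burn-commitment rule these calls enforce, using a hypothetical `BurnTracker` in place of the real builder state (the actual code uses Tari commitment types, not byte slices):

```rust
/// Hypothetical stand-in for the builder's burn state.
#[derive(Default)]
struct BurnTracker {
    burn_commitment: Option<Vec<u8>>,
}

impl BurnTracker {
    /// Record a burnt output's commitment, rejecting any second, different
    /// commitment so only a single burn is allowed per transaction.
    fn verify_burn(&mut self, commitment: &[u8]) -> Result<(), String> {
        match &self.burn_commitment {
            Some(existing) if existing.as_slice() != commitment => {
                Err("only a single burn commitment is allowed per transaction".to_string())
            },
            Some(_) => Ok(()),
            None => {
                self.burn_commitment = Some(commitment.to_vec());
                Ok(())
            },
        }
    }
}
```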
base_layer/transaction_components/src/transaction_builder/error.rs (1)
52-55: LGTM!

The new error variants properly handle burn commitment consistency validation:

- TxTypeBurnWithNoBurnCommitment for when the burn type is set but no commitment is found
- BurnCommitmentWithoutTxTypeBurn for when a commitment exists but the transaction type isn't burn

The error messages are clear and descriptive.
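The two-way rule behind these variants can be sketched with simplified stand-in types (the real error and TxType enums live in the Tari crates and carry more variants):

```rust
/// Simplified stand-ins for the real builder error and transaction types.
#[derive(Debug, PartialEq)]
enum BuilderError {
    TxTypeBurnWithNoBurnCommitment,
    BurnCommitmentWithoutTxTypeBurn,
}

#[derive(Debug, PartialEq)]
enum TxType {
    Standard,
    Burn,
}

/// A Burn transaction must carry a burn commitment, and a burn commitment
/// may only appear in a Burn transaction.
fn check_burn_consistency(tx_type: &TxType, burn_commitment: Option<&[u8]>) -> Result<(), BuilderError> {
    match (tx_type, burn_commitment) {
        (TxType::Burn, None) => Err(BuilderError::TxTypeBurnWithNoBurnCommitment),
        (TxType::Standard, Some(_)) => Err(BuilderError::BurnCommitmentWithoutTxTypeBurn),
        _ => Ok(()),
    }
}
```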
base_layer/core/src/base_node/comms_interface/comms_response.rs (3)
36-42: LGTM!

The import reorganization properly includes BurntCommitmentInfo alongside other transaction components, maintaining consistency with the module structure.
59-59: LGTM!

The addition of the BurntCommitmentsInfo(Vec<BurntCommitmentInfo>) variant to the NodeCommsResponse enum properly extends the communication interface to handle burnt commitments data.
92-92: LGTM!

The Display implementation properly handles the new BurntCommitmentsInfo variant, maintaining consistency with the formatting pattern used for other variants.
621-706: Solid test coverage for burnt commitments retrieval

The test exercises ranged, full-chain, and empty-range cases correctly.
base_layer/transaction_components/src/transaction_components/transaction_kernel.rs (1)
254-262: BurntCommitmentInfo addition looks good

Public type with serde/borsh derives is appropriate and aligns with storage use.
base_layer/core/src/chain_storage/async_db.rs (1)
188-195: LGTM on async API addition

Signature and routing via the blocking pool match existing patterns.
applications/minotari_node/src/commands/command/get_burnt_commitments.rs (1)
57-116: CLI handler logic reads well

Range handling and delegation to the backend are sound; console/CSV outputs are reasonable defaults.
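The range handling praised above can be sketched as follows; `normalize_range` and its error strings are hypothetical stand-ins, not the actual Tari API, but they capture the documented rules (None means the full range, empty ranges error, and the end may not exceed the best height):

```rust
use std::ops::RangeInclusive;

/// Hypothetical sketch of the height-range rules used by the command.
fn normalize_range(
    range: Option<RangeInclusive<u64>>,
    tip: u64,
) -> Result<RangeInclusive<u64>, String> {
    match range {
        // No range given: use the full chain, genesis to tip.
        None => Ok(0..=tip),
        // An empty range (start > end) is rejected up front.
        Some(r) if r.is_empty() => Err("empty height range".to_string()),
        // The end of the range may not exceed the current best height.
        Some(r) if *r.end() > tip => Err(format!("end {} exceeds tip {}", r.end(), tip)),
        Some(r) => Ok(r),
    }
}
```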
base_layer/core/src/test_helpers/blockchain.rs (4)
52-60: Imports updated appropriately

New imports for BurntCommitmentInfo and RangeInclusive look correct.
304-313: Delegate burnt commitments fetch in TempDatabase

Forwarder implementation is consistent with other methods.
395-400: Index rebuild status exposure is fine

Matches trait changes and LMDB delegates.
426-435: Index update delegate is correct

Parameters align with the trait; good to have in the test helper.
applications/minotari_node/src/commands/command/mod.rs (2)
32-32: LGTM: module wired

Module get_burnt_commitments added and compiled in. No issues spotted.
312-323: LGTM: dispatch added

Command routed correctly through HandleCommand<Command>.
386-393: LGTM: background tasks gated by Tokio presence

Consistent with prior tasks; avoids panics when no runtime is available.
828-835: LGTM: API plumbing for burnt commitments info

Method cleanly proxies to the backend and matches the trait signature.
base_layer/core/src/chain_storage/blockchain_backend.rs (1)
172-176: Trait surface extended — in-repo backends updated; confirm out-of-tree backends

LMDBDatabase (base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs) and TempDatabase (base_layer/core/src/test_helpers/blockchain.rs) implement: fetch_burnt_commitments_info, fetch_burn_commitments_index_rebuild_status, update_burn_commitments_index. Confirm any external/out-of-tree BlockchainBackend implementations are updated to avoid breaking changes.
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (16)
93-93: LGTM!

The addition of RangeInclusive to the imports is appropriate for the new fetch_burnt_commitments_info method that accepts an optional range of block heights.
137-138: LGTM!

The import of BurntCommitmentInfo is necessary for the new burnt commitments index feature.
258-258: LGTM!

The database name follows the established naming convention and clearly indicates its purpose.
299-299: LGTM!

Correctly added to the list of database names to ensure it's properly managed.
356-356: LGTM!

Properly registered in the LMDB store builder with appropriate flags.
470-471: LGTM!

The database field is correctly documented and follows the naming pattern of other kernel index fields.
550-550: LGTM!

Database handle properly initialized in the constructor.
811-811: LGTM!

Database reference correctly added to the all_dbs array for telemetry and statistics.
1035-1073: LGTM! Thorough duplicate checking implementation.

The full search method correctly handles both quick index lookups and comprehensive scans from the last rebuild height. The logic properly prevents duplicate burn commitments.
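A rough sketch of the two-phase check described here: a fast lookup against the rebuilt index, then a fallback scan over heights the background rebuild has not reached yet. All names and containers below are illustrative, not the LMDB-backed originals:

```rust
use std::collections::HashSet;

/// Hypothetical two-phase duplicate check.
fn burn_exists(
    index: &HashSet<Vec<u8>>,          // commitments already in the rebuilt index
    unindexed_blocks: &[Vec<Vec<u8>>], // per-height burns not yet indexed
    commitment: &[u8],
) -> bool {
    if index.contains(commitment) {
        return true; // fast path: found in the rebuilt portion of the index
    }
    // slow path: full search over the not-yet-indexed heights
    unindexed_blocks
        .iter()
        .any(|burns| burns.iter().any(|c| c.as_slice() == commitment))
}
```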
1629-1643: LGTM! Safe handling of missing index entries.

The deletion code correctly handles the case where burn commitment index entries might be missing during background rebuilds by ignoring the delete error.
3141-3156: LGTM!

The fetch method correctly retrieves the burn commitment index rebuild status from metadata.
3992-3992: LGTM!

The new metadata key is properly added to the enum and display implementation.
Also applies to: 4015-4015
4042-4050: LGTM!

The struct is well-documented and follows the same pattern as other rebuild status structs.
4065-4065: LGTM!

The metadata value variant is correctly added with proper display formatting.
Also applies to: 4085-4087
4094-4094: LGTM!

Migration version correctly incremented to 8.
4468-4509: LGTM! Well-structured migration logic.

The migration correctly:
- Sets rebuild status as "rebuilt" for new databases
- Resets status to trigger background rebuild for existing databases
- Follows the established pattern of other migrations
The logging is informative and will help with debugging.
SWvheerden left a comment:
I am missing a validation check for unique kernels? It should be a validation check not a db write failure
```rust
        } else {
            PathBuf::from(file_name)
        };
        let _unused = fs::remove_file(&file_path);
```
I don't think we should auto delete, we should ask/ error the command
base_layer/transaction_components/src/transaction_builder/builder.rs (outdated; resolved)
base_layer/transaction_components/src/transaction_builder/builder.rs (outdated; resolved)
...er/transaction_components/src/validation/aggregate_body/aggregate_body_internal_validator.rs (outdated; resolved)
Merge commit: …el_indexes

# Conflicts:
# base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
applications/minotari_miner/src/run_miner.rs (2)
661-661: Potential panic: metadata.unwrap() before readiness check

tip.metadata is unwrapped before checking readiness, causing a panic when the node is not ready. Handle None first, then read the height.

Apply this minimal fix:

```diff
-    let longest_height = tip.clone().metadata.unwrap().best_block_height;
+    let longest_height = match tip.metadata.as_ref() {
+        Some(m) => m.best_block_height,
+        None => return Err(MinerError::NodeNotReady),
+    };
     if !tip.initial_sync_achieved || tip.metadata.is_none() {
         return Err(MinerError::NodeNotReady);
     }
```
265-269: Misleading log/message text refers to "base node" in p2pool path

This branch is for the p2pool connection, but the error logs and user-facing message mention "base node", which can confuse operators.

Suggested tweak:

```diff
-            error!(target: LOG_TARGET, "Could not connect to base node: {e}");
-            let msg = "Could not connect to base node. \nIs the base node's gRPC running? Try running it with \
-                       `--enable-grpc` or enable it in the config.";
+            error!(target: LOG_TARGET, "Could not connect to p2pool node: {e}");
+            let msg = "Could not connect to p2pool node. \nIs the p2pool gRPC running and reachable? Check address, TLS, and auth.";
```
🧹 Nitpick comments (27)
applications/minotari_app_grpc/src/authentication/server_interceptor.rs (2)
40-46: Avoid extra allocation when storing the hashed password

If create_salted_hashed_password(...) already returns SafePassword, you can pass it directly and skip the as_str().into() round-trip.

```diff
-            GrpcAuthentication::Basic { username, password } => GrpcAuthentication::Basic {
-                username,
-                password: create_salted_hashed_password(password.reveal()).ok()?.as_str().into(),
-            },
+            GrpcAuthentication::Basic { username, password } => GrpcAuthentication::Basic {
+                username,
+                password: create_salted_hashed_password(password.reveal()).ok()?,
+            },
```
100-104: Don’t log underlying auth error at warn levelThe error string could contain sensitive details from the Authorization header path. Prefer a generic warn and move the cause to debug/trace.
fn unauthenticated<E: ToString>(err: E) -> Status { - warn!(target: LOG_TARGET, "GRPC authentication failed: {}", err.to_string()); + // Avoid leaking potentially sensitive details at warn level. + warn!(target: LOG_TARGET, "GRPC authentication failed"); + debug!(target: LOG_TARGET, "Auth failure cause: {}", err.to_string()); Status::unauthenticated("Auth failed") }Please confirm that error types returned by parse_header/to_str cannot include raw credentials. If unsure, the above change is safer.
comms/core/src/connection_manager/dial_state.rs (1)
74-84: Make error boxing explicit to avoid relying on implicit conversions.With the new signature
Result<(), Box<Result<PeerConnection, ConnectionManagerError>>>,reply.send(result)?relies on aFrom<E> for Box<E>conversion. Make the intent explicit and future‑proof by mapping the error:- reply.send(result)?; + reply.send(result).map_err(Box::new)?;Optionally, if a dropped receiver isn't an error for this call path, consider swallowing the send error instead:
- reply.send(result)?;
+ let _ = reply.send(result);

Please confirm all call sites expecting `Result<(), Result<...>>` have been updated to handle `Box<Result<...>>`.

base_layer/transaction_components/src/validation/aggregate_body/aggregate_body_internal_validator.rs (3)
413-428: Use HashSet insertion to detect duplicate burn commitments (simpler, avoids Ord bound).

Current approach allocates a Vec, sorts, and dedups. A HashSet-based early-return avoids the Ord requirement on CompressedCommitment, is O(n), and is clearer.
Apply:
- // Multiple burned kernels with same commitment
- let mut burned_commitments = body
-     .kernels()
-     .iter()
-     .filter_map(|k| k.burn_commitment.clone())
-     .collect::<Vec<_>>();
- if !burned_commitments.is_empty() {
-     let burned_commitments_len = burned_commitments.len();
-     burned_commitments.sort();
-     burned_commitments.dedup();
-     if burned_commitments_len != burned_commitments.len() {
-         return Err(AggregatedBodyValidationError::InvalidBurnError(
-             "Multiple burned kernels with same commitment".to_string(),
-         ));
-     }
- }
+ // Multiple burned kernels with same commitment
+ {
+     let mut seen = HashSet::new();
+     for c in body.kernels().iter().filter_map(|k| k.burn_commitment.as_ref()) {
+         if !seen.insert(c.clone()) {
+             return Err(AggregatedBodyValidationError::InvalidBurnError(
+                 "Multiple burned kernels with same commitment".to_string(),
+             ));
+         }
+     }
+ }

Please confirm whether CompressedCommitment currently implements Ord. If not guaranteed, this change also prevents potential compilation friction.
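For reference, the insert-returns-false idiom in isolation. This is a stdlib-only sketch where a fixed-size byte array stands in for CompressedCommitment; all names are hypothetical:

```rust
use std::collections::HashSet;

// Hypothetical stand-in for CompressedCommitment: any Eq + Hash key works here.
type Commitment = [u8; 4];

/// Returns the first duplicate found, if any, in O(n) and without an Ord bound.
fn find_duplicate(commitments: &[Commitment]) -> Option<Commitment> {
    let mut seen = HashSet::with_capacity(commitments.len());
    for c in commitments {
        // `insert` returns false when the value was already present.
        if !seen.insert(*c) {
            return Some(*c);
        }
    }
    None
}

fn main() {
    assert_eq!(find_duplicate(&[[1, 0, 0, 0], [2, 0, 0, 0]]), None);
    assert_eq!(
        find_duplicate(&[[1, 0, 0, 0], [2, 0, 0, 0], [1, 0, 0, 0]]),
        Some([1, 0, 0, 0])
    );
    println!("ok");
}
```

The same early-return shape drops straight into the validator loop once the key is the real compressed commitment type.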
424-426: Stabilize error matching: prefer typed variants or shared constants for messages.

Tests assert on exact strings; small wording tweaks will break them. Consider dedicated error variants (e.g., InvalidBurnDuplicateKernel) or module-level &'static str constants reused by both code and tests.
661-764: Great coverage; rename test to reflect all scenarios (or split into focused tests).

The test exercises 4 cases, but the name implies only the “multiple burnt outputs with single burn kernel” case. Rename for clarity (or split into four tests).
Example rename:
-async fn it_fails_if_multiple_burnt_outputs_with_single_burn_kernel() {
+async fn check_total_burned_validation_scenarios() {

applications/minotari_merge_mining_proxy/src/block_template_manager.rs (2)
219-223: Boxed gRPC status mapping looks correct

The boxing aligns with MmProxyError::GrpcRequestError’s new shape. Optionally include status code/message in details for clearer logs.
Apply this diff to enrich details without cloning status:
- .await
- .map_err(|status| MmProxyError::GrpcRequestError {
-     status: Box::new(status),
-     details: "failed to get new block template".to_string(),
- })?
+ .await
+ .map_err(|status| {
+     let details = format!(
+         "failed to get new block template: {} ({})",
+         status.message(),
+         status.code()
+     );
+     MmProxyError::GrpcRequestError { status: Box::new(status), details }
+ })?
299-299: Fix log message typo

“Deseriale” → “Deserialize”.
- debug!(target: LOG_TARGET, "Deseriale Monero block template blob into Monero block",);
+ debug!(target: LOG_TARGET, "Deserialize Monero block template blob into Monero block",);

applications/minotari_merge_mining_proxy/src/proxy/inner.rs (2)
119-124: Include status context in details (optional)

Boxing is correct. Consider adding status code/message to details for faster triage.
- let result = base_node_client.get_tip_info(tari_rpc::Empty {}).await.map_err(|err| {
-     MmProxyError::GrpcRequestError {
-         status: Box::new(err),
-         details: "get_tip_info failed".to_string(),
-     }
- })?;
+ let result = base_node_client.get_tip_info(tari_rpc::Empty {}).await.map_err(|err| {
+     let details = format!("get_tip_info failed: {} ({})", err.message(), err.code());
+     MmProxyError::GrpcRequestError { status: Box::new(err), details }
+ })?;
530-533: Same here: enrich details for observability (optional)

Keeps the boxed status; add code/message to details.
- Err(err) => Err(MmProxyError::GrpcRequestError {
-     status: Box::new(err),
-     details: "failed to get header by hash".to_string(),
- }),
+ Err(err) => Err(MmProxyError::GrpcRequestError {
+     status: Box::new(err),
+     details: format!("failed to get header by hash: {} ({})", err.message(), err.code()),
+ }),

infrastructure/test_utils/src/enums.rs (1)
24-24: Nit: fix doc typo.
“decalred” → “declared”.

-/// Each extracted variable is decalred as mutable for maximum flexibility.
+/// Each extracted variable is declared as mutable for maximum flexibility.
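Since the macro in this file captures patterns, the `:pat_param` fragment it relies on can be shown in a stdlib-only sketch (the macro and enum below are hypothetical, not the real unpack_enum!):

```rust
// Minimal sketch of why `:pat_param` matters: under the Rust 2021 pattern
// rules a plain `:pat` swallows a top-level `|`, so alternatives must be
// captured as `:pat_param` fragments and recombined by the macro itself.
macro_rules! is_any_of {
    ($e:expr, $($p:pat_param)|+) => {
        match $e {
            $($p)|+ => true,
            _ => false,
        }
    };
}

#[derive(Debug)]
enum State {
    Idle,
    Running,
    Failed(u8),
}

fn main() {
    // The macro itself stitches the captured patterns back together with `|`.
    assert!(is_any_of!(State::Failed(3), State::Failed(_) | State::Idle));
    assert!(!is_any_of!(State::Running, State::Failed(_) | State::Idle));
    println!("ok");
}
```

The same capture style is what lets a macro accept `A | B` at the call site without the fragment eating the separator.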
151-152: Potential performance issue with cloning burn_commitment.

The cloning of `self.burn_commitment` on line 151 could be expensive for complex commitment structures. Consider passing a reference to `verify_burn` instead.

Based on learnings
Apply this diff to avoid the clone:
- let mut burn_commitment = self.burn_commitment.clone();
- Self::verify_burn(&mut burn_commitment, &recipient_output)?;
+ Self::verify_burn(&mut self.burn_commitment, &recipient_output)?;

applications/minotari_miner/src/errors.rs (1)
74-78: `From<tonic::Status>` impl is appropriate; consider convenience accessors (optional)

The impl enables `?` and `.map_err(MinerError::from)` uniformly. Optionally add helpers to expose `code()`/`message()` from the boxed status to improve logging without pattern matching everywhere.

Example:
 impl MinerError {
+    pub fn grpc_status(&self) -> Option<&tonic::Status> {
+        match self {
+            MinerError::GrpcStatus(s) => Some(s.as_ref()),
+            _ => None,
+        }
+    }
 }

base_layer/transaction_components/src/consensus/consensus_constants.rs (2)
383-391: Public API looks fine; ensure intent of `burn_active_height()` is clear

Method returns `Some(height)` only when burns are currently permitted, otherwise `None`. If callers ever need the historical activation height regardless of the current allowance, consider documenting that this value is masked by current permission, or provide an explicit `burns_permitted()` helper for clarity.

Is `burn_active_height()` used anywhere that expects the unconditional activation height?

505-506: Network configs: activation height semantics

You set `burn_active_height: Some(0)` where burns are permitted, and `None` where not. If the intent is “since genesis” vs. “disabled,” this is consistent. If you later enable burns at a nonzero height on any network, consider setting `Some(<activation_height>)` to make tooling (e.g., CLI displays) precise.

Do we plan nonzero activation heights for burns on any network?
Also applies to: 576-577, 640-641, 754-755, 811-812
base_layer/core/src/validation/aggregate_body/aggregate_body_chain_validator.rs (3)
231-237: Doc nit: typos

“will not br duplicated” → “will not be duplicated”; “the outputs does not” → “the outputs do not”.

-/// 3. that the outputs does not already exist in the UTXO set,
-/// 4. that the burn commitment will not br duplicated in the burn commitment index,
+/// 3. that the outputs do not already exist in the UTXO set,
+/// 4. that the burn commitment will not be duplicated in the burn commitment index,
246-256: Order checks to avoid unnecessary DB reads when burns are disallowed

If burns aren’t permitted at this height, we can skip the index lookup. Suggest checking allowance before duplicate-index lookup:
  check_not_duplicate_txo(db, output)?;
- check_not_duplicate_burn_commitment(db, output)?;
- check_burn_commitments_are_allowed(output, constants)?;
+ check_burn_commitments_are_allowed(output, constants)?;
+ check_not_duplicate_burn_commitment(db, output)?;
246-256: Optional: detect duplicate burn commitments within the same block

The index check only catches duplicates against chain history, not duplicates among outputs in the current body. If uniqueness within a block is desired, track seen burn commitments during iteration and fail on repeats.
Example patch:
 pub fn check_outputs<B: BlockchainBackend>(
     db: &B,
     constants: &ConsensusConstants,
     body: &AggregateBody,
     height: u64,
 ) -> Result<(), ValidationError> {
     let max_script_size = constants.max_script_byte_size();
     let max_encrypted_data_size = constants.max_extra_encrypted_data_byte_size();
+    let mut seen_burn_commitments = HashSet::new();
     for output in body.outputs() {
         let epoch = constants.block_height_to_epoch(height);
         check_tari_script_byte_size(&output.script, max_script_size)?;
         check_tari_encrypted_data_byte_size(&output.encrypted_data, max_encrypted_data_size)?;
         check_not_duplicate_txo(db, output)?;
-        check_not_duplicate_burn_commitment(db, output)?;
+        if output.features.output_type == OutputType::Burn {
+            if !seen_burn_commitments.insert(output.commitment.clone()) {
+                return Err(ValidationError::ContainsDuplicateBurnCommitment);
+            }
+        }
+        check_not_duplicate_burn_commitment(db, output)?;
         check_burn_commitments_are_allowed(output, constants)?;
         check_validator_node_registration(db, output, epoch)?;
         check_validator_node_exit(db, output, epoch)?;
         check_eviction_proof(db, output, constants)?;
     }
     Ok(())
 }

Confirm whether “duplicate within the same block” should be rejected or allowed as a single burn event.
base_layer/core/src/validation/helpers.rs (2)
230-246: Clarify comment; logic LGTM

The function enforces cross-block uniqueness via the kernel burn-commitment index. The comment has a small typo.

-/// This function checks that the burn commitment will not br duplicated in the burn commitment index.
+/// Ensures the burn commitment is not already present in the burn-commitment index (prevents cross-block duplicates).
247-261: Allow/disallow check is correct; consider explicit helper

The check relies on `permitted_output_types()` for the current height (as provided by the caller). Consider a tiny `consensus_constants.burns_permitted()` helper to encapsulate the permission intent and reduce call-site duplication.
112-117: New API surface: OK; consider streaming for large ranges
`fetch_burnt_commitments_info` returns a `Vec`. For very large ranges, an iterator/streaming reader could reduce memory spikes in tooling. Not a blocker for CLI usage.
171-176: Naming consistency (nit)
`fetch_kernel_burn_commitments_index` could be singular (`fetch_kernel_burn_commitment_index`) since it returns at most one mapping. Keep as-is if aligning with existing naming.
173-183: CSV formatting: drop trailing commas, write directly.

Minor polish: trailing commas add an empty column; also prefer write!/write_all to avoid a double newline.
- file_content.push_str("#,Commitment,Height,Header Hash,Kernel Hash,\n");
+ file_content.push_str("#,Commitment,Height,Header Hash,Kernel Hash\n");
@@
-     "{},{},{},{},{},\n",
+     "{},{},{},{},{}\n",
      i + 1,
      item.commitment.to_hex(),
      item.header_height,
      item.header_hash.to_hex(),
      item.kernel_hash.to_hex()
  ));
@@
- match writeln!(file, "{file_content}") {
+ match write!(file, "{}", file_content) {
120-124: Propagate file I/O errors to the caller (or exit non‑zero).

Right now errors are only printed. Returning Result from print_to_file and bubbling up improves UX and automates scripting.
Happy to provide a minimal Result<()> refactor and call-site handling.
base_layer/core/src/chain_storage/blockchain_database.rs (1)
385-391: Good: background tasks gated on Tokio runtime availability.

Nice guard to avoid spawning without a runtime. Consider whether the burnt-commitment index/migration task also needs to be started here (or is started elsewhere) for consistency.
Can you confirm where the burnt-commitment index rebuild task is kicked off? If not here, I can help add a similar starter.
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (2)
942-968: Fix typo in comment.

Line 966 has a typo: "Nohing here" should be "Nothing here".
Otherwise the burn commitment indexing logic is correct - properly validates that burn outputs are permitted at the current height before indexing the commitment.
Apply this diff to fix the typo:
- // Nohing here
+ // Nothing here
2813-2884: Consider adding range size limits for performance.

The method correctly validates the height range and cross-references with the burn commitment index. However, there's no limit on the range size, which could lead to performance issues for large ranges.
Consider adding a maximum range size validation:
 let burnt_commitments_info_range = if let Some(range) = block_height_range {
     if range.is_empty() {
         return Err(ChainStorageError::InvalidArguments {
             message: "The provided block height range is empty".to_string(),
             func: "fetch_burnt_commitments_info",
             arg: "block_height_range",
         });
     }
+    // Inclusive length: end - start + 1; 10_000 is a placeholder limit.
+    if range.end() - range.start() + 1 > 10_000 {
+        return Err(ChainStorageError::InvalidArguments {
+            message: "The provided block height range is too large".to_string(),
+            func: "fetch_burnt_commitments_info",
+            arg: "block_height_range",
+        });
+    }
     // ... rest of validation
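The empty-range and size-cap checks suggested above can be sketched standalone; here ChainStorageError is replaced with a plain String and the 10_000 cap is illustrative only:

```rust
use std::ops::RangeInclusive;

// Hypothetical cap; the real limit would be tuned to the node's workload.
const MAX_RANGE_LEN: u64 = 10_000;

fn validate_height_range(range: &RangeInclusive<u64>, best_height: u64) -> Result<(), String> {
    if range.is_empty() {
        return Err("The provided block height range is empty".to_string());
    }
    if *range.end() > best_height {
        return Err(format!("Range end {} exceeds best height {}", range.end(), best_height));
    }
    // Inclusive length: end - start + 1 (safe: the empty case returned above).
    let len = range.end() - range.start() + 1;
    if len > MAX_RANGE_LEN {
        return Err(format!("Range of {len} heights exceeds the {MAX_RANGE_LEN} limit"));
    }
    Ok(())
}

fn main() {
    assert!(validate_height_range(&(10..=5), 100).is_err()); // empty: start > end
    assert!(validate_height_range(&(0..=200), 100).is_err()); // beyond the tip
    assert!(validate_height_range(&(0..=99), 100).is_ok());
    println!("ok");
}
```

Note that RangeInclusive has no len(), so the inclusive length must be computed explicitly.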
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (23)
- applications/minotari_app_grpc/src/authentication/server_interceptor.rs (1 hunks)
- applications/minotari_merge_mining_proxy/src/block_template_manager.rs (1 hunks)
- applications/minotari_merge_mining_proxy/src/error.rs (2 hunks)
- applications/minotari_merge_mining_proxy/src/proxy/inner.rs (2 hunks)
- applications/minotari_miner/src/errors.rs (2 hunks)
- applications/minotari_miner/src/run_miner.rs (1 hunks)
- applications/minotari_node/src/commands/command/get_burnt_commitments.rs (1 hunks)
- applications/minotari_node/src/commands/command/mod.rs (4 hunks)
- applications/minotari_node/src/grpc/base_node_grpc_server.rs (1 hunks)
- base_layer/core/src/chain_storage/blockchain_backend.rs (4 hunks)
- base_layer/core/src/chain_storage/blockchain_database.rs (6 hunks)
- base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (22 hunks)
- base_layer/core/src/chain_storage/tests/blockchain_database.rs (2 hunks)
- base_layer/core/src/test_helpers/blockchain.rs (4 hunks)
- base_layer/core/src/validation/aggregate_body/aggregate_body_chain_validator.rs (3 hunks)
- base_layer/core/src/validation/error.rs (2 hunks)
- base_layer/core/src/validation/helpers.rs (2 hunks)
- base_layer/transaction_components/src/consensus/consensus_constants.rs (9 hunks)
- base_layer/transaction_components/src/transaction_builder/builder.rs (5 hunks)
- base_layer/transaction_components/src/validation/aggregate_body/aggregate_body_internal_validator.rs (2 hunks)
- base_layer/transaction_components/src/validation/helpers.rs (1 hunks)
- comms/core/src/connection_manager/dial_state.rs (1 hunks)
- infrastructure/test_utils/src/enums.rs (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- applications/minotari_node/src/grpc/base_node_grpc_server.rs
- base_layer/transaction_components/src/validation/helpers.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- base_layer/core/src/chain_storage/tests/blockchain_database.rs
- base_layer/core/src/test_helpers/blockchain.rs
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2025-07-04T10:56:46.079Z
Learnt from: hansieodendaal
PR: tari-project/tari#7280
File: base_layer/core/src/chain_storage/blockchain_database.rs:361-438
Timestamp: 2025-07-04T10:56:46.079Z
Learning: The combination of `tokio::task::spawn(async move {` with `tokio::task::spawn_blocking().await` in the payref rebuild background task works well and shuts down properly with the tokio environment, as confirmed by testing in the Tari codebase.
Applied to files:
base_layer/core/src/chain_storage/blockchain_database.rs
📚 Learning: 2025-07-15T12:23:14.650Z
Learnt from: hansieodendaal
PR: tari-project/tari#7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Applied to files:
base_layer/core/src/chain_storage/blockchain_database.rs
📚 Learning: 2025-08-11T15:37:17.354Z
Learnt from: hansieodendaal
PR: tari-project/tari#7387
File: base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs:2896-2927
Timestamp: 2025-08-11T15:37:17.354Z
Learning: The `update_accumulated_difficulty` method in `base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs` is migration-specific code that only runs during a once-off migration when existing ChainHeader data is present in LMDB. The consistency guarantees are upheld by the calling function `process_accumulated_data_for_height`, so additional validation within this method is redundant.
Applied to files:
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
📚 Learning: 2025-09-23T08:35:00.046Z
Learnt from: hansieodendaal
PR: tari-project/tari#7512
File: base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs:794-794
Timestamp: 2025-09-23T08:35:00.046Z
Learning: The `all_dbs()` function in `base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs` should return exactly 36 database entries as confirmed by the maintainer hansieodendaal.
Applied to files:
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
📚 Learning: 2025-09-23T08:35:00.046Z
Learnt from: hansieodendaal
PR: tari-project/tari#7512
File: base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs:794-794
Timestamp: 2025-09-23T08:35:00.046Z
Learning: The `all_dbs()` function in `base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs` correctly returns exactly 36 database entries, matching its function signature `-> [(&'static str, &DatabaseRef); 36]`. This includes both regular DatabaseRef fields and the `.db` references from TypedDatabaseRef fields.
Applied to files:
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
🧬 Code graph analysis (10)
applications/minotari_miner/src/run_miner.rs (1)
applications/minotari_miner/src/errors.rs (1)
from(75-77)
applications/minotari_miner/src/errors.rs (1)
applications/minotari_merge_mining_proxy/src/error.rs (2)
from(132-137)
from(141-143)
applications/minotari_node/src/commands/command/get_burnt_commitments.rs (2)
applications/minotari_node/src/commands/command/mod.rs (2)
handle_command(165-165)
handle_command(282-326)

applications/minotari_node/src/builder.rs (1)
blockchain_db(154-156)
base_layer/core/src/validation/aggregate_body/aggregate_body_chain_validator.rs (1)
base_layer/core/src/validation/helpers.rs (4)
check_burn_commitments_are_allowed(248-261)
check_eviction_proof(333-389)
check_input_is_utxo(167-209)
check_not_duplicate_burn_commitment(231-245)
base_layer/core/src/validation/helpers.rs (2)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
consensus_constants(560-563)

base_layer/transaction_components/src/transaction_components/transaction_output.rs (1)
commitment(157-159)
base_layer/core/src/chain_storage/blockchain_database.rs (3)
base_layer/core/src/chain_storage/blockchain_backend.rs (1)
fetch_burnt_commitments_info(113-116)

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (1)

fetch_burnt_commitments_info(2813-2884)

base_layer/core/src/test_helpers/blockchain.rs (3)

fetch_burnt_commitments_info(303-311)
db(202-204)
db(832-834)
base_layer/transaction_components/src/validation/aggregate_body/aggregate_body_internal_validator.rs (3)
base_layer/transaction_components/src/key_manager/memory_key_manager.rs (1)
create_memory_key_manager(66-68)

base_layer/transaction_components/src/test_helpers/test_helpers_functions.rs (3)

create_utxo(812-880)
create_test_kernel(799-809)
new(106-139)

base_layer/transaction_components/src/transaction_components/transaction_kernel.rs (1)
new(72-90)
base_layer/transaction_components/src/consensus/consensus_constants.rs (1)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
consensus_constants(560-563)
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (5)
base_layer/core/src/consensus/base_node_consensus_manager.rs (2)
consensus_manager(105-107)
consensus_constants(96-98)

base_layer/node_components/src/blocks/chain_block.rs (8)

height(60-62)
height(125-127)
hash(64-66)
hash(133-135)
header(74-76)
header(137-139)
try_construct(49-58)
try_construct(114-123)

base_layer/core/src/chain_storage/lmdb_db/lmdb.rs (5)

lmdb_insert(69-111)
lmdb_delete(190-203)
lmdb_get(283-306)
lmdb_replace(140-175)
lmdb_get_typed(253-280)

base_layer/core/src/chain_storage/blockchain_backend.rs (4)

fetch_burnt_commitments_info(113-116)
fetch_kernel_burn_commitments_index(172-175)
write(58-58)
fetch_chain_header_by_height(68-68)

base_layer/core/src/base_node/proto/chain_metadata.rs (1)
best_block_height(99-101)
base_layer/core/src/chain_storage/blockchain_backend.rs (3)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
fetch_burnt_commitments_info(757-763)

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (2)

fetch_burnt_commitments_info(2813-2884)
fetch_kernel_burn_commitments_index(3088-3095)

base_layer/core/src/test_helpers/blockchain.rs (2)

fetch_burnt_commitments_info(303-311)
fetch_kernel_burn_commitments_index(394-402)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: wasm build tests
- GitHub Check: cargo check with stable
- GitHub Check: ci
- GitHub Check: ledger build tests
- GitHub Check: Cucumber tests / FFI
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (31)
applications/minotari_app_grpc/src/authentication/server_interceptor.rs (1)
51-57: Scoped lint suppression is appropriate here

Tonic’s Interceptor requires Result<_, tonic::Status>, so allowing clippy::result_large_err on this function is reasonable and localized. LGTM.
applications/minotari_merge_mining_proxy/src/error.rs (1)
74-79: Approve boxing of `GrpcRequestError` status

All instances now wrap `tonic::Status` in `Box::new`, aligning with the `#[source]` attribute and `From<tonic::Status>` implementation; no further changes needed.

infrastructure/test_utils/src/enums.rs (1)
90-96: Approve use of `:pat_param`; no breakages detected

The stable Rust toolchain (≥1.65) supports `:pat_param`, and no existing `unpack_enum!` call sites use a single top-level or-pattern.

base_layer/transaction_components/src/transaction_builder/builder.rs (5)
38-38: Import addition looks correct.

The `OutputType` import is needed for the new `verify_burn` function to check for `OutputType::Burn` variants.
306-307: Same cloning issue here.

Similar to the previous comment, the cloning on line 306 is unnecessary.
Apply the same fix:
- let mut burn_commitment = self.burn_commitment.clone();
- Self::verify_burn(&mut burn_commitment, &output)?;
+ Self::verify_burn(&mut self.burn_commitment, &output)?;
690-705: Burn commitment validation logic is sound.

The `verify_burn` function correctly:
- Checks if the output is a burn type
- Ensures consistency by comparing commitments
- Records the first burn commitment encountered
- Returns appropriate error on mismatch
The logic prevents multiple different burn commitments in a single transaction.
709-709: Method signature change from `mut self` to `self` is correct.

The change removes unnecessary mutability since the method now consumes `self`, and the burn commitment verification is done earlier in the flow rather than modifying the builder state.
737-743: Runtime burn commitment consistency checks are well-implemented.

The validation ensures:
- If transaction type is `Burn`, a burn commitment must exist
- If transaction type is not `Burn`, no burn commitment should be present

This prevents invalid transaction configurations and provides clear error messages.
applications/minotari_miner/src/run_miner.rs (3)
515-516: gRPC error mapping via `MinerError::from` is correct

The switch to `.map_err(MinerError::from)?` aligns with the new `From<tonic::Status>` impl and keeps the call sites clean.

Also applies to: 523-524
189-207: GrpcStatus boxing does not affect retry filter

Pattern matching on `Err(MinerError::GrpcStatus(_))` correctly matches the boxed variant; no change needed. A repo-wide search found no legacy `map_err` calls mapping directly to `GrpcStatus`. If you have other retry sites, consider normalizing them to treat `GrpcStatus` as retryable.
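As background on why boxing the GrpcStatus payload pays off: a Rust enum is as large as its biggest variant, so one oversized payload inflates every Result carrying the type. A stdlib-only sketch with a hypothetical 256-byte stand-in for tonic::Status:

```rust
// Hypothetical stand-in for a large error payload such as a gRPC status
// carrying metadata; the real tonic::Status is similarly heavyweight.
struct BigStatus {
    _details: [u8; 256],
}

// Variant payload stored inline: the enum must reserve room for it.
enum InlineError {
    Big(BigStatus),
    Io(std::io::Error),
}

// Variant payload behind a Box: the enum only stores a pointer.
enum BoxedError {
    Big(Box<BigStatus>),
    Io(std::io::Error),
}

fn main() {
    let inline = std::mem::size_of::<InlineError>();
    let boxed = std::mem::size_of::<BoxedError>();
    assert!(inline >= 256); // must fit the largest variant inline
    assert!(boxed < inline); // pointer-sized payload shrinks the whole enum
    println!("inline: {inline} bytes, boxed: {boxed} bytes");
}
```

The trade-off is one heap allocation on the error path, which is usually cold.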
281-286: No dedicated p2pool address in MinerConfig
`connect_sha_p2pool` correctly falls back to `config.base_node_grpc_address` because `MinerConfig` only defines `base_node_grpc_address: Option<String>` and has no `sha_p2pool_grpc_address` field. To introduce a separate p2pool address, add `pub sha_p2pool_grpc_address: Option<String>` to `MinerConfig` and update `connect_sha_p2pool` accordingly.

Likely an incorrect or invalid review comment.
applications/minotari_miner/src/errors.rs (1)
33-33: Boxing `tonic::Status` reduces enum size and avoids large-variant bloat

Good move; it mirrors patterns used elsewhere (e.g., merge-mining proxy) and keeps `MinerError` size reasonable.

base_layer/transaction_components/src/consensus/consensus_constants.rs (1)
1013-1016: Good: helper for standard-only output types

The dedicated helper improves readability for toggling burn permission. LGTM.
base_layer/core/src/validation/error.rs (1)
69-73: New burn-related errors wired into ban reasons — LGTM

Variants and ban mapping look consistent with existing duplicate/invalid cases.
Also applies to: 181-183
base_layer/core/src/chain_storage/blockchain_backend.rs (1)
112-117: All backend implementations updated with new trait methods

Verified that both `LMDBDatabase` and `TempDatabase` include `fetch_burnt_commitments_info` and `fetch_kernel_burn_commitments_index`.

applications/minotari_node/src/commands/command/get_burnt_commitments.rs (2)
166-171: Don’t auto-delete output; fail if the file exists (or gate with an explicit --overwrite).

This silently removes an existing file and sleeps. Prefer to error out if the file exists to avoid accidental data loss (matches prior feedback). If overwrite is desired, add an explicit flag and honor it.
- let _unused = fs::remove_file(&file_path);
- tokio::time::sleep(Duration::from_secs(1)).await;
-
- let write_header = !file_path.exists();
- if let Ok(mut file) = OpenOptions::new().append(true).create(true).open(file_path.clone()) {
+ if file_path.exists() {
+     eprintln!("❌ File already exists: {}. Refusing to overwrite. Use an explicit --overwrite flag if you intend to replace it.", file_path.display());
+     return;
+ }
+
+ let write_header = true;
+ if let Ok(mut file) = OpenOptions::new().write(true).create_new(true).open(file_path.clone()) {

If you’d like, I can add a boolean --overwrite flag (short = 'O' or similar) and wire it here. Based on learnings.
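The create_new(true) behaviour relied on above can be demonstrated in isolation; this stdlib-only sketch writes to a throwaway file under the system temp dir (the file name is hypothetical):

```rust
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("burnt_commitments_demo.csv");
    let _ = std::fs::remove_file(&path); // clean slate for the demo only

    // `create_new(true)` fails atomically if the file already exists,
    // instead of silently truncating or appending.
    let mut file = OpenOptions::new().write(true).create_new(true).open(&path)?;
    writeln!(file, "#,Commitment,Height,Header Hash,Kernel Hash")?;

    // A second create_new on the same path must fail: no accidental data loss.
    let second = OpenOptions::new().write(true).create_new(true).open(&path);
    assert_eq!(second.unwrap_err().kind(), std::io::ErrorKind::AlreadyExists);

    std::fs::remove_file(&path)?;
    println!("ok");
    Ok(())
}
```

Because the existence check and the creation happen in one syscall, this also avoids the check-then-open race that `!file_path.exists()` has.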
69-113: Range validation and clamping look good.

The start/end/all handling and best_height clamping are consistent with backend constraints.
applications/minotari_node/src/commands/command/mod.rs (4)
32-32: Module inclusion looks correct.

get_burnt_commitments is properly declared.
143-155: Enum variant wiring is correct.

New command is exposed and consistent with other commands.
255-259: Timeout classification makes sense (heavy DB ops).

600s aligns with other long-running DB tasks.
314-315: Dispatch arm wired correctly.

Calls through to the typed HandleCommand.
base_layer/core/src/chain_storage/blockchain_database.rs (1)
757-764: New fetch_burnt_commitments_info API is idiomatic.

Thin wrapper with read guard and delegation matches existing fetch_* patterns.
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (9)
93-93: LGTM!

The new imports `RangeInclusive` and `BurntCommitmentInfo` are appropriately added to support the burnt commitments index functionality.

Also applies to: 138-138
258-258: LGTM!

The burn commitment index database is properly integrated following the established pattern:
- Constant definition
- Registration in database names list
- Database creation in build_lmdb_store
- Field initialization in LMDBDatabase::new
- Inclusion in all_dbs array for stats/operations
Also applies to: 299-299, 356-356, 470-471, 550-550, 811-811
1557-1571: LGTM!

The burn commitment cleanup logic is appropriately implemented as best-effort deletion, which makes sense given that the index rebuild runs as a background task and may not be fully populated yet.
3087-3095: LGTM!

The `fetch_kernel_burn_commitments_index` method is straightforward and correctly implemented for looking up burn commitments in the index.
3874-3874: LGTM!

The `BurnCommitmentIndexRebuildStatus` metadata is properly integrated following the established pattern with appropriate enum variants and display implementations.

Also applies to: 3897-3897, 3924-3932, 3947-3947, 3967-3969
4289-4419: Well-structured migration with good chunking strategy.

The burn commitments index migration is well-implemented with:
- Proper handling of new vs existing databases
- Efficient 1000-height chunking to manage memory
- Graceful handling of duplicate key errors
- Progress tracking and status updates
The migration correctly uses `burn_active_height` to determine the starting point and processes only the relevant height range.
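The 1000-height chunking pattern in isolation: walk an inclusive range in fixed windows so each write transaction stays bounded. The worker below is a hypothetical stand-in for the real per-chunk index update:

```rust
const CHUNK_SIZE: u64 = 1000;

// Hypothetical per-chunk worker; the real migration would read headers and
// update the burn-commitment index inside one LMDB write transaction.
fn process_chunk(start: u64, end_inclusive: u64, processed: &mut Vec<(u64, u64)>) {
    processed.push((start, end_inclusive));
}

/// Walks [from_height, to_height] in CHUNK_SIZE windows, clamping the last one.
fn rebuild_index(from_height: u64, to_height: u64) -> Vec<(u64, u64)> {
    let mut processed = Vec::new();
    let mut start = from_height;
    while start <= to_height {
        let end = (start + CHUNK_SIZE - 1).min(to_height);
        process_chunk(start, end, &mut processed);
        // `end` is inclusive, so the next chunk starts one past it.
        start = end + 1;
    }
    processed
}

fn main() {
    let chunks = rebuild_index(0, 2500);
    assert_eq!(chunks, vec![(0, 999), (1000, 1999), (2000, 2500)]);
    println!("chunks: {chunks:?}");
}
```

Fixed windows like this keep per-transaction memory flat regardless of how far behind the index is.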
4442-4482: LGTM!

The `fetch_chain_header_by_height` helper function is properly implemented with appropriate error handling and block version compatibility. It follows the established pattern used elsewhere in the codebase.
584-584: LGTM!

The `consensus_manager.clone()` is necessary to make the consensus manager available in the migration logic and other parts of the database implementation.
593-593: LGTM!

The changes to pass the consensus manager to `run_migrations` and increment the migration version to 7 are necessary to support the burn commitment index migration.

Also applies to: 4975-4979
This PR is not for now, maybe required at a later stage when Ootle is ready.
Description
- `TransactionBuilder` where not all output types were evaluated for burnt commitments.

Note: This will cause a hard fork on mainnet, as we switch off burn commitments for the time being.
Motivation and Context
Burn commitments must be unique across the blockchain.
How Has This Been Tested?
- it_fails_if_multiple_burnt_outputs_with_single_burn_kernel
- it_fetches_all_burnt_commitments_info

What process can a PR reviewer use to test or verify this change?
Breaking Changes
BREAKING CHANGE: Changed mainnet consensus to disallow burnt commitments from height 106_000 - this will cause a hard fork.
Summary by CodeRabbit
New Features
Performance/Infrastructure
Validation
Reliability