feat!: add c29 pow algorithm #7398
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

Adds Cuckaroo PoW support (siphash, verification, difficulty), introduces BlockVersion and versioned accumulated-data storage (typed LMDB V1/V2), removes accumulated-data from protobuf surfaces, updates validation/consensus/grpc/miner/node/template flows and caches, and adds NOTICE/Apache license plus SQLite bundling feature.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client
participant GRPC as BaseNode gRPC
participant Cache as DataCache
participant Node as NodeService
Client->>GRPC: GetNewBlockTemplate(pow=Cuckaroo)
GRPC->>Cache: get_cuckaroo_new_block_template(tip)
alt cached
Cache-->>GRPC: template
else miss
GRPC->>Node: request_new_block_template(Cuckaroo)
Node-->>GRPC: template
GRPC->>Cache: set_cuckaroo_new_block_template(template, tip)
end
GRPC-->>Client: Template (mining_hash for Cuckaroo)
sequenceDiagram
participant Miner
participant GRPC as BaseNode gRPC
participant MA as HashRateMA
Miner->>GRPC: GetNetworkDifficulty(pow=Cuckaroo)
GRPC->>MA: current_hash_rate_moving_average(Cuckaroo)
MA-->>GRPC: rate
GRPC-->>Miner: difficulty estimate
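The moving-average step in the diagram above can be sketched in isolation. This is a minimal, self-contained sketch; the window size, the `HashRateMovingAverage` name, and the `record`/`average` methods are illustrative assumptions, not the node's real HashRateMA API.

```rust
use std::collections::VecDeque;

/// Hypothetical fixed-window moving average over hash-rate samples.
struct HashRateMovingAverage {
    window: usize,
    samples: VecDeque<u64>,
}

impl HashRateMovingAverage {
    fn new(window: usize) -> Self {
        Self { window, samples: VecDeque::with_capacity(window) }
    }

    /// Record a new per-block hash-rate sample, evicting the oldest when full.
    fn record(&mut self, sample: u64) {
        if self.samples.len() == self.window {
            self.samples.pop_front();
        }
        self.samples.push_back(sample);
    }

    /// Integer average over the samples currently in the window (0 when empty).
    fn average(&self) -> u64 {
        if self.samples.is_empty() {
            return 0;
        }
        self.samples.iter().sum::<u64>() / self.samples.len() as u64
    }
}

fn main() {
    let mut ma = HashRateMovingAverage::new(3);
    for s in [100, 200, 300, 400] {
        ma.record(s);
    }
    // Window now holds [200, 300, 400]; their average is 300.
    println!("{}", ma.average());
}
```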
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
📜 Recent review details

Configuration used: CodeRabbit UI

📒 Files selected for processing (4)
SWvheerden
left a comment
Did a quick review, looks good mostly
        .to_json()
        .map_err(|e| ArgsError::MessageFormatError(format!("{}", e)))?
),
Format::Json => panic!("JSON format not supported for blocks in this command"),
should these not rather just be a printline?
yeah good idea
// siphash_keys: &[u64; 4],
// nonces: &[u64],
Suggested change:
- // siphash_keys: &[u64; 4],
- // nonces: &[u64],
Actionable comments posted: 13
🔭 Outside diff range comments (8)
applications/minotari_node/src/commands/command/get_block.rs (1)
42-45: Help text is now inaccurate: JSON is no longer supported here. Update the clap docstring to avoid advertising JSON support for this command.

- /// Supported options are 'json' and 'text'. 'text' is the default if omitted.
+ /// Supported options: 'text' only. JSON is not supported for this command.

base_layer/core/src/proof_of_work/proof_of_work_algorithm.rs (4)
64-75: Add numeric mapping for Cuckaroo in TryFrom. Currently, 3 is not handled, causing failures when parsing numeric algorithm IDs.
Apply this diff:
 fn try_from(v: u64) -> Result<Self, Self::Error> {
     match v {
         0 => Ok(PowAlgorithm::RandomXM),
         1 => Ok(PowAlgorithm::Sha3x),
         2 => Ok(PowAlgorithm::RandomXT),
-        _ => Err("Invalid PoWAlgorithm".into()),
+        3 => Ok(PowAlgorithm::Cuckaroo),
+        _ => Err("Invalid PoWAlgorithm".into()),
     }
 }
80-90: Support FromStr for “Cuckaroo” (and optional aliases). Text parsing does not recognize the new algorithm. Add at least "CUCKAROO"; consider accepting "C29" if that’s used in configs.
 match s_trimmed.as_str() {
     "RANDOMXM" | "RANDOM_XM" | "MONERO_RANDOM_X" | "RANDOMX" | "RANDOM_X" | "RANDOMXMONERO" => {
         Ok(Self::RandomXM)
     },
     "SHA" | "SHA3" | "SHA3X" => Ok(Self::Sha3x),
     "RANDOMXT" | "RANDOM_XT" | "TARI_RANDOM_X" | "RANDOMXTARI" => Ok(Self::RandomXT),
+    "CUCKAROO" | "CUCKAROO29" | "C29" => Ok(Self::Cuckaroo),
     _ => Err(anyhow::Error::msg(format!("Unknown pow algorithm type: {}", s))),
 }
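A self-contained sketch of the alias-tolerant parsing suggested above. The enum, error type, and alias set here are simplified stand-ins for the crate's real implementation.

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq, Clone, Copy)]
enum PowAlgorithm {
    RandomXM,
    Sha3x,
    RandomXT,
    Cuckaroo,
}

impl FromStr for PowAlgorithm {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Normalize once so every alias can be listed in a single case.
        let s_trimmed = s.trim().to_uppercase();
        match s_trimmed.as_str() {
            "RANDOMXM" | "RANDOM_XM" | "MONERO_RANDOM_X" => Ok(Self::RandomXM),
            "SHA" | "SHA3" | "SHA3X" => Ok(Self::Sha3x),
            "RANDOMXT" | "RANDOM_XT" | "TARI_RANDOM_X" => Ok(Self::RandomXT),
            "CUCKAROO" | "CUCKAROO29" | "C29" => Ok(Self::Cuckaroo),
            _ => Err(format!("Unknown pow algorithm type: {}", s)),
        }
    }
}

fn main() {
    // Mixed case and surrounding whitespace are both accepted.
    for s in ["c29", " Cuckaroo ", "CUCKAROO29"] {
        assert_eq!(PowAlgorithm::from_str(s), Ok(PowAlgorithm::Cuckaroo));
    }
    println!("ok");
}
```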
155-162: Extend serde round-trip tests to include Cuckaroo. Ensure serialization/deserialization covers all variants.
-for algo in [PowAlgorithm::Sha3x, PowAlgorithm::RandomXM, PowAlgorithm::RandomXT] {
+for algo in [PowAlgorithm::Sha3x, PowAlgorithm::RandomXM, PowAlgorithm::RandomXT, PowAlgorithm::Cuckaroo] {
     let serialized = serde_json::to_string(&algo).expect("Failed to serialize PowAlgorithm");
     let deserialized: PowAlgorithm =
         serde_json::from_str(&serialized).expect("Failed to deserialize PowAlgorithm");
     assert_eq!(deserialized, algo);
 }

Additionally, consider adding a dedicated FromStr test for the new variants:
#[test]
fn test_pow_algorithm_from_str_cuckaroo() {
    for s in ["Cuckaroo", "cuckaroo", "CUCKAROO", "c29", "C29", "Cuckaroo29"] {
        let algo = PowAlgorithm::from_str(s).expect("Failed to parse Cuckaroo variant");
        assert_eq!(algo, PowAlgorithm::Cuckaroo);
    }
}
35-40: Add Cuckaroo to TryFrom and FromStr for PowAlgorithm. The autogenerated TryFrom<u8> in proof_of_work_algorithm.rs still only handles 0–2 and treats everything else as an error. Likewise, the FromStr impl doesn’t parse “Cuckaroo.” Update both to include the new variant and add a unit test for parsing it.

• File: base_layer/core/src/proof_of_work/proof_of_work_algorithm.rs
– Around lines 69–74, change:

```diff
 match value {
     0 => Ok(PowAlgorithm::RandomXM),
     1 => Ok(PowAlgorithm::Sha3x),
     2 => Ok(PowAlgorithm::RandomXT),
+    3 => Ok(PowAlgorithm::Cuckaroo),
     _ => Err("Invalid PoWAlgorithm".into()),
 }
```

– In the `FromStr for PowAlgorithm` impl, add a branch for `"Cuckaroo"` (and any accepted aliases), e.g.:

```rust
match s {
    "RandomXMonero" | "RandomXM" => Ok(PowAlgorithm::RandomXM),
    "Sha3" | "Sha3x" => Ok(PowAlgorithm::Sha3x),
    "RandomXTari" | "RandomXT" => Ok(PowAlgorithm::RandomXT),
    "Cuckaroo" => Ok(PowAlgorithm::Cuckaroo),
    _ => Err(format!("Invalid PoWAlgorithm: {}", s)),
}
```

– Add a test in this file to verify `PowAlgorithm::from_str("Cuckaroo")` yields `PowAlgorithm::Cuckaroo`.

After those changes, downstream match statements already include Cuckaroo (or use `_`), so no further edits should be needed.

base_layer/core/src/base_node/proto/response.rs (1)
79-107: Align HistoricalBlocks Producers with Proto Changes. The try_from implementation in base_layer/core/src/base_node/proto/response.rs no longer handles HistoricalBlocks, so any remaining code paths that emit NodeCommsResponse::HistoricalBlocks will hit the fallback error at runtime. Please update or remove these producers—or restore the proto handler—before enabling this in production:

• base_layer/core/src/base_node/comms_interface/comms_response.rs
– Variant definition at line 60
– Display impl at line 93
• base_layer/core/src/base_node/comms_interface/inbound_handlers.rs
– Ok(NodeCommsResponse::HistoricalBlocks(blocks)) at lines 189, 225, 260
• base_layer/core/src/base_node/comms_interface/local_interface.rs
– Handlers at lines 117, 136, 244, 259
• base_layer/core/tests/tests/node_comms_interface.rs
– Tests matching on NodeCommsResponse::HistoricalBlocks at lines 252, 522, 534

base_layer/core/src/base_node/comms_interface/inbound_handlers.rs (2)
1025-1038: Fix non-exhaustive match for PowAlgorithm (pipeline failure). The CI error indicates a missing match arm for PowAlgorithm::Cuckaroo in update_target_difficulty (metrics). Add a no-op until a metric is added, or wire a new metric.
 fn update_target_difficulty(block: &ChainBlock) {
     match block.header().pow_algo() {
         PowAlgorithm::Sha3x => {
             metrics::target_difficulty_sha()
                 .set(i64::try_from(block.accumulated_data().target_difficulty.as_u64()).unwrap_or(i64::MAX));
         },
         PowAlgorithm::RandomXM => {
             metrics::target_difficulty_monero_randomx()
                 .set(i64::try_from(block.accumulated_data().target_difficulty.as_u64()).unwrap_or(i64::MAX));
         },
         PowAlgorithm::RandomXT => {
             metrics::target_difficulty_tari_randomx()
                 .set(i64::try_from(block.accumulated_data().target_difficulty.as_u64()).unwrap_or(i64::MAX));
         },
+        PowAlgorithm::Cuckaroo => {
+            // TODO: add a dedicated metric; no-op for now to keep the match exhaustive
+        },
     }
 }

This will resolve the “non-exhaustive patterns” error. If you prefer, replace the Cuckaroo arm with `_ => {}`.
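The tradeoff between an explicit arm and a wildcard can be shown in miniature: with an explicit Cuckaroo arm and no wildcard, the compiler flags this match again the next time a PowAlgorithm variant is added, whereas `_ => {}` silently swallows future variants. The types below are simplified stand-ins, not the node's metrics code.

```rust
#[derive(Debug, Clone, Copy)]
enum PowAlgorithm {
    Sha3x,
    RandomXM,
    RandomXT,
    Cuckaroo,
}

/// Stand-in for the metrics dispatch: returns the metric name it would update.
fn update_target_difficulty(algo: PowAlgorithm) -> &'static str {
    match algo {
        PowAlgorithm::Sha3x => "target_difficulty_sha",
        PowAlgorithm::RandomXM => "target_difficulty_monero_randomx",
        PowAlgorithm::RandomXT => "target_difficulty_tari_randomx",
        // Explicit no-op arm keeps the match exhaustive without a wildcard,
        // so adding a fifth variant later is a compile error here, not a
        // silently ignored case.
        PowAlgorithm::Cuckaroo => "no metric yet",
    }
}

fn main() {
    println!("{}", update_target_difficulty(PowAlgorithm::Cuckaroo));
}
```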
187-206: Handlers still returning HistoricalBlocks. These request handlers still respond with HistoricalBlocks. With proto support removed in response.rs, these will fail to serialize into gRPC responses.
- NodeCommsRequest::FetchMatchingBlocks
- FetchBlocksByKernelExcessSigs
- FetchBlocksByUtxos
- GetBlockByHash returns HistoricalBlock(Box<Option<...>>)
Please update these paths to the new API shape (e.g., streaming or a repeated Block wrapper).
I can help generate a transitional plan and patches once you confirm how the new proto intends to return these results.
Also applies to: 225-261, 267-269
♻️ Duplicate comments (2)
applications/minotari_node/src/commands/command/get_block.rs (1)
91-92: Don’t panic on user input; return a proper error. Replace panic! with a user-facing error. This matches the project’s command error-handling pattern (print error, don’t crash).
- Format::Json => panic!("JSON format not supported for blocks in this command"),
+ Format::Json => {
+     return Err(ArgsError::MessageFormatError(
+         "JSON format not supported for this command; use --format text".to_string()
+     ).into());
+ }

Apply to all three occurrences.
Also applies to: 101-102, 114-115
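The "return an error instead of panicking" pattern above can be sketched end to end. `Format` and `ArgsError` here are simplified stand-ins for the node's real command types.

```rust
#[derive(Debug)]
enum Format {
    Text,
    Json,
}

#[derive(Debug)]
enum ArgsError {
    MessageFormatError(String),
}

/// Render a block in the requested format, surfacing unsupported formats as
/// an error the command runner can print, rather than panicking: a bad CLI
/// flag is user input, so it should never crash the node shell.
fn render_block(format: Format) -> Result<String, ArgsError> {
    match format {
        Format::Text => Ok("block as text".to_string()),
        Format::Json => Err(ArgsError::MessageFormatError(
            "JSON format not supported for this command; use --format text".to_string(),
        )),
    }
}

fn main() {
    match render_block(Format::Json) {
        Ok(out) => println!("{}", out),
        // The caller decides how to report the failure; process keeps running.
        Err(e) => eprintln!("get-block error! {:?}", e),
    }
}
```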
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (1)
139-140: Remove commented parameters.

 fn verify_from_edges(
-    // siphash_keys: &[u64; 4],
-    // nonces: &[u64],
     uvs: &[(u64, u64)],
     cycle_length: usize,
 ) -> Result<(), CuckarooVerificationError> {
🧹 Nitpick comments (31)
base_layer/core/src/validation/error.rs (1)
247-250: LGTM: non-bannable classification is correct. Treating FatalStorageError, IncorrectNumberOfTimestampsProvided, and Other as non-bannable makes sense.
Consider adding a small unit test to lock this in:
#[test]
fn validation_error_other_not_bannable() {
    let e = ValidationError::from(anyhow::anyhow!("transient"));
    assert!(e.get_ban_reason().is_none());
}

base_layer/core/src/blocks/genesis_block.rs (1)
33-33: Centralizing genesis accumulated data is a solid refactor. Using BlockHeaderAccumulatedData::genesis(...) across all networks reduces duplication and ensures new difficulty fields (e.g., cuckaroo) initialize consistently. LGTM.
Consider adding a small test assertion that accumulated_cuckaroo_difficulty is min() for genesis to lock in the invariant.
Also applies to: 105-107, 156-158, 206-208, 263-265, 315-317, 347-349
base_layer/core/src/proof_of_work/siphash.rs (1)
24-26: Consider documenting the rationale for these constants. These constants define the block size for the siphash block algorithm. While they appear correct (64 = 2^6), it would be helpful to add a comment explaining why a 64-element block size was chosen for the Cuckaroo implementation.
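One reason a power-of-two block size is convenient: index-within-block and block-start computations reduce to cheap masks. A tiny illustrative sketch; the constant names mirror common Cuckaroo implementations but are assumptions about this crate, not its actual definitions.

```rust
// Hypothetical siphash-block constants: a 64-edge block, expressed as 1 << 6
// so the mask relationships below hold by construction.
const EDGE_BLOCK_BITS: u64 = 6;
const EDGE_BLOCK_SIZE: u64 = 1 << EDGE_BLOCK_BITS; // 64
const EDGE_BLOCK_MASK: u64 = EDGE_BLOCK_SIZE - 1; // 0x3f

fn main() {
    let edge_index: u64 = 1234;
    // Offset of this edge inside its 64-edge block (same as edge_index % 64).
    let within_block = edge_index & EDGE_BLOCK_MASK;
    // First edge of the block containing edge_index (same as (i / 64) * 64).
    let block_start = edge_index & !EDGE_BLOCK_MASK;
    assert_eq!(within_block, edge_index % EDGE_BLOCK_SIZE);
    assert_eq!(block_start, (edge_index / EDGE_BLOCK_SIZE) * EDGE_BLOCK_SIZE);
    println!("{} {}", block_start, within_block);
}
```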
applications/minotari_node/src/commands/command/header_stats.rs (2)
91-93: CSV header formatting issue. The header string contains spaces after some commas ("Acc.Sha3, Acc.Rxt, Acc.Cuckaroo"); the padding should be removed so every CSV column name is comma-separated without leading spaces.
- "Height,Achieved,TargetDifficulty,CalculatedDifficulty,SolveTime,NormalizedSolveTime,Algo,Timestamp,\
-  Window,Acc.Monero,Acc.Sha3, Acc.Rxt, Acc.Cuckaroo"
+ "Height,Achieved,TargetDifficulty,CalculatedDifficulty,SolveTime,NormalizedSolveTime,Algo,Timestamp,\
+  Window,Acc.Monero,Acc.Sha3,Acc.Rxt,Acc.Cuckaroo"
141-161: CSV data row formatting inconsistency. The format string has an extra space before the last placeholder, which could cause CSV parsing issues.
- "{},{},{},{},{},{},{},{},{},{},{},{}, {}",
+ "{},{},{},{},{},{},{},{},{},{},{},{},{}",

base_layer/core/src/chain_storage/lmdb_db/row_data/mod.rs (1)
1-24: Minor: align license header year/style for consistency. Other files use 2019/2020 and slightly different header formatting. Consider aligning this header for repo-wide consistency.
applications/minotari_miner/src/config.rs (1)
60-62: Update doc comment to include Cuckaroo. The comment still says “Sha3x or RandomXT”; include Cuckaroo for accuracy.
- /// The proof of work algorithm to use (Sha3x or RandomXT)
+ /// The proof of work algorithm to use (Sha3x, RandomXT, or Cuckaroo)

base_layer/core/src/validation/helpers.rs (1)
145-151: Avoid duplicate consensus_constants fetch; keep one reference. Minor refactor to improve readability and avoid two lookups.
- PowAlgorithm::Cuckaroo => {
-     let cuckaroo_cycle_length = consensus
-         .consensus_constants(block_header.height)
-         .cuckaroo_cycle_length();
-     let cuckaroo_bits = consensus.consensus_constants(block_header.height).cuckaroo_edge_bits();
-     cuckaroo_difficulty(block_header, cuckaroo_cycle_length, cuckaroo_bits)?
- },
+ PowAlgorithm::Cuckaroo => {
+     let constants = consensus.consensus_constants(block_header.height);
+     let cuckaroo_cycle_length = constants.cuckaroo_cycle_length();
+     let cuckaroo_bits = constants.cuckaroo_edge_bits();
+     cuckaroo_difficulty(block_header, cuckaroo_cycle_length, cuckaroo_bits)?
+ },

Also, please confirm that cuckaroo_difficulty's error (anyhow::Error) is mapped into ValidationError via From/Into; otherwise, map it explicitly to a PowError/BlockHeaderValidationError variant.
base_layer/core/src/chain_storage/blockchain_database.rs (1)
2602-2606: Propagate version for descendant orphan updates. Same note as above; this is consistent. Consider a brief doc comment on why version association is needed (fork-awareness, schema evolution, etc.).
base_layer/core/src/base_node/proto/response.rs (1)
209-216: Leftover FromIterator for HistoricalBlocks. If HistoricalBlocks is fully removed from the proto, consider removing this helper too to prevent dead code and accidental use.
base_layer/core/src/base_node/comms_interface/inbound_handlers.rs (2)
601-656: Note: min-difficulty calculation logic. No change here, but FYI, the retro-walk until algo match can be O(window) over chain headers during bursts of mixed-algo mining. Consider caching recent per-algo target difficulty windows if needed for perf under load.
521-599: General. No functional changes here, but worth noting: list_of_reconciling_blocks protects from duplicate reconciliation; good pattern. Just ensure it’s bounded in size or pruned if under heavy attack.
applications/minotari_node/src/grpc/data_cache.rs (1)
99-106: Accessor looks good; minor readability nit on lock guard usage. The pattern matches existing accessors. For readability and to make the lock scope explicit, consider binding the read guard first:
- let res = &self.inner_data_cache.read().await.cuckaroo_new_block_template;
+ let guard = self.inner_data_cache.read().await;
+ let res = &guard.cuckaroo_new_block_template;
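The "bind the guard first" suggestion can be shown in a self-contained sketch. It uses std::sync::RwLock in place of the node's async tokio RwLock so it runs standalone; `InnerDataCache` and its field are simplified stand-ins.

```rust
use std::sync::RwLock;

// Simplified stand-in for the gRPC server's inner cache state.
#[derive(Default)]
struct InnerDataCache {
    cuckaroo_new_block_template: Option<String>,
}

fn main() {
    let cache = RwLock::new(InnerDataCache {
        cuckaroo_new_block_template: Some("template".to_string()),
    });

    // Binding the guard to a named variable makes the lock's lifetime
    // explicit: the read lock is held until `guard` goes out of scope,
    // not just for a single chained sub-expression.
    let guard = cache.read().expect("lock poisoned");
    let res = &guard.cuckaroo_new_block_template;
    println!("{:?}", res);
}
```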
433-435: Trim log verbosity to hash and version. Logging the entire BlockHeaderAccumulatedData can be noisy. Prefer logging the hash and version.
- SetAccumulatedDataForOrphan { version, data } => {
-     write!(f, "Set accumulated data for orphan {} version {}", data, version)
- },
+ SetAccumulatedDataForOrphan { version, data } => {
+     write!(f, "Set accumulated data for orphan {} version {}", data.hash, version)
+ },
55-67: Typed insert: unnecessary DeserializeOwned bound. Insert does not deserialize. You can drop DeserializeOwned from V to reduce constraints.
 pub fn lmdb_insert_typed<K, V>(
     txn: &WriteTransaction<'_>,
     db: &TypedDatabaseRef<K, V>,
     key: &K,
     val: &V,
 ) -> Result<(), ChainStorageError>
 where
     K: AsLmdbBytes + ?Sized + Debug,
-    V: Serialize + Debug + ?Sized + DeserializeOwned,
+    V: Serialize + Debug + ?Sized,
 {
     lmdb_insert(txn, &db.db, key, val, db.name)
 }
177-188: Typed delete: remove unused generic bound on V. The delete operation does not use V. Simplify the bounds.
 pub fn lmdb_delete_typed<K, V>(
     txn: &WriteTransaction<'_>,
     db: &TypedDatabaseRef<K, V>,
     key: &K,
 ) -> Result<(), ChainStorageError>
 where
     K: AsLmdbBytes + ?Sized,
-    V: Serialize + DeserializeOwned,
 {
     lmdb_delete(txn, &db.db, key, db.name)
 }
253-281: Typed get: include table name in logs and fix message text. Adding the table name helps debugging; also fix the duplicated “Could not could not” string.
- Err(e) => {
-     error!(target: LOG_TARGET, "Could not get value from lmdb: {:?}", e);
+ Err(e) => {
+     error!(target: LOG_TARGET, "Could not get value from lmdb table {}: {:?}", db.name, e);
      Err(ChainStorageError::AccessError(e.to_string()))
  },
  Ok(Some(v)) => match deserialize(v) {
      Ok(val) => Ok(Some(val)),
      Err(e) => {
-         error!(
-             target: LOG_TARGET,
-             "Could not could not deserialize value from lmdb: {:?}", e
-         );
+         error!(target: LOG_TARGET, "Could not deserialize value from lmdb table {}: {:?}", db.name, e);
          Err(ChainStorageError::AccessError(e.to_string()))
      },
  },
359-370: Typed exists: remove unused V bound. No need to constrain V here.
 pub fn lmdb_exists_typed<K, V>(
     txn: &ConstTransaction<'_>,
     db: &TypedDatabaseRef<K, V>,
     key: &K,
 ) -> Result<bool, ChainStorageError>
 where
     K: AsLmdbBytes + ?Sized,
-    V: Serialize + DeserializeOwned,
 {
     lmdb_exists(txn, &db.db, key)
 }
92-92: Remove unnecessary clone() on total_kernel_offset. Since data is moved here, cloning is unnecessary.

- total_kernel_offset: data.total_kernel_offset.clone(),
+ total_kernel_offset: data.total_kernel_offset,
84-92: Consider using discriminant values directly in the From implementation. Since the enum variants have explicit discriminant values, the conversion could be simplified.
 impl From<BlockVersion> for u16 {
     fn from(value: BlockVersion) -> Self {
-        match value {
-            BlockVersion::V0 => 0,
-            BlockVersion::V1 => 1,
-            BlockVersion::V2 => 2,
-        }
+        value as u16
     }
 }
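The discriminant cast above, plus a fallible inverse, can be sketched end to end. `BlockVersion` here is a simplified stand-in for the consensus type, and `from_u16` mirrors the helper referenced elsewhere in this review.

```rust
// Explicit discriminants make `value as u16` a well-defined, lossless cast.
#[derive(Debug, PartialEq, Clone, Copy)]
enum BlockVersion {
    V0 = 0,
    V1 = 1,
    V2 = 2,
}

impl From<BlockVersion> for u16 {
    fn from(value: BlockVersion) -> Self {
        // Safe because every variant carries an explicit discriminant
        // that fits in u16.
        value as u16
    }
}

impl BlockVersion {
    /// Fallible inverse: unknown numbers yield None instead of panicking,
    /// so callers can turn them into a storage error.
    fn from_u16(value: u16) -> Option<Self> {
        match value {
            0 => Some(BlockVersion::V0),
            1 => Some(BlockVersion::V1),
            2 => Some(BlockVersion::V2),
            _ => None,
        }
    }
}

fn main() {
    println!("{:?}", BlockVersion::from_u16(u16::from(BlockVersion::V2)));
}
```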
96-97: Fix double comment slashes.

- // // Generate the hasher.
+ // Generate the hasher.

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (10)
231-249: Tighten TypedDatabaseRef generic bounds; drop unnecessary ?Sized on value. The value type TValueType is constrained by DeserializeOwned + Serialize, which effectively requires it to be sized. The additional ?Sized bound on TValueType is unnecessary and misleading.

Proposed change:
- pub(crate) struct TypedDatabaseRef<TKeyType: AsLmdbBytes + ?Sized, TValueType: DeserializeOwned + Serialize + ?Sized> {
+ pub(crate) struct TypedDatabaseRef<TKeyType: AsLmdbBytes + ?Sized, TValueType: DeserializeOwned + Serialize> {

Optionally derive Debug for easier diagnostics:
- pub(crate) struct TypedDatabaseRef<...> {
+ #[derive(Debug)]
+ pub(crate) struct TypedDatabaseRef<...> {
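The idea behind a typed database handle is that PhantomData ties key and value types to an otherwise untyped handle, so reads and writes are checked at compile time. A minimal sketch under stated assumptions: `DatabaseRef` stands in for the raw LMDB handle, and the bounds are simplified per the comment above.

```rust
use std::marker::PhantomData;

/// Stand-in for the raw, untyped LMDB database handle.
struct DatabaseRef;

/// A handle that remembers, at the type level, what key and value types a
/// table holds. `fn(&K) -> V` in PhantomData keeps K usable when unsized
/// (keys are often byte slices) without affecting ownership semantics.
#[allow(dead_code)]
struct TypedDatabaseRef<K: ?Sized, V> {
    db: DatabaseRef,
    name: &'static str,
    _phantom: PhantomData<fn(&K) -> V>,
}

impl<K: ?Sized, V> TypedDatabaseRef<K, V> {
    fn new(db: DatabaseRef, name: &'static str) -> Self {
        Self { db, name, _phantom: PhantomData }
    }
}

fn main() {
    // Hypothetical table: byte-slice keys mapping to u64 rows. Passing this
    // handle to a typed get/insert helper would only compile for those types.
    let headers: TypedDatabaseRef<[u8], u64> = TypedDatabaseRef::new(DatabaseRef, "headers_db");
    println!("{}", headers.name);
}
```

The payoff is that helpers like `lmdb_get_typed` can deserialize straight into `V` with no turbofish at every call site, and mixing up two tables with different row types becomes a compile error.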
796-849: Avoid hard-coding the array length in all_dbs; prefer Vec to reduce maintenance risk. The fn all_dbs(&self) -> [..; 35] signature requires manual updates to the array length when DBs are added/removed. Switching to a Vec<(&'static str, &DatabaseRef)> removes this sharp edge without measurable cost in call sites that already iterate.

Proposed shape:
- fn all_dbs(&self) -> [(&'static str, &DatabaseRef); 35] {
-     [ ... ]
- }
+ fn all_dbs(&self) -> Vec<(&'static str, &DatabaseRef)> {
+     vec![ ... ]
+ }
1078-1110: Versioned orphan accumulated-data storage is correct; unify version parsing approach

- Correctly validates orphan existence before writing accumulated data.
- Routes to v1/v2 typed stores based on the parsed BlockVersion.

Minor consistency nit: elsewhere you use BlockVersion::from_u16, while here you use BlockVersion::try_from (and in another place try_into()). Consider standardizing on a single approach (e.g., from_u16(..).ok_or(..)?) for clarity.

Example:
- let blockchain_version =
-     BlockVersion::try_from(blockchain_version).map_err(|e| ChainStorageError::InvalidArguments {
-         func: "set_accumulated_data_for_orphan",
-         arg: "blockchain_version",
-         message: format!("Invalid blockchain version: {} :{}", blockchain_version, e),
-     })?;
+ let blockchain_version = BlockVersion::from_u16(blockchain_version).ok_or(
+     ChainStorageError::InvalidArguments {
+         func: "set_accumulated_data_for_orphan",
+         arg: "blockchain_version",
+         message: format!("Invalid blockchain version: {}", blockchain_version),
+     }
+ )?;

Optional: consider taking blockchain_version: BlockVersion at the API boundary to remove conversion from hot paths.
1185-1203: Insert: choose per-version DB based on header.version — standardize conversion. Logic to select V1 vs V2 DB is sound. To keep conversions uniform and avoid reliance on TryFrom/try_into trait impls, prefer BlockVersion::from_u16(header.version).ok_or(..)? here as well.

- let version: BlockVersion = header
-     .version
-     .try_into()
-     .map_err(|e| ChainStorageError::InvalidArguments {
-         func: "insert_header",
-         arg: "version",
-         message: format!("Invalid blockchain version: {} :{}", header.version, e),
-     })?;
+ let version = BlockVersion::from_u16(header.version).ok_or(
+     ChainStorageError::InvalidArguments {
+         func: "insert_header",
+         arg: "version",
+         message: format!("Invalid blockchain version: {}", header.version),
+     }
+ )?;
1243-1275: Delete: per-version removal is correct — align version parsing with insert path. Deletion selects the correct typed DB by version. Recommend aligning conversion with the insert path (see prior comment) for consistency:

- let block_version = BlockVersion::try_from(version).map_err(|e| ChainStorageError::InvalidArguments {
-     func: "delete_header",
-     arg: "version",
-     message: format!("Invalid blockchain version: {} :{}", version, e),
- })?;
+ let block_version = BlockVersion::from_u16(version).ok_or(
+     ChainStorageError::InvalidArguments {
+         func: "delete_header",
+         arg: "version",
+         message: format!("Invalid blockchain version: {}", version),
+     }
+ )?;
1559-1593: Orphan deletion promotes parent tip using typed reads; cleanup across both V1/V2 stores. This looks good:

- Reads the parent orphan via orphans_typed and re-inserts the tip with the parent’s total difficulty if found.
- Deletes accumulated-data entries in both V1 and V2 typed stores if present.

Minor nit: add the parent’s version to the warning to aid diagnostics when accumulated data is missing.

- warn!(target: LOG_TARGET, "Empty 'BlockHeaderAccumulatedData' for parent hash '{}'", parent_hash.to_hex());
+ warn!(target: LOG_TARGET, "Empty 'BlockHeaderAccumulatedData' for parent hash '{}' (version={})",
+     parent_hash.to_hex(), orphan.header.version);
2129-2149: fetch_orphan_header_accumulated_data: versioned read path looks correct

- Parses block_version and selects the correct typed orphan DB.
- Converts the row to BlockHeaderAccumulatedData via Into.

Consider extracting the version parsing into a small helper to DRY with other fetchers.
2173-2189: fetch_header_accumulated_data_by_height: per-version selection for canonical chain. Same as the orphan variant; looks good. Again, factor out version parsing if you want to reduce duplication.
3815-3836: Migrations: V0→V1 conversion correctly uses raw DB handle; v1 reads for difficulty rewind

- Reading V0 rows via db.header_accumulated_data_db.db and writing V1 rows back is correct given the typed wrapper now denotes V1.
- Orphan migration mirrors the same pattern; chain tips’ difficulty updated to U512.

One small suggestion: consider chunking long loops with periodic transaction commits or progress logs for very large chains to reduce long-running write transactions (you already update stats every 50 heights; optional to add periodic commits if practical).
Also applies to: 3854-3872, 3908-3910
1070-1076: Optional: Prefer typed insert for orphans for symmetry. Currently, orphans are inserted with raw lmdb_insert into orphans_db. Given you introduced orphans_typed, consider using lmdb_insert_typed(txn, &self.orphans_typed, k.as_slice(), block) for symmetry and type safety on writes too.

- lmdb_insert(txn, &self.orphans_db, k.as_slice(), &block, "orphans_db")?;
+ lmdb_insert_typed(txn, &self.orphans_typed, k.as_slice(), block)?;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (43)
- NOTICE (1 hunks)
- applications/minotari_app_grpc/proto/block.proto (1 hunks)
- applications/minotari_app_grpc/src/conversions/consensus_constants.rs (1 hunks)
- applications/minotari_miner/src/config.rs (1 hunks)
- applications/minotari_node/src/commands/command/get_block.rs (3 hunks)
- applications/minotari_node/src/commands/command/header_stats.rs (3 hunks)
- applications/minotari_node/src/grpc/base_node_grpc_server.rs (7 hunks)
- applications/minotari_node/src/grpc/data_cache.rs (3 hunks)
- base_layer/LICENCE-APACHE (1 hunks)
- base_layer/core/Cargo.toml (2 hunks)
- base_layer/core/src/base_node/comms_interface/error.rs (2 hunks)
- base_layer/core/src/base_node/comms_interface/inbound_handlers.rs (3 hunks)
- base_layer/core/src/base_node/proto/response.proto (1 hunks)
- base_layer/core/src/base_node/proto/response.rs (3 hunks)
- base_layer/core/src/base_node/sync/block_sync/synchronizer.rs (0 hunks)
- base_layer/core/src/base_node/sync/header_sync/validator.rs (1 hunks)
- base_layer/core/src/blocks/accumulated_data.rs (6 hunks)
- base_layer/core/src/blocks/genesis_block.rs (7 hunks)
- base_layer/core/src/blocks/historical_block.rs (1 hunks)
- base_layer/core/src/chain_storage/blockchain_database.rs (2 hunks)
- base_layer/core/src/chain_storage/db_transaction.rs (3 hunks)
- base_layer/core/src/chain_storage/lmdb_db/lmdb.rs (4 hunks)
- base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (38 hunks)
- base_layer/core/src/chain_storage/lmdb_db/mod.rs (1 hunks)
- base_layer/core/src/chain_storage/lmdb_db/row_data/block_header_accumulated_data.rs (1 hunks)
- base_layer/core/src/chain_storage/lmdb_db/row_data/mod.rs (1 hunks)
- base_layer/core/src/consensus/chain_strength_comparer.rs (4 hunks)
- base_layer/core/src/consensus/consensus_constants.rs (21 hunks)
- base_layer/core/src/consensus/consensus_manager.rs (1 hunks)
- base_layer/core/src/proof_of_work/cuckaroo_pow.rs (1 hunks)
- base_layer/core/src/proof_of_work/mod.rs (2 hunks)
- base_layer/core/src/proof_of_work/proof_of_work_algorithm.rs (2 hunks)
- base_layer/core/src/proof_of_work/siphash.rs (1 hunks)
- base_layer/core/src/proto/block.proto (1 hunks)
- base_layer/core/src/proto/block.rs (2 hunks)
- base_layer/core/src/test_helpers/mod.rs (2 hunks)
- base_layer/core/src/validation/error.rs (2 hunks)
- base_layer/core/src/validation/header/header_full_validator.rs (1 hunks)
- base_layer/core/src/validation/helpers.rs (2 hunks)
- base_layer/core/tests/helpers/block_builders.rs (6 hunks)
- base_layer/core/tests/helpers/database.rs (1 hunks)
- base_layer/core/tests/tests/block_validation.rs (2 hunks)
- common_sqlite/Cargo.toml (2 hunks)
💤 Files with no reviewable changes (1)
- base_layer/core/src/base_node/sync/block_sync/synchronizer.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-07-15T12:23:14.650Z
Learnt from: hansieodendaal
PR: tari-project/tari#7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Applied to files:
applications/minotari_node/src/commands/command/get_block.rs
🧬 Code Graph Analysis (21)
base_layer/core/tests/helpers/database.rs (1)
base_layer/core/src/blocks/block_header.rs (1)
new(129-148)
applications/minotari_node/src/grpc/data_cache.rs (2)
base_layer/core/src/blocks/new_block_template.rs (1)
empty(78-87)
base_layer/core/src/blocks/new_blockheader_template.rs (1)
empty(62-71)
applications/minotari_app_grpc/src/conversions/consensus_constants.rs (1)
base_layer/core/src/consensus/consensus_constants.rs (1)
from(85-91)
base_layer/core/src/base_node/comms_interface/error.rs (10)
base_layer/core/src/base_node/proto/response.rs (1)
from(112-116)
base_layer/core/src/consensus/consensus_constants.rs (1)
from(85-91)
base_layer/core/src/validation/error.rs (2)
from(178-180), from(184-186)
base_layer/wallet/src/transaction_service/handle.rs (1)
from(724-728)
base_layer/core/src/chain_storage/async_db.rs (1)
from(292-294)
base_layer/core/src/chain_storage/error.rs (3)
from(222-224), from(228-230), from(234-245)
base_layer/core/src/transactions/aggregated_body.rs (1)
from(513-515)
base_layer/core/src/base_node/sync/horizon_state_sync/error.rs (3)
from(109-111), from(115-117), from(121-123)
base_layer/core/src/transactions/transaction_components/error.rs (1)
from(97-99)
base_layer/core/src/proof_of_work/monero_rx/error.rs (1)
from(83-85)
base_layer/core/src/chain_storage/db_transaction.rs (5)
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (2)
set_accumulated_data_for_orphan(1078-1113), write(2439-2530)
base_layer/core/src/blocks/historical_block.rs (1)
accumulated_data(68-70)
base_layer/core/src/blocks/accumulated_data.rs (2)
accumulated_data(329-331), accumulated_data(401-403)
base_layer/core/src/chain_storage/blockchain_backend.rs (1)
write(49-49)
base_layer/core/src/blocks/block.rs (1)
version(87-89)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
base_layer/core/tests/helpers/block_builders.rs (1)
chain_block(263-291)
base_layer/core/src/validation/helpers.rs (2)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (1)
cuckaroo_difficulty(218-225)
base_layer/core/src/consensus/consensus_constants.rs (1)
cuckaroo_cycle_length(433-435)
base_layer/core/src/blocks/genesis_block.rs (1)
base_layer/core/src/blocks/accumulated_data.rs (4)
accumulated_data(329-331), accumulated_data(401-403), genesis(227-239), block(386-388)
base_layer/core/src/blocks/accumulated_data.rs (6)
base_layer/core/src/proof_of_work/target_difficulty.rs (1)
target(53-55)
base_layer/core/src/blocks/block_header.rs (1)
hash(150-157)
base_layer/core/src/blocks/historical_block.rs (1)
hash(72-74)
base_layer/core/src/blocks/block.rs (1)
hash(144-146)
base_layer/core/src/proof_of_work/difficulty.rs (1)
min(58-60)
base_layer/core/src/proof_of_work/accumulated_difficulty.rs (1)
min(49-51)
base_layer/core/src/base_node/sync/header_sync/validator.rs (1)
base_layer/core/src/blocks/accumulated_data.rs (3)
header(325-327), header(391-393), genesis(227-239)
base_layer/core/tests/tests/block_validation.rs (3)
base_layer/core/src/consensus/consensus_manager.rs (1)
consensus_constants(112-121)
base_layer/core/src/chain_storage/blockchain_database.rs (1)
consensus_constants(461-464)
base_layer/core/tests/helpers/sample_blockchains.rs (1)
consensus_constants(173-177)
applications/minotari_node/src/commands/command/get_block.rs (4)
base_layer/core/src/blocks/historical_block.rs (2)
block(60-62), hash(72-74)
base_layer/core/src/blocks/genesis_block.rs (3)
block(544-544), block(682-688), block(689-695)
base_layer/core/src/blocks/accumulated_data.rs (3)
block(386-388), hash(321-323), hash(381-383)
base_layer/core/src/blocks/block_header.rs (1)
hash(150-157)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (3)
base_layer/core/src/proof_of_work/siphash.rs (3)
digest(104-106), siphash_block(46-69), new(85-87)
base_layer/core/src/blocks/block_header.rs (5)
new(129-148), vec(349-355), vec(365-371), vec(402-408), vec(418-424)
base_layer/core/src/proof_of_work/difficulty.rs (1)
big_endian_difficulty(68-71)
base_layer/core/src/consensus/consensus_constants.rs (1)
applications/minotari_app_grpc/src/conversions/consensus_constants.rs (1)
from(31-152)
base_layer/core/src/base_node/comms_interface/inbound_handlers.rs (1)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (1)
cuckaroo_difficulty(218-225)
base_layer/core/src/chain_storage/lmdb_db/lmdb.rs (3)
infrastructure/storage/src/lmdb_store/store.rs (2)
db(690-692), deserialize(698-706)
base_layer/core/src/chain_storage/lmdb_db/helpers.rs (1)
deserialize(73-81)
base_layer/core/src/chain_storage/lmdb_db/cursors.rs (1)
deserialize(94-94)
base_layer/core/src/proto/block.rs (3)
base_layer/core/src/proto/block_header.rs (2)
try_from(38-67)try_from(98-103)base_layer/core/src/blocks/historical_block.rs (2)
block(60-62)confirmations(51-53)base_layer/core/src/blocks/accumulated_data.rs (1)
block(386-388)
base_layer/core/tests/helpers/block_builders.rs (2)
base_layer/core/src/blocks/block_header.rs (1)
new(129-148)base_layer/core/src/proof_of_work/accumulated_difficulty.rs (1)
min(49-51)
base_layer/core/src/base_node/proto/response.rs (1)
base_layer/core/src/blocks/historical_block.rs (2)
block(60-62)new(43-49)
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (3)
base_layer/core/src/chain_storage/lmdb_db/lmdb.rs (8)
lmdb_delete_typed(177-187)lmdb_exists(372-383)lmdb_exists_typed(359-369)lmdb_get(283-306)lmdb_get_typed(253-280)lmdb_insert(69-111)lmdb_insert_typed(55-66)lmdb_replace(140-175)base_layer/core/src/consensus/consensus_constants.rs (3)
new(1009-1016)try_from(74-81)from_u16(61-68)base_layer/core/src/blocks/accumulated_data.rs (9)
hash(321-323)hash(381-383)accumulated_data(329-331)accumulated_data(401-403)header(325-327)header(391-393)try_construct(302-311)try_construct(366-375)block(386-388)
applications/minotari_node/src/commands/command/header_stats.rs (1)
base_layer/core/src/blocks/accumulated_data.rs (2)
header(325-327)header(391-393)
🪛 GitHub Actions: Integration Tests
base_layer/core/src/proof_of_work/proof_of_work_algorithm.rs
[error] 35-39: PowAlgorithm enum includes Cuckaroo variant that is not matched; update match arms to cover all variants or add a wildcard arm.
base_layer/core/src/base_node/comms_interface/inbound_handlers.rs
[error] 1025-1025: Rust compile error: non-exhaustive patterns: PowAlgorithm::Cuckaroo not covered in match on block.header().pow_algo().
🪛 GitHub Actions: CI
base_layer/core/src/base_node/comms_interface/inbound_handlers.rs
[error] 1025-1025: Command 'cargo +nightly-2025-01-01 lints clippy --all-targets --all-features' failed: non-exhaustive patterns: proof_of_work::proof_of_work_algorithm::PowAlgorithm::Cuckaroo not covered.
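The two pipeline failures above are the same Rust compile error: a `match` on `PowAlgorithm` that predates the new variant. A reduced, hypothetical sketch of the required fix (names and returned strings are illustrative, not the real constants):

```rust
#[derive(Clone, Copy, Debug)]
pub enum PowAlgorithm {
    RandomXM,
    RandomXT,
    Sha3x,
    Cuckaroo,
}

pub fn algo_name(algo: PowAlgorithm) -> &'static str {
    match algo {
        PowAlgorithm::RandomXM => "randomx-monero",
        PowAlgorithm::RandomXT => "randomx-tari",
        PowAlgorithm::Sha3x => "sha3x",
        // New arm: without this line the match is non-exhaustive and the
        // build fails exactly as reported by CI.
        PowAlgorithm::Cuckaroo => "cuckaroo",
    }
}

fn main() {
    println!("{}", algo_name(PowAlgorithm::Cuckaroo));
}
```

An explicit arm is usually preferable to a `_` wildcard here, so that the next added algorithm also fails loudly at compile time instead of silently falling through.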
🔇 Additional comments (58)
base_layer/core/src/base_node/comms_interface/error.rs (2)
112-112: LGTM: classify Other as non-bannable

Marking Other as non-bannable aligns with the intent of a generic/opaque error bucket.

80-81: Avoid exposing `anyhow::Error` in public API: update the `Other` variant to use a boxed `std::error::Error`

File: `base_layer/core/src/base_node/comms_interface/error.rs` (lines 80–81)

Apply this diff:

```diff
-    #[error("An error occurred: {0}")]
-    Other(#[from] anyhow::Error),
+    #[error("An error occurred: {0}")]
+    Other(#[from] Box<dyn std::error::Error + Send + Sync>),
```

Add a convenience `From<anyhow::Error>` impl to retain ergonomic conversion:

```rust
impl From<anyhow::Error> for CommsInterfaceError {
    fn from(err: anyhow::Error) -> Self {
        CommsInterfaceError::Other(err.into())
    }
}
```

• All existing `CommsInterfaceError::Other(_)` pattern matches (e.g. in the ban-reason logic) continue to work unchanged.
• No additional call sites need updating; every `Other` constructor still resolves correctly.
• (Optional) Rename the display string to `"Other error: {0}"` for consistency with `ValidationError`.

base_layer/core/src/blocks/historical_block.rs (1)
31-31: No remaining serialization of HistoricalBlock
A repository-wide search found no serializers, `to_json`/`from_json` calls, or serde derives referencing `HistoricalBlock`. Dropping `Serialize`/`Deserialize` here aligns with proto deprecation and is safe. LGTM.

applications/minotari_app_grpc/proto/block.proto (1)
86-87: Confirmed Cuckaroo mapping and stub regeneration

All gRPC-to-domain mappings for the new POW_ALGOS_CUCKAROO (3) are in place, and stubs will be regenerated automatically from the updated .proto.

• applications/minotari_node/src/grpc/base_node_grpc_server.rs – handles `PowAlgorithm::Cuckaroo` in hash rate, difficulty lookup, mining hash branches, etc.
• applications/minotari_miner/src/config.rs – maps `PowAlgorithm::Cuckaroo` → `PowAlgos::Cuckaroo` in requests.
• Consensus, validation, difficulty, genesis, and accumulated data modules all include `PowAlgorithm::Cuckaroo`.

No additional changes required.
base_layer/core/src/proof_of_work/siphash.rs (5)
29-33: Rotation macro implementation looks correct

The left rotation macro follows the standard pattern for 64-bit rotation. Since all usages have hardcoded shift values within valid range, no bounds checking is needed.

37-41: Standard SipHash-2-4 implementation

The function correctly implements a single SipHash-2-4 operation with the standard rotation constant of 21.

46-69: Block hashing implementation for Cuckaroo

The block-based hashing correctly implements the required functionality for Cuckaroo PoW. The XOR logic on lines 59-68 ensures proper hash dependencies within the block.

108-123: SipHash round function implementation verified

The round function correctly implements the SipHash round with standard rotation values (13, 16, 32, 17, 32) and a parameterizable rotation value `rot_e`. The use of wrapping arithmetic is appropriate for the algorithm.

130-143: Comprehensive unit tests with reference values

The tests provide good coverage with specific expected values that serve as regression tests. These values likely match the reference implementation from Grin.
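For readers unfamiliar with the primitive under review, here is a minimal, hedged sketch of SipHash-2-4 in the keyed form Cuckoo-cycle style PoW uses (the four 64-bit state words are supplied directly as the key). This is illustrative only, not the crate's actual `siphash.rs`; round structure and rotation constants follow the published SipHash design:

```rust
// One SipHash round over the 4-word state, using wrapping arithmetic.
fn sipround(v: &mut [u64; 4]) {
    v[0] = v[0].wrapping_add(v[1]);
    v[1] = v[1].rotate_left(13);
    v[1] ^= v[0];
    v[0] = v[0].rotate_left(32);
    v[2] = v[2].wrapping_add(v[3]);
    v[3] = v[3].rotate_left(16);
    v[3] ^= v[2];
    v[0] = v[0].wrapping_add(v[3]);
    v[3] = v[3].rotate_left(21);
    v[3] ^= v[0];
    v[2] = v[2].wrapping_add(v[1]);
    v[1] = v[1].rotate_left(17);
    v[1] ^= v[2];
    v[2] = v[2].rotate_left(32);
}

/// SipHash-2-4 of a single 64-bit nonce under a pre-derived 4-word key:
/// 2 compression rounds, then 4 finalization rounds after xoring 0xff.
pub fn siphash24(keys: &[u64; 4], nonce: u64) -> u64 {
    let mut v = *keys;
    v[3] ^= nonce;
    sipround(&mut v);
    sipround(&mut v);
    v[0] ^= nonce;
    v[2] ^= 0xff;
    for _ in 0..4 {
        sipround(&mut v);
    }
    v[0] ^ v[1] ^ v[2] ^ v[3]
}

fn main() {
    let keys = [1u64, 2, 3, 4];
    println!("{:#018x}", siphash24(&keys, 42));
}
```

In Cuckoo-cycle variants this keyed hash is what maps a nonce (edge index) to graph node endpoints, which is why a keyed, cheap, well-distributed primitive is chosen over a full cryptographic hash.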
base_layer/core/src/blocks/accumulated_data.rs (4)
126-163: Cuckaroo difficulty accumulation logic correctly implemented

The builder correctly handles the new Cuckaroo PoW algorithm by:
- Adding the achieved difficulty to the appropriate accumulated difficulty based on the PoW algorithm
- Maintaining backward compatibility with existing algorithms (RandomXM, RandomXT, Sha3x)
- Properly handling difficulty overflow with error returns
171-173: Cuckaroo difficulty not included in total_accumulated_difficulty calculation

The total accumulated difficulty is calculated as the product of only the Monero RandomX, Tari RandomX, and SHA3x difficulties. The Cuckaroo difficulty is tracked separately but not included in this product. This design decision should be documented to clarify whether it is intentional for backward compatibility or consensus reasons.
Could you confirm if excluding Cuckaroo from the total_accumulated_difficulty calculation is intentional? This might affect chain selection logic.
227-239: Genesis block initialization correctly implemented

The genesis constructor properly initializes all fields with appropriate minimum values, including the new Cuckaroo difficulty field.

245-259: Getter methods for accumulated difficulties

The new getter methods provide clean access to all accumulated difficulty fields. Good API design.
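A simplified model (not the real builder; types and field names are invented) of the accumulation behaviour these comments describe: each algorithm gets its own accumulator, overflow is an error, and the chain-strength total multiplies only the RandomX-Monero, RandomX-Tari and SHA3x values while Cuckaroo is kept separate, matching the open question in the 171-173 comment:

```rust
#[derive(Clone, Copy)]
pub enum PowAlgorithm { RandomXM, RandomXT, Sha3x, Cuckaroo }

#[derive(Default)]
pub struct Accumulated {
    randomx_m: u128,
    randomx_t: u128,
    sha3x: u128,
    cuckaroo: u128,
}

impl Accumulated {
    /// Route the achieved difficulty into the per-algorithm accumulator;
    /// overflow is surfaced as an error rather than wrapping.
    pub fn add(&mut self, algo: PowAlgorithm, achieved: u128) -> Result<(), &'static str> {
        let slot = match algo {
            PowAlgorithm::RandomXM => &mut self.randomx_m,
            PowAlgorithm::RandomXT => &mut self.randomx_t,
            PowAlgorithm::Sha3x => &mut self.sha3x,
            PowAlgorithm::Cuckaroo => &mut self.cuckaroo,
        };
        *slot = slot.checked_add(achieved).ok_or("accumulated difficulty overflow")?;
        Ok(())
    }

    /// Product of the three "legacy" algorithms only; Cuckaroo is excluded.
    /// max(1) models the fact that real accumulated difficulties start at
    /// a minimum of 1, so an untouched algorithm does not zero the product.
    pub fn total(&self) -> Option<u128> {
        self.randomx_m
            .max(1)
            .checked_mul(self.randomx_t.max(1))?
            .checked_mul(self.sha3x.max(1))
    }
}

fn main() {
    let mut acc = Accumulated::default();
    acc.add(PowAlgorithm::Sha3x, 10).unwrap();
    acc.add(PowAlgorithm::Cuckaroo, 7).unwrap();
    println!("{:?}", acc.total());
}
```

If Cuckaroo is meant to contribute to chain selection, `total()` would need a fourth factor; if not, the exclusion deserves a code comment, which is the clarification the review asks for.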
common_sqlite/Cargo.toml (1)
10-12: SQLite bundling feature properly configured

The bundled feature correctly enables static linking of SQLite, which improves deployment flexibility by not requiring SQLite to be installed on target systems.
Also applies to: 31-31
base_layer/core/tests/helpers/database.rs (1)
54-54: Type conversion for BlockVersion

The addition of `.into()` correctly handles the conversion from the new `BlockVersion` enum to `u16`, maintaining type safety while preserving backward compatibility.

base_layer/core/src/consensus/consensus_manager.rs (1)
371-373: Chain strength tie-breaker ordering adds Cuckaroo after Sha3x — LGTM

The additional then().by_cuckaroo_cycle_difficulty() at the end is consistent with the intended tie-breaker order. Confirm this placement matches policy (i.e., cuckaroo only used when all prior criteria are equal).
If the intent is to prioritize Cuckaroo over some algos, adjust ordering accordingly. Otherwise, no change required.
base_layer/core/src/chain_storage/lmdb_db/mod.rs (1)
44-44: Publicly exporting lmdb_db::row_data — LGTM

Consistent with the typed/Versioned LMDB approach. No issues spotted.
base_layer/core/src/proof_of_work/proof_of_work_algorithm.rs (1)
99-100: Display mapping for Cuckaroo — LGTM

applications/minotari_app_grpc/src/conversions/consensus_constants.rs (1)
113-114: Explicit BlockVersion -> u16 -> grpc mapping — LGTM

The explicit conversion removes ambiguity introduced by the new BlockVersion type.

applications/minotari_miner/src/config.rs (1)
applications/minotari_miner/src/config.rs (1)
133-136: Miner PoW request supports Cuckaroo — LGTMbase_layer/core/src/proof_of_work/mod.rs (2)
58-60: Expose cuckaroo_pow under base_node — LGTM
85-87: Expose siphash under base_node — LGTMbase_layer/core/Cargo.toml (1)
91-91: siphasher version sanity-check results
- crates.io latest version is 1.0.1 – your Cargo.toml is already on the newest release
- No explicit MSRV is declared (rust_version = null) – ensure your Rust toolchain meets its implicit requirements or consult the crate’s repo/CI for a pinned MSRV
- Default features = ["std"] – no serde support enabled by default (enable serde_std or serde_no_std only if you need it)
base_layer/core/src/validation/helpers.rs (1)
41-41: Import looks correct

Importing cuckaroo_difficulty here is consistent with the new PoW path.
base_layer/core/src/test_helpers/mod.rs (2)
88-91: Version Into() usage is correct

Header constructor now receives the typed blockchain version. Good.

118-119: Consistent conversion for header.version

Aligns with BlockVersion typing across the codebase.
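The `.into()` calls noted throughout rely on conversions between `BlockVersion` and `u16`. A hedged sketch of what such conversions typically look like (variant set and error type are assumptions, not the exact `consensus_constants.rs` implementation):

```rust
use std::convert::TryFrom;

#[derive(Clone, Copy, Debug, PartialEq)]
pub enum BlockVersion { V0, V1, V2 }

// Infallible direction: every variant has a wire representation.
impl From<BlockVersion> for u16 {
    fn from(v: BlockVersion) -> u16 {
        match v {
            BlockVersion::V0 => 0,
            BlockVersion::V1 => 1,
            BlockVersion::V2 => 2,
        }
    }
}

// Fallible direction: unknown numbers from storage/network are rejected.
impl TryFrom<u16> for BlockVersion {
    type Error = String;

    fn try_from(v: u16) -> Result<Self, Self::Error> {
        match v {
            0 => Ok(BlockVersion::V0),
            1 => Ok(BlockVersion::V1),
            2 => Ok(BlockVersion::V2),
            other => Err(format!("unknown block version {}", other)),
        }
    }
}

fn main() {
    let raw: u16 = BlockVersion::V2.into();
    println!("{} -> {:?}", raw, BlockVersion::try_from(raw));
}
```

Making only the enum-to-integer direction infallible is the key design choice: headers constructed in code always serialize, while untrusted input must pass through `try_from`.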
base_layer/core/src/base_node/sync/header_sync/validator.rs (2)
325-326: Version conversion aligned with new BlockVersion

header.version = ..blockchain_version().into() is the expected change.

329-331: Centralized genesis accumulated data

Using BlockHeaderAccumulatedData::genesis(...) reduces boilerplate and keeps genesis semantics consistent. Good change.
base_layer/core/src/chain_storage/blockchain_database.rs (1)
2515-2516: All `set_accumulated_data_for_orphan` invocations now include the `version` argument, and the LMDB backend handles both V1 and V2 schemas correctly. No further changes required.

base_layer/core/tests/tests/block_validation.rs (2)
128-129: with_blockchain_version(BlockVersion::V0)

Switch to the enum is correct and clearer than a raw literal.

1294-1298: header.version conversion in tests

Matches the production path; helps avoid accidental drift. LGTM.
base_layer/core/src/base_node/comms_interface/inbound_handlers.rs (5)
51-59: Cuckaroo import integration

Import is correct and the module boundary aligns with proof_of_work layout.

296-299: header.version typed conversion

Good alignment with the new BlockVersion API.
643-649: Cuckaroo min-difficulty check is wired correctly
- Uses consensus constants at new_block.header.height.
- Computes achieved difficulty via cuckaroo_difficulty.
Looks correct.
Please ensure cuckaroo_cycle_length() and cuckaroo_edge_bits() are stable across potential version changes at boundary heights.
1070-1087: Target difficulty calculation unchanged

No Cuckaroo-specific path is needed here since the target window abstracts the algo; just noting correctness.

1-1092: No missing Cuckaroo match arms found

All `match` statements on `PowAlgorithm` either include the `Cuckaroo` variant explicitly or fall back to a catch-all arm that covers it. No further updates are needed.

applications/minotari_node/src/grpc/data_cache.rs (2)
140-143: Setter mirrors existing implementations — LGTM

Consistent with the other template setters; no issues found.

151-151: Field addition and Default init are consistent

Adding cuckaroo_new_block_template and initializing it with NewBlockTemplate::empty() aligns with the other cached templates.
Also applies to: 163-163
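The cached Cuckaroo template follows the same get-or-rebuild pattern the walkthrough's sequence diagram shows for the gRPC server. An illustrative tip-keyed cache in that spirit (field and method names are invented for the sketch, not the real `data_cache.rs` API):

```rust
use std::collections::HashMap;

pub struct TemplateCache {
    // algo name -> (tip hash the template was built against, template)
    entries: HashMap<String, (Vec<u8>, String)>,
}

impl TemplateCache {
    pub fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Reuse the cached template only while it matches the current chain
    /// tip; otherwise rebuild via the supplied closure and store the result.
    pub fn get_or_build<F: FnOnce() -> String>(&mut self, algo: &str, tip: &[u8], build: F) -> String {
        if let Some((cached_tip, template)) = self.entries.get(algo) {
            if cached_tip.as_slice() == tip {
                return template.clone(); // cache hit: same tip, reuse
            }
        }
        let template = build(); // miss or stale tip: rebuild and cache
        self.entries.insert(algo.to_string(), (tip.to_vec(), template.clone()));
        template
    }
}

fn main() {
    let mut cache = TemplateCache::new();
    let t1 = cache.get_or_build("cuckaroo", b"tip-a", || "template-1".to_string());
    let t2 = cache.get_or_build("cuckaroo", b"tip-a", || "template-2".to_string());
    println!("{} {}", t1, t2);
}
```

Keying on the tip hash rather than a timeout means a stale template can never be served across a reorg or new block, at the cost of one rebuild per tip change per algorithm.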
base_layer/core/tests/helpers/block_builders.rs (3)
125-125: Version conversion via Into() is correct

Using .into() for BlockVersion -> u16 matches the updated typing on BlockHeader::new. Good change.

271-271: Header version assignment now follows BlockVersion typing

Setting header.version using consensus.consensus_constants(...).blockchain_version().into() at block construction is correct and consistent across helpers.
Also applies to: 303-303, 342-344
209-209: accumulated_cuckaroo_difficulty is initialized in all genesis and test paths

Verified via `rg` that:

- The `BlockHeaderAccumulatedData::genesis` constructor in `accumulated_data.rs:231-238` sets `accumulated_cuckaroo_difficulty: AccumulatedDifficulty::min()`.
- All test block builders in `base_layer/core/tests/helpers/block_builders.rs` (lines 201-209 and 244-252) explicitly set `accumulated_cuckaroo_difficulty: AccumulatedDifficulty::min()`.
- LMDB row-data converters handle it: V1 defaults to `min()` (v1 → `BlockHeaderAccumulatedData`), V2 reads the stored value.

No further changes required.
base_layer/core/src/consensus/chain_strength_comparer.rs (2)
54-56: Switch to getters for accumulated difficulties — LGTM

Using accumulated_*_difficulty() getters via accumulated_data() keeps encapsulation consistent with recent changes.
Also applies to: 65-67, 76-78
132-134: Builder wiring for cuckaroo difficulty — LGTM

by_cuckaroo_cycle_difficulty correctly composes the new comparer into the chain.
base_layer/core/src/chain_storage/db_transaction.rs (2)
340-344: WriteOperation variant change — LGTM

Variant extended with version and data matches the updated storage requirements.

219-228: set_accumulated_data_for_orphan API change is sound and fully propagated

All call sites have been updated to include the new `block_version` argument. As an optional consistency improvement, you may rename the parameter from `block_version` to `blockchain_version` to align with other storage-layer APIs.

applications/minotari_node/src/grpc/base_node_grpc_server.rs (1)
301-302: LGTM! Cuckaroo integration follows existing patterns.

The Cuckaroo PoW algorithm is correctly integrated following the same patterns as existing algorithms (Sha3x, RandomXM, RandomXT) for:
- Hash rate moving average calculation
- Block template caching
- Mining hash extraction
Also applies to: 345-345, 902-928, 1024-1024, 1342-1342, 1592-1592, 1691-1691
base_layer/core/src/consensus/consensus_constants.rs (1)
652-652: Historical error documented but not fixed - needs clarification.

The comment indicates this should be V1 but remains V0. Consider either:
- Fixing it to V1 if backward compatibility allows
- Explaining why it must remain V0 despite being incorrect
base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (11)
256-257: DB naming and enumeration updates for V2 are correct

New V2 DB names for accumulated data (headers and orphans) are added and included in the global name list. This keeps builder/enumeration consistent.
Also applies to: 275-276, 296-297, 313-314
352-353: Builder registers the new V2 databases — verify env maxdbs margin

Registration of the new V2 DBs looks correct. Please just confirm that `set_max_number_of_databases(40)` comfortably exceeds the total in `build_lmdb_store` (including dupsort indexes) and any future additions.

Also applies to: 369-370
456-458: Typed DB fields for per-version accumulated data and orphans

Good move to strongly type header accumulated data (V1/V2) and orphan accumulated data (V1/V2), and to add a typed orphans store.
Also applies to: 490-499
536-544: Instantiation of typed DB wrappers

Constructors for typed wrappers look correct; DB handles and names line up with the constants.
Also applies to: 559-567
680-682: WriteOperation payload now includes version for orphan accumulated data

This is the right call; you need the version to route to the correct typed store. The insertion path correctly forwards the header version.
Also applies to: 685-689
2589-2594: fetch_chain_header_by_height passes header.version to select the correct per-version DB

This is the right approach; it ensures we don't deserialize rows into the wrong schema.

2612-2618: fetch_header_accumulated_data(hash) resolves height → header.version → typed fetch

LGTM; avoids guessing the version by routing through the canonical header.

2632-2642: fetch_chain_header_in_all_chains: typed orphan lookup + per-version accumulated data

Good consistency with the typed orphan store and per-version accumulated data retrieval.

2677-2687: fetch_header_containing_kernel_mmr uses header.version for correct per-version read

LGTM; consistent with other fetch paths.

2886-2891: fetch_last_chain_header and fetch_tip_header: correct per-version accumulated data retrieval

Both methods pick the per-version DB using the header version. Looks correct.
Also applies to: 2909-2916
3031-3037: Orphan fetch paths consistently use per-version accumulated data
- fetch_orphan_chain_tip_by_hash
- fetch_strongest_orphan_chain_tips
- fetch_orphan_chain_block
All consistently select accumulated data by orphan version. LGTM.
Also applies to: 3078-3084, 3123-3135
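The per-version routing these fetch paths implement can be modeled in a few lines. This is a toy sketch only (HashMaps stand in for the typed LMDB databases; all names are invented): the row schema depends on the block version, so reads pick the matching typed table, and V1 rows, which predate Cuckaroo, default the missing field to the minimum.

```rust
use std::collections::HashMap;

pub enum BlockVersion { V1, V2 }

pub struct AccumulatedV1 { pub total: u128 }
pub struct AccumulatedV2 { pub total: u128, pub cuckaroo: u128 }

pub struct Store {
    pub v1: HashMap<Vec<u8>, AccumulatedV1>,
    pub v2: HashMap<Vec<u8>, AccumulatedV2>,
}

impl Store {
    pub fn new() -> Self {
        Self { v1: HashMap::new(), v2: HashMap::new() }
    }

    /// Route the read through the table matching the header's version.
    pub fn fetch_cuckaroo(&self, version: &BlockVersion, hash: &[u8]) -> Option<u128> {
        match version {
            // V1 schema has no cuckaroo column: default to the minimum (0 here).
            BlockVersion::V1 => self.v1.get(hash).map(|_| 0),
            BlockVersion::V2 => self.v2.get(hash).map(|row| row.cuckaroo),
        }
    }
}

fn main() {
    let mut store = Store::new();
    store.v1.insert(b"old".to_vec(), AccumulatedV1 { total: 5 });
    store.v2.insert(b"new".to_vec(), AccumulatedV2 { total: 9, cuckaroo: 3 });
    println!("{:?}", store.fetch_cuckaroo(&BlockVersion::V2, b"new"));
}
```

Resolving the version from the canonical header before the typed fetch, as the reviewed code does, is what prevents deserializing a V2 row with the V1 schema or vice versa.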
| benches = ["base_node"] | ||
| ledger = ["minotari_ledger_wallet_comms"] | ||
| metrics = ["tari_metrics"] | ||
| sqlite-bundled = ["tari_common_sqlite/bundled"] |
💡 Verification agent
🧩 Analysis chain
Confirm feature wiring and consumer crates
sqlite-bundled = ["tari_common_sqlite/bundled"] looks fine. Please verify that:
- Consumers opt-in consistently (docs/release notes).
- No unintended default feature enabling.
Run a quick check to see where it’s referenced.
🏁 Script executed:
#!/bin/bash
rg -n --context 2 'sqlite-bundled' || true

Length of output: 317
🏁 Script executed:
#!/bin/bash
# 1. Check for default feature configuration in base_layer/core/Cargo.toml
rg -n '^\s*default\s*=' -n 5 base_layer/core/Cargo.toml || echo "No default features found"
# 2. Search documentation for any mention of the new feature
rg -n 'sqlite-bundled' -n 5 docs || echo "No docs references found"

Length of output: 348
Add documentation and release notes for the new sqlite-bundled feature
The feature is correctly wired (not enabled by default), but there’s no mention of it in any docs or changelog. Please:
- Update your project’s documentation (README.md or a dedicated docs/ section) to describe the
sqlite-bundledfeature and how to enable it. - Add an entry to your CHANGELOG or RELEASE_NOTES detailing the addition of
sqlite-bundled. - Ensure any consuming crates (e.g., wallet, node, API) explicitly opt in by adding
under the dependency on this crate in their Cargo.toml.
features = ["sqlite-bundled"]
Once these updates are made, consumers will clearly know how to opt in and there will be no surprises at release.
🤖 Prompt for AI Agents
In base_layer/core/Cargo.toml around line 28, the new optional feature
"sqlite-bundled" is declared but there is no documentation or release notes; add
a short entry to the project README (or docs/) explaining what sqlite-bundled
does and how to enable it, add an entry to CHANGELOG or RELEASE_NOTES describing
the new feature and its implications, and update consuming crates' Cargo.toml
examples (wallet, node, api) to show how to opt in by adding features =
["sqlite-bundled"] under the dependency on this crate so consumers know to
explicitly enable it.
| use ProtoNodeCommsResponse::{BlockResponse, FetchMempoolTransactionsByExcessSigsResponse}; | ||
| let response = match self { | ||
| BlockResponse(block) => NodeCommsResponse::Block(Box::new(block.try_into()?)), | ||
| HistoricalBlocks(blocks) => { | ||
| let blocks = try_convert_all(blocks.blocks)?; | ||
| NodeCommsResponse::HistoricalBlocks(blocks) | ||
| }, | ||
| // HistoricalBlocks(blocks) => { | ||
| // let blocks = try_convert_all(blocks.blocks)?; | ||
| // NodeCommsResponse::HistoricalBlocks(blocks) | ||
| // }, | ||
| FetchMempoolTransactionsByExcessSigsResponse(response) => { |
HistoricalBlocks response variant removed here, but still produced elsewhere
This match no longer supports HistoricalBlocks. Inbound handlers still return NodeCommsResponse::HistoricalBlocks in several request paths, which will now hit the default “unsupported response” branch and error at runtime.
Please reconcile either by:
- Updating inbound request handlers to no longer return HistoricalBlocks, or
- Providing a new proto response for multi-block results (e.g., a repeated Block wrapper or streaming), and implementing the conversion here.
This is a cross-module breaking change; address before merging.
I can generate a patch to migrate the handlers off HistoricalBlocks (e.g., to streaming or a new repeated type) if you confirm the target API shape.
🤖 Prompt for AI Agents
In base_layer/core/src/base_node/proto/response.rs around lines 43 to 50, the
match arm for the HistoricalBlocks variant was removed but inbound handlers
still construct NodeCommsResponse::HistoricalBlocks, causing those code paths to
fall through to the unsupported-response branch at runtime; fix by either (A)
updating all inbound request handlers to stop returning HistoricalBlocks
(migrate them to the new streaming API or to return multiple single-Block
responses) and remove any remaining constructors of
NodeCommsResponse::HistoricalBlocks, or (B) add a new proto response variant
that represents multiple blocks (e.g., a repeated Block wrapper or explicit
HistoricalBlocks proto) and implement the conversion in this file to convert
that variant into NodeCommsResponse::HistoricalBlocks (or the new equivalent) so
all producers and consumers agree; ensure you update proto definitions,
regenerate bindings, and run compile-time checks to find all remaining
producers/consumers and update tests accordingly.
| #[error("Other error: {0}")] | ||
| Other(#[from] anyhow::Error), |
💡 Verification agent
🧩 Analysis chain
Avoid exposing anyhow::Error in public API; prefer a boxed std::error::Error
Exposing anyhow::Error in a public enum ties your library API to anyhow and makes future changes harder. Prefer a trait-object to keep the public surface generic while preserving source and message. You can keep the same display string.
Consider this change:
- #[error("Other error: {0}")]
- Other(#[from] anyhow::Error),
+ #[error("Other error: {0}")]
+ Other(#[from] Box<dyn std::error::Error + Send + Sync>),To preserve the current ergonomics (using ? on functions returning anyhow::Error), add a convenience impl:
impl From<anyhow::Error> for ValidationError {
fn from(err: anyhow::Error) -> Self {
Self::Other(err.into())
}
}

Optional consistency nit: CommsInterfaceError uses "An error occurred: {0}"; consider unifying the message across modules.
To check for any exhaustive matches that might be impacted by this new variant, run:
🏁 Script executed:
#!/bin/bash
# Find all explicit pattern matches and usages that may need review after adding a new variant
rg -n --hidden --glob '!target' 'ValidationError::' -S
rg -n --hidden --glob '!target' 'match\s*\(.*\)\s*\{' -S | sed -n 'p'

Length of output: 323147
Prefer a boxed std::error::Error over anyhow::Error in your public API
To decouple from anyhow and keep the enum generic, change the Other variant to hold a trait object:
#[error("Other error: {0}")]
- Other(#[from] anyhow::Error),
+ Other(#[from] Box<dyn std::error::Error + Send + Sync>),Then restore the ergonomic ? operator for functions returning anyhow::Error by adding:
impl From<anyhow::Error> for ValidationError {
fn from(err: anyhow::Error) -> Self {
Self::Other(err.into())
}
}

• Existing pattern matches like ValidationError::Other(_) (e.g. in error-to-ban_reason) will continue to compile and require no updates.
• After this change, your public variant is generic, while consumer code can still use ? with anyhow::Error.
🤖 Prompt for AI Agents
In base_layer/core/src/validation/error.rs around lines 171-172, change the
public enum variant to hold a boxed trait object instead of anyhow::Error and
add a From<anyhow::Error> impl: replace the line `Other(#[from] anyhow::Error)`
with `Other(Box<dyn std::error::Error + Send + Sync>)` (remove the #[from]
attribute), ensure std::error::Error is in scope, and add an implementation
`impl From<anyhow::Error> for ValidationError { fn from(err: anyhow::Error) ->
Self { Self::Other(err.into()) } }` so callers can still use the `?` operator
with anyhow errors.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

Copyright 2021 The Grin Developers

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Fix filename/location mismatch with NOTICE (LICENSE-APACHE vs LICENCE-APACHE).
NOTICE references LICENSE-APACHE, but this file is named LICENCE-APACHE and lives under base_layer/. Standardize by renaming this to LICENSE-APACHE at the repo root, or update NOTICE to point to the correct path. Avoiding mismatches helps downstream packaging and license scanners.
If renaming is not feasible now, update NOTICE as follows:
-See LICENSE-APACHE for the full license text.
+See base_layer/LICENCE-APACHE for the full license text.
🤖 Prompt for AI Agents
In base_layer/LICENCE-APACHE lines 1-19 the filename and path mismatch the
NOTICE reference (NOTICE expects LICENSE-APACHE at repo root); fix by either
renaming and moving this file to LICENSE-APACHE at the repository root with the
same contents and updating any references, or update NOTICE to reference
base_layer/LICENCE-APACHE (or correct the spelling to LICENSE-APACHE in both
NOTICE and this file path) so the name/path and NOTICE entry match; ensure
spelling is "LICENSE-APACHE" if standardizing and verify license scanners pick
up the new location.
| The following file(s) are subject to the Apache 2.0 License: | ||
|
|
||
| - [base_layer/core/src/proof_of_work/siphash.rs] | ||
|
|
||
| See LICENSE-APACHE for the full license text. |
💡 Verification agent
🧩 Analysis chain
Ensure all Apache-2.0-derived files are listed and fix the license path.
- If cuckaroo/c29 PoW code is derived from Grin, list those files here too (e.g., base_layer/core/src/proof_of_work/cuckaroo_pow.rs) to meet attribution obligations.
- The path currently says LICENSE-APACHE, but the file added is base_layer/LICENCE-APACHE. Either rename the file or update this reference.
Proposed quick fix if not renaming:
-See LICENSE-APACHE for the full license text.
+See base_layer/LICENCE-APACHE for the full license text.

Optionally enumerate all applicable files (siphash.rs, and any c29/cuckaroo-derived modules).
Run this to find Apache-2.0 attributions and Grin-origin headers:
🏁 Script executed:
#!/bin/bash
rg -n --hidden --glob '!*target/*' -e 'Apache-2\.0|SPDX-License-Identifier:.*Apache-2\.0|The Grin Developers|Copyright.*Grin'

Length of output: 4609
🏁 Script executed:
#!/bin/bash
# List all files in the proof_of_work directory to verify which should be in NOTICE
find base_layer/core/src/proof_of_work -maxdepth 1 -type f -printf "%f\n"

Length of output: 296
Update NOTICE to include cuckaroo_pow.rs and correct license path

The NOTICE file currently only lists siphash.rs and refers to LICENSE-APACHE, but:

- `cuckaroo_pow.rs` also carries Grin-derived Apache-2.0 code and must be attributed.
- The on-disk file is named `base_layer/LICENCE-APACHE`, so the reference in NOTICE should be updated.
Apply the following patch to NOTICE (around lines 8–12):
-The following file(s) are subject to the Apache 2.0 License:
-
-- [base_layer/core/src/proof_of_work/siphash.rs]
+The following file(s) are subject to the Apache 2.0 License:
+- base_layer/core/src/proof_of_work/siphash.rs
+- base_layer/core/src/proof_of_work/cuckaroo_pow.rs
-See LICENSE-APACHE for the full license text.
+See base_layer/LICENCE-APACHE for the full license text.
🤖 Prompt for AI Agents
In NOTICE around lines 8 to 12, the file currently lists only
base_layer/core/src/proof_of_work/siphash.rs and references LICENSE-APACHE;
update the notice to also include
base_layer/core/src/proof_of_work/cuckaroo_pow.rs as a file subject to the
Apache-2.0 License and change the license filename reference from LICENSE-APACHE
to the on-disk name base_layer/LICENCE-APACHE; make these two textual edits so
the NOTICE accurately attributes both source files and points to the correct
licence file path.
Actionable comments posted: 1
🔭 Outside diff range comments (1)
applications/minotari_node/src/commands/command/get_block.rs (1)
41-43: Help text is now incorrect; JSON is not supported. Update clap help.

The message still claims JSON support. Update to avoid misleading users.
```diff
- /// Supported options are 'json' and 'text'. 'text' is the default if omitted.
+ /// Only 'text' is supported. 'text' is the default if omitted.
```

Optionally, to prevent users from selecting json at parse-time, consider restricting accepted values at the arg level (if feasible with your Format type), or remove the Json variant from Format entirely.
♻️ Duplicate comments (3)
base_layer/core/src/consensus/chain_strength_comparer.rs (1)
80-89: Cuckaroo cycle comparer added; previous double-reference issue resolved

The extra ampersand noted earlier is no longer present; cmp now receives &T, not &&T. Consider adding a brief doc comment clarifying ordering semantics (higher difficulty => "greater") and a focused unit test covering tie-break and strictly-greater cases.
Example doc comment:
- “Compares by accumulated Cuckaroo cycle difficulty using natural ordering.”
I can add a minimal test asserting Ordering::Greater/Less for representative values if you’d like.
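Such a test could be built against a reduced stand-in for the chained comparer (tuples replace the real chain type; this is a sketch, not the `chain_strength_comparer.rs` API): compare by the primary accumulated difficulty first, and consult the cuckaroo difficulty only on a tie.

```rust
use std::cmp::Ordering;

/// tuple = (primary accumulated difficulty, cuckaroo accumulated difficulty)
pub fn compare_chains(a: (u128, u128), b: (u128, u128)) -> Ordering {
    // then_with is only evaluated when the primary comparison is Equal,
    // mirroring how by_cuckaroo_cycle_difficulty is chained last.
    a.0.cmp(&b.0).then_with(|| a.1.cmp(&b.1))
}

fn main() {
    // Tie on the primary difficulty: cuckaroo decides.
    println!("{:?}", compare_chains((5, 1), (5, 2)));
}
```

The assertions worth covering are exactly the ones suggested above: a tie on the prior criteria resolved by cuckaroo, a strictly-greater primary that ignores cuckaroo, and full equality.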
applications/minotari_node/src/commands/command/get_block.rs (2)
82-82: Use println! for user-facing message (as per prior review agreement).

Earlier feedback agreed to prefer println! here. Switch to stdout for consistency.
- Format::Json => eprintln!("JSON format not supported for blocks in this command"), + Format::Json => println!("JSON format not supported for blocks in this command"),
99-99: Use println! for user-facing message (as per prior review agreement).

Make this consistent with the other branches and prior feedback.
- Format::Json => eprintln!("JSON format not supported for blocks in this command"), + Format::Json => println!("JSON format not supported for blocks in this command"),
🧹 Nitpick comments (4)
applications/minotari_node/src/commands/command/get_block.rs (1)
90-93: Parity with height path: include accumulated data in by-hash Text output. For a consistent UX, mirror the height-based output that prints accumulated data.

- Some(block) => match format {
-     Format::Text => println!("{}", block),
+ Some(block) => match format {
+     Format::Text => {
+         println!("{}", block);
+         let block_data = self.blockchain_db.fetch_block_accumulated_data(*block.hash()).await?;
+         println!("-- Accumulated data --");
+         println!("{}", block_data);
+     },
      Format::Json => panic!("JSON format not supported for blocks in this command"), },

base_layer/core/src/proof_of_work/cuckaroo_pow.rs (3)
121-121: Clean up the double comment.

- // // Generate the hasher.
+ // Generate the hasher.
136-168: Remove debug statements from test helper. Debug statements should be removed even from test code to keep the codebase clean.

fn pack_nonces(uncompressed: &[u64], bit_width: u8) -> Vec<u8> {
    let mut target = vec![0u8; (uncompressed.len() * bit_width as usize + 7) / 8];
-   dbg!(&target);
    let mut compressed = target.as_mut_slice();
    let mut mini_buffer = 0u64;
    let mut remaining = 64;
    for el in uncompressed {
-       dbg!(el, bit_width, remaining, mini_buffer);
        mini_buffer |= el << (64 - remaining);
-       dbg!(format!("mini_buffer: {:064b}", mini_buffer));
        if bit_width < remaining {
            remaining -= bit_width;
        } else {
            compressed[..8].copy_from_slice(&mini_buffer.to_le_bytes());
            compressed = &mut compressed[8..];
            mini_buffer = el >> remaining;
            remaining = 64 + remaining - bit_width;
        }
    }
    let mut remainder = compressed.len() % 8;
    if remainder == 0 {
        remainder = 8;
    }
    if mini_buffer > 0 {
        compressed[..].copy_from_slice(&mini_buffer.to_le_bytes()[..remainder]);
    }
-   dbg!(&target);
-   dbg!(format!("target: {:08b}", target[0]));
-   dbg!(format!("target: {:08b}", target[1]));
    target
}
180-183: Remove redundant nonce validation. The nonce is already masked with node_mask on line 180, so it cannot exceed node_mask.

  let nonce = mini_buffer & node_mask;
- if nonce > node_mask {
-     return Err(CuckarooVerificationError::NonceTooLarge);
- }
  nonces.push(nonce);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- applications/minotari_node/src/commands/command/get_block.rs (2 hunks)
- base_layer/core/src/consensus/chain_strength_comparer.rs (4 hunks)
- base_layer/core/src/proof_of_work/cuckaroo_pow.rs (1 hunk)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-07-15T12:23:14.650Z
Learnt from: hansieodendaal
PR: tari-project/tari#7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
Applied to files:
applications/minotari_node/src/commands/command/get_block.rs
🧬 Code Graph Analysis (1)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (3)
base_layer/core/src/proof_of_work/siphash.rs (3)
- digest (104-106), siphash_block (46-69), new (85-87)

base_layer/core/src/blocks/block_header.rs (6)
- new (129-148), vec (349-355), vec (365-371), vec (402-408), vec (418-424), from (274-293)

base_layer/core/src/proof_of_work/difficulty.rs (1)
- big_endian_difficulty (68-71)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: ledger build tests
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: ci
- GitHub Check: Cucumber tests / FFI
- GitHub Check: cargo check with stable
🔇 Additional comments (10)
base_layer/core/src/consensus/chain_strength_comparer.rs (4)
54-56: Accessor-based compare looks correct. Single reference passed to cmp; consistent with other comparers and expected Ord usage.
65-67: LGTM: Corrected to use accessor and single reference. The comparison is symmetric with the Monero one and maintains natural ordering.

76-78: LGTM: Accessor + cmp usage is consistent. Matches the pattern used elsewhere and should behave as intended.
132-134: Confirmed: by_cuckaroo_cycle_difficulty is wired and exercised.
– Invocation found in base_layer/core/src/consensus/consensus_manager.rs:372
No further changes needed; this comparer is active in the consensus build path.

applications/minotari_node/src/commands/command/get_block.rs (1)
66-85: Confirm exit semantics for unsupported JSON format

The code currently handles Format::Json in three places with mixed behaviors (two print-and-continue, one panic):

• In get_block (line 82):
Format::Json => eprintln!("JSON format not supported for blocks in this command"),

• In get_block_by_hash → Some branch (line 92):
Format::Json => panic!("JSON format not supported for blocks in this command"),

• In get_block_by_hash → None branch (line 99):
Format::Json => eprintln!("JSON format not supported for blocks in this command"),

Which of the following should we adopt?
- Print a message and return Ok(()) (exit code 0), or
- Return a dedicated error to signal failure (non-zero exit)?

If you’d like the latter, introduce a new error variant:

#[derive(Error, Debug)]
enum ArgsError {
    #[error("Block not found at height {height}")]
    NotFoundAt { height: u64 },
+   #[error("Unsupported output format: {0}")]
+   UnsupportedFormat(&'static str),
}

// Replace all `Format::Json` branches with:
- Format::Json => eprintln!("JSON format not supported for blocks in this command"),
+ Format::Json => return Err(ArgsError::UnsupportedFormat("json").into()),

Please confirm which approach you prefer so we can apply the change consistently.
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (5)
51-77: Well-structured error handling enum. The error variants cover all necessary validation cases with clear, descriptive messages.
198-223: Correct implementation of nonce verification and edge generation. The function properly validates nonces and generates edges using the siphash algorithm as expected for Cuckaroo.
225-298: Excellent cycle verification implementation. The function thoroughly validates all cycle properties including connectivity, edge uniqueness, and proper closure. The XOR sum check for endpoint matching is particularly elegant.
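For readers following along, the three checks praised here (XOR sum of endpoints, degree-2 nodes, one closed walk using every edge once) can be sketched standalone. This is an illustrative toy, not the PR's verify_from_edges: it takes already-derived (u, v) edge pairs and ignores the bipartite u/v node partition of the real Cuckaroo graph (which, among other things, excludes self-loops).

```rust
use std::collections::{HashMap, HashSet};

// Toy version of the cycle checks: XOR of all endpoints must cancel, every
// touched node must have degree exactly 2, and one walk must consume every
// edge and return to the start. Illustrative only.
fn is_valid_cycle(edges: &[(u64, u64)], cycle_length: usize) -> bool {
    if edges.len() != cycle_length || cycle_length == 0 {
        return false;
    }
    // Necessary condition: each endpoint appears an even number of times.
    if edges.iter().fold(0u64, |acc, &(u, v)| acc ^ u ^ v) != 0 {
        return false;
    }
    // A single simple cycle means every node has degree exactly 2.
    let mut degree: HashMap<u64, usize> = HashMap::new();
    for &(u, v) in edges {
        *degree.entry(u).or_insert(0) += 1;
        *degree.entry(v).or_insert(0) += 1;
    }
    if degree.values().any(|&d| d != 2) {
        return false;
    }
    // Walk the cycle without reusing edges; it must close at the start.
    let mut unused: HashSet<usize> = (0..edges.len()).collect();
    let start = edges[0].0;
    let mut current = start;
    for _ in 0..cycle_length {
        let next = unused
            .iter()
            .copied()
            .find(|&i| edges[i].0 == current || edges[i].1 == current);
        match next {
            Some(i) => {
                unused.remove(&i);
                current = if edges[i].0 == current { edges[i].1 } else { edges[i].0 };
            },
            None => return false,
        }
    }
    current == start && unused.is_empty()
}

fn main() {
    // A single 4-cycle: 1-2-3-4-1.
    assert!(is_valid_cycle(&[(1, 2), (2, 3), (3, 4), (4, 1)], 4));
    // Two disjoint 2-cycles pass the XOR and degree tests but fail the walk.
    assert!(!is_valid_cycle(&[(1, 2), (2, 1), (3, 4), (4, 3)], 4));
    println!("cycle checks ok");
}
```

The "two disjoint 2-cycles" case is exactly why the walk is needed on top of the XOR-sum and degree checks: those two are necessary but not sufficient on their own.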
300-307: Clean difficulty calculation wrapper. Properly delegates to cuckaroo_result and converts to Difficulty type.
309-476: Comprehensive test coverage. The test suite thoroughly covers packing/unpacking logic, various edge cases, and all error conditions. Good use of descriptive test names.
SWvheerden
left a comment
Looks good, we just need to properly map the errors.
Difficulty validation failures need to be mapped to a proper bannable error, not ignored.
Actionable comments posted: 0
🧹 Nitpick comments (1)
base_layer/core/src/transactions/transaction_components/test.rs (1)
438-439: Nit: drop the unused binding and prefer expect for clearer failure context

Unwrap is fine in tests and provides better error details than assert(is_ok()). To tighten this up:
- Remove the unnecessary _unused binding.
- Use expect(...) to add context to the panic message.

Option A (preferred, adds context):

- // This will panic if not valid
- let _unused = validator.validate(&tx, None, None, u64::MAX).unwrap();
+ // Panics with context if validation fails
+ validator
+     .validate(&tx, None, None, u64::MAX)
+     .expect("internal consistency validation should succeed");

Option B (minimal change, keep unwrap, drop binding):

- // This will panic if not valid
- let _unused = validator.validate(&tx, None, None, u64::MAX).unwrap();
+ // This will panic if not valid
+ validator.validate(&tx, None, None, u64::MAX).unwrap();

Note: Per our retrieved learnings, panicking in tests for setup/validation is acceptable; this suggestion is just for clarity.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- base_layer/core/src/transactions/transaction_components/test.rs (1 hunk)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-07T06:49:32.174Z
Learnt from: hansieodendaal
PR: tari-project/tari#7189
File: integration_tests/tests/steps/wallet_cli_steps.rs:464-467
Timestamp: 2025-06-07T06:49:32.174Z
Learning: In integration test code, using panic! for setup validation and prerequisite checking is acceptable and often preferred, as it provides immediate feedback when test scenarios are incorrectly configured rather than allowing tests to continue in invalid states.
Applied to files:
base_layer/core/src/transactions/transaction_components/test.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: cargo check with stable
- GitHub Check: ci
- GitHub Check: Cucumber tests / FFI
- GitHub Check: ledger build tests
- GitHub Check: Cucumber tests / Base Layer
hansieodendaal
left a comment
Some comments - review of the actual C29 code will be in a consecutive review
Actionable comments posted: 2
🔭 Outside diff range comments (2)
base_layer/core/src/base_node/proto/response.rs (2)
43-66: Address missing HistoricalBlocks conversion
The current TryFrom<ProtoNodeCommsResponse> impl only handles BlockResponse and FetchMempoolTransactionsByExcessSigsResponse, yet the inbound handlers and local interface still emit HistoricalBlocks. Any proto-encoded HistoricalBlocks will now fail conversion at runtime.

Files emitting HistoricalBlocks:
- base_layer/core/src/base_node/comms_interface/inbound_handlers.rs
- lines ~187–190, 223–226, 257–261
- base_layer/core/src/base_node/comms_interface/local_interface.rs
- lines ~114–119, 133–139, 241–247, 256–262
- base_layer/core/tests/tests/node_comms_interface.rs
- multiple if let Ok(NodeCommsResponse::HistoricalBlocks(...)) assertions

Two remediation options:
- Remove all HistoricalBlocks producers and replace with repeated single-block responses or a streaming API.
- Reintroduce a ProtoNodeCommsResponse::HistoricalBlocks arm and map it back to NodeCommsResponse::HistoricalBlocks.
171-177: Remove unused FromIterator impl for HistoricalBlocks

The impl FromIterator<proto::core::HistoricalBlock> for proto::base_node::HistoricalBlocks at lines 171–177 in response.rs isn’t referenced anywhere in the codebase (no calls to collect::<HistoricalBlocks>, nor any conversions to that type). Deleting it will tighten the public surface without impacting functionality.

• File: base_layer/core/src/base_node/proto/response.rs
Remove lines 171–177 (the entire FromIterator<HistoricalBlock> implementation for HistoricalBlocks).
♻️ Duplicate comments (1)
base_layer/core/src/validation/header/header_full_validator.rs (1)
230-236: LGTM: strict Cuckaroo pow_data length check (addresses prior review). The exact-length check using consensus params is correct and prevents malformed/oversized headers early.
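The exact-length rule under discussion is simple arithmetic; a minimal sketch, assuming pow_data is one algorithm byte followed by the bit-packed proof (the helper names here are illustrative, not the validator's actual API):

```rust
// Expected pow_data size: 1 algorithm byte + ceil(cycle_length * edge_bits / 8)
// bytes of bit-packed nonces. Illustrative helper, not the PR's actual code.
fn expected_pow_data_len(cycle_length: usize, edge_bits: usize) -> usize {
    1 + (cycle_length * edge_bits + 7) / 8
}

fn main() {
    // C29-style parameters: a 42-nonce cycle with 29-bit edges.
    // 42 * 29 = 1218 bits -> 153 packed bytes, plus the algorithm byte.
    assert_eq!(expected_pow_data_len(42, 29), 154);
    // The validator rejects anything that is not exactly this length.
    println!("length check ok");
}
```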
🧹 Nitpick comments (2)
base_layer/core/src/proto/block.rs (2)
66-71: Stop binding unused accumulated data and remove commented field

You don’t need to bind _accumulated_data; also remove the commented-out accumulated_data field to avoid dead/commented code. Apply:

- let (block, _accumulated_data, confirmations) = block.dissolve();
+ let (block, _, confirmations) = block.dissolve();
  Ok(Self {
      confirmations,
-     // accumulated_data: Some(accumulated_data.into()),
      block: Some(block.try_into()?),
  })
80-99: Minor ergonomics: prefer iterator collects over manual push loops

Using iterator map(...).collect::<Result<Vec<_>, _>>()? reduces boilerplate and surfaces errors uniformly. For example:

let coinbase_kernels = new_block
    .coinbase_kernels
    .into_iter()
    .map(TryInto::try_into)
    .collect::<Result<Vec<_>, _>>()?;
let coinbase_outputs = new_block
    .coinbase_outputs
    .into_iter()
    .map(TryInto::try_into)
    .collect::<Result<Vec<_>, _>>()?;

Same applies in the reverse conversion block at lines 106-114.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- base_layer/core/src/base_node/proto/response.rs (2 hunks)
- base_layer/core/src/proof_of_work/error.rs (2 hunks)
- base_layer/core/src/proto/block.rs (2 hunks)
- base_layer/core/src/validation/header/header_full_validator.rs (1 hunk)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-11T15:37:17.344Z
Learnt from: hansieodendaal
PR: tari-project/tari#7387
File: base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs:2896-2927
Timestamp: 2025-08-11T15:37:17.344Z
Learning: The `update_accumulated_difficulty` method in `base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs` is migration-specific code that only runs during a once-off migration when existing ChainHeader data is present in LMDB. The consistency guarantees are upheld by the calling function `process_accumulated_data_for_height`, so additional validation within this method is redundant.
Applied to files:
base_layer/core/src/proto/block.rs
🧬 Code Graph Analysis (2)
base_layer/core/src/validation/header/header_full_validator.rs (2)
base_layer/core/src/consensus/consensus_manager.rs (1)
- consensus_constants (112-121)

base_layer/core/src/chain_storage/blockchain_database.rs (1)
- consensus_constants (461-464)
base_layer/core/src/proto/block.rs (2)
base_layer/core/src/blocks/historical_block.rs (2)
- block (60-62), confirmations (51-53)

base_layer/core/src/blocks/accumulated_data.rs (1)
- block (386-388)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: ledger build tests
- GitHub Check: Cucumber tests / FFI
- GitHub Check: ci
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: cargo check with stable
🔇 Additional comments (6)
base_layer/core/src/proof_of_work/error.rs (1)
50-52: Good addition: explicit Cuckaroo size-mismatch error and ban mapping. The dedicated PowError variant and its inclusion in get_ban_reason are appropriate and align with the other PoW validation errors.
base_layer/core/src/validation/header/header_full_validator.rs (1)
219-238: Verify Miner Uses Bit-Packed Cuckaroo Encoding

We’ve confirmed that:
- The header validator in header_full_validator.rs computes and enforces the exact byte length for a bit-packed sequence of cycle_length × edge_bits.
- The proof_of_work/cuckaroo_pow.rs module provides pack_nonces/unpack_nonces implementations (with comprehensive tests) that pack each nonce into edge_bits and pad to the expected length.
- All gRPC/proto conversions (in proto/block.proto, proto/block_header.rs, and minotari_app_grpc/src/conversions) treat pow_data as raw bytes with no additional encoding.

Please ensure that your miner (e.g. in the minotari_miner crate) uses the same pack_nonces logic when filling in pow_data on the NewBlockTemplate before submitting a solution. If the miner produces a fixed-width or differently packed byte array, valid Cuckaroo proofs will be rejected at validation time.

Locations to review:
- proof_of_work/cuckaroo_pow.rs – reference implementation of pack_nonces/unpack_nonces.
- minotari_miner/** – confirm where and how pow_data is generated for submission.
- gRPC conversion layers – already propagate raw bytes correctly.
base_layer/core/src/base_node/proto/response.rs (2)
30-30: Import cleanup LGTM. Dropping try_convert_all in favor of explicit maps is fine; ByteArray is still required for to_vec() on PrivateKey.
76-94: One-way narrowing: non-Block/FetchMempoolTransactions responses now hard-error

The catch-all error for unsupported NodeCommsResponse variants will surface immediately now that only two variants are supported. Ensure all callers have been migrated; otherwise, this becomes a breaking change at runtime.
base_layer/core/src/proto/block.rs (2)
23-23: Imports LGTM. Keeping both TryFrom and TryInto traits local is appropriate for the conversion impls in this module.

29-29: Proto/domain imports consistent with intent. Retaining HistoricalBlock here (while removing its accumulated data from proto) is consistent with the new surface.
use crate::{
    base_node::comms_interface::{FetchMempoolTransactionsResponse, NodeCommsResponse},
-   blocks::{Block, BlockHeader, HistoricalBlock},
+   blocks::{Block, BlockHeader},
💡 Verification agent
🧩 Analysis chain
HistoricalBlock removed from imports — verify producers are updated
Removing HistoricalBlock from this conversion layer implies all producers of NodeCommsResponse::HistoricalBlocks were removed or migrated. Please confirm there are no remaining producers.
Run this to locate any remaining uses:
🏁 Script executed:
#!/bin/bash
# Find any construction or mention of HistoricalBlocks and NodeCommsResponse::HistoricalBlocks
rg -n -A2 -B2 $'NodeCommsResponse::HistoricalBlocks|HistoricalBlocks\\b'Length of output: 7967
Add back HistoricalBlock to the proto response conversion
The NodeCommsResponse::HistoricalBlocks variant is still produced in multiple places (inbound handlers, local interfaces, and tests), so the conversion layer in base_layer/core/src/base_node/proto/response.rs must import HistoricalBlock again. Without it, the code won’t compile or correctly translate that variant.
Please restore the import on line 35:
- blocks::{Block, BlockHeader},
- blocks::{Block, BlockHeader},
+ blocks::{Block, BlockHeader, HistoricalBlock},

Then verify that all tests and NodeCommsResponse::HistoricalBlocks producers continue to compile and run as expected.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@@ base_layer/core/src/base_node/proto/response.rs
- blocks::{Block, BlockHeader},
+ blocks::{Block, BlockHeader, HistoricalBlock},
🤖 Prompt for AI Agents
In base_layer/core/src/base_node/proto/response.rs around line 35, the proto
conversion no longer imports HistoricalBlock, but
NodeCommsResponse::HistoricalBlocks is still produced elsewhere; restore the
import by adding HistoricalBlock back into the blocks import list on line 35 so
the conversion can handle that variant, then run cargo build and the test suite
to verify compilation and that all producers of
NodeCommsResponse::HistoricalBlocks still work.
  // First byte must be 3 for Cuckaroo
- if pow[0] != 3 {
+ if pow[0] != PowAlgorithm::Cuckaroo as u8 {
  fn verify_from_edges(uvs: &[(u64, u64)], cycle_length: usize) -> Result<(), CuckarooVerificationError> {
      let proof_size = uvs.len();
-     if proof_size != cycle_length {
+     if proof_size != cycle_length || cycle_length == 0 {

Then we can safely expect later on.
  blocks::BlockHeader,
  proof_of_work::{siphash::siphash_block, Difficulty},
  };

Nit: then we use this everywhere instead of 32:

+ const OUTPUT_SIZE: usize = 32;
  // Walk the cycle
  let mut visited_edges = HashSet::new();
- let mut visited_nodes = HashSet::new();

visited_nodes is never read.
  let mut previous = None;
  for _ in 0..cycle_length {
-     visited_nodes.insert(current);

Ditto, never read.
There was a problem hiding this comment.
Actionable comments posted: 0
♻️ Duplicate comments (4)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (4)
93-100: Consider using a constant for the output size. The magic number 32 appears multiple times in the code. Consider defining a constant for better maintainability.
116-123: Clean up the duplicate comment marker. Line 122 has a stray double-comment marker.
Apply this diff:
- // // Generate the hasher.
+ // Generate the hasher.
81-92: Add bounds checking before accessing pow[0]. The code accesses pow[0] without first checking if the byte array is empty, which could cause a panic. Apply this diff to add bounds checking:

  pub fn cuckaroo_result(
      header: &BlockHeader,
      required_cycle_length: u8,
      edge_bits: u8,
  ) -> Result<Vec<u8>, CuckarooVerificationError> {
      let pow = header.pow.to_bytes();
+     if pow.is_empty() {
+         return Err(CuckarooVerificationError::PowDataTooShort);
+     }
      let required_cycle_length = required_cycle_length as usize;
      // First byte must be 3 for Cuckaroo
      if pow[0] != 3 {
          return Err(CuckarooVerificationError::BlockHeaderInvalidPowAlgorithm);
      }
101-108: Add bounds checking before slicing pow_data. The code slices pow_data without verifying that it has enough bytes for the required cycle length. Apply this diff to add bounds checking:

  let pow_data = &pow[1..];
+ let required_bytes = required_cycle_length * edge_bits as usize / 8;
+ if pow_data.len() < required_bytes {
+     return Err(CuckarooVerificationError::PowDataTooShort);
+ }
  // Data after <required_cycle_length * edge_bits> is padding, it must be zero
- for &byte in &pow_data[required_cycle_length * edge_bits as usize / 8..] {
+ for &byte in &pow_data[required_bytes..] {
      if byte != 0 {
          return Err(CuckarooVerificationError::PowDataContainsNonZeroPadding);
      }
  }
🧹 Nitpick comments (3)
base_layer/core/src/proto/block.proto (3)
92-93: Good reservation of removed field; verify field 2 intent and reserve its name if applicable. Reserving tag 4 and the "accumulated_data" name prevents accidental reuse. Please confirm whether reserving tag 2 is intentional (i.e., it corresponded to a previously-removed field). If so, also reserve the field name for tag 2 to fully guard against reuse. If not, prefer not reserving it to keep field numbers available.
If tag 2 was not previously used in HistoricalBlock:
- reserved 2,4;
+ reserved 4;

If tag 2 was previously used (replace placeholder with the actual field name):

  reserved 2,4;
+ // Also reserve the old field name for 2 to prevent reuse:
+ reserved "previous_field_2_name";
15-20: Update PoW comments: add Cuckaroo mapping and fix Sha3x casing. The PR introduces Cuckaroo PoW; document it here and fix the casing inconsistency (Sha3X vs Sha3x).

- // 1 = Sha3X
+ // 1 = Sha3x
+ // 2 = Cuckaroo (C29)
- // Supplemental proof of work data. For example for Sha3x, this would be empty (only the block header is
- // required), but for Monero merge mining we need the Monero block header and RandomX seed hash.
+ // Supplemental proof of work data. For example for Sha3x, this would be empty (only the block header is
+ // required), for Monero merge mining we need the Monero block header and RandomX seed hash,
+ // and for Cuckaroo (C29) this contains the cycle proof bytes.
17-17: Consider using an enum for pow_algo (optional, potentially breaking). Defining an enum improves type safety and self-documentation. This is a breaking change for generated clients; only consider if acceptable within this PR’s breaking scope.

+// Algorithm used for PoW
+enum PowAlgorithm {
+  MONERO = 0;
+  SHA3X = 1;
+  CUCKAROO = 2;
+}
- uint64 pow_algo = 1;
+ PowAlgorithm pow_algo = 1;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- base_layer/core/src/base_node/comms_interface/error.rs (4 hunks)
- base_layer/core/src/proof_of_work/cuckaroo_pow.rs (1 hunk)
- base_layer/core/src/proof_of_work/error.rs (3 hunks)
- base_layer/core/src/proto/block.proto (1 hunk)
- base_layer/core/src/validation/error.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- base_layer/core/src/proof_of_work/error.rs
- base_layer/core/src/validation/error.rs
🧰 Additional context used
🧬 Code Graph Analysis (2)
base_layer/core/src/base_node/comms_interface/error.rs (3)
base_layer/core/src/base_node/sync/horizon_state_sync/error.rs (3)
- from (109-111), from (115-117), from (121-123)

base_layer/core/src/transactions/transaction_components/error.rs (1)
- from (97-99)

base_layer/core/src/proof_of_work/monero_rx/error.rs (1)
- from (83-85)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (3)
base_layer/core/src/proof_of_work/siphash.rs (3)
- digest (104-106), siphash_block (46-69), new (85-87)

base_layer/core/src/blocks/block_header.rs (6)
- from (274-293), new (129-148), vec (349-355), vec (365-371), vec (402-408), vec (418-424)

base_layer/core/src/proof_of_work/difficulty.rs (1)
- big_endian_difficulty (68-71)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: cargo check with stable
- GitHub Check: ci
- GitHub Check: ledger build tests
- GitHub Check: Cucumber tests / FFI
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (12)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (9)
1-50: LGTM! License headers and imports are properly configured. The file includes appropriate dual-license headers (Apache 2.0) and properly imports necessary dependencies for the Cuckaroo PoW implementation.
51-79: Well-structured error enum with comprehensive coverage. The CuckarooVerificationError enum provides clear, specific error variants for all validation scenarios with appropriate error messages.
109-115: LGTM! Proper nonce extraction and validation. The code correctly unpacks nonces and validates that any extra nonces beyond the required cycle length are zero.
124-137: LGTM! Correct nonce hashing for result generation. The function properly hashes the sorted nonces to produce the final 32-byte result.
166-192: LGTM! Robust nonce unpacking with proper validation. The unpack_nonces function correctly handles bit-level unpacking with appropriate bounds checking and validation for edge_bits values.
194-219: LGTM! Comprehensive nonce verification logic. The verify function properly validates nonces (bounds and ordering) and correctly generates edges using siphash_block with the appropriate parameters.
221-294: Excellent cycle verification implementation! The verify_from_edges function provides a robust and efficient cycle verification algorithm with comprehensive checks for:
- Cycle length validation
- XOR sum verification for endpoint matching
- Node degree validation (exactly 2 edges per node)
- Cycle traversal without revisiting edges
- Proper cycle closure
The implementation is well-structured and handles all edge cases correctly.
296-303: LGTM! Clean difficulty calculation wrapper. The function properly delegates to cuckaroo_result and converts the result to a Difficulty value.

305-472: Comprehensive test coverage! The test suite thoroughly covers:
- Nonce packing/unpacking for various bit widths
- Padding validation
- Edge case handling for 29-bit nonces
- All error conditions in the verification logic
- Both valid and invalid cycle scenarios
Great attention to testing edge cases and error conditions.
base_layer/core/src/base_node/comms_interface/error.rs (3)
34-34: LGTM! Proper import for the new error type. The import correctly adds CuckarooVerificationError to the existing proof_of_work imports.

80-81: LGTM! Well-integrated error variant. The new CuckarooVerificationError variant follows the established pattern with a proper #[from] derive and descriptive error message.

96-96: LGTM! Appropriate ban classification for Cuckaroo errors. Including CuckarooVerificationError in the long ban category is consistent with how other validation errors are handled, ensuring malicious or faulty nodes sending invalid Cuckaroo proofs are appropriately penalized.
Actionable comments posted: 1
♻️ Duplicate comments (2)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (2)
134-134: Fix comment formatting.
266-266: Remove unused variables. Also applies to: 275-275
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- base_layer/core/src/proof_of_work/cuckaroo_pow.rs (1 hunk)
🧰 Additional context used
🧬 Code Graph Analysis (1)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (4)
base_layer/core/src/proof_of_work/siphash.rs (3)
- digest (104-106), siphash_block (46-69), new (85-87)

base_layer/core/src/blocks/block_header.rs (6)
- from (274-293), new (129-148), vec (349-355), vec (365-371), vec (402-408), vec (418-424)

base_layer/core/src/proof_of_work/proof_of_work_algorithm.rs (1)
- try_from (67-74)

base_layer/core/src/proof_of_work/difficulty.rs (1)
- big_endian_difficulty (68-71)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: Cucumber tests / FFI
- GitHub Check: cargo check with stable
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: ci
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: ledger build tests
- GitHub Check: test (testnet, esmeralda)
🔇 Additional comments (11)
base_layer/core/src/proof_of_work/cuckaroo_pow.rs (11)
1-36: LGTM! Proper dual-licensing with Apache 2.0 and BSD-3-Clause. The header correctly attributes both Tari Project and Grin Developers copyright, and includes both Apache 2.0 (common in the Rust ecosystem) and BSD-3-Clause licensing terms, which is appropriate for code derived from Grin.
54-84: Well-structured error variants for comprehensive validation. The error enum provides excellent coverage of all possible Cuckaroo verification failures with descriptive messages. The From conversion for DifficultyError enables clean error propagation.
86-104: LGTM! Input validation and algorithm check are correct. The function properly validates:
- Minimum PoW data size (1 + packed_bytes)
- Algorithm identifier (must be 3 for Cuckaroo)
- Handles zero cycle length via NonZeroUsize

The packed bytes calculation (packed_size + 7) / 8 correctly rounds up for bit-to-byte conversion.
105-112: LGTM! Proper Blake2b hash initialization for siphash keys. The implementation correctly:
- Creates 32-byte Blake2b hasher
- Hashes nonce (little-endian) and mining hash
- Pre-allocates output buffer with proper size
- Uses finalize_variable with error handling
128-136: LGTM! Siphash key extraction and verification call. The code correctly:
- Extracts four 8-byte little-endian values as siphash keys from the 32-byte hash
- Calls verification with proper parameters
- Uses unwrap() safely since the slice lengths are guaranteed
137-149: LGTM! Final hash computation from sorted nonces. The implementation correctly:
- Creates a new Blake2b hasher for the final result
- Hashes all nonces in little-endian format
- Returns the 32-byte result
178-204: Efficient nonce unpacking with proper bit manipulation. The implementation correctly handles:
- Bit-level unpacking using a sliding window approach
- Node mask validation to ensure nonces fit in edge_bits
- Padding validation for excess nonces
- Capacity pre-allocation for performance
The algorithm efficiently processes bytes and extracts edge_bits-sized values using bit shifts and masks.
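As a rough illustration of that sliding-window approach, a standalone toy unpacker (LSB-first, little-endian byte order; the PR's unpack_nonces additionally enforces the node mask and zero padding, and this sketch assumes edge_bits <= 56 so the window never overflows):

```rust
// Toy LSB-first bit unpacker: read `count` values of `edge_bits` bits each
// from a packed byte stream. Returns None if the input is too short.
// Illustrative only; not the PR's unpack_nonces.
fn unpack_nonces(packed: &[u8], count: usize, edge_bits: u8) -> Option<Vec<u64>> {
    let mask = (1u64 << edge_bits) - 1;
    let mut nonces = Vec::with_capacity(count);
    let mut buffer: u64 = 0; // sliding window of bits read so far
    let mut bits_in_buffer = 0u32;
    let mut bytes = packed.iter();
    for _ in 0..count {
        // Refill the window one byte at a time until a full value is present.
        while bits_in_buffer < u32::from(edge_bits) {
            let byte = *bytes.next()?;
            buffer |= u64::from(byte) << bits_in_buffer;
            bits_in_buffer += 8;
        }
        nonces.push(buffer & mask);
        buffer >>= edge_bits;
        bits_in_buffer -= u32::from(edge_bits);
    }
    Some(nonces)
}

fn main() {
    // 3-bit values 1,2,3,4 packed LSB-first by hand give bytes [0xD1, 0x08].
    assert_eq!(unpack_nonces(&[0xD1, 0x08], 4, 3), Some(vec![1, 2, 3, 4]));
    // One byte holds only 8 bits: not enough for four 3-bit values.
    assert_eq!(unpack_nonces(&[0xD1], 4, 3), None);
    println!("unpack ok");
}
```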
206-231: Comprehensive nonce and edge validation. The verification function properly:
- Validates nonce bounds against node_mask
- Ensures strict ascending order of nonces
- Computes edges using siphash_block with correct parameters (rotation=21, xor_all=true)
- Extracts u,v coordinates using appropriate bit masking
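The masking and ordering checks above can be sketched with std only. The split of the 64-bit edge hash into low/high 32-bit halves follows Grin's Cuckaroo convention and is an assumption here, as are the function names:

```rust
// Derive the two bipartite endpoints of an edge from its 64-bit hash:
// low 32 bits masked for the "U" partition, high 32 bits for "V".
fn endpoints(edge_hash: u64, node_mask: u64) -> (u64, u64) {
    let u = edge_hash & node_mask;
    let v = (edge_hash >> 32) & node_mask;
    (u, v)
}

// Nonces must be strictly ascending and fit under the node mask.
fn check_nonces(nonces: &[u64], node_mask: u64) -> bool {
    nonces.windows(2).all(|w| w[0] < w[1]) && nonces.iter().all(|&n| n <= node_mask)
}

fn main() {
    let node_mask = (1u64 << 29) - 1; // 29 edge bits
    assert_eq!(endpoints(0x0000_0003_0000_0005, node_mask), (5, 3));
    assert!(check_nonces(&[1, 2, 3], node_mask));
    assert!(!check_nonces(&[2, 2, 3], node_mask)); // strictly ascending required
    println!("ok");
}
```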
233-309: Robust cycle verification with proper graph validation. The implementation correctly validates:
- Cycle length matches proof size
- XOR sum of all endpoints is zero (necessary condition for valid cycle)
- Each node has exactly 2 edges (degree constraint)
- Cycle traversal without revisiting edges
- Cycle closes back to start node
- All edges are used exactly once
The algorithm efficiently uses HashSet for O(1) edge lookups and maintains proper state during traversal.
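Two of the cheap structural checks listed above are easy to demonstrate in isolation, assuming edges are already resolved to (u, v) node pairs (a simplification; real code keeps the bipartite partitions separate):

```rust
use std::collections::HashMap;

// In any closed cycle, every node appears an even number of times among the
// edge endpoints, so XORing all endpoints must yield zero.
fn xor_sum_is_zero(edges: &[(u64, u64)]) -> bool {
    edges.iter().fold(0u64, |acc, &(u, v)| acc ^ u ^ v) == 0
}

// Every node on a simple cycle has degree exactly 2.
fn degrees_are_two(edges: &[(u64, u64)]) -> bool {
    let mut degree: HashMap<u64, u32> = HashMap::new();
    for &(u, v) in edges {
        *degree.entry(u).or_insert(0) += 1;
        *degree.entry(v).or_insert(0) += 1;
    }
    degree.values().all(|&d| d == 2)
}

fn main() {
    // a 3-cycle over nodes 1-2-3
    let cycle = [(1, 2), (2, 3), (3, 1)];
    assert!(xor_sum_is_zero(&cycle));
    assert!(degrees_are_two(&cycle));
    // a path is not a cycle: its endpoints have degree 1
    let path = [(1, 2), (2, 3)];
    assert!(!degrees_are_two(&path));
    println!("ok");
}
```

Note that the XOR check is only a necessary condition; the traversal step is still needed to rule out unions of smaller cycles.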
311-318: LGTM! Difficulty computation using big-endian hash. The function correctly delegates to `cuckaroo_result` and converts the output to difficulty using `big_endian_difficulty`, which is appropriate for PoW where leading zeros indicate higher difficulty.
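The big-endian mapping is essentially difficulty ≈ MAX / hash-as-integer, so leading zero bytes translate into high difficulty. A rough std-only illustration (real code such as `big_endian_difficulty` uses full 256-bit arithmetic; here only the first 16 bytes approximate the value):

```rust
// Approximate difficulty from a 32-byte hash using its 128-bit big-endian
// prefix. Illustrative only, not the consensus computation.
fn approx_difficulty(hash: &[u8; 32]) -> u128 {
    let prefix = u128::from_be_bytes(hash[..16].try_into().unwrap());
    if prefix == 0 {
        return u128::MAX; // all-zero prefix: effectively maximal difficulty
    }
    u128::MAX / prefix
}

fn main() {
    let easy = [0xffu8; 32]; // maximal hash value -> difficulty 1
    let mut hard = [0u8; 32];
    hard[15] = 1; // fifteen leading zero bytes -> very high difficulty
    assert_eq!(approx_difficulty(&easy), 1);
    assert!(approx_difficulty(&hard) > approx_difficulty(&easy));
    println!("ok");
}
```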
320-489: Comprehensive test coverage for all validation scenarios. The tests thoroughly cover:
- Nonce packing/unpacking with various bit widths (3, 29 bits)
- Padding validation for both packed data and unpacked nonces
- Edge case validation (nonce too large, not ascending)
- Cycle validation (endpoints matching, proper length, no revisited edges)
- Graph constraints (node degree, cycle closure)
The tests use appropriate data structures and edge cases, including randomized edge order testing.
hansieodendaal
left a comment
Overall, I am happy with this; there might still be some embedded/small issues, but testing on the esmeralda testnet with a suitable external miner will let them surface.
As a side note, we could introduce seed-derived, epoch-based tweaks in algorithm consensus (and let miners adjust their memory access accordingly). For example, every ~6 months (131,760 blocks) derive parameters from a header hash ~3 weeks prior (~15,120 blocks). This keeps verification cheap and predictable while periodically changing miner implementations.
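The epoch arithmetic in that suggestion can be sketched as follows. The block counts come from the comment itself (~6 months / ~3 weeks); the function name and scheme are hypothetical:

```rust
// Epoch length and seed lag, in blocks, per the reviewer's suggestion.
const EPOCH_BLOCKS: u64 = 131_760;
const LAG_BLOCKS: u64 = 15_120;

/// Height of the header whose hash would seed the tweak parameters for the
/// epoch containing `height`. Choosing a header ~3 weeks before the epoch
/// boundary gives miners time to adapt their implementations.
fn seed_height(height: u64) -> u64 {
    let epoch_start = (height / EPOCH_BLOCKS) * EPOCH_BLOCKS;
    epoch_start.saturating_sub(LAG_BLOCKS)
}

fn main() {
    assert_eq!(seed_height(0), 0); // genesis epoch has no earlier header
    assert_eq!(seed_height(131_760), 116_640); // seeded ~3 weeks before the boundary
    assert_eq!(seed_height(200_000), 116_640); // stable for the whole epoch
    println!("ok");
}
```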
utACK
Summary by CodeRabbit
New Features
Refactor
Bug Fixes
Documentation
Chores