feat: add sqlite peer_db #6963
Walkthrough

This update migrates peer management from LMDB and query-based frameworks to a SQLite-backed storage system. It removes the …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App/Service
    participant PeerManager (async)
    participant PeerStorageSql (sync)
    participant SQLite DB
    App/Service->>PeerManager: add_or_update_peer(peer)
    PeerManager->>PeerStorageSql: (spawn_blocking) add_or_update_peer(peer)
    PeerStorageSql->>SQLite DB: INSERT or UPDATE peer row
    SQLite DB-->>PeerStorageSql: Result<PeerId>
    PeerStorageSql-->>PeerManager: PeerId
    PeerManager-->>App/Service: PeerId
    App/Service->>PeerManager: get_seed_peers()
    PeerManager->>PeerStorageSql: (spawn_blocking) get_seed_peers()
    PeerStorageSql->>SQLite DB: SELECT peers WHERE is_seed
    SQLite DB-->>PeerStorageSql: Vec<Peer>
    PeerStorageSql-->>PeerManager: Vec<Peer>
    PeerManager-->>App/Service: Vec<Peer>
```
Test Results (Integration tests): 2 files (+2), 1 error, 2 suites (+2), 48m 32s ⏱️ (+48m 32s). Results for commit d53c735, compared against base commit 9e121b0.
Test Results (CI): 1 092 tests, 1 088 ✅, 15m 22s ⏱️. Results for commit da7add6.
Actionable comments posted: 19
🔭 Outside diff range comments (2)
comms/dht/src/actor.rs (1)
395-406: 🛠️ Refactor suggestion: Avoid multiple database hits for excluded-address checks

`get_peer_multi_addresses()` executes a query for the current node every time `check_if_addresses_excluded` is called (twice during startup). Although this is not a hot path, the call goes through Diesel/SQLite synchronously and therefore blocks the Tokio executor thread.

If you keep the helper async, prefer using `tokio::task::spawn_blocking` (or move the helper to a synchronous context) so DB I/O is shifted to a blocking thread pool:

```diff
-let addresses = peer_manager.get_peer_multi_addresses(&node_id)?;
+let addresses = tokio::task::spawn_blocking(move || peer_manager.get_peer_multi_addresses(&node_id))
+    .await
+    .map_err(|e| DhtActorError::PeerManagerError(e.into_panic().into()))??;
```

This prevents accidental starvation if the SQLite file grows or is on slow storage.
comms/core/src/connectivity/manager.rs (1)
670-709: 🛠️ Refactor suggestion ⚠️ Potential issue: Same blocking concern for `find_by_node_id`, `delete_peer`, and failure handling

`on_peer_connection_failure()` performs two synchronous DB calls while still executing on the Tokio worker thread. A malicious peer that intentionally causes repeated failures could starve the actor. Refactor as above (`spawn_blocking` or async DB) to insulate the event loop.

Also applies to: 820-820
♻️ Duplicate comments (1)
comms/core/src/protocol/messaging/test.rs (1)
405-406: See note above about concurrent access

The synchronous `add_peer` call here is executed on a background task spawned later in the test, reinforcing the need for interior synchronisation.
🧹 Nitpick comments (48)
common/config/presets/c_base_node_c.toml (1)
144-149: Clarify peer DB path comment and align wording with earlier sections

Nice to see the new SQLite-specific comment. Two small nit-picks:

- “Path to the SQLite data files” could mention that this path is relative to `data_dir` (like other path-style keys) to avoid confusion.
- A few lines above (12-20) we still describe LMDB and even have a “lmbd” typo. Readers might think the node still relies on LMDB for the peer DB. Consider amending or cross-referencing those comments to make it clear that only the blockchain database may still use LMDB, while the peer database is now SQLite.

No functional code change is required, but the docs will be clearer.
common/config/presets/d_console_wallet.toml (1)
183-188: Add hint that the path is relative to `data_dir` and avoid ambiguity with LMDB

Same remark as for the base-node preset: state explicitly that this path is relative to `data_dir` and is used exclusively for the new SQLite peer database. This avoids any assumption that LMDB is still in play for peers.

A tiny doc tweak keeps future maintainers from scratching their heads.
comms/dht/src/rpc/service.rs (1)
161-169: Double-check peer filtering logic with `STALE_PEER_THRESHOLD_DURATION`

`closest_n_active_peers` receives `Some(STALE_PEER_THRESHOLD_DURATION), true, None`. Depending on the implementation, combining

- `Some(STALE_PEER_THRESHOLD_DURATION)` – exclude peers that have not been active within this duration, and
- the hard-coded `true` flag (likely include only online peers or strict filtering)

can significantly reduce the candidate set. With a small network this may drop the returned peer count below `n`, causing poorer routing performance or more `NotEnoughPeers` errors upstream.

Please verify that the constant’s value (and the extra `true` flag) are appropriate for all call-sites – e.g. wallet, base-node, and integration tests – and that unit/prop tests cover the edge-case where fewer than `n` peers remain after filtering. If necessary, consider exposing these parameters to the caller or using sensible fall-backs (e.g. relax the staleness constraint when not enough peers are found).

applications/minotari_node/src/commands/command/test_peer_liveness.rs (1)
89-90: Handle `delete_peer` errors explicitly

The result of `peer_manager.delete_peer(&node_id)` is ignored. If the peer is not in the DB (or the call fails), the silent swallow makes diagnostics harder and still leaves the subsequent `add_peer` path unchanged.

```diff
-let _res = peer_manager.delete_peer(&node_id);
+if let Err(e) = peer_manager.delete_peer(&node_id) {
+    // Ignore “peer not found” but surface any real IO/DB errors
+    if !matches!(e.kind(), tari_comms::peer_manager::PeerManagerError::PeerNotFound) {
+        log::warn!(target: "commands::test_peer_liveness",
+            "Failed to delete peer {} before liveness test: {}", node_id, e);
+    }
+}
```

This keeps the intent (best-effort removal) while avoiding silent failure and removes the unused `_res` binding.

comms/core/tests/tests/substream_stress.rs (1)
67-68: Peer already present?

`add_peer(...).unwrap()` will panic if the peer was added earlier (e.g. by a previous test run re-using the same temp DB). Because the call is now synchronous, replacing `unwrap()` with an `expect_not_exists`-style helper or ignoring the “already exists” error would make the test more robust:

```diff
-node1.peer_manager().add_peer(node2.node_identity().to_peer()).unwrap();
+if let Err(e) = node1.peer_manager().add_peer(node2.node_identity().to_peer()) {
+    assert!(matches!(e.kind(), tari_comms::peer_manager::PeerManagerError::PeerExists),
+        "Unexpected error adding peer: {}", e);
+}
```

This keeps the test strict while eliminating a possible nondeterministic panic.

applications/minotari_console_wallet/src/ui/state/app_state.rs (1)
applications/minotari_console_wallet/src/ui/state/app_state.rs (1)
942-944: Log peer-lookup errors instead of silently discarding them

Errors returned by `peer_manager.find_by_node_id` are currently ignored, which can hide useful diagnostics when the peer DB is unavailable or corrupt. Capturing these errors at `debug` (or even `trace`) level adds virtually no overhead but greatly eases troubleshooting.

```diff
-if let Ok(Some(p)) = peer_manager.find_by_node_id(c.peer_node_id()) {
-    peers.push(p);
-}
+match peer_manager.find_by_node_id(c.peer_node_id()) {
+    Ok(Some(p)) => peers.push(p),
+    Ok(None) => {}, // not found – ignore
+    Err(e) => {
+        debug!(
+            target: LOG_TARGET,
+            "Failed to look up peer {}: {}",
+            c.peer_node_id(),
+            e
+        );
+    }
+}
```

common_sqlite/src/connection.rs (1)
132-138: Accept `Into<String>` to avoid unnecessary `String` allocations

`connect_memory_and_migrate` forces callers to allocate a `String`. Accepting any `Into<String>` (the same pattern `connect_memory` already uses internally) gives greater flexibility and may save an allocation for `&'static str` names used in tests.

```diff
-pub fn connect_memory_and_migrate(name: String, migrations: EmbeddedMigrations) -> Result<Self, StorageError> {
-    let conn = Self::connect_memory(name)?;
+pub fn connect_memory_and_migrate<S: Into<String>>(
+    name: S,
+    migrations: EmbeddedMigrations,
+) -> Result<Self, StorageError> {
+    let conn = Self::connect_memory(name.into())?;
```

comms/dht/src/network_discovery/discovering.rs (2)
76-81: Synchronous DB call inside async context may block reactor

`calc_region_threshold` now hits the SQLite-backed peer DB synchronously. During bootstrap this can execute thousands of distance calculations, blocking the Tokio worker thread and reducing overall throughput. Consider running heavy synchronous DB tasks on a blocking pool, e.g.

```rust
let threshold = tokio::task::spawn_blocking({
    let pm = self.context.peer_manager.clone();
    let node_id = self.context.node_identity.node_id().clone();
    move || pm.calc_region_threshold(&node_id, self.config().num_neighbouring_nodes, PeerFeatures::COMMUNICATION_NODE)
})
.await??;
```

Or expose a non-blocking variant inside `PeerManager`. This applies similarly to the calls below (`find_by_public_key`, `add_peer`).
205-217: Bulk peer insertions are now synchronous – watch for IO stalls

`find_by_public_key` and `add_peer` perform read/write SQL statements for every peer streamed from the remote. During large discovery rounds (hundreds of peers) this loop can monopolise an async worker thread for noticeable periods. If benchmarks show stalls, off-loading to `spawn_blocking` or batching inserts (e.g., using a transaction) would mitigate the issue.

applications/minotari_console_wallet/src/init/mod.rs (2)
350-355: Fix typo in error message (`Could not`, not `Could net`)

Minor but user-visible:

```diff
-format!("Could net get seed peers from peer manager: {}", err),
+format!("Could not get seed peers from peer manager: {}", err),
```
63-66: Remove now-unused `PeerQuery` import

`PeerQuery` was eliminated in the SQLite refactor; keeping the import (even under `#![allow(unused)]`) obscures the codebase and may confuse newcomers.

```diff
-use tari_comms::{
-    multiaddr::Multiaddr,
-    peer_manager::{Peer, PeerFeatures, PeerQuery},
-    types::CommsPublicKey,
-    NodeIdentity,
-};
+use tari_comms::{
+    multiaddr::Multiaddr,
+    peer_manager::{Peer, PeerFeatures},
+    types::CommsPublicKey,
+    NodeIdentity,
+};
```

applications/minotari_node/src/commands/command/list_peers.rs (1)
48-55: Gracefully handle unknown `filter` values

An unrecognised filter string currently lists all peers without letting the user know their filter was ignored. Consider warning or returning an error to avoid user confusion, e.g.:

```diff
-let features = filter.and_then(|value| match value.to_lowercase().as_str() {
+let features = filter.as_ref().and_then(|value| match value.to_lowercase().as_str() {
     "basenode" | "basenodes" | "base_node" | "base-node" | "bn" => Some(PeerFeatures::COMMUNICATION_NODE),
     "wallet" | "wallets" | "w" => Some(PeerFeatures::COMMUNICATION_CLIENT),
     _ => None,
 });
+if filter.is_some() && features.is_none() {
+    println!("Unknown filter '{}'; showing all peers", filter.unwrap());
+}
```

base_layer/core/src/base_node/state_machine_service/states/listening.rs (1)
183-207: Blocking SQLite calls inside a hot async loop may stall the runtime

`is_peer_banned` and `set_peer_metadata` are synchronous Diesel/SQLite operations executed directly inside the `Listening::next_event` async loop. On a busy node this loop runs frequently; blocking the core Tokio worker thread for DB IO can reduce liveness. Consider off-loading to a blocking thread:

```rust
let peer_banned = tokio::task::spawn_blocking({
    let peer_manager = shared.peer_manager.clone();
    let node_id = peer_metadata.node_id().clone();
    move || peer_manager.is_peer_banned(&node_id)
})
.await??;
```

Same for `set_peer_metadata`. If profiling shows the DB cost is negligible you can document this rationale instead.

comms/core/src/peer_manager/error.rs (1)
44-48: Added SQLite-specific error variants

The new `StorageError` and `SqlDatabaseError` variants have been added to handle SQLite-related errors. This allows proper error propagation from the new SQLite backend. However, there appears to be duplicate error descriptions between `StorageError` and `SqlDatabaseError`. Consider consolidating these variants or using more specific error messages to differentiate them.

```diff
-#[error("A problem has been encountered with the sql database: {0}")]
-StorageError(String),
-#[error("A problem has been encountered with the sql database: {0}")]
+#[error("A problem has been encountered with the storage layer: {0}")]
+StorageError(String),
+#[error("A problem has been encountered with the SQL database: {0}")]
 SqlDatabaseError(String),
```

comms/dht/src/actor.rs (2)
747-760: N² DB calls while filtering peers

Inside the loop we call `get_peer_multi_addresses(id)?` once per candidate. In worst-case broadcast (`BroadcastStrategy::Flood`) this can be thousands of calls, each hitting SQLite synchronously.

Two quick wins:

- Fetch all candidate addresses in one shot (e.g. a new `peer_manager.multi_addresses(&peers[..])` helper) and
- Re-use the `excluded_dial_addresses` predicate as a closure to `closest_n_active_peers` so the filtering happens in-DB.

Even a small refactor halves peer-selection latency:

```rust
let filtered_peers = peer_manager
    .filter_not_excluded(&peers, &config.excluded_dial_addresses)?;
```

This also removes a potential write-lock convoy on `PeerManager`.
878-878: Blocking DB call inside async context

`peer_manager.exists(&public_key)` is synchronous and may block the runtime. Consider wrapping heavy calls in `spawn_blocking` or providing async variants to avoid scheduler starvation.

comms/dht/src/network_discovery/ready.rs (2)
59-61: Synchronous DB access inside hot polling loop

`peer_manager.count()` is now synchronous; every discovery round executes it on the Tokio core. As rounds run on a tight interval, consider off-loading the call:

```rust
let num_peers = tokio::task::spawn_blocking(move || self.context.peer_manager.count())
    .await
    .unwrap_or(0);
```

This keeps the discovery state-machine CPU-bound and avoids blocking other tasks.
80-84: Random peer selection may return less than requested

`random_peers(max, exclude)` can legitimately return fewer peers than `max` when the DB has insufficient candidates (especially just after startup). Down-stream code assumes a non-empty set and transitions to `Idle` when `peers.is_empty()`. No issue here, but adding a debug explaining why the set is short guards future readers:

```rust
if peers.len() < self.config().network_discovery.max_sync_peers {
    debug!(target: LOG_TARGET, "Only {} eligible sync peers found", peers.len());
}
```

comms/core/src/peer_manager/peer_id.rs (1)
33-36: Potential modulo-bias and readability issue in `generate_peer_id_as_i64`

Using `% i64::MAX` keeps the value in range, but it introduces a (tiny) modulo bias because `u64::MAX` is not an exact multiple of `i64::MAX`. If perfectly uniform distribution is important (e.g. for sharding or hashing), consider re-rolling until the value is `< i64::MAX`.

```diff
-pub fn generate_peer_id_as_i64() -> i64 {
-    i64::try_from(generate_peer_key() % u64::try_from(i64::MAX).expect("infallible")).expect("infallible")
+pub fn generate_peer_id_as_i64() -> i64 {
+    // Re-roll until the sample fits into the signed range to avoid modulo bias
+    loop {
+        let k = generate_peer_key();
+        if k < i64::MAX as u64 {
+            return k as i64;
+        }
+    }
 }
```

comms/core/src/peer_manager/peer_features.rs (2)
68-72: Simpler & safer truncation helper

`from_bits_u32_truncate` currently masks with `0b1111_1111` and then `try_from`. Because the mask guarantees the value fits, the `try_from` can never fail—an `as` cast is clearer and avoids the `expect`.

```diff
-pub fn from_bits_u32_truncate(value: u32) -> Option<Self> {
-    PeerFeatures::from_bits(u8::try_from(value & 0b1111_1111).expect("will not fail"))
+pub fn from_bits_u32_truncate(value: u32) -> Option<Self> {
+    PeerFeatures::from_bits((value & u8::MAX as u32) as u8)
 }
```
74-77: `to_i32` could be a simple cast

Unless you explicitly need little-endian byte ordering, `self.bits() as i32` is shorter and endianness-agnostic. Consider:

```diff
-pub fn to_i32(&self) -> i32 {
-    i32::from_le_bytes([self.bits(), 0, 0, 0])
-}
+pub fn to_i32(&self) -> i32 {
+    self.bits() as i32
+}
```

comms/core/src/connection_manager/common.rs (3)
199-206: Graceful failure path on invalid feature bits

`from_bits_u32_truncate` will always return `Some(_)` because of truncation; the subsequent `ok_or` branch is therefore dead code. If you still want to error on unknown high bits, test with `(features & !0xFF) != 0` before truncating.

```rust
if features & !0xFF != 0 {
    return Err(PeerManagerError::InvalidPeerFeatures { bits: features }.into());
}
let features = PeerFeatures::from_bits_u32_truncate(features).unwrap();
```
291-296: Blocking synchronous DB call inside async context

`peer_manager.find_by_public_key` is now synchronous and may touch SQLite. Calling it directly inside an async task can block the Tokio executor and hurt throughput. Consider wrapping heavy, potentially blocking operations in `tokio::task::spawn_blocking` (or migrate to an async SQLite client).
322-323: Same blocking concern for `ban_peer`

`peer_manager.ban_peer` is synchronous; if it writes to disk it should be executed in a blocking task to avoid stalling the reactor.

comms/dht/examples/memory_net/utilities.rs (2)
636-640: Avoid unchecked `unwrap()` in helper – promote explicit failure context

`DbConnection::connect_memory_and_migrate(..)` can fail if the connection cannot be opened or the migrations bomb out. Because this helper is called in many examples/tests, an uncontrolled panic makes diagnostics harder. Provide an `expect(..)` message (or bubble the error upward) so call-sites get a meaningful hint.

```diff
-let db_connection = DbConnection::connect_memory_and_migrate(database_name, MIGRATIONS).unwrap();
-PeerDatabaseSql::new(db_connection)
+let db_connection = DbConnection::connect_memory_and_migrate(database_name, MIGRATIONS)
+    .expect("Failed to create in-memory SQLite peer database");
+PeerDatabaseSql::new(db_connection)
```

This tiny change keeps the convenience while greatly improving debuggability.
725-726: Provide context when seeding peers instead of raw `unwrap()`

A failure to insert a seed peer will abort the entire example with a cryptic panic message.

```diff
-comms.peer_manager().add_peer(peer).unwrap();
+if let Err(e) = comms.peer_manager().add_peer(peer) {
+    panic!("Unable to seed peer database: {}", e);
+}
```

Alternatively, log the error and continue depending on the demo’s tolerance.
base_layer/wallet_ffi/src/lib.rs (3)
6021-6027: Potential blocking DB calls inside an async loop

`peer_manager.find_by_node_id(..)` is now synchronous and, with the new SQLite backend, can touch the database. Calling it directly inside an async context risks blocking the Tokio executor for the duration of the SQLite query, which can hurt latency under load. Consider off-loading the synchronous work with `tokio::task::spawn_blocking` (or wrapping the whole map/collect in one call) so the async runtime remains responsive.

Example sketch:

```diff
-let mut public_keys = Vec::with_capacity(connections.len());
-for conn in connections {
-    if let Some(peer) = peer_manager.find_by_node_id(conn.peer_node_id())? {
-        public_keys.push(peer.public_key);
-    }
-}
+let public_keys = tokio::task::spawn_blocking(move || -> Result<Vec<CommsPublicKey>, PeerManagerError> {
+    let mut keys = Vec::with_capacity(connections.len());
+    for conn in connections {
+        if let Some(peer) = peer_manager.find_by_node_id(conn.peer_node_id())? {
+            keys.push(peer.public_key);
+        }
+    }
+    Ok(keys)
+})
+.await??;
```

This keeps the heavy synchronous code off the async worker threads.
7603-7608: Minor: pre-allocate vector to avoid extra reallocations

`peers.iter().map(..).collect::<Vec<_>>()` is fine, but you already know the final length (`peers.len()`). For large peer lists a small optimisation is to allocate the capacity upfront:

```diff
-let public_keys = peers.iter().map(|p| p.public_key.clone()).collect::<Vec<_>>();
+let mut public_keys = Vec::with_capacity(peers.len());
+public_keys.extend(peers.iter().map(|p| p.public_key.clone()));
```

Not critical, just a micro-optimisation.
9433-9437: Same blocking concern when falling back to seed peers

`peer_manager.get_seed_peers()` is synchronous and may hit SQLite. If this code runs inside an async task (which `wallet_start_recovery` usually does), the same blocking-in-async caveat applies as above – consider `spawn_blocking` or similar.

comms/core/src/protocol/messaging/test.rs (3)
69-75: Avoid re-implementing `random_name` in every test module

`random_name()` (and the exact implementation body) already exists in at least three other modules (`peer_manager.rs`, `builder/tests.rs`, `test_utils/peer_manager.rs`). Re-defining it here:

- inflates binary size,
- makes future behaviour-changes error-prone,
- increases maintenance overhead.

Consider moving a single implementation into `comms::test_utils::random` (or similar) and re-exporting it in the affected crates.
69-73: Minor: use `char::from` instead of `as char` for clarity

Casting the sampled `u8` with `as char` works but is a little opaque and can trip clippy’s `cast_possible_truncation` lint. A clearer, zero-cost alternative:

```diff
-.map(|_| rng.sample(Alphanumeric) as char)
+.map(|_| char::from(rng.sample(Alphanumeric)))
```
77-81: `create_peer_manager` duplicates logic found elsewhere

The same 4-line helper exists in several crates (`test_helpers/mod.rs`, `dht/tests/harness.rs`, etc.). Extracting a common helper (e.g. `test_utils::create_memory_peer_manager()`) removes duplication and guarantees all tests use identical migration logic.

comms/core/src/peer_manager/peer.rs (2)
59-64: `to_i32` can be simplified (and avoid endian confusion)

```rust
pub fn to_i32(&self) -> i32 {
    i32::from_le_bytes([self.bits(), 0, 0, 0])
}
```

The extra byte shuffling is unnecessary; the `u8` → `i32` cast is well-defined and endianness-agnostic:

```diff
-i32::from_le_bytes([self.bits(), 0, 0, 0])
+self.bits() as i32
```

Unless there is an explicit requirement to serialise in LE byte-order for FFI, the cast is clearer. If LE encoding is required, add a short comment to avoid future “optimisation”.
129-161: `new_with_stats` replicates `Peer` initialisation – consider a builder

The new constructor copies almost the entire struct literal from `new()`, increasing duplication risk. A typed builder pattern (or at least calling `Peer { ..Peer::new(...) }`) would:

- cut repetition,
- guarantee defaulting logic stays consistent,
- make future field additions compile-fail instead of silently defaulting.

Example sketch:

```rust
impl Peer {
    pub fn builder(public_key: CommsPublicKey, node_id: NodeId) -> PeerBuilder { … }
}
```

Or reuse `Peer::new` and override the few differing fields.

base_layer/core/src/test_helpers/mod.rs (1)
194-198: Centralise `create_peer_manager` helper

The same snippet appears in three different test helper modules. Extracting this into, for example, `tari_test_utils::peer_manager::create_memory_peer_manager()` eliminates duplication and keeps migration behaviour consistent.

comms/dht/tests/harness.rs (1)
99-103: `create_peer_storage` duplicates other helpers

This helper is functionally identical to ones introduced in other test crates. Please consolidate into a shared util to avoid divergence (migration list, connection flags, etc.) later.

base_layer/p2p/src/services/liveness/service.rs (2)
426-436: Import hygiene & test-only helpers

Good move to swap the in-memory LMDB helper for an in-memory SQLite DB. Nit: the test-module re-exports `PeerFeatures`, `PeerFlags`, … which are unused in this file after the refactor. Removing them avoids a clippy warning.
458-461: Avoid panicking in unit-tests – propagate the error

`PeerManager::new(peers_db, None).unwrap()` will SIGABRT if the migrations break – masking the root cause. Change to `expect("in-memory peer DB should always open")` or bubble the error with `Result`.

```diff
-Arc::new(PeerManager::new(peers_db, None).unwrap())
+Arc::new(PeerManager::new(peers_db, None)
+    .expect("in-memory peer DB must initialise in tests"))
```

base_layer/p2p/src/initialization.rs (2)
125-127: Error variant wording

`StorageError` already conveys its meaning; embedding the same term in the display string produces `StorageError StorageError(...)`. Use `#[error(transparent)]` or change the format string to avoid repetition.
317-324: Hard-coding extension hides possible double-extension bugs

For consistency with the fix above and user-configurable filenames, consider delegating the full file name (including extension) to `config.peer_database_name` instead of unconditionally appending `.db`.

comms/dht/examples/propagation/node.rs (1)
65-68: Consider using consistent file naming for the two SQLite databases

The peer database is created with the `.db` extension while the DHT database uses a hard-coded `dht.sqlite` name. For clarity and to avoid tooling confusion it would be more consistent to use the same extension (either `.sqlite` or `.db`) or at least document the rationale in a comment.

comms/dht/src/connectivity/mod.rs (3)
559-567: Avoid re-computing node distances while sorting

`sort_by_key` recomputes the distance for every comparison which is `O(n log n)` distance calculations. For small pools this is fine, but you can make it clearer and marginally faster by pre-computing once:

```diff
-let mut peers_by_distance = self.peer_manager.get_peers_by_node_ids(&peer_list)?;
-peers_by_distance.sort_by_key(|a| a.node_id.distance(self.node_identity.node_id()));
+let mut peers_by_distance = self
+    .peer_manager
+    .get_peers_by_node_ids(&peer_list)?
+    .into_iter()
+    .map(|p| {
+        let dist = p.node_id.distance(self.node_identity.node_id());
+        (dist, p)
+    })
+    .collect::<Vec<_>>();
+peers_by_distance.sort_by_key(|(dist, _)| *dist);
+let peers_by_distance: Vec<Peer> = peers_by_distance.into_iter().map(|(_, p)| p).collect();
```

Not critical, but documents intent and avoids redundant work.
951-987: Provide unit coverage for new `closest_n_active_peers` selection logic

The previous query builder was removed in favour of the new convenience call. Edge-cases worth exercising:

- `try_revive_connections = true` should still pick peers whose addresses have all failed.
- Distance cap logic when `minimize_connections` is toggled.
- Correct honouring of `offline_peer_cooldown` vs `STALE_PEER_THRESHOLD_DURATION`.

Adding targeted tests in `connectivity/test.rs` (similar to the excerpt in the snippets) will prevent silent regressions in future refactors.
867-895: `get_peer_multi_addresses` can be called once per peer

Currently two identical DB look-ups are performed for neighbours and random pools. A micro-optimisation is to fetch the addresses once and reuse them; this also shortens the critical section if the DB layer ever needs locks.

comms/core/src/test_utils/peer_manager.rs (1)
63-69: Simplify random string generation

The current approach (`iter::repeat(()).map(...)`) is unconventional and slightly confusing.

```diff
-use std::{iter, sync::Arc};
+use std::sync::Arc;
 use rand::{distributions::Alphanumeric, Rng};
 // ...
 pub fn random_name() -> String {
-    let mut rng = rand::thread_rng();
-    iter::repeat(())
-        .map(|_| rng.sample(Alphanumeric) as char)
-        .take(8)
-        .collect::<String>()
+    rand::thread_rng()
+        .sample_iter(&Alphanumeric)
+        .take(8)
+        .map(char::from)
+        .collect()
 }
```

Cleaner, idiomatic and avoids unnecessary allocation.
comms/core/src/peer_manager/peer_query.rs (1)
23-74: `PeerQuery` looks orphaned – consider deleting or re-integrating it

`PeerQuery` is still compiled but no longer has any executor or call-sites after the switch to the synchronous, SQL-backed `PeerManager`. Keeping this builder API around:

- Adds dead code that new contributors may try to reuse even though it silently does nothing.
- Creates maintenance overhead whenever `Peer` or `NodeId` change.
- Risks accumulating bit-rot and eventually failing CI with “unused” or “dead_code” lints.

If the long-term plan is to rely exclusively on SQL queries, please delete this file (or mark it `#[deprecated]` with a clear comment as to why). If you expect to revive in-memory queries later, add at least one integration test exercising `PeerQuery::new()` end-to-end so that CI catches accidental breakage.

comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql (1)
1-18: Primary key choice for `peers.peer_id` may forfeit SQLite row-id optimisations

`peer_id BIGINT PRIMARY KEY` is valid, but in SQLite `INTEGER PRIMARY KEY` (exact spelling) is treated specially and aliases the hidden `rowid`, giving:

- Slightly smaller on-disk footprint.
- Faster `INSERT` and `DELETE`.
- Automatic auto-increment behaviour (when desired).

If `peer_id` is never negative and fits in 64-bits, consider:

```diff
-peer_id BIGINT PRIMARY KEY,
+peer_id INTEGER PRIMARY KEY,
```

(or drop `peer_id` entirely and use the implicit `rowid`).

comms/core/src/peer_manager/peer_storage_sql.rs (1)
355-369: `update_each` loads all peers into memory – consider streaming

`update_each` first calls `self.peer_storage_sql.all()` which materialises every peer (addresses included) before applying the closure. On large networks this risks OOM and stalls. A more scalable pattern is to iterate in chunks/stream:

```diff
-let all_peers = self.peer_storage_sql.all(features)?;
-for peer in all_peers {
+for peer in self.peer_storage_sql.peer_db.iter_peers(features)? {
     if let Some(updated) = (f)(peer) {
         peers_to_update.push(updated);
     }
 }
```

Implementing `iter_peers` as a Diesel cursor/`LoadIter` avoids loading >GBs into RAM.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (104)
- applications/minotari_app_grpc/src/conversions/peer.rs (1 hunks)
- applications/minotari_console_wallet/src/automation/commands.rs (2 hunks)
- applications/minotari_console_wallet/src/grpc/wallet_grpc_server.rs (0 hunks)
- applications/minotari_console_wallet/src/init/mod.rs (1 hunks)
- applications/minotari_console_wallet/src/recovery.rs (0 hunks)
- applications/minotari_console_wallet/src/ui/state/app_state.rs (1 hunks)
- applications/minotari_node/src/commands/command/add_peer.rs (1 hunks)
- applications/minotari_node/src/commands/command/ban_peer.rs (1 hunks)
- applications/minotari_node/src/commands/command/get_peer.rs (1 hunks)
- applications/minotari_node/src/commands/command/list_connections.rs (0 hunks)
- applications/minotari_node/src/commands/command/list_peers.rs (2 hunks)
- applications/minotari_node/src/commands/command/mod.rs (2 hunks)
- applications/minotari_node/src/commands/command/reset_offline_peers.rs (2 hunks)
- applications/minotari_node/src/commands/command/test_peer_liveness.rs (2 hunks)
- applications/minotari_node/src/commands/command/unban_all_peers.rs (1 hunks)
- applications/minotari_node/src/grpc/base_node_grpc_server.rs (1 hunks)
- base_layer/common_types/src/tari_address/dual_address.rs (1 hunks)
- base_layer/contacts/src/chat_client/src/networking.rs (1 hunks)
- base_layer/core/src/base_node/state_machine_service/states/listening.rs (2 hunks)
- base_layer/core/src/base_node/sync/rpc/tests.rs (6 hunks)
- base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (1 hunks)
- base_layer/core/src/chain_storage/lmdb_db/lmdb_tree_writer.rs (4 hunks)
- base_layer/core/src/mempool/rpc/test.rs (7 hunks)
- base_layer/core/src/test_helpers/mod.rs (2 hunks)
- base_layer/core/src/validation/test.rs (1 hunks)
- base_layer/core/tests/helpers/sync.rs (1 hunks)
- base_layer/core/tests/tests/node_state_machine.rs (1 hunks)
- base_layer/p2p/src/config.rs (1 hunks)
- base_layer/p2p/src/initialization.rs (4 hunks)
- base_layer/p2p/src/services/liveness/service.rs (4 hunks)
- base_layer/wallet/src/storage/sqlite_db/wallet.rs (1 hunks)
- base_layer/wallet/src/wallet.rs (3 hunks)
- base_layer/wallet_ffi/src/error.rs (2 hunks)
- base_layer/wallet_ffi/src/lib.rs (5 hunks)
- common/config/presets/c_base_node_c.toml (1 hunks)
- common/config/presets/d_console_wallet.toml (1 hunks)
- common_sqlite/src/connection.rs (1 hunks)
- common_sqlite/src/error.rs (2 hunks)
- comms/core/Cargo.toml (3 hunks)
- comms/core/examples/stress/error.rs (2 hunks)
- comms/core/examples/stress/node.rs (2 hunks)
- comms/core/examples/stress/service.rs (1 hunks)
- comms/core/examples/tor.rs (3 hunks)
- comms/core/src/builder/mod.rs (3 hunks)
- comms/core/src/builder/tests.rs (4 hunks)
- comms/core/src/connection_manager/common.rs (3 hunks)
- comms/core/src/connection_manager/dialer.rs (1 hunks)
- comms/core/src/connection_manager/listener.rs (1 hunks)
- comms/core/src/connection_manager/manager.rs (1 hunks)
- comms/core/src/connection_manager/tests/listener_dialer.rs (1 hunks)
- comms/core/src/connection_manager/tests/manager.rs (2 hunks)
- comms/core/src/connectivity/manager.rs (10 hunks)
- comms/core/src/connectivity/test.rs (7 hunks)
- comms/core/src/net_address/multiaddr_with_stats.rs (1 hunks)
- comms/core/src/peer_manager/error.rs (4 hunks)
- comms/core/src/peer_manager/manager.rs (13 hunks)
- comms/core/src/peer_manager/mod.rs (2 hunks)
- comms/core/src/peer_manager/peer.rs (2 hunks)
- comms/core/src/peer_manager/peer_features.rs (2 hunks)
- comms/core/src/peer_manager/peer_id.rs (1 hunks)
- comms/core/src/peer_manager/peer_identity_claim.rs (1 hunks)
- comms/core/src/peer_manager/peer_query.rs (1 hunks)
- comms/core/src/peer_manager/peer_storage_sql.rs (18 hunks)
- comms/core/src/peer_manager/storage/database.rs (1 hunks)
- comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/down.sql (1 hunks)
- comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql (1 hunks)
- comms/core/src/peer_manager/storage/mod.rs (1 hunks)
- comms/core/src/peer_manager/storage/schema.rs (1 hunks)
- comms/core/src/peer_manager/wrapper.rs (0 hunks)
- comms/core/src/protocol/identity.rs (3 hunks)
- comms/core/src/protocol/messaging/test.rs (6 hunks)
- comms/core/src/protocol/rpc/context.rs (0 hunks)
- comms/core/src/protocol/rpc/test/comms_integration.rs (3 hunks)
- comms/core/src/protocol/rpc/test/smoke.rs (3 hunks)
- comms/core/src/test_utils/peer_manager.rs (1 hunks)
- comms/core/src/types.rs (2 hunks)
- comms/core/tests/tests/helpers.rs (1 hunks)
- comms/core/tests/tests/rpc.rs (7 hunks)
- comms/core/tests/tests/rpc_stress.rs (1 hunks)
- comms/core/tests/tests/substream_stress.rs (1 hunks)
- comms/dht/Cargo.toml (1 hunks)
- comms/dht/examples/memory_net/utilities.rs (4 hunks)
- comms/dht/examples/memorynet.rs (1 hunks)
- comms/dht/examples/propagation/node.rs (4 hunks)
- comms/dht/examples/propagation_stress.rs (1 hunks)
- comms/dht/src/actor.rs (9 hunks)
- comms/dht/src/connectivity/mod.rs (5 hunks)
- comms/dht/src/connectivity/test.rs (3 hunks)
- comms/dht/src/dht.rs (3 hunks)
- comms/dht/src/discovery/service.rs (2 hunks)
- comms/dht/src/inbound/deserialize.rs (1 hunks)
- comms/dht/src/inbound/dht_handler/task.rs (4 hunks)
- comms/dht/src/network_discovery/discovering.rs (3 hunks)
- comms/dht/src/network_discovery/on_connect.rs (1 hunks)
- comms/dht/src/network_discovery/ready.rs (5 hunks)
- comms/dht/src/network_discovery/test.rs (3 hunks)
- comms/dht/src/proto/mod.rs (4 hunks)
- comms/dht/src/rpc/peer_info.rs (3 hunks)
- comms/dht/src/rpc/service.rs (2 hunks)
- comms/dht/src/rpc/test.rs (6 hunks)
- comms/dht/src/test_utils/makers.rs (2 hunks)
- comms/dht/tests/attacks.rs (2 hunks)
- comms/dht/tests/dht.rs (9 hunks)
- comms/dht/tests/harness.rs (4 hunks)
💤 Files with no reviewable changes (5)
- applications/minotari_console_wallet/src/recovery.rs
- applications/minotari_node/src/commands/command/list_connections.rs
- applications/minotari_console_wallet/src/grpc/wallet_grpc_server.rs
- comms/core/src/protocol/rpc/context.rs
- comms/core/src/peer_manager/wrapper.rs
🧰 Additional context used
🧬 Code Graph Analysis (16)
comms/dht/examples/memorynet.rs (1)
- comms/core/src/peer_manager/manager.rs (1)
  - `count` (70-72)

comms/dht/src/network_discovery/ready.rs (1)
- comms/dht/src/network_discovery/discovering.rs (1)
  - `peer_manager` (285-287)

base_layer/core/src/chain_storage/lmdb_db/lmdb_tree_writer.rs (1)
- base_layer/core/src/chain_storage/lmdb_db/lmdb.rs (2)
  - `lmdb_delete` (169-182)
  - `lmdb_fetch_matching_after` (352-366)

comms/dht/src/test_utils/makers.rs (3)
- comms/dht/src/actor.rs (1)
  - `db_connection` (947-951)
- common_sqlite/src/connection.rs (1)
  - `connect_memory_and_migrate` (133-138)
- comms/core/src/peer_manager/manager.rs (1)
  - `new` (56-63)

comms/core/tests/tests/helpers.rs (2)
- comms/dht/src/actor.rs (1)
  - `db_connection` (947-951)
- common_sqlite/src/connection.rs (1)
  - `connect_memory_and_migrate` (133-138)

comms/dht/tests/attacks.rs (3)
- comms/core/src/peer_manager/manager.rs (1)
  - `exists` (143-145)
- comms/dht/tests/harness.rs (1)
  - `node_identity` (66-68)
- comms/core/src/peer_manager/node_identity.rs (1)
  - `public_key` (187-189)

comms/core/src/connectivity/test.rs (1)
- comms/dht/src/connectivity/test.rs (3)
  - `node_identities` (141-141)
  - `node_identities` (180-180)
  - `node_identities` (270-275)

comms/core/src/peer_manager/mod.rs (2)
- comms/core/src/peer_manager/peer_id.rs (1)
  - `generate_peer_id_as_i64` (34-36)
- comms/core/src/peer_manager/manager.rs (1)
  - `create_test_peer` (407-448)

comms/core/src/protocol/messaging/test.rs (6)
- comms/core/src/builder/tests.rs (2)
  - `iter` (66-69)
  - `random_name` (64-70)
- comms/core/src/peer_manager/manager.rs (4)
  - `iter` (465-468)
  - `random_name` (463-469)
  - `create_peer_manager` (471-475)
  - `new` (56-63)
- comms/core/src/test_utils/peer_manager.rs (2)
  - `iter` (65-68)
  - `random_name` (63-69)
- base_layer/core/src/test_helpers/mod.rs (1)
  - `create_peer_manager` (194-198)
- comms/dht/src/actor.rs (1)
  - `db_connection` (947-951)
- common_sqlite/src/connection.rs (1)
  - `connect_memory_and_migrate` (133-138)

base_layer/wallet/src/storage/sqlite_db/wallet.rs (1)
- comms/core/src/peer_manager/peer_features.rs (1)
  - `from_bits_u32_truncate` (70-72)

comms/core/src/protocol/rpc/test/comms_integration.rs (3)
- comms/dht/src/actor.rs (1)
  - `db_connection` (947-951)
- common_sqlite/src/connection.rs (1)
  - `connect_memory_and_migrate` (133-138)
- comms/core/src/peer_manager/storage/database.rs (1)
  - `peers_db` (2152-2157)

comms/dht/examples/memory_net/utilities.rs (2)
- comms/dht/tests/harness.rs (1)
  - `create_peer_storage` (99-103)
- common_sqlite/src/connection.rs (1)
  - `connect_memory_and_migrate` (133-138)

comms/dht/src/connectivity/mod.rs (1)
- comms/dht/src/connectivity/test.rs (1)
  - `peer_manager` (106-119)

comms/dht/tests/harness.rs (5)
- comms/core/src/peer_manager/node_identity.rs (1)
  - `random` (109-113)
- comms/dht/examples/memory_net/utilities.rs (2)
  - `create_peer_storage` (636-640)
  - `new` (500-532)
- common_sqlite/src/connection.rs (2)
  - `connect_memory_and_migrate` (133-138)
  - `new` (148-150)
- comms/core/src/peer_manager/peer.rs (1)
  - `new` (103-127)
- comms/dht/src/storage/database.rs (1)
  - `new` (44-46)

applications/minotari_node/src/commands/command/get_peer.rs (3)
- comms/dht/src/network_discovery/discovering.rs (1)
  - `peer_manager` (285-287)
- comms/core/src/builder/comms_node.rs (2)
  - `peer_manager` (247-249)
  - `peer_manager` (305-307)
- applications/minotari_app_utilities/src/utilities.rs (1)
  - `parse_emoji_id_or_public_key` (49-54)

comms/core/src/peer_manager/storage/schema.rs (2)
- comms/core/src/peer_manager/peer.rs (1)
  - `banned_until` (286-288)
- comms/core/src/net_address/multiaddr_with_stats.rs (8)
  - `connection_attempts` (307-309)
  - `avg_initial_dial_time` (311-313)
  - `initial_dial_time_sample_count` (315-317)
  - `avg_latency` (319-321)
  - `latency_sample_count` (323-325)
  - `last_failed_reason` (331-333)
  - `quality_score` (335-337)
  - `source` (299-301)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: cargo check with stable
- GitHub Check: ci
- GitHub Check: Cucumber tests / FFI
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (146)
comms/dht/Cargo.toml (1)

14-14: Approved: add `tari_common_sqlite` workspace dependency

The new SQLite-based common crate is correctly introduced alongside the other workspace crates, in line with the overall migration from LMDB to SQLite for peer storage.
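For context, the change amounts to one new dependency line. The form below is illustrative only; the actual entry may pin a path or version rather than inheriting from the workspace table:

```toml
# comms/dht/Cargo.toml — illustrative; the real declaration may differ
[dependencies]
tari_common_sqlite = { workspace = true }
```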
applications/minotari_node/src/commands/command/add_peer.rs (1)

63-63: API call updated to use the synchronous peer manager method

The removal of `.await` reflects the migration from async LMDB-based peer storage to the synchronous SQLite-based implementation. This change is consistent with the broader refactoring of the peer manager API.

comms/dht/examples/propagation_stress.rs (1)

84-84: Path handling adapted for the SQLite database

Changed from passing a reference to the temporary directory handle to passing a reference to the owned path, which is required by the new SQLite-based implementation.

base_layer/p2p/src/config.rs (1)

123-123: Documentation updated to reflect the SQLite implementation

The documentation comment correctly references SQLite data files instead of LMDB data files, maintaining consistency with the implementation change.

base_layer/core/tests/tests/node_state_machine.rs (1)

311-311: Test updated to use the synchronous peer manager API

Removed `.await` from the peer manager's `add_peer` call to match the new synchronous API. This keeps the test compatible with the SQLite-based peer storage implementation.

applications/minotari_node/src/commands/command/ban_peer.rs (1)

90-90: API change: converted to a synchronous call

The `unban_peer` method call is now synchronous (removing `.await`), which aligns with the PR's migration from asynchronous LMDB-based peer storage to the synchronous SQLite-backed implementation.

comms/core/src/peer_manager/storage/mod.rs (1)

23-24: Module structure looks good

This new module file properly establishes the structure for the SQLite-based peer database implementation, exposing the necessary submodules for database operations and schema definitions.

applications/minotari_app_grpc/src/conversions/peer.rs (1)

53-53: Type conversion improvement

Adding `.into()` ensures proper type conversion when transforming the peer feature bits to the type expected by the gRPC message. This change supports the migration from 32-bit to 8-bit feature flags mentioned in the PR summary.
comms/core/tests/tests/rpc_stress.rs (1)

95-95: API change: converted to a synchronous call

The `add_peer` method call is now synchronous (removing `.await`), consistent with the PR's conversion of peer manager methods from asynchronous to synchronous and with the broader refactoring of the peer storage backend.

base_layer/core/src/validation/test.rs (1)

232-232: Clean code improvement: simplified type conversion

Converting directly with `.into()` instead of `try_into().expect()` eliminates an unnecessary panic path while maintaining the same functionality. This makes the code more readable and aligns with the broader refactoring pattern.

comms/dht/examples/memorynet.rs (1)

202-202: Properly updated to use the synchronous peer manager API

This change correctly converts the previously asynchronous `count()` call to the new synchronous version, aligning with the database backend change from LMDB to SQLite. The implementation is now more direct, without requiring `await`.

comms/dht/src/dht.rs (3)

421-421: Correctly updated test code to use the synchronous peer manager API

This change properly converts the previously asynchronous `add_peer()` call to the new synchronous version, with error handling via `unwrap()`. This is consistent with the SQLite implementation replacing the previous async LMDB implementation.

474-474: Correctly updated test code to use the synchronous peer manager API

Properly updated from an asynchronous to a synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.

528-528: Correctly updated test code to use the synchronous peer manager API

Properly updated from an asynchronous to a synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.
comms/dht/src/network_discovery/test.rs (3)

70-70: Correctly updated test setup to use the synchronous peer manager API

This change properly converts the previously asynchronous `add_peer()` call to the new synchronous version, with error handling via `unwrap()`. This is consistent with the SQLite implementation replacing the previous async LMDB implementation.

131-131: Correctly updated test code to use the synchronous peer manager API

Properly updated from an asynchronous to a synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.

212-212: Correctly updated test code to use the synchronous peer manager API

Properly updated from an asynchronous to a synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.

base_layer/common_types/src/tari_address/dual_address.rs (2)

640-644: Improved error messages in test panic

The updated panic message now states explicitly why validation failed, including the actual Base58 length and the expected maximum size. This makes debugging test failures much easier.
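The pattern being praised can be shown with a minimal, hypothetical check; the function name and the limit below are illustrative, not the actual `dual_address` test code:

```rust
// Hypothetical sketch of a descriptive length check, not the actual test code.
fn check_base58_len(b58: &str, max_len: usize) {
    assert!(
        b58.len() <= max_len,
        "Base58 string too long: got {} characters, expected at most {}",
        b58.len(),
        max_len
    );
}

fn main() {
    // Within bounds: no panic.
    check_base58_len("2q3r", 10);
    println!("ok");
}
```

On failure the message carries both the measured and the expected value, so a CI log alone is enough to diagnose the regression.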
647-651: Improved error messages in test panic

As with the "too long" case, this panic message now provides clear information about the actual Base58 length and the expected minimum size, improving test diagnostics.

comms/core/src/connection_manager/listener.rs (1)

439-439: Removed `.await` from the `add_peer` call

This change aligns with the peer database migration from LMDB to SQLite, converting the peer manager API from asynchronous to synchronous. The `add_peer` method now returns a direct result rather than a future that needs to be awaited.

comms/core/src/connection_manager/manager.rs (1)

504-504: Converted `find_by_node_id` from async to sync

This change is part of the peer manager API refactoring from asynchronous to synchronous operations, removing the need to await the result. It matches the broader architectural change from LMDB (async) to SQLite (sync) for peer storage.

comms/core/src/protocol/identity.rs (3)

70-70: Explicit feature bit conversion to `u32`

Added an explicit conversion from the peer feature bits to `u32`. This supports the backend refactor in which peer features are now stored as `u8` internally but converted to `u32` for external representation in protocol messages.

268-268: Updated test assertion for feature bits

The test assertion now explicitly converts peer features to `u32` when comparing, matching the implementation change in the identity message construction.

278-278: Updated test assertion for feature bits

As with the previous change, this test assertion now explicitly converts peer features to `u32`, ensuring consistency with the implementation.
applications/minotari_node/src/commands/command/mod.rs (2)

67-76: Removed the `PeerQuery` import in response to the peer manager API refactoring

The import list correctly drops `PeerQuery`, which is no longer needed now that the peer manager has been refactored from an asynchronous LMDB-based implementation to a synchronous SQLite-based one.

320-322: Simplified peer retrieval by using a direct synchronous method

The implementation now uses the new synchronous peer manager API, eliminating the query builder pattern and async/await syntax and resulting in simpler, more direct code.

comms/dht/src/rpc/test.rs (6)

87-87: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`. This aligns with the broader refactoring of the peer storage system from LMDB to SQLite.

130-130: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`, consistent with the peer storage backend migration to SQLite.

159-159: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`, as part of the system-wide peer manager refactoring.

230-230: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`, consistent with the other changes to peer manager usage throughout the codebase.

268-268: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`, following the same pattern as the other changes in this file.

306-306: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`. This keeps the test compatible with the refactored peer manager implementation.

base_layer/wallet_ffi/src/error.rs (2)
29-29: Updated import to include `PeerManagerError`

The import has been correctly modified to include `PeerManagerError` from the `peer_manager` module, which is needed for the new error conversion implementation.

549-557: Added a `PeerManagerError` conversion to support the SQLite peer database

This new implementation of `From<PeerManagerError>` for `LibWalletError` properly handles error propagation from the refactored peer manager to the wallet FFI layer, ensuring that errors from the new SQLite-based peer storage are communicated back to FFI clients. The implementation logs the error and creates a `LibWalletError` with code 459, which is appropriate for this category of error.

comms/core/src/protocol/rpc/test/smoke.rs (3)

152-152: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`. This keeps the test compatible with the refactored peer manager, which now uses a synchronous SQLite-based backend.

502-502: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`, consistent with the new peer manager implementation pattern.

551-551: Switched to the synchronous peer manager API

The peer manager call has been correctly converted from asynchronous to synchronous, removing the `.await`, in line with the broader peer storage refactoring from LMDB to SQLite.

comms/core/examples/stress/service.rs (1)
227-227: Method signature updated to match the synchronous peer manager API

The `.await` has been removed from the `add_peer` method call, consistent with the migration from asynchronous LMDB-based peer storage to the synchronous SQLite-based implementation described in the PR objectives.

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (1)

1851-1851: Passing `WriteTransaction` by value instead of by reference

The transaction value is now passed directly to `LmdbTreeWriter::new` instead of as a reference, consistent with the similar changes to the LMDB helper functions mentioned in the AI summary.

comms/dht/tests/dht.rs (8)

75-75: Methods updated to match the synchronous peer manager API

The `.await` calls have been removed from the `exists` method calls on the peer manager, in line with the migration to synchronous SQLite-based peer storage.

Also applies to: 81-81

139-139: Method updated to match the synchronous peer manager API

The `.await` has been removed from the `add_peer` method call, consistent with the conversion of peer manager methods from asynchronous to synchronous operations.

168-172: Methods updated to match the synchronous peer manager API

Multiple `.await` calls have been removed from the `exists` method calls, reflecting the broader refactoring to synchronous peer manager methods across the codebase.

216-216: Method updated to match the synchronous peer manager API

The `.await` has been removed from the `add_peer` method call on the node_D peer manager, matching the new synchronous API.

245-249: Methods updated to match the synchronous peer manager API

Multiple `.await` calls have been removed from the `exists` method calls across different peer managers, consistent with the conversion to synchronous operations.

286-286: Method updated to match the synchronous peer manager API

The `.await` has been removed from the `add_peer` method call, in line with the conversion to a synchronous SQLite-backed peer manager implementation.

583-585: Methods updated to match the synchronous peer manager API

Multiple `.await` calls have been removed from consecutive `add_peer` method calls across different peer managers, reflecting the consistent migration to synchronous operations.

691-691: Methods updated to match the synchronous peer manager API

The `.await` has been removed from the `add_peer` method calls, keeping all usages in the test file consistent with the new synchronous peer manager interface.

Also applies to: 796-796
comms/core/examples/stress/error.rs (1)

24-24: Added SQLite storage error handling

A new `StorageError` variant has been added to the `Error` enum, with the corresponding type imported from the `tari_common_sqlite` module. This supports error propagation from the new SQLite-based peer storage backend, ensuring that storage errors are properly captured and handled in the stress test example.

Also applies to: 74-75
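The wrap-and-convert shape described above is the usual Rust idiom for adding an error source. A self-contained sketch, with a stand-in `StorageError` type (the real one lives in `tari_common_sqlite`, and the variant names here are illustrative):

```rust
use std::fmt;

// Stand-in for tari_common_sqlite's error type; illustrative only.
#[derive(Debug)]
struct StorageError(String);

#[derive(Debug)]
enum Error {
    Timeout,
    // New variant wrapping the SQLite storage error.
    StorageError(StorageError),
}

// `From` lets `?` propagate storage failures into the example's Error type.
impl From<StorageError> for Error {
    fn from(e: StorageError) -> Self {
        Error::StorageError(e)
    }
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Timeout => write!(f, "timed out"),
            Error::StorageError(e) => write!(f, "storage error: {}", e.0),
        }
    }
}

fn main() {
    let err: Error = StorageError("disk full".into()).into();
    println!("{}", err); // prints "storage error: disk full"
}
```

With the `From` impl in place, functions returning `Result<_, Error>` can use `?` directly on storage calls.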
applications/minotari_node/src/commands/command/get_peer.rs (2)

67-67: Method signature changed from async to sync

The `.await` keyword has been removed from the call to `find_all_starts_with`, reflecting the peer manager's migration from an asynchronous to a synchronous API as part of the switch from LMDB to SQLite.

73-75: Method signature changed from async to sync

The `.await` keyword has been removed from the call to `find_by_public_key`, consistent with the peer manager's API migration to synchronous operations on the new SQLite backend.

comms/core/src/connection_manager/tests/listener_dialer.rs (1)

213-213: Method signature changed from async to sync

The `.await` keyword has been removed from the call to `add_peer` in this test, correctly updating the test code to the new synchronous peer manager API.

comms/core/src/peer_manager/peer_identity_claim.rs (1)

70-72: Updated to handle the reduced feature bits size

Changed from `PeerFeatures::from_bits` to `PeerFeatures::from_bits_u32_truncate` to accommodate the migration from 32-bit to 8-bit feature flags. The method appropriately truncates the higher bits while preserving the existing error handling for invalid bits.

comms/dht/src/rpc/peer_info.rs (3)

77-78: Updated to handle the reduced feature bits size

Changed to `PeerFeatures::from_bits_u32_truncate` instead of `from_bits` to properly handle the migration from 32-bit to 8-bit feature flags in the `DiscoveryMessage` conversion implementation.

108-109: Updated to handle the reduced feature bits size

Changed to `PeerFeatures::from_bits_u32_truncate` instead of `from_bits` to properly handle the migration from 32-bit to 8-bit feature flags in the `DiscoveryResponseMessage` conversion implementation.

140-141: Updated to handle the reduced feature bits size

Changed to `PeerFeatures::from_bits_u32_truncate` instead of `from_bits` to properly handle the migration from 32-bit to 8-bit feature flags in the `JoinMessage` conversion implementation.

comms/core/tests/tests/rpc.rs (2)
101-101: Properly adjusted from an async to a sync call

The `add_peer` method has been converted from asynchronous to synchronous, correctly removing the `.await` keyword while keeping the error handling with `unwrap()`.

166-166: Multiple peer manager calls properly converted to the synchronous pattern

All instances of `add_peer()` have been consistently converted from asynchronous to synchronous calls by removing the `.await` keyword, in line with the new SQLite-based peer storage implementation.

Also applies to: 215-215, 271-271, 336-336, 409-409, 468-468

comms/dht/src/inbound/dht_handler/task.rs (2)

219-219: Peer manager methods correctly converted to synchronous

Multiple peer manager methods (`find_by_public_key`, `add_peer`, `find_by_node_id`) have been properly converted from asynchronous to synchronous calls by removing the `.await` keyword. The error handling remains appropriate for the synchronous implementation.

Also applies to: 233-233, 400-400, 402-402, 403-403

435-435: Added type conversion for peer features

Added an `.into()` conversion for the peer feature bits to ensure proper type conversion in the message structure. This is part of standardizing feature bit handling across the codebase, where the new implementation uses a different bit width for features.

comms/dht/src/connectivity/test.rs (3)
28-28: Added necessary constant import

Added an import for `STALE_PEER_THRESHOLD_DURATION`, which is now required by the updated peer querying method.

62-62: Peer manager `add_peer` call correctly converted to synchronous

The `add_peer` method has been properly converted from asynchronous to synchronous by removing the `.await` keyword while keeping the error handling with `unwrap()`.

107-115: Replaced the peer querying method with an enhanced version

The method used to retrieve the closest peers has been updated from the async `closest_peers` to the synchronous `closest_n_active_peers`, with additional parameters for filtering:

- Added filtering by peer features (`PeerFeatures::COMMUNICATION_NODE`)
- Added a staleness threshold (`STALE_PEER_THRESHOLD_DURATION`)
- Added a flag for active peers (`true`)
- Added optional parameters for enhanced querying

This is a significant API improvement: it consolidates multiple filtering concerns into a single method call.
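The consolidated query can be pictured as one filter-sort-truncate pass. The sketch below models that shape with plain structs; the field names, feature mask, and distance metric are all illustrative stand-ins, not the real `PeerManager` API:

```rust
use std::time::{Duration, Instant};

// Illustrative peer record; the real Peer type is far richer.
struct Peer {
    features: u8,
    last_seen: Instant,
    banned: bool,
    distance: u64, // stand-in for the XOR distance to the subject node
}

const COMMUNICATION_NODE: u8 = 0b0000_0001;

// Models the shape of closest_n_active_peers: filter by feature bits,
// staleness, and ban status, then take the n closest by distance.
fn closest_n_active_peers(
    peers: &[Peer],
    n: usize,
    required_features: u8,
    stale_threshold: Duration,
) -> Vec<u64> {
    let now = Instant::now();
    let mut candidates: Vec<&Peer> = peers
        .iter()
        .filter(|p| p.features & required_features == required_features)
        .filter(|p| now.duration_since(p.last_seen) < stale_threshold)
        .filter(|p| !p.banned)
        .collect();
    candidates.sort_by_key(|p| p.distance);
    candidates.into_iter().take(n).map(|p| p.distance).collect()
}

fn main() {
    let now = Instant::now();
    let peers = vec![
        Peer { features: COMMUNICATION_NODE, last_seen: now, banned: false, distance: 7 },
        Peer { features: COMMUNICATION_NODE, last_seen: now, banned: true, distance: 1 },
        Peer { features: 0, last_seen: now, banned: false, distance: 2 },
        Peer { features: COMMUNICATION_NODE, last_seen: now, banned: false, distance: 3 },
    ];
    let closest = closest_n_active_peers(&peers, 2, COMMUNICATION_NODE, Duration::from_secs(3600));
    println!("{:?}", closest); // the two closest eligible peers: [3, 7]
}
```

Folding the feature, staleness, and ban filters into the query itself avoids fetching the whole peer list and re-filtering at every call site.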
comms/dht/src/network_discovery/on_connect.rs (1)

189-189: Peer manager calls correctly converted to synchronous

The peer manager methods `find_by_public_key` and `add_peer` have been properly converted from asynchronous to synchronous calls by removing the `.await` keyword. Error handling remains consistent with the rest of the codebase.

Also applies to: 192-192
comms/dht/src/discovery/service.rs (3)

266-266: Changed from an async to a sync method call

This change is part of the broader refactoring in which peer manager methods were converted from asynchronous to synchronous during the migration from LMDB to SQLite-based storage.

270-270: Changed from an async to a sync method call

This change correctly removes the `.await` on the `add_peer` method call, consistent with the peer manager API now being synchronous rather than asynchronous.

350-350: Added type conversion for peer features

✅ Verification successful

The addition of `.into()` handles the conversion of the peer feature bits, likely related to the reduction of the peer features bitflags from 32 bits to 8 bits as mentioned in the PR summary.

🏁 Script executed:

```shell
#!/bin/bash
# Check how peer features are handled elsewhere in the codebase to confirm consistency
echo "Checking feature bits conversion in other files:"
rg "features\(\).bits\(\)" --type rust
```

Length of output: 896

Confirmed consistent type conversion for peer features

The use of `.into()` on `features().bits()` here matches the other DHT modules and tests, ensuring the bitflags (now 8 bits) map correctly to the message field:

- comms/dht/src/proto/mod.rs
- comms/dht/src/inbound/dht_handler/task.rs
- comms/dht/tests/attacks.rs

The core identity protocol still uses `u32::from(node_identity.features().bits())`, so this change is fully in line with existing patterns. No further action required.
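The two conversions in play can be sketched with plain bit operations; the free functions below are hypothetical stand-ins for the `bitflags`-backed `PeerFeatures` methods:

```rust
// Peer features are stored as a u8; protocol messages carry a u32.
// Hypothetical stand-ins for PeerFeatures::from_bits_u32_truncate and .bits().
fn from_bits_u32_truncate(wire_bits: u32) -> u8 {
    // Mask to the lowest 8 bits, discarding any legacy high bits.
    (wire_bits & 0xFF) as u8
}

fn to_wire_bits(stored: u8) -> u32 {
    // Widening is lossless, matching `u32::from(features.bits())`.
    u32::from(stored)
}

fn main() {
    // A legacy 32-bit value with stray high bits is truncated to 8 bits...
    let stored = from_bits_u32_truncate(0x0000_0103);
    assert_eq!(stored, 0x03);
    // ...and widens losslessly back onto the wire.
    assert_eq!(to_wire_bits(stored), 0x0000_0003);
    println!("ok");
}
```

Truncation only ever happens on the inbound (wire-to-storage) path; the outbound widening can never lose information.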
771-771: Updated method signature to support optional feature filtering.The peer manager's
all()method now takes an optional parameter to filter peers by features. TheNonevalue here means "return all peers regardless of features." This change aligns with the removal of the peer query system, replacing it with direct filtering methods.common_sqlite/src/connection.rs (1)
132-138: Nice convenience helper – LGTMThe helper neatly reduces boilerplate in tests and one-off in-memory usages.
No correctness concerns spotted.comms/core/examples/stress/node.rs (1)
23-29: Migration to SQLite peer database looks clean and well-structured.The changes have successfully migrated from the LMDB-based peer database to SQLite. The code now properly creates a SQLite database connection, applies migrations, and initializes a
PeerDatabaseSqlinstance.Nice touch using the original database path with
.dbextension to maintain file location consistency.Also applies to: 32-33, 37-38, 73-75
comms/core/src/builder/mod.rs (2)

74-77: Good update to example code reflecting the SQLite implementation

The example code in the documentation has been appropriately updated to use SQLite instead of LMDB, keeping documentation and implementation consistent.

Also applies to: 90-91

315-316: Correctly simplified `PeerManager` initialization

The `PeerManager::new` call has been simplified to remove the minimum peer version parameter, aligning with the synchronous SQLite-based implementation. The streamlined API makes the code more maintainable.

base_layer/core/src/chain_storage/lmdb_db/lmdb_tree_writer.rs (5)

57-58: Correct removal of an unnecessary reference

The function signature for `lmdb_delete_keys_starting_with` expects the transaction by value, not by reference, so this change is appropriate.

59-60: Correct removal of an unnecessary reference

Same improvement as above, matching the `lmdb_delete_keys_starting_with` function's expected parameter type.

70-77: Correct removal of an unnecessary reference

The `lmdb_delete` function expects the transaction by value rather than by reference.

102-103: Correct removal of an unnecessary reference

The `lmdb_fetch_matching_after` function expects the transaction parameter by value, so this change is appropriate.

111-111: Improved code readability with an idiomatic Rust pattern

Changed from `.last().map(...).flatten()` to the more concise and idiomatic `.last().and_then(...)`, which performs the same operation more elegantly.

comms/dht/src/inbound/deserialize.rs (1)
153-153: Correctly updated to the synchronous `PeerManager` API

Removed the `.await` call, since the peer manager methods are now synchronous in the SQLite implementation. This is consistent with the overall migration from asynchronous LMDB operations to synchronous SQLite operations.

comms/core/src/types.rs (2)

35-35: Updated import to use the new SQLite peer database implementation

The import now uses `PeerDatabaseSql` directly from the `peer_manager::database` module instead of the previous `Peer`/`PeerId` imports, reflecting the architectural shift from LMDB to SQLite for peer storage.

55-55: Type alias now unconditionally uses the SQLite database implementation

The `CommsDatabase` type alias now unconditionally refers to `PeerDatabaseSql` instead of conditionally selecting between `HashmapDatabase` for testing and `LMDBWrapper` for production. This simplifies the codebase by using a consistent database implementation across all environments.

applications/minotari_console_wallet/src/automation/commands.rs (1)

2605-2606: Removed the `.await` call for the synchronous peer manager method

The `.await` call has been removed from `get_seed_peers()` as part of converting the peer manager methods from asynchronous to synchronous. This simplifies the code by eliminating unnecessary future handling while maintaining the same functionality.

comms/dht/tests/attacks.rs (2)

105-105: Removed the `.await` call for the synchronous peer manager `exists` method

The `.await` call has been removed when checking whether a peer exists, as the peer manager's `exists` method is now synchronous. This is part of the broader refactoring of the peer management system to use synchronous SQLite operations.

144-144: Added an explicit conversion for peer features

The peer features are now explicitly converted using `.into()` to ensure compatibility with the new storage format. This reflects the reduction of the peer features bitflags from 32 bits to 8 bits, which requires a proper type conversion.

comms/core/src/connection_manager/tests/manager.rs (2)

154-155: Removed the `.await` call for the synchronous peer manager `find_by_node_id` method

The `.await` call has been removed from `find_by_node_id()` as part of converting peer manager methods from asynchronous to synchronous. This simplifies the test code while maintaining the same functionality.

406-406: Removed the `.await` call for the synchronous peer manager `add_peer` method

The `.await` call has been removed from `add_peer()` as this method is now synchronous. This is consistent with the broader refactoring of the peer management system to use synchronous SQLite operations instead of asynchronous LMDB operations.

comms/core/examples/tor.rs (4)
18-19: New dependency added for SQLite support

This import brings in the SQLite connection-handling capabilities needed for the new peer database implementation.

23-29: Updated peer manager imports to use the new SQLite database components

The imports now specifically include `database::{PeerDatabaseSql, MIGRATIONS}`, which are part of the new SQLite implementation, replacing the previous LMDB-based imports.

164-166: Migrated peer storage from LMDB to SQLite

This code replaces the previous LMDB datastore creation with:

- creating a SQLite database URL
- connecting to the database and applying migrations
- creating a `PeerDatabaseSql` instance

This implementation properly handles database creation and schema migration, which is necessary for the SQLite storage backend.
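The path step in isolation: reusing the old datastore location with a `.db` extension keeps the file where operators expect it. A sketch with `std::path` (the function name and paths are illustrative; the connect/migrate calls depend on the SQLite crate in use and are omitted):

```rust
use std::path::{Path, PathBuf};

// Illustrative only: derive the SQLite file path from the former datastore
// path by swapping in a `.db` extension, so the file stays in the same place.
fn peer_db_path(old_datastore: &Path) -> PathBuf {
    old_datastore.with_extension("db")
}

fn main() {
    let path = peer_db_path(Path::new("/tmp/tor_example/peer_db"));
    println!("{}", path.display()); // prints "/tmp/tor_example/peer_db.db"
}
```

Note that `with_extension` replaces any existing extension rather than appending, which is the desired behaviour when converting a legacy datastore name.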
110-122: Peer creation syntax is unchanged but formatting is improved

The `add_peer` call's parameters have been reformatted with better vertical alignment, but the functionality remains the same.

base_layer/wallet/src/wallet.rs (3)

419-419: Removed the `.await` call to match the new synchronous peer manager API

This change reflects the migration from an asynchronous to a synchronous peer manager implementation. The `find_by_public_key` method now returns a direct result rather than a future.

432-432: Converted the async `add_peer` call to synchronous

Consistent with the new peer manager implementation, the `.await` has been removed as the method now operates synchronously.

465-465: Converted another async `add_peer` call to synchronous

Another instance of removing `.await` from the `add_peer` method call, maintaining consistency with the synchronous peer manager implementation.

comms/core/src/connection_manager/dialer.rs (1)

276-281: Converted an async peer manager call to synchronous

This change adjusts the error handling for the now-synchronous `add_peer` method. The code now maps the error result directly rather than awaiting a future, while keeping the same error handling logic.

base_layer/wallet/src/storage/sqlite_db/wallet.rs (1)

307-307: Updated peer features parsing to properly handle bit truncation

Changed from `PeerFeatures::from_bits` to `PeerFeatures::from_bits_u32_truncate`, which safely handles the conversion from a 32-bit to an 8-bit representation by masking to the lowest 8 bits before creating the `PeerFeatures` instance. This aligns with the reduction of the peer feature bitflags from 32 bits to 8 bits elsewhere in the codebase.

base_layer/core/src/mempool/rpc/test.rs (8)
33-35: Function signature updated for SQLite peer storage

The `setup` function has been simplified to no longer return a `TempDir`, and the `create_peer_manager()` function is now called without directory arguments. This aligns with the migration from LMDB (which required filesystem paths) to a SQLite in-memory database for peer storage.

38-39: Return type updated for simplified peer manager

The return tuple now correctly omits the temporary directory, consistent with the SQLite-based peer manager implementation that doesn't require filesystem resources.

47-48, 71-72, 101-102, 123-124, 147-148, 174-175: Test functions updated to use simplified setup

In each of these tests, the destructuring pattern has been updated to match the new return type of the `setup` function, removing the temporary directory variable.

applications/minotari_node/src/commands/command/reset_offline_peers.rs (2)
26-27: Added PeerFeatures import for filtering

The `PeerFeatures` import is now required to filter peers by their features in the `update_each` method.

43-49: Updated to use synchronous peer manager API with feature filtering

The peer manager's `update_each` method is now called synchronously (without `.await`) and includes a filter to target only peers with the `COMMUNICATION_NODE` feature. This change:

- Aligns with the migration from asynchronous LMDB to synchronous SQLite-based peer storage
- Improves efficiency by only resetting connection attempts for communication nodes
- Uses the new API structure that supports feature-based filtering

The implementation correctly handles the peer update logic by:

- Passing a mutable closure that resets connection attempts
- Applying the appropriate feature filter
- Capturing the number of updated peers for user feedback
comms/core/Cargo.toml (4)
13-14: Added SQLite workspace dependency

Added the `tari_common_sqlite` workspace dependency to support the new SQLite-based peer database implementation.

31-37: Added Diesel ORM with SQLite support

Added the Diesel ORM with necessary features:

- `sqlite`: Core SQLite driver support
- `serde_json`: JSON serialization/deserialization support
- `chrono`: DateTime type support
- `numeric`: Numeric type support

These dependencies are required for implementing the SQLite-based peer storage with proper type handling.

38-38: Added Diesel migrations support

Added the `diesel_migrations` crate to support SQL schema migrations for the peer database, which is essential for managing schema changes over time.

50-51: Moved serde_json to main dependencies

Moved `serde_json` from dev-dependencies to main dependencies as it's now required for the SQLite implementation with Diesel ORM.

base_layer/core/tests/helpers/sync.rs (1)
359-360: Updated to use synchronous peer manager API

The `.await` has been removed from the `is_peer_banned` call, reflecting that the peer manager now provides a synchronous API. This change is consistent with the migration from asynchronous LMDB-based to synchronous SQLite-based peer storage.

comms/core/src/connectivity/test.rs (7)
93-103: Successfully refactored from async to sync peer manager calls.

The function has been properly converted from an async function to a synchronous one, aligning with the broader architectural change from LMDB to SQLite for peer storage. The removal of `.await` from the `add_peer` call reflects the new synchronous peer manager API.

109-110, 148-152, 280-281, 324-325, 387-388: Consistent synchronous approach applied correctly.

The calls to `add_test_peers` (and the direct `add_peer` calls at 148-152) have been correctly refactored to use the synchronous API without `.await`.

309-310: Properly converted from async to sync peer lookup.

The `find_by_node_id` call has been correctly refactored to use the synchronous API without `.await`.

comms/core/src/builder/tests.rs (5)
23-28: Added necessary imports for SQLite and random name generation.

The changes correctly add imports for `rand` and `tari_common_sqlite`, which are needed for the new SQLite-based peer storage implementation.

46-50: Updated peer_manager imports to include database-related types.

The imports have been correctly updated to include the `database` module with `PeerDatabaseSql` and `MIGRATIONS`, which are needed for the SQLite implementation.

64-70: Well-implemented random name generation function.

The `random_name` function provides a clean way to generate random database names for SQLite in-memory connections, which is necessary for isolated test databases.
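As an illustration of the isolation idea, a std-only stand-in for such a helper could look like the sketch below (the real helper presumably uses the `rand` crate; this version is hypothetical and seeds a simple generator from the clock instead):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical std-only stand-in for a `random_name` test helper: derives
// 8 lowercase-alphanumeric characters from a time-seeded linear
// congruential generator, enough to keep in-memory test databases apart.
fn random_name() -> String {
    const CHARS: &[u8] = b"abcdefghijklmnopqrstuvwxyz0123456789";
    let mut seed = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as u64
        | 1; // never start from zero
    (0..8)
        .map(|_| {
            // one LCG step per character (Knuth's MMIX constants)
            seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            CHARS[(seed >> 33) as usize % CHARS.len()] as char
        })
        .collect()
}
```

Uniqueness here is best-effort, which is all test-database naming needs; a collision merely means two tests share one in-memory database.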
90-92: Properly implemented SQLite database initialization.

The code correctly creates an in-memory SQLite database with migrations applied, replacing the previous in-memory hashmap approach.

100-101: Successfully updated peer storage configuration.

The call to `.with_peer_storage` now correctly uses the SQLite-based peer database and passes `None` for the file lock, aligning with the new storage approach.

common_sqlite/src/error.rs (3)
25-27: Updated imports to support hex conversion errors.

The import statement has been correctly updated to include `HexError` from `tari_utilities`, which is needed for the new error handling capabilities.

66-73: Enhanced error handling with new SQLite-specific error variants.

Added appropriate new error variants to support SQLite operations:

- `HexError` for hex encoding/decoding issues
- `JsonError` for JSON conversion problems
- `TryFromIntError` for integer conversion failures
- `PeerNotFoundError` for when a requested peer doesn't exist

These additions are essential for robust error handling in the SQLite storage layer.

82-86: Added proper conversion from HexError to StorageError.

The implementation of `From<HexError>` for `StorageError` enables clean error propagation when hex conversion fails during database operations.

comms/dht/src/test_utils/makers.rs (2)
25-43: Updated imports to support SQLite database implementation.

The imports have been correctly updated to include SQLite connection types and remove unused LMDB-related dependencies. The peer manager imports have been reorganized for better readability.

241-245: Successfully refactored peer manager creation to use SQLite.

The `build_peer_manager` function now correctly:

- Creates an in-memory SQLite database with a random name
- Applies the required migrations
- Initializes the database with `PeerDatabaseSql`
- Creates a `PeerManager` with the SQLite database

This change aligns perfectly with the PR objective of replacing LMDB with SQLite for peer database storage.
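For reference, a named shared-cache in-memory URI is the usual way to let several SQLite connections in one process see the same test database. Whether `tari_common_sqlite`'s `DbConnection` uses exactly this URI shape is an assumption; the sketch only shows the general SQLite convention:

```rust
// Sketch of a named in-memory SQLite URI with shared cache. The exact URI
// shape used by tari_common_sqlite's DbConnection is an assumption here.
fn memory_db_url(name: &str) -> String {
    format!("file:{name}?mode=memory&cache=shared")
}
```

Pairing this with a random name per test gives each test its own database while still allowing multiple connections within that test to share it.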
comms/core/tests/tests/helpers.rs (2)
26-37: Updated imports for SQLite database integration

The imports have been properly updated to include SQLite-related dependencies required for the new database implementation. The change aligns with the PR objective of migrating from LMDB to SQLite for peer storage.

40-43: Implementation of in-memory SQLite database for tests

The `create_peer_storage` function has been correctly refactored to use an in-memory SQLite database instead of LMDB. This implementation properly:

- Creates an in-memory connection with a random name
- Applies required migrations
- Initializes the new `PeerDatabaseSql` instance

This change aligns with the PR objective and provides a clean test database implementation.
comms/core/src/protocol/rpc/test/comms_integration.rs (4)
44-50: Well-implemented random name generator for database isolation

Good implementation of the `random_name` function that generates unique 8-character alphanumeric names for database isolation. This ensures each test gets its own isolated in-memory database instance.

58-65: Proper SQLite database setup for test comms node

The implementation correctly sets up an in-memory SQLite database with migrations and uses it for peer storage. This approach provides test isolation and aligns with the broader SQLite migration.

73-80: Consistent database setup across comms instances

The second comms instance is properly configured with its own isolated in-memory SQLite database, maintaining test integrity and following the same pattern as the first instance.

83-86: Synchronous peer manager API usage

The code correctly uses the new synchronous `add_peer` method without `.await`, reflecting the API changes from asynchronous to synchronous in the peer manager implementation.

comms/core/src/peer_manager/error.rs (4)
26-28: Added necessary imports for SQLite and hex error handling

The imports have been properly updated to include SQLite storage errors and hex conversion errors, which are needed for the new error variants.

68-70: Added HexError variant

Added an appropriate error variant for hex conversion errors, which is required for handling hex encoding/decoding issues in the SQLite implementation.

72-76: Implemented From trait for StorageError conversion

Properly implemented the `From` trait to convert SQLite storage errors into `PeerManagerError`, ensuring clean error propagation.
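The shape of such a conversion can be sketched with illustrative stand-in types; the real `StorageError` and `PeerManagerError` variants live in `tari_common_sqlite` and comms/core and differ in detail:

```rust
// Illustrative stand-ins; the real error types live elsewhere.
#[derive(Debug, PartialEq)]
struct StorageError(String);

#[derive(Debug, PartialEq)]
enum PeerManagerError {
    Storage(String),
}

impl From<StorageError> for PeerManagerError {
    fn from(e: StorageError) -> Self {
        PeerManagerError::Storage(e.0)
    }
}

// With the From impl in place, `?` propagates storage failures cleanly
// without explicit map_err at every call site.
fn load_peer(fail: bool) -> Result<u64, PeerManagerError> {
    if fail {
        return Err(StorageError("db closed".into()).into());
    }
    Ok(42)
}
```

This is why the `From` impls matter: every fallible storage call inside the peer manager can use `?` directly.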
91-95: Implemented From trait for HexError conversion

Added a clean implementation of the `From` trait to convert hex conversion errors into `PeerManagerError`, completing the error handling for the new SQLite implementation.

comms/dht/src/proto/mod.rs (3)
75-75: Updated peer_features serialization

Properly updated to use explicit conversion with `bits().into()` instead of direct assignment, ensuring compatibility with the new `PeerFeatures` implementation that changed from `u32` to `u8`.

89-89: Updated PeerFeatures deserialization

Changed to use `from_bits_u32_truncate` instead of `from_bits`, which correctly handles the conversion from `u32` to the new truncated `u8` representation of peer features.
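The masking behaviour implied by such a helper can be sketched as follows; the constants are illustrative, not Tari's actual bit assignments:

```rust
// Illustrative u8-sized feature bits (not Tari's real assignments).
const COMMUNICATION_NODE: u8 = 0b0000_0001;
const COMMUNICATION_CLIENT: u8 = 0b0000_0010;

// Narrow a stored 32-bit value to 8-bit flags by masking the low byte,
// the behaviour the review attributes to `from_bits_u32_truncate`.
fn from_bits_u32_truncate(bits: u32) -> u8 {
    (bits & 0xFF) as u8
}
```

Any bits above the low byte are silently dropped, which is exactly why the review asks that stored 32-bit values be audited before the downsizing lands.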
109-109: Consistent peer_features bit handling

The conversion is consistent with the updated approach across the codebase, using explicit conversion with `bits().into()` for proper serialization of peer features.

comms/dht/src/actor.rs (1)
795-808: Double-check `allow_clients` flag when selecting propagation peers

`closest_n_active_peers(.., /*allow_clients =*/ true, …)` will include wallet/client-only peers even though we already restrict by `PeerFeatures::MESSAGE_PROPAGATION`.

If the intent is to target only propagation-capable communication nodes, pass `false` here to avoid relaying messages to nodes that cannot forward them:

```diff
- Some(STALE_PEER_THRESHOLD_DURATION),
- true,
+ Some(STALE_PEER_THRESHOLD_DURATION),
+ false,
```

Please verify against the new signature of `closest_n_active_peers`; an incorrect flag silently relaxes the selection criteria.
45-46: Unit-test isolation improvement acknowledged

Removing the `tempfile` dependency and switching to an in-memory SQLite peer database simplifies the test harness and speeds up execution. Nice cleanup!

comms/dht/src/network_discovery/ready.rs (1)

139-148: Parameter order & flag sanity for `closest_n_active_peers`

Ensure the new argument order matches the implementation: the added parameters `(Some(STALE_PEER_THRESHOLD_DURATION), true, None)` look correct, but if the last boolean is `allow_clients` it should probably be `false` (see earlier comment in actor.rs). A mismatched order will compile yet silently degrade peer selection quality.
comms/core/src/peer_manager/peer_features.rs (1)
31-44: Type downsizing – double-check external storage and wire formats
`PeerFeatures` changed from `u32` → `u8`. Ensure every DB column, protobuf/int representation, and ABI expecting 32-bit fields is migrated, otherwise silent truncation or UB can occur.

comms/core/src/peer_manager/mod.rs (1)

97-109: Re-exports look consistent – no action needed

The new exports (`generate_peer_id_as_i64`, `create_test_peer`, `PeerStorageSql` alias) keep backwards-compatibility and are scoped sensibly (`pub(crate)`/`cfg(test)`). Nice job!

base_layer/wallet_ffi/src/lib.rs (1)
135-141: ❌ Incorrect review comment

`NodeIdentity` may be an unused import – please double-check and clean up if not required

The newly-added `NodeIdentity` import does not appear in any of the modified hunks. If the symbol is not referenced elsewhere in this module the build will emit an `unused import` warning (or even a hard error if `#![deny(warnings)]` is enabled). You can verify with a quick search.

If the only hit is this import line, consider removing it:

```diff
- peer_manager::NodeIdentity,
```

🏁 Script executed:

```shell
#!/bin/bash
# Look for occurrences of `NodeIdentity` in this file
rg -n 'NodeIdentity' base_layer/wallet_ffi/src/lib.rs
```

Length of output: 288

Ignore unused import suggestion: `NodeIdentity` is referenced in this file

The `NodeIdentity` import is used in multiple places later in `base_layer/wallet_ffi/src/lib.rs` and should be retained:

- Line 6591: `let node_identity = Arc::new(NodeIdentity::with_signature_unchecked(...));`
- Line 12941: `NodeIdentity::random(&mut OsRng, get_next_memory_address(), PeerFeatures::COMMUNICATION_NODE);`

Likely an incorrect or invalid review comment.
comms/core/src/protocol/messaging/test.rs (1)
146-147: Thread-safety of synchronous `PeerManager` in async tests

`add_peer` is now synchronous, but the same `Arc<PeerManager>` is accessed concurrently from multiple async tasks in these tests (e.g. the messaging protocol itself). Please double-check that the internal `PeerManager` wraps the SQLite connection in a thread-safe synchronisation primitive (e.g. `RwLock<Connection>`). Otherwise simultaneous reads/writes will cause undefined behaviour or "use after close" panics.

comms/dht/tests/harness.rs (1)

179-181: Concurrent writes via synchronous API – confirm safety

`comms.peer_manager().add_peer(peer).unwrap();` is invoked inside a loop that may run concurrently with other peer-manager calls from the Comms/DHT runtime. As the API is now synchronous:

- Verify that `PeerManager` internally serialises access (e.g. via `Mutex`/`RwLock`), or
- Expose an async interface again that offloads DB calls to a dedicated thread.
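A minimal sketch of the serialisation property being asked about, with a hypothetical connection type standing in for the real SQLite handle:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for a synchronous SQLite connection.
struct Connection {
    rows: Vec<String>,
}

// A sync PeerManager stays safe under concurrent callers only if every
// access to the shared connection goes through a lock.
struct PeerManager {
    conn: Mutex<Connection>,
}

impl PeerManager {
    fn new() -> Self {
        PeerManager {
            conn: Mutex::new(Connection { rows: Vec::new() }),
        }
    }

    fn add_peer(&self, peer: &str) {
        // the lock serialises writes; callers need no .await
        self.conn.lock().unwrap().rows.push(peer.to_string());
    }

    fn peer_count(&self) -> usize {
        self.conn.lock().unwrap().rows.len()
    }
}

fn add_peers_concurrently(pm: Arc<PeerManager>, n: usize) {
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let pm = Arc::clone(&pm);
            thread::spawn(move || pm.add_peer(&format!("peer{i}")))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```

Without the `Mutex`, this pattern would not even compile for `&self` mutation in safe Rust, which is one reason to prefer an interior-locking design over exposing the raw connection.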
Actionable comments posted: 1
♻️ Duplicate comments (4)
base_layer/p2p/src/services/liveness/service.rs (1)
232-242: Properly using spawn_blocking for synchronous DB operations

Great implementation that properly offloads synchronous SQLite operations to a separate thread pool via `spawn_blocking`. This prevents the async event loop from being blocked during potentially slow disk I/O operations.

comms/core/src/peer_manager/storage/database.rs (3)
88-123: ⚠️ Potential issue

Avoid raw SQL string-building – vulnerable to injection and SQLite syntax errors

The code constructs SQL queries using string concatenation and `format!` with user-controlled fields, which is vulnerable to SQL injection attacks. Use Diesel's query builder instead of raw SQL string manipulation:

```diff
- let mut peer_query = String::from(
-     "INSERT INTO peers (peer_id, public_key, node_id, flags, banned_until, banned_reason, features, \
-      supported_protocols, added_at, user_agent, metadata, deleted_at) VALUES ",
- );
-
- peer_query.push_str(
-     &peers_with_addresses
-         .iter()
-         .map(|p| {
-             format!(
-                 "({}, '{}', '{}', {}, {}, {}, {}, '{}', '{}', '{}', {}, {})",
-                 generate_peer_id_as_i64(),
-                 p.peer.public_key,
-                 p.peer.node_id,
-                 // ... other fields ...
-             )
-         })
-         .collect::<Vec<String>>()
-         .join(", "),
- );
-
- peer_query.push_str(" ON CONFLICT (node_id) DO NOTHING");
- conn.batch_execute(&peer_query)?;
+ use diesel::insert_into;
+
+ // Assign IDs first
+ let peers_to_insert: Vec<_> = peers_with_addresses.iter().map(|p| {
+     let mut peer = p.peer.clone();
+     peer.peer_id = generate_peer_id_as_i64();
+     peer
+ }).collect();
+
+ insert_into(peers::table)
+     .values(&peers_to_insert)
+     .on_conflict(peers::node_id)
+     .do_nothing()
+     .execute(conn)?;
```
193-194: ⚠️ Potential issue

`INSERT … VALUES` with zero rows yields invalid SQL

If all multi-addresses already exist, the SQL query will be invalid as it would have no values after the `VALUES` keyword. Ensure the query is only executed when there are addresses to insert:

```diff
- address_query.push_str(" ON CONFLICT (address) DO NOTHING");
- conn.batch_execute(&address_query)?;
+ if total_addresses_inserted > 0 {
+     address_query.push_str(" ON CONFLICT (address) DO NOTHING");
+     conn.batch_execute(&address_query)?;
+ }
```
330-331: ⚠️ Potential issue

Hex-literal prefix (`x'...'`) used for plain text columns – will fail

Using `x'{}'` denotes a BLOB literal in SQLite and expects an even-length hex string. For multi-address strings (which are not hex data), this will cause `malformed hexadecimal literal` errors. Use proper string escaping instead:

```diff
- ids.push(format!("x'{}'", address_update.address));
+ ids.push(format!("'{}'", address_update.address.replace('\'', "''")));
```
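The quote-doubling rule used in the suggested fix can be captured in a small helper (hypothetical, for illustration; parameterised queries remain the safer option where available):

```rust
// Escape a value for embedding in a single-quoted SQLite string literal.
// Doubling embedded quotes is standard SQL escaping; the x'...' form is a
// BLOB hex literal and fails on non-hex text such as multiaddresses.
fn sql_string_literal(value: &str) -> String {
    format!("'{}'", value.replace('\'', "''"))
}
```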
🧹 Nitpick comments (5)
base_layer/p2p/src/services/liveness/service.rs (1)
236-238: Fix typo in comment

There's a small typo in the comment.

```diff
-// tokio::task::spawn_blocking is used herr to offload the blocking operation to a separate thread
+// tokio::task::spawn_blocking is used here to offload the blocking operation to a separate thread
```

base_layer/p2p/src/initialization.rs (1)
438-439: Consider synchronizing peer database filenames between test and production code

The test code uses "peers.db" while production code uses `config.peer_database_name` with a .db extension. Consider standardizing on one approach for better consistency.

comms/core/src/peer_manager/storage/database.rs (3)
1476-1478: Potential overflow without adequate handling

The `duration_to_i64_infallible` function uses `unwrap_or(i64::MAX)`, which silently converts overflows to the maximum value without logging or alerting. Consider adding logging for overflow cases:

```diff
 fn duration_to_i64_infallible(duration: Option<Duration>) -> Option<i64> {
-    duration.map(|d| d.as_millis().try_into().unwrap_or(i64::MAX))
+    duration.map(|d| {
+        match d.as_millis().try_into() {
+            Ok(value) => value,
+            Err(_) => {
+                log::warn!("Duration overflow: {:?} ms exceeds i64::MAX", d.as_millis());
+                i64::MAX
+            }
+        }
+    })
 }
```
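The saturating behaviour under discussion looks like this in a std-only sketch of the helper (without the logging, which needs the `log` crate):

```rust
use std::time::Duration;

// Convert an optional Duration to whole milliseconds as i64, saturating
// at i64::MAX on overflow instead of failing. as_millis() returns u128,
// so the conversion to i64 is fallible and must be handled.
fn duration_to_i64_infallible(duration: Option<Duration>) -> Option<i64> {
    duration.map(|d| i64::try_from(d.as_millis()).unwrap_or(i64::MAX))
}
```

Saturation only triggers for durations beyond roughly 292 million years, so the review's concern is less correctness than observability: a silent clamp hides a caller passing nonsense values.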
62-63: Consider adding error handling for database connection failures

The function correctly creates a `PeerManager` with the `PeerDatabaseSql` backend, but in production more context for database connection failures would be helpful. Consider mapping database connection errors to provide more context:

```diff
- let db_connection = DbConnection::connect_and_migrate(&database_url, MIGRATIONS)?;
+ let db_connection = DbConnection::connect_and_migrate(&database_url, MIGRATIONS)
+     .map_err(|e| StorageError::ConnectionError(format!("Failed to connect to peer database at {:?}: {}", &database_url, e)))?;
```
1155-1156: Potential inconsistency in returned random peers

The code doesn't consider the case where there are fewer qualified peers than requested. Consider adding a debug log when fewer peers than requested are returned:

```diff
 node_ids_hex.shuffle(&mut rand::thread_rng());
 node_ids_hex.truncate(n);
+
+if node_ids_hex.len() < n {
+    log::debug!("Requested {} random active peers, but only {} were available", n, node_ids_hex.len());
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- applications/minotari_node/src/commands/command/unban_all_peers.rs (1 hunks)
- base_layer/p2p/src/initialization.rs (4 hunks)
- base_layer/p2p/src/services/liveness/error.rs (2 hunks)
- base_layer/p2p/src/services/liveness/service.rs (4 hunks)
- comms/core/examples/stress/node.rs (2 hunks)
- comms/core/examples/tor.rs (3 hunks)
- comms/core/src/connection_manager/tests/listener_dialer.rs (9 hunks)
- comms/core/src/connection_manager/tests/manager.rs (10 hunks)
- comms/core/src/connectivity/manager.rs (10 hunks)
- comms/core/src/connectivity/test.rs (8 hunks)
- comms/core/src/net_address/multiaddr_with_stats.rs (1 hunks)
- comms/core/src/peer_manager/error.rs (4 hunks)
- comms/core/src/peer_manager/peer_features.rs (2 hunks)
- comms/core/src/peer_manager/peer_id.rs (1 hunks)
- comms/core/src/peer_manager/storage/database.rs (1 hunks)
- comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/down.sql (1 hunks)
- comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql (1 hunks)
- comms/core/src/peer_manager/storage/schema.rs (1 hunks)
- comms/core/src/protocol/rpc/test/mock.rs (1 hunks)
- comms/core/src/test_utils/peer_manager.rs (1 hunks)
- comms/dht/examples/propagation/node.rs (4 hunks)
✅ Files skipped from review due to trivial changes (1)
- comms/core/src/peer_manager/peer_features.rs
🚧 Files skipped from review as they are similar to previous changes (14)
- comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/down.sql
- comms/core/examples/stress/node.rs
- comms/core/examples/tor.rs
- comms/core/src/connection_manager/tests/manager.rs
- comms/core/src/connection_manager/tests/listener_dialer.rs
- comms/core/src/peer_manager/error.rs
- comms/core/src/net_address/multiaddr_with_stats.rs
- applications/minotari_node/src/commands/command/unban_all_peers.rs
- comms/core/src/peer_manager/peer_id.rs
- comms/core/src/connectivity/test.rs
- comms/core/src/peer_manager/storage/schema.rs
- comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql
- comms/dht/examples/propagation/node.rs
- comms/core/src/connectivity/manager.rs
🔇 Additional comments (12)
base_layer/p2p/src/services/liveness/error.rs (1)
32-32: Addition of JoinError aligns with the SQLite sync implementation

Good addition of the `JoinError` variant to handle potential failures from tokio task joins. This supports the pattern of offloading synchronous SQLite operations to blocking threads.
Also applies to: 64-65
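The offloading pattern, and why a join failure needs its own error variant, can be sketched with plain threads; `tokio::task::spawn_blocking` applies the same idea inside the async runtime:

```rust
use std::thread;

// Stand-in for a blocking, synchronous SQLite query.
fn blocking_db_lookup(peer: &str) -> Result<String, String> {
    Ok(format!("row for {peer}"))
}

// Run the blocking call on a separate thread so the caller's event loop
// is never stalled; a panicked worker surfaces as an error, which is the
// role the JoinError variant plays for tokio tasks.
fn lookup_off_thread(peer: String) -> Result<String, String> {
    let handle = thread::spawn(move || blocking_db_lookup(&peer));
    handle.join().map_err(|_| "worker thread panicked".to_string())?
}
```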
comms/core/src/protocol/rpc/test/mock.rs (1)
180-180: Added unwrap to handle the fallible peer manager creation

Good change to handle the fact that SQLite-based peer manager creation can now fail. The unwrap is appropriate in this test context.
base_layer/p2p/src/services/liveness/service.rs (1)
462-466: Updated test setup for SQLite peer database

Successfully replaced the LMDB database with a SQLite in-memory database for tests. The migration approach using `DbConnection::connect_memory_and_migrate` is appropriate for testing.

base_layer/p2p/src/initialization.rs (3)

161-165: Properly initialized SQLite database with joined filename

This correctly creates a "peers.db" file in the provided data directory, addressing previous feedback about path construction.

436-440: Successfully migrated peer manager imports

All the necessary imports for SQLite peer database have been properly added to replace the LMDB ones.
317-324: ❓ Verification inconclusive

SQLite database path construction in production code

This approach uses the configured peer database name and adds a .db extension, which is appropriate as long as `config.peer_database_name` doesn't already contain an extension. To verify the peer database name configuration doesn't contain a file extension already, you can check where it's defined.

🏁 Script executed:

```shell
#!/bin/bash
# Find where peer_database_name is defined and how it's typically set
rg -A 3 "peer_database_name.*=" --type rust
```

Length of output: 418

Verify `peer_database_name` Doesn't Already Include an Extension

I searched the Rust codebase and only found `peer_database_name` used in test utilities (where it's set without an extension). Before shipping, please confirm that your production `config.peer_database_name` (from your config file or CLI defaults) never includes a file extension, otherwise you'll end up with names like `foo.db.db`.

- Check where `peer_database_name` is loaded or defaulted in your config definitions.
- Ensure any documentation or examples emphasize that this value should be provided without ".db".

comms/core/src/test_utils/peer_manager.rs (3)
35-40: Properly propagates errors instead of using unwrap() - LGTM.

The function now returns a `Result<Arc<PeerManager>, PeerManagerError>` instead of unwrapping, which allows tests to handle potential errors. This change aligns with the previous review suggestion.

54-63: Non-test implementation properly handles file paths and database creation - LGTM.

This implementation:

- Creates directories if they don't exist
- Generates a random filename with a `.db` extension
- Properly propagates errors through the Result type
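Related to the `foo.db.db` concern raised above, appending `.db` only when no extension is already present avoids double extensions. A hypothetical helper (not code from the PR) could look like:

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper: build the peer-db path from a configured name,
// appending ".db" only when no extension is already present.
fn peer_db_path(data_dir: &Path, configured_name: &str) -> PathBuf {
    let mut path = data_dir.join(configured_name);
    if path.extension().is_none() {
        path.set_extension("db");
    }
    path
}
```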
66-72: Simple, effective random name generator - LGTM.

The function generates an 8-character random alphanumeric name, which is sufficient for testing purposes and for avoiding collisions in most practical scenarios.
comms/core/src/peer_manager/storage/database.rs (3)
1007-1014: Good handling of zero case without unnecessary database operations

The function correctly avoids database operations when n=0, which is an efficient pattern.

23-61: Well-structured imports and module organization

The imports are properly organized and grouped by standard library, external crates, and internal modules.

65-1573: Comprehensive peer database implementation with proper transactions

The `PeerDatabaseSql` implementation provides all necessary operations for managing peers in a SQLite database. The consistent use of transactions for operations that involve multiple updates ensures data consistency.

Despite the SQL injection concerns flagged in other comments, the overall structure and logic of the implementation is solid, with proper error handling, connection pooling, and query structuring.
Description
Motivation and Context
The LMDB peer_db was memory- and resource-intensive.
How Has This Been Tested?
New unit tests
Existing unit tests depending on the new peer_db implementation pass
System-level testing passed
What process can a PR reviewer use to test or verify this change?
Code review
System-level testing
Breaking Changes
BREAKING CHANGES:
`fn closest_peers` is replaced with `fn closest_n_active_peers` as a more flexible alternative.
Summary by CodeRabbit
- `add_or_update_peer` for consistency across modules.