
feat: add sqlite peer_db #6963

Merged
SWvheerden merged 28 commits into tari-project:development from hansieodendaal:ho_sqlite_peer_db, May 26, 2025

Conversation

Contributor

@hansieodendaal hansieodendaal commented Apr 17, 2025

Description

  • Added an SQLite implementation of peer_db.
  • Removed the LMDB implementation of peer_db.

Motivation and Context

The LMDB peer_db was memory- and resource-intensive.

How Has This Been Tested?

  • New unit tests
  • Existing unit tests that depend on the new peer_db implementation pass
  • System-level testing passed

What process can a PR reviewer use to test or verify this change?

  • Code review
  • System-level testing

Breaking Changes

  • None
  • Requires data directory on base node to be deleted
  • Requires hard fork
  • Other - Please specify

BREAKING CHANGES:

  • The existing LMDB peer_db will be discarded (left intact but no longer used).
  • In PeerManager, the method fn closest_peers is replaced with fn closest_n_active_peers as a more flexible alternative.
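The replacement method can be pictured with a small, self-contained sketch. Note these types and this signature are illustrative only; the real closest_n_active_peers on PeerManager takes additional parameters (e.g. staleness and feature filters) and queries SQLite:

```rust
// Illustrative peer record; not the actual tari type.
struct Peer {
    id: u64,
    is_active: bool,
}

/// Return the ids of the `n` active peers closest to `target`,
/// using the XOR metric typical of Kademlia-style routing.
fn closest_n_active_peers(peers: &[Peer], target: u64, n: usize) -> Vec<u64> {
    let mut active: Vec<&Peer> = peers.iter().filter(|p| p.is_active).collect();
    // Smaller XOR value means "closer" in the node-distance sense.
    active.sort_by_key(|p| p.id ^ target);
    active.into_iter().take(n).map(|p| p.id).collect()
}
```

The "active" filter is what makes this more flexible than the old closest_peers: inactive or banned peers can be excluded before the closest-n cut is taken.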

Summary by CodeRabbit

  • New Features
    • Transitioned peer storage backend from LMDB to SQLite, enhancing reliability and simplifying setup.
    • Added test utilities for creating peers with address stats and recent last-seen timestamps.
    • Extended peer and address data structures with constructors, conversion methods, and reset functions.
  • Bug Fixes
    • Unified peer add/update operations into add_or_update_peer for consistency across modules.
    • Corrected configuration comments to specify SQLite database paths instead of LMDB.
  • Refactor
    • Removed legacy peer query framework and migration code; replaced with async wrappers over SQLite.
    • Enhanced peer selection, filtering, and banning logic to leverage new database queries.
    • Simplified batch peer retrieval in wallet and FFI layers with direct SQL queries.
    • Updated test setups to use SQLite, replacing in-memory or LMDB dependencies.
    • Replaced direct sync calls with async wrappers in discovery, network, and connection modules.
    • Improved asynchronous handling and batch processing for peer management.
  • Chores
    • Removed LMDB and related dependencies; added SQLite, Diesel, and migration crates.
    • Created SQL migration scripts for node identity, peers, and addresses schemas.
    • Managed temporary database cleanup in tests and connection pools.
    • Cleaned imports and streamlined peer manager initialization in examples and tests.
  • Documentation
    • Updated configuration comments to clarify transition from LMDB to SQLite storage paths.

@hansieodendaal hansieodendaal marked this pull request as draft April 17, 2025 07:14
@coderabbitai
Contributor

coderabbitai bot commented Apr 17, 2025

Walkthrough

This update migrates peer management from LMDB and query-based frameworks to a SQLite-backed storage system. It removes the PeerQuery framework, refactors APIs to use add_or_update_peer(), and updates related logic and tests to utilize the new database schema, migration scripts, and error handling. Test helpers and application code are also adjusted accordingly.
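The async-wrapper-over-sync-storage shape described here can be sketched with the standard library alone. In the real code tokio::task::spawn_blocking plays the role of the spawned thread below; all names are illustrative stand-ins, not the actual tari API:

```rust
use std::thread;

// Stand-in for the synchronous Diesel/SQLite upsert; returns the peer's row id.
fn add_or_update_peer_blocking(peer_id: u64) -> Result<u64, String> {
    Ok(peer_id)
}

// Async-facing wrapper: move the blocking DB call off the caller's thread.
// Real code would call tokio::task::spawn_blocking and .await the JoinHandle
// instead of joining a std thread.
fn add_or_update_peer(peer_id: u64) -> Result<u64, String> {
    thread::spawn(move || add_or_update_peer_blocking(peer_id))
        .join()
        .map_err(|_| "blocking task panicked".to_string())?
}
```

The key property is that SQLite I/O never runs on an executor worker thread, which is the recurring concern in the review comments below.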

Changes

File(s) / Group | Change Summary
comms/core/src/peer_manager/peer_query.rs, comms/core/src/peer_manager/migrations.rs, comms/core/src/peer_manager/migrations/v7.rs, comms/core/src/peer_manager/wrapper.rs Deleted peer querying framework, LMDB migration logic, and wrapper for reserved key handling.
comms/core/src/peer_manager/peer_storage_sql.rs, comms/core/src/peer_manager/storage/database.rs, comms/core/src/peer_manager/storage/mod.rs, comms/core/src/peer_manager/storage/schema.rs, comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql, comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/down.sql Added SQLite-based peer storage, schema definitions, and migration scripts.
comms/core/src/peer_manager/manager.rs, comms/core/src/peer_manager/mod.rs, comms/core/src/peer_manager/peer_features.rs, comms/core/src/peer_manager/peer_id.rs, comms/core/src/peer_manager/peer.rs, comms/core/src/peer_manager/error.rs, comms/core/src/peer_manager/node_distance.rs Refactored PeerManager to use SQLite backend, updated public API, added new methods, error variants, and utility functions.
comms/core/src/types.rs, comms/core/src/builder/mod.rs, comms/core/src/builder/tests.rs, comms/core/src/test_utils/peer_manager.rs, comms/core/src/test_utils/factories/peer_manager.rs, comms/core/src/protocol/messaging/test.rs, comms/core/src/protocol/rpc/test/comms_integration.rs, comms/core/src/protocol/rpc/test/mock.rs, comms/core/src/protocol/rpc/test/smoke.rs, comms/core/src/connection_manager/tests/listener_dialer.rs, comms/core/src/connection_manager/tests/manager.rs, comms/core/src/connection_manager/tests/mod.rs, comms/core/src/net_address/multiaddr_with_stats.rs, comms/core/src/net_address/mutliaddresses_with_stats.rs, comms/core/src/connection_manager/dialer.rs, comms/core/src/connection_manager/listener.rs, comms/core/src/connectivity/manager.rs, comms/core/src/connectivity/requester.rs, comms/core/src/connectivity/test.rs Updated to use SQLite peer storage, refactored test and builder utilities, updated method calls, added/modified test helpers, and improved error handling.
comms/core/examples/stress/service.rs, comms/core/examples/stress/node.rs, comms/core/examples/tor.rs, comms/core/examples/memory_net/mod.rs, comms/core/examples/memory_net/utilities.rs, comms/core/examples/propagation/node.rs Migrated example code to use SQLite peer database, added test peer helpers, updated peer addition logic.
comms/dht/src/actor.rs, comms/dht/src/connectivity/mod.rs, comms/dht/src/connectivity/test.rs, comms/dht/src/network_discovery/discovering.rs, comms/dht/src/network_discovery/ready.rs, comms/dht/src/network_discovery/error.rs, comms/dht/src/network_discovery/on_connect.rs, comms/dht/src/network_discovery/test.rs, comms/dht/src/discovery/service.rs, comms/dht/src/inbound/deserialize.rs, comms/dht/src/inbound/dht_handler/task.rs, comms/dht/src/rpc/service.rs, comms/dht/src/rpc/test.rs, comms/dht/src/test_utils/makers.rs, comms/dht/src/test_utils/mod.rs, comms/dht/tests/attacks.rs, comms/dht/tests/dht.rs, comms/dht/tests/harness.rs, comms/dht/examples/propagation_stress.rs, comms/dht/examples/memorynet.rs Updated DHT modules and tests to use new peer manager API, SQLite storage, new peer addition/update methods, and refactored test helpers.
base_layer/p2p/src/services/liveness/service.rs, base_layer/p2p/src/services/liveness/error.rs, base_layer/p2p/src/lib.rs, base_layer/p2p/src/config.rs, base_layer/p2p/src/test_utils.rs Updated liveness service for blocking peer updates, added error handling, updated test utilities, and documentation for SQLite.
base_layer/core/src/base_node/sync/rpc/tests.rs, base_layer/core/src/mempool/rpc/test.rs, base_layer/core/src/test_helpers/mod.rs, base_layer/core/tests/tests/node_state_machine.rs, base_layer/core/tests/tests/block_sync.rs Refactored test helpers to use SQLite, updated peer addition logic, enabled logging in tests.
base_layer/wallet_ffi/src/error.rs, base_layer/wallet_ffi/src/lib.rs Added error conversion from PeerManagerError, refactored FFI peer retrieval to use new API and SQLite.
base_layer/wallet/src/wallet.rs, base_layer/wallet/src/storage/sqlite_utilities/mod.rs, base_layer/wallet/src/test_utils.rs, base_layer/wallet/tests/other/mod.rs, base_layer/wallet/tests/transaction_service_tests/service.rs Updated wallet peer management to use add_or_update_peer, refactored SQLite connection utilities, updated test directory handling.
base_layer/contacts/src/chat_client/src/networking.rs, base_layer/contacts/tests/contacts_service.rs Updated peer addition logic for contacts, improved temp directory handling in tests.
applications/minotari_console_wallet/src/automation/commands.rs, applications/minotari_console_wallet/src/init/mod.rs, applications/minotari_console_wallet/src/ui/state/app_state.rs, applications/minotari_console_wallet/src/recovery.rs Refactored peer retrieval and addition to use new peer manager API.
applications/minotari_node/src/commands/command/add_peer.rs, applications/minotari_node/src/commands/command/list_peers.rs, applications/minotari_node/src/commands/command/mod.rs, applications/minotari_node/src/commands/command/reset_offline_peers.rs, applications/minotari_node/src/commands/command/test_peer_liveness.rs, applications/minotari_node/src/commands/command/unban_all_peers.rs, applications/minotari_node/src/grpc/base_node_grpc_server.rs Updated node command handlers to use new peer manager methods and simplified peer queries.
common_sqlite/src/connection.rs, common_sqlite/src/sqlite_connection_pool.rs, common_sqlite/src/error.rs, common_sqlite/Cargo.toml Added global migration lock, improved temp file handling, extended error types, added new dependencies.
comms/core/Cargo.toml, comms/dht/Cargo.toml, base_layer/p2p/Cargo.toml Updated dependencies: removed LMDB, added SQLite, Diesel, and related crates.
common/config/presets/c_base_node_c.toml, common/config/presets/d_console_wallet.toml Updated config comments to reflect SQLite usage.
base_layer/contacts/tests/contacts_service.rs Changed temp directory handling to keep directory alive.
base_layer/wallet/tests/transaction_service_tests/service.rs Changed wallet tests to use distinct temp directories per wallet.
base_layer/wallet/src/storage/sqlite_utilities/mod.rs, base_layer/wallet/src/test_utils.rs Simplified SQLite connection pool creation, removed unused pool size parameter.
comms/core/src/net_address/multiaddr_with_stats.rs Added test-only method to reset stats for a multiaddress.
comms/core/src/peer_manager/identity_signature.rs Converted addresses iterator to vector before hashing.

Sequence Diagram(s)

sequenceDiagram
    participant App as App/Service
    participant PM as PeerManager (async)
    participant Store as PeerStorageSql (sync)
    participant DB as SQLite DB

    App->>PM: add_or_update_peer(peer)
    PM->>Store: (spawn_blocking) add_or_update_peer(peer)
    Store->>DB: INSERT or UPDATE peer row
    DB-->>Store: Result<PeerId>
    Store-->>PM: PeerId
    PM-->>App: PeerId

    App->>PM: get_seed_peers()
    PM->>Store: (spawn_blocking) get_seed_peers()
    Store->>DB: SELECT peers WHERE is_seed
    DB-->>Store: Vec<Peer>
    Store-->>PM: Vec<Peer>
    PM-->>App: Vec<Peer>

Possibly related PRs

  • feat: peer health check #6951: Adds get_seed_peers method and related functionality to support seed peer retrieval, which this PR refactors to use.
  • feat: add min peer version option #7006: Introduces minimum peer version validation in add_peer; this PR removes that feature as part of the SQLite migration and replaces add_peer with add_or_update_peer.

Suggested reviewers

  • SWvheerden
  • hansieodendaal

Poem

SQLite burrows deep, where peers now safely sleep,
No more LMDB to keep, nor queries winding steep.
Rabbits tap their keys with cheer, as tests and code align,
New tables, locks, and helpers here—our peer store’s doing fine!
🐇💾🎉


@github-actions

github-actions bot commented Apr 17, 2025

Test Results (Integration tests)

  • 2 files (+2), 2 suites (+2), 1 parsing error, 48m 32s ⏱️ (+48m 32s)
  • 18 tests (+18): 17 ✅ (+17), 0 💤 (±0), 1 ❌ (+1)
  • 20 runs (+20): 17 ✅ (+17), 0 💤 (±0), 3 ❌ (+3)

For more details on these parsing errors and failures, see this check.

Results for commit d53c735. ± Comparison against base commit 9e121b0.

♻️ This comment has been updated with latest results.

@github-actions

github-actions bot commented Apr 17, 2025

Test Results (CI)

  • 1 092 tests (1 088 ✅, 0 💤, 4 ❌) in 30 suites, 1 file, 15m 22s ⏱️

For more details on these failures, see this check.

Results for commit da7add6.

♻️ This comment has been updated with latest results.

@hansieodendaal hansieodendaal force-pushed the ho_sqlite_peer_db branch 11 times, most recently from cea2df7 to 7e573c0 Compare April 24, 2025 05:41
@hansieodendaal hansieodendaal force-pushed the ho_sqlite_peer_db branch 10 times, most recently from c1ecd95 to df35680 Compare May 1, 2025 17:01
@hansieodendaal hansieodendaal marked this pull request as ready for review May 1, 2025 17:01
@hansieodendaal hansieodendaal requested a review from a team as a code owner May 1, 2025 17:01
@hansieodendaal hansieodendaal changed the title feat: add sqlite peer_db feat!: add sqlite peer_db May 1, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

🔭 Outside diff range comments (2)
comms/dht/src/actor.rs (1)

395-406: 🛠️ Refactor suggestion

Avoid multiple database hits for excluded-address checks

get_peer_multi_addresses() executes a query for the current node every time check_if_addresses_excluded is called (twice during startup).
Although this is not a hot path, the call goes through Diesel/SQLite synchronously and therefore blocks the Tokio executor thread.

If you keep the helper async, prefer using tokio::task::spawn_blocking (or move the helper to a synchronous context) so DB I/O is shifted to a blocking thread pool:

-let addresses = peer_manager.get_peer_multi_addresses(&node_id)?;
+let addresses = tokio::task::spawn_blocking(move || peer_manager.get_peer_multi_addresses(&node_id))
+    .await
+    .map_err(|e| DhtActorError::PeerManagerError(e.into_panic().into()))??;

This prevents accidental starvation if the SQLite file grows or is on slow storage.

comms/core/src/connectivity/manager.rs (1)

670-709: 🛠️ Refactor suggestion

⚠️ Potential issue

Same blocking concern for find_by_node_id, delete_peer, and failure handling

on_peer_connection_failure() performs two synchronous DB calls while still executing on the Tokio worker thread. A malicious peer that intentionally causes repeated failures could starve the actor.

Refactor as above (spawn_blocking or async DB) to insulate the event loop.

Also applies to: 820-820

♻️ Duplicate comments (1)
comms/core/src/protocol/messaging/test.rs (1)

405-406: See note above about concurrent access

The synchronous add_peer call here is executed on a background task spawned later in the test, reinforcing the need for interior synchronisation.

🧹 Nitpick comments (48)
common/config/presets/c_base_node_c.toml (1)

144-149: Clarify peer DB path comment and align wording with earlier sections

Nice to see the new SQLite-specific comment. Two small nit-picks:

  1. “Path to the SQLite data files” could mention that this path is relative to data_dir (like other path-style keys) to avoid confusion.
  2. A few lines above (12-20) we still describe LMDB and even have a “lmbd” typo. Readers might think the node still relies on LMDB for the peer DB. Consider amending or cross-referencing those comments to make it clear that only the blockchain database may still use LMDB, while the peer database is now SQLite.

No functional code change is required, but the docs will be clearer.

common/config/presets/d_console_wallet.toml (1)

183-188: Add hint that the path is relative to data_dir and avoid ambiguity with LMDB

Same remark as for the base-node preset: state explicitly that this path is relative to data_dir and is used exclusively for the new SQLite peer database. This avoids any assumption that LMDB is still in play for peers.

A tiny doc tweak keeps future maintainers from scratching their heads.

comms/dht/src/rpc/service.rs (1)

161-169: Double-check peer filtering logic with STALE_PEER_THRESHOLD_DURATION

closest_n_active_peers receives
Some(STALE_PEER_THRESHOLD_DURATION), true, None.
Depending on the implementation, combining

  1. Some(STALE_PEER_THRESHOLD_DURATION) – exclude peers that have not been active within this duration, and
  2. the hard-coded true flag (likely include only online peers or strict filtering)

can significantly reduce the candidate set. With a small network this may drop the returned peer count below n, causing poorer routing performance or more NotEnoughPeers errors upstream.

Please verify that the constant’s value (and the extra true flag) are appropriate for all call-sites – e.g. wallet, base-node, and integration tests – and that unit/prop tests cover the edge-case where fewer than n peers remain after filtering.
If necessary, consider exposing these parameters to the caller or using sensible fall-backs (e.g. relax the staleness constraint when not enough peers are found).
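As a concrete illustration of the staleness cut being discussed, here is a minimal sketch. The field names and the Option<Duration> parameter are assumptions mirroring the call above, not the actual tari signature:

```rust
use std::time::Duration;

// Illustrative peer record; last-seen age is pre-computed for simplicity.
struct Peer {
    id: u64,
    last_seen_secs_ago: u64,
}

/// Drop peers whose last-seen age exceeds the threshold; `None` disables
/// the staleness filter entirely.
fn filter_stale(peers: Vec<Peer>, threshold: Option<Duration>) -> Vec<u64> {
    peers
        .into_iter()
        .filter(|p| match threshold {
            // Keep only peers seen within the threshold window.
            Some(t) => Duration::from_secs(p.last_seen_secs_ago) <= t,
            None => true,
        })
        .map(|p| p.id)
        .collect()
}
```

On a small network this filter can easily shrink the candidate set below n, which is exactly the edge case the comment above asks to be covered by tests.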

applications/minotari_node/src/commands/command/test_peer_liveness.rs (1)

89-90: Handle delete_peer errors explicitly

The result of peer_manager.delete_peer(&node_id) is ignored.
If the peer is not in the DB (or the call fails), the silent swallow makes diagnostics harder and still leaves the subsequent add_peer path unchanged.

-        let _res = peer_manager.delete_peer(&node_id);
+        if let Err(e) = peer_manager.delete_peer(&node_id) {
+            // Ignore “peer not found” but surface any real IO/DB errors
+            if !matches!(e, tari_comms::peer_manager::PeerManagerError::PeerNotFound) {
+                log::warn!(target: "commands::test_peer_liveness",
+                           "Failed to delete peer {} before liveness test: {}", node_id, e);
+            }
+        }

This keeps the intent (best-effort removal) while avoiding silent failure and removes the unused _res binding.

comms/core/tests/tests/substream_stress.rs (1)

67-68: Peer already present?

add_peer(...).unwrap() will panic if the peer was added earlier (e.g. by a previous test run re-using the same temp DB).
Because the call is now synchronous, replacing unwrap() with an expect_not_exists style helper or ignoring the “already exists” error would make the test more robust:

-    node1.peer_manager().add_peer(node2.node_identity().to_peer()).unwrap();
+    if let Err(e) = node1.peer_manager().add_peer(node2.node_identity().to_peer()) {
+        assert!(matches!(e, tari_comms::peer_manager::PeerManagerError::PeerExists),
+                "Unexpected error adding peer: {}", e);
+    }

This keeps the test strict while eliminating a possible nondeterministic panic.

applications/minotari_console_wallet/src/ui/state/app_state.rs (1)

942-944: Log peer-lookup errors instead of silently discarding them

Errors returned by peer_manager.find_by_node_id are currently ignored, which can hide useful diagnostics when the peer DB is unavailable or corrupt.
Capturing these errors at debug (or even trace) level adds virtually no overhead but greatly eases troubleshooting.

-            if let Ok(Some(p)) = peer_manager.find_by_node_id(c.peer_node_id()) {
-                peers.push(p);
-            }
+            match peer_manager.find_by_node_id(c.peer_node_id()) {
+                Ok(Some(p)) => peers.push(p),
+                Ok(None) => {},           // not found – ignore
+                Err(e) => {
+                    debug!(
+                        target: LOG_TARGET,
+                        "Failed to look up peer {}: {}",
+                        c.peer_node_id(),
+                        e
+                    );
+                }
+            }
common_sqlite/src/connection.rs (1)

132-138: Accept Into<String> to avoid unnecessary String allocations

connect_memory_and_migrate forces callers to allocate a String.
Accepting any Into<String> (the same pattern connect_memory already uses internally) gives greater flexibility and may save an allocation for &'static str names used in tests.

-pub fn connect_memory_and_migrate(name: String, migrations: EmbeddedMigrations) -> Result<Self, StorageError> {
-    let conn = Self::connect_memory(name)?;
+pub fn connect_memory_and_migrate<S: Into<String>>(
+    name: S,
+    migrations: EmbeddedMigrations,
+) -> Result<Self, StorageError> {
+    let conn = Self::connect_memory(name.into())?;
comms/dht/src/network_discovery/discovering.rs (2)

76-81: Synchronous DB call inside async context may block reactor

calc_region_threshold now hits the SQLite-backed peer DB synchronously. During bootstrap this can execute thousands of distance calculations, blocking the Tokio worker thread and reducing overall throughput.

Consider running heavy synchronous DB tasks on a blocking pool, e.g.

let num_neighbours = self.config().num_neighbouring_nodes;
let threshold = tokio::task::spawn_blocking({
    let pm = self.context.peer_manager.clone();
    let node_id = self.context.node_identity.node_id().clone();
    // `self` cannot be moved into the 'static closure, so capture the
    // config value and clones up front.
    move || pm.calc_region_threshold(&node_id, num_neighbours, PeerFeatures::COMMUNICATION_NODE)
})
.await??;

Or expose a non-blocking variant inside PeerManager.

This applies similarly to the calls below (find_by_public_key, add_peer).


205-217: Bulk peer insertions are now synchronous ‑ watch for IO stalls

find_by_public_key and add_peer perform read/write SQL statements for every peer streamed from the remote.
During large discovery rounds (hundreds of peers) this loop can monopolise an async worker thread for noticeable periods.

If benchmarks show stalls, off-loading to spawn_blocking or batching inserts (e.g., using a transaction) would mitigate the issue.
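The batching idea can be sketched with the standard library (the real fix would wrap the Diesel inserts in a single connection.transaction(...); the in-memory store and names below are illustrative stand-ins):

```rust
use std::collections::HashMap;

// Illustrative in-memory stand-in for the peers table: id -> address.
type PeerStore = HashMap<u64, String>;

// Per-peer upserts: one "round trip" per peer, the pattern flagged above.
fn add_peers_one_by_one(store: &mut PeerStore, peers: &[(u64, String)]) {
    for (id, addr) in peers {
        store.insert(*id, addr.clone());
    }
}

// Batched variant: validate the whole batch first, then apply it in one
// step, mirroring how a single SQL transaction commits all rows or none.
fn add_peers_batched(store: &mut PeerStore, peers: &[(u64, String)]) -> Result<usize, String> {
    if peers.iter().any(|(_, addr)| addr.is_empty()) {
        return Err("batch rejected: empty address".to_string()); // nothing applied
    }
    for (id, addr) in peers {
        store.insert(*id, addr.clone());
    }
    Ok(peers.len())
}
```

Beyond fewer round trips, a real transaction also amortises SQLite's per-commit fsync, which is usually the dominant cost in a write-heavy discovery round.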

applications/minotari_console_wallet/src/init/mod.rs (2)

350-355: Fix typo in error message (Could not, not Could net)

Minor but user-visible:

-            format!("Could net get seed peers from peer manager: {}", err),
+            format!("Could not get seed peers from peer manager: {}", err),

63-66: Remove now-unused PeerQuery import

PeerQuery was eliminated in the SQLite refactor; keeping the import (even under #![allow(unused)]) obscures the codebase and may confuse newcomers.

-use tari_comms::{
-    multiaddr::Multiaddr,
-    peer_manager::{Peer, PeerFeatures, PeerQuery},
-    types::CommsPublicKey,
-    NodeIdentity,
-};
+use tari_comms::{
+    multiaddr::Multiaddr,
+    peer_manager::{Peer, PeerFeatures},
+    types::CommsPublicKey,
+    NodeIdentity,
+};
applications/minotari_node/src/commands/command/list_peers.rs (1)

48-55: Gracefully handle unknown filter values

An unrecognised filter string currently lists all peers without letting the user know their filter was ignored. Consider warning or returning an error to avoid user confusion, e.g.:

-        let features = filter.and_then(|value| match value.to_lowercase().as_str() {
+        let features = filter.as_ref().and_then(|value| match value.to_lowercase().as_str() {
             "basenode" | "basenodes" | "base_node" | "base-node" | "bn" => Some(PeerFeatures::COMMUNICATION_NODE),
             "wallet" | "wallets" | "w" => Some(PeerFeatures::COMMUNICATION_CLIENT),
             _ => None,
         });
+        if filter.is_some() && features.is_none() {
+            println!("Unknown filter '{}'; showing all peers", filter.unwrap());
+        }
base_layer/core/src/base_node/state_machine_service/states/listening.rs (1)

183-207: Blocking SQLite calls inside a hot async loop may stall the runtime

is_peer_banned and set_peer_metadata are synchronous Diesel/SQLite operations executed directly inside the Listening::next_event async loop.
On a busy node this loop runs frequently; blocking the core Tokio worker thread for DB IO can reduce liveness.

Consider off-loading to a blocking thread:

let peer_banned = tokio::task::spawn_blocking({
    let peer_manager = shared.peer_manager.clone();
    let node_id = peer_metadata.node_id().clone();
    move || peer_manager.is_peer_banned(&node_id)
})
.await??;

Same for set_peer_metadata. If profiling shows the DB cost is negligible you can document this rationale instead.

comms/core/src/peer_manager/error.rs (1)

44-48: Added SQLite-specific error variants

The new StorageError and SqlDatabaseError variants have been added to handle SQLite-related errors. This allows proper error propagation from the new SQLite backend.

However, there appears to be duplicate error descriptions between StorageError and SqlDatabaseError. Consider consolidating these variants or using more specific error messages to differentiate them.

-    #[error("A problem has been encountered with the sql database: {0}")]
-    StorageError(String),
-    #[error("A problem has been encountered with the sql database: {0}")]
+    #[error("A problem has been encountered with the storage layer: {0}")]
+    StorageError(String),
+    #[error("A problem has been encountered with the SQL database: {0}")]
     SqlDatabaseError(String),
comms/dht/src/actor.rs (2)

747-760: N² DB calls while filtering peers

Inside the loop we call get_peer_multi_addresses(id)? once per candidate.
In worst-case broadcast (BroadcastStrategy::Flood) this can be thousands of calls, each hitting SQLite synchronously.

Two quick wins:

  1. Fetch all candidate addresses in one shot (e.g. new peer_manager.multi_addresses(&peers[..]) helper) and
  2. Re-use excluded_dial_addresses predicate as a closure to closest_n_active_peers so the filtering happens in-DB.

Even a small refactor halves peer-selection latency:

let filtered_peers = peer_manager
    .filter_not_excluded(&peers, &config.excluded_dial_addresses)?;

This also removes a potential write-lock convoy on PeerManager.


878-878: Blocking DB call inside async context

peer_manager.exists(&public_key) is synchronous and may block the runtime.
Consider wrapping heavy calls in spawn_blocking or providing async variants to avoid scheduler starvation.

comms/dht/src/network_discovery/ready.rs (2)

59-61: Synchronous DB access inside hot polling loop

peer_manager.count() is now synchronous; every discovery round executes it on the Tokio core.
As rounds run on a tight interval, consider off-loading the call:

let peer_manager = self.context.peer_manager.clone();
let num_peers = tokio::task::spawn_blocking(move || peer_manager.count())
    .await
    .unwrap_or(0);

This keeps the discovery state-machine CPU-bound and avoids blocking other tasks.


80-84: Random peer selection may return less than requested

random_peers(max, exclude) can legitimately return fewer peers than max when the DB has insufficient candidates (especially just after startup).
Down-stream code assumes a non-empty set and transitions to Idle when peers.is_empty(). No issue here, but adding a debug explaining why the set is short guards future readers:

if peers.len() < self.config().network_discovery.max_sync_peers {
    debug!(target: LOG_TARGET, "Only {} eligible sync peers found", peers.len());
}
comms/core/src/peer_manager/peer_id.rs (1)

33-36: Potential modulo-bias and readability issue in generate_peer_id_as_i64

Using % i64::MAX keeps the value in range, but it introduces a (tiny) modulo bias because u64::MAX is not an exact multiple of i64::MAX.
If perfectly uniform distribution is important (e.g. for sharding or hashing), consider re-rolling until the value is < i64::MAX.

-pub fn generate_peer_id_as_i64() -> i64 {
-    i64::try_from(generate_peer_key() % u64::try_from(i64::MAX).expect("infallible")).expect("infallible")
+pub fn generate_peer_id_as_i64() -> i64 {
+    // Re-roll until the sample fits into the signed range to avoid modulo bias
+    loop {
+        let k = generate_peer_key();
+        if k < i64::MAX as u64 {
+            return k as i64;
+        }
+    }
 }
comms/core/src/peer_manager/peer_features.rs (2)

68-72: Simpler & safer truncation helper

from_bits_u32_truncate currently masks with 0b1111_1111 and then try_from. Because the mask guarantees the value fits, the try_from can never fail—as cast is clearer and avoids the expect.

-    pub fn from_bits_u32_truncate(value: u32) -> Option<Self> {
-        PeerFeatures::from_bits(u8::try_from(value & 0b1111_1111).expect("will not fail"))
+    pub fn from_bits_u32_truncate(value: u32) -> Option<Self> {
+        PeerFeatures::from_bits((value & u8::MAX as u32) as u8)
     }

74-77: to_i32 could be a simple cast

Unless you explicitly need little-endian byte ordering, self.bits() as i32 is shorter and endianness-agnostic. Consider:

-    pub fn to_i32(&self) -> i32 {
-        i32::from_le_bytes([self.bits(), 0, 0, 0])
-    }
+    pub fn to_i32(&self) -> i32 {
+        self.bits() as i32
+    }
comms/core/src/connection_manager/common.rs (3)

199-206: Graceful failure path on invalid feature bits

from_bits_u32_truncate will always return Some(_) because of truncation; the subsequent ok_or branch is therefore dead code.
If you still want to error on unknown high bits, test with (features & !0xFF) != 0 before truncating.

if features & !0xFF != 0 {
    return Err(PeerManagerError::InvalidPeerFeatures { bits: features }.into());
}
let features = PeerFeatures::from_bits_u32_truncate(features).unwrap(); 

291-296: Blocking synchronous DB call inside async context

peer_manager.find_by_public_key is now synchronous and may touch SQLite.
Calling it directly inside an async task can block the Tokio executor and hurt throughput. Consider wrapping heavy, potentially blocking operations in tokio::task::spawn_blocking (or migrate to an async SQLite client).


322-323: Same blocking concern for ban_peer

peer_manager.ban_peer is synchronous; if it writes to disk it should be executed in a blocking task to avoid stalling the reactor.

comms/dht/examples/memory_net/utilities.rs (2)

636-640: Avoid unchecked unwrap() in helper – promote explicit failure context
DbConnection::connect_memory_and_migrate(..) can fail if the connection cannot be opened or the migrations bomb out.
Because this helper is called in many examples/tests, an uncontrolled panic makes diagnostics harder. Provide an
expect(..) message (or bubble the error upward) so call-sites get a meaningful hint.

-    let db_connection = DbConnection::connect_memory_and_migrate(database_name, MIGRATIONS).unwrap();
-    PeerDatabaseSql::new(db_connection)
+    let db_connection = DbConnection::connect_memory_and_migrate(database_name, MIGRATIONS)
+        .expect("Failed to create in-memory SQLite peer database");
+    PeerDatabaseSql::new(db_connection)

This tiny change keeps the convenience while greatly improving debuggability.


725-726: Provide context when seeding peers instead of raw unwrap()
A failure to insert a seed peer will abort the entire example with a cryptic panic message.

-        comms.peer_manager().add_peer(peer).unwrap();
+        if let Err(e) = comms.peer_manager().add_peer(peer) {
+            panic!("Unable to seed peer database: {}", e);
+        }

Alternatively, log the error and continue depending on the demo’s tolerance.

base_layer/wallet_ffi/src/lib.rs (3)

6021-6027: Potential blocking DB calls inside an async loop
peer_manager.find_by_node_id(..) is now synchronous and, with the new SQLite backend, can touch the database.
Calling it directly inside an async context risks blocking the Tokio executor for the duration of the SQLite query, which can hurt latency under load.

Consider off-loading the synchronous work with tokio::task::spawn_blocking (or wrapping the whole map/collect in one call) so the async runtime remains responsive.

Example sketch:

-        let mut public_keys = Vec::with_capacity(connections.len());
-        for conn in connections {
-            if let Some(peer) = peer_manager.find_by_node_id(conn.peer_node_id())? {
-                public_keys.push(peer.public_key);
-            }
-        }
+        let public_keys = tokio::task::spawn_blocking(move || -> Result<Vec<CommsPublicKey>, PeerManagerError> {
+            let mut keys = Vec::with_capacity(connections.len());
+            for conn in connections {
+                if let Some(peer) = peer_manager.find_by_node_id(conn.peer_node_id())? {
+                    keys.push(peer.public_key);
+                }
+            }
+            Ok(keys)
+        })
+        .await??;

This keeps the heavy synchronous code off the async worker threads.


7603-7608: Minor: explicit pre-allocation is optional here

peers.iter().map(..).collect::<Vec<_>>() is fine; slice iterators report an exact size_hint, so collect already reserves the right capacity (peers.len()).
If you prefer the allocation to be explicit:

-            let public_keys = peers.iter().map(|p| p.public_key.clone()).collect::<Vec<_>>();
+            let mut public_keys = Vec::with_capacity(peers.len());
+            public_keys.extend(peers.iter().map(|p| p.public_key.clone()));

Not critical, just a micro-optimisation.


9433-9437: Same blocking concern when falling back to seed peers
peer_manager.get_seed_peers() is synchronous and may hit SQLite.
If this code runs inside an async task (which wallet_start_recovery usually does), the same blocking-in-async caveat applies as above – consider spawn_blocking or similar.

comms/core/src/protocol/messaging/test.rs (3)

69-75: Avoid re-implementing random_name in every test module

random_name() (and the exact implementation body) already exists in at least three other modules (peer_manager.rs, builder/tests.rs, test_utils/peer_manager.rs). Re-defining it here:

  1. inflates binary size,
  2. makes future behaviour-changes error-prone,
  3. increases maintenance overhead.

Consider moving a single implementation into comms::test_utils::random (or similar) and re-exporting it in the affected crates.


69-73: Minor: use char::from instead of as char for clarity

Casting the sampled u8 with as char works but is a little opaque; char::from makes the lossless u8-to-char conversion explicit.
A clearer, zero-cost alternative:

-        .map(|_| rng.sample(Alphanumeric) as char)
+        .map(|_| char::from(rng.sample(Alphanumeric)))

77-81: create_peer_manager duplicates logic found elsewhere

The same 4-line helper exists in several crates (test_helpers/mod.rs, dht/tests/harness.rs, etc.). Extracting a common helper (e.g. test_utils::create_memory_peer_manager()) removes duplication and guarantees all tests use identical migration logic.

comms/core/src/peer_manager/peer.rs (2)

59-64: to_i32 can be simplified (and avoid endian confusion)

pub fn to_i32(&self) -> i32 {
    i32::from_le_bytes([self.bits(), 0, 0, 0])
}

The extra byte shuffling is unnecessary; a u8-to-i32 cast is well-defined and endianness-agnostic:

-    i32::from_le_bytes([self.bits(), 0, 0, 0])
+    self.bits() as i32

Unless there is an explicit requirement to serialise in LE byte-order for FFI, the cast is clearer.
If LE encoding is required, add a short comment to avoid future “optimisation”.


129-161: new_with_stats replicates Peer initialisation – consider a builder

The new constructor copies almost the entire struct literal from new(), increasing duplication risk. A typed builder pattern (or at least calling Peer { ..Peer::new(...) }) would:

  • cut repetition,
  • guarantee defaulting logic stays consistent,
  • make future field additions compile-fail instead of silently defaulting.

Example sketch:

impl Peer {
    pub fn builder(public_key: CommsPublicKey, node_id: NodeId) -> PeerBuilder { /* ... */ }
}

Or reuse Peer::new and override the few differing fields.

base_layer/core/src/test_helpers/mod.rs (1)

194-198: Centralise create_peer_manager helper

The same snippet appears in three different test helper modules. Extracting this into, for example, tari_test_utils::peer_manager::create_memory_peer_manager() eliminates duplication and keeps migration behaviour consistent.

comms/dht/tests/harness.rs (1)

99-103: create_peer_storage duplicates other helpers

This helper is functionally identical to ones introduced in other test crates. Please consolidate into a shared util to avoid divergence (migration list, connection flags, etc.) later.

base_layer/p2p/src/services/liveness/service.rs (2)

426-436: Import hygiene & test-only helpers

Good move to swap the in-memory LMDB helper for an in-memory SQLite DB.
Nit: the test-module re-exports PeerFeatures, PeerFlags, … which are unused in this file after the refactor. Removing them avoids a clippy warning.


458-461: Avoid panicking in unit-tests – propagate the error

PeerManager::new(peers_db, None).unwrap() will panic with an uninformative message if the migrations break – masking the root cause.
Change to expect("in-memory peer DB should always open") or bubble the error with Result.

- Arc::new(PeerManager::new(peers_db, None).unwrap())
+ Arc::new(PeerManager::new(peers_db, None)
+     .expect("in-memory peer DB must initialise in tests"))
base_layer/p2p/src/initialization.rs (2)

125-127: Error variant wording

StorageError already conveys its meaning; embedding the same term in the display string produces StorageError StorageError(...).
Use #[error(transparent)] or change the format string to avoid repetition.


317-324: Hard-coding extension hides possible double-extension bugs

For consistency with the fix above and user-configurable filenames, consider delegating the full file name (including extension) to config.peer_database_name instead of unconditionally appending .db.

comms/dht/examples/propagation/node.rs (1)

65-68: Consider using consistent file naming for the two SQLite databases

The peer database is created with the .db extension while the DHT database uses a hard-coded dht.sqlite name.
For clarity and to avoid tooling confusion it would be more consistent to use the same extension (either .sqlite or .db) or at least document the rationale in a comment.

comms/dht/src/connectivity/mod.rs (3)

559-567: Avoid re-computing node distances while sorting

sort_by_key recomputes the distance for every comparison, which costs O(n log n) distance calculations.
For small pools this is fine, but you can make it clearer and marginally faster by pre-computing once:

-let mut peers_by_distance = self.peer_manager.get_peers_by_node_ids(&peer_list)?;
-peers_by_distance.sort_by_key(|a| a.node_id.distance(self.node_identity.node_id()));
+let mut peers_by_distance = self
+    .peer_manager
+    .get_peers_by_node_ids(&peer_list)?
+    .into_iter()
+    .map(|p| {
+        let dist = p.node_id.distance(self.node_identity.node_id());
+        (dist, p)
+    })
+    .collect::<Vec<_>>();
+peers_by_distance.sort_by_key(|(dist, _)| *dist);
+let peers_by_distance: Vec<Peer> = peers_by_distance.into_iter().map(|(_, p)| p).collect();

Not critical, but documents intent and avoids redundant work.


951-987: Provide unit coverage for new closest_n_active_peers selection logic

The previous query builder was removed in favour of the new convenience call.
Edge-cases worth exercising:

  • try_revive_connections = true should still pick peers whose addresses have all failed.
  • Distance cap logic when minimize_connections is toggled.
  • Correct honouring of offline_peer_cooldown vs STALE_PEER_THRESHOLD_DURATION.

Adding targeted tests in connectivity/test.rs (similar to the excerpt in the snippets) will prevent silent regressions in future refactors.


867-895: get_peer_multi_addresses can be called once per peer

Currently two identical DB look-ups are performed for neighbours and random pools.
A micro-optimisation is to fetch the addresses once and reuse them; this also shortens the critical section if the DB layer ever needs locks.

comms/core/src/test_utils/peer_manager.rs (1)

63-69: Simplify random string generation

The current approach (iter::repeat(()).map(...)) is unconventional and slightly confusing.

-use std::{iter, sync::Arc};
+use std::sync::Arc;

 use rand::{distributions::Alphanumeric, Rng};

 // ...

 pub fn random_name() -> String {
-    let mut rng = rand::thread_rng();
-    iter::repeat(())
-        .map(|_| rng.sample(Alphanumeric) as char)
-        .take(8)
-        .collect::<String>()
+    rand::thread_rng()
+        .sample_iter(&Alphanumeric)
+        .take(8)
+        .map(char::from)
+        .collect()
 }

Cleaner and more idiomatic.

comms/core/src/peer_manager/peer_query.rs (1)

23-74: PeerQuery looks orphaned – consider deleting or re-integrating it

PeerQuery is still compiled but no longer has any executor or call-sites after the switch to the synchronous, SQL-backed PeerManager. Keeping this builder API around:

  • Adds dead code that new contributors may try to reuse even though it silently does nothing.
  • Creates maintenance overhead whenever Peer or NodeId change.
  • Risks accumulating bit-rot and eventually failing CI with “unused” or “dead_code” lints.

If the long-term plan is to rely exclusively on SQL queries, please delete this file (or mark it #[deprecated] with a clear comment as to why).
If you expect to revive in-memory queries later, add at least one integration test exercising PeerQuery::new() end-to-end so that CI catches accidental breakage.

comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql (1)

1-18: Primary key choice for peers.peer_id may forfeit SQLite row-id optimisations

peer_id BIGINT PRIMARY KEY is valid, but in SQLite INTEGER PRIMARY KEY (exact spelling) is treated specially and aliases the hidden rowid, giving:

  • Slightly smaller on-disk footprint.
  • Faster INSERT and DELETE.
  • Automatic auto-increment behaviour (when desired).

If peer_id is never negative and fits in 64-bits, consider:

-peer_id BIGINT PRIMARY KEY,
+peer_id INTEGER PRIMARY KEY,

(or drop peer_id entirely and use the implicit rowid).

comms/core/src/peer_manager/peer_storage_sql.rs (1)

355-369: update_each loads all peers into memory – consider streaming

update_each first calls self.peer_storage_sql.all() which materialises every peer (addresses included) before applying the closure.
On large networks this risks OOM and stalls.

A more scalable pattern is to iterate in chunks/stream:

-let all_peers = self.peer_storage_sql.all(features)?;
-for peer in all_peers {
+for peer in self.peer_storage_sql.peer_db.iter_peers(features)? {
     if let Some(updated) = (f)(peer) {
         peers_to_update.push(updated);
     }
 }

Implementing iter_peers as a Diesel cursor/LoadIter avoids loading gigabytes of peer data into RAM.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f117a5 and df35680.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (104)
  • applications/minotari_app_grpc/src/conversions/peer.rs (1 hunks)
  • applications/minotari_console_wallet/src/automation/commands.rs (2 hunks)
  • applications/minotari_console_wallet/src/grpc/wallet_grpc_server.rs (0 hunks)
  • applications/minotari_console_wallet/src/init/mod.rs (1 hunks)
  • applications/minotari_console_wallet/src/recovery.rs (0 hunks)
  • applications/minotari_console_wallet/src/ui/state/app_state.rs (1 hunks)
  • applications/minotari_node/src/commands/command/add_peer.rs (1 hunks)
  • applications/minotari_node/src/commands/command/ban_peer.rs (1 hunks)
  • applications/minotari_node/src/commands/command/get_peer.rs (1 hunks)
  • applications/minotari_node/src/commands/command/list_connections.rs (0 hunks)
  • applications/minotari_node/src/commands/command/list_peers.rs (2 hunks)
  • applications/minotari_node/src/commands/command/mod.rs (2 hunks)
  • applications/minotari_node/src/commands/command/reset_offline_peers.rs (2 hunks)
  • applications/minotari_node/src/commands/command/test_peer_liveness.rs (2 hunks)
  • applications/minotari_node/src/commands/command/unban_all_peers.rs (1 hunks)
  • applications/minotari_node/src/grpc/base_node_grpc_server.rs (1 hunks)
  • base_layer/common_types/src/tari_address/dual_address.rs (1 hunks)
  • base_layer/contacts/src/chat_client/src/networking.rs (1 hunks)
  • base_layer/core/src/base_node/state_machine_service/states/listening.rs (2 hunks)
  • base_layer/core/src/base_node/sync/rpc/tests.rs (6 hunks)
  • base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (1 hunks)
  • base_layer/core/src/chain_storage/lmdb_db/lmdb_tree_writer.rs (4 hunks)
  • base_layer/core/src/mempool/rpc/test.rs (7 hunks)
  • base_layer/core/src/test_helpers/mod.rs (2 hunks)
  • base_layer/core/src/validation/test.rs (1 hunks)
  • base_layer/core/tests/helpers/sync.rs (1 hunks)
  • base_layer/core/tests/tests/node_state_machine.rs (1 hunks)
  • base_layer/p2p/src/config.rs (1 hunks)
  • base_layer/p2p/src/initialization.rs (4 hunks)
  • base_layer/p2p/src/services/liveness/service.rs (4 hunks)
  • base_layer/wallet/src/storage/sqlite_db/wallet.rs (1 hunks)
  • base_layer/wallet/src/wallet.rs (3 hunks)
  • base_layer/wallet_ffi/src/error.rs (2 hunks)
  • base_layer/wallet_ffi/src/lib.rs (5 hunks)
  • common/config/presets/c_base_node_c.toml (1 hunks)
  • common/config/presets/d_console_wallet.toml (1 hunks)
  • common_sqlite/src/connection.rs (1 hunks)
  • common_sqlite/src/error.rs (2 hunks)
  • comms/core/Cargo.toml (3 hunks)
  • comms/core/examples/stress/error.rs (2 hunks)
  • comms/core/examples/stress/node.rs (2 hunks)
  • comms/core/examples/stress/service.rs (1 hunks)
  • comms/core/examples/tor.rs (3 hunks)
  • comms/core/src/builder/mod.rs (3 hunks)
  • comms/core/src/builder/tests.rs (4 hunks)
  • comms/core/src/connection_manager/common.rs (3 hunks)
  • comms/core/src/connection_manager/dialer.rs (1 hunks)
  • comms/core/src/connection_manager/listener.rs (1 hunks)
  • comms/core/src/connection_manager/manager.rs (1 hunks)
  • comms/core/src/connection_manager/tests/listener_dialer.rs (1 hunks)
  • comms/core/src/connection_manager/tests/manager.rs (2 hunks)
  • comms/core/src/connectivity/manager.rs (10 hunks)
  • comms/core/src/connectivity/test.rs (7 hunks)
  • comms/core/src/net_address/multiaddr_with_stats.rs (1 hunks)
  • comms/core/src/peer_manager/error.rs (4 hunks)
  • comms/core/src/peer_manager/manager.rs (13 hunks)
  • comms/core/src/peer_manager/mod.rs (2 hunks)
  • comms/core/src/peer_manager/peer.rs (2 hunks)
  • comms/core/src/peer_manager/peer_features.rs (2 hunks)
  • comms/core/src/peer_manager/peer_id.rs (1 hunks)
  • comms/core/src/peer_manager/peer_identity_claim.rs (1 hunks)
  • comms/core/src/peer_manager/peer_query.rs (1 hunks)
  • comms/core/src/peer_manager/peer_storage_sql.rs (18 hunks)
  • comms/core/src/peer_manager/storage/database.rs (1 hunks)
  • comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/down.sql (1 hunks)
  • comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql (1 hunks)
  • comms/core/src/peer_manager/storage/mod.rs (1 hunks)
  • comms/core/src/peer_manager/storage/schema.rs (1 hunks)
  • comms/core/src/peer_manager/wrapper.rs (0 hunks)
  • comms/core/src/protocol/identity.rs (3 hunks)
  • comms/core/src/protocol/messaging/test.rs (6 hunks)
  • comms/core/src/protocol/rpc/context.rs (0 hunks)
  • comms/core/src/protocol/rpc/test/comms_integration.rs (3 hunks)
  • comms/core/src/protocol/rpc/test/smoke.rs (3 hunks)
  • comms/core/src/test_utils/peer_manager.rs (1 hunks)
  • comms/core/src/types.rs (2 hunks)
  • comms/core/tests/tests/helpers.rs (1 hunks)
  • comms/core/tests/tests/rpc.rs (7 hunks)
  • comms/core/tests/tests/rpc_stress.rs (1 hunks)
  • comms/core/tests/tests/substream_stress.rs (1 hunks)
  • comms/dht/Cargo.toml (1 hunks)
  • comms/dht/examples/memory_net/utilities.rs (4 hunks)
  • comms/dht/examples/memorynet.rs (1 hunks)
  • comms/dht/examples/propagation/node.rs (4 hunks)
  • comms/dht/examples/propagation_stress.rs (1 hunks)
  • comms/dht/src/actor.rs (9 hunks)
  • comms/dht/src/connectivity/mod.rs (5 hunks)
  • comms/dht/src/connectivity/test.rs (3 hunks)
  • comms/dht/src/dht.rs (3 hunks)
  • comms/dht/src/discovery/service.rs (2 hunks)
  • comms/dht/src/inbound/deserialize.rs (1 hunks)
  • comms/dht/src/inbound/dht_handler/task.rs (4 hunks)
  • comms/dht/src/network_discovery/discovering.rs (3 hunks)
  • comms/dht/src/network_discovery/on_connect.rs (1 hunks)
  • comms/dht/src/network_discovery/ready.rs (5 hunks)
  • comms/dht/src/network_discovery/test.rs (3 hunks)
  • comms/dht/src/proto/mod.rs (4 hunks)
  • comms/dht/src/rpc/peer_info.rs (3 hunks)
  • comms/dht/src/rpc/service.rs (2 hunks)
  • comms/dht/src/rpc/test.rs (6 hunks)
  • comms/dht/src/test_utils/makers.rs (2 hunks)
  • comms/dht/tests/attacks.rs (2 hunks)
  • comms/dht/tests/dht.rs (9 hunks)
  • comms/dht/tests/harness.rs (4 hunks)
💤 Files with no reviewable changes (5)
  • applications/minotari_console_wallet/src/recovery.rs
  • applications/minotari_node/src/commands/command/list_connections.rs
  • applications/minotari_console_wallet/src/grpc/wallet_grpc_server.rs
  • comms/core/src/protocol/rpc/context.rs
  • comms/core/src/peer_manager/wrapper.rs
🧰 Additional context used
🧬 Code Graph Analysis (16)
comms/dht/examples/memorynet.rs (1)
comms/core/src/peer_manager/manager.rs (1)
  • count (70-72)
comms/dht/src/network_discovery/ready.rs (1)
comms/dht/src/network_discovery/discovering.rs (1)
  • peer_manager (285-287)
base_layer/core/src/chain_storage/lmdb_db/lmdb_tree_writer.rs (1)
base_layer/core/src/chain_storage/lmdb_db/lmdb.rs (2)
  • lmdb_delete (169-182)
  • lmdb_fetch_matching_after (352-366)
comms/dht/src/test_utils/makers.rs (3)
comms/dht/src/actor.rs (1)
  • db_connection (947-951)
common_sqlite/src/connection.rs (1)
  • connect_memory_and_migrate (133-138)
comms/core/src/peer_manager/manager.rs (1)
  • new (56-63)
comms/core/tests/tests/helpers.rs (2)
comms/dht/src/actor.rs (1)
  • db_connection (947-951)
common_sqlite/src/connection.rs (1)
  • connect_memory_and_migrate (133-138)
comms/dht/tests/attacks.rs (3)
comms/core/src/peer_manager/manager.rs (1)
  • exists (143-145)
comms/dht/tests/harness.rs (1)
  • node_identity (66-68)
comms/core/src/peer_manager/node_identity.rs (1)
  • public_key (187-189)
comms/core/src/connectivity/test.rs (1)
comms/dht/src/connectivity/test.rs (3)
  • node_identities (141-141)
  • node_identities (180-180)
  • node_identities (270-275)
comms/core/src/peer_manager/mod.rs (2)
comms/core/src/peer_manager/peer_id.rs (1)
  • generate_peer_id_as_i64 (34-36)
comms/core/src/peer_manager/manager.rs (1)
  • create_test_peer (407-448)
comms/core/src/protocol/messaging/test.rs (6)
comms/core/src/builder/tests.rs (2)
  • iter (66-69)
  • random_name (64-70)
comms/core/src/peer_manager/manager.rs (4)
  • iter (465-468)
  • random_name (463-469)
  • create_peer_manager (471-475)
  • new (56-63)
comms/core/src/test_utils/peer_manager.rs (2)
  • iter (65-68)
  • random_name (63-69)
base_layer/core/src/test_helpers/mod.rs (1)
  • create_peer_manager (194-198)
comms/dht/src/actor.rs (1)
  • db_connection (947-951)
common_sqlite/src/connection.rs (1)
  • connect_memory_and_migrate (133-138)
base_layer/wallet/src/storage/sqlite_db/wallet.rs (1)
comms/core/src/peer_manager/peer_features.rs (1)
  • from_bits_u32_truncate (70-72)
comms/core/src/protocol/rpc/test/comms_integration.rs (3)
comms/dht/src/actor.rs (1)
  • db_connection (947-951)
common_sqlite/src/connection.rs (1)
  • connect_memory_and_migrate (133-138)
comms/core/src/peer_manager/storage/database.rs (1)
  • peers_db (2152-2157)
comms/dht/examples/memory_net/utilities.rs (2)
comms/dht/tests/harness.rs (1)
  • create_peer_storage (99-103)
common_sqlite/src/connection.rs (1)
  • connect_memory_and_migrate (133-138)
comms/dht/src/connectivity/mod.rs (1)
comms/dht/src/connectivity/test.rs (1)
  • peer_manager (106-119)
comms/dht/tests/harness.rs (5)
comms/core/src/peer_manager/node_identity.rs (1)
  • random (109-113)
comms/dht/examples/memory_net/utilities.rs (2)
  • create_peer_storage (636-640)
  • new (500-532)
common_sqlite/src/connection.rs (2)
  • connect_memory_and_migrate (133-138)
  • new (148-150)
comms/core/src/peer_manager/peer.rs (1)
  • new (103-127)
comms/dht/src/storage/database.rs (1)
  • new (44-46)
applications/minotari_node/src/commands/command/get_peer.rs (3)
comms/dht/src/network_discovery/discovering.rs (1)
  • peer_manager (285-287)
comms/core/src/builder/comms_node.rs (2)
  • peer_manager (247-249)
  • peer_manager (305-307)
applications/minotari_app_utilities/src/utilities.rs (1)
  • parse_emoji_id_or_public_key (49-54)
comms/core/src/peer_manager/storage/schema.rs (2)
comms/core/src/peer_manager/peer.rs (1)
  • banned_until (286-288)
comms/core/src/net_address/multiaddr_with_stats.rs (8)
  • connection_attempts (307-309)
  • avg_initial_dial_time (311-313)
  • initial_dial_time_sample_count (315-317)
  • avg_latency (319-321)
  • latency_sample_count (323-325)
  • last_failed_reason (331-333)
  • quality_score (335-337)
  • source (299-301)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: cargo check with stable
  • GitHub Check: ci
  • GitHub Check: Cucumber tests / FFI
  • GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (146)
comms/dht/Cargo.toml (1)

14-14: Approved: Add tari_common_sqlite workspace dependency

The new SQLite-based common crate is correctly introduced alongside other workspace crates, aligning with the overall migration from LMDB to SQLite for peer storage.

applications/minotari_node/src/commands/command/add_peer.rs (1)

63-63: API call updated to use synchronous peer manager method

The removal of .await reflects the migration from async LMDB-based peer storage to synchronous SQLite-based implementation. This change is consistent with the broader refactoring of the peer manager API.

comms/dht/examples/propagation_stress.rs (1)

84-84: Path handling adapted for SQLite database

Changed from passing a reference to the temporary directory handle to passing a reference to the owned path, which is required for the new SQLite-based implementation.

base_layer/p2p/src/config.rs (1)

123-123: Documentation updated to reflect SQLite implementation

Documentation comment correctly updated to reference SQLite data files instead of LMDB data files, maintaining consistency with the implementation change.

base_layer/core/tests/tests/node_state_machine.rs (1)

311-311: Test updated to use synchronous peer manager API

Removed .await from the peer manager's add_peer call to match the new synchronous API. This ensures test compatibility with the SQLite-based peer storage implementation.

applications/minotari_node/src/commands/command/ban_peer.rs (1)

90-90: API change: Converted to synchronous call

The unban_peer method call has been modified to be synchronous (removing .await), which aligns with the PR's migration from asynchronous LMDB-based peer storage to synchronous SQLite-backed implementation.

comms/core/src/peer_manager/storage/mod.rs (1)

23-24: Module structure looks good

This new module file properly establishes the structure for the SQLite-based peer database implementation, exposing the necessary submodules for database operations and schema definitions.

applications/minotari_app_grpc/src/conversions/peer.rs (1)

53-53: Type conversion improvement

Adding .into() ensures proper type conversion when transforming the peer features bits to the type expected by the gRPC message. This change supports the migration from 32-bit to 8-bit feature flags mentioned in the PR summary.

comms/core/tests/tests/rpc_stress.rs (1)

95-95: API change: Converted to synchronous call

The add_peer method call has been modified to be synchronous (removing .await), consistent with the PR's conversion of peer manager methods from asynchronous to synchronous. This change properly aligns with the broader refactoring of the peer storage backend.

base_layer/core/src/validation/test.rs (1)

232-232: Clean code improvement: Simplified type conversion.

Converting directly with .into() instead of using try_into().expect() eliminates unnecessary panic handling while maintaining the same functionality. This makes the code more readable and aligns with the broader refactoring pattern.

comms/dht/examples/memorynet.rs (1)

202-202: Properly updated to use synchronous peer manager API.

This change correctly converts the previously asynchronous count() call to the new synchronous version, aligning with the broader database backend change from LMDB to SQLite. The implementation is now more direct without requiring await.

comms/dht/src/dht.rs (3)

421-421: Correctly updated test code to use synchronous peer manager API.

This change properly converts the previously asynchronous add_peer() call to the new synchronous version, with proper error handling via unwrap(). This is consistent with the SQLite implementation replacing the previous async LMDB implementation.


474-474: Correctly updated test code to use synchronous peer manager API.

Properly updated from asynchronous to synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.


528-528: Correctly updated test code to use synchronous peer manager API.

Properly updated from asynchronous to synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.

comms/dht/src/network_discovery/test.rs (3)

70-70: Correctly updated to use synchronous peer manager API in test setup.

This change properly converts the previously asynchronous add_peer() call to the new synchronous version, with proper error handling via unwrap(). This is consistent with the SQLite implementation replacing the previous async LMDB implementation.


131-131: Correctly updated test code to use synchronous peer manager API.

Properly updated from asynchronous to synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.


212-212: Correctly updated test code to use synchronous peer manager API.

Properly updated from asynchronous to synchronous call with error handling, consistent with the database backend change from LMDB to SQLite.

base_layer/common_types/src/tari_address/dual_address.rs (2)

640-644: Improved error messages in test panic

The updated panic message now provides explicit details about why the validation is failing, including the actual Base58 length and the expected maximum size. This makes debugging test failures much easier.


647-651: Improved error messages in test panic

Similarly to the "too long" case, this panic message has been enhanced to provide clear information about the actual Base58 length and the expected minimum size, improving test diagnostics.

comms/core/src/connection_manager/listener.rs (1)

439-439: Removed .await from add_peer call

This change aligns with the peer database migration from LMDB to SQLite, converting the peer manager API from asynchronous to synchronous. The add_peer method now returns a direct result rather than a future that needs to be awaited.

comms/core/src/connection_manager/manager.rs (1)

504-504: Converted find_by_node_id from async to sync

This change is part of the peer manager API refactoring from asynchronous to synchronous operations, removing the need to await the result. This matches the broader architectural change from LMDB (async) to SQLite (sync) for peer storage.

comms/core/src/protocol/identity.rs (3)

70-70: Explicit feature bit conversion to u32

Added explicit conversion from peer features bits to u32. This change supports the backend refactor where peer features are now stored as u8 internally but converted to u32 for external representation in protocol messages.


268-268: Updated test assertion for feature bits

Changed the test assertion to explicitly convert peer features to u32 when comparing, matching the implementation change in the identity message construction.


278-278: Updated test assertion for feature bits

Similar to the previous change, this test assertion has been updated to explicitly convert peer features to u32, ensuring consistency with the implementation.

applications/minotari_node/src/commands/command/mod.rs (2)

67-76: Removed PeerQuery import in response to peer manager API refactoring

The change correctly updates the import list to remove the PeerQuery import that is no longer needed due to the peer manager refactoring from an asynchronous LMDB-based implementation to a synchronous SQLite-based one.


320-322: Simplified peer retrieval by using direct synchronous method

The implementation has been successfully refactored to use the new synchronous peer manager API. This change eliminates the need for a query builder pattern and async/await syntax, resulting in simpler and more direct code.

comms/dht/src/rpc/test.rs (6)

87-87: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This aligns with the broader refactoring of the peer storage system from LMDB to SQLite.


130-130: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This change is consistent with the peer storage backend migration to SQLite.


159-159: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This is part of the system-wide peer manager refactoring.


230-230: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This modification is consistent with the other changes to the peer manager usage throughout the codebase.


268-268: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This follows the same pattern as other changes in this file.


306-306: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This ensures test compatibility with the refactored peer manager implementation.

base_layer/wallet_ffi/src/error.rs (2)

29-29: Updated import to include PeerManagerError

The import has been correctly modified to include PeerManagerError from the peer_manager module, which is needed for the new error conversion implementation.


549-557: Added PeerManagerError conversion to support SQLite peer database

This new implementation of From<PeerManagerError> for LibWalletError properly handles error propagation from the refactored peer manager to the wallet FFI layer. This ensures that errors from the new SQLite-based peer storage can be properly communicated back to FFI clients.

The implementation logs the error and creates a LibWalletError with code 459, which is appropriate for this category of error.

comms/core/src/protocol/rpc/test/smoke.rs (3)

152-152: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This change ensures compatibility with the refactored peer manager that now uses a synchronous SQLite-based backend.


502-502: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This change is consistent with the new peer manager implementation pattern.


551-551: Switched to synchronous peer manager API

The peer manager API call has been correctly converted from asynchronous to synchronous, removing the .await. This modification aligns with the broader peer storage refactoring from LMDB to SQLite.

comms/core/examples/stress/service.rs (1)

227-227: Method signature updated to match synchronous peer manager API.

The .await has been removed from the add_peer method call, which is consistent with the migration from an asynchronous LMDB-based peer storage to a synchronous SQLite-based implementation as mentioned in the PR objectives.

base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs (1)

1851-1851: Passing WriteTransaction by value instead of by reference.

The transaction value is now being passed directly to LmdbTreeWriter::new instead of as a reference. This is consistent with similar changes in the implementation of LMDB helper functions as mentioned in the AI summary.

comms/dht/tests/dht.rs (8)

75-75: Methods updated to match synchronous peer manager API.

The .await calls have been removed from the exists method calls on the peer manager, aligning with the migration to a synchronous SQLite-based peer storage as indicated in the PR objectives.

Also applies to: 81-81


139-139: Method updated to match synchronous peer manager API.

The .await has been removed from the add_peer method call, consistent with the conversion of peer manager methods from asynchronous to synchronous operations.


168-172: Methods updated to match synchronous peer manager API.

Multiple .await calls have been removed from the exists method calls, reflecting the broader refactoring to synchronous peer manager methods across the codebase.


216-216: Method updated to match synchronous peer manager API.

The .await has been removed from the add_peer method call on the node_D peer manager, matching the new synchronous API.


245-249: Methods updated to match synchronous peer manager API.

Multiple .await calls have been removed from the exists method calls across different peer managers, consistent with the conversion to synchronous operations.


286-286: Method updated to match synchronous peer manager API.

The .await has been removed from the add_peer method call, aligning with the conversion to a synchronous SQLite-backed peer manager implementation.


583-585: Methods updated to match synchronous peer manager API.

Multiple .await calls have been removed from consecutive add_peer method calls across different peer managers, reflecting the consistent migration to synchronous operations.


691-691: Methods updated to match synchronous peer manager API.

The .await has been removed from the add_peer method calls, ensuring consistency with the new synchronous peer manager interface across all usages in the test file.

Also applies to: 796-796

comms/core/examples/stress/error.rs (1)

24-24: Added SQLite storage error handling capability.

Added a new error variant StorageError to the Error enum and imported the corresponding type from the tari_common_sqlite module. This addition properly supports error propagation from the new SQLite-based peer storage backend, ensuring that storage errors are properly captured and handled in the stress test example.

Also applies to: 74-75

applications/minotari_node/src/commands/command/get_peer.rs (2)

67-67: Method signature changed from async to sync

The .await keyword has been removed from the call to find_all_starts_with, reflecting the peer manager's migration from an asynchronous to a synchronous API as part of the switch from LMDB to SQLite.


73-75: Method signature changed from async to sync

The .await keyword has been removed from the call to find_by_public_key, consistent with the peer manager's API migration from asynchronous to synchronous operations using the new SQLite backend.

comms/core/src/connection_manager/tests/listener_dialer.rs (1)

213-213: Method signature changed from async to sync

The .await keyword has been removed from the call to add_peer in this test, reflecting the change from asynchronous to synchronous peer manager operations. This correctly updates the test code to use the new synchronous API.

comms/core/src/peer_manager/peer_identity_claim.rs (1)

70-72: Updated to handle reduced feature bits size

Changed from PeerFeatures::from_bits to PeerFeatures::from_bits_u32_truncate to accommodate the migration from 32-bit to 8-bit feature flags. This method appropriately truncates the higher bits while preserving the existing error handling for invalid bits.

comms/dht/src/rpc/peer_info.rs (3)

77-78: Updated to handle reduced feature bits size

Changed to use PeerFeatures::from_bits_u32_truncate instead of from_bits to properly handle the migration from 32-bit to 8-bit feature flags in the DiscoveryMessage conversion implementation.


108-109: Updated to handle reduced feature bits size

Changed to use PeerFeatures::from_bits_u32_truncate instead of from_bits to properly handle the migration from 32-bit to 8-bit feature flags in the DiscoveryResponseMessage conversion implementation.


140-141: Updated to handle reduced feature bits size

Changed to use PeerFeatures::from_bits_u32_truncate instead of from_bits to properly handle the migration from 32-bit to 8-bit feature flags in the JoinMessage conversion implementation.

comms/core/tests/tests/rpc.rs (2)

101-101: Properly adjusted from async to sync call

The method add_peer has been converted from asynchronous to synchronous, correctly removing the .await keyword while maintaining error handling with unwrap().


166-166: Multiple peer manager calls properly converted to synchronous pattern

All instances of add_peer() have been consistently converted from asynchronous to synchronous calls by removing the .await keyword, aligning with the new SQLite-based peer storage implementation.

Also applies to: 215-215, 271-271, 336-336, 409-409, 468-468

comms/dht/src/inbound/dht_handler/task.rs (2)

219-219: Peer manager methods correctly converted to synchronous

Multiple peer manager methods (find_by_public_key, add_peer, find_by_node_id) have been properly converted from asynchronous to synchronous calls by removing the .await keyword. The error handling remains appropriate for the synchronous implementation.

Also applies to: 233-233, 400-400, 402-402, 403-403


435-435: Added type conversion for peer features

Added .into() conversion for peer feature bits to ensure proper type conversion in the message structure. This change is part of standardizing feature bit handling across the codebase, where the new implementation uses different bit width for features.

comms/dht/src/connectivity/test.rs (3)

28-28: Added necessary constant import

Added import for STALE_PEER_THRESHOLD_DURATION which is now required by the updated peer querying method.


62-62: Peer manager add_peer call correctly converted to synchronous

The add_peer method has been properly converted from asynchronous to synchronous by removing the .await keyword while maintaining error handling with unwrap().


107-115: Replaced peer querying method with enhanced version

The method to retrieve closest peers has been updated from the async closest_peers to the synchronous closest_n_active_peers with additional parameters for filtering:

  • Added filtering by peer features (PeerFeatures::COMMUNICATION_NODE)
  • Added staleness threshold (STALE_PEER_THRESHOLD_DURATION)
  • Added flags for active peers (true)
  • Added optional parameters for enhanced querying

This represents a significant API improvement by consolidating multiple filtering concerns into a single method call.
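The consolidated query described above can be sketched as follows. This is a minimal std-only illustration, not the real tari API: the `Peer` struct, the `COMMUNICATION_NODE` bit value, and the parameter order are all assumptions made for the example.

```rust
use std::time::Duration;

// Hypothetical stand-ins for the real tari types.
#[derive(Clone)]
struct Peer {
    features: u8,        // assumed: COMMUNICATION_NODE = 0b01
    last_seen_secs: u64, // seconds since last contact
    banned: bool,
    distance: u64,       // stand-in for the XOR-metric distance
}

const COMMUNICATION_NODE: u8 = 0b01;

// Sketch of the consolidated query: one call filters by feature bits,
// staleness, and ban status, then returns the n closest matches.
fn closest_n_active_peers(
    peers: &[Peer],
    n: usize,
    required_features: u8,
    stale_threshold: Option<Duration>,
    exclude_banned: bool,
) -> Vec<Peer> {
    let mut matches: Vec<Peer> = peers
        .iter()
        .filter(|p| p.features & required_features == required_features)
        .filter(|p| match stale_threshold {
            Some(t) => Duration::from_secs(p.last_seen_secs) <= t,
            None => true,
        })
        .filter(|p| !exclude_banned || !p.banned)
        .cloned()
        .collect();
    matches.sort_by_key(|p| p.distance);
    matches.truncate(n);
    matches
}
```

Folding the feature, staleness, and ban filters into one call is what lets a single synchronous SQL query replace the old multi-step async query builder.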

comms/dht/src/network_discovery/on_connect.rs (1)

189-189: Peer manager calls correctly converted to synchronous

The peer manager methods find_by_public_key and add_peer have been properly converted from asynchronous to synchronous calls by removing the .await keyword. Error handling remains consistent with the rest of the codebase.

Also applies to: 192-192

comms/dht/src/discovery/service.rs (3)

266-266: Changed from async to sync method call.

This change is part of the broader refactoring where peer manager methods have been converted from asynchronous to synchronous as part of the migration from LMDB to SQLite-based storage.


270-270: Changed from async to sync method call.

This change correctly removes the .await on the add_peer method call, consistent with the peer manager API now being synchronous rather than asynchronous.


350-350:


✅ Verification successful

Added type conversion for peer features.

The addition of .into() handles the conversion of peer feature bits, likely related to the reduction of peer features bitflags from 32 bits to 8 bits as mentioned in the PR summary.


🏁 Script executed:

#!/bin/bash
# Check how peer features are handled elsewhere in the codebase to confirm consistency

echo "Checking feature bits conversion in other files:"
rg "features\(\).bits\(\)" --type rust

Length of output: 896


Confirmed consistent type conversion for peer_features
The use of .into() on features().bits() here matches the other DHT modules and tests, ensuring the bitflags (now 8 bits) map correctly to the message field:

  • comms/dht/src/proto/mod.rs
  • comms/dht/src/inbound/dht_handler/task.rs
  • comms/dht/tests/attacks.rs

The core identity protocol still uses u32::from(node_identity.features().bits()), so this change is fully in line with existing patterns. No further action required.

applications/minotari_node/src/grpc/base_node_grpc_server.rs (1)

771-771: Updated method signature to support optional feature filtering.

The peer manager's all() method now takes an optional parameter to filter peers by features. The None value here means "return all peers regardless of features." This change aligns with the removal of the peer query system, replacing it with direct filtering methods.

common_sqlite/src/connection.rs (1)

132-138: Nice convenience helper – LGTM

The helper neatly reduces boilerplate in tests and one-off in-memory usages.
No correctness concerns spotted.

comms/core/examples/stress/node.rs (1)

23-29: Migration to SQLite peer database looks clean and well-structured.

The changes have successfully migrated from the LMDB-based peer database to SQLite. The code now properly creates a SQLite database connection, applies migrations, and initializes a PeerDatabaseSql instance.

Nice touch using the original database path with .db extension to maintain file location consistency.

Also applies to: 32-33, 37-38, 73-75

comms/core/src/builder/mod.rs (2)

74-77: Good update to example code reflecting SQLite implementation.

The example code in the documentation has been appropriately updated to use SQLite instead of LMDB, which maintains consistency between documentation and implementation.

Also applies to: 90-91


315-316: Correctly simplified PeerManager initialization.

The PeerManager::new call has been simplified to remove the minimum peer version parameter, aligning with the synchronous SQLite-based implementation. This streamlined API makes the code more maintainable.

base_layer/core/src/chain_storage/lmdb_db/lmdb_tree_writer.rs (5)

57-58: Correct removal of unnecessary reference.

The function signature for lmdb_delete_keys_starting_with expects the transaction by value, not by reference, so this change is appropriate.


59-60: Correct removal of unnecessary reference.

Same improvement as above, matching the lmdb_delete_keys_starting_with function's expected parameter type.


70-77: Correct removal of unnecessary reference.

The lmdb_delete function expects the transaction by value rather than by reference.


102-103: Correct removal of unnecessary reference.

The lmdb_fetch_matching_after function expects the transaction parameter by value, so this change is appropriate.


111-111: Improved code readability with idiomatic Rust pattern.

Changed from .last().map(...).flatten() to the more concise and idiomatic .last().and_then(...), which performs the same operation more elegantly.
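A minimal standalone comparison of the two forms (the function names here are illustrative, not from the PR):

```rust
// Equivalent results; `and_then` avoids producing a nested Option
// that then has to be flattened.
fn last_parsed_flatten(items: &[&str]) -> Option<i32> {
    items.iter().last().map(|s| s.parse::<i32>().ok()).flatten()
}

fn last_parsed_and_then(items: &[&str]) -> Option<i32> {
    items.iter().last().and_then(|s| s.parse::<i32>().ok())
}
```

`map(..).flatten()` yields an intermediate `Option<Option<T>>`; `and_then` collapses that in one step, which is why clippy's `map_flatten` lint suggests exactly this rewrite.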

comms/dht/src/inbound/deserialize.rs (1)

153-153: Correctly updated to synchronous PeerManager API.

Removed the .await call since the peer manager methods are now synchronous in the SQLite implementation. This is consistent with the overall migration from asynchronous LMDB operations to synchronous SQLite operations.

comms/core/src/types.rs (2)

35-35: Updated import to use the new SQLite peer database implementation.

The import is now using the new PeerDatabaseSql directly from the peer_manager::database module instead of the previous Peer/PeerId imports. This reflects the architectural shift from LMDB to SQLite for peer storage.


55-55: Type alias now unconditionally uses SQLite database implementation.

The CommsDatabase type alias now unconditionally refers to PeerDatabaseSql instead of conditionally selecting between HashmapDatabase for testing and LMDBWrapper for production. This simplifies the codebase by using a consistent database implementation across all environments.

applications/minotari_console_wallet/src/automation/commands.rs (1)

2605-2606: Removed .await call for synchronous peer manager method.

The .await call has been removed from get_seed_peers() as part of converting the peer manager methods from asynchronous to synchronous. This simplifies the code by eliminating unnecessary Future handling while maintaining the same functionality.

comms/dht/tests/attacks.rs (2)

105-105: Removed .await call for synchronous peer manager exists method.

The .await call has been removed when checking if a peer exists, as the peer manager's exists method is now synchronous rather than asynchronous. This is part of the broader refactoring of the peer management system to use synchronous SQLite operations.


144-144: Added explicit conversion for peer features.

The peer features are now explicitly converted using .into() to ensure compatibility with the new storage format. This change reflects the reduction of peer features bitflags from 32 bits to 8 bits, requiring proper type conversion to maintain compatibility.

comms/core/src/connection_manager/tests/manager.rs (2)

154-155: Removed .await call for synchronous peer manager find_by_node_id method.

The .await call has been removed from find_by_node_id() as part of converting peer manager methods from asynchronous to synchronous. This simplifies the test code while maintaining the same functionality.


406-406: Removed .await call for synchronous peer manager add_peer method.

The .await call has been removed from add_peer() as this method is now synchronous rather than asynchronous. This is consistent with the broader refactoring of the peer management system to use synchronous SQLite operations instead of asynchronous LMDB operations.

comms/core/examples/tor.rs (4)

18-19: New dependency added for SQLite support.

This import brings in the SQLite connection handling capabilities needed for the new peer database implementation.


23-29: Updated peer manager imports to use the new SQLite database components.

The imports now specifically include database::{PeerDatabaseSql, MIGRATIONS} which are part of the new SQLite implementation, replacing the previous LMDB-based imports.


164-166: Migrated peer storage from LMDB to SQLite.

This code replaces the previous LMDB datastore creation with:

  1. Creating a SQLite database URL
  2. Connecting to the database and applying migrations
  3. Creating a PeerDatabaseSql instance

This implementation properly handles database creation and schema migration, which is necessary for the SQLite storage backend.


110-122: Peer creation syntax is unchanged but formatting is improved.

The add_peer call's parameters have been reformatted with better vertical alignment, but the functionality remains the same.

base_layer/wallet/src/wallet.rs (3)

419-419: Removed .await call to match new synchronous peer manager API.

This change reflects the migration from an asynchronous to a synchronous peer manager implementation. The find_by_public_key method now returns a direct result rather than a future.


432-432: Converted async add_peer call to synchronous.

Consistent with the new peer manager implementation, the .await has been removed as the method now operates synchronously.


465-465: Converted another async add_peer call to synchronous.

Another instance of removing .await from the add_peer method call, maintaining consistency with the synchronous peer manager implementation.

comms/core/src/connection_manager/dialer.rs (1)

276-281: Converted async peer manager call to synchronous.

This change adjusts the error handling for the now-synchronous add_peer method. The code now directly maps the error result rather than awaiting a future, while maintaining the same error handling logic.

base_layer/wallet/src/storage/sqlite_db/wallet.rs (1)

307-307: Updated peer features parsing to properly handle bit truncation.

Changed from PeerFeatures::from_bits to PeerFeatures::from_bits_u32_truncate, which safely handles the conversion from 32-bit to 8-bit representation by masking to the lowest 8 bits before creating the PeerFeatures instance. This change aligns with the reduction in peer feature bitflags from 32 bits to 8 bits elsewhere in the codebase.

base_layer/core/src/mempool/rpc/test.rs (8)

33-35: Function signature updated for SQLite peer storage

The setup function has been simplified to no longer return a TempDir and the create_peer_manager() function is now called without directory arguments. This aligns with the migration from LMDB (which required filesystem paths) to SQLite in-memory database for peer storage.


38-39: Return type updated for simplified peer manager

The return tuple now correctly omits the temporary directory, consistent with the SQLite-based peer manager implementation that doesn't require filesystem resources.


47-48: Test function updated to use simplified setup

The destructuring pattern has been updated to match the new return type of the setup function, removing the temporary directory variable.


71-72: Test function updated to use simplified setup

The destructuring pattern has been updated to match the new return type of the setup function, removing the temporary directory variable.


101-102: Test function updated to use simplified setup

The destructuring pattern has been updated to match the new return type of the setup function, removing the temporary directory variable.


123-124: Test function updated to use simplified setup

The destructuring pattern has been updated to match the new return type of the setup function, removing the temporary directory variable.


147-148: Test function updated to use simplified setup

The destructuring pattern has been updated to match the new return type of the setup function, removing the temporary directory variable.


174-175: Test function updated to use simplified setup

The destructuring pattern has been updated to match the new return type of the setup function, removing the temporary directory variable.

applications/minotari_node/src/commands/command/reset_offline_peers.rs (2)

26-27: Added PeerFeatures import for filtering

The PeerFeatures import is now required to filter peers by their features in the update_each method.


43-49: Updated to use synchronous peer manager API with feature filtering

The peer manager's update_each method is now called synchronously (without .await) and includes a filter to target only peers with the COMMUNICATION_NODE feature. This change:

  1. Aligns with the migration from asynchronous LMDB to synchronous SQLite-based peer storage
  2. Improves efficiency by only resetting connection attempts for communication nodes
  3. Uses the new API structure that supports feature-based filtering

The implementation correctly handles the peer update logic by:

  • Passing a mutable closure that resets connection attempts
  • Applying the appropriate feature filter
  • Capturing the number of updated peers for user feedback
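The update-with-filter shape described above can be sketched in a few lines of std-only Rust. The `Peer` fields, the feature constant, and the `update_each` signature are assumptions for illustration; the real method lives on the peer manager.

```rust
// Hypothetical miniature of `update_each`: apply a mutable closure to
// every peer matching a feature filter and report how many were touched.
struct Peer {
    features: u8, // assumed: COMMUNICATION_NODE = 0b01
    connection_attempts: u32,
}

const COMMUNICATION_NODE: u8 = 0b01;

fn update_each<F>(peers: &mut [Peer], required: u8, mut f: F) -> usize
where
    F: FnMut(&mut Peer),
{
    let mut updated = 0;
    for p in peers.iter_mut().filter(|p| p.features & required == required) {
        f(p); // e.g. reset connection attempts
        updated += 1;
    }
    updated
}
```

Returning the count is what lets the command report to the user how many peers were reset.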
comms/core/Cargo.toml (4)

13-14: Added SQLite workspace dependency

Added the tari_common_sqlite workspace dependency to support the new SQLite-based peer database implementation.


31-37: Added Diesel ORM with SQLite support

Added the Diesel ORM with necessary features:

  • sqlite: Core SQLite driver support
  • serde_json: JSON serialization/deserialization support
  • chrono: DateTime type support
  • numeric: Numeric type support

These dependencies are required for implementing the SQLite-based peer storage with proper type handling.


38-38: Added Diesel migrations support

Added the diesel_migrations crate to support SQL schema migrations for the peer database, which is essential for managing schema changes over time.


50-51: Moved serde_json to main dependencies

Moved serde_json from dev-dependencies to main dependencies as it's now required for the SQLite implementation with Diesel ORM.

base_layer/core/tests/helpers/sync.rs (1)

359-360: Updated to use synchronous peer manager API

The .await has been removed from the is_peer_banned call, reflecting that the peer manager now provides a synchronous API. This change is consistent with the migration from asynchronous LMDB-based to synchronous SQLite-based peer storage.

comms/core/src/connectivity/test.rs (7)

93-103: Successfully refactored from async to sync peer manager calls.

The function has been properly converted from an async function to a synchronous one, aligning with the broader architectural change from LMDB to SQLite for peer storage. The removal of .await from the add_peer call reflects the new synchronous peer manager API.


109-110: Consistent synchronous approach applied correctly.

The synchronous call to add_test_peers correctly reflects the synchronous peer manager API changes, removing the need for .await.


148-152: Properly implemented synchronous peer manager operations.

Both the call to add_test_peers and direct add_peer calls have been correctly refactored to use the synchronous API without .await.


280-281: Synchronous function call correctly implemented.

The call to add_test_peers has been properly converted to use the synchronous API.


309-310: Properly converted from async to sync peer lookup.

The find_by_node_id call has been correctly refactored to use the synchronous API without .await.


324-325: Correctly implemented synchronous peer manager operation.

The call to add_test_peers has been properly converted to use the synchronous API.


387-388: Successful refactoring to synchronous API.

The call to add_test_peers has been properly converted to use the synchronous API.

comms/core/src/builder/tests.rs (5)

23-28: Added necessary imports for SQLite and random name generation.

The changes correctly add imports for rand and tari_common_sqlite, which are needed for the new SQLite-based peer storage implementation.


46-50: Updated peer_manager imports to include database-related types.

The imports have been correctly updated to include database module with PeerDatabaseSql and MIGRATIONS which are needed for the SQLite implementation.


64-70: Well-implemented random name generation function.

The random_name function provides a clean way to generate random database names for SQLite in-memory connections, which is necessary for isolated test databases.


90-92: Properly implemented SQLite database initialization.

The code correctly creates an in-memory SQLite database with migrations applied, replacing the previous in-memory hashmap approach.


100-101: Successfully updated peer storage configuration.

The call to .with_peer_storage now correctly uses the SQLite-based peer database and passes None for the file lock, aligning with the new storage approach.

common_sqlite/src/error.rs (3)

25-27: Updated imports to support hex conversion errors.

The import statement has been correctly updated to include HexError from tari_utilities, which is needed for the new error handling capabilities.


66-73: Enhanced error handling with new SQLite-specific error variants.

Added appropriate new error variants to support SQLite operations:

  • HexError for hex encoding/decoding issues
  • JsonError for JSON conversion problems
  • TryFromIntError for integer conversion failures
  • PeerNotFoundError for when a requested peer doesn't exist

These additions are essential for robust error handling in the SQLite storage layer.


82-86: Added proper conversion from HexError to StorageError.

The implementation of From<HexError> for StorageError enables clean error propagation when hex conversion fails during database operations.

comms/dht/src/test_utils/makers.rs (2)

25-43: Updated imports to support SQLite database implementation.

The imports have been correctly updated to include SQLite connection types and remove unused LMDB-related dependencies. The peer manager imports have been reorganized for better readability.


241-245: Successfully refactored peer manager creation to use SQLite.

The build_peer_manager function now correctly:

  1. Creates an in-memory SQLite database with a random name
  2. Applies the required migrations
  3. Initializes the database with PeerDatabaseSql
  4. Creates a PeerManager with the SQLite database

This change aligns perfectly with the PR objective of replacing LMDB with SQLite for peer database storage.

comms/core/tests/tests/helpers.rs (2)

26-37: Updated imports for SQLite database integration

The imports have been properly updated to include SQLite-related dependencies required for the new database implementation. The change aligns with the PR objective of migrating from LMDB to SQLite for peer storage.


40-43: Implementation of in-memory SQLite database for tests

The create_peer_storage function has been correctly refactored to use an in-memory SQLite database instead of LMDB. This implementation properly:

  1. Creates an in-memory connection with a random name
  2. Applies required migrations
  3. Initializes the new PeerDatabaseSql instance

This change aligns with the PR objective and provides a clean test database implementation.

comms/core/src/protocol/rpc/test/comms_integration.rs (4)

44-50: Well-implemented random name generator for database isolation

Good implementation of the random_name function that generates unique 8-character alphanumeric names for database isolation. This ensures each test gets its own isolated in-memory database instance.


58-65: Proper SQLite database setup for test comms node

The implementation correctly sets up an in-memory SQLite database with migrations and uses it for peer storage. This approach provides test isolation and aligns with the broader SQLite migration.


73-80: Consistent database setup across comms instances

The second comms instance is properly configured with its own isolated in-memory SQLite database, maintaining test integrity and following the same pattern as the first instance.


83-86: Synchronous peer manager API usage

The code correctly uses the new synchronous add_peer method without .await, reflecting the API changes from asynchronous to synchronous in the peer manager implementation.

comms/core/src/peer_manager/error.rs (4)

26-28: Added necessary imports for SQLite and hex error handling

The imports have been properly updated to include SQLite storage errors and hex conversion errors, which are needed for the new error variants.


68-70: Added HexError variant

Added appropriate error variant for hex conversion errors, which is required for handling hex encoding/decoding issues in the SQLite implementation.


72-76: Implemented From trait for StorageError conversion

Properly implemented the From trait to convert SQLite storage errors into PeerManagerError, ensuring clean error propagation.


91-95: Implemented From trait for HexError conversion

Added a clean implementation of the From trait to convert hex conversion errors into PeerManagerError, completing the error handling for the new SQLite implementation.
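The error-propagation pattern these `From` impls enable can be shown with a self-contained miniature (the enum variants below are simplified stand-ins, not the real tari error types):

```rust
// A storage-layer error converted into a manager-level error via `From`,
// so the `?` operator propagates it without explicit map_err calls.
#[derive(Debug, PartialEq)]
enum StorageError {
    PeerNotFound,
}

#[derive(Debug, PartialEq)]
enum PeerManagerError {
    Storage(StorageError),
}

impl From<StorageError> for PeerManagerError {
    fn from(e: StorageError) -> Self {
        PeerManagerError::Storage(e)
    }
}

fn find_peer(exists: bool) -> Result<&'static str, StorageError> {
    if exists { Ok("peer") } else { Err(StorageError::PeerNotFound) }
}

// `?` invokes the `From` impl automatically because the error types differ.
fn lookup(exists: bool) -> Result<&'static str, PeerManagerError> {
    Ok(find_peer(exists)?)
}
```

This is why the review calls the conversions "clean error propagation": call sites in the peer manager need no manual wrapping.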

comms/dht/src/proto/mod.rs (3)

75-75: Updated peer_features serialization

Properly updated to use explicit conversion with bits().into() instead of direct assignment, ensuring compatibility with the new PeerFeatures implementation that changed from u32 to u8.


89-89: Updated PeerFeatures deserialization

Changed to use from_bits_u32_truncate instead of from_bits, which correctly handles the conversion from u32 to the new truncated u8 representation of peer features.


109-109: Consistent peer_features bit handling

The conversion is consistent with the updated approach across the codebase, using explicit conversion with bits().into() for proper serialization of peer features.

comms/dht/src/actor.rs (1)

795-808: Double-check allow_clients flag when selecting propagation peers

closest_n_active_peers(.., /*allow_clients =*/ true, …) will include wallet/client-only peers even though we already restrict by PeerFeatures::MESSAGE_PROPAGATION.

If the intent is to target only propagation-capable communication nodes, pass false here to avoid relaying messages to nodes that cannot forward them:

-    Some(STALE_PEER_THRESHOLD_DURATION),
-    true,
+    Some(STALE_PEER_THRESHOLD_DURATION),
+    false,

Please verify against the new signature of closest_n_active_peers; an incorrect flag silently relaxes the selection criteria.

base_layer/core/src/base_node/sync/rpc/tests.rs (1)

45-46: Unit-test isolation improvement acknowledged

Removing the tempfile dependency and switching to an in-memory SQLite peer database simplifies the test harness and speeds up execution. Nice cleanup!

comms/dht/src/network_discovery/ready.rs (1)

139-148: Parameter order & flag sanity for closest_n_active_peers

Ensure the new argument order matches the implementation: the added parameters (Some(STALE_PEER_THRESHOLD_DURATION), true, None) look correct, but if the last boolean is allow_clients it should probably be false (see earlier comment in actor.rs).

A mismatched order will compile yet silently degrade peer selection quality.

comms/core/src/peer_manager/peer_features.rs (1)

31-44: Type downsizing – double-check external storage and wire formats

PeerFeatures changed from u32 to u8. Ensure every DB column, protobuf integer representation, and ABI expecting 32-bit fields is migrated; otherwise silent truncation or undefined behaviour can occur.
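A minimal sketch of the hazard, in plain Python rather than the crate's Rust: masking a 32-bit feature bitfield down to 8 bits (what a `from_bits_truncate`-style conversion does) silently drops any high bits. `HYPOTHETICAL_FUTURE_FLAG` is an invented 9th bit used purely for illustration, not a flag from the codebase.

```python
COMMUNICATION_NODE = 0x01        # a low flag that fits in 8 bits
HYPOTHETICAL_FUTURE_FLAG = 0x100  # illustrative only: a 9th bit that does not fit

features_u32 = COMMUNICATION_NODE | HYPOTHETICAL_FUTURE_FLAG
features_u8 = features_u32 & 0xFF  # the effect of a truncating u32 -> u8 conversion

high_bits_lost = features_u8 != features_u32
```

Any persisted or wire value carrying such a bit would round-trip incorrectly, which is why every storage and serialization site needs checking.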

comms/core/src/peer_manager/mod.rs (1)

97-109: Re-exports look consistent – no action needed
The new exports (generate_peer_id_as_i64, create_test_peer, PeerStorageSql alias) preserve backward compatibility and are scoped sensibly (pub(crate) / cfg(test)). Nice job!

base_layer/wallet_ffi/src/lib.rs (1)

135-141:

❌ Incorrect review comment

NodeIdentity may be an unused import – please double-check and clean up if not required
The newly-added NodeIdentity import does not appear in any of the modified hunks. If the symbol is not referenced elsewhere in this module the build will emit an unused import warning (or even a hard error if #![deny(warnings)] is enabled).

You can verify with a quick search:

If the only hit is this import line, consider removing it:

```diff
-    peer_manager::NodeIdentity,
```

🏁 Script executed:

```bash
#!/bin/bash
# Look for occurrences of `NodeIdentity` in this file
rg -n 'NodeIdentity' base_layer/wallet_ffi/src/lib.rs
```

Length of output: 288


Ignore unused import suggestion: NodeIdentity is referenced in this file
The NodeIdentity import is used in multiple places later in base_layer/wallet_ffi/src/lib.rs and should be retained:

  • Line 6591: let node_identity = Arc::new(NodeIdentity::with_signature_unchecked(...));
  • Line 12941: NodeIdentity::random(&mut OsRng, get_next_memory_address(), PeerFeatures::COMMUNICATION_NODE);

Likely an incorrect or invalid review comment.

comms/core/src/protocol/messaging/test.rs (1)

146-147: Thread-safety of synchronous PeerManager in async tests

add_peer is now synchronous, but the same Arc<PeerManager> is accessed concurrently from multiple async tasks in these tests (e.g. the messaging protocol itself).
Please double-check that the internal PeerManager wraps the SQLite connection in a thread-safe synchronisation primitive (e.g. RwLock<Connection>). Otherwise simultaneous reads/writes will cause undefined behaviour or “use after close” panics.
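To make the concern concrete, here is the pattern being asked about, transposed to Python's stdlib `sqlite3` (not the project's Rust code): a single connection shared by many threads must be serialised behind a lock, which is the role a `Mutex`/`RwLock` would play inside `PeerManager`.

```python
import sqlite3
import threading

# One connection shared by many threads, guarded by a lock; sqlite3
# connections are not safe to use from multiple threads concurrently.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE peers (node_id TEXT)")
lock = threading.Lock()

def add_peer(node_id: str) -> None:
    with lock:  # serialise all access to the shared connection
        conn.execute("INSERT INTO peers (node_id) VALUES (?)", (node_id,))

threads = [threading.Thread(target=add_peer, args=(f"node-{i}",)) for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
count = conn.execute("SELECT COUNT(*) FROM peers").fetchone()[0]
```

Without the lock this program can corrupt state or crash; with it, every insert lands exactly once.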

comms/dht/tests/harness.rs (1)

179-181: Concurrent writes via synchronous API – confirm safety

comms.peer_manager().add_peer(peer).unwrap(); is invoked inside a loop that may run concurrently with other peer-manager calls from the Comms/DHT runtime. As the API is now synchronous:

  1. Verify that PeerManager internally serialises access (e.g. via Mutex/RwLock), or
  2. Expose an async interface again that offloads DB calls to a dedicated thread.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
base_layer/p2p/src/services/liveness/service.rs (1)

232-242: Properly using spawn_blocking for synchronous DB operations

Great implementation that properly offloads synchronous SQLite operations to a separate thread pool via spawn_blocking. This prevents the async event loop from being blocked during potentially slow disk I/O operations.
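The pattern being praised, sketched in Python under stated assumptions (stdlib `sqlite3` and `asyncio.to_thread` stand in for the Rust/tokio code; table and function names are illustrative): blocking database work runs on a worker thread so the event loop stays responsive, which is exactly what `tokio::task::spawn_blocking` provides in Rust.

```python
import asyncio
import sqlite3

def fetch_peer_count(db_path: str) -> int:
    # Plain blocking SQLite work: this is what must stay off the event loop.
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS peers (node_id TEXT)")
        return conn.execute("SELECT COUNT(*) FROM peers").fetchone()[0]
    finally:
        conn.close()

async def main() -> int:
    # asyncio.to_thread plays the role tokio::task::spawn_blocking plays in
    # Rust: the blocking call runs on a worker thread, awaited from the loop.
    return await asyncio.to_thread(fetch_peer_count, ":memory:")

count = asyncio.run(main())
```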

comms/core/src/peer_manager/storage/database.rs (3)

88-123: ⚠️ Potential issue

Avoid raw SQL string-building – vulnerable to injection and SQLite syntax errors

The code constructs SQL queries using string concatenation and format! with user-controlled fields, which is vulnerable to SQL injection attacks.

Use Diesel's query builder instead of raw SQL string manipulation:

```diff
- let mut peer_query = String::from(
-     "INSERT INTO peers (peer_id, public_key, node_id, flags, banned_until, banned_reason, features, \
-      supported_protocols, added_at, user_agent, metadata, deleted_at) VALUES ",
- );
-
- peer_query.push_str(
-     &peers_with_addresses
-         .iter()
-         .map(|p| {
-             format!(
-                 "({}, '{}', '{}', {}, {}, {}, {}, '{}', '{}', '{}', {}, {})",
-                 generate_peer_id_as_i64(),
-                 p.peer.public_key,
-                 p.peer.node_id,
-                 // ... other fields ...
-             )
-         })
-         .collect::<Vec<String>>()
-         .join(", "),
- );
-
- peer_query.push_str(" ON CONFLICT (node_id) DO NOTHING");
- conn.batch_execute(&peer_query)?;
+ use diesel::insert_into;
+
+ // Assign IDs first
+ let peers_to_insert: Vec<_> = peers_with_addresses.iter().map(|p| {
+     let mut peer = p.peer.clone();
+     peer.peer_id = generate_peer_id_as_i64();
+     peer
+ }).collect();
+
+ insert_into(peers::table)
+     .values(&peers_to_insert)
+     .on_conflict(peers::node_id)
+     .do_nothing()
+     .execute(conn)?;
```
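To illustrate why interpolating user-controlled fields into SQL is unsafe, here is the failure mode in Python's stdlib `sqlite3` (not the project's Rust/Diesel code; the table shape is illustrative): an embedded quote terminates the SQL literal early and corrupts the statement, while parameter binding treats the value purely as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE peers (node_id TEXT, user_agent TEXT)")

# A user-controlled field containing a single quote, as a user agent might.
hostile = "O'Reilly-agent/1.0"

# String interpolation: the embedded quote ends the literal early and the
# statement fails to parse (craftier inputs can change its meaning instead).
injection_broke_query = False
try:
    conn.execute(f"INSERT INTO peers (node_id, user_agent) VALUES ('abc', '{hostile}')")
except sqlite3.Error:
    injection_broke_query = True

# Parameter binding never interprets the value as SQL.
conn.execute("INSERT INTO peers (node_id, user_agent) VALUES (?, ?)", ("abc", hostile))
stored = conn.execute("SELECT user_agent FROM peers").fetchone()[0]
```

Diesel's query builder gives the Rust code the same guarantee as the `?` placeholders here.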

193-194: ⚠️ Potential issue

INSERT … VALUES with zero rows yields invalid SQL

If all multi-addresses already exist, the SQL query will be invalid as it would have no values after the VALUES keyword.

Ensure the query is only executed when there are addresses to insert:

```diff
- address_query.push_str(" ON CONFLICT (address) DO NOTHING");
- conn.batch_execute(&address_query)?;
+ if total_addresses_inserted > 0 {
+     address_query.push_str(" ON CONFLICT (address) DO NOTHING");
+     conn.batch_execute(&address_query)?;
+ }
```
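The zero-row case is easy to confirm against SQLite itself; a quick check with Python's stdlib `sqlite3` (illustrative table name, not the project's schema) shows a hand-built `INSERT` whose `VALUES` list ended up empty is rejected at parse time, whereas an empty batch through the driver is a harmless no-op:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE multi_addresses (address TEXT)")

# A hand-built INSERT with nothing after VALUES is not valid SQL.
empty_values_failed = False
try:
    conn.execute("INSERT INTO multi_addresses (address) VALUES ")
except sqlite3.Error:
    empty_values_failed = True

# executemany over an empty sequence, by contrast, simply does nothing.
conn.executemany("INSERT INTO multi_addresses (address) VALUES (?)", [])
count = conn.execute("SELECT COUNT(*) FROM multi_addresses").fetchone()[0]
```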

330-331: ⚠️ Potential issue

Hex-literal prefix (x'...') used for plain text columns – will fail

Using x'{}' denotes a BLOB literal in SQLite and expects an even-length hex string. For multi-address strings (which are not hex data), this will cause malformed hexadecimal literal errors.

Use proper string escaping instead:

```diff
- ids.push(format!("x'{}'", address_update.address));
+ ids.push(format!("'{}'", address_update.address.replace('\'', "''")));
```
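SQLite's behaviour here is easy to demonstrate with the stdlib `sqlite3` module (illustrative table, not the project's schema): an `x'...'` literal only accepts hex digits, so wrapping a multiaddress string in it is rejected at parse time, while parameter binding stores arbitrary text, embedded quotes included, with no manual escaping at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE addresses (address TEXT PRIMARY KEY)")

# x'...' is a BLOB literal: '/' is not a hex digit, so this cannot parse.
blob_literal_rejected = False
try:
    conn.execute("SELECT x'/ip4/127.0.0.1/tcp/18189'")
except sqlite3.Error:
    blob_literal_rejected = True

# Parameter binding handles quoting for free, even with an embedded quote.
addr = "/dns4/o'brien.example/tcp/18189"
conn.execute("INSERT INTO addresses (address) VALUES (?)", (addr,))
stored = conn.execute("SELECT address FROM addresses").fetchone()[0]
```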
🧹 Nitpick comments (5)
base_layer/p2p/src/services/liveness/service.rs (1)

236-238: Fix typo in comment

There's a small typo in the comment.

```diff
-// tokio::task::spawn_blocking is used herr to offload the blocking operation to a separate thread
+// tokio::task::spawn_blocking is used here to offload the blocking operation to a separate thread
```
base_layer/p2p/src/initialization.rs (1)

438-439: Consider synchronizing peer database filenames between test and production code

The test code uses "peers.db" while production code uses config.peer_database_name with a .db extension. Consider standardizing on one approach for better consistency.

comms/core/src/peer_manager/storage/database.rs (3)

1476-1478: Potential overflow without adequate handling

The duration_to_i64_infallible function uses unwrap_or(i64::MAX) which silently converts overflows to the maximum value without logging or alerting.

Consider adding logging for overflow cases:

```diff
 fn duration_to_i64_infallible(duration: Option<Duration>) -> Option<i64> {
-    duration.map(|d| d.as_millis().try_into().unwrap_or(i64::MAX))
+    duration.map(|d| {
+        match d.as_millis().try_into() {
+            Ok(value) => value,
+            Err(_) => {
+                log::warn!("Duration overflow: {:?} ms exceeds i64::MAX", d.as_millis());
+                i64::MAX
+            }
+        }
+    })
 }
```

62-63: Consider adding error handling for database connection failures

The function correctly creates a PeerManager with the PeerDatabaseSql backend, but when using in production, more context for database connection failures would be helpful.

Consider mapping database connection errors to provide more context:

```diff
- let db_connection = DbConnection::connect_and_migrate(&database_url, MIGRATIONS)?;
+ let db_connection = DbConnection::connect_and_migrate(&database_url, MIGRATIONS)
+     .map_err(|e| StorageError::ConnectionError(format!("Failed to connect to peer database at {:?}: {}", &database_url, e)))?;
```

1155-1156: Potential inconsistency in returned random peers

The code doesn't consider the case where there are fewer qualified peers than requested.

Consider adding a debug log when fewer peers than requested are returned:

```diff
 node_ids_hex.shuffle(&mut rand::thread_rng());
 node_ids_hex.truncate(n);
+
+if node_ids_hex.len() < n {
+    log::debug!("Requested {} random active peers, but only {} were available", n, node_ids_hex.len());
+}
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between df35680 and bca83fb.

📒 Files selected for processing (21)
  • applications/minotari_node/src/commands/command/unban_all_peers.rs (1 hunks)
  • base_layer/p2p/src/initialization.rs (4 hunks)
  • base_layer/p2p/src/services/liveness/error.rs (2 hunks)
  • base_layer/p2p/src/services/liveness/service.rs (4 hunks)
  • comms/core/examples/stress/node.rs (2 hunks)
  • comms/core/examples/tor.rs (3 hunks)
  • comms/core/src/connection_manager/tests/listener_dialer.rs (9 hunks)
  • comms/core/src/connection_manager/tests/manager.rs (10 hunks)
  • comms/core/src/connectivity/manager.rs (10 hunks)
  • comms/core/src/connectivity/test.rs (8 hunks)
  • comms/core/src/net_address/multiaddr_with_stats.rs (1 hunks)
  • comms/core/src/peer_manager/error.rs (4 hunks)
  • comms/core/src/peer_manager/peer_features.rs (2 hunks)
  • comms/core/src/peer_manager/peer_id.rs (1 hunks)
  • comms/core/src/peer_manager/storage/database.rs (1 hunks)
  • comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/down.sql (1 hunks)
  • comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql (1 hunks)
  • comms/core/src/peer_manager/storage/schema.rs (1 hunks)
  • comms/core/src/protocol/rpc/test/mock.rs (1 hunks)
  • comms/core/src/test_utils/peer_manager.rs (1 hunks)
  • comms/dht/examples/propagation/node.rs (4 hunks)
✅ Files skipped from review due to trivial changes (1)
  • comms/core/src/peer_manager/peer_features.rs
🚧 Files skipped from review as they are similar to previous changes (14)
  • comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/down.sql
  • comms/core/examples/stress/node.rs
  • comms/core/examples/tor.rs
  • comms/core/src/connection_manager/tests/manager.rs
  • comms/core/src/connection_manager/tests/listener_dialer.rs
  • comms/core/src/peer_manager/error.rs
  • comms/core/src/net_address/multiaddr_with_stats.rs
  • applications/minotari_node/src/commands/command/unban_all_peers.rs
  • comms/core/src/peer_manager/peer_id.rs
  • comms/core/src/connectivity/test.rs
  • comms/core/src/peer_manager/storage/schema.rs
  • comms/core/src/peer_manager/storage/migrations/2025-04-14-072200_initial/up.sql
  • comms/dht/examples/propagation/node.rs
  • comms/core/src/connectivity/manager.rs
🔇 Additional comments (12)
base_layer/p2p/src/services/liveness/error.rs (1)

32-32: Addition of JoinError aligns with the SQLite sync implementation

Good addition of the JoinError variant to handle potential failures from tokio task joins. This supports the pattern of offloading synchronous SQLite operations to blocking threads.

Also applies to: 64-65

comms/core/src/protocol/rpc/test/mock.rs (1)

180-180: Added unwrap to handle the fallible peer manager creation

Good change to handle the fact that SQLite-based peer manager creation can now fail. The unwrap is appropriate in this test context.

base_layer/p2p/src/services/liveness/service.rs (1)

462-466: Updated test setup for SQLite peer database

Successfully replaced the LMDB database with SQLite in-memory database for tests. The migration approach using DbConnection::connect_memory_and_migrate is appropriate for testing.

base_layer/p2p/src/initialization.rs (3)

161-165: Properly initialized SQLite database with joined filename

This correctly creates a "peers.db" file in the provided data directory, addressing previous feedback about path construction.


436-440: Successfully migrated peer manager imports

All the necessary imports for SQLite peer database have been properly added to replace the LMDB ones.


317-324:

❓ Verification inconclusive

SQLite database path construction in production code

This approach uses the configured peer database name and adds a .db extension, which is appropriate as long as config.peer_database_name doesn't already contain an extension.

To verify the peer database name configuration doesn't contain a file extension already, you can check where it's defined:


🏁 Script executed:

```bash
#!/bin/bash
# Find where peer_database_name is defined and how it's typically set
rg -A 3 "peer_database_name.*=" --type rust
```

Length of output: 418


Verify peer_database_name Doesn’t Already Include an Extension

I searched the Rust codebase and only found peer_database_name used in test utilities (where it’s set without an extension). Before shipping, please confirm that your production config.peer_database_name (from your config file or CLI defaults) never includes a file extension—otherwise you’ll end up with names like foo.db.db.

• Check where peer_database_name is loaded or defaulted in your config definitions.
• Ensure any documentation or examples emphasize that this value should be provided without “.db.”
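A defensive sketch of the double-extension guard, in Python under stated assumptions (a hypothetical `peer_db_path` helper; the real config handling lives in the Rust code): append `.db` only when the configured name does not already carry it, so `foo.db` never becomes `foo.db.db`.

```python
from pathlib import Path

def peer_db_path(data_dir: str, name: str) -> Path:
    p = Path(data_dir) / name
    # Only append ".db" when the configured name does not already end in it.
    return p if p.suffix == ".db" else p.with_name(p.name + ".db")

good = peer_db_path("/tmp/base_node", "peers")
already = peer_db_path("/tmp/base_node", "peers.db")
```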

comms/core/src/test_utils/peer_manager.rs (3)

35-40: Properly propagates errors instead of using unwrap() - LGTM.

The function now returns a Result<Arc<PeerManager>, PeerManagerError> instead of unwrapping, which allows tests to handle potential errors. This change aligns with the previous review suggestion.


54-63: Non-test implementation properly handles file paths and database creation - LGTM.

This implementation:

  1. Creates directories if they don't exist
  2. Generates a random filename with .db extension
  3. Properly propagates errors through the Result type

66-72: Simple, effective random name generator - LGTM.

The function generates an 8-character random alphanumeric name, which is sufficient for testing purposes and for avoiding collisions in most practical scenarios.
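For reference, an 8-character alphanumeric generator of this shape takes only a few lines; this Python sketch is an illustration, not the crate's implementation, and the `peer_db_` prefix is an assumption.

```python
import random
import string

def random_db_name(length: int = 8) -> str:
    # 62^8 combinations make collisions negligible for test databases.
    suffix = "".join(random.choices(string.ascii_letters + string.digits, k=length))
    return f"peer_db_{suffix}.db"

name = random_db_name()
```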

comms/core/src/peer_manager/storage/database.rs (3)

1007-1014: Good handling of zero case without unnecessary database operations

The function correctly avoids database operations when n=0, which is an efficient pattern.


23-61: Well-structured imports and module organization

The imports are properly organized and grouped by standard library, external crates, and internal modules.


65-1573: Comprehensive peer database implementation with proper transactions

The PeerDatabaseSql implementation provides all necessary operations for managing peers in a SQLite database. The consistent use of transactions for operations that involve multiple updates ensures data consistency.

Despite the SQL injection concerns flagged in other comments, the overall structure and logic of the implementation is solid, with proper error handling, connection pooling, and query structuring.

@SWvheerden SWvheerden merged commit 0f1b0dc into tari-project:development May 26, 2025
14 of 18 checks passed
@hansieodendaal hansieodendaal deleted the ho_sqlite_peer_db branch May 26, 2025 13:39
@coderabbitai coderabbitai bot mentioned this pull request Jun 18, 2025