Every time I deploy sha-p2pool v1.0.1 (mid-June 2025 commit) on a brand-new VPS, wiping the entire base directory and regenerating identity.json, I still get the same “self-connection” log nearly immediately after startup:
```
INFO … [DIRECT_PEER_EXCHANGE_RESP] New peer info: 12D3KooWC…Uy with 1 peers
ERROR … Peer 12D3KooWD…CUkEU has same local id as us, skipping
```
## Steps to reproduce
Provision a fresh Ubuntu (or similar) VPS.
Clone sha-p2pool, build, then:
```bash
# Wipe data dir
rm -rf ~/.tari/mainnet/p2pool
mkdir -p ~/.tari/mainnet/p2pool

# Regenerate identity
sha_p2pool generate-identity --base-dir ~/.tari/mainnet/p2pool

# Start Tari base node
tari_base_node --config ~/.tari/mainnet/config/base_node/config.toml

# Launch sha-p2pool
sha_p2pool start \
  --base-dir ~/.tari/mainnet/p2pool \
  --base-node-address http://127.0.0.1:18102 \
  --grpc-port 18145 \
  --p2p-port 18200 \
  --external-address /ip4/<PUBLIC_IP>/tcp/18200
```
Watch the logs: the DIRECT_PEER_EXCHANGE_RESP line appears almost immediately, followed by the self-connection error.
## What I’ve ruled out
- I always regenerate a brand-new identity.json (Ed25519 keypair) on each VPS.
- The miner only talks to two gRPC endpoints (base node and p2pool), and the gRPC layer does not participate in peer exchange.
- No code in the gRPC handlers references or compares peer IDs; self-connection detection lives only in the P2P layer:
```rust
// p2pool/src/server/p2p/network.rs:≈441
if peer_id == *self.local_peer_id() {
    error!(…, "Peer {} has same local id as us, skipping", peer_id);
    continue;
}
```
- No community issue or forum thread links dual gRPC or `mining_enabled=true` on the base node to this behavior.
## Suspected cause

Libp2p’s peer-exchange gossip protocol “echoes” every known PeerId back to each participant. Since other peers learned of my freshly generated PeerId and include it in their peer lists, sha-p2pool sees its own ID in the response and logs the self-connection.
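A toy model of what I believe is happening (an assumption on my part, not sha-p2pool code: peer IDs are plain strings here, where the real implementation compares `libp2p::PeerId` values, and `filter_self` is a name I made up):

```rust
// Hypothetical sketch: a peer-exchange response can contain every PeerId
// the remote node knows, including the one it just learned from us, so the
// receiver must filter its own ID back out.
fn filter_self(local_id: &str, exchanged: &[&str]) -> Vec<String> {
    exchanged
        .iter()
        .copied()
        .filter(|id| *id != local_id) // drop our own echoed PeerId
        .map(String::from)
        .collect()
}

fn main() {
    let local = "12D3KooW-local";
    // Simulated DIRECT_PEER_EXCHANGE_RESP containing our own ID.
    let resp = ["12D3KooW-remote", "12D3KooW-local"];
    let usable = filter_self(local, &resp);
    assert_eq!(usable, vec!["12D3KooW-remote".to_string()]);
    println!("{:?}", usable);
}
```

If that mental model is right, seeing one’s own ID echoed back is expected on every clean deployment, which is why wiping the data dir and regenerating the identity never makes the log go away.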
## Questions / Requests
- Is my understanding correct that this log is purely a result of the DIRECT_PEER_EXCHANGE echo, and not an underlying bug?
- Would you consider:
  - providing a built-in config flag (e.g. `--skip-self-peer`) to filter out the local PeerId, or
  - lowering the default log level for this event from `error!` to `debug!`?
- If there is any other recommendation to suppress or avoid this “self-connection” log in a clean deployment, I’d love to hear it.
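To make the request concrete, here is a rough sketch of the shape I have in mind. Everything in it is hypothetical: `skip_self_peer` is the proposed flag (it does not exist in sha-p2pool today), `P2pConfig` and `handle_exchanged_peer` are names I invented, and the real code logs through the `log` crate rather than `eprintln!`:

```rust
// Hypothetical config carrying the proposed flag.
struct P2pConfig {
    skip_self_peer: bool,
}

/// Returns Some(peer) for usable peers, None when the entry is our own ID.
/// The peer is skipped either way; the flag only controls whether we log.
fn handle_exchanged_peer(local_id: &str, peer_id: &str, cfg: &P2pConfig) -> Option<String> {
    if peer_id == local_id {
        if !cfg.skip_self_peer {
            // Today this fires at error!; the request is to demote it to debug!.
            eprintln!("Peer {} has same local id as us, skipping", peer_id);
        }
        return None;
    }
    Some(peer_id.to_string())
}

fn main() {
    let cfg = P2pConfig { skip_self_peer: true };
    assert_eq!(handle_exchanged_peer("self", "self", &cfg), None);
    assert_eq!(
        handle_exchanged_peer("self", "12D3KooW-remote", &cfg),
        Some("12D3KooW-remote".to_string())
    );
    println!("ok");
}
```

Either variant (the flag or the demoted log level) would keep clean deployments quiet without changing the actual skip behavior.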
Thank you! Any pointers or feedback are greatly appreciated.