All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
When editing this file, please respect a line length of 100.
- The `antnode` binary now supports automatic upgrades on Windows. The same periodic upgrade
check that was previously available on macOS and Linux will now also run on Windows. The
`self-replace` crate is used to handle binary replacement while the process is running. Retries
with backoff are used to handle transient antivirus file locks.
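As an illustration of the retry pattern this entry describes, here is a minimal std-only sketch; the attempt count and delays are assumptions for the example, not the values `antnode` actually uses:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, doubling the delay after each
/// failure. Models how a binary replacement can be retried when an
/// antivirus scanner briefly holds a file lock.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    initial_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = initial_delay;
    let mut attempt = 1;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // exponential backoff
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate an operation that fails twice (file locked) then succeeds.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("file locked") } else { Ok("replaced") }
    });
    assert_eq!(result, Ok("replaced"));
    assert_eq!(calls, 3);
}
```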
- The connection prune deadline for non-routing-table peers (e.g. clients) has been increased from
60 seconds to 120 seconds and is now reset each time a new request is received on the connection.
This prevents premature connection closure during long-running operations such as
`put_record`.
- Updated to accommodate automatic upgrades on Windows. This includes using an `OnFailure`
restart policy for Windows services, handling the `self-replace` crate's `.__relocated__`
filename pattern for process identification, and updating the `service-manager` dependency to
`0.11.0`.
- The `merkle_payments_address` setting is now retained when upgrading a node service via
`antctl`. Previously, `build_upgrade_install_context()` was not passing this argument when
reinstalling, causing the setting to be lost.
- A scrollbar on the node grid that appears when the content overflows the visible area.
- The connection mode display now correctly maps `no_upnp` and default flags to their respective
modes. Previously, UPnP and Manual modes were swapped.
- The TUI no longer blocks on a full node registry refresh at startup. Instead, it renders
immediately using disk-cached state and triggers a background refresh once the action handler is
registered.
- On Windows, WinSW is now placed at `C:\ProgramData\antctl\winsw.exe` instead of the
launchpad's own data directory. This avoids MSIX filesystem virtualisation redirecting the path
and preventing the service management code from finding the executable.
- On Windows, the launchpad now uses `C:\ProgramData\antctl\data` for the node data directory
instead of `dirs_next::data_dir()`. The MSIX filesystem virtualises the `%APPDATA%` path,
causing the antnode binary to be written to a virtualised location that the service manager
cannot find.
- The minimum required node version has been updated to `0.4.15`.
- Peer candidate selection now includes an additional verification tier that checks popular
close-range peers via `get_version` queries before selecting them. This improves peer selection
quality by confirming peer responsiveness.
- `MerklePaymentReceipt::add_uploaded` method to record chunks that were successfully uploaded,
enabling more efficient upload resume by skipping re-upload of known chunks.
- `uploaded` field on `MerklePaymentReceipt` to persist the set of successfully uploaded chunks
across upload resume attempts.
- Merkle payment candidate selection now falls back to KAD-only peer discovery when initial Kademlia queries do not return enough candidates, improving upload reliability during network churn.
- Merkle upload resume now skips network existence checks for chunks that were successfully uploaded in previous batches, in addition to chunks that already existed on the network.
- Merkle uploads now display progress at info level every 10 chunks, providing better visibility into upload status.
- The `--retry-failed` flag is now honoured for merkle uploads. Previously, the flag only
affected regular uploads and merkle uploads would always use the default retry count.
- Node status transitions are now reflected in the UI in real-time. The status panel polls more frequently during transitional states (starting, stopping) and updates the display as soon as the node registry confirms the new state.
- Stale error messages from node start operations are no longer shown when the underlying services have already transitioned successfully.
- Version-based peer rejection system that enforces minimum node version requirements during the identify protocol exchange. Nodes running versions below the minimum are disconnected.
- New metrics for version gating: `version_check_result` (tracking accepted, rejected, legacy,
and parse error outcomes) and `peer_type` (tracking node, client, and unknown peer types).
- Trust-based replication scoring has been re-enabled, allowing nodes to factor peer trust scores into replication decisions.
- The minimum required node version has been set to `0.4.14`.
- Merkle payment calldata has been compacted by packing data type and total cost unit into a
single `U256` value. This reduces on-chain transaction size and gas costs for merkle batch
payments.
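The packing idea can be illustrated as follows. The real code packs into a 256-bit `U256`; this sketch uses `u128` as a stand-in, and the 8-bit tag in the top byte is an assumed layout for illustration, not the actual calldata encoding:

```rust
/// Illustration of packing a data-type tag and a total cost into one word.
/// An 8-bit tag occupies the top byte; the cost occupies the low 120 bits.
const TAG_SHIFT: u32 = 120;

fn pack(data_type: u8, total_cost: u128) -> u128 {
    debug_assert!(total_cost < 1u128 << TAG_SHIFT);
    ((data_type as u128) << TAG_SHIFT) | total_cost
}

fn unpack(word: u128) -> (u8, u128) {
    ((word >> TAG_SHIFT) as u8, word & ((1u128 << TAG_SHIFT) - 1))
}

fn main() {
    // One word carries both fields, so only one calldata slot is needed.
    let word = pack(3, 1_000_000);
    assert_eq!(unpack(word), (3, 1_000_000));
}
```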
- `MerkleBatchPaymentComplete` client event, emitted after each merkle tree batch payment
completes. This enables progressive saving of receipts to disk for upload resume support.
- `RegularBatchPaymentComplete` client event, emitted after each regular batch payment
completes, providing the same progressive saving capability for non-merkle uploads.
- `MerklePaymentReceipt::add_already_existed` method to record chunks that were found to already
exist on the network, avoiding redundant network queries on upload resume.
- `already_existed` field on `MerklePaymentReceipt` to persist the set of chunks known to exist
on the network across upload resume attempts.
- Merkle upload resume now skips network existence checks for chunks that are already known to exist or have been previously paid for, reducing unnecessary network queries.
- Stream batch size and upload concurrency are now configured independently in merkle uploads.
Stream batch size uses `UPLOAD_FLOW_BATCH_SIZE` while upload concurrency uses
`CHUNK_UPLOAD_BATCH_SIZE`.
- Chunks that already existed on the network are no longer re-quoted during upload resume, preventing unnecessary payment attempts.
- The `file upload` command now displays the network address for private uploads alongside the
private address.
- Payment receipts are now progressively saved to disk during uploads for both merkle and regular payment modes. This improves upload resume reliability by preserving payment state as batches complete rather than only on failure.
- Cached payment receipt files are now cleaned up before saving a new receipt, preventing stale files with different timestamp prefixes from accumulating.
- `BulkPaymentOption::ForceMerkle` variant to force merkle tree payments regardless of chunk
count.
- `BulkPaymentOption::ForceRegular` variant to force regular per-batch payments regardless of
chunk count.
- `BulkPaymentOption::is_force_merkle()` method to check if the option forces merkle payment.
- `BulkPaymentOption::is_force_regular()` method to check if the option forces regular payment.
- Request/response direct message fallback for mutable data fetch operations (pointers,
scratchpads, graph entries), improving reliability when KAD queries fail.
- The `MERKLE_PAYMENT_THRESHOLD` constant is now publicly exported for use by consuming crates.
- The `file_content_upload`, `file_content_upload_public`, `dir_content_upload`, and
`dir_content_upload_public` methods now require explicit selection of payment mode via the
`--merkle` or `--regular` flags in the CLI, or `ForceMerkle`/`ForceRegular` variants in the API.
[BREAKING]
- Client request timeout extended to 120 seconds to improve reliability on slower connections.
- Substream timeout increased to 120 seconds to match the request timeout.
- Cost estimation code improved for more accurate payment calculations.
- Upload flow refactored to centralise reporting and retry logic.
- Double slashes in normalised archive paths are now prevented.
- `GetVersion` query errors are now properly caught and handled instead of propagating
unexpected failures.
- `DevGetClosestPeersFromNetwork` query for developer analytics, allowing nodes to perform full
network lookups rather than just returning local routing table entries. This query is only
available when the `developer` feature is enabled.
- Request timeout increased to 120 seconds to improve reliability for slower network operations.
- Nodes now approve merkle uploads even when they lack full network knowledge (fewer than K_VALUE peers in routing table), improving upload success rates during network churn.
- Peers are now removed from the routing table when version fetch operations fail, improving routing table accuracy.
- Gas estimation now uses EIP-1559 fee estimation for more accurate and predictable transaction costs.
- Gas cost information is now printed during payment operations, providing better visibility into transaction costs.
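For context on the EIP-1559 estimation mentioned above, a common heuristic (used as a default by several Ethereum client libraries) caps the fee at twice the current base fee plus a priority tip; whether this exact multiplier is used here is an assumption, as the changelog only states that EIP-1559 estimation is used:

```rust
/// A common EIP-1559 fee heuristic: max fee = 2 * base fee + priority tip.
/// Doubling the base fee gives headroom for base-fee growth across blocks.
/// All values are in wei.
fn estimate_fees(base_fee: u128, priority_fee: u128) -> (u128, u128) {
    let max_fee_per_gas = 2 * base_fee + priority_fee;
    (max_fee_per_gas, priority_fee)
}

fn main() {
    // base fee 30 gwei, tip 2 gwei
    let (max_fee, tip) = estimate_fees(30_000_000_000, 2_000_000_000);
    assert_eq!(max_fee, 62_000_000_000);
    // The effective price paid per gas is min(max_fee, base_fee + tip).
    let effective = (30_000_000_000u128 + tip).min(max_fee);
    assert_eq!(effective, 32_000_000_000);
}
```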
- Local merkle pricing now uses the correct formula for cost calculations.
- Invalid gas cost aggregation corrected to provide accurate total cost reporting.
- A `developer` subcommand adds various tools for querying the network.
- The `file cost` and `file upload` commands now require explicit selection of payment mode
using `--merkle` or `--regular` flags. The previous automatic selection based on chunk-count
threshold has been removed. [BREAKING]
- Improved consistency between cost estimation and upload commands.
- Payment selection logging improved to reduce confusion.
- macOS signed and notarized pkg installer for the CLI suite (`ant`, `antnode`, `antctl`),
providing a standard macOS installation experience with binaries installed to `/usr/local/bin`.
- Windows signed MSI installer for the CLI suite (`ant`, `antnode`, `antctl`), installing to
`C:\Program Files\Autonomi\` and adding binaries to PATH.
- Linux signed Debian package (`.deb`) for the CLI suite (`ant`, `antnode`, `antctl`) on
Debian/Ubuntu systems, with support for x86_64, aarch64, armv7, and arm architectures. Includes
a detached GPG signature for verification.
- Linux signed RPM package (`.rpm`) for the CLI suite (`ant`, `antnode`, `antctl`) on
Fedora/RHEL/CentOS systems, with the same architecture support and GPG signing as the Debian
package.
- macOS signed and notarized app bundle (`.dmg`) for Node Launchpad, allowing users to drag the
application to their Applications folder for standard macOS app installation.
- Windows signed MSIX installer for Node Launchpad, with automatic update support via
`.appinstaller` files and Start Menu integration.
- The binaries in the automatic upgrade process are now fetched from `autonomi.com` URLs, with
the S3 location being used as a fallback.
- The automatic upgrade process was being confused by the fact that our last release was a
client-only release. It was detecting it as a new release and scheduling a restart of the node,
although the binary wasn't being replaced. The process has now been changed to only check and
compare the sha256 hash of the `antnode` binary rather than the commit hash of the release.
- `Client::check_records_exist_batch` async method for checking the existence of multiple
records in batch operations, improving efficiency when verifying data presence on the network.
- `Client::retry_failed_merkle_chunks` async method for retrying failed chunk uploads with
configurable retry attempts and pause duration between attempts.
- `NetworkAddress::xorname` method to extract the XorName from a NetworkAddress, returning
`None` for PeerId and RecordKey variants.
- `MerkleBatchUploadState` struct to track failed chunks from a merkle batch upload, including
chunk data for retry operations.
- `MerkleBatchUploadResult` struct returned from merkle batch uploads containing remaining
streams, completed files, and any failed chunks.
- `MerklePutError::Batch` variant for tracking batch upload state with failed chunks.
- `Client::upload_batch_with_merkle` method signature now requires an additional
`dont_reupload: &mut HashSet<XorName>` parameter to track and skip chunks that already exist on
the network. The return type has changed from
`Result<(Vec<EncryptionStream>, Vec<(PathBuf, DataMapChunk, Metadata)>), MerklePutError>` to
`Result<MerkleBatchUploadResult, MerklePutError>`. [BREAKING]
- Merkle payment uploads now skip redundant encryption when all chunks already exist on the
network, significantly improving performance for re-uploads of existing data.
- Upload operations no longer pay for chunks that already exist on the network, reducing unnecessary costs when re-uploading or syncing existing data.
- Chunk existence checking now reports progress during Merkle payment operations, providing better visibility into upload status.
- File upload operations with merkle payments now provide high-level retry logic with exponential backoff, improving reliability for large file uploads.
- Issue where one-chunk files would fail to create merkle trees correctly, preventing successful uploads of very small files using merkle payments.
- Incorrect chunk upload count display during merkle payment operations.
- Excessive use of XorName in merkle hashing flows, improving efficiency and correctness of merkle tree operations.
- Gas limit buffer increased to 50% to provide more headroom for transaction execution and reduce the risk of out-of-gas errors.
- Transaction receipts are now validated after mining to ensure successful execution before considering the transaction complete.
- Pool creation for merkle payments is now processed in parallel, significantly reducing the time required to prepare batch payments.
- Merkle tree maximum depth validation corrected to prevent tree construction errors with large datasets.
- Duplicate hash function removed from merkle payment implementation, simplifying the codebase.
- The `file cost` command now supports a `--merkle` flag to estimate costs using Merkle batch
payment mode instead of traditional per-chunk payments.
- The `file upload` command now supports a `--merkle` flag to upload files using Merkle batch
payment mode, enabling more efficient bulk uploads.
- Merkle payment upload operations now cache successful payments and support retry of failed
chunks through the `cached_merkle_payments` module.
- Upload operations display clearer progress information when checking for existing chunks on the network.
- Python bindings for ARM architectures now compile correctly by setting appropriate CFLAGS to define ARM architecture version for cross-compilation targets. This resolves the ring crate compilation failures on older GCC toolchains.
- `Client::get_storage_proofs_from_peer` method signature now requires two additional
parameters: `data_type: DataTypes` and `data_size: usize`. The return type has also changed from
`Vec<(NetworkAddress, Result<ChunkProof, Error>)>` to `PeerQuoteWithStorageProof`, which
includes an optional `PaymentQuote` alongside the storage proofs. [BREAKING]
- Retry logic with exponential backoff to closest peers lookup operations, improving robustness when querying the network.
- Support for querying a specific number of closest peers, providing more control over network queries.
- Bad peer detection system that verifies fetched record copies match expected values and blocks peers that fail validation checks.
- Metrics tracking for bad peers to monitor network health.
- Node and client now use the same network `get_closest` scheme for consistent peer resolution
across the network.
- Blocklist behaviour is limited to prevent unlimited growth by using a circular buffer
mechanism.
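The bounded-blocklist idea can be sketched with a `VecDeque` acting as the circular buffer; the capacity, key type, and eviction policy shown here are illustrative, not the node's actual values:

```rust
use std::collections::VecDeque;

/// A blocklist bounded by a circular buffer: once capacity is reached,
/// the oldest blocked peer is evicted to make room, so memory use cannot
/// grow without limit.
struct Blocklist {
    capacity: usize,
    peers: VecDeque<String>, // stand-in for a PeerId type
}

impl Blocklist {
    fn new(capacity: usize) -> Self {
        Self { capacity, peers: VecDeque::with_capacity(capacity) }
    }

    fn block(&mut self, peer: String) {
        if self.peers.contains(&peer) {
            return;
        }
        if self.peers.len() == self.capacity {
            self.peers.pop_front(); // evict the oldest entry
        }
        self.peers.push_back(peer);
    }

    fn is_blocked(&self, peer: &str) -> bool {
        self.peers.iter().any(|p| p == peer)
    }
}

fn main() {
    let mut list = Blocklist::new(2);
    list.block("peer-a".into());
    list.block("peer-b".into());
    list.block("peer-c".into()); // evicts peer-a; size stays bounded
    assert!(!list.is_blocked("peer-a"));
    assert!(list.is_blocked("peer-b") && list.is_blocked("peer-c"));
    assert_eq!(list.peers.len(), 2);
}
```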
- Record store indexing cache is now pruned to remove out-of-sync entries and improve accuracy.
- Replication is skipped when a close-up peer restart pattern is detected to reduce unnecessary network traffic.
- Increased `KAD_QUERY_TIMEOUT` from 10 seconds to 120 seconds. This improves node-side KAD
query reliability and should prevent `GetClosestTimeout` errors when doing `get_closest_peers`
lookups.
- Record copy verification now ensures fetched records match the expected content before accepting them.
- Peers that have been dropped from the routing table are no longer blocked during recheck operations, preventing false positives in bad peer detection.
- Client `get_closest` check now returns more consistent results across multiple queries.
- Client operations no longer miss existing copies when verifying data storage.
- Mutable data re-uploads now use the same quoting range consistently.
- Services now stop correctly on macOS when using the `antctl stop` command. The underlying
service manager crate was incorrectly specifying an 'on success' restart policy that prevented
proper service termination.
- Services no longer start automatically when added with the `antctl add` command on macOS,
restoring expected behaviour consistent with other service managers.
- Node registry manager is now properly cleared during reset operations, preventing phantom
nodes from appearing after reset.
- Input mode now only switches when error popup is dismissed, preventing unintended mode changes.
- Node registry is properly synchronized between local storage and in-memory representation after reset operations.
- Merkle payment support for batch payments through cryptographic proof verification.
- `MerklePaymentOption` enum to enable merkle payment mode in upload operations.
- `MerkleUploadError` and `MerkleUploadErrorWithReceipt` error types for merkle-specific upload
failures.
- `MerklePaymentError` for merkle payment processing errors.
- `MerklePaymentReceipt` type to track successful merkle payment transactions.
- `MerklePutError` for merkle-specific put operation failures.
- `Client::upload_with_merkle_payments` method to upload data using merkle batch payment mode.
- `Client::put_with_merkle_payment` method for storing individual records with merkle payment
proofs.
- `Query::GetMerkleCandidateQuote` query variant for requesting merkle payment candidate quotes
from nodes.
  - Includes `key` (target address for topology verification), `data_type`, `data_size`, and
    `merkle_payment_timestamp` fields.
  - Node signs its current state (metrics + reward address) with the provided timestamp,
    creating a cryptographic commitment binding PeerId to RewardsAddress.
- `RecordKind::DataWithMerklePayment` variant for records paid via merkle proofs.
  - Uses index range starting at `RECORD_KIND_MERKLE_PAYMENT_STARTING_INDEX` (20) to
    differentiate from traditional payment records.
- Merkle payment timestamp verification to ensure payments are neither expired nor in the
future.
- Record serialization now supports merkle payment proof headers alongside data payloads.
- `RecordHeader` serialization extended to handle merkle payment record types.
- Merkle payment proof verification in `PutValidation` for all data types.
- `PutValidationError::MerklePaymentVerificationFailed` error variant for failed merkle proof
validations.
- Merkle payment quote generation signed with node metrics and reward address.
- Cryptographic verification of merkle proofs against on-chain merkle tree roots.
- Merkle payment expiration validation to reject expired payment proofs.
- Support for deserializing records containing `(MerklePaymentProof, T)` tuples where `T` is the
data type.
- The `antnode` binary now supports automatic upgrades on macOS and Linux. Every 3 days the node
will check for the availability of a new version. If available, the new version will be
downloaded and the current binary will be replaced (this is permitted for running processes on
Unix-based systems and it also works with symlinks). Each node is then assigned a restart time
based on its peer ID and the network size. This ensures the restarts will be spread out quite
evenly. When the node restarts it will retain its peer ID and data.
- Put record validation now branches on `RecordKind::DataWithMerklePayment` to verify merkle
proofs before accepting records.
- Node quote responses now include merkle payment candidate signatures when requested via
`GetMerkleCandidateQuote`.
- Payment proof structure extended to accommodate merkle tree proofs alongside traditional
payment receipts.
- `ant-evm::merkle_payments` module providing core merkle payment infrastructure:
  - `MerkleTree` for constructing and managing merkle trees with up to `MAX_LEAVES` (1024)
    leaves.
  - `MerkleBranch` type representing the path from a leaf to the root in a merkle tree.
  - `MerklePaymentProof` containing the merkle branch, on-chain commitment info, and signatures.
  - `MerklePaymentCandidateNode` representing a node's signed commitment for merkle payment
    selection.
  - `MerklePaymentCandidatePool` for organizing candidates into payment pools.
  - `verify_merkle_proof` function for cryptographic verification of merkle proofs.
  - `MidpointProof` for proving the midpoint of merkle tree ranges.
- `MERKLE_PAYMENT_EXPIRATION` constant defining the validity period for merkle payment proofs.
- `MAX_MERKLE_DEPTH` constant defining the maximum depth of merkle trees.
- `CANDIDATES_PER_POOL` constant for pool organization.
- `MerklePaymentVerificationError` for merkle-specific verification failures.
- `PoolCommitment` type for on-chain merkle root commitments.
- `OnChainPaymentInfo` containing merkle root and payment metadata from smart contracts.
- `expected_reward_pools` function to calculate expected number of reward pools based on data
  size.
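For readers unfamiliar with branch-based merkle verification, here is a sketch of the general technique: hash the leaf, then fold in each sibling up to the root. `DefaultHasher` is a non-cryptographic stand-in for the real hash, and the function names and signature are illustrative rather than the actual `verify_merkle_proof` API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash an ordered pair of child hashes into a parent hash.
fn hash_pair(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

/// Hash a leaf's raw bytes.
fn hash_leaf(leaf: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    leaf.hash(&mut h);
    h.finish()
}

/// `branch` lists (sibling_hash, sibling_is_left) from leaf level to root.
/// The proof is valid if folding the leaf through the branch reproduces
/// the committed root.
fn verify_branch(leaf: &[u8], branch: &[(u64, bool)], root: u64) -> bool {
    let mut acc = hash_leaf(leaf);
    for &(sibling, sibling_is_left) in branch {
        acc = if sibling_is_left { hash_pair(sibling, acc) } else { hash_pair(acc, sibling) };
    }
    acc == root
}

fn main() {
    // Two-leaf tree: root = H(H(a), H(b)).
    let (a, b) = (b"chunk-a".as_slice(), b"chunk-b".as_slice());
    let root = hash_pair(hash_leaf(a), hash_leaf(b));
    assert!(verify_branch(a, &[(hash_leaf(b), false)], root));
    assert!(verify_branch(b, &[(hash_leaf(a), true)], root));
    assert!(!verify_branch(b"tampered", &[(hash_leaf(b), false)], root));
}
```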
- EVM network configuration now includes optional `merkle_payments_address` for merkle payment
vault contracts.
- Wallet and testnet setup extended to support merkle payment contract deployment and
interaction.
- Upload strategy now supports both traditional per-chunk payments and merkle batch payments.
- The client will now carry out a check of its close group candidates to see whether each peer is known by the others in the group. This improves uploads because more robust nodes will be chosen, and in turn downloads will also be more stable.
- The quoting candidate range is extended from 7 to 10 peers to attempt to provide more stability.
- The service definitions emitted by the `add` command now use an 'on success' restart policy on
Unix platforms. This is to allow the service manager to restart the node for use with automatic
upgrades. Before upgrading to the new version of `antctl`, users should use `antctl reset` with
their current version to clear any existing node deployments. The new node is not compatible
with the old service definitions produced by `antctl`. New deployments can then be created with
the new version of `antctl`. [BREAKING]
- The `add` command will assign a random port in the service definition rather than delegating
that to the node. This is to ensure the node will restart with the same port when it is
automatically upgraded.
- The `status` command now obtains the version number of each service. This makes the command
work naturally with automatic upgrades.
- As with the `antctl` change above, old service definitions are not compatible with the new
version of the nodes to have automatic upgrades work correctly. Before upgrading to the new
`node-launchpad`, users should use the reset command with `Ctrl+R` to clear out their current
node deployments. New deployments can then be created with the new version of `node-launchpad`.
[BREAKING]
- The `Status` panel will update with the new node versions when automatic upgrades take place
in the background.
- `Client::analyze_address_recursively` method for recursively analyzing addresses by following
all discovered addresses, returning a map of analyses.
- `Client::get_record_from_peer` method to retrieve a record directly from a specific peer.
- `Client::get_record_and_holders` method that returns both the record and a list of peer
holders, supporting custom quorum requirements.
- `Client::get_storage_proofs_from_peer` method to retrieve storage proofs from a specific peer.
- `encrypt_directory_files` function now publicly exported from the `self_encryption` module for
external directory encryption operations.
- `EncryptionStream` type now publicly exported from the `self_encryption` module.
- `DataMap` field added to `GetError::TooLargeForMemory` variant to provide the data map when a
file is too large for in-memory processing. [BREAKING]
- `Client::get_closest_to_address` now accepts an optional `count` parameter to specify the
number of peers to retrieve; if `None`, returns `CLOSE_GROUP+2` peers. [BREAKING]
- `Analysis` enum now uses `custom_debug::Debug` for improved debug output.
- `Analysis::DataMap` and `Analysis::File` variants now skip the `data` field in debug output to
reduce noise when debugging large data structures.
- `Analysis::Chunk` display no longer prints chunk content in hex format.
- Chunk retrieval strategy changed to use a fallback approach with `closest_20` peers when
normal DHT queries fail, and chunk get retry strategy set to `RetryStrategy::None` to rely on
this fallback.
- Maximum stream data size reduced to 1 MB for both client and node connections to optimize
network performance.
- Upload fallback approach removed to ensure proper replication with multiple copies instead of only a single copy being uploaded.
- Symlink handling in self-encryption to skip encryption of both symlinks and directories.
- Network-wide replication of all node keys to improve data availability.
- Dynamic replication range expansion when the network is under load.
- Replication range expanded to give holders more trust in data availability.
- Network-wide replication deadline decreased to 4 days to ensure fresher data distribution.
- `GetReplicatedRecord` request error responses made more accurate for better diagnostics.
- Replication behaviour modified according to review feedback for improved efficiency.
- The `analyze` command now supports extensive new functionality for network health monitoring:
  - Use the `--recursive` flag to recursively analyze datamaps and other data structures to
    fetch all referenced data.
  - Use the `--holders` flag to see the peers holding that piece of data.
  - Use the `--closest-nodes` flag to query the closest peers to check if they're holding that
    piece of data.
  - Use the `--json` flag to display all analyzed information as JSON.
  - Use the `--addr-range` flag during blind scan operations to specify address ranges.
- Analyze functionality refactored into its own module for better code organization.
- The `analyze` command properly obtains target addresses from all enum entry types.
- Public archive address extraction now returns correct addresses.
- Analysis output is printed to screen before storing to JSON file.
- KAD `get_record` queries now use `Quorum::N(20)` to fetch records in a best-effort manner.
- Automatic permission elevation on Windows; the user can now simply double click on the exe to run it, rather than running from an elevated cmd.exe or Powershell session.
- Removed custom panic handler in favor of using the `ant-logging` crate for consistent error
handling and logging.
- Logging path handling improved for tokio tests with dynamic file or directory-based logging.
- Metrics initialization now properly uses the metrics feature flag.
- An inefficient mechanism was being used when various tasks were being processed in parallel. This has been replaced with a more optimised approach. As a result we can expect to see improved performance in various areas, including file downloads and uploads, and quotations.
- An issue caused the bootstrap process to halt before being connected to the network. It will now correctly stop either when we are connected to the network or have exhausted all the addresses.
- Provide a timeout in the bootstrapping process that prevents indefinite waiting to fetch contacts,
which in turn could result in a timeout error. We will also now complete bootstrapping when we've
obtained five addresses. This fix will apply to both the `antnode` and `ant` binaries.
Python:
- `Client` class: `with_payment_mode` method for setting the payment mode.
- `PaymentMode` enum for defining the available payment modes.
NodeJS:
- `Client` class: `with_payment_mode` method for setting the payment mode.
- `PaymentMode` enum for defining the available payment modes.
- Nodes now use the `Bootstrap` struct to drive the initial bootstrapping process and the
bootstrap cache.
- Nodes now evict peers immediately if they notice their peer ID has changed. This allows the
network to flush out old peers much quicker. This should resolve some performance issues we've
seen on the production network that have been the result of node operators who are
over-provisioning and pulling large numbers of nodes in short time periods.
- A new `Bootstrap` struct is introduced that provides a single interface to bootstrap a `Node`
or `Client` to the network.
- The `BootstrapConfig` allows the user to modify various configurations for bootstrapping to
the network. Some options include providing a manual address, setting a custom bootstrap cache
directory, disabling bootstrap cache reading, setting a custom network contacts URL, and so on.
- The `Bootstrap` struct dials peers from all provided sources before terminating: environment
variables, CLI arguments, bootstrap cache, and network contact URLs. This solves the major issue
of using an outdated bootstrap cache to dial the network.
- Implement file locking for the bootstrap cache so that concurrent accesses do not error out or
produce empty values.
- The old method of obtaining the bootstrap peers as `Vec<MultiAddress>` using
`InitialPeersConfig::get_bootstrap_addrs()` has now been removed in favour of the automated
bootstrapping process.
- Introduce a new payment mode: single node. This reduces gas fees by making a single
transaction to the median-priced node with 3x the quote amount, rather than 3 separate
transactions to the 3 highest nodes.
- `PaymentMode` enum for controlling upload payment strategy with `Standard` (pay 3 nodes) and
`SingleNode` (pay 1 node with 3x amount) variants.
- `Client::with_payment_mode()` method for setting the payment mode on the client.
- `Client::get_raw_quote_from_peer()` method for obtaining quotes from specific peers without
market prices. This is useful for testing and obtaining reward addresses.
- `Client::get_node_version()` async method for requesting the node version of a specific peer
on the network.
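The single-node selection described above amounts to sorting the received quotes, picking the median-priced node, and paying it three times its quote in one transaction. This sketch is illustrative only; the quote representation and tie-breaking are assumptions:

```rust
/// Pick the median-priced node from a set of quotes and compute the
/// single 3x payment it should receive. One transaction replaces three.
fn single_node_payment(mut quotes: Vec<(&'static str, u64)>) -> (&'static str, u64) {
    quotes.sort_by_key(|&(_, price)| price);
    let (node, price) = quotes[quotes.len() / 2]; // median by price
    (node, price * 3)
}

fn main() {
    let quotes = vec![("node-a", 10), ("node-b", 40), ("node-c", 25)];
    let (payee, amount) = single_node_payment(quotes);
    assert_eq!((payee, amount), ("node-c", 75)); // median quote 25, paid 3x
}
```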
- `self_encryption` dependency upgraded to version `0.34.1` for improved encryption performance.
- `ClientConfig::bootstrap_cache_config` and `ClientConfig::init_peers_config` have been
deprecated in favour of `ClientConfig::bootstrap_config`. This new config combines all the
options from the deprecated fields.
- Payment vault smart contract upgraded from V2 to V6. This upgrade supports the new single-node payment verification logic while maintaining backward compatibility.
- The `file cost` command provides a `--disable-single-node-payment` flag to switch from the
default single-node payment mode to the multi-node payment mode.
- The `file upload` command provides a `--disable-single-node-payment` flag to switch from the
default single-node payment mode to the multi-node payment mode.
- The `analyze` command now has an `analyse` alias for British English spelling preference.
- The `analyze` command now supports a `--closest-nodes` flag argument that will display the
closest nodes to the address being analysed.
- Single-node payment is now enabled by default for both the `file cost` and `file upload`
commands, reducing gas fees for users. The previous behaviour can be restored using the
`--disable-single-node-payment` flag.
- The `NetworkDriver` now uses the `Bootstrap` struct to drive the initial bootstrapping process
and the bootstrap cache.
- Various nightly CI workflows have been removed as they were not being actively used.
- GitHub Actions `setup-python` upgraded from v5 to v6.
- `DataStream` struct with streaming data access methods:
  - `data_size()` returns the original data size
  - `get_range(start, len)` decrypts and returns a specific byte range
  - `range(range)` convenience method using Range syntax
  - `range_from(start)` gets range from starting position to end
  - `range_to(end)` gets range from beginning to end position
  - `range_full()` gets the entire file content
  - `range_inclusive(start, end)` gets an inclusive range
- `data_stream(&DataMapChunk)` async method on `Client` for creating streaming access to private
data.
- `data_stream_public(&DataAddress)` async method on `Client` for creating streaming access to
public data.
- `scratchpad_put_update` async method for wallet-free scratchpad updates with caller-controlled
management.
- `print_fork_analysis` function for detailed scratchpad fork error analysis and display.
- `vault_expand_capacity` async method for expanding vault storage capacity.
- `vault_claimed_capacity` async method for checking claimed vault capacity.
- `vault_split_bytes` function for splitting bytes for vault storage.
- Client initialization now includes automatic network connectivity verification via
`wait_for_connectivity()` during the `init` process, improving reliability and error
diagnostics.
- Scratchpad error handling enhanced with fork resolution capabilities in update operations,
solving code duplication issues.
- Vault function names updated for consistency:
  - `fetch_and_decrypt_vault` → `vault_get` (deprecated function retained for compatibility)
  - `write_bytes_to_vault` → `vault_put` (deprecated function retained for compatibility)
  - `app_name_to_vault_content_type` → `vault_content_type_from_app_name` (deprecated function
    retained for compatibility)
- Code organization improved by moving encryption and utility modules out of the client module to
top-level `self_encryption` and `utils` modules.
- Analyze functionality now properly handles old datamap format types for backward compatibility.
- Scratchpad fork display and resolution issues resolved across all API operations.
- Streaming operations now validate destination paths before processing to prevent errors.
Python:
- `GraphEntry` class methods for member access:
  - `content()` returns the entry content
  - `parents()` returns parent entry references
  - `descendants()` returns descendant entry references
- Data streaming bindings providing Python access to the new streaming data APIs.
- Enhanced fork error display functionality for scratchpad operations with comprehensive error details.
- Comprehensive test coverage for all Python binding functionality including address format validation.
Node.js:
- Updated vault operation support with new function names matching the renamed API standards.
- Python binding tests updated to handle 96-character address hex format and proper `from_hex`
round-trip conversions.
- `GraphEntry` bindings now properly expose all member access methods with correct error handling.
- Enhanced `scratchpad` command functionality with improved fork error handling and resolution
capabilities.
- Better error reporting for scratchpad operations with detailed fork analysis output.
- The `download` command has improved error handling and now fails immediately if the chosen
download path cannot be used.
- Scratchpad fork display and resolution functionality now works correctly across all client command operations.
- Get record operations now only perform early returns when unique content is received from sufficient peers, improving data retrieval reliability.
- The `analyze` command now properly handles file references in the old datamap format.
- Node storage size handling corrected for ARM v7 architecture devices.
- Node addition process on Windows now functions properly without configuration conflicts.
- `chunk_batch_upload` function is now public, allowing developers to upload multiple chunks in
batches with custom receipt handling.
- `deserialize_data_map` function in `DataMapChunk` for backward compatibility with old data map
schemes.
- `pointer_update_from` async method for updating pointers from specific sources.
- `scratchpad_update_from` async method for updating scratchpads from specific sources.
- `EncryptionStream` struct with methods:
  - `total_chunks()` to get the total number of chunks
  - `next_batch()` to retrieve the next batch of chunks for processing
  - `data_map_chunk()` to get the associated data map chunk
  - `data_address()` to retrieve the data address
  - `new_in_memory_with()`, `new_in_memory()`, and `new_stream_from_file()` constructors for
    different encryption modes
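The batching pattern behind `EncryptionStream` can be sketched with a toy Python generator. This
is illustrative only: the real struct performs self-encryption of the data, while this version
merely splits plain bytes into fixed-size chunks and hands them out batch by batch via a
`next_batch()`-style method.

```python
# Toy sketch of batched chunk iteration in the style of EncryptionStream.
# The real implementation self-encrypts data; this only splits bytes.
CHUNK_SIZE = 4  # tiny chunk size for illustration only

class ToyEncryptionStream:
    def __init__(self, data: bytes, batch_size: int):
        self._chunks = [data[i:i + CHUNK_SIZE]
                        for i in range(0, len(data), CHUNK_SIZE)]
        self._batch_size = batch_size
        self._next = 0

    def total_chunks(self) -> int:
        return len(self._chunks)

    def next_batch(self):
        """Return the next batch of chunks, or None when exhausted."""
        if self._next >= len(self._chunks):
            return None
        batch = self._chunks[self._next:self._next + self._batch_size]
        self._next += self._batch_size
        return batch

stream = ToyEncryptionStream(b"abcdefghij", batch_size=2)
assert stream.total_chunks() == 3            # 4 + 4 + 2 bytes
assert stream.next_batch() == [b"abcd", b"efgh"]
assert stream.next_batch() == [b"ij"]
assert stream.next_batch() is None
```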
- `DataMapChunk` field visibility changed from `pub(crate)` to `pub`, making the inner `Chunk`
publicly accessible.
- Enhanced error handling and retry mechanisms for chunk upload operations through improved helper
functions.
- File upload workflow now uses the same approach as directory uploads for consistency.
- Improved streaming encryption support with updated self-encryption dependency integration.
- Enhanced language usage in user-facing messages for better clarity across client operations.
- Unified approach for `Pointer` and `Scratchpad` split resolution through the
`resolve_split_records` function.
- Reduced the `IN_MEMORY_ENCRYPTION_MAX_SIZE` threshold to 50MB for improved memory management
during encryption operations.
- Streaming download capability in high-level file operations for `file_download`,
`file_download_public`, `dir_download_public` and `dir_download`. Allows downloading larger files
without the spikes in memory usage experienced previously.
- The streaming capability results in a new datamap format that requires four extra chunks. If
there is an attempt to re-upload files uploaded before the streaming implementation, there will be
a cost for these extra chunks.
- The new datamap format always returns a root datamap that points to three chunks. These three
extra chunks will now be paid for in uploads.
- Vault operations now properly support single file uploads and access.
- If there were failed chunks in the final batch of an upload, they were not retried. This has now
been fixed with improved error handling.
- Deduplication logic for fetched scratchpads with identical highest counter values.
Python:
- `AttoTokens` class with methods: `zero()`, `is_zero()`, `from_atto()`, `from_u64()`,
  `from_u128()`, `as_atto()`, `checked_add()`, `checked_sub()`, `as_bytes()`, `from_str()`,
  `__str__()`, `__repr__()`
- `ClientOperatingStrategy` class with getters: `get_chunks()`, `get_graph_entry()`,
  `get_pointer()`, `get_scratchpad()`
- `BootstrapCacheConfig` class with configuration methods: `new()`, `empty()`,
  `with_addr_expiry_duration()`, `with_cache_dir()`, `with_max_peers()`, `with_addrs_per_peer()`,
  `with_disable_cache_writing()`
- `InitialPeersConfig` class with peer management methods: `new()`, getters/setters for `first`,
  `addrs`, `network_contacts_url`, `local`, `ignore_cache`, `bootstrap_cache_dir`,
  `get_bootstrap_addr()`, `read_bootstrap_addr_from_env()`
- `MainPubkey` class with methods: `new()`, `verify()`, `derive_key()`, `as_bytes()`, `as_hex()`,
  `from_hex()`, `__str__()`, `__repr__()`
- `MainSecretKey` class with methods: `new()`, `public_key()`, `sign()`, `derive_key()`,
  `to_bytes()`, `random()`, `random_derived_key()`, `__repr__()`
- `Signature` class with methods: `parity()`, `from_bytes()`, `to_bytes()`, `__str__()`,
  `__repr__()`
- `StoreQuote` class with methods: `price()`, `len()`, `is_empty()`, `payments()`
- `RetryStrategy` class with methods: `none()`, `quick()`, `balanced()`, `persistent()`,
  `default()`, `attempts()`, `backoff()`, `__str__()`, `__repr__()`
- `Quorum` class with string representation methods
- `Strategy` class with getters: `get_put_quorum()`, `get_put_retry()`,
  `get_verification_quorum()`, `get_get_quorum()`, `get_get_retry()`
- `QuoteForAddress` class with `price()` method
- `RegisterAddress` class with methods: `new()`, `owner()`, `as_underlying_graph_root()`,
  `as_underlying_head_pointer()`, `as_hex()`, `from_hex()`
- `DerivationIndex`, `DerivedPubkey`, `DerivedSecretKey` classes for key derivation functionality
- Enhanced `ChunkAddress` class with `xorname()` and `from_hex()` methods
- Enhanced `TransactionConfig` class with `new()` constructor and `max_fee_per_gas` getter
Node.js:
- Complete ant-node package with network spawning capabilities
- Python `get_bootstrap_addr()` method updated to match original Rust API changes
- Python `cache_save_scaling_factor` return type corrected from `u64` to `u32` to match the Rust
API
- Python `PyTransactionConfig` class fixes for proper configuration handling
- New metrics: `antnode_branch`
- Improved logging for query response types to aid in network debugging and monitoring. This will
help us measure the success of the next node upgrade.
- Race condition in local network startup where the bootstrap cache was never written with newer
addresses.
- Issue with the bootstrap cache where newer nodes were not updated when the cache became full.
- Expected holder calculation now properly capped to a majority of `CLOSE_GROUP_SIZE` for improved
consensus reliability.
- Replication accept range expanded from 5 to 7 nodes for middle range records to improve data
availability.
- The `local run` command will now automatically provide the EVM setup for a local testnet without
having to run the `evm-testnet` binary separately.
- The `local run` command has extra wait time before launching a second node to prevent startup
conflicts and improve reliability.
- The `evm-testnet` binary will be included in each release. This will make it easier to work with
local testnets, and we will also use it in our CI processes.
- Previously, the `file download` command required RAM in proportion to the size of the file being
downloaded, making it prohibitive to download large files. The command now utilises the new
streaming features that keep memory usage low and consistent, so larger files can be downloaded
without issue.
- Logging is restored for events in the `ant` binary after it was inadvertently disabled.
- In some cases the chunk cache was not correctly cleared after a download.
- New `chunk_cache` module that provides a mechanism for caching downloaded chunks.
- Use the chunk cache for downloads to enable resuming failed downloads.
- Public files without archives had their content downloaded twice
- For the client connection, nodes that do not identify as KAD will not be added to the routing table. The client's routing table included nodes that were not upgraded, and these nodes were not identifying themselves as KAD nodes. If any of those older, non-KAD nodes were returned in a query for the closest peers, this resulted in no close peers being obtained. These older nodes did not identify themselves as KAD due to the removal of the external address manager. Having them in the routing table then had cascading effects, resulting in failed downloads and uploads. Excluding them using a block list restores reliable uploads and downloads. These older nodes already constitute a small percentage of the network and will eventually be filtered out with more upgrades.
- The `file download` command now supports resuming downloads. The command will attempt to fetch
all the chunks for a file, and in doing so, they will be saved to a temporary location on the
local disk. If there's a failure to retrieve some chunks, users can run the same command again and
it will only attempt to download the missing chunks. When all the chunks have been retrieved and
the file is reassembled, the cached chunks will be deleted.
- The `file download` command supports a `--disable-cache` argument, if for some reason users want
to disable the caching behaviour that applies by default.
- When connecting to the network, the client will now use the local bootstrap cache if it exists.
If it doesn't exist, the initial connection will use a set of pre-defined bootstrap servers to
obtain a peer list, and the cache will then be written periodically. This improves
decentralization.
- The `ScratchpadError` type has a new `Fork` variant. Previously, when there was a forked
scratchpad with two or more scratchpads at the same version, the API would only return one of
them, meaning a merge couldn't be performed correctly. Now when this situation occurs, the `Fork`
error is returned along with all the scratchpad versions, which can then be used for merge and
conflict resolution.
- Reintroduce the external address manager. The removal of this component caused an issue with clients whereby they sometimes couldn't communicate with nodes, though node-to-node communication was fine. This resulted in problems such as randomly failing to retrieve chunks during downloads, and it also affected emissions payments, because the client in the emissions service wasn't communicating with certain types of nodes. It seemed that port-forwarded nodes were the most affected. The removal of the external address manager was based on the assumption that addresses could be obtained from the connection information, but we suspect the libp2p client doesn't have that part of the code. Reintroducing the component resolves emissions for nodes configured with port-forwarding and should also very significantly improve the situation with uploads and downloads.
- `RetryStrategy::N(usize)`: new retry strategy for data operations that allows specifying a
custom retry count.
- Extend libp2p client substreams timeout to 30 seconds. This should allow a client with a poor connection to upload larger records with a higher success rate.
- Enhanced logging and progress tracking for chunk data operations.
- Improved error handling and retry mechanisms for chunk operations.
- Paths in archives now use forward slashes on all platforms for cross-platform compatibility.
- The bootstrap peer cache is changed to use a simple FIFO mechanism to maintain the cache, rather
than attempting to track the reliability of a peer. There is a `--write-older-cache-files`
argument provided for backwards compatibility. This enables the peer cache servers to still
provide the bootstrap cache in the old format until everyone has upgraded.
- The `libp2p` library was upgraded from `0.55.0` to `0.56.0`. The main benefit of this was to
enable the request/response model for uploads on the client.
- The
`ant-networking` code was moved to a module within the `ant-node` crate, and in turn
`ant-networking` was removed. This enabled the refactor and simplification of network
initialisation, and it also opens the door for further refactors. The networking code is now much
more maintainable.
- The node's external address manager was removed. This component was responsible for advertising a node's address to others, but we now favour obtaining the address from the connection information, which is more accurate and less error prone.
- The `file download` command supports a `--retries` argument that allows the user to specify a
custom retry count for pulling chunks. If you are on a slower connection, you can consider trying
a value like `20`, and you should see better and more consistent downloads.
- Use a request/response model for storing records on the network. This was a feature enabled by
the `libp2p` upgrade, and in internal testing it significantly improved the speed of uploads. This
is because KAD requests in `libp2p` are not as well optimised as request/response.
- The output of the `file download` command was enhanced to use text to show chunks being
obtained. The purpose is to provide the user with more feedback that progress is being made on the
download.
- The progress bar was removed from the `file download` command in favour of text output that
provides more informative feedback. The bar was only useful for downloads with multiple files and
did not make sense for single file downloads. Better progress indicators will be added as later
enhancements.
Key changes are the new networking module, the `Quorum` type replacement, and the pointer counter
expansion to `u64`.
- Networking:
  - `pub mod networking`: new network module.
  - `networking::Quorum`: enum for consensus operations.
  - Network driver, retry strategies, and utilities.
- Enhanced transaction configuration with the new `MaxFeePerGas` type.
- Type updates:
  - `ResponseQuorum` → `networking::Quorum` (breaking change)
  - Pointer counter type: `u32` → `u64` (with backward compatibility)
- Error handling improvements:
  - Enhanced `PointerError` error variants, e.g., `PutError`, `GetError`.
  - More specific error handling in pointer operations.
- Internal infrastructure:
  - Enhanced networking layer with better retry mechanisms.
  - Improved put/get record operations with retries.
  - Better split record handling for pointers.
- `Client::pointer_check_existance()` → `Client::pointer_check_existence()` (typo fix)
- A peer's address is obtained from its connection info rather than self-advertised addresses from identify requests. The earlier method was incorrect and error prone.
- Peers are now added to the routing table only if they can be dialled back after 180 seconds,
giving enough time for the UDP mapping to expire. Dialling back immediately would always succeed
even if the peer was not externally reachable. When the routing table contains more of these
reachable peers, network health improves, which should subsequently lead to better performance and
reliability.
- Introduce a `DoNotDisturb` behaviour to fix an edge case with the 180-second delayed dial-back
queue, where a peer would get re-added if it sent constant requests. This would happen if the peer
thinks we are close and spams periodic messages.
- The `ant` binary now has a `scratchpad` subcommand for working with scratchpads. These are a
mutable 4MB blob of memory on the network. An initial payment is made to create one, but all
further mutations are free. Personal user vaults have been implemented with scratchpads, and some
people have been using them for chat applications.
- The `ant` binary now has a `pointer` subcommand for managing pointers. Pointers enable building
mutable data structures by providing authenticated, updatable references that only the owner can
modify. The available subcommands are:
  + `cost`: calculate the payment required before creating pointers
  + `create`: point to different data types (chunks, graphs, scratchpads, other pointers) that can
    be updated over time
  + `edit`: update an existing pointer reference
  + `generate-key`: create cryptographic keys for owning and updating pointers
  + `get`: retrieve the target of a pointer from the network
  + `list`: view all pointers controlled by the current user
  + `share`: give others read/write access by sharing pointer secret keys
The client has fundamentally changed with a large refactor we call 'light client networking'. The
previous networking implementation was complex and shared between both node and client, leading to
all kinds of exceptions and special cases in the code. After the refactor, the client has its own,
simpler networking implementation, and the client and node networking can now evolve
independently.
This refactor made it easier to make other changes that featured in the release:
- Retry strategies were adapted to avoid retrying without reason.
- Connect to peers in advance when performing libp2p put queries.
- No periodic network discovery.
- Improve connection success rate by using libp2p's `add_address` rather than dialling.
- Use a batched upload flow to reduce payment rejection resulting from quote expiration.
All these changes led to the following observations in our testing:
- Improved performance and throughput for uploads.
- Improved reliability and elimination of errors in uploads that were not related to payments. We do still see some errors related to gas prices on the Ethereum network, but these would hopefully be mitigated with retrying.
- Improved reliability and reduction of errors in downloads.
The download performance had parity with the current stable release.
There should be further improvements coming for the next release. In particular we are waiting on a
new libp2p release which will have a feature contributed by us that should further improve
performance. We've been told by the libp2p team that this release is now forthcoming.
- Provide a 3-minute timeout for NAT detection. If not successful, relaying will be used.
- Correctly obtain the EVM network on the `wallet balance` command. The change for the `--alpha`
flag unintentionally introduced a regression that resulted in not being able to obtain the balance
unless the EVM network was explicitly set. It will now be correctly selected based on the network
ID.
- Display a dialog to indicate NAT detection is running when a new node service is requested. Without this, the launchpad appeared to be unresponsive.
- Introduce a check for the latest version. To encourage upgrades, a dialog will now appear to indicate the availability of a new version.
- Improve the grammar of some text used on the `Options` panel.
- The `antnode` binary now provides an `--alpha` flag argument. When used, it will connect the
node to the alpha network and use Arbitrum Sepolia as the EVM provider.
- Provide an `init_alpha` function on the `Client`. It will return a client that is initialised
specifically for the alpha network.
- Provide a `set_register_key` function on `UserData`. It sets the register key and returns the
old one if it was already set.
- Provide a `display_stats` function on `UserData` to print out the current user data.
- Use the same set of payees for verification of quotes. This improves upload success rate.
- The `ant` binary now provides an `--alpha` flag argument. When used, it will connect the client
to the alpha network and use Arbitrum Sepolia as the EVM provider.
- Synchronise the register signing key in the vault.
- Correct the formatting of `AttoTokens` from 32 to 18 decimal places.
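The fix above can be illustrated with a hypothetical formatter: one atto unit is 10^-18 of a whole
token, so a balance must be rendered with 18 decimal places rather than 32. This sketch is not the
actual implementation; the function name is illustrative.

```python
# Hypothetical illustration of the AttoTokens formatting fix: an atto
# unit is 10**-18 of a token, so format with 18 decimal places.
ATTO_PER_TOKEN = 10 ** 18

def format_atto(amount: int) -> str:
    whole, frac = divmod(amount, ATTO_PER_TOKEN)
    return f"{whole}.{frac:018d}"  # zero-pad the fraction to 18 digits

assert format_atto(1) == "0.000000000000000001"
assert format_atto(1_500_000_000_000_000_000) == "1.500000000000000000"
```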
- Peers are dialled before a put record request. This improves upload success rate.
- Provide backwards compatibility in the form of reading old node registries that do not have new fields.
- The `add` command now provides an `--alpha` flag argument. When used, the node services will
connect to the alpha network.
- Several new classes were added to the Python bindings:
  - `Chunk`
  - `ClientEvent`
  - `ClientEventReceiver`
  - `DataTypes`
  - `PaymentQuote`
  - `QuotingMetrics`
  - `Receipt`
  - `StoreQuote`
  - `UploadSummary`
- The Python `Client` class also has several methods added:
  - `init_alpha`
  - `enable_client_events`
  - `evm_network`
  - `file_content_upload`
  - `file_content_upload_public`
  - `get_raw_quotes`
  - `get_store_quotes`
  - `pointer_verify`
  - `scratchpad_verify`
  - `upload_chunks_with_retries`
- Disable `libp2p` `disjoint_query_path`. This improves resolving the closest nodes in the
network.
- Reduce logging by changing the level of some messages. These were generating a lot of traffic
and making life difficult for our ELK setup.
These changes were implemented in the API but are also manifest in the `ant` client.
- The "Paying for X chunks" output was moved and added to the payment process.
- The number of quotes we attempt to obtain in parallel is reduced to the value of
`CHUNK_UPLOAD_BATCH_SIZE` multiplied by `8`, and capped at `128`. Recently the default value of
`CHUNK_UPLOAD_BATCH_SIZE` was changed to `1`, so in turn the new default for how many quotes we
obtain in parallel is significantly reduced. This works much better for poorer connections. Users
with better connections can experiment with slightly larger values for `CHUNK_UPLOAD_BATCH_SIZE`.
- The `FILE_UPLOAD_BATCH_SIZE` variable now defaults to `1` rather than being based on the number
of available threads. This means when a directory is being uploaded, only a single file will be
uploaded at a time. This proved to be much better for poorer connections. Users with better
connections can experiment and adjust the value as they see fit; for easier control we will
probably add it as an argument on the `file upload` command.
- The "Paying for X chunks" output is changed to "Quoting for X chunks". The previous message was
misleading because the payment doesn't take place until the chunk is uploaded.
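The parallel-quote rule described above reduces to a one-line formula. This is a sketch of the
stated behaviour, not the actual implementation, and the function name is illustrative:

```python
# Sketch of the described quote-parallelism rule: batch size times 8,
# capped at 128. The function name is illustrative, not a real constant.
def parallel_quote_limit(chunk_upload_batch_size: int) -> int:
    return min(chunk_upload_batch_size * 8, 128)

assert parallel_quote_limit(1) == 8     # new default batch size of 1
assert parallel_quote_limit(8) == 64
assert parallel_quote_limit(32) == 128  # capped
```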
- Obtaining quotes will now have retries when there is a failure resolving the closest nodes.
- Increase the default query timeout from 60 to 120 seconds. On the production network we need more time for queries.
- While refreshing the routing table the node adds itself as one of the closest targets. This improves network discovery.
- A strict condition based on the `identify` agent info has now been removed. This will allow us
to use this field to transfer node/client version information.
- The node relay server is now only enabled if the node is detected as public and is not a relay
client. Previously it was not disabled, but rather unused.
- Change logging output for `NetworkAddress` to assist in investigating issues using ELK.
- To reduce resource usage, peer versions are only filtered when metrics are updated.
- Several improvements for reducing resource usage:
- Peer versions are only filtered when metrics are updated.
- While refreshing the routing table we avoid unnecessary random generation of indexes for picking closest candidates.
- Reduce the frequency of network discovery.
- While refreshing the routing table we fill up as many of the closest targets among empty buckets as possible. This improves network discovery.
- For using `evm-testnet` in a remote setup, some code for instructing Anvil to read the binding
address from the `ANVIL_IP_ADDR` variable had been removed. It has been restored to recover this
functionality.
- Added some margin to the max chunk size to account for encryption and compression overhead. In
rare cases, valid chunks were considered too big.
- Improve network discovery by avoiding holes between full buckets.
- Eliminate the chance that the network discovery `round_robin_index` could stall due to an edge
case.
- Avoid division by zero errors in an edge case with empty buckets.
- We can now set `network_id` via the client API. This allows for programming against different
networks rather than just mainnet.
- Public function `get_raw_quotes` to get quotes from nodes without the market prices from the
smart contract.
- Public function `get_closest_to_address` to get the closest peers to an address.
- Various improvements in the client API for better usability.
- Documentation was moved from the autonomi repository to https://github.com/maidsafe/docs.
- Archive serialization was made backwards compatible.
- Issue that prevented pointer mutation in some cases.
- The pointer `xorname` function now returns its own address rather than the target.
- Documentation for behaviour of the Scratchpad in Python bindings.
- Documentation and examples for using wallets in Python bindings.
- Initial bindings for NodeJS have been provided using `napi-rs`.
- Provide an `analyze` command, which queries the details of an address on the network. This
should help developers debug their apps and understand the network.
- Provide a `register history` command, which is useful for showing how a register's content has
changed over time.
- The `register` `create`, `edit` and `get` commands now provide a `--hex` argument for working
with hex addresses.
- The `file download` command now supports downloading from a data map or public address.
- The `file upload` command has extra output for indicating uploaded chunks.
- File uploads and downloads now emit unique error codes for different error scenarios.
- Provided documentation for connecting to a custom network.
- Individual node management.
- Improve logging for the addition and removal of peers from the routing table.
- Enhanced strategy for refreshing the node's routing table that aims to maintain an accurate picture of the network. It incorporates periodic liveness checks and will remove inactive nodes.
- Incorporate the use of distance range to verify payments. The payee could have been blocked or churned out, but we should still consider the payment valid if the payee is close enough.
- Stop logging too many faults on the node listeners. This produced a lot of spam in the logs.
- Issue with version upgrades not being detected correctly during periodic version checks.
- Do not add client peers to the routing table.
- Only check the expiry date on quotes from the current node. Checking the date on a quote from another node can fail due to differences in the operating system's clock.
- During re-attempts for requests, when the address of the target is not provided, e.g., in replication-related requests, the addresses will be provided from the local node.
- Increase the timeout for the query that obtains the closest nodes, from 10 seconds to 60 seconds. This is currently necessary because our routing table refresh is not optimised to purge dead nodes, and those dead entries cause the query to take more time. With more time, the query result is better and thus the upload and download performance is improved.
- Decrease the time between some retries during uploads. In many cases, the larger interval will not help. This allows the uploading process to fail faster if need be.
- Log the `evmlib` crate by default.
- When a node receives a payment from a client, failed payment verification will be retried after
5 seconds. The client and node could have queried different EVM nodes that were not yet
synchronised, which could have resulted in a chunk proof verification error on the client.
- During payee verification, use closest peers to target, rather than self. In some edge cases, the latter could cause payment to be rejected and result in a chunk proof verification error on uploads.
- Improve the efficiency of network discovery by handling an edge case where there is a 'hole' in the routing table.
- A peer will be dialled before sending it a request. This helped in the elimination of 'not enough quotes' errors.
- When obtaining peers, use `get_closest_local_peers` rather than `find_closest_local_peers`. This
helped eliminate chunk proof verification errors.
- Add missing class exports for several Python bindings.
- Use a single error variant for 'not enough quotes' error. This facilitated easier internal testing when investigating the errors.
- Various changes improved the efficiency of obtaining quotes for uploads:
- Use a cloned network to increase parallelism.
- Use 10 seconds for query timeout, rather than the default 60 seconds.
- Use redials only during reattempts to avoid unnecessary timeout.
- All content addresses across files are merged into a single call for obtaining the quotes.
- The client no longer dials back when it receives an identify request; it has to assume nodes are OK. This may help to reduce open connections.
- Do not fetch mainnet contacts when the `--testnet` argument is used.
- When `UPnP` was selected, nodes would be started using `Manual` mode. They will now start as
expected when `UPnP` is used.
- The node outputs critical startup and runtime failures to a `critical_failure.log` file. This is
to help `antctl` feed failure information back to the user, but it should hopefully also be useful
to more advanced users.
- New metrics:
  - `connected_relay_clients`
  - `relay_peers_in_routing_table`
  - `peers_in_non_full_buckets`
  - `relay_peers_in_non_full_buckets`
  - `percentage_of_relay_peers`
- We also add a `node_versions` metric. This will be used to help us gauge what versions of nodes
are present in the network and how many nodes have upgraded to the latest releases. It will also
assist us in ensuring backward compatibility.
- The network bootstrapping process is changed to dial three of the initial peer addresses rather than all of them concurrently. When the routing table reaches five peers, network discovery takes over the rest of the bootstrapping process, and no more peers are dialled. This mechanism is much more efficient and avoids overloading the peers in the bootstrap cache.
- Network discovery rate has been increased during the start up phase, but it should slow down exponentially as more peers are added to the routing table.
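The bootstrap handover described above (dial at most three initial peers, then let discovery take
over once the routing table holds five peers) can be sketched as a toy simulation. All names,
constants as written here, and the omitted discovery step are illustrative assumptions, not the
real implementation:

```python
# Toy simulation of the described bootstrap flow: dial at most three of
# the initial peer addresses; once the routing table reaches five peers,
# network discovery (not modelled here) takes over and no more are dialled.
DIAL_LIMIT = 3
DISCOVERY_THRESHOLD = 5

def bootstrap(initial_peers):
    routing_table = []
    dialled = 0
    for peer in initial_peers:
        if dialled >= DIAL_LIMIT or len(routing_table) >= DISCOVERY_THRESHOLD:
            break  # stop dialling; discovery fills the table from here
        routing_table.append(peer)  # assume the dial succeeds
        dialled += 1
    return routing_table, dialled

table, dialled = bootstrap([f"peer-{i}" for i in range(10)])
assert dialled == 3      # only three initial peers are dialled
assert len(table) == 3   # the rest is left to network discovery
```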
- Several items aim to address uploading issues:
  - Avoid deadlocks on record store cache access
  - Do not fetch from the network when a replication fetch failed
  - Lower the number of parallel replication fetches
  - Issues that come in too quickly will not trigger an extra action
  - Disable the current block list (possibly to be re-enabled when we have more data)
  They may also help reduce open connections and `libp2p` identify attempts.
- Remove relay clients from the swarm driver tracker if the reservation has been closed.
- The `peers_in_rt` metric is improved by calculating it directly from kbuckets rather than using
`libp2p` events.
- Support uploading files with empty metadata
- Several file-related functions were renamed [BREAKING]:
  - `dir_upload` to `dir_content_upload`
  - `dir_and_archive_upload` to `dir_upload`
  - `file_upload` to `file_content_upload`
  - `dir_upload_public` to `dir_content_upload_public`
  - `dir_and_archive_upload_public` to `dir_upload_public`
  - `file_upload_public` to `file_content_upload_public`
- Improved address management to make it easier to use [BREAKING]:
  - All address types have the same methods: `to_hex` and `from_hex`.
  - All public-key addressed data types have the public key in their address.
  - High-level `DataAddress` shares the values above instead of the low-level `XorName` that can't
    be constructed from hex.
  - Python now uses accurate addresses instead of clunky hex strings, and addresses for other
    types.
  - Fix inaccurate/missing Python bindings for addresses: now all have `to_hex` and `from_hex`.
- Support merging one archive into another.
- Introduce a maximum limit of 0.2 Gwei on the gas price when uploading files or creating/editing
registers. If the gas price exceeds this value, operations will be aborted. The commands provide a
`--max-fee-per-gas` argument to override the value. This measure has been taken to avoid
involuntarily paying excessive fees when the gas price fluctuates.
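The guard described above amounts to a simple threshold check: 0.2 Gwei is 2 × 10^8 wei, and the
override corresponds to the `--max-fee-per-gas` argument. This is a hedged sketch of the stated
behaviour with illustrative names, not the actual implementation:

```python
# Sketch of the described gas-price guard: abort when the current price
# exceeds 0.2 Gwei (2 * 10**8 wei) unless the user raises the limit.
# Names here are illustrative, not the real code.
DEFAULT_MAX_FEE_PER_GAS_WEI = 200_000_000  # 0.2 Gwei

def check_gas_price(current_wei, override_wei=None):
    """Return True if the operation may proceed at this gas price."""
    limit = override_wei if override_wei is not None else DEFAULT_MAX_FEE_PER_GAS_WEI
    return current_wei <= limit

assert check_gas_price(150_000_000) is True                        # below cap
assert check_gas_price(300_000_000) is False                       # aborted
assert check_gas_price(300_000_000, override_wei=500_000_000) is True
```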
- The `ant file download` command can download directly from a `XorName`.
- The `ant file download` command can download data directly from a `DataMapChunk` to a file.
- A `--no-upnp` flag to disable launching nodes with UPnP.
- A failure column is added to the `status` command.
- The `add` command will create services that will launch the node with `--upnp` by default. For
home networking we want to encourage people to use UPnP rather than relaying.
- The `add` command does not apply the 'on failure' restart policy to services. This is to prevent
the node from continually restarting if UPnP is not working.
- The `--home-network` argument has been renamed `--relay` [BREAKING].
- A debug logging statement used during the upgrade process caused an error if there were no nodes in the node registry.
- New column in the nodes panel for node failure reason.
- New column in the nodes panel to indicate UPnP support.
- New column in the nodes panel showing the connection mode chosen by `Automatic`.
- Remove `Home Network` from the connection modes. Relay can only be selected by using `Automatic`
  in the case where UPnP fails. We are trying to avoid the use of relays when UPnP is available.
- Removed encrypt data compile time flag (now always on).
- Refactor of data types.
- Removed the default trait for `QuotingMetrics`; it is now initialized with the correct values
  everywhere.
- Compile UPnP support by default; it will still require `--upnp` when launching the node to
  activate.
- Removed the old flawed `Register` native data type.
- Created `DataTypes` as the sole place to list the network's natively supported data types, and
  used it to replace the existing `RecordKind`.
- Renamed `RecordType` to `ValidationType`.
- Removed mDNS. Local nodes will bootstrap via the peer cache mechanism.
- Upgraded `libp2p` to `0.55.0` and use some small configuration changes it makes available.
- `GraphEntry` native data type as a generic graph for building collections.
- `Pointer` native data type that points to other data on the network.
- Relay client events to the metrics endpoint.
- Relay reservation score to the metrics endpoint. This measures the health of a relay server we
  are connected to, by tracking all the recent connections that were routed through that server.
- Allow overriding the QUIC max stream window with `ANT_MAX_STREAM_DATA`.
- Added an easy way to spawn nodes or an entire network from code, with
  `ant_node::spawn::node_spawner::NodeSpawner` and
  `ant_node::spawn::network_spawner::NetworkSpawner`.
- Added a `data_type` verification when receiving records with proof of payment.
- Added extra logging around payment verification.
- Made `QuotingMetrics` support data-type variant pricing.
- Avoid free upload via replication.
- The External Address Manager will not consider an `IncomingConnectionError` that originates from
  multiple dial attempts to be a serious issue. The `MultiAddressNotSupported` error is not
  considered critical if the error set contains at least one different error.
- The record count metric is now set as soon as a node is restarted.
- Push our Identify info if we make a new reservation with a relay server. This reduces the number
  of `CircuitReqDenied` errors throughout the network.
- All connection errors are now handled more forgivingly and do not result in a peer being evicted
  from the routing table immediately. These errors are tracked, and action is taken only if we go
  over a threshold.
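The threshold-based error handling can be sketched as follows. The tracker, names, and threshold
value are illustrative assumptions, not the node's actual implementation.

```rust
use std::collections::HashMap;

// Hypothetical sketch of threshold-based connection error handling: a peer
// is only evicted after repeated errors, never on the first one.
const ERROR_THRESHOLD: usize = 3; // assumed value, for illustration

#[derive(Default)]
struct ErrorTracker {
    errors: HashMap<String, usize>,
}

impl ErrorTracker {
    /// Record a connection error; returns true once the peer crosses the
    /// threshold and should be considered for eviction.
    fn record_error(&mut self, peer: &str) -> bool {
        let count = self.errors.entry(peer.to_string()).or_insert(0);
        *count += 1;
        *count >= ERROR_THRESHOLD
    }

    /// A successful connection clears the peer's error history.
    fn record_success(&mut self, peer: &str) {
        self.errors.remove(peer);
    }
}

fn main() {
    let mut tracker = ErrorTracker::default();
    // Two transient errors are tolerated...
    assert!(!tracker.record_error("peer-a"));
    assert!(!tracker.record_error("peer-a"));
    // ...but the third crosses the threshold.
    assert!(tracker.record_error("peer-a"));
    tracker.record_success("peer-a");
    println!("ok");
}
```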
- Only replicate fresh uploads to other payees.
- During quoting re-attempts, use a non-blocking sleep instead of blocking the thread.
- Updated Python bindings and docs. Added initial Node.js TypeScript integration.
- Updated the test suite and added comprehensive documentation.
- Deprecated storing register references in user data.
- Correctly report on chunks that were already uploaded to the network when syncing or re-uploading the same data.
- Added a version field to the archive data structure for backwards compatibility, and added
  forward-compatible serialization to file metadata.
- Changed the default EVM network to Arbitrum One.
- Removed the deprecated `Client::connect` function. Please use `Client::init` instead.
- Removed the old `Register` native data type, although the new `Register` high level type does
  the same job but better.
- Removed the feature flags and the complexities around them; everything is now configurable at
  runtime (no need to recompile).
- NodeJS/Typescript bindings.
- 36 different configurations for publishing Python bindings.
- Client examples.
- Added an `evm_network` field to the client config.
- Added a better retry strategy for getting market prices and sending transactions. This reduces
  the frequency of RPC-related upload errors significantly.
- Added a `data_type` verification when receiving quotes from nodes.
- Client API for all four data types: `Chunk`, `GraphEntry`, `Scratchpad`, `Pointer`.
- High level `Register` data type that works similarly to the old registers but without the update
  limit they had: now infinitely mutable.
- Key derivation tooling.
- Rust optimization: Use parallelised chunk cloning in self encryption.
- Deterministically serialize archives. This leads to de-duplication and less payments when syncing folders and files.
- Patched and refactored client Python bindings to reflect almost the whole Rust API.
- The EVM network uses the default if not supplied by an environment variable.
- Event receiver panic after completing client operations.
- Use a balanced retry strategy for downloading chunks. Sometimes we wouldn't find a chunk if we
  tried to retrieve it on the first attempt, so as with uploads, we will use a balanced retry
  strategy for downloads. This should make the `ant file download` command more robust.
- Remove unallocated static IP from the bootstrap mechanism. We have five static IP addresses
  allocated to five hosts, each of which runs nodes and a minimal web server. The web server makes
  a list of peers available to nodes and clients to enable them to join the network. These static
  IP addresses are hard-coded in the `antnode` and `ant` binaries. It was discovered we had
  accidentally added six IPs and one of those was unallocated. Removing the unallocated IP should
  reduce the time to connect to the network.
- Reduce the frequency of metrics collection in the node's metrics server, from fifteen to sixty seconds. This should reduce resource usage and improve performance.
- Do not refresh all CPU information in the metrics collection process in the node's metrics server. Again, this should reduce resource usage and improve performance.
- Remove the 50% CPU usage safety measure. We added a safety measure to the node to cause the process to terminate if the system's CPU usage exceeded 50% for five consecutive minutes. This was to prevent cascading failures resulting from too much churn when a large node operator pulled the plug on tens of thousands of nodes in a very short period of time. If other operators had provisioned to max capacity and not left some buffer room for their own nodes, many other node processes could die from the resulting churn. After an internal discussion, the decision was taken to remove the safety measure.
- Removed the `uploaded` timestamp from archive metadata to prevent unnecessary re-uploads when
  archive contents remain unchanged. This ensures we do not charge when uploading the same file
  more than once on `ant file upload`.
- Switched from `HashMap` to `BTreeMap` for archives to ensure deterministic serialization, which
  also prevents unnecessary re-uploads. As above, this facilitates the fix for the duplicate
  payment issue.
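A minimal sketch of why the `BTreeMap` switch gives deterministic serialization. The
`archive_repr` helper is hypothetical; real archives serialize more than a debug string.

```rust
use std::collections::BTreeMap;

// A BTreeMap iterates in sorted key order regardless of insertion order, so
// an archive backed by one serializes identically for identical contents.
fn archive_repr(entries: &[(&str, &str)]) -> String {
    let map: BTreeMap<&str, &str> = entries.iter().cloned().collect();
    format!("{map:?}")
}

fn main() {
    let first = archive_repr(&[("beta.txt", "chunk-2"), ("alpha.txt", "chunk-1")]);
    let second = archive_repr(&[("alpha.txt", "chunk-1"), ("beta.txt", "chunk-2")]);
    // Different insertion orders, identical serialized form; a HashMap
    // offers no such guarantee, which is what caused spurious re-uploads.
    assert_eq!(first, second);
    println!("ok");
}
```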
- Network discovery no longer queries the farthest full buckets. This significantly reduces the number of messages as the network grows, resulting in fewer open connections and reduced resource usage.
- Memory and CPU metrics use more precise `f64` measurements.
- Apply a timeout for EVM transactions. This fixes an issue where some uploads would freeze indefinitely.
- The `ant` CLI was not selecting its network consistently from the environment variable.
- Do not dial back when a new peer is detected. This resulted in a large number of open connections, in turn causing increased CPU usage.
- Remove the 'dial error' output on the `file upload` command.
- For a branding alignment that moves Safe Network to Autonomi, all crates in the workspace
  prefixed `sn-` were renamed with an `ant-` prefix. For example, `sn-node` was renamed
  `ant-node`.
- To further support this alignment, several binaries were renamed:
  - `autonomi` -> `ant`
  - `safenode` -> `antnode`
  - `safenode-manager` -> `antctl`
  - `safenode_rpc_client` -> `antnode_rpc_client`
- The location of data directories used by the binaries was changed from `~/.local/share/safe` to
  `~/.local/share/autonomi`. The same is true of the equivalent locations on macOS and Windows.
- The prefixes of metric names in the `safenode` binary (now `antnode`) were changed from `sn_` to
  `ant_`.
- Provide Python bindings for `antnode`.
- Generic `Transaction` data type.
- Upgraded quoting with smart-contract-based pricing. This makes pricing fairer, as more nodes are
  rewarded and there is less incentive to cheat.
- Upgraded data payments verification.
- New storage proof verification which attempts to avoid an outsourcing attack.
- RBS support: dynamic `responsible_range` based on a `network_density` equation estimation.
- Node support for the client's RBS `get_closest` query.
- More quoting metrics for a potential future quoting scheme.
- Implement bootstrap cache for local, decentralized network contacts.
- Increased the number of peers returned for the `get_closest` query result.
- The `SignedSpend` data type was replaced by `Transaction`.
- Removed `group_consensus` on `BadNode` to support RBS in the future.
- Removed the node-side quoting history check as part of the new quoting scheme.
- Renamed `continuous_bootstrap` to `network_discovery`.
- Convert `Distance` into `U256` via its output string. This avoids the need to access the
  `libp2p::Distance` private field because the change for it has not been published yet.
- For node and protocol versioning we removed the use of various keys in favour of a simple
  integer between `0` and `255`. We reserve the value `1` for the main production network.
- The `websockets` feature was removed from the node binary. We will no longer support the `ws`
  protocol for connections.
- Populate `records_by_bucket` during restart so that proper quoting can be retained after a
  restart.
- Scramble the `libp2p` native bootstrap to avoid a patterned spike of resource usage.
- Replicate fresh `ScratchPad` records.
- Accumulate and merge `ScratchPad` records on record get.
- Remove an external address if it is unreliable.
- Bootstrap nodes were being replaced too frequently in the routing table.
- Provide Python bindings.
- Support for the generic `Transaction` data type.
- Upgraded quoting with a smart contract.
- Upgraded data payments with new quoting.
- Retry failed PUTs. This will retry when chunks failed to upload.
- WASM function to generate a vault key from a wallet signature.
- Use the bootstrap cache mechanism to initialize the `Client` object.
- Exposed many types at the top level for more ergonomic use of the API, together with more
  examples of function usage.
- Deprecated registers for the client, planning on replacing them fully with transactions and pointers.
- Wait a short while for initial network discovery to settle before quoting or uploading tasks begin.
- Stress tests for the register features of the vault.
- Improved logging for vault end-to-end test cases.
- More debug logging for the client API and `evmlib`.
- Added support for adding a wallet from an environment variable if no wallet files are present.
- Provide a `wallet export` command to export a wallet's private key.
- Added and modified documentation in various places to improve developer experience.
- Renamed various methods so that the default is private uploading, while public methods have a
  `_public` suffix. Also various changes to allow more granular uploading of archives and data
  maps.
- Archives now store relative paths to files instead of absolute paths.
- The `wallet create --private-key` command has been changed to `wallet import`.
- Files now download to a specific destination path.
- Retry when the number of quotes obtained are not enough.
- Return the wallet from an environment variable rather than creating a file.
- Error when decrypting a wallet that was imported without the `0x` prefix.
- Issue when selecting a wallet that had multiple wallet files (unencrypted & encrypted).
- Added `--network-id` and `--antnode-path` args for testing.
- Make the native kad bootstrap interval more random, so that when running multiple nodes on one
  machine there is no resource usage spike at a fixed interval.
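One way to picture the randomised interval is jittering a fixed base. This is a sketch with
assumed values; a minimal LCG (Knuth's MMIX constants) stands in for whatever RNG the node
actually uses.

```rust
// Hypothetical sketch of jittering a fixed bootstrap interval so that many
// nodes on one machine do not all fire at once.
const BASE_SECS: u64 = 300; // assumed base interval
const JITTER_SECS: u64 = 120; // assumed jitter range

fn jittered_interval(seed: u64) -> u64 {
    // One step of a 64-bit linear congruential generator.
    let r = seed
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    // Each node lands somewhere in [BASE_SECS, BASE_SECS + JITTER_SECS).
    BASE_SECS + r % JITTER_SECS
}

fn main() {
    for seed in 0..5 {
        let secs = jittered_interval(seed);
        assert!(secs >= BASE_SECS && secs < BASE_SECS + JITTER_SECS);
    }
    println!("ok");
}
```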
- During a restart, the node builds a cache of locally restored records, which is used to improve the speed of the relevant records calculation. The restored records were not being added to the cache. This has now been corrected.
- Enable the `websockets` connection feature, for compatibility with the webapp.
- Reduce incorrect logging of connection errors.
- Fixed verification for crdt operations.
- Pick chunk-proof verification (for storage confirmation) candidates more equally.
- Display an error when Launchpad is not whitelisted on Windows devices.
- Ctrl+V can paste the rewards address in the pop-up section.
- Help section copy changed after beta phase.
- Update ratatui and throbber library versions.
- Display a starting status when nodes are not running.
- Support pre-paid put operations.
- Add the necessary WASM bindings for the webapp to be able to upload private data to a vault and fetch it again.
- Chunks are now downloaded in parallel.
- Rename some WASM methods to be more conventional for web.
- You can select a node. Pressing L will show its logs.
- The upgrade screen has an estimated time.
- Launchpad now uses multiple threads. This allows the UI to be functional while nodes are being started, upgraded, and so on.
- Mbps vs Mb units on status screen.
- Spinners now move when updating.
- Remove outdated record copies that cannot be decrypted. This is used when a node is restarted.
- The node will only restart at the end of its process if it has explicitly been requested in the RPC restart command. This removes the potential for creation of undesired new processes.
- Range search optimization to reduce resource usage.
- Trigger `record_store` pruning earlier. The threshold was lowered from 90% to 10% to improve
  disk usage efficiency.
- Derive node-side record encryption details from the node's keypair. This ensures data is retained in a restart.
- When paying for quotes through the API, the contract allowance will be set to ~infinite instead of the specific amount needed. This is to reduce the amount of approval transactions needed for doing quote payments.
- The `--rewards-address` argument is retained on an upgrade.
- Support for upgrading node versions
- Support for Ctrl+V on the rewards address
- More error handling
- Use a 5-minute interval between upgrades
- Help screen after beta
- New Ratatui version 0.29.0
- Private data support.
- Local user data support.
- Network Vault containing user data encrypted.
- Archives with Metadata.
- Prepaid upload support for data_put using receipts.
- Contract token approval amount set to infinite before doing data payments.
- Expose APIs in WASM (e.g. archives, vault and user data within vault).
- Uploads are not run in parallel.
- Support for local wallets.
- Provide a `wallet create` command.
- Provide a `wallet balance` command.
- Take metadata from the file system and add an `uploaded` field for the time of upload.
- Make sure we use the new client path throughout the codebase
- Get range used for store cost and register queries.
- Re-enabled large_file_upload, memcheck, benchmark CI tests.
- Scratchpad modifications to support multiple data encodings.
- Registers are now merged at the network level, preventing failures during update and during replication.
- Libp2p config and get range tweaks reduce intensity of operations. Brings down CPU usage considerably.
- Libp2p’s native kad bootstrap interval introduced in 0.54.1 is intensive, and as we roll our own, we significantly reduce the kad period to lighten the CPU load.
- Wipe node’s storage dir when restarting for new network
- Fixes in networking code for WASM compatibility (replacing `std::time` with a compatible
  alternative).
- Event dropped errors should not happen if the event is not dropped.
- Reduce outdated connection pruning frequency.
- The local node register is cleaned up when the `--clean` flag is applied (prevents some errors
  when a register changes).
- Status screen is updated after nodes have been reset.
- Rewards Address is required before starting nodes. User input is required.
- Spinner does not stop spinning after two minutes when nodes are running.
- The `websockets` feature is removed because it was observed to cause instability.
- PR #2281 was reverted to restore prior behaviour.
- The Discord username was replaced with the rewards address.
- Remove the reject terms and conditions pop-up screen.
Unfortunately the entry for this release will not have fully detailed changes. This release is
special in that it is very large and moves us to a new, EVM-based payments system. The GitHub
Release description has a list of all the merged PRs. If you want more detail, consult the PR
list. Normal service will resume for subsequent releases.
Here is a brief summary of the changes:
- A new `autonomi` CLI that uses EVM payments and replaces the previous `safe` CLI.
- A new `autonomi` API that replaces `sn_client` with a simpler interface.
- The node has been changed to use EVM payments.
- The node runs without a wallet. This increases security and removes the need for forwarding.
- Data is paid for through an EVM smart contract. Payment proofs are not linked to the original data.
- Payment royalties have been removed, resulting in less centralization and fees.
- Optimize auditor tracking by not re-attempting fetched spends.
- Optimize the auditor tracking function by using `DashMap` and streams.
- Increase chunk size to 4MB with node size remaining at 32GB.
- Bootstrap peer parsing in CI was changed to accommodate new log format in libp2p
- The `add` command has new `--max-log-files` and `--max-archived-log-files` arguments to support
  capping node log output.
- The Discord username on the `--owner` argument will always be converted to lower case.
- Increased logging related to app configuration. This could help solve issues on launchpad
  start up.
- Upgrade to Ratatui v0.28.1
- Styling and layout fixes
- Drives that don't have enough space are shown and flagged
- Error handling and generic error popup
- New metrics in the `Status` section
- Confirmation needed when changing connection mode
- NAT mode only on first start in `Automatic Connection Mode`
- Force Discord username to be in lowercase
- Disable node selection on status screen
- Change node size from 5GB to 35GB
- Increase node storage size from 2GB to 32GB
- The auditor now uses width-first tracking, to bring it in alignment with the new wallet.
- The client will perform quote validation to avoid invalid quotes.
- A new high-level client API, `autonomi`. The crate provides most of the features necessary to
  build apps for the Autonomi network.
- The node manager status command was not functioning correctly when used with a local network. The mechanism for determining whether a node was running was changed to use the path of the service process, but this did not work for a local network. The status command now differentiates between a local and a service-based network, and the command now behaves as expected when using a local network.
- In the main README for the repository, the four network keys were updated to reflect the keys being used by the new stable network.
- The circuit-bytes limit is increased. This enables `libp2p-relay` to forward large records, such
  as `ChunkWithPayment`, enabling home nodes to be notified that they have been paid.
- More logging for storage errors and setting the responsible range.
- The node's store cost calculation has had various updates:
- The minimum and maximum were previously set to 10 and infinity. They've now been updated to 1 and 1 million, respectively.
- We are now using a sigmoid curve, rather than a linear curve, as the base curve. The previous curve only grew steep when the storage capacity was 40 to 60 percent.
- The overall calculation is simplified.
- We expect the updates to the store cost calculation to prevent 'lottery' payments, where one node would have abnormally high earnings.
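A rough sketch of a sigmoid-based cost curve clamped to the stated bounds follows. The steepness
and midpoint constants are assumptions for illustration only; the real calculation lives in the
node crate.

```rust
// Hypothetical sketch of a sigmoid store cost curve with the stated
// [1, 1_000_000] bounds.
const MIN_COST: f64 = 1.0;
const MAX_COST: f64 = 1_000_000.0;

/// `fill` is the fraction of storage used, in [0.0, 1.0].
fn store_cost(fill: f64) -> u64 {
    let steepness = 12.0; // assumed: how sharp the S-curve is
    let midpoint = 0.5; // assumed: fill level where cost is halfway
    let s = 1.0 / (1.0 + (-steepness * (fill - midpoint)).exp());
    (MIN_COST + (MAX_COST - MIN_COST) * s).round() as u64
}

fn main() {
    // Unlike a linear curve, the cost stays low while the store is mostly
    // empty and climbs steeply as it approaches capacity.
    assert!(store_cost(0.1) < store_cost(0.5));
    assert!(store_cost(0.5) < store_cost(0.9));
    assert!(store_cost(0.0) >= 1 && store_cost(1.0) <= 1_000_000);
    println!("ok");
}
```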
- The network version string, which is used when both nodes and clients connect to the network,
  now uses the version number from the `sn_protocol` crate rather than `sn_networking`. This is a
  breaking change in `sn_networking`.
- External address management is improved. Before, if anyone observed us at a certain public
  IP+port, we would trust that and add it if it matched our local port. Now, we keep track and
  make sure we only have a single external address, which we set once we have been observed at
  that address a certain number of times (3 by default). It should even handle cases where our IP
  changes because of (mobile) roaming.
- The `Spend` network data type has been refactored to make it lighter and simpler.
- The entire transaction system has been redesigned; the code size and complexity have been
  reduced by an order of magnitude.
- In addition, almost 10 types were removed from the transaction code, further reducing the
  complexity.
- The internals of the `Transfer` and `CashNote` types have been reworked.
- The replication range has been reduced, which in turn reduces the base traffic for replication.
- Registers are fetched and merged correctly.
- A connection mode feature enables users to select whether they want their nodes to connect to
  the network using automatic NAT detection, UPnP, home network, or custom port mappings.
  Previously, the launchpad used NAT detection on the user's behalf. By providing the ability to
  explore more connection modes, hopefully this will get more users connected.
- On the drive selection dialog, drives to which the user does not have read or write access are marked as such.
- A README was provided for the `sn_registers` crate. It intends to give a comprehensive
  understanding of the register data type and how it can be used by developers.
- Provided more information on connecting to the network using the four keys related to funds, fees and royalties.
- Some users encountered an error when the launchpad started, related to the storage mountpoint
  not being set. We fix the error by providing default values for the mountpoint settings when the
  `app_data.json` file doesn't exist (fresh install). In the case where it does exist, we validate
  the contents.
- The node will now report its bandwidth usage through the metrics endpoint.
- The metrics server has a new `/metadata` path which will provide static information about the
  node, including peer ID and version.
- The metrics server exposes more metrics on store cost derivation. These include relevant record
  count and number of payments received.
- The metrics server exposes metrics related to bad node detection.
- Test to confirm main key can’t verify signature signed by child key.
- Avoid excessively high quotes by pruning records that are not relevant.
- Bad node detection and bootstrap intervals have been increased. This should reduce the number of messages being sent.
- The spend parent verification strategy was refactored to be more aligned with the public network.
- Nodes now prioritize local work over new work from the network, which reduces memory footprint.
- Multiple GET queries to the same address are now de-duplicated and will result in a single query being processed.
- Improve efficiency of command handling and the record store cache.
- A parent spend is now trusted with a majority of close group nodes, rather than all of them. This increases the chance of the spend being stored successfully when some percentage of nodes are slow to respond.
- The amount of bytes a home node could send and receive per relay connection is increased. This solves a problem where transmission of data is interrupted, causing home nodes to malfunction.
- Fetching the network contacts now times out and retries. Previously we would wait for an excessive amount of time, which could cause the node to hang during start up.
- If a node has been shunned, we inform that node before blocking all communication to it.
- The current wallet balance metric is updated more frequently and will now reflect the correct state.
- Avoid burnt spend during forwarding by correctly handling repeated CashNotes and confirmed spends.
- Fix logging for CashNote and confirmed spend disk ops
- Check whether a CashNote has already been received to avoid duplicate CashNotes in the wallet.
- The `local run` command supports `--metrics-port`, `--node-port` and `--rpc-port` arguments.
- The `start` command waits for the node to connect to the network before attempting to start the
  next node. If it takes more than 300 seconds to connect, we consider that a failure and move to
  the next node. The `--connection-timeout` argument can be used to vary the timeout. If you
  prefer the old behaviour, you can use the `--interval` argument, which will continue to apply a
  static, time-based interval.
- On an upgrade, the node registry is saved after each node is processed, as opposed to waiting until the end. This means if there is an unexpected failure, the registry will have the information about which nodes have already been upgraded.
- The user can choose a different drive for the node's data directory.
- New sections in the UI: `Options` and `Help`.
- A navigation bar has been added with `Status`, `Options` and `Help` sections.
- The node's logs can be viewed from the `Options` section.
- Increased spacing for title and paragraphs.
- Increased spacing on footer.
- Increased spacing on box titles.
- Moved `Discord Username` from the top title into the `Device Status` section.
- Made the general layout of `Device Status` more compact.
- The `safe files download` command now displays the duration per file.
- Adjust the put and get configuration scheme to align the client with a more realistic network which would have some percentage of slow nodes.
- Improved spend logging to help debug the upload process.
- Avoid a corrupt wallet by terminating the payment process during an unrecoverable error.
- Protection against an attack allowing bad nodes or clients to shadow a spend (make it disappear) through spamming.
- Nodes allow more relayed connections through them. Also, home nodes will relay through 4 nodes instead of 2. Without these changes, relays were denying new connections to home nodes, making them difficult to reach.
- Auditor tracks forwarded payments using the default key.
- Auditor tracks burnt spend attempts and only credits them once.
- Auditor collects balance of UTXOs.
- Added different attack types to the spend simulation test to ensure spend validation is solid.
- Bad nodes and nodes with a mismatched protocol are now added to a block list. This reduces the
  chance of network interference and the impact of a bad node in the network.
- The introduction of a record-store cache has significantly reduced the node's disk IO. As a side effect, the CPU does less work, and performance improves. RAM usage has increased by around 25MB per node, but we view this as a reasonable trade off.
- For the time being, hole punching has been removed. It was causing handshake time outs, resulting in home nodes being less stable. It will be re-enabled in the future.
- Force connection closure if a peer is using a different protocol.
- Reserve trace-level logs for tracking event statistics. Now you can use `SN_LOG=v` to get more
  relevant logs without being overwhelmed by event handling stats.
- Chunk verification is now probabilistic, which should reduce messaging. In combination with
  replication messages also being reduced, this should result in a bandwidth usage reduction of
  ~20%.
- During payment forwarding, CashNotes are removed from disk and confirmed spends are stored to disk. This is necessary for resolving burnt spend attempts for forwarded payments.
- Fix a bug where the auditor was not storing data to disk because of a missing directory.
- Bootstrap peers are not added as relay candidates as we do not want to overwhelm them.
- Basic global documentation for the `sn_client` crate.
- Option to encrypt the wallet private key with a password, in a file called
  `main_secret_key.encrypted`, inside the wallet directory.
- Option to load a wallet from an encrypted secret-key file using a password.
- The `wallet create` command provides a `--password` argument to encrypt the wallet.
- The `wallet create` command provides a `--no-password` argument to skip encryption.
- The `wallet create` command provides a `--no-replace` argument to suppress a prompt to replace
  an existing wallet.
- The `wallet create` command provides a `--key` argument to create a wallet from a hex-encoded
  private key.
- The `wallet create` command provides a `--derivation` argument to set a derivation passphrase to
  be used with the mnemonic to create a new private key.
- A new `wallet encrypt` command encrypts an existing wallet.
- The `wallet address` command no longer creates a new wallet if no wallet exists.
- The `wallet create` command creates a wallet using the account mnemonic instead of requiring a
  hex-encoded secret key.
- The `wallet create` command's `--key` and `--derivation` arguments are mutually exclusive.
- The `Total Nanos Earned` stat no longer resets on restart.
- A `--version` argument shows the binary version.
- Native Apple Silicon (M-series) binaries have been added to our releases, meaning M-series Mac users do not have to rely on running Intel binaries with Rosetta.
- The node exposes more metrics, including its uptime, number of connected peers, number of peers in the routing table, and the number of open connections. These will help us more effectively diagnose user issues.
- Communication between node and client is strictly limited through synchronised public keys. The current beta network allows the node and client to use different public keys, resulting in undefined behaviour and performance issues. This change mitigates some of those issues and we also expect it to prevent other double spend issues.
- Reduced base traffic for nodes, resulting in better upload performance. This will result in better distribution of nanos, meaning users with a smaller number of nodes will be expected to receive nanos more often.
- In the case where a client retries a failed upload, they would re-send their payment. In a rare circumstance, the node would forward this reward for a second time too. This is fixed on the node.
- Nodes are prevented from double spending under rare circumstances.
- ARM builds are no longer prevented from connecting to the network.
- Global `--debug` and `--trace` arguments are provided. These will output debug and trace-level
  logging, respectively, directly to stderr.
- The mechanism used by the node manager to refresh its state is significantly changed to address
  issues that caused commands to hang for long periods of time. Now, when using commands like
  `start`, `stop`, and `reset`, users should no longer experience the commands taking excessively
  long to complete.
- The `nat-detection run` command provides a default list of servers, meaning the `--servers`
  argument is now optional.
- Launchpad and node versions are displayed on the user interface.
- The node manager change for refreshing its state also applies to the launchpad. Users should experience improvements in operations that appeared to be hanging but were actually just taking an excessive amount of time to complete.
- The correct primary storage will now be selected on Linux and macOS.