…orage
Signed-off-by: Jason Frame <jason.frame@consensys.net>

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Jason Frame <jason.frame@consensys.net>

…chive read storage
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Jason Frame <jason.frame@consensys.net>
…latDbToArchiveMigrator

- Add boundaryDistance constructor param; migration loop stops at HEAD - boundaryDistance
- Add startOngoingMigration() to register a permanent block observer post-migration
- Extract archiveTarget() helper to encapsulate Math.max(0, block - boundaryDistance)
- Replace ExecutorService with ScheduledExecutorService; each migrator owns its executor
- Use OptionalLong for blockObserverId to avoid sentinel values
- Inline observer cleanup into close() instead of a separate stop() method
- Wire boundaryDistance (maxLayersToLoad) and ongoing migration in BesuControllerBuilder
- Use Awaitility instead of Thread.sleep in observer tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Jason Frame <jason.frame@consensys.net>
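The archiveTarget() helper described in this commit message can be sketched as follows. The names boundaryDistance and archiveTarget come from the message itself; the surrounding class shape is an assumption for illustration, not the actual Besu source:

```java
// Minimal sketch of the archiveTarget() helper from the commit message.
// boundaryDistance and archiveTarget are named in the message; the class
// wrapper is hypothetical.
public final class ArchiveTargetSketch {
  private final long boundaryDistance;

  public ArchiveTargetSketch(final long boundaryDistance) {
    this.boundaryDistance = boundaryDistance;
  }

  // The migration target for a given chain head: boundaryDistance blocks
  // behind the head, clamped so it never goes below genesis (block 0).
  long archiveTarget(final long headBlockNumber) {
    return Math.max(0, headBlockNumber - boundaryDistance);
  }

  public static void main(final String[] args) {
    final ArchiveTargetSketch s = new ArchiveTargetSketch(512);
    System.out.println(s.archiveTarget(1000)); // prints 488
    System.out.println(s.archiveTarget(100));  // prints 0 (clamped)
  }
}
```

Centralising the clamp in one helper is what lets both the initial migration loop and the ongoing block observer agree on where migration should stop.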
…c fix

Replace the single-block processBlockFromObserver with catchUp(), which reads saved progress and calls migrateBlocks() up to the new archive target. This handles being multiple blocks behind and fixes migratedBlockNumber not being updated during ongoing migration.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Jason Frame <jason.frame@consensys.net>
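The catchUp() behaviour this commit describes can be sketched as below. Only catchUp, migrateBlocks, and migratedBlockNumber are named in the commit message; the constructor, the stubbed migrateBlocks body, and the range bookkeeping are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of catchUp(): read saved progress, migrate every
// block up to the new archive target, then persist the new progress.
public final class CatchUpSketch {
  private final long boundaryDistance;
  private long migratedBlockNumber; // saved migration progress
  final List<long[]> migratedRanges = new ArrayList<>(); // stub record

  public CatchUpSketch(final long boundaryDistance, final long savedProgress) {
    this.boundaryDistance = boundaryDistance;
    this.migratedBlockNumber = savedProgress;
  }

  // Called from the block observer. Unlike a single-block handler, this
  // catches up over any number of blocks we have fallen behind.
  void catchUp(final long headBlockNumber) {
    final long target = Math.max(0, headBlockNumber - boundaryDistance);
    if (target > migratedBlockNumber) {
      migrateBlocks(migratedBlockNumber + 1, target); // possibly many blocks
      migratedBlockNumber = target; // progress now advances (the bug fix)
    }
  }

  private void migrateBlocks(final long from, final long toInclusive) {
    migratedRanges.add(new long[] {from, toInclusive}); // stub
  }

  long migratedBlockNumber() {
    return migratedBlockNumber;
  }
}
```

In this sketch, a node whose saved progress is at block 400 that observes head 1000 (with a 512-block boundary) migrates the whole 401..488 range in one call, rather than one block per observed head.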
…ngMigration

On restart, the metric gauge started at 0 instead of reflecting actual archived progress. Initialize it from getMigrationProgress() on startup.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Jason Frame <jason.frame@consensys.net>
Signed-off-by: Jason Frame <jason.frame@consensys.net>
Archive CFs (ACCOUNT_INFO_STATE_ARCHIVE, ACCOUNT_STORAGE_ARCHIVE, ACCOUNT_INFO_STATE_FREEZER, ACCOUNT_STORAGE_FREEZER) accumulate large numbers of SST files during migration. With cache_index_and_filter_blocks=false (the prior hardcoded default), each SST file's index and filter blocks are held in unbounded native memory outside the block cache, causing RSS to grow ~1.5 GB/hr and reach 15+ GB above control nodes by the time the full 24M-block migration completes.

Add isCacheIndexAndFilterBlocks() to SegmentIdentifier (default false, preserving existing behaviour for all other segments) and set it true on the four archive CFs so their index/filter blocks are evictable via the bounded block cache.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Jason Frame <jason.frame@consensys.net>
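The interface change this commit describes could look roughly like the sketch below. The interface name, the four archive segments, and the default-false behaviour come from the commit message; the enum wrapper and everything else are assumptions:

```java
// Hypothetical sketch of the SegmentIdentifier change: a default method
// returning false preserves behaviour for all existing segments, while
// the archive column families override it to true.
public class SegmentSketch {
  interface SegmentIdentifier {
    String getName();

    // New flag. false keeps the prior behaviour (index/filter blocks
    // pinned in unbounded native memory outside the block cache);
    // true lets RocksDB keep them in the bounded, evictable block cache.
    default boolean isCacheIndexAndFilterBlocks() {
      return false;
    }
  }

  enum ArchiveSegment implements SegmentIdentifier {
    ACCOUNT_INFO_STATE_ARCHIVE,
    ACCOUNT_STORAGE_ARCHIVE,
    ACCOUNT_INFO_STATE_FREEZER,
    ACCOUNT_STORAGE_FREEZER;

    @Override
    public String getName() {
      return name();
    }

    @Override
    public boolean isCacheIndexAndFilterBlocks() {
      return true; // evictable via the bounded block cache
    }
  }
}
```

When each column family is opened, the flag would presumably be fed into RocksDB's table options (in RocksJava, BlockBasedTableConfig.setCacheIndexAndFilterBlocks(...)), which is the knob that moves index and filter blocks into the shared block cache.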
BesuCommand gated isParallelTxProcessingEnabled behind a strict DataStorageFormat.BONSAI equality check. Archive nodes use X_BONSAI_ARCHIVE, so the flag was never set and defaulted to false, silently disabling parallel transaction processing on all archive nodes.

Changed to isBonsaiFormat(), which covers both BONSAI and X_BONSAI_ARCHIVE.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Jason Frame <jason.frame@consensys.net>
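A before/after sketch of the fix. The constants BONSAI and X_BONSAI_ARCHIVE and the method isBonsaiFormat() are named in the commit message; the FOREST constant and the helper methods are illustrative assumptions:

```java
// Hypothetical sketch of the parallel-tx-processing gating fix.
public class StorageFormatSketch {
  enum DataStorageFormat {
    FOREST, // illustrative extra constant
    BONSAI,
    X_BONSAI_ARCHIVE;

    boolean isBonsaiFormat() {
      return this == BONSAI || this == X_BONSAI_ARCHIVE;
    }
  }

  // Before: strict equality silently disabled parallel transaction
  // processing on archive nodes (X_BONSAI_ARCHIVE != BONSAI).
  static boolean parallelTxBefore(final DataStorageFormat format) {
    return format == DataStorageFormat.BONSAI;
  }

  // After: checking the format family covers both Bonsai variants.
  static boolean parallelTxAfter(final DataStorageFormat format) {
    return format.isBonsaiFormat();
  }
}
```

The failure mode is easy to miss because nothing errors: the flag simply falls back to its false default on archive nodes.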
…onsaiFlatDbToArchiveMigrator
Signed-off-by: Jason Frame <jason.frame@consensys.net>
Implements a hybrid Bonsai Archive: reads of recent blocks within the 512-block Bonsai limit use regular Bonsai storage, while the Bonsai archive is used for historical queries.
Fixed Issue(s)
fixes #9981
Thanks for sending a pull request! Have you done the following?
- Added the doc-change-required label to this PR if documentation updates are required.

Locally, you can run these tests to catch failures early:

- ./gradlew spotlessApply
- ./gradlew build
- ./gradlew acceptanceTest
- ./gradlew integrationTest
- ./gradlew ethereum:referenceTests:referenceTests