IQSS/10697 - Improve batch permission indexing#10698
Merged
stevenwinship merged 15 commits into IQSS:develop from QDR-improve-batch-permission-indexing on Oct 16, 2024
Conversation
Avoids keeping everything in memory; it also helps in tracking progress, as you can see the permissionindextime getting updated per dataset.
pdurbin (Member) approved these changes on Oct 2, 2024 and left a comment:
I made some cosmetic tweaks. I didn't run the code but it seems fine. Approved.
Contributor
Tested heavily on Perf cluster. No CPU or memory issues detected.

What this PR does / why we need it: This PR adds checks within the loops over all children of a definition point (and all files of those children) so that the code that builds the permission documents for those files, and then submits the documents to Solr, is now called periodically during the loop, allowing the memory used to cache the documents to be released.
Which issue(s) this PR closes: Closes #10697
Special notes for your reviewer:
Changes are clearer if you view the diff with whitespace ignored: https://github.com/IQSS/dataverse/pull/10698/files?w=1.
I have somewhat arbitrarily chosen to trigger creating documents every 100 files and then to send those docs to Solr every 20 docs. Those values could be made configurable or optimized with some testing, but the key thing is that, prior to this PR, a change to permissions on the root DV loaded all files and all docs into memory, so any cap that avoids doing that should be OK. Choosing too small a number is hopefully no longer a big issue, as we are relying on auto soft/hard commits: if we submit updates more often than the 1 second / 30 second commit intervals, Solr will simply cache the submissions on its side.
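The flush-every-N pattern described above can be sketched roughly as follows. This is a minimal illustration, not the actual Dataverse code: the class, method, and field names are hypothetical, and the real implementation would call a SolrClient (relying on Solr's auto soft/hard commits) where this sketch simply clears the pending list so its memory can be reclaimed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of batched permission-doc indexing.
public class BatchedPermissionIndexer {
    static final int FILE_BATCH_SIZE = 100; // build docs every 100 files (per the PR)
    static final int DOC_BATCH_SIZE = 20;   // send docs to Solr every 20 docs (per the PR)

    int docsBuilt = 0;
    int flushes = 0;
    final List<String> pending = new ArrayList<>();

    // Build a permission doc per file, flushing whenever the pending batch fills.
    void buildDocs(List<String> fileIds) {
        for (String id : fileIds) {
            pending.add("perm-doc:" + id);
            docsBuilt++;
            if (pending.size() >= DOC_BATCH_SIZE) {
                flush();
            }
        }
    }

    // In the real code this would submit the docs to Solr; here we just drop
    // the references so the memory they held can be released.
    void flush() {
        if (!pending.isEmpty()) {
            flushes++;
            pending.clear();
        }
    }

    public static void main(String[] args) {
        BatchedPermissionIndexer indexer = new BatchedPermissionIndexer();
        List<String> files = new ArrayList<>();
        for (int i = 0; i < 250; i++) {
            files.add("file-" + i);
        }
        // Process files in chunks of FILE_BATCH_SIZE rather than all at once.
        for (int start = 0; start < files.size(); start += FILE_BATCH_SIZE) {
            int end = Math.min(start + FILE_BATCH_SIZE, files.size());
            indexer.buildDocs(files.subList(start, end));
        }
        indexer.flush(); // flush the final partial batch
        System.out.println("docsBuilt=" + indexer.docsBuilt + " flushes=" + indexer.flushes);
        // prints docsBuilt=250 flushes=13
    }
}
```

The point of the two thresholds is that memory held at any moment is bounded by one file batch plus one pending doc batch, regardless of how many files the definition point covers.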
Note that one of the problems we detected at QDR was that, after a permission change and a subsequent out-of-memory failure of our test server, we had out-of-date permission documents when we restarted. Those are not orphans, so they aren't detected by the /api/admin/index/status call; they just result in odd behavior, such as a search result showing a closed lock on a restricted file when, if you go to the file page, you can actually download it.
This PR doesn't change the fact that all file permission docs have to be updated, which can still take a long time (minutes/hours) on a big installation. It does appear to stop memory use from climbing, though.
If someone hit a memory failure due to this, they could/should reindex permissions to fix it. From what I can see, there is an API call for that, /admin/index/perms, but as far as I can tell it is undocumented. I'm not sure why (other than that it is slow: it does everything in one batch and writes permissionindextimes to files as well). Doing an in-place reindex of everything would also work, so that is what I've put in the release note for now.
Suggestions on how to test this:
Create a database with a significant number of files in one or more datasets (thousands?). Change the permissions on the root collection, e.g. by adding or removing a person from the curator role. Watch memory use during processing (with top, for example) and verify that it doesn't keep increasing until the permission updates finish (%CPU should drop and there's a message in the log).
Does this PR introduce a user interface change? If mockups are available, please link/include them here:
Is there a release notes update needed for this change?:
Additional documentation: