Conversation
```diff
     cpus = { check_max( 1, 'cpus' ) }
     memory = { check_max( 1.GB * task.attempt, 'memory' ) }
-    time = { check_max( 2.h * task.attempt, 'time' ) }
+    time = { check_max( 24.h * task.attempt, 'time' ) }
```
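For context, `check_max` is the helper nf-core pipelines use to cap each requested resource at a pipeline-wide maximum, so the `* task.attempt` escalation can never exceed what the user's cluster allows. A simplified sketch of the 'time' case (the real helper lives in the pipeline's `nextflow.config` and also covers 'memory' and 'cpus'):

```groovy
// Simplified sketch of nf-core's check_max helper, 'time' case only;
// the actual version in nextflow.config also handles 'memory'/'cpus'
// and wraps the comparison in a try/catch for bad user input.
def check_max(obj, type) {
    if (type == 'time') {
        def max_time = params.max_time as nextflow.util.Duration
        // never request more walltime than the configured --max_time
        return obj.compareTo(max_time) == 1 ? max_time : obj
    }
    return obj
}
```

With this in place, even `24.h * task.attempt` on a retry is clamped back to whatever `--max_time` the user (or profile) has set.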
Shouldn't tiny jobs also be less than 24 hours? E.g. let's say 3 h for tiny, 6 h for small, 8 h for medium and 16 h for large?
My fear here is that some clusters actually send you to the end of the queue if you ask for more hours than you need, so it shouldn't be set much higher than what the initial process will most likely need.
"Tiny" actually refers to the resources rather than the time (at least in the eager concept).
Hmm, ok. That's tricky then. I honestly felt 2.h was enough originally, but #545 made me realise this could occur when people run their own data. Maybe 4.h for everything? Half a day would, I think, be pretty solid for testing with your own data (start a run at 8, check again at lunch?).
Get final python script version
nf-core/eager pull request
This PR closes #546. When users move on from the small tests, they may want to run their own tests on 'real-sized' data, which may exceed the 2.h base walltime. This bumps it up to 24.h and sets limits for specific profiles (1.h, which also speeds up CI pipeline exit if a crash occurs, given most test runs take ~2m).
This includes changes from #544 , please merge that first (sorry, wrong checkout procedure...)
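The per-profile limit described above would be set by overriding the relevant `max_*` param in the profile block; a hedged sketch (profile and param names follow nf-core conventions, so check the pipeline's own config for the exact keys):

```groovy
// Sketch: capping walltime in the CI/test profile so crashed runs
// exit quickly. Because check_max clamps every request at
// params.max_time, no process can ask for more than 1.h here.
profiles {
  test {
    params {
      max_time = 1.h   // most test runs finish in ~2m, so fail fast
    }
  }
}
```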
PR checklist

- Tests pass (`nextflow run . -profile test,docker --paired_end`)
- Lint passes (`nf-core lint .`)
- `docs` is updated
- `CHANGELOG.md` is updated
- `README.md` is updated

Learn more about contributing: CONTRIBUTING.md