
Commit dc4d422

Author: Matthew McEneaney
Message: docs: Add instructions for running detector timeline jobs.
Parent: e25670c

2 files changed: 17 additions & 1 deletion

doc/chef_guide.md (3 additions & 1 deletion)
````diff
@@ -17,10 +17,12 @@ Output files will appear in your chosen output directory, within `hist/detectors
 ## :green_circle: Step 2: Make the timelines
 
 ```bash
-run-detectors-timelines.sh -d $dataset -i $out_dir/hist/detectors
+run-detectors-timelines.sh -d $dataset -i $out_dir/hist/detectors --run-slurm
 ```
 where `$out_dir` is your output directory from **Step 1** and `$dataset` is a unique name for this cook, _e.g._, `rga_v1.23`.
 
+Notice the `--run-slurm` option, which sets up a SLURM script for each detector timeline. To submit the jobs, run the printed `sbatch` command, **or** run with the `--submit-slurm` option to submit automatically. Once all jobs have finished successfully, rerun with the `--after-slurm` option instead to complete the organization of the output.
+
 Output will appear in `./outfiles/$dataset/`.
 
 ## :green_circle: Step 3: Deploy the timelines
````

doc/procedure.md (14 additions & 0 deletions)
````diff
@@ -74,6 +74,20 @@ bin/run-physics-timelines.sh -d rga_sp19_v5 # for physics timelines
 ```
 - the dataset name must match that of Step 1, otherwise you need to specify the path to the input files with `-i`
 
+### Distributing Detector Timelines with SLURM
+The detector timelines can take quite a while to run locally, so running them locally is only recommended for a single timeline or for debugging. Instead, it is more efficient to distribute each timeline to its own SLURM job. Continuing with the example scenario above, first run with the `--run-slurm` option:
+```bash
+bin/run-detectors-timelines.sh -d rga_sp19_v5 --run-slurm # for detector timelines
+```
+which sets up the SLURM scripts and prints the appropriate `sbatch` command to submit all the detector timeline jobs. This is analogous to submitting jobs with `bin/run-monitoring.sh` in [Step 1](#-step-1-data-monitoring), so see the directions there for monitoring your jobs and checking the SLURM output files. You can also submit automatically by adding the `--submit-slurm` option:
+```bash
+bin/run-detectors-timelines.sh -d rga_sp19_v5 --run-slurm --submit-slurm # for detector timelines
+```
+**After** all the detector timeline jobs finish successfully, run the script again with the `--after-slurm` option:
+```bash
+bin/run-detectors-timelines.sh -d rga_sp19_v5 --after-slurm # for detector timelines
+```
+which will finish organizing all the output files in the appropriate directories.
 
 > [!NOTE]
 > - detector timeline production is handled by the [`detectors/` subdirectory](/detectors);
````
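If you submit the printed `sbatch` command yourself, note that standard SLURM `sbatch` reports `Submitted batch job <id>`; capturing that ID makes monitoring with `squeue` or `sacct` easier. A minimal sketch of that parsing (the literal `submit_output` value below is a stand-in so the snippet runs without a cluster; in real use it would come from an actual `sbatch` call):

```shell
# Capture the job ID from sbatch's standard "Submitted batch job <id>" line.
# In real use: submit_output=$(sbatch path/to/timeline-script.slurm)
# (the path is a placeholder; use the script printed by --run-slurm).
submit_output="Submitted batch job 123456"
job_id="${submit_output##* }"   # keep the last whitespace-separated field
echo "submitted as job $job_id; check it with: squeue -j $job_id"
```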

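To script the gap between submission and the `--after-slurm` rerun, a small polling helper can block until the jobs have drained from the queue. This is a hypothetical sketch, not part of the repository's scripts; it assumes a standard `squeue` on your `PATH` (the `-h` flag suppresses the header, `-j` filters by job ID):

```shell
# Hypothetical helper: block until every given SLURM job ID has left the queue.
wait_for_jobs() {
  for id in "$@"; do
    # squeue prints a line while the job is pending or running; loop until empty.
    while [ -n "$(squeue -h -j "$id" 2>/dev/null)" ]; do
      sleep 30
    done
  done
}

# Usage (job IDs are placeholders from your own sbatch output):
#   wait_for_jobs 123456 123457
#   bin/run-detectors-timelines.sh -d rga_sp19_v5 --after-slurm
```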