update biohpc_gen config to reduce load on slurm DB #834

Merged: 2 commits, Jan 27, 2025
`conf/biohpc_gen.config` (9 additions, 0 deletions):

```diff
@@ -14,6 +14,15 @@ process {
     executor = 'slurm'
     queue = { task.memory <= 1536.GB ? (task.time > 2.d || task.memory > 384.GB ? 'biohpc_gen_production' : 'biohpc_gen_normal') : 'biohpc_gen_highmem' }
     clusterOptions = '--clusters=biohpc_gen'
+    array = 25
 }
 
+executor {
+    $slurm {
+        queueStatInterval = '10 min'
+        pollInterval = '30 sec'
+        submitRateLimit = '25sec'
+    }
+}
+
 charliecloud {
```
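For context, here is the same tuning as an annotated sketch. The comments are a reading of the Nextflow documentation, not part of the PR, and the `array` directive requires a recent Nextflow (24.04 or later):

```groovy
// Annotated sketch of the new settings (comments are interpretation, not from the PR)
process {
    // Group up to 25 tasks into a single Slurm job array, so the scheduler
    // sees one submission instead of 25 individual ones.
    array = 25
}

executor {
    $slurm {
        queueStatInterval = '10 min' // query the cluster queue status every 10 minutes instead of the default
        pollInterval = '30 sec'      // check the state of running jobs every 30 seconds
        submitRateLimit = '25sec'    // throttle submissions; in Nextflow's rate syntax this reads as max 25 jobs per second
    }
}
```

Batching tasks into job arrays and throttling status queries and submissions means far fewer requests hit the Slurm controller and its accounting database, which is the load the PR title refers to.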
`docs/biohpc_gen.md` (3 additions, 1 deletion):

````diff
@@ -44,4 +44,6 @@ These are then available as modules (please confirm the module name using module
 module load nextflow/24.04.2-gcc12 charliecloud/0.35-gcc12
 ```
 
-> NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes.
+> NB: bioHPC compute nodes are submit hosts. This means you can submit the nextflow head job via sbatch.
+
+> NB: Sometimes you may want to have jobs submitted 'locally' in a large nextflow job. Details on this can be found here https://doku.lrz.de/nextflow-on-hpc-systems-test-operation-788693597.html
````
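The first new note says the Nextflow head job itself can be submitted via sbatch. A minimal sketch of such a submission script, reusing the module line documented above; the partition, wall time, and resource values are illustrative assumptions, not prescribed by this PR:

```bash
#!/bin/bash
#SBATCH --clusters=biohpc_gen           # same cluster the config targets
#SBATCH --partition=biohpc_gen_normal   # illustrative: pick a partition whose wall-time limit outlives the run
#SBATCH --time=48:00:00                 # the head job must keep running until the whole workflow finishes
#SBATCH --cpus-per-task=2
#SBATCH --mem=8G

# Load the tool versions documented above, then start the Nextflow head job;
# <pipeline> is a placeholder for the workflow you actually run.
module load nextflow/24.04.2-gcc12 charliecloud/0.35-gcc12
nextflow run <pipeline> -profile biohpc_gen
```

Submitted with something like `sbatch run_nextflow.sbatch` (file name hypothetical), the head job then dispatches pipeline tasks to the queues defined in `conf/biohpc_gen.config`.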