Commit 97ea5f8

Merge pull request #77 from jfy133/documentation_improvements
Documentation improvements
2 parents fcc941d + 3b7df22 commit 97ea5f8

5 files changed: 33 additions & 47 deletions


README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -45,4 +45,4 @@ The nf-core/eager pipeline comes with documentation about the pipeline, found in
 5. [Troubleshooting](docs/troubleshooting.md)
 
 ### Credits
-This pipeline was written by Alexander Peltzer ([apeltzer](https://github.com/apeltzer)), with major contributions from Stephen Clayton, ideas and documentation from James Fellows-Yates, Raphael Eisenhofer and Judith Neukamm. If you want to contribute, please open an issue and ask to be added to the project - happy to do so and everyone is welcome to contribute here!
+This pipeline was written by Alexander Peltzer ([apeltzer](https://github.com/apeltzer)), with major contributions from Stephen Clayton, ideas and documentation from James Fellows Yates, Raphael Eisenhofer and Judith Neukamm. If you want to contribute, please open an issue and ask to be added to the project - happy to do so and everyone is welcome to contribute here!
````

docs/configuration/adding_your_own.md

Lines changed: 0 additions & 2 deletions
````diff
@@ -51,7 +51,6 @@ Note that the dockerhub organisation name annoyingly can't have a hyphen, so is
 ### Singularity image
 Many HPC environments are not able to run Docker due to security issues.
 [Singularity](http://singularity.lbl.gov/) is a tool designed to run on such HPC systems which is very similar to Docker.
->>>>>>> TEMPLATE
 
 To specify singularity usage in your pipeline config file, add the following:
 
@@ -81,5 +80,4 @@ To use conda in your own config file, add the following:
 
 ```nextflow
 process.conda = "$baseDir/environment.yml"
->>>>>>> TEMPLATE
 ```
````
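The actual Singularity snippet is cut off at the hunk boundary above. As a sketch of what such a custom config looks like (`my_cluster.config` is an invented file name; `singularity.enabled` is the standard Nextflow configuration setting):

```shell
# Sketch: create a custom Nextflow config file that enables Singularity.
# 'my_cluster.config' is a made-up name for illustration;
# 'singularity.enabled' is the standard Nextflow config option.
cat > my_cluster.config <<'EOF'
singularity {
    enabled = true
}
EOF
cat my_cluster.config
```

Such a file would then be supplied at runtime with `nextflow run ... -c my_cluster.config`.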

docs/installation.md

Lines changed: 13 additions & 0 deletions
````diff
@@ -69,6 +69,19 @@ Be warned of two important points about this default configuration:
 * See the [nextflow docs](https://www.nextflow.io/docs/latest/executor.html) for information about running with other hardware backends. Most job scheduler systems are natively supported.
 2. Nextflow will expect all software to be installed and available on the `PATH`
 
+The following software is currently required to be installed:
+
+* [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)
+* [Picard Tools](https://broadinstitute.github.io/picard/)
+* [Samtools](http://www.htslib.org/)
+* [Preseq](http://smithlabresearch.org/software/preseq/)
+* [MultiQC](https://multiqc.info/)
+* [BWA](http://bio-bwa.sourceforge.net/)
+* [Qualimap](http://qualimap.bioinfo.cipf.es/)
+* [GATK](https://software.broadinstitute.org/gatk/)
+* [bamUtil](https://genome.sph.umich.edu/wiki/BamUtil)
+* [fastP](https://github.com/OpenGene/fastp)
+
 #### 3.1) Software deps: Docker
 First, install docker on your system: [Docker Installation Instructions](https://docs.docker.com/engine/installation/)
````
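For a native run without a container engine, a quick shell pre-flight check along these lines can confirm the listed tools are on the `PATH`. The binary names here are assumptions (Picard and GATK in particular are often invoked via jars or wrapper scripts rather than a bare command), so treat this as a sketch:

```shell
# Sketch: check that the tools listed above are visible on PATH before a
# native (no Docker/Singularity/conda) run. Binary names are assumptions;
# adjust for how your site installs Picard, GATK, and bamUtil.
for tool in fastqc samtools preseq multiqc bwa qualimap fastp; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```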

docs/usage.md

Lines changed: 11 additions & 37 deletions
````diff
@@ -7,38 +7,12 @@
 * [Updating the pipeline](#updating-the-pipeline)
 * [Reproducibility](#reproducibility)
 * [Main arguments](#main-arguments)
-  * [`-profile`](#-profile-single-dash)
-    * [`docker`](#docker)
-    * [`awsbatch`](#awsbatch)
-    * [`standard`](#standard)
-    * [`binac`](#binac)
-    * [`cfc`](#cfc)
-    * [`uzh`](#uzh)
-    * [`none`](#none)
-  * [`--reads`](#--reads)
-  * [`--singleEnd`](#--singleend)
-* [Reference Genomes](#reference-genomes)
-  * [`--genome`](#--genome)
-  * [`--fasta`](#--fasta)
 * [Job Resources](#job-resources)
   * [Automatic resubmission](#automatic-resubmission)
   * [Custom resource requests](#custom-resource-requests)
 * [AWS batch specific parameters](#aws-batch-specific-parameters)
-  * [`-awsbatch`](#-awsbatch)
-  * [`--awsqueue`](#--awsqueue)
-  * [`--awsregion`](#--awsregion)
 * [Other command line parameters](#other-command-line-parameters)
-  * [`--outdir`](#--outdir)
-  * [`--email`](#--email)
-  * [`-name`](#-name-single-dash)
-  * [`-resume`](#-resume-single-dash)
-  * [`-c`](#-c-single-dash)
-  * [`--max_memory`](#--max_memory)
-  * [`--max_time`](#--max_time)
-  * [`--max_cpus`](#--max_cpus)
-  * [`--plaintext_emails`](#--plaintext_emails)
-  * [`--sampleLevel`](#--sampleLevel)
-  * [`--multiqc_config`](#--multiqc_config)
+* [Adjustable parameters for nf-core/eager](#adjustable-parameters-for-nf-coreeager)
 
 ## General Nextflow info
 Nextflow handles job submissions on SLURM or other environments, and supervises running the jobs. Thus the Nextflow process must run until the pipeline is finished. We recommend that you put the process running in the background through `screen` / `tmux` or a similar tool. Alternatively you can run nextflow within a cluster job submitted to your job scheduler.
@@ -116,7 +90,6 @@ Use this parameter to choose a configuration profile. Profiles can give configur
 * `test`
   * A profile with a complete configuration for automated testing
   * Includes links to test data so needs no other parameters
->>>>>>> TEMPLATE
 * `none`
   * No configuration at all. Useful if you want to build your own config from scratch and want to avoid loading in the default `base` config profile (not recommended).
 
@@ -155,9 +128,18 @@ A normal glob pattern, enclosed in quotation marks, can then be used for `--read
 
 ## Reference Genomes
 
-The pipeline config files come bundled with paths to the illumina iGenomes reference index files. If running with docker or AWS, the configuration is set up to use the [AWS-iGenomes](https://ewels.github.io/AWS-iGenomes/) resource.
+### `--fasta`
+If you prefer, you can specify the full path to your reference genome when you run the pipeline:
+
+```bash
+--fasta '[path to Fasta reference]'
+```
+> If you don't specify appropriate `--bwa_index`, `--fasta_index` parameters, the pipeline will create these indices for you automatically. Note that saving these for later has to be turned on using `--saveReference`.
 
 ### `--genome` (using iGenomes)
+
+The pipeline config files come bundled with paths to the illumina iGenomes reference index files. If running with docker or AWS, the configuration is set up to use the [AWS-iGenomes](https://ewels.github.io/AWS-iGenomes/) resource.
+
 There are 31 different species supported in the iGenomes references. To run the pipeline, you must specify which to use with the `--genome` flag.
 
 You can find the keys to specify the genomes in the [iGenomes config file](../conf/igenomes.config). Common genomes that are supported are:
@@ -189,14 +171,6 @@ params {
 }
 ```
 
-### `--fasta`
-If you prefer, you can specify the full path to your reference genome when you run the pipeline:
-
-```bash
---fasta '[path to Fasta reference]'
-```
-> If you don't specify appropriate `--bwa_index`, `--fasta_index` parameters, the pipeline will create these indices for you automatically. Note, that saving these for later has to be turned on using `--saveReference`.
-
 ### `--bwa_index`
 
 Use this to specify a previously created BWA index. This saves time in pipeline execution and is especially advised when running multiple times on the same cluster system, for example. You can even add a resource-specific profile that sets paths to pre-computed reference genomes, saving even more time when specifying these.
````
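The quoted glob pattern that the `--reads` hunk above refers to can be demonstrated in isolation. Directory and sample names below are invented for the demo:

```shell
# Sketch: how a --reads-style glob matches paired-end file names.
# 'reads_demo' and 'sample1' are made-up names for illustration.
mkdir -p reads_demo
touch reads_demo/sample1_R1.fastq.gz reads_demo/sample1_R2.fastq.gz
# The pipeline expects the pattern itself in quotes, e.g.
#   --reads 'reads_demo/*_R{1,2}.fastq.gz'
# so that Nextflow, not the shell, performs the expansion.
ls reads_demo/*_R{1,2}.fastq.gz
```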

main.nf

Lines changed: 8 additions & 7 deletions
````diff
@@ -26,17 +26,18 @@ def helpMessage() {
 
 Mandatory arguments:
   --reads                       Path to input data (must be surrounded with quotes)
-  -profile                      Hardware config to use. docker / aws
+  -profile                      Hardware config to use (e.g. standard, docker, singularity, conda, aws). Ask your system admin if unsure, or check the documentation.
+  --singleEnd                   Specifies that the input is single end reads (required if not pairedEnd)
+  --pairedEnd                   Specifies that the input is paired end reads (required if not singleEnd)
+  --fasta                       Path to Fasta reference (required if not iGenomes reference)
+  --genome                      Name of iGenomes reference (required if not fasta reference)
 
-Options:
-  --genome                      Name of iGenomes reference
-  --singleEnd                   Specifies that the input is single end reads
+Input Data Additional Options:
   --snpcapture                  Runs in SNPCapture mode (specify a BED file if you do this!)
   --udg                         Specify that your libraries are treated with UDG
   --udg_type                    Specify here if you have UDG half treated libraries; set to 'Half' in that case
 
 References                      If not specified in the configuration file, or if you wish to overwrite any of the references.
-  --fasta                       Path to Fasta reference
   --bwa_index                   Path to BWA index
   --bedfile                     Path to BED file for SNPCapture methods
   --seq_dict                    Path to sequence dictionary file
@@ -54,8 +55,8 @@ def helpMessage() {
   --complexity_filter_poly_g_min Specify poly-g min filter (default: 10) for filtering
 
 Clipping / Merging
-  --clip_forward_adaptor        Specify adapter to be clipped off (forward)
-  --clip_reverse_adaptor        Specify adapter to be clipped off (reverse)
+  --clip_forward_adaptor        Specify adapter sequence to be clipped off (forward)
+  --clip_reverse_adaptor        Specify adapter sequence to be clipped off (reverse)
   --clip_readlength             Specify read minimum length to be kept for downstream analysis
   --clip_min_read_quality       Specify minimum base quality for not trimming off bases
   --min_adap_overlap            Specify minimum adapter overlap
````
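The `--singleEnd` / `--pairedEnd` requirement in the revised help text (exactly one must be given) can be sketched as a standalone check. This is a hypothetical illustration, not the pipeline's actual validation code:

```shell
# Sketch of the "required if not the other" rule for --singleEnd/--pairedEnd.
# check_end_mode is a made-up helper: it takes two true/false strings and
# accepts the combination only when exactly one flag is set.
check_end_mode() {
    single="$1"; paired="$2"
    if [ "$single" = "true" ] && [ "$paired" = "true" ]; then
        echo "error: use only one of --singleEnd / --pairedEnd"
        return 1
    fi
    if [ "$single" != "true" ] && [ "$paired" != "true" ]; then
        echo "error: one of --singleEnd / --pairedEnd is required"
        return 1
    fi
    echo "ok"
}
check_end_mode true false
```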
