
add reviewer suggestions in PR #122 (#124)

Merged: 3 commits, Apr 4, 2024
6 changes: 1 addition & 5 deletions .editorconfig
@@ -28,10 +28,6 @@ indent_style = unset
[/assets/email*]
indent_size = unset

# ignore Readme
[README.md]
indent_style = unset

# ignore python
# ignore python and markdown
[*.{py,md}]
indent_style = unset
14 changes: 9 additions & 5 deletions .github/CONTRIBUTING.md
@@ -26,11 +26,7 @@ If you're not used to this workflow with git, you can start with some [docs from

## Tests

You have the option to test your changes locally by running the pipeline. For receiving warnings about process selectors and other `debug` information, it is recommended to use the debug profile. Execute all the tests with the following command:

```bash
nf-test test --profile debug,test,docker --verbose
```
You can test your changes locally by running the pipeline. To receive warnings about process selectors and other `debug` information, it is recommended to use the debug profile.

When you create a pull request with changes, [GitHub Actions](https://github.com/features/actions) will run automatic tests.
Typically, pull-requests are only fully reviewed when these tests are passing, though of course we can help out before then.
@@ -51,6 +47,14 @@ Each `nf-core` pipeline should be set up with a minimal set of test-data.
If there are any failures then the automated tests fail.
These tests are run both with the latest available version of `Nextflow` and also the minimum required version that is stated in the pipeline code.

You can run pipeline tests with the following command:

```bash
nextflow run nf-core/bacass \
-profile <test,test_long,test_hybrid,...>,<docker/singularity/.../institute> \
--outdir <OUTDIR>
```

## Patch

:warning: Only in the unlikely and regretful event of a release happening with a bug.
1 change: 0 additions & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -18,7 +18,6 @@ Learn more about contributing: [CONTRIBUTING.md](https://github.com/nf-core/baca
- [ ] If you've added a new tool - have you followed the pipeline conventions in the [contribution docs](https://github.com/nf-core/bacass/tree/master/.github/CONTRIBUTING.md)
- [ ] If necessary, also make a PR on the nf-core/bacass _branch_ on the [nf-core/test-datasets](https://github.com/nf-core/test-datasets) repository.
- [ ] Make sure your code lints (`nf-core lint`).
- [ ] Ensure the test suite passes (`nf-test test main.nf.test -profile test,docker`).
- [ ] Check for unexpected warnings in debug mode (`nextflow run . -profile debug,test,docker --outdir <OUTDIR>`).
- [ ] Usage Documentation in `docs/usage.md` is updated.
- [ ] Output Documentation in `docs/output.md` is updated.
4 changes: 2 additions & 2 deletions .github/workflows/release-announcements.yml
@@ -12,7 +12,7 @@ jobs:
- name: get topics and convert to hashtags
id: get_topics
run: |
curl -s https://nf-co.re/pipelines.json | jq -r '.remote_workflows[] | select(.full_name == "${{ github.repository }}") | .topics[]' | awk '{print "#"$0}' | tr '\n' ' ' >> $GITHUB_OUTPUT
echo "topics=$(curl -s https://nf-co.re/pipelines.json | jq -r '.remote_workflows[] | select(.full_name == "${{ github.repository }}") | .topics[]' | awk '{print "#"$0}' | tr '\n' ' ')" >> $GITHUB_OUTPUT
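A note on this fix (a sketch, not part of the PR): GitHub Actions only registers a step output when a `name=value` line is appended to the file at `$GITHUB_OUTPUT`; the original line appended the raw hashtags with no name, so `steps.get_topics.outputs.topics` resolved to nothing. A minimal local illustration, with hypothetical topic values standing in for the `pipelines.json` query:

```shell
# Simulate $GITHUB_OUTPUT locally with a temp file (on a runner, GitHub provides the path).
GITHUB_OUTPUT="$(mktemp)"
# Hypothetical topics standing in for the curl/jq query against pipelines.json.
topics="$(printf 'nextflow\nbioinformatics\n' | awk '{print "#"$0}' | tr '\n' ' ')"
# The step output must be written as name=value, exactly as the fixed workflow line does.
echo "topics=${topics}" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"   # shows the topics= line consumed by later steps
```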

- uses: rzr/fediverse-action@master
with:
@@ -25,7 +25,7 @@

Please see the changelog: ${{ github.event.release.html_url }}

${{ steps.get_topics.outputs.GITHUB_OUTPUT }} #nfcore #openscience #nextflow #bioinformatics
${{ steps.get_topics.outputs.topics }} #nfcore #openscience #nextflow #bioinformatics

send-tweet:
runs-on: ubuntu-latest
3 changes: 3 additions & 0 deletions .nf-core.yml
@@ -2,6 +2,9 @@ repository_type: pipeline
lint:
files_exist:
- conf/igenomes.config
files_unchanged:
- .github/CONTRIBUTING.md
- .github/PULL_REQUEST_TEMPLATE.md
nextflow_config:
- config_defaults:
- params.dfast_config
25 changes: 24 additions & 1 deletion CHANGELOG.md
@@ -8,7 +8,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### `Changed`

- [#111](https://github.com/nf-core/bacass/pull/111) - Update nf-core/bacass to 2.12, and [#117](https://github.com/nf-core/bacass/pull/117) to 2.13.1 `TEMPLATE`.
- [#120](https://github.com/nf-core/bacass/pull/120) - Update local and nf-core modules (version updates and minor code changes).

### `Added`

@@ -20,6 +19,30 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### `Dependencies`

- [#120](https://github.com/nf-core/bacass/pull/120) - Update local and nf-core modules (version updates and minor code changes).

| Tool | Previous version | New version |
| ---------- | ---------------- | ----------- |
| Bakta | 1.8.2 | 1.9.3 |
| Canu | 2.2 | - |
| Dragonflye | 1.1.2 | - |
| Fastp | 0.23.4 | - |
| Kraken2 | 2.1.2 | - |
| Miniasm | 0.3_r179 | - |
| Minimap2 | 2.2 | 2.24 |
| Nanoplot | 1.41.6 | - |
| Porechop | 0.2.4 | - |
| Prokka | 1.14.6 | - |
| Quast | 5.2.0 | - |
| Racon | 1.4.20 | - |
| Samtools | 1.17 | 1.19.2 |
| Untar | 1.34 | - |
| Dfast | 1.2.20 | - |
| Medaka | 1.4.3-0 | - |
| Nanopolish | 0.14.0 | - |
| PycoQC | 2.5.2 | - |
| Unicycler | 0.4.8 | - |

### `Deprecated`

## v2.1.0 nf-core/bacass: "Navy Steel Swordfish" - 2023/10/20
2 changes: 1 addition & 1 deletion README.md
@@ -37,7 +37,7 @@ For users that only have Nanopore data, the pipeline quality trims these using [
The pipeline can then perform long read assembly utilizing [Unicycler](https://github.com/rrwick/Unicycler), [Miniasm](https://github.com/lh3/miniasm) in combination with [Racon](https://github.com/isovic/racon), [Canu](https://github.com/marbl/canu) or [Flye](https://github.com/fenderglass/Flye) by using the [Dragonflye](https://github.com/rpetit3/dragonflye)(\*) pipeline. Long reads assembly can be polished using [Medaka](https://github.com/nanoporetech/medaka) or [NanoPolish](https://github.com/jts/nanopolish) with Fast5 files.

> [!NOTE]
> Dragonflye is a comprehensive pipeline designed for genome assembly of Oxford Nanopore Reads. It facilitates the utilization of Flye (default), Miniasm, and Raven assemblers, along with Racon(default) and Medaka polishers. For more information, visit the [Dragonflye GitHub](https://github.com/rpetit3/dragonflye) repository.
> Dragonflye is a comprehensive pipeline designed for genome assembly of Oxford Nanopore Reads. It facilitates the utilization of Flye (default), Miniasm, and Raven assemblers, along with Racon (default) and Medaka polishers. For more information, visit the [Dragonflye GitHub](https://github.com/rpetit3/dragonflye) repository.

### Hybrid Assembly

10 changes: 5 additions & 5 deletions conf/test.config
@@ -23,9 +23,9 @@ params {
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_short.tsv'

// some extra args to speed tests up
unicycler_args="--no_correct --no_pilon"
prokka_args=" --fast"
assembly_type = 'short'
skip_pycoqc = true
skip_kraken2 = true
unicycler_args = "--no_correct --no_pilon"
prokka_args = " --fast"
assembly_type = 'short'
skip_pycoqc = true
skip_kraken2 = true
}
8 changes: 4 additions & 4 deletions conf/test_dfast.config
@@ -23,9 +23,9 @@ params {
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_short.tsv'

// some extra args to speed tests up
unicycler_args="--no_correct --no_pilon"
unicycler_args = "--no_correct --no_pilon"
annotation_tool = 'dfast'
assembly_type = 'short'
skip_pycoqc = true
skip_kraken2 = true
assembly_type = 'short'
skip_pycoqc = true
skip_kraken2 = true
}
4 changes: 2 additions & 2 deletions conf/test_full.config
@@ -15,6 +15,6 @@ params {
config_profile_description = 'Full test dataset to check pipeline function'

// Input data for full size test
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_full.tsv'
kraken2db = 'https://genome-idx.s3.amazonaws.com/kraken/k2_standard_8gb_20210517.tar.gz'
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_full.tsv'
kraken2db = 'https://genome-idx.s3.amazonaws.com/kraken/k2_standard_8gb_20210517.tar.gz'
}
6 changes: 3 additions & 3 deletions conf/test_hybrid.config
@@ -23,7 +23,7 @@ params {
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_hybrid.tsv'

// some extra args to speed tests up
assembly_type='hybrid'
prokka_args=" --fast"
skip_kraken2 = true
assembly_type = 'hybrid'
prokka_args = " --fast"
skip_kraken2 = true
}
8 changes: 4 additions & 4 deletions conf/test_hybrid_dragonflye.config
@@ -23,8 +23,8 @@ params {
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_hybrid_dragonflye.tsv'

// some extra args to speed tests up
assembly_type='hybrid'
assembler='dragonflye'
prokka_args=" --fast"
skip_kraken2 = true
assembly_type = 'hybrid'
assembler = 'dragonflye'
prokka_args = " --fast"
skip_kraken2 = true
}
8 changes: 4 additions & 4 deletions conf/test_long.config
@@ -23,8 +23,8 @@ params {
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_long_miniasm.tsv'

// some extra args to speed tests up
prokka_args = " --fast"
assembly_type = 'long'
skip_polish = true
skip_kraken2 = true
prokka_args = " --fast"
assembly_type = 'long'
skip_polish = true
skip_kraken2 = true
}
10 changes: 5 additions & 5 deletions conf/test_long_dragonflye.config
@@ -18,9 +18,9 @@ params {
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_long_miniasm.tsv'

// some extra args to speed tests up
prokka_args = " --fast"
assembly_type = 'long'
assembler = 'dragonflye'
skip_kraken2 = true
skip_polish = true
prokka_args = " --fast"
assembly_type = 'long'
assembler = 'dragonflye'
skip_kraken2 = true
skip_polish = true
}
8 changes: 4 additions & 4 deletions conf/test_long_miniasm.config
@@ -23,8 +23,8 @@ params {
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/bacass/bacass_long_miniasm.tsv'

// some extra args to speed tests up
prokka_args = " --fast"
assembly_type = 'long'
assembler = 'miniasm'
kraken2db = "https://genome-idx.s3.amazonaws.com/kraken/16S_Greengenes13.5_20200326.tgz"
prokka_args = " --fast"
assembly_type = 'long'
assembler = 'miniasm'
kraken2db = "https://genome-idx.s3.amazonaws.com/kraken/16S_Greengenes13.5_20200326.tgz"
}
2 changes: 1 addition & 1 deletion main.nf
@@ -17,7 +17,7 @@ nextflow.enable.dsl = 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/

include { BACASS } from './workflows/bacass'
include { BACASS } from './workflows/bacass'
include { PIPELINE_INITIALISATION } from './subworkflows/local/utils_nfcore_bacass_pipeline'
include { PIPELINE_COMPLETION } from './subworkflows/local/utils_nfcore_bacass_pipeline'

2 changes: 1 addition & 1 deletion nextflow.config
@@ -28,7 +28,7 @@ params {
dragonflye_args = ''

// Assembly polishing
polish_method = 'medaka'
polish_method = 'medaka' // Allowed: ['medaka', 'nanopolish']

// Annotation
annotation_tool = 'prokka' // Allowed: ['prokka', 'bakta','dfast']
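As a sketch of how these defaults can be overridden at run time (file name hypothetical; values must be among the allowed options shown in the comments above):

```groovy
// custom.config — hypothetical override file, supplied with `nextflow run ... -c custom.config`
params {
    polish_method   = 'nanopolish' // allowed: 'medaka', 'nanopolish'
    annotation_tool = 'bakta'      // allowed: 'prokka', 'bakta', 'dfast'
}
```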
18 changes: 10 additions & 8 deletions nextflow_schema.json
@@ -87,15 +87,16 @@
"type": "string",
"default": "unicycler",
"fa_icon": "fas fa-puzzle-piece",
"description": "The assembler to use for assembly. Available options are `Unicycler`, `Canu`, `Miniasm`, or `Dragonflye`. The latter trhee are only available for long-read data, whereas Unicycler can be used for short or hybrid assembly projects.",
"description": "The assembler to use for assembly. Use the appropriate assembler according to the chosen assembly_type. Refer to the README.md for further clarification.",
"enum": ["unicycler", "canu", "miniasm", "dragonflye"]
},
"assembly_type": {
"type": "string",
"default": "short",
"fa_icon": "fas fa-fingerprint",
"help_text": "This adjusts the type of assembly done with the input data and can be any of `long`, `short` or `hybrid`. Short & Hybrid assembly will always run Unicycler, whereas long-read assembly can be configured separately using the `--assembler` parameter.",
"description": "Which type of assembly to perform."
"help_text": "This adjusts the type of assembly done with the input data and can be any of `long`, `short` or `hybrid`.",
"description": "Which type of assembly to perform.",
"enum": ["short", "long", "hybrid"]
},
"unicycler_args": {
"type": "string",
@@ -106,7 +107,7 @@
"canu_mode": {
"type": "string",
"enum": ["-pacbio", "-nanopore", "-pacbio-hifi", "null"],
"description": "Allowed technologies for long read assembly : [\"-pacbio\", \"-nanopore\", \"-pacbio-hifi\"]"
"description": "Allowed technologies for long read assembly."
},
"canu_args": {
"type": "string",
@@ -116,7 +117,7 @@
"dragonflye_args": {
"type": "string",
"description": "Extra arguments for [Dragonflye](https://github.com/rpetit3/dragonflye#usage)",
"help_text": "This advanced option allows you to add extra arguments to Dragonflye (e.g.: `\"--gsize 2.4m\"`). For those arguments with no values/options associated (e.g.: `\"--nopolish\"` or `\"--nofilter\"`...) you need to add an extra space at the begining of the input string to params.dragonflye_args. Example: --params.dragonflye_args ' --nopolish'"
"help_text": "This advanced option allows you to add extra arguments to Dragonflye (e.g.: `\"--gsize 2.4m\"`). For those arguments with no values/options associated (e.g.: `\"--nopolish\"` or `\"--nofilter\"`...) you need to add an extra space at the beginning of the input string to params.dragonflye_args. Example: `--params.dragonflye_args ' --nopolish'`"
}
}
},
@@ -132,7 +133,8 @@
"default": "medaka",
"fa_icon": "fas fa-hotdog",
"description": "Which assembly polishing method to use.",
"help_text": "Can be used to define which polishing method is used by default for long reads. Default is `medaka`, available options are `nanopolish` or `medaka`."
"help_text": "Can be used to define which polishing method is used by default for long reads.",
"enum": ["medaka", "nanopolish"]
}
}
},
@@ -146,7 +148,7 @@
"annotation_tool": {
"type": "string",
"default": "prokka",
"description": "The annotation method to annotate the final assembly. Default choice is `prokka`, but the `dfast` tool is also available. For the latter, make sure to create your specific config if you're not happy with the default one provided. See [#dfast_config](#dfastconfig) to find out how.",
"description": "The annotation method to annotate the final assembly.",
"enum": ["prokka", "bakta", "dfast"]
},
"prokka_args": {
@@ -172,7 +174,7 @@
"type": "string",
"default": "assets/test_config_dfast.py",
"description": "Specifies a configuration file for the [DFAST](https://github.com/nigyta/dfast_core) annotation method.",
"help_text": "This can be used instead of PROKKA if required to specify a specific config file for annotation. If you want to know how to create your config file, please refer to the [DFAST](https://github.com/nigyta/dfast_core) readme on how to create one. The default config (`assets/test_config_dfast.py`) is just included for testing, so if you want to annotate using DFAST, you have to create a config!"
"help_text": "If you want to know how to create your config file, please refer to the [DFAST](https://github.com/nigyta/dfast_core) readme on how to create one. The default config (`assets/test_config_dfast.py`) is just included for testing, so if you want to annotate using DFAST, you have to create a config!"
}
}
},
31 changes: 3 additions & 28 deletions subworkflows/local/utils_nfcore_bacass_pipeline/main.nf
@@ -75,7 +75,7 @@ workflow PIPELINE_INITIALISATION {
//
// Custom validation for pipeline parameters
//
validateInputParameters()
//validateInputParameters()

//
// Create channel from input file provided through params.input
@@ -151,7 +151,7 @@ workflow PIPELINE_COMPLETION {
// Check and validate pipeline parameters
//
def validateInputParameters() {
genomeExistsError()
// Add functions here for parameters validation
}

//
Expand All @@ -167,31 +167,6 @@ def validateInputSamplesheet(input) {

return [ metas[0], fastqs, longread, fast5]
}
//
// Get attribute from genome config file e.g. fasta
//
def getGenomeAttribute(attribute) {
if (params.genomes && params.genome && params.genomes.containsKey(params.genome)) {
if (params.genomes[ params.genome ].containsKey(attribute)) {
return params.genomes[ params.genome ][ attribute ]
}
}
return null
}

//
// Exit pipeline if incorrect --genome key provided
//
def genomeExistsError() {
if (params.genomes && params.genome && !params.genomes.containsKey(params.genome)) {
def error_string = "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n" +
" Genome '${params.genome}' not found in any config files provided to the pipeline.\n" +
" Currently, the available genome keys are:\n" +
" ${params.genomes.keySet().join(", ")}\n" +
"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
error(error_string)
}
}

//
// Generate methods description for MultiQC
@@ -235,7 +210,7 @@ def toolBibliographyText() {
"<li>Wood, D.E., Lu, J. & Langmead, B. Improved metagenomic analysis with Kraken 2. Genome Biol 20, 257 (2019). https://doi.org/10.1186/s13059-019-1891-0</li>",
"<li>Wick RR, Judd LM, Gorrie CL, Holt KE. Unicycler: Resolving bacterial genome assemblies from short and long sequencing reads. PLoS Comput Biol. 2017 Jun 8;13(6):e1005595. doi: 10.1371/journal.pcbi.1005595.</li>",
"<li>Heng Li, Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences, Bioinformatics, Volume 32, Issue 14, July 2016, Pages 2103–2110, https://doi.org/10.1093/bioinformatics/btw152</li>",
"<li>[rpetit/dragonflye](https://github.com/rpetit3/dragonflye)</li>",
"<li>Petit III, R. A. dragonflye: assemble bacterial isolate genomes from Nanopore reads (Version 1.1.2). https://github.com/rpetit3/dragonflye</li>",
"<li>Vaser R, Sović I, Nagarajan N, Šikić M. Fast and accurate de novo genome assembly from long uncorrected reads. Genome Res. 2017 May;27(5):737-746. doi: 10.1101/gr.214270.116.</li>",
"<li>Koren S, Walenz BP, Berlin K, Miller JR, Bergman NH, Phillippy AM. Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation. Genome Res. 2017 May;27(5):722-736. doi: 10.1101/gr.215087.116.</li>",
"<li>Medaka: Sequence correction provided by ONT Research. https://github.com/nanoporetech/medaka,</li>",