diff --git a/CHANGELOG.md b/CHANGELOG.md index 261d7c6..80bad8e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -3,6 +3,16 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## v1.2 - [30/01/24] + +- Remove `--brackendb` parameter as redundant. Bracken will now use the database location specified with `--krakendb`. +- Documentation updated. + +## v1.1 - [16/03/23] + +- Fix check_samplesheet.py bug +- Various other fixes + ## v1.0 - [15/11/22] - Initial release of avantonder/bacQC, created with the [nf-core](https://nf-co.re/) template. diff --git a/README.md b/README.md index 37208fb..a3e59bc 100644 --- a/README.md +++ b/README.md @@ -65,7 +65,6 @@ Alternatively the samplesheet.csv file created by nf-core/fetchngs can also be u -profile \ --input samplesheet.csv \ --kraken2db minikraken2_v1_8GB \ - --brackendb minikraken2_v1_8GB \ --genome_size 4300000 \ --outdir ``` @@ -77,7 +76,6 @@ Alternatively the samplesheet.csv file created by nf-core/fetchngs can also be u -profile \ --input samplesheet.csv \ --kraken2db minikraken2_v1_8GB \ - --brackendb minikraken2_v1_8GB \ --genome_size 4300000 \ --kraken_extract \ --tax_id \ diff --git a/docs/parameters.md b/docs/parameters.md index b17530c..5d590bc 100644 --- a/docs/parameters.md +++ b/docs/parameters.md @@ -9,8 +9,7 @@ Define where the pipeline should find input data and save output data. | Parameter | Description | Type | Default | Required | Hidden | |-----------|-----------|-----------|-----------|-----------|-----------| | `input` | Path to comma-separated file containing information about the samples in the experiment.
<details><summary>Help</summary>You will need to create a design file with information about the samples in your experiment before running the pipeline. Use this parameter to specify its location. It has to be a comma-separated file with 3 columns, and a header row.</details>
| `string` | | | | -| `kraken2db` | | `string` | None | | | -| `brackendb` | | `string` | None | | | +| `kraken2db` | Path to Kraken 2 database | `string` | None | | | | `outdir` | The output directory where the results will be saved. You have to use absolute paths to storage on Cloud infrastructure. | `string` | | | | | `email` | Email address for completion summary.
<details><summary>Help</summary>Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (`~/.nextflow/config`) then you don't need to specify this on the command line for every run.</details>
| `string` | | | | diff --git a/docs/usage.md b/docs/usage.md index 0c1231c..b06f22b 100644 --- a/docs/usage.md +++ b/docs/usage.md @@ -22,9 +22,6 @@ Use the `--input` parameter to specify the location of `samplesheet.csv`. It has --input '[path to samplesheet file]' ``` -```console ---input '[path to samplesheet file]' -``` ### Full samplesheet The pipeline will auto-detect whether a sample is single- or paired-end using the information provided in the samplesheet. The samplesheet can have as many columns as you desire, however, there is a strict requirement for the first 3 columns to match those defined in the table below. @@ -47,11 +44,10 @@ An [example samplesheet](../assets/samplesheet.csv) has been provided with the p ## Kraken 2 database -The pipeline can be provided with a path to a Kraken 2 database which is used, along with Bracken, to assign sequence reads to a particular taxon. Use the `--kraken2db` and `--brackendb` parameters to specify the location of the Kraken 2 database: +The pipeline can be provided with a path to a Kraken 2 database which is used, along with Bracken, to assign sequence reads to a particular taxon. Use the `--kraken2db` parameter to specify the location of the Kraken 2 database: ```console --kraken2db '[path to Kraken 2 database]' ---brackendb '[path to Kraken 2 database]' ``` The Kraken 2 and Bracken steps can be skipped by specifying the `--skip_kraken2` parameter. @@ -73,7 +69,6 @@ nextflow run avantonder/bacQC \ --input samplesheet.csv \ -profile singularity \ --kraken2db path/to/kraken2/dir \ - --bracken path/to/kraken2/dir/ \ --genome_size \ --outdir \ -resume @@ -102,7 +97,7 @@ nextflow pull avantonder/bacQC It's a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline.
If you keep using the same tag, you'll be running the same version of the pipeline, even if there have been changes to the code since. -First, go to the [avantonder/bacQC releases page](https://github.com/avantonder/bacQC/releases) and find the latest version number - numeric only (eg. `1.3.1`). Then specify this when running the pipeline with `-r` (one hyphen) - eg. `-r 1.3.1`. +First, go to the [avantonder/bacQC releases page](https://github.com/avantonder/bacQC/releases) and find the latest version number - numeric only (eg. `1.2`). Then specify this when running the pipeline with `-r` (one hyphen) - eg. `-r 1.2`. This version number will be logged in reports when you run the pipeline, so that you'll know what you used when you look back in the future. @@ -118,8 +113,6 @@ Several generic profiles are bundled with the pipeline which instruct the pipeli > We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported. -The pipeline also dynamically loads configurations from [https://github.com/nf-core/configs](https://github.com/nf-core/configs) when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the [nf-core/configs documentation](https://github.com/nf-core/configs#documentation). - Note that multiple profiles can be loaded, for example: `-profile test,docker` - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles.
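The profile-ordering rule above (`-profile test,docker`, later profiles overwrite earlier ones) amounts to a left-to-right merge. A toy sketch of that behaviour; the profile names and settings below are invented for illustration and are not the pipeline's real configuration:

```python
# Toy sketch of Nextflow's `-profile a,b` semantics: profiles are applied
# left to right, so a setting defined in a later profile overwrites the
# same setting from an earlier one. Profile contents here are made up.
PROFILES = {
    "test":   {"max_cpus": 2, "docker_enabled": False},
    "docker": {"docker_enabled": True},
}

def resolve_profiles(profile_arg: str) -> dict:
    """Merge comma-separated profiles left to right; later keys win."""
    merged = {}
    for name in profile_arg.split(","):
        merged.update(PROFILES[name])
    return merged

# `-profile test,docker` ends up with docker_enabled=True;
# swapping the order flips any setting both profiles define.
```

Swapping the order changes the result for any setting both profiles define, which is why the argument order matters.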
@@ -127,13 +120,10 @@ If `-profile` is not specified, the pipeline will run locally and expect all sof * `docker` * A generic configuration profile to be used with [Docker](https://docker.com/) - * Pulls software from Docker Hub: [`nfcore/bacqc`](https://hub.docker.com/r/nfcore/bacqc/) * `singularity` * A generic configuration profile to be used with [Singularity](https://sylabs.io/docs/) - * Pulls software from Docker Hub: [`nfcore/bacqc`](https://hub.docker.com/r/nfcore/bacqc/) * `podman` * A generic configuration profile to be used with [Podman](https://podman.io/) - * Pulls software from Docker Hub: [`nfcore/bacqc`](https://hub.docker.com/r/nfcore/bacqc/) * `conda` * Please only use Conda as a last resort i.e. when it's not possible to run the pipeline with Docker, Singularity or Podman. * A generic configuration profile to be used with [Conda](https://conda.io/docs/) @@ -168,10 +158,6 @@ process { See the main [Nextflow documentation](https://www.nextflow.io/docs/latest/config.html) for more information. -If you are likely to be running `nf-core` pipelines regularly it may be a good idea to request that your custom config file is uploaded to the `nf-core/configs` git repository. Before you do this please can you test that the config file works with your pipeline of choice using the `-c` parameter (see definition above). You can then create a pull request to the `nf-core/configs` repository with the addition of your config file, associated documentation file (see examples in [`nf-core/configs/docs`](https://github.com/nf-core/configs/tree/master/docs)), and amending [`nfcore_custom.config`](https://github.com/nf-core/configs/blob/master/nfcore_custom.config) to include your custom profile. - -If you have any questions or issues please send us a message on [Slack](https://nf-co.re/join/slack) on the [`#configs` channel](https://nfcore.slack.com/channels/configs). - ### Running in the background Nextflow handles job submissions and supervises the running jobs. 
The Nextflow process must run until the pipeline is finished. diff --git a/modules/local/samplesheet_check.nf b/modules/local/samplesheet_check.nf index 10411e3..80e1b50 100644 --- a/modules/local/samplesheet_check.nf +++ b/modules/local/samplesheet_check.nf @@ -1,8 +1,9 @@ process SAMPLESHEET_CHECK { tag "$samplesheet" - executor 'local' - memory 100.MB + label 'process_low' + //executor 'local' + //memory 100.MB conda (params.enable_conda ? "conda-forge::python=3.8.3" : null) container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ? diff --git a/nextflow.config b/nextflow.config index 0575fa5..6b26cc4 100644 --- a/nextflow.config +++ b/nextflow.config @@ -10,11 +10,11 @@ params { // Input options - input = null + input = null + genome_size = null // Databases - kraken2db = null - brackendb = null + kraken2db = null // MultiQC options multiqc_config = null @@ -171,7 +171,7 @@ manifest { description = 'Pipeline for running QC on bacterial sequence data' mainScript = 'main.nf' nextflowVersion = '!>=22.04.3' - version = '1.0' + version = '1.2' } // Load modules.config for DSL2 module specific options diff --git a/nextflow_schema.json b/nextflow_schema.json index 427e254..c866c2d 100644 --- a/nextflow_schema.json +++ b/nextflow_schema.json @@ -1,305 +1,299 @@ { - "$schema": "http://json-schema.org/draft-07/schema", - "$id": "https://raw.githubusercontent.com/avantonder/bacQC/master/nextflow_schema.json", - "title": "avantonder/bacQC pipeline parameters", - "description": "Pipeline for running QC on bacterial sequence data", - "type": "object", - "definitions": { - "input_output_options": { - "title": "Input/output options", - "type": "object", - "fa_icon": "fas fa-terminal", - "description": "Define where the pipeline should find input data and save output data.", - "required": [ - "input", - "outdir" - ], - "properties": { - "input": { - "type": "string", - "format": "file-path", - "mimetype": "text/csv", - "pattern": 
"^\\S+\\.csv$", - "schema": "assets/schema_input.json", - "description": "Path to comma-separated file containing information about the samples in the experiment.", - "help_text": "You will need to create a design file with information about the samples in your experiment before running the pipeline. Use this parameter to specify its location. It has to be a comma-separated file with 3 columns, and a header row. See [usage docs](https://nf-co.re/bovisanalyzer/usage#samplesheet-input).", - "fa_icon": "fas fa-file-csv" - }, - "kraken2db": { - "type": "string", - "default": null - }, - "brackendb": { - "type": "string", - "default": null - }, - "outdir": { - "type": "string", - "format": "directory-path", - "description": "The output directory where the results will be saved. You have to use absolute paths to storage on Cloud infrastructure.", - "fa_icon": "fas fa-folder-open" - }, - "email": { - "type": "string", - "description": "Email address for completion summary.", - "fa_icon": "fas fa-envelope", - "help_text": "Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (`~/.nextflow/config`) then you don't need to specify this on the command line for every run.", - "pattern": "^([a-zA-Z0-9_\\-\\.]+)@([a-zA-Z0-9_\\-\\.]+)\\.([a-zA-Z]{2,5})$" - }, - "multiqc_title": { - "type": "string", - "description": "MultiQC report title. 
Printed as page header, used for filename if not otherwise specified.", - "fa_icon": "fas fa-file-signature" - } - } + "$schema": "http://json-schema.org/draft-07/schema", + "$id": "https://raw.githubusercontent.com/avantonder/bacQC/master/nextflow_schema.json", + "title": "avantonder/bacQC pipeline parameters", + "description": "Pipeline for running QC on bacterial sequence data", + "type": "object", + "definitions": { + "input_output_options": { + "title": "Input/output options", + "type": "object", + "fa_icon": "fas fa-terminal", + "description": "Define where the pipeline should find input data and save output data.", + "required": ["input", "outdir"], + "properties": { + "input": { + "type": "string", + "format": "file-path", + "mimetype": "text/csv", + "pattern": "^\\S+\\.csv$", + "schema": "assets/schema_input.json", + "description": "Path to comma-separated file containing information about the samples in the experiment.", + "help_text": "You will need to create a design file with information about the samples in your experiment before running the pipeline. Use this parameter to specify its location. It has to be a comma-separated file with 3 columns, and a header row. See [usage docs](https://nf-co.re/bovisanalyzer/usage#samplesheet-input).", + "fa_icon": "fas fa-file-csv" }, - "quality_control_options": { - "title": "Quality Control options", - "type": "object", - "description": "", - "default": "", - "properties": { - "skip_fastp": { - "type": "boolean", - "description": "Skip the fastp trimming step." - }, - "skip_fastqc": { - "type": "boolean", - "description": "Skip the fastQC step." - }, - "save_trimmed_fail": { - "type": "boolean", - "description": "Save failed trimmed reads." - }, - "skip_multiqc": { - "type": "boolean", - "description": "Skip MultiQC." - }, - "adapter_file": { - "type": "string", - "default": "'${baseDir}/assets/adapters.fas'", - "description": "Path to file containing adapters in FASTA format." 
- }, - "skip_kraken2": { - "type": "boolean", - "description": "Skip Kraken 2 and Bracken." - }, - "genome_size": { - "type": "integer", - "description": "Specify a genome size to be used by fastq-scan to calculate coverage" - } - } + "kraken2db": { + "type": "string", + "default": "None", + "description": "Path to Kraken 2 database" }, - "extract_reads_options": { - "title": "Extract reads options", - "type": "object", - "description": "", - "default": "", - "properties": { - "kraken_extract": { - "type": "boolean", - "description": "Extract reads from fastq files based on taxon id" - }, - "tax_id": { - "type": "string", - "description": "If --kraken_extract is used, --tax_is specifies the taxon id to be used to extract reads" - } - } + "outdir": { + "type": "string", + "format": "directory-path", + "description": "The output directory where the results will be saved. You have to use absolute paths to storage on Cloud infrastructure.", + "fa_icon": "fas fa-folder-open" }, - "institutional_config_options": { - "title": "Institutional config options", - "type": "object", - "fa_icon": "fas fa-university", - "description": "Parameters used to describe centralised config profiles. These should not be edited.", - "help_text": "The centralised nf-core configuration profiles use a handful of pipeline parameters to describe themselves. This information is then printed to the Nextflow log when you run a pipeline. 
You should not need to change these values when you run a pipeline.", - "properties": { - "custom_config_version": { - "type": "string", - "description": "Git commit id for Institutional configs.", - "default": "master", - "hidden": true, - "fa_icon": "fas fa-users-cog" - }, - "custom_config_base": { - "type": "string", - "description": "Base directory for Institutional configs.", - "default": "https://raw.githubusercontent.com/nf-core/configs/master", - "hidden": true, - "help_text": "If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.", - "fa_icon": "fas fa-users-cog" - }, - "config_profile_name": { - "type": "string", - "description": "Institutional config name.", - "hidden": true, - "fa_icon": "fas fa-users-cog" - }, - "config_profile_description": { - "type": "string", - "description": "Institutional config description.", - "hidden": true, - "fa_icon": "fas fa-users-cog" - }, - "config_profile_contact": { - "type": "string", - "description": "Institutional config contact information.", - "hidden": true, - "fa_icon": "fas fa-users-cog" - }, - "config_profile_url": { - "type": "string", - "description": "Institutional config URL link.", - "hidden": true, - "fa_icon": "fas fa-users-cog" - } - } + "email": { + "type": "string", + "description": "Email address for completion summary.", + "fa_icon": "fas fa-envelope", + "help_text": "Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. 
If set in your user config file (`~/.nextflow/config`) then you don't need to specify this on the command line for every run.", + "pattern": "^([a-zA-Z0-9_\\-\\.]+)@([a-zA-Z0-9_\\-\\.]+)\\.([a-zA-Z]{2,5})$" }, - "max_job_request_options": { - "title": "Max job request options", - "type": "object", - "fa_icon": "fab fa-acquisitions-incorporated", - "description": "Set the top limit for requested resources for any single job.", - "help_text": "If you are running on a smaller system, a pipeline step requesting more resources than are available may cause the Nextflow to stop the run with an error. These options allow you to cap the maximum resources requested by any single job so that the pipeline will run on your system.\n\nNote that you can not _increase_ the resources requested by any job using these options. For that you will need your own configuration file. See [the nf-core website](https://nf-co.re/usage/configuration) for details.", - "properties": { - "max_cpus": { - "type": "integer", - "description": "Maximum number of CPUs that can be requested for any single job.", - "default": 16, - "fa_icon": "fas fa-microchip", - "hidden": true, - "help_text": "Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. `--max_cpus 1`" - }, - "max_memory": { - "type": "string", - "description": "Maximum amount of memory that can be requested for any single job.", - "default": "128.GB", - "fa_icon": "fas fa-memory", - "pattern": "^\\d+(\\.\\d+)?\\.?\\s*(K|M|G|T)?B$", - "hidden": true, - "help_text": "Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. 
`--max_memory '8.GB'`" - }, - "max_time": { - "type": "string", - "description": "Maximum amount of time that can be requested for any single job.", - "default": "240.h", - "fa_icon": "far fa-clock", - "pattern": "^(\\d+\\.?\\s*(s|m|h|day)\\s*)+$", - "hidden": true, - "help_text": "Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. `--max_time '2.h'`" - } - } + "multiqc_title": { + "type": "string", + "description": "MultiQC report title. Printed as page header, used for filename if not otherwise specified.", + "fa_icon": "fas fa-file-signature" + } + } + }, + "quality_control_options": { + "title": "Quality Control options", + "type": "object", + "description": "", + "default": "", + "properties": { + "skip_fastp": { + "type": "boolean", + "description": "Skip the fastp trimming step." + }, + "skip_fastqc": { + "type": "boolean", + "description": "Skip the fastQC step." + }, + "save_trimmed_fail": { + "type": "boolean", + "description": "Save failed trimmed reads." + }, + "skip_multiqc": { + "type": "boolean", + "description": "Skip MultiQC." + }, + "adapter_file": { + "type": "string", + "default": "'${baseDir}/assets/adapters.fas'", + "description": "Path to file containing adapters in FASTA format." + }, + "skip_kraken2": { + "type": "boolean", + "description": "Skip Kraken 2 and Bracken." 
+ }, + "genome_size": { + "type": "integer", + "description": "Specify a genome size to be used by fastq-scan to calculate coverage" + } + } + }, + "extract_reads_options": { + "title": "Extract reads options", + "type": "object", + "description": "", + "default": "", + "properties": { + "kraken_extract": { + "type": "boolean", + "description": "Extract reads from fastq files based on taxon id" }, - "generic_options": { - "title": "Generic options", - "type": "object", - "fa_icon": "fas fa-file-import", - "description": "Less common options for the pipeline, typically set in a config file.", - "help_text": "These options are common to all nf-core pipelines and allow you to customise some of the core preferences for how the pipeline runs.\n\nTypically these options would be set in a Nextflow config file loaded for all pipeline runs, such as `~/.nextflow/config`.", - "properties": { - "help": { - "type": "boolean", - "description": "Display help text.", - "fa_icon": "fas fa-question-circle", - "hidden": true - }, - "publish_dir_mode": { - "type": "string", - "default": "copy", - "description": "Method used to save pipeline results to output directory.", - "help_text": "The Nextflow `publishDir` option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. 
See [Nextflow docs](https://www.nextflow.io/docs/latest/process.html#publishdir) for details.", - "fa_icon": "fas fa-copy", - "enum": [ - "symlink", - "rellink", - "link", - "copy", - "copyNoFollow", - "move" - ], - "hidden": true - }, - "email_on_fail": { - "type": "string", - "description": "Email address for completion summary, only when pipeline fails.", - "fa_icon": "fas fa-exclamation-triangle", - "pattern": "^([a-zA-Z0-9_\\-\\.]+)@([a-zA-Z0-9_\\-\\.]+)\\.([a-zA-Z]{2,5})$", - "help_text": "An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.", - "hidden": true - }, - "plaintext_email": { - "type": "boolean", - "description": "Send plain-text email instead of HTML.", - "fa_icon": "fas fa-remove-format", - "hidden": true - }, - "max_multiqc_email_size": { - "type": "string", - "description": "File size limit when attaching MultiQC reports to summary emails.", - "pattern": "^\\d+(\\.\\d+)?\\.?\\s*(K|M|G|T)?B$", - "default": "25.MB", - "fa_icon": "fas fa-file-upload", - "hidden": true - }, - "monochrome_logs": { - "type": "boolean", - "description": "Do not use coloured log outputs.", - "fa_icon": "fas fa-palette", - "hidden": true - }, - "multiqc_config": { - "type": "string", - "description": "Custom config file to supply to MultiQC.", - "fa_icon": "fas fa-cog", - "hidden": true - }, - "tracedir": { - "type": "string", - "description": "Directory to keep pipeline Nextflow logs and reports.", - "default": "${params.outdir}/pipeline_info", - "fa_icon": "fas fa-cogs", - "hidden": true - }, - "validate_params": { - "type": "boolean", - "description": "Boolean whether to validate parameters against the schema at runtime", - "default": true, - "fa_icon": "fas fa-check-square", - "hidden": true - }, - "show_hidden_params": { - "type": "boolean", - "fa_icon": "far fa-eye-slash", - "description": "Show all params when using `--help`", - "hidden": true, - "help_text": "By default, parameters 
set as _hidden_ in the schema are not shown on the command line when a user runs with `--help`. Specifying this option will tell the pipeline to show all parameters." - }, - "enable_conda": { - "type": "boolean", - "description": "Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter.", - "hidden": true, - "fa_icon": "fas fa-bacon" - } - } + "tax_id": { + "type": "string", + "description": "If --kraken_extract is used, --tax_is specifies the taxon id to be used to extract reads" } + } }, - "allOf": [ - { - "$ref": "#/definitions/input_output_options" + "institutional_config_options": { + "title": "Institutional config options", + "type": "object", + "fa_icon": "fas fa-university", + "description": "Parameters used to describe centralised config profiles. These should not be edited.", + "help_text": "The centralised nf-core configuration profiles use a handful of pipeline parameters to describe themselves. This information is then printed to the Nextflow log when you run a pipeline. You should not need to change these values when you run a pipeline.", + "properties": { + "custom_config_version": { + "type": "string", + "description": "Git commit id for Institutional configs.", + "default": "master", + "hidden": true, + "fa_icon": "fas fa-users-cog" + }, + "custom_config_base": { + "type": "string", + "description": "Base directory for Institutional configs.", + "default": "https://raw.githubusercontent.com/nf-core/configs/master", + "hidden": true, + "help_text": "If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. 
If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.", + "fa_icon": "fas fa-users-cog" + }, + "config_profile_name": { + "type": "string", + "description": "Institutional config name.", + "hidden": true, + "fa_icon": "fas fa-users-cog" }, - { - "$ref": "#/definitions/quality_control_options" + "config_profile_description": { + "type": "string", + "description": "Institutional config description.", + "hidden": true, + "fa_icon": "fas fa-users-cog" }, - { - "$ref": "#/definitions/extract_reads_options" + "config_profile_contact": { + "type": "string", + "description": "Institutional config contact information.", + "hidden": true, + "fa_icon": "fas fa-users-cog" }, - { - "$ref": "#/definitions/institutional_config_options" + "config_profile_url": { + "type": "string", + "description": "Institutional config URL link.", + "hidden": true, + "fa_icon": "fas fa-users-cog" + } + } + }, + "max_job_request_options": { + "title": "Max job request options", + "type": "object", + "fa_icon": "fab fa-acquisitions-incorporated", + "description": "Set the top limit for requested resources for any single job.", + "help_text": "If you are running on a smaller system, a pipeline step requesting more resources than are available may cause the Nextflow to stop the run with an error. These options allow you to cap the maximum resources requested by any single job so that the pipeline will run on your system.\n\nNote that you can not _increase_ the resources requested by any job using these options. For that you will need your own configuration file. See [the nf-core website](https://nf-co.re/usage/configuration) for details.", + "properties": { + "max_cpus": { + "type": "integer", + "description": "Maximum number of CPUs that can be requested for any single job.", + "default": 16, + "fa_icon": "fas fa-microchip", + "hidden": true, + "help_text": "Use to set an upper-limit for the CPU requirement for each process. 
Should be an integer e.g. `--max_cpus 1`" }, - { - "$ref": "#/definitions/max_job_request_options" + "max_memory": { + "type": "string", + "description": "Maximum amount of memory that can be requested for any single job.", + "default": "128.GB", + "fa_icon": "fas fa-memory", + "pattern": "^\\d+(\\.\\d+)?\\.?\\s*(K|M|G|T)?B$", + "hidden": true, + "help_text": "Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. `--max_memory '8.GB'`" }, - { - "$ref": "#/definitions/generic_options" + "max_time": { + "type": "string", + "description": "Maximum amount of time that can be requested for any single job.", + "default": "240.h", + "fa_icon": "far fa-clock", + "pattern": "^(\\d+\\.?\\s*(s|m|h|day)\\s*)+$", + "hidden": true, + "help_text": "Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. `--max_time '2.h'`" } - ] + } + }, + "generic_options": { + "title": "Generic options", + "type": "object", + "fa_icon": "fas fa-file-import", + "description": "Less common options for the pipeline, typically set in a config file.", + "help_text": "These options are common to all nf-core pipelines and allow you to customise some of the core preferences for how the pipeline runs.\n\nTypically these options would be set in a Nextflow config file loaded for all pipeline runs, such as `~/.nextflow/config`.", + "properties": { + "help": { + "type": "boolean", + "description": "Display help text.", + "fa_icon": "fas fa-question-circle", + "hidden": true + }, + "publish_dir_mode": { + "type": "string", + "default": "copy", + "description": "Method used to save pipeline results to output directory.", + "help_text": "The Nextflow `publishDir` option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. 
See [Nextflow docs](https://www.nextflow.io/docs/latest/process.html#publishdir) for details.", + "fa_icon": "fas fa-copy", + "enum": [ + "symlink", + "rellink", + "link", + "copy", + "copyNoFollow", + "move" + ], + "hidden": true + }, + "email_on_fail": { + "type": "string", + "description": "Email address for completion summary, only when pipeline fails.", + "fa_icon": "fas fa-exclamation-triangle", + "pattern": "^([a-zA-Z0-9_\\-\\.]+)@([a-zA-Z0-9_\\-\\.]+)\\.([a-zA-Z]{2,5})$", + "help_text": "An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.", + "hidden": true + }, + "plaintext_email": { + "type": "boolean", + "description": "Send plain-text email instead of HTML.", + "fa_icon": "fas fa-remove-format", + "hidden": true + }, + "max_multiqc_email_size": { + "type": "string", + "description": "File size limit when attaching MultiQC reports to summary emails.", + "pattern": "^\\d+(\\.\\d+)?\\.?\\s*(K|M|G|T)?B$", + "default": "25.MB", + "fa_icon": "fas fa-file-upload", + "hidden": true + }, + "monochrome_logs": { + "type": "boolean", + "description": "Do not use coloured log outputs.", + "fa_icon": "fas fa-palette", + "hidden": true + }, + "multiqc_config": { + "type": "string", + "description": "Custom config file to supply to MultiQC.", + "fa_icon": "fas fa-cog", + "hidden": true + }, + "tracedir": { + "type": "string", + "description": "Directory to keep pipeline Nextflow logs and reports.", + "default": "${params.outdir}/pipeline_info", + "fa_icon": "fas fa-cogs", + "hidden": true + }, + "validate_params": { + "type": "boolean", + "description": "Boolean whether to validate parameters against the schema at runtime", + "default": true, + "fa_icon": "fas fa-check-square", + "hidden": true + }, + "show_hidden_params": { + "type": "boolean", + "fa_icon": "far fa-eye-slash", + "description": "Show all params when using `--help`", + "hidden": true, + "help_text": "By default, parameters 
set as _hidden_ in the schema are not shown on the command line when a user runs with `--help`. Specifying this option will tell the pipeline to show all parameters." + }, + "enable_conda": { + "type": "boolean", + "description": "Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter.", + "hidden": true, + "fa_icon": "fas fa-bacon" + } + } + } + }, + "allOf": [ + { + "$ref": "#/definitions/input_output_options" + }, + { + "$ref": "#/definitions/quality_control_options" + }, + { + "$ref": "#/definitions/extract_reads_options" + }, + { + "$ref": "#/definitions/institutional_config_options" + }, + { + "$ref": "#/definitions/max_job_request_options" + }, + { + "$ref": "#/definitions/generic_options" + } + ] } diff --git a/workflows/bacqc.nf b/workflows/bacqc.nf index 2b7f243..4cc1665 100644 --- a/workflows/bacqc.nf +++ b/workflows/bacqc.nf @@ -10,13 +10,11 @@ def summary_params = NfcoreSchema.paramsSummaryMap(workflow, params) WorkflowBacQC.initialise(params, log) // Check input path parameters to see if they exist -def checkPathParamList = [ params.input, params.multiqc_config, params.kraken2db, params.brackendb] +def checkPathParamList = [ params.input, params.multiqc_config, params.kraken2db ] for (param in checkPathParamList) { if (param) { file(param, checkIfExists: true) } } // Check mandatory parameters if (params.input) { ch_input = file(params.input) } else { exit 1, 'Input samplesheet not specified!' } -//if (params.kraken2db) { ch_kraken2db = file(params.kraken2db) } else { exit 1, 'kraken2 database not specified!' } -//if (params.brackendb) { ch_brackendb = file(params.brackendb) } else { exit 1, 'bracken database not specified!' 
} /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -198,10 +196,8 @@ workflow BACQC { // ch_kraken2_multiqc = Channel.empty() ch_kraken2db = Channel.empty() - ch_brackendb = Channel.empty() if (!params.skip_kraken2) { ch_kraken2db = file(params.kraken2db) - ch_brackendb = file(params.brackendb) KRAKEN2_KRAKEN2 ( ch_variants_fastq, @@ -216,7 +212,7 @@ workflow BACQC { // BRACKEN_BRACKEN ( ch_kraken2_bracken, - ch_brackendb + ch_kraken2db ) ch_bracken_krakenparse = BRACKEN_BRACKEN.out.reports ch_versions = ch_versions.mix(BRACKEN_BRACKEN.out.versions.first())
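The final hunk passes the same `ch_kraken2db` channel to both Kraken 2 and Bracken, which works because `bracken-build` writes its k-mer distribution files into the Kraken 2 database directory itself. A minimal sketch of that assumption; `bracken_ready` is a hypothetical helper, not part of the pipeline:

```python
# Why `--brackendb` was redundant: Bracken's k-mer distribution files
# (e.g. database100mers.kmer_distrib, produced by bracken-build) live
# inside the Kraken 2 database directory, so a single --kraken2db path
# serves both tools.
from pathlib import Path

def bracken_ready(kraken2db: str, read_len: int = 100) -> bool:
    """Return True if the Kraken 2 directory also contains the Bracken
    distribution file for the given read length."""
    return (Path(kraken2db) / f"database{read_len}mers.kmer_distrib").is_file()
```

If the distribution file for the chosen read length is missing, Bracken would fail at run time, so checking the single shared directory up front covers both tools.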