A Comparison of Calling vs. Dispatching Workflows in GitHub Actions

workflow_call vs. workflow_dispatch vs. repository_dispatch and how they affect execution

There are two types of execution units in GitHub Actions. One is the workflow, which organizes global pipeline execution into smaller units like jobs and steps. The other is the custom (composite) action, which can represent one or more steps of a workflow job.

I’ll be focusing here on workflows and how they can be executed in three different ways:

  • calling a (reusable) workflow - using the workflow_call event

  • dispatching - using the workflow_dispatch event (via the workflow dispatch REST API endpoint)

  • dispatching - using the repository_dispatch event (via the repository dispatch REST API endpoint)

πŸ’‘
Teaser: Usually some combination of these methods will work best.

A basic workflow

Every workflow is essentially an event handler, defined in YAML, for events generated by GitHub. It starts with its name and the on section, where we define which events it listens to. Then comes the jobs section, which defines what happens when those events occur.

name: An Example Workflow

# what events to handle
on: 
  push:
    branches: [main, develop]

jobs:
  first:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the first job"

  second:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the second job"
#...etc

As the workflow file gets larger, with more jobs or longer lists of steps, you can split it into several different YAML files.

⚠️ The first limitation is that all workflows must be located in the specific directory .github/workflows in the repository. No subdirectory structure is allowed. So, sometimes you need to be creative with names like:

scheduled-nightly-php8.4-tests.yaml
preview-create-command.yaml
scheduled-nightly-percona-db-tests.yaml
reusable-cypress-tests.yaml
#...etc

⚠️ Another issue is the amount of code you need to repeat each time you split a workflow, such as the name and on sections (or others like concurrency, env, paths, paths-ignore, etc.), plus the many job definitions:

# one-big-workflow.yaml
name: An Example Workflow
env: 
  SOME_CONSTANT: value

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.ref_name != 'master' }}

on: 
  push:
    branches: [main, develop]
    paths-ignore:
      - '**/*.md'

jobs:
  first:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the first job"
  second:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the second job"

# first-workflow.yaml
name: First workflow
env: 
  SOME_CONSTANT: value

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.ref_name != 'master' }}

on: 
  push:
    branches: [main, develop]
    paths-ignore:
      - '**/*.md'

jobs:
  first:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the first job"

# second-workflow.yaml
name: Second workflow
env: 
  SOME_CONSTANT: value

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.ref_name != 'master' }}

on: 
  push:
    branches: [main, develop]
    paths-ignore:
      - '**/*.md'

jobs:
  second:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the second job"

⚠️ Finally, you lose the ability to make jobs dependent on each other (using needs prerequisites) because they are now completely independent.
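
For reference, this is roughly what such a dependency looks like while both jobs still live in one file (a sketch based on the example above):

jobs:
  first:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the first job"

  second:
    needs: first # runs only after "first" finishes successfully
    runs-on: ubuntu-latest
    steps:
      - run: echo "I'm the second job"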

Reusable workflows

To avoid repeating code, GitHub lets you reuse a specific parameterized workflow in a separate YAML file, known as a reusable workflow. You can call this workflow multiple times in different jobs.

TL;DR of reusable workflow:

  • It works like a normal workflow, but it listens to the special event workflow_call.

  • It can call another reusable workflow.

  • It can use needs prerequisites that would be lost with separate normal workflows.

  • It requires less boilerplate code than separate normal workflows.

For example, I might have several Cypress/Playwright tests that I want to run on different templates. I can create a reusable workflow that runs Cypress, takes one parameter for the template name, and then call it multiple times from the main workflow:

# reusable-cypress.yaml
name: "πŸ”„ Reusable Cypress Tests"

on:
  workflow_call:
    inputs:
      template-name:
        description: Template name
        type: string
        required: true

jobs:
  run-cypress:
    name: Run cypress ${{ inputs.template-name }}
    runs-on: ubuntu-latest
    steps:
      - run: cypress ...

# main.yaml
name: Main workflow

on: 
  push:
    branches: [main]

jobs:
  cypress-blue:
    name: Cypress on Blue template
    uses: ./.github/workflows/reusable-cypress.yaml
    with:
      template-name: Blue

  cypress-red:
    name: Cypress on Red template
    uses: ./.github/workflows/reusable-cypress.yaml
    with:
      template-name: Red

It is an elegant solution, but it is also very limited. The main limitations from an execution standpoint are:

  • ⚠️ There are only four levels of workflow nesting allowed.

  • ⚠️ A maximum of 20 unique reusable workflow files can be used in a single workflow run.

  • ⚠️ A called reusable workflow always runs in the caller's context; it cannot run in the context of the remote repository that hosts it (more on this below).

  • ⚠️ It improves code organization, but it also increases maintenance effort and reduces readability.

In more complex repositories, like monorepos, we can reach these limits quickly. To overcome them, one solution might be to switch to using dispatch (running workflows through the REST API) with normal workflows instead of calling reusable ones.

Dispatching workflows

Dispatching workflows solves the limitations mentioned above, but there are several tradeoffs to consider (a minimal dispatch example follows the list):

  • ⚠️ Unclear execution visibility - When running a workflow, all the reusable workflows used during the run are clearly visible in the run view, all in one place. Dispatched workflows are not connected, so extra coding is needed to address this issue.

  • ⚠️ The number of required runners will likely double compared to using reusable workflows. Typically, one runner is used to monitor the other one where the dispatched workflow is running tests and gathering data (like test results).

  • ⚠️ More code is needed - even more than with reusable workflows - to provide at least some of the visibility mentioned in the first point.

  • ⚠️ Synchronous dispatching is slower. When waiting for the result of a dispatched workflow, regular polling (using a REST API call) is needed to get the result. This process always takes longer than directly calling a reusable workflow.

  • ⚠️ The number of inputs is limited to a maximum of ten, compared to the unlimited inputs of a reusable workflow.
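
To make the mechanics concrete, here is a minimal sketch of a dispatch, assuming the GitHub CLI is available on the runner (it is preinstalled on GitHub-hosted runners). The file name and input mirror the Cypress example above and are placeholders:

# dispatchable-cypress.yaml - a normal workflow triggered via the REST API (hypothetical file name)
name: Dispatchable Cypress Tests

on:
  workflow_dispatch:
    inputs:
      template-name:
        description: Template name
        type: string
        required: true

jobs:
  run-cypress:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Running Cypress on ${{ inputs.template-name }}"

# In the dispatching workflow, a single step triggers it through the API
# (the token may need `permissions: actions: write`, depending on repository settings):
#   - run: gh workflow run dispatchable-cypress.yaml --ref main -f template-name=Blue
#     env:
#       GH_TOKEN: ${{ github.token }}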

Reusable workflows vs. workflow_dispatch comparison table

| | Reusable workflows - workflow_call | Workflow dispatching - workflow_dispatch |
| --- | --- | --- |
| Nesting and unique files limited | ❌ max 20 unique files and 4 levels | ✅ no limits |
| Max. input params count limited | ✅ no limits | ❌ max. 10 inputs |
| Max. input payload limited | ✅ no limits | ❌ max. 64 KB |
| Can return outputs | ✅ yes | ❌ no |
| Visualised in the GitHub run view | ✅ yes | ❌ no |
| Cross-repository calls | ❌ no | ✅ yes |
| Can run multiple workflows without explicit naming | ❌ no | ❌ no |
| Full artifact access | ✅ yes | ❌ no - needs extra permission |
| Waiting for completion | ✅ yes | ❌ no - needs extra runner |
| Secrets access | ✅ yes - but have to be explicitly defined | ✅ yes |

The first two were explained already. Let me briefly explain the rest.

Max. inputs payload

Payload refers to the total size of all the values passed into your workflow through inputs. Dispatch events limit this to 64 KB, as the docs state:

  • The maximum payload for inputs is 65,535 characters.

Can return outputs

Reusable workflows can define outputs like this:

# reusable-unit-tests.yaml

on: 
  workflow_call:
    outputs:
      result:
        description: "All tests result"
        value: ${{ jobs.unit-tests-run.outputs.result }}

jobs:
  unit-tests-run:
    runs-on: ubuntu-latest
    # illustrative job body - the step id and echoed value are placeholders
    outputs:
      result: ${{ steps.tests.outputs.result }}
    steps:
      - id: tests
        run: echo "result=passed" >> "$GITHUB_OUTPUT"
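
On the caller side, the returned value can then be read through needs (a minimal sketch; the job and file names follow the example above):

# main.yaml (caller side)
jobs:
  unit-tests:
    uses: ./.github/workflows/reusable-unit-tests.yaml

  report:
    needs: unit-tests
    runs-on: ubuntu-latest
    steps:
      - run: echo "Unit tests returned ${{ needs.unit-tests.outputs.result }}"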

⚠️ This is not possible when dispatching, and you have to use artifacts to pass data back to the dispatching workflow.

Cross-repository calls

A called reusable workflow is always processed within the workflow run of the calling repository. It is not possible to run a remote reusable workflow in the context of the remote repository that hosts it.

A dispatched workflow, in contrast, always runs in the context of the repository where its file lives (in that repository's .github/workflows directory).
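
As a hedged sketch, a job in one repository can dispatch a workflow that lives in another repository with the GitHub CLI. The repository and workflow names below are placeholders, and a token with access to the target repository is assumed (the default GITHUB_TOKEN cannot trigger workflows in other repositories):

# dispatch a workflow that lives in another repository (placeholder names)
jobs:
  trigger-remote:
    runs-on: ubuntu-latest
    steps:
      - run: gh workflow run deploy.yaml --repo other-org/other-repo --ref main
        env:
          GH_TOKEN: ${{ secrets.CROSS_REPO_TOKEN }} # PAT or app token with access to the other repo (assumption)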

Full artifact access

The main workflow can always view and access all artifacts from all reusable workflows within its scope.

When dispatching, you have to extend the default GITHUB_TOKEN with extra permissions to read artifacts produced by the other workflow run:

permissions:
  actions: read
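
With that permission in place, the monitoring job can pull artifacts from the dispatched run, for example with actions/download-artifact@v4 and its run-id/github-token inputs. A sketch; the artifact name and the step providing the run id are hypothetical:

      - uses: actions/download-artifact@v4
        with:
          name: test-results                            # hypothetical artifact name
          run-id: ${{ steps.dispatch.outputs.run-id }}  # hypothetical output of an earlier dispatch step
          github-token: ${{ github.token }}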

Note: I wrote another article called Sending data between GitHub workflows about this topic.

Waiting for completion

With reusable workflows it’s easy to wait for their completion with the help of the needs keyword.

With dispatching, you need to use a special action like actions/workflow-dispatch-and-wait, which uses the REST API to repeatedly poll the workflow status until it completes or fails. This is slower and requires an additional runner just for the check-and-wait job.
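
If you prefer to avoid a third-party action, a minimal wait step can be sketched with the GitHub CLI; the step id providing the run id is hypothetical:

      - run: gh run watch "$RUN_ID" --exit-status   # blocks until the run finishes, fails the step if the run failed
        env:
          GH_TOKEN: ${{ github.token }}
          RUN_ID: ${{ steps.dispatch.outputs.run-id }}  # hypothetical output of an earlier dispatch step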

Secrets access

One difference here is that dispatched (normal) workflows always have access to the repository's secrets.XYZ variables.

With reusable workflows you have to explicitly pass secrets when calling them, either one by one or with secrets: inherit:

  cypress-tests:
    name: Cypress
    uses: ./.github/workflows/reusable-cypress-smoke-migrations.yaml
    secrets: inherit

Read more about secrets in reusable workflows in the GitHub docs.

Using the repository_dispatch event

Another way to dispatch a workflow is to use the repository_dispatch event.

Everything mentioned in the workflow_dispatch section applies here, with the following main differences:

  • repository_dispatch can dispatch multiple workflows with one dispatch event.

  • repository_dispatch can run only on the default branch. There is no ref parameter here.

  • Input is passed as a JSON client_payload, but the number of top-level keys is also limited to a maximum of 10.

Reusable workflows vs. repository_dispatch comparison table

| | Reusable workflows - workflow_call | Workflow dispatching - repository_dispatch |
| --- | --- | --- |
| Nesting and unique files limited | ❌ max 20 unique files and 4 levels | ✅ no limits |
| Max. input params count limited | ✅ no limits | ❌ max. 10 top-level JSON keys |
| Max. input payload limited | ✅ no limits | ❌ max. 64 KB |
| Can return outputs | ✅ yes | ❌ no |
| Visualised in the GitHub run view | ✅ yes | ❌ no |
| Cross-repository calls | ❌ no | ✅ yes |
| Can run multiple workflows without explicit naming | ❌ no | ✅ yes |
| Full artifact access | ✅ yes | ❌ no - needs extra permission |
| Waiting for completion | ✅ yes | ❌ no - needs extra runner |
| Secrets access | ✅ yes - but have to be explicitly defined | ✅ yes |

Dispatching multiple workflows with one dispatch event

When you dispatch a repository_dispatch event, you define its type like this (example taken from GitHub docs):

{
  "event_type": "test_result",
  "client_payload": {
    "passed": false,
    "message": "Error: timeout"
  }
}
πŸ’‘
You can use an action such as peter-evans/repository-dispatch (or a plain REST API call) to do the actual dispatch.

Multiple workflows can then catch this test_result event type by listening for it like this:

on:
  repository_dispatch:
    types: [test_result]
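
The dispatch itself can also be done from a workflow step with a plain REST call. A minimal sketch using the GitHub CLI, where OWNER/REPO is a placeholder and the token is assumed to have access to the target repository:

      - run: |
          gh api --method POST repos/OWNER/REPO/dispatches \
            -f event_type=test_result \
            -F 'client_payload[passed]=false' \
            -f 'client_payload[message]=Error: timeout'
        env:
          GH_TOKEN: ${{ secrets.DISPATCH_TOKEN }}  # token with access to the target repo (assumption)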

repository_dispatch can run only on the default branch

With workflow_dispatch you can pass an extra ref on which the dispatched workflow will run. This is not possible with repository_dispatch; it always runs on the default branch of the repository (quoting the docs):

This event will only trigger a workflow run if the workflow file exists on the default branch.

Possible use cases

  • creating custom events - a very nice example of this is an action providing support for custom comment slash commands: peter-evans/slash-command-dispatch

  • when you need to trigger many workflows at the same time - for example, calling a publish workflow for many libraries in a single monorepo

In conclusion

As mentioned above, there are several ways to organize code and execution in GitHub Actions, but each has its limitations.

The approach recommended by GitHub would likely be to use reusable workflows only. However, due to their limitations, they can't easily cover complex pipelines or monorepos.

The best practical solution would likely be a combination of reusable workflows and workflow (or repository) dispatching, especially for large codebases.

That is all for now. Thank you. πŸ‘‹

Happy coding :)
