Continuous Deployment

Merge Trigger Production Job

To set up continuous deployment of your data with dbt Cloud, we recommend creating a job that triggers a dbt Cloud production run when changes are pushed to main.

Then, select this job as the "Job that creates dbt artifacts" when setting up your dbt Cloud Integration.

  • Why?
    • To deploy new changes from pull requests immediately.
    • This keeps production up to date and enables accurate Datafold diffs.
    • By default, dbt Cloud runs the production job on a schedule, not on merges.
For example, the following GitHub Actions workflow triggers the dbt Cloud job on every push to main:

```yaml
name: Trigger dbt Cloud

on:
  push:
    branches:
      - main

jobs:
  run:
    runs-on: ubuntu-20.04
    timeout-minutes: 15

    steps:
      - name: checkout
        uses: actions/checkout@v2

      - name: Trigger dbt Cloud job
        run: |
          output=$(curl -X POST --fail \
            --header "Authorization: Token ${DBT_API_KEY}" \
            --header "Content-Type: application/json" \
            --data '{"cause": "Commit '"${GIT_SHA}"'"}' \
            https://cloud.getdbt.com/api/v2/accounts/${ACCOUNT_ID}/jobs/${JOB_ID}/run/)

          echo "Triggered dbt Cloud run at:"
          echo ${output} | jq -r .data.href
        env:
          DBT_API_KEY: ${{ secrets.DBT_API_KEY }}
          ACCOUNT_ID: 1234 # dbt Cloud account ID
          JOB_ID: 4567 # dbt Cloud job ID of the production job
          GIT_SHA: "${{ github.ref == 'refs/heads/main' && github.sha || github.event.pull_request.head.sha }}"
```

You need to add the dbt Cloud API key as a secret in GitHub Actions and set ACCOUNT_ID and JOB_ID to your dbt Cloud account ID and the ID of the job that builds production. Both IDs can be found in the dbt Cloud UI.
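As a sketch of that setup, assuming you use the GitHub CLI and have a dbt Cloud API token in the DBT_CLOUD_API_TOKEN environment variable (a placeholder name), you could store the secret and look up the account and job IDs via the dbt Cloud v2 Administrative API:

```bash
# Store the dbt Cloud API token as the GitHub Actions secret DBT_API_KEY
# (run from inside the repository; gh prompts for the value if --body is omitted).
gh secret set DBT_API_KEY --body "$DBT_CLOUD_API_TOKEN"

# List your accounts to find ACCOUNT_ID.
curl --fail \
  --header "Authorization: Token $DBT_CLOUD_API_TOKEN" \
  https://cloud.getdbt.com/api/v2/accounts/ | jq '.data[] | {id, name}'

# List the jobs in that account (1234 is a placeholder) to find JOB_ID
# of the job that builds the production tables.
curl --fail \
  --header "Authorization: Token $DBT_CLOUD_API_TOKEN" \
  https://cloud.getdbt.com/api/v2/accounts/1234/jobs/ | jq '.data[] | {id, name}'
```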