A No-Nonsense GitHub Actions Pipeline for Laravel


Every Laravel project I start these days gets a GitHub Actions pipeline on day one. Not because I'm disciplined (I'm really not) but because I got tired of finding out things were broken three PRs after the fact. Setting up CI early is one of those "30 minutes now saves you 3 hours later" investments.

I've iterated on my pipeline a lot over the past couple years, and I've landed on something I'm pretty happy with. Here's the walkthrough.

The Full Workflow

Here's the whole thing. I'll break it down section by section after:

name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: '8.4'
          tools: composer:v2

      - name: Cache Composer
        uses: actions/cache@v4
        with:
          path: vendor
          key: composer-${{ hashFiles('composer.lock') }}
          restore-keys: composer-

      - name: Install dependencies
        run: composer install --no-interaction --prefer-dist

      - name: Run Pint
        run: vendor/bin/pint --test

  test:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:8.0
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: testing
        ports:
          - 3306:3306
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3

    steps:
      - uses: actions/checkout@v4

      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: '8.4'
          extensions: pdo_mysql, mbstring, xml, bcmath
          tools: composer:v2
          coverage: xdebug

      - name: Cache Composer
        uses: actions/cache@v4
        with:
          path: vendor
          key: composer-${{ hashFiles('composer.lock') }}
          restore-keys: composer-

      - name: Cache npm
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}
          restore-keys: npm-

      - name: Install PHP dependencies
        run: composer install --no-interaction --prefer-dist

      - name: Install Node dependencies
        run: npm ci

      - name: Build assets
        run: npm run build

      - name: Prepare environment
        run: |
          cp .env.example .env
          php artisan key:generate

      - name: Run migrations
        run: php artisan migrate --force
        env:
          DB_CONNECTION: mysql
          DB_HOST: 127.0.0.1
          DB_PORT: 3306
          DB_DATABASE: testing
          DB_USERNAME: root
          DB_PASSWORD: password

      - name: Run tests
        run: php artisan test --compact
        env:
          DB_CONNECTION: mysql
          DB_HOST: 127.0.0.1
          DB_PORT: 3306
          DB_DATABASE: testing
          DB_USERNAME: root
          DB_PASSWORD: password

  deploy:
    needs: [lint, test]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    steps:
      - name: Deploy to production
        run: |
          echo "Add your deployment command here"
          # curl -X POST ${{ secrets.DEPLOY_WEBHOOK_URL }}

That's a lot of YAML. Let me break down the decisions.

Two Separate Jobs: Lint and Test

I split linting and testing into separate jobs on purpose. They run in parallel, which is faster, but more importantly it gives you clear signal about what failed. "Tests failed" and "code style failed" are very different problems with very different fixes. Lumping them together means you have to read the logs to figure out which one broke.

The lint job is simple: install Composer dependencies and run Pint with --test, which reports style violations and exits nonzero without modifying any files. If someone pushes code that doesn't match the project's style, it fails fast. No need to spin up MySQL or build frontend assets for a style check.
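To make the exit-code contract concrete, here's a stand-in sketch. Pint itself isn't installed here, so `run_pint` is a fake that treats a made-up "badStyle" marker as a violation; real Pint fixes by default and only checks when you pass --test.

```shell
#!/bin/sh
# Stand-in sketch of the exit-code contract the lint job relies on.
# run_pint fakes Pint: "badStyle" marks a violation. With --test it
# only reports (nonzero exit, file untouched); otherwise it rewrites
# the file, like real Pint's default fix mode.
run_pint() {
  mode="$1"; file="$2"
  if grep -q 'badStyle' "$file"; then
    if [ "$mode" = "--test" ]; then
      return 1                                 # report only; leave the file alone
    fi
    sed -i 's/badStyle/goodStyle/' "$file"     # fix mode rewrites in place
  fi
  return 0
}

echo 'badStyle' > demo.php
run_pint --test demo.php || echo "CI would fail here (file untouched)"
run_pint fix demo.php && echo "after fixing locally: $(cat demo.php)"
```

CI wants the --test behavior: a failing exit code that blocks the merge, with no surprise commits rewriting your files.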

The MySQL Service Container

This is the part that tripped me up the most when I first set it up. GitHub Actions lets you spin up Docker containers as "services" that run alongside your job. The MySQL service starts a real MySQL instance, and you connect to it via 127.0.0.1:3306.

The options block with health checks is critical. Without it, your job might try to run migrations before MySQL is actually ready to accept connections. The health check polls mysqladmin ping every 10 seconds and retries 3 times before giving up. Learned this one after several flaky pipeline runs.
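The same wait-and-retry logic, written out as a plain shell loop. The `check` function is a stand-in for `mysqladmin ping`, and touching a marker file simulates the database coming up:

```shell
#!/bin/sh
# What the health-check options amount to: poll, retry, then give up.
# `check` stands in for `mysqladmin ping`; the marker file simulates
# MySQL becoming ready to accept connections.
check() { [ -f ./db_ready ]; }      # stand-in for: mysqladmin ping

retries=3
interval=1                          # the workflow uses 10s
touch ./db_ready                    # simulate the DB coming up immediately

i=0
until check; do
  i=$((i + 1))
  if [ "$i" -ge "$retries" ]; then
    echo "database never became healthy" >&2
    exit 1
  fi
  sleep "$interval"
done
echo "database is healthy"
rm -f ./db_ready
```

GitHub Actions runs this loop for you before your steps start, which is exactly why the job no longer races the database.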

Some people use SQLite for CI testing to avoid the MySQL complexity. I get it, but I've been burned by tests passing on SQLite and failing on MySQL because of differences in how they handle things like JSON columns, strict mode, and foreign key constraints. I'd rather test against what I'm running in production.

Caching Matters More Than You Think

Without caching, every pipeline run does a fresh composer install and npm ci. That's easily 60-90 seconds of just downloading packages. With the cache steps, subsequent runs skip most of that. The cache key is based on the lockfile hash, so it automatically busts when dependencies change.

Worth noting: for Composer I'm caching the vendor directory directly, not the cache directory. Some guides cache ~/.composer/cache instead, which still requires running the full install step; caching vendor means the install becomes a near-instant no-op when nothing's changed. npm is different: npm ci deletes node_modules before installing, so caching node_modules buys you nothing. Cache npm's download cache (~/.npm) instead.

The restore-keys fallback (composer- without the hash) means you'll get a partial cache hit even if the lockfile changed; Composer only has to install the difference. It's a small optimization, but it adds up.
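The key mechanics are easy to see in plain shell: hashFiles is essentially a checksum of the lockfile, so the key (and therefore the cache entry) changes exactly when dependencies change. A rough model (the 12-character truncation is just for readability; the real hash is longer):

```shell
#!/bin/sh
# Rough model of how the cache key behaves: hash the lockfile, prefix it.
# A changed lockfile -> a changed key -> a cache miss -> fall back to the
# "composer-" restore-key and start from the most recent partial match.
printf '{"packages": []}\n' > composer.lock            # stand-in lockfile

key_for() { printf 'composer-%s' "$(sha256sum "$1" | cut -c1-12)"; }

before=$(key_for composer.lock)
printf '{"packages": ["new/dep"]}\n' > composer.lock   # dependency change
after=$(key_for composer.lock)

echo "key before: $before"
echo "key after:  $after"
[ "$before" != "$after" ] && echo "lockfile change busts the cache"
```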

The Environment Setup

- name: Prepare environment
  run: |
    cp .env.example .env
    php artisan key:generate

Keep your .env.example file up to date. I can't tell you how many times I've seen pipelines fail because someone added a new required environment variable and didn't update the example file. Your CI pipeline is actually a great canary for this: if .env.example is missing something critical, the tests will fail.
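If you want to catch that drift before a test fails, a quick diff of the key names does it. This is a hypothetical pre-commit-style check, not part of the workflow above, and the file contents are made up for the demo:

```shell
#!/bin/sh
# Hypothetical canary: list keys present in .env but missing from
# .env.example. File contents here are fabricated for illustration.
printf 'APP_KEY=\nDB_HOST=\n' > .env.example
printf 'APP_KEY=\nDB_HOST=\nNEW_FEATURE_FLAG=1\n' > .env

cut -d= -f1 .env.example | sort > example_keys
cut -d= -f1 .env | sort > env_keys

missing=$(comm -13 example_keys env_keys)    # keys only in .env
if [ -n "$missing" ]; then
  echo "Missing from .env.example: $missing"
fi
```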

The database credentials are passed as environment variables directly in the test and migration steps. Real environment variables take precedence over anything in .env, so you don't need a separate .env.testing file in CI. Fewer files to manage, fewer things to go wrong.
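The mechanism is plain process-environment precedence: a step's env: block sets real environment variables, and Laravel's loader won't overwrite a variable that's already set. In shell terms:

```shell
#!/bin/sh
# A step-level env: block sets real process environment variables.
# Anything reading the environment sees those values, regardless of
# what a .env file on disk says.
printf 'DB_HOST=db.internal\n' > .env             # what the file claims

# Equivalent of a workflow step with `env: { DB_HOST: 127.0.0.1 }`:
effective=$(DB_HOST=127.0.0.1 sh -c 'echo "$DB_HOST"')

echo "file says:    $(grep DB_HOST .env | cut -d= -f2)"
echo "process sees: $effective"
```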

The Deploy Job

The deploy job only runs on pushes to main (not on PRs) and only after both lint and test pass. The needs: [lint, test] line ensures this. I've left the actual deployment command as a placeholder because everyone's setup is different.

For most of my projects, I use a webhook-based deployment. A POST request to a deployment service (Forge, Envoyer, or a custom script) triggers the actual deploy. The webhook URL goes in GitHub Secrets so it doesn't leak.

If you're doing something more involved, like building a Docker image, pushing to ECR, or triggering CodeDeploy, the deploy job is where that goes. The important thing is that it only runs when tests pass. No green tests, no deploy.

Things I've Learned the Hard Way

Pin your PHP version. Don't use 8.x or latest. When PHP 8.4 dropped, a bunch of deprecation warnings turned into errors and my pipelines started failing unexpectedly. Pin to 8.4 and update intentionally.

Use npm ci, not npm install. The ci command installs from the lockfile exactly, which is faster and more reproducible. npm install might update the lockfile, which isn't what you want in CI.

Don't skip the asset build step. Even if your tests don't exercise the frontend directly, some Laravel features (like Inertia SSR or Vite manifest lookups) will blow up if assets aren't built. Just build them; it takes 15 seconds.

Set up branch protection rules. Go to your repo settings, require status checks to pass before merging, and require the branch to be up to date. This way nobody merges a PR that breaks tests. It seems obvious, but I've seen teams with full CI pipelines that still merge broken code because the checks weren't required.

Is It Worth It?

My typical pipeline runs in about 2-3 minutes. The lint job finishes in under a minute, and the test job takes 2-3 minutes depending on the test suite size. That's fast enough that I don't context-switch while waiting, which is the whole point. If your pipeline takes 15 minutes, people stop waiting for it and merge anyway.

Invest the 30 minutes to set this up. You won't regret it.