Interesting Tidbits About Queues and Jobs

I wrote a post a while back about how queue workers actually work and touched on serialization briefly. Since then I've seen a couple of posts on X and have been poking around the internals more, and I keep finding things that aren't obvious from the docs alone. This post is a collection of those findings: the job lifecycle, what SerializesModels is actually doing, how middleware executes, and how rate limiters work under the hood.

Job lifecycle

Consider the following job. It's intentionally weird because I'm trying to show something specific about how state works across retries.

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class TestJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;

    public int $test = 1;

    public function __construct()
    {
        ray('construct triggered');
        ray($this->test);
        $this->test = 2;
        ray($this->test);
    }

    public function handle(): void
    {
        ray('handle triggered');
        ray($this->test);
        $this->test = 3;
        ray($this->test);

        $this->release(5);
    }

    public function middleware(): array
    {
        ray('middleware triggered');
        ray($this->test);
        $this->test = 4;
        ray($this->test);

        return [new KnownApiExceptionMiddleware('Keap')];
    }
}

When this job is dispatched, here is the output in Ray:

construct triggered
1
2

middleware triggered
2
4

handle triggered
4
3

middleware triggered
2
4

handle triggered
4
3

middleware triggered
2
4

handle triggered
4
3

The constructor runs once, followed by the middleware, and then the handle method. That part is expected. What's interesting is what happens on retry.

After the job is released and picked up again, two things stand out:

  1. The constructor is not called again.
  2. The value of $test is not 1 or 3 or 4. It's 2, the value from the end of the original __construct() call.

This tells us exactly how retries work. When a job is dispatched, it gets serialized in the state it was in after the constructor finished. That serialized payload is what sits in the queue. When a worker picks it up for a retry, it doesn't re-construct the job. It deserializes it back to that post-constructor state and runs middleware and handle again.

Anything you do in the constructor happens once and only once. Any state changes made during handle() or middleware() don't persist between attempts. The job always rehydrates from the same serialized snapshot.
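
One practical consequence: if you want behavior to vary between attempts, don't lean on properties; ask the job which attempt it's on. The InteractsWithQueue trait exposes attempts(), which reads the count from the queue payload rather than from object state. A minimal sketch:

public function handle(): void
{
    // attempts() comes from InteractsWithQueue and is read from the
    // queue payload, so it survives retries even though property
    // changes made in handle() don't.
    if ($this->attempts() > 1) {
        // e.g. skip an expensive warm-up step on retries
    }

    // Anything that truly must persist across attempts belongs
    // outside the job: cache, database, etc.
}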

Model serialization

Now that we know the job gets serialized after construction, what does that serialization actually look like? Here's a simpler job:

class TestJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;

    public function __construct(public \App\Models\Report $report) {}

    public function handle(): void
    {
        ray($this->job->getRawBody());
        $this->release(5);
    }
}

If you enable query logging, you'll see that every time a worker picks up this job, it runs a query:

select * from `reports` where `reports`.`id` = 1 limit 1
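
If you don't already have query logging wired up, DB::listen() in a service provider is one quick way to surface it. A minimal sketch:

use Illuminate\Database\Events\QueryExecuted;
use Illuminate\Support\Facades\DB;

// In AppServiceProvider::boot()
DB::listen(function (QueryExecuted $query) {
    logger()->debug($query->sql, $query->bindings);
});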

That query runs every single time, not just on the first attempt. This is SerializesModels at work. The raw job payload (from getRawBody()) looks like this:

{
  "uuid": "d2362a64-d6d4-4ae6-9731-07d5c460d142",
  "displayName": "App\\Jobs\\TestJob",
  "job": "Illuminate\\Queue\\CallQueuedHandler@call",
  "maxTries": 3,
  "maxExceptions": null,
  "failOnTimeout": false,
  "backoff": null,
  "timeout": null,
  "retryUntil": null,
  "data": {
    "commandName": "App\\Jobs\\TestJob",
    "command": "O:16:\"App\\Jobs\\TestJob\":1:{s:6:\"report\";O:45:\"Illuminate\\Contracts\\Database\\ModelIdentifier\":5:{s:5:\"class\";s:17:\"App\\Models\\Report\";s:2:\"id\";i:8169;s:9:\"relations\";a:0:{}s:10:\"connection\";s:5:\"mysql\";s:15:\"collectionClass\";N;}}"
  },
  "id": "d2362a64-d6d4-4ae6-9731-07d5c460d142",
  "attempts": 0,
  "type": "job",
  "tags": [
    "App\\Models\\Report:8169"
  ],
  "silenced": false,
  "pushedAt": "1733244636.7464"
}

The important part is data.command. That's a PHP serialized string. You can run unserialize() on it and get back the job instance in its post-constructor state. Look at how the model is stored inside it:

{s:6:"report";O:45:"Illuminate\Contracts\Database\ModelIdentifier":5:{s:5:"class";s:17:"App\Models\Report";s:2:"id";i:8169;s:9:"relations";a:0:{}s:10:"connection";s:5:"mysql";s:15:"collectionClass";N;}}

The report property isn't an App\Models\Report. It's an Illuminate\Contracts\Database\ModelIdentifier with just enough info to fetch the real thing later:

Property         Value              Purpose
class            App\Models\Report  Which model to instantiate
id               8169               The primary key to query
relations        []                 Any eager-loaded relations to restore
connection       mysql              Which database connection to use
collectionClass  null               Custom collection class, if any

When the worker deserializes the job, the SerializesModels trait sees these ModelIdentifier objects and swaps each one for a freshly queried model. That's why you see that select * from reports on every attempt: the model is always fetched fresh from the database.
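
Conceptually, the trait does something like this on the way into and out of the queue. This is a heavily simplified sketch, not the real implementation; the real versions live in the SerializesAndRestoresModelIdentifiers trait and also handle relations, collections, and a few edge cases:

use Illuminate\Contracts\Database\ModelIdentifier;
use Illuminate\Database\Eloquent\Model;

// Serializing: swap the model for a lightweight identifier.
protected function getSerializedPropertyValue($value)
{
    if ($value instanceof Model) {
        return new ModelIdentifier(
            get_class($value),
            $value->getKey(),
            [], // loaded relations
            $value->getConnectionName()
        );
    }

    return $value;
}

// Restoring: turn the identifier back into a freshly queried model.
protected function getRestoredPropertyValue($value)
{
    if ($value instanceof ModelIdentifier) {
        return ($value->class)::on($value->connection)
            ->findOrFail($value->id); // the query you see in the log
    }

    return $value;
}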

Queue bloat without SerializesModels

If you remove the SerializesModels trait, PHP's default serialization kicks in and serializes the entire model as-is. Every attribute, every cast definition, the original attributes array, all of it. The data.command value balloons from a couple hundred bytes to something like this:

{
    "uuid": "c0852f21-f61a-4e10-b56c-a01301cb295f",
    "displayName": "App\\Jobs\\TestJob",
    "job": "Illuminate\\Queue\\CallQueuedHandler@call",
    "maxTries": 3,
    "maxExceptions": null,
    "failOnTimeout": false,
    "backoff": null,
    "timeout": null,
    "retryUntil": null,
    "data": {
        "commandName": "App\\Jobs\\TestJob",
        "command": "O:16:\"App\\Jobs\\TestJob\":12:{s:5:\"tries\";i:3;s:6:\"report\";O:17:\"App\\Models\\Report\":30:{s:13:\"\u0000*\u0000connection\";s:5:\"mysql\";s:8:\"\u0000*\u0000table\";s:7:\"reports\";s:13:\"\u0000*\u0000primaryKey\";s:2:\"id\";s:10:\"\u0000*\u0000keyType\";s:3:\"int\";s:12:\"incrementing\";b:1;s:7:\"\u0000*\u0000with\";a:0:{}s:12:\"\u0000*\u0000withCount\";a:0:{}s:19:\"preventsLazyLoading\";b:0;s:10:\"\u0000*\u0000perPage\";i:15;s:6:\"exists\";b:1;s:18:\"wasRecentlyCreated\";b:0;s:28:\"\u0000*\u0000escapeWhenCastingToString\";b:0;s:13:\"\u0000*\u0000attributes\";a:42:{s:2:\"id\";i:8169;s:10:\"company_id\";i:4;s:14:\"integration_id\";i:4;s:15:\"report_batch_id\";i:28;s:10:\"contact_id\";i:1124;s:18:\"form_submission_id\";N;s:5:\"email\";s:22:\"jane@example.com\";s:16:\"email_address_id\";i:304;s:9:\"domain_id\";i:57;s:15:\"corrected_email\";N;s:24:\"used_external_validation\";i:0;s:14:\"is_deliverable\";i:1;s:18:\"is_valid_structure\";i:1;s:7:\"has_dns\";i:1;s:6:\"has_mx\";i:1;s:11:\"mx_is_valid\";i:1;s:8:\"is_found\";i:1;s:16:\"mailbox_not_full\";i:1;s:8:\"has_typo\";i:0;s:9:\"is_vulgar\";i:0;s:16:\"has_vulgar_field\";i:0;s:13:\"is_suspicious\";i:0;s:20:\"has_suspicious_field\";i:0;s:7:\"is_role\";i:0;s:6:\"is_tag\";i:0;s:13:\"is_disposable\";i:0;s:7:\"is_free\";i:1;s:11:\"is_catchall\";i:0;s:8:\"opted_in\";i:1;s:21:\"days_ago_last_engaged\";i:18;s:12:\"last_engaged\";s:19:\"2024-10-17 18:57:58\";s:9:\"last_sent\";N;s:10:\"last_click\";N;s:9:\"last_open\";N;s:14:\"recommendation\";s:4:\"safe\";s:7:\"reasons\";s:93:\"{\"list\": [\"Contact engaged recently.\", \"Email is deliverable.\", \"Contact is safe to email.\"]}\";s:10:\"run_reason\";s:10:\"user_batch\";s:6:\"status\";s:9:\"Completed\";s:11:\"reviewed_at\";N;s:12:\"completed_at\";s:19:\"2024-11-04 23:27:44\";s:10:\"created_at\";s:19:\"2024-11-04 23:27:39\";s:10:\"updated_at\";s:19:\"2024-11-04 23:27:44\";}s:11:\"\u0000*\u0000original\";a:42:{s:2:\"id\";i:8169;s:10:\"company_id\";i:4;s:14:\"integration_id\";i:4;s:15:\"report_batch_id\";i:28;s:10:\"contact_id\";i:1124;s:18:\"form_submission_id\";N;s:5:\"email\";s:22:\"jane@example.com\";s:16:\"email_address_id\";i:304;s:9:\"domain_id\";i:57;s:15:\"corrected_email\";N;s:24:\"used_external_validation\";i:0;s:14:\"is_deliverable\";i:1;s:18:\"is_valid_structure\";i:1;s:7:\"has_dns\";i:1;s:6:\"has_mx\";i:1;s:11:\"mx_is_valid\";i:1;s:8:\"is_found\";i:1;s:16:\"mailbox_not_full\";i:1;s:8:\"has_typo\";i:0;s:9:\"is_vulgar\";i:0;s:16:\"has_vulgar_field\";i:0;s:13:\"is_suspicious\";i:0;s:20:\"has_suspicious_field\";i:0;s:7:\"is_role\";i:0;s:6:\"is_tag\";i:0;s:13:\"is_disposable\";i:0;s:7:\"is_free\";i:1;s:11:\"is_catchall\";i:0;s:8:\"opted_in\";i:1;s:21:\"days_ago_last_engaged\";i:18;s:12:\"last_engaged\";s:19:\"2024-10-17 18:57:58\";s:9:\"last_sent\";N;s:10:\"last_click\";N;s:9:\"last_open\";N;s:14:\"recommendation\";s:4:\"safe\";s:7:\"reasons\";s:93:\"{\"list\": [\"Contact engaged recently.\", \"Email is deliverable.\", \"Contact is safe to email.\"]}\";s:10:\"run_reason\";s:10:\"user_batch\";s:6:\"status\";s:9:\"Completed\";s:11:\"reviewed_at\";N;s:12:\"completed_at\";s:19:\"2024-11-04 23:27:44\";s:10:\"created_at\";s:19:\"2024-11-04 23:27:39\";s:10:\"updated_at\";s:19:\"2024-11-04 
23:27:44\";}s:10:\"\u0000*\u0000changes\";a:0:{}s:8:\"\u0000*\u0000casts\";a:30:{s:7:\"reasons\";s:23:\"App\\Casts\\ReportReasons\";s:10:\"run_reason\";s:29:\"App\\Models\\Enums\\ReportReason\";s:24:\"used_external_validation\";s:7:\"boolean\";s:14:\"is_deliverable\";s:7:\"boolean\";s:18:\"is_valid_structure\";s:7:\"boolean\";s:7:\"has_dns\";s:7:\"boolean\";s:6:\"has_mx\";s:7:\"boolean\";s:11:\"mx_is_valid\";s:7:\"boolean\";s:8:\"is_found\";s:7:\"boolean\";s:16:\"mailbox_not_full\";s:7:\"boolean\";s:8:\"has_typo\";s:7:\"boolean\";s:9:\"is_vulgar\";s:7:\"boolean\";s:16:\"has_vulgar_field\";s:7:\"boolean\";s:13:\"is_suspicious\";s:7:\"boolean\";s:20:\"has_suspicious_field\";s:7:\"boolean\";s:7:\"is_role\";s:7:\"boolean\";s:6:\"is_tag\";s:7:\"boolean\";s:13:\"is_disposable\";s:7:\"boolean\";s:7:\"is_free\";s:7:\"boolean\";s:11:\"is_catchall\";s:7:\"boolean\";s:7:\"is_mock\";s:7:\"boolean\";s:8:\"opted_in\";s:7:\"boolean\";s:9:\"last_sent\";s:8:\"datetime\";s:12:\"last_engaged\";s:8:\"datetime\";s:9:\"last_open\";s:8:\"datetime\";s:10:\"last_click\";s:8:\"datetime\";s:12:\"completed_at\";s:8:\"datetime\";s:11:\"reviewed_at\";s:8:\"datetime\";s:6:\"status\";s:29:\"App\\Models\\Enums\\ReportStatus\";s:14:\"recommendation\";s:37:\"App\\Models\\Enums\\ReportRecommendation\";}s:17:\"\u0000*\u0000classCastCache\";a:0:{}s:21:\"\u0000*\u0000attributeCastCache\";a:0:{}s:13:\"\u0000*\u0000dateFormat\";N;s:10:\"\u0000*\u0000appends\";a:0:{}s:19:\"\u0000*\u0000dispatchesEvents\";a:0:{}s:14:\"\u0000*\u0000observables\";a:0:{}s:12:\"\u0000*\u0000relations\";a:0:{}s:10:\"\u0000*\u0000touches\";a:0:{}s:10:\"timestamps\";b:1;s:13:\"usesUniqueIds\";b:0;s:9:\"\u0000*\u0000hidden\";a:0:{}s:10:\"\u0000*\u0000visible\";a:0:{}s:11:\"\u0000*\u0000fillable\";a:0:{}s:10:\"\u0000*\u0000guarded\";a:0:{}}s:3:\"job\";N;s:10:\"connection\";N;s:5:\"queue\";N;s:5:\"delay\";N;s:11:\"afterCommit\";N;s:10:\"middleware\";a:0:{}s:7:\"chained\";a:0:{}s:15:\"chainConnection\";N;s:10:\"chainQueue\";N;s:19:\"chainCatchCallbacks\";N;}"
    },
    "id": "c0852f21-f61a-4e10-b56c-a01301cb295f",
    "attempts": 0,
    "type": "job",
    "tags": [
        "App\\Models\\Report:8169"
    ],
    "silenced": false,
    "pushedAt": "1733245641.7179"
}

That command value is now over 5kb. It contains every single attribute on the model twice (once in attributes, once in original), every cast definition, every config property on the base Model class. And this is a single model with 42 columns. Imagine a job that takes 3 or 4 models, some with eager-loaded relations. You're looking at 20-30kb per job sitting in your queue.

This matters for a few reasons. If you're using Redis, every job payload eats memory. If you're using the database driver, that's a longtext payload column filling up per job. If you have thousands of jobs queued, the bloat adds up fast. SerializesModels reduces that entire model down to about 200 bytes by only storing what's needed to query it back.
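
If you want to see the difference in your own app, you can measure it directly. Calling serialize() on the job goes through the same __serialize() path the queue uses, so something like this in Tinker (with your own model and job) gives you the size of what would land in data.command:

$report = \App\Models\Report::find(1);

// With SerializesModels: a couple hundred bytes.
// Without it: the full model graph, kilobytes per model.
strlen(serialize(new \App\Jobs\TestJob($report)));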

Thinking about optimization

Knowing that models get re-queried on every attempt opens up some optimization opportunities. If your job only needs the report ID to update a related record, you don't need the model at all:

// Queries the report on every attempt, even though we only need the ID
class UpdateReportStatus implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public Report $report) {}

    public function handle(): void
    {
        ReportLog::create([
            'report_id' => $this->report->id,
            'status' => 'processed',
        ]);
    }
}

// No unnecessary query
class UpdateReportStatus implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(public int $reportId) {}

    public function handle(): void
    {
        ReportLog::create([
            'report_id' => $this->reportId,
            'status' => 'processed',
        ]);
    }
}

On a simple job this barely matters. But on a job that retries 3 times and runs thousands of times a day, those unnecessary queries add up. I've worked on systems where switching from model parameters to IDs cut the query count on the queue worker by 40%. As jobs get more complex, it is worth thinking about whether you actually need the model in the job, or just a few scalar values from it.
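
There's also a middle ground worth knowing about. Recent Laravel versions ship a WithoutRelations attribute (in Illuminate\Queue\Attributes, if I have the namespace right) that keeps the ModelIdentifier behavior but drops eager-loaded relations from the payload, so they aren't restored on the other side:

use App\Models\Report;
use Illuminate\Queue\Attributes\WithoutRelations;

// The model still serializes as a ModelIdentifier, but with an
// empty relations array regardless of what was loaded at dispatch.
public function __construct(
    #[WithoutRelations]
    public Report $report,
) {}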

Job middleware execution order

Job middleware in Laravel follows the same onion pattern as HTTP middleware. Each middleware wraps around the next one, and after the innermost layer (your job's handle() method) returns, the stack unwinds in reverse order.

Here's a job with two anonymous middleware classes to make the execution order obvious:

class TestJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function middleware(): array
    {
        return [
            new class {
                public function handle($job, $next)
                {
                    ray('First middleware before');
                    $next($job);
                    ray('First middleware after');
                }
            },
            new class {
                public function handle($job, $next)
                {
                    ray('Second middleware before');
                    $next($job);
                    ray('Second middleware after');
                }
            },
        ];
    }

    public function handle(): void
    {
        ray('Job handle');
    }
}

Output:

First middleware before
Second middleware before
Job handle
Second middleware after
First middleware after

First in, last out. The $next($job) call is what passes control to the next middleware (or the job itself if there's nothing left in the stack). Everything after $next($job) runs on the way back out.
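
Under the hood this is Laravel's Pipeline at work (CallQueuedHandler pushes the job through Illuminate\Pipeline\Pipeline). Here's a simplified sketch of how the onion gets built, not the actual framework code:

$middleware = $job->middleware();

// The innermost layer is the job itself.
$core = fn ($job) => $job->handle();

// Fold the middleware list, inside out, into nested closures.
$pipeline = array_reduce(
    array_reverse($middleware),
    fn ($next, $mw) => fn ($job) => $mw->handle($job, $next),
    $core
);

// Runs: first before, second before, handle, second after, first after.
$pipeline($job);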

Short-circuiting

If a middleware doesn't call $next($job), the rest of the stack never executes. This is how you bail out early:

public function middleware(): array
{
    return [
        new class {
            public function handle($job, $next)
            {
                ray('First middleware before');
                if (someCheckFails()) {
                    $job->release(5);
                } else {
                    $next($job);
                }
                ray('First middleware after');
            }
        },
        new class {
            public function handle($job, $next)
            {
                ray('Second middleware before');
                $next($job);
                ray('Second middleware after');
            }
        },
    ];
}

When the check fails, the output is just:

First middleware before
First middleware after

The second middleware and the job's handle() method never run. The job gets released back to the queue to try again in 5 seconds.

Wrapping existing middleware

This is something I haven't seen documented anywhere. You can wrap a closure around an existing middleware to hook into its before/after execution. This is useful when you want to know whether a middleware like RateLimitedWithRedis actually passed or not.

public function middleware(): array
{
    return [
        new class {
            public function handle($job, $next)
            {
                $wrappedNext = function ($job) use ($next) {
                    ray('Rate limit check passed');
                    $next($job);
                };

                ray('Checking rate limit');

                $limiter = new RateLimitedWithRedis('my-limiter');
                $limiter->handle($job, $wrappedNext);

                ray('Rate limit middleware exited');
            }
        },
    ];
}

On a successful pass:

Checking rate limit
Rate limit check passed
Job handle
Rate limit middleware exited

On a failed pass (rate limited):

Checking rate limit
Rate limit middleware exited

The trick is replacing $next with your own closure that does something before calling the original $next. The RateLimitedWithRedis middleware doesn't know or care that you wrapped it. It just calls whatever closure it was given. If the rate limit check passes, your wrapper runs, then passes control down the stack. If it doesn't pass, your wrapper never gets called, and the rate limiter handles releasing the job back to the queue.

I've used this pattern to log how often a specific rate limiter triggers, and to add custom metrics around third-party API middleware. It's a clean way to observe middleware behavior without modifying the middleware class itself.

Rate limiter internals

Laravel's rate limiters for jobs are stored in your cache backend (usually Redis). If you've ever had a rate limiter get stuck or corrupted, knowing how to find and reset it can save you a lot of grief.

The cache key for a rate limiter is an MD5 hash of two things concatenated together:

$cacheKey = md5($rateLimiterName . $limiterKey);

The $rateLimiterName is whatever you named the limiter when you defined it (e.g. api-requests). The $limiterKey is the key on the Limit object your limiter closure returns, usually set with by(), which partitions limits by user, integration, or whatever makes sense for you. Something like HubSpot-298:attempts:200000:decay:86400.
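
Both pieces trace back to the limiter definition. As a sketch with hypothetical names, a definition like this in a service provider's boot() method produces a limiter named api-requests, and whatever you pass to by() becomes the $limiterKey in the hash above:

use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('api-requests', function (object $job) {
    // $job->integrationName is a hypothetical property; build the
    // key from whatever partitions your limits.
    return Limit::perDay(200000)->by($job->integrationName);
});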

To find the actual key in your cache, you need to resolve it through the RateLimiter:

$limiterName = 'api-requests';
$rateLimiter = app(\Illuminate\Cache\RateLimiter::class);
$limiter = $rateLimiter->limiter($limiterName);

// Call the limiter closure with a job instance to get the Limit objects
$limiterResponse = $limiter($job);

// If your closure returns a single Limit rather than an array,
// wrap it with Arr::wrap() before indexing into it.
$cacheKey = md5($limiterName . $limiterResponse[0]->key);

The $limiter is the closure you defined when registering the rate limiter. Calling it with a job returns an array of Limit objects, each with a key property. Hash the limiter name and key together, and you've got your cache key.

Once you have the key, you can look it up in Redis directly (using redis-cli, Tinker, or a GUI tool like TablePlus or Medis) and inspect the current counter. Keep in mind that the cache prefix (whatever config('cache.prefix') is set to) is prepended to the key as it's stored in Redis. If a rate limiter is stuck, deleting the cache entry resets it immediately. I've had to do this a handful of times when a third-party API returned a 429 that was incorrectly cached, or when a deployment reset the limiter configuration but the old counters were still hanging around.

If you're troubleshooting rate limiter issues, your error tracking tool's breadcrumbs are usually the fastest way to figure out which integration or user is affected. From there, you can reconstruct the limiter key, hash it, find the cache entry, and either inspect it or nuke it.
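
For the cache-backed RateLimited middleware, you can do the whole thing from Tinker. A sketch with placeholder values; note that RateLimitedWithRedis, I believe, writes through a Redis connection directly rather than the cache, so for that one you'd inspect with redis-cli instead:

use Illuminate\Cache\RateLimiter;

$limiterName = 'api-requests'; // placeholder
$limiterKey = 'HubSpot-298:attempts:200000:decay:86400'; // placeholder

$cacheKey = md5($limiterName.$limiterKey);

$rateLimiter = app(RateLimiter::class);

$rateLimiter->attempts($cacheKey); // current hit count
$rateLimiter->clear($cacheKey);    // forget the counter and its timer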