I've been using Laravel queues for years, and for most of that time I treated the worker like a black box. You dispatch a job, run queue:work, and things happen. It wasn't until a production incident, with jobs silently dying, memory climbing, and workers refusing to pick up new code, that I cracked open the source and figured out what's actually happening in there.
This post is what I wish I'd read before that debugging session.
The Worker Loop
When you run php artisan queue:work, Laravel boots the framework exactly once and then enters a loop. That's the thing most people miss. It doesn't boot fresh for every job; it boots once and keeps running. Here's a simplified version of what the core loop looks like:
```php
while (true) {
    // Pop the next job off the queue (driver-specific).
    $job = $this->getNextJob($connection, $queue);

    if ($job) {
        $this->runJob($job, $connection, $options);
    } else {
        // Queue is empty: sleep instead of busy-waiting.
        $this->sleep($options->sleep);
    }

    // Exit code 12 tells the process manager the memory limit was hit.
    if ($this->memoryExceeded($options->memory)) {
        $this->stop(12);
    }

    // Set by the SIGTERM handler or the queue:restart check.
    if ($this->shouldQuit) {
        $this->stop();
    }
}
```
That's it. Pop a job, run it, check memory, repeat. If there's no job available, the worker sleeps for a configurable number of seconds (default is 3) before trying again. This is why your CPU doesn't spike when the queue is empty. It's literally just sleeping in a loop.
The $this->shouldQuit flag is how graceful shutdowns work. When you send a SIGTERM signal, the worker finishes its current job and then exits on the next loop iteration. It doesn't kill the job mid-execution.
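That flag-based shutdown can be sketched in a few lines. This is a simplified stand-in for what Worker::listenForSignals() does, not Laravel's actual code; it assumes the pcntl and posix extensions are available, as they usually are in CLI PHP.

```php
<?php
// Sketch of graceful shutdown: the SIGTERM handler doesn't exit,
// it only flips a flag the loop checks between jobs.
pcntl_async_signals(true);

$shouldQuit = false;

pcntl_signal(SIGTERM, function () use (&$shouldQuit) {
    // No exit() here: the current job is allowed to finish.
    $shouldQuit = true;
});
```

The key design choice is that the handler mutates state instead of terminating, so "stop" only ever takes effect at a safe point between jobs.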
How Jobs Get Popped
The "pop" operation is where things get driver-specific. With the database driver, it's a SELECT ... FOR UPDATE query that atomically grabs the next available job and marks it as reserved. With Redis, recent Laravel versions run a Lua script that atomically moves the job from the waiting list to a reserved set in a single Redis call (very old versions used RPOPLPUSH for the same effect).
The atomicity matters. If two workers try to pop from the same queue at the same time, only one of them gets the job. There's no double-processing. I've seen people add their own locking on top of queue jobs "just to be safe," but you don't need to. The drivers handle this.
Once a job is popped, it gets a "reservation" with a timeout. If the job doesn't complete within that timeout, it gets released back to the queue. This is your retry_after config value, and getting it wrong is one of the most common queue mistakes I see.
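The reservation timeout lives in your queue connection config. As a reminder of where it sits, here's the relevant fragment of config/queue.php for a Redis connection (values here are illustrative, not recommendations):

```php
// config/queue.php (fragment)
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    // Seconds before a reserved-but-unfinished job is released back
    // to the queue. Must be comfortably longer than your longest job.
    'retry_after' => 90,
    'block_for' => null,
],
```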
The Job Lifecycle
Here's what actually happens when a job runs. Say you've got a simple job:
```php
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Support\Facades\Log;

class ProcessPodcastUpload implements ShouldQueue
{
    use Queueable;

    public function __construct(
        public Podcast $podcast,
    ) {}

    public function handle(TranscodingService $transcoder): void
    {
        $transcoder->process($this->podcast->audio_path);
        $this->podcast->update(['status' => 'processed']);
    }

    public function failed(\Throwable $exception): void
    {
        $this->podcast->update(['status' => 'failed']);
        Log::error("Podcast processing failed: {$exception->getMessage()}");
    }
}
```
When this job gets popped, the worker deserializes the payload, resolves the job class from the container (which means dependency injection works in handle()), and calls it. If the job throws an exception, the worker catches it, increments the attempt counter, and either releases the job back to the queue or moves it to the failed jobs table, depending on whether it's exceeded $tries.
That failed() method only gets called when the job has truly failed, meaning all retries are exhausted. It is not called on individual attempt failures. I've seen people put cleanup logic in a catch block inside handle() thinking it was the same thing. It's not.
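The release-or-fail decision can be boiled down to a tiny pure function. This is a hypothetical sketch of the logic, not Laravel's actual code, where $attempts counts the attempt that just ran, starting at 1:

```php
<?php
// Sketch of the worker's decision after a job throws an exception.
// 'release' puts the job back on the queue for another attempt;
// 'fail' moves it to failed_jobs and triggers failed().
function nextStepAfterException(int $attempts, int $maxTries): string
{
    if ($attempts >= $maxTries) {
        // Retries exhausted: this is the only path where failed() runs.
        return 'fail';
    }

    // Released attempts never touch failed(), which is exactly why
    // cleanup logic belongs there and not in a catch block.
    return 'release';
}
```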
Memory and Timeout Management
Remember how I said the worker boots once and stays running? That means if you deploy new code, the worker is still running your old code. It has no idea you changed anything. This is why php artisan queue:restart exists.
What queue:restart actually does is pretty clever. It doesn't send a signal to your workers, because it can't. It doesn't know where they are or what PIDs they have. Instead, it writes a timestamp to the cache. On every loop iteration, the worker checks: "Is the restart timestamp newer than when I started?" If yes, it exits gracefully.
This is also why you need a process manager like Supervisor. The worker is designed to exit. It exits on restart signals, memory limits, timeouts, and more. Something needs to bring it back. Here's a basic Supervisor config that I use on most projects:
```ini
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/app/storage/logs/worker.log
stopwaitsecs=3600
```
A few things worth calling out here. numprocs=4 runs four worker processes, which means four jobs can run concurrently. stopwaitsecs=3600 tells Supervisor to wait up to an hour for a worker to finish its current job before force-killing it, so match this to your longest-running job. And --max-time=3600 tells the worker to exit after an hour regardless, which helps with memory leaks in long-running processes.
The Gotchas That'll Bite You
Stale cache and config. Since the worker boots once, if you're caching config or routes, the worker uses whatever was cached when it started. Always run queue:restart after deploying. Always.
The retry_after trap. If your job takes 90 seconds but retry_after is set to 60 seconds, the queue will release the job back while it's still running. Now you've got two instances of the same job running simultaneously. Set retry_after to be comfortably longer than your longest job.
Database connections going stale. A worker that's been running for hours might find its database connection has been closed by the server. Laravel's database layer tries to detect lost connections and reconnect, but I've still hit edge cases. The --max-time flag is your best friend here: just let the worker die and restart periodically.
Serialization surprises. When you dispatch a job, Laravel serializes the Eloquent model down to its ID and class name. When the job runs, it fetches the model fresh from the database. This means the model in your job might have different data than when you dispatched it. If a user updates their profile and you dispatched a job with the old model, the job gets the new data. Sometimes that's fine. Sometimes it's absolutely not.
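The surprise is easier to see with a toy illustration. This is not the real SerializesModels trait, just plain PHP mimicking its behavior: only the key survives serialization, and the record is re-read when the job runs.

```php
<?php
// Stand-in "database" table keyed by primary key.
$database = ['users' => [7 => ['name' => 'Old Name']]];

// Dispatch time: the queued payload keeps only the table and the key,
// not the attributes the model had when you dispatched.
$payload = ['table' => 'users', 'id' => 7];

// Meanwhile, the row changes before the worker picks up the job.
$database['users'][7]['name'] = 'New Name';

// Run time: the model is re-fetched fresh, so the job sees the update.
$model = $database[$payload['table']][$payload['id']];
```

If the job genuinely needs the dispatch-time values, pass them as plain scalars in the constructor instead of relying on the model.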
The Mental Model
Understanding the worker loop changed how I think about queues entirely. They're not magic. They're a while loop with a database query and some careful signal handling. Once you internalize that, debugging queue issues gets a lot less mysterious. You know why the worker didn't pick up your new code. You know why a job ran twice. You know why memory climbed until the process got killed.
That's really the whole point of poking around in framework internals. Not to memorize implementation details, but to build a mental model that makes the weird stuff make sense.