Something that bugs me a little: most Laravel developers I talk to use Redis exclusively for caching. Maybe session storage if they're feeling adventurous. And look, Redis is excellent at caching. But treating it as just a cache is like buying a Swiss Army knife and only using the bottle opener.
Redis is an in-memory data structure server. The emphasis is on "data structure." It gives you strings, lists, sets, sorted sets, hashes, streams, and more, all with atomic operations. Laravel wraps several of these into really nice APIs that most people don't even know exist.
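To make that concrete, here's a minimal leaderboard built on a sorted set via the Redis facade (a sketch; the lowercase method names pass through to the phpredis client, and `leaderboard` is a made-up key):

```php
use Illuminate\Support\Facades\Redis;

// Record a score; ZADD keeps members ordered by score automatically.
Redis::zadd('leaderboard', 1500, 'user:42');

// Atomically bump a score, no read-modify-write cycle needed.
Redis::zincrby('leaderboard', 25, 'user:42');

// Fetch the top 10 with their scores, highest first.
$top = Redis::zrevrange('leaderboard', 0, 9, true);
```

Every one of those operations is atomic on the server, which is exactly what makes the structures more than "a cache with extra steps."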
Here are three patterns I reach for constantly.
Atomic Locks
Race conditions are one of those problems that don't show up in development and then wreck you in production. Say you've got an endpoint that generates an invoice PDF. A user double-clicks the button, two requests come in simultaneously, and now you've got two invoices for the same order.
You could add a database flag, but then you need a transaction, and if the PDF generation takes 30 seconds, you're holding a lock on that row for way too long. Redis atomic locks are perfect for this.
use Illuminate\Support\Facades\Cache;

public function generateInvoice(Order $order): Response
{
    $lock = Cache::lock("invoice-generation:{$order->id}", 120);

    if (! $lock->get()) {
        return response()->json([
            'message' => 'Invoice is already being generated.',
        ], 409);
    }

    try {
        $pdf = $this->pdfService->generateInvoice($order);

        $order->update([
            'invoice_path' => $pdf->store('invoices'),
            'invoiced_at' => now(),
        ]);

        return response()->download($pdf->path());
    } finally {
        $lock->release();
    }
}
The Cache::lock() call creates a lock with a 120-second expiry. The expiry is your safety net: if the process dies mid-execution (server crash, deployment, OOM kill), the lock automatically releases after 2 minutes instead of being held forever. I've seen people forget the expiry and end up with permanent locks that required manual Redis intervention to clear. Don't be that person.
The get() call is atomic. If two concurrent requests hit this at the exact same moment, only one of them gets true. The other gets false immediately. No race condition, no polling, no waiting.
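If you don't need the custom 409 response, get() also accepts a closure that acquires the lock, runs your code, and releases automatically, try/finally included. A condensed sketch of the same invoice example:

```php
// Returns the closure's result on success, or false if the lock
// was already held. Release happens automatically either way.
$pdf = Cache::lock("invoice-generation:{$order->id}", 120)
    ->get(fn () => $this->pdfService->generateInvoice($order));
```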
For queue jobs where you want to block and wait for the lock instead of failing immediately, there's a blocking variant:
$lock = Cache::lock("import:{$file->id}", 300);

$lock->block(30, function () use ($file) {
    // Waits up to 30 seconds to acquire the lock, then runs the closure.
    // If the lock can't be acquired in time, a LockTimeoutException is thrown.
    $this->processImport($file);
});
I use this pattern for any operation that's expensive and non-idempotent. Report generation, file imports, webhook processing... anywhere a duplicate execution would cause real problems.
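One more trick worth knowing: a lock acquired in one process can be released in a different one by handing over its owner token. Laravel exposes this as restoreLock(); the controller-to-job wiring below is a sketch:

```php
// In the controller: acquire the lock, pass its owner token to the job.
$lock = Cache::lock("invoice-generation:{$order->id}", 120);

if ($lock->get()) {
    GenerateInvoice::dispatch($order, $lock->owner());
}

// In the job's handle(): rebuild the lock from the token and release it.
Cache::restoreLock("invoice-generation:{$this->order->id}", $this->owner)
    ->release();
```

This is handy when the expensive work happens on the queue but the lock needs to be taken at request time to reject duplicates immediately.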
Rate Limiting with Redis::throttle
Laravel's rate limiting middleware is great for HTTP requests, but what about rate limiting other things? API calls to third-party services, email sending, webhook dispatches? Redis::throttle() gives you the same rate-limiting capability anywhere in your code.
I ran into this on a project that synced data with a third-party API. Their rate limit was 100 requests per minute, and we had thousands of records to sync. Without throttling, we'd blast through the limit in seconds and get blocked.
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Redis;

class SyncProductToShopify implements ShouldQueue
{
    use Queueable;

    public function __construct(
        public Product $product,
    ) {}

    public function handle(ShopifyClient $shopify): void
    {
        Redis::throttle('shopify-api')
            ->allow(80)
            ->every(60)
            ->then(
                function () use ($shopify) {
                    $shopify->updateProduct(
                        $this->product->shopify_id,
                        $this->product->toShopifyPayload(),
                    );
                },
                function () {
                    $this->release(30);
                },
            );
    }
}
This says: allow 80 requests per 60 seconds (I use 80 instead of 100 to leave some headroom). If we're under the limit, execute the closure. If we're over, call the second closure, which in this case releases the job back to the queue with a 30-second delay.
I'm deliberately leaving some margin below the actual API limit. Third-party rate limits are often measured differently than you'd expect (rolling windows vs. fixed windows, counting headers vs. actual requests), and hitting the limit means your entire integration stops. A 20% buffer costs you nothing and saves a lot of headaches.
The throttle state lives in Redis, so it works across multiple queue workers, multiple servers, even multiple applications if they share the same Redis instance. That's the nice thing about centralized state.
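Closely related: when what you need is a cap on concurrency rather than on rate, Laravel also ships Redis::funnel(), which limits how many callers run the closure at once. A sketch with illustrative numbers:

```php
use Illuminate\Support\Facades\Redis;

// At most 10 jobs talk to the export service at any one time;
// each slot auto-expires after 60 seconds as a safety net.
Redis::funnel('export-service')
    ->limit(10)
    ->releaseAfter(60)
    ->then(
        fn () => $this->runExport(),
        fn () => $this->release(15), // all slots busy: back on the queue
    );
```

Throttle answers "how many per minute?"; funnel answers "how many at once?". Pick based on how the downstream limit is actually defined.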
Pub/Sub for Real-Time Features
This one's less common in the Laravel world, but Redis pub/sub is incredibly useful for real-time features without the complexity of a full WebSocket setup.
The pattern is simple: one process publishes messages to a channel, and one or more processes subscribe to that channel and react. Laravel's broadcasting system actually uses this internally when you're using the Redis broadcast driver.
But you can use it directly for things that don't fit neatly into Laravel's event broadcasting. I used it for a deployment dashboard that needed to show real-time log output from multiple servers:
// Publishing side (in a deployment script or Artisan command)
use Illuminate\Support\Facades\Redis;

public function handle(): void
{
    $deployId = $this->argument('deploy-id');

    $this->runDeploySteps(function (string $line) use ($deployId) {
        Redis::publish("deploy:{$deployId}", json_encode([
            'server' => gethostname(),
            'line' => $line,
            'timestamp' => now()->toISOString(),
        ]));
    });
}

// Subscribing side (a separate long-running process)
Redis::subscribe(["deploy:{$deployId}"], function (string $message) {
    $data = json_decode($message, true);

    broadcast(new DeployLogReceived($data))->toOthers();
});
The subscriber listens on a Redis channel and rebroadcasts each message to the frontend via WebSockets (or SSE, or whatever you're using). The nice thing about this approach is that the publishing side doesn't need to know anything about the subscribers. It just fires messages into a channel. Multiple dashboard viewers, monitoring tools, or logging services can all subscribe independently.
One caveat though: Redis pub/sub is fire-and-forget. If a subscriber isn't connected when a message is published, they miss it. There's no message history or replay. If you need guaranteed delivery, use Redis Streams instead (Laravel doesn't wrap these natively, but you can use the raw Redis facade).
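For completeness, a rough sketch of the Streams alternative through the raw facade. Command names pass straight through to the underlying client, so exact argument shapes depend on whether you're on phpredis or predis; treat this as an outline, not gospel:

```php
use Illuminate\Support\Facades\Redis;

// Append a message; '*' tells Redis to assign the entry ID.
Redis::xadd("deploy:{$deployId}", '*', [
    'server' => gethostname(),
    'line' => $line,
]);

// Read the full history. Unlike pub/sub, entries persist,
// so a consumer that connects late can replay everything.
$entries = Redis::xrange("deploy:{$deployId}", '-', '+');
```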
When Not to Use Redis
I don't want to make it sound like everything is a nail just because I found a nice hammer. There are situations where these patterns aren't the right call.
If you're running a single server with a single worker process, atomic locks via the database or even the filesystem might be simpler. Redis adds infrastructure complexity: you need to run it, monitor it, and handle the (admittedly rare) case where it goes down.
If your rate limiting needs are purely HTTP-based, Laravel's built-in rate limiting middleware is simpler and doesn't require Redis specifically.
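For those simpler cases, the RateLimiter facade also covers non-HTTP throttling on whatever cache store you already have. A sketch (the key name and mailable are made up):

```php
use Illuminate\Support\Facades\RateLimiter;

// Allow 5 password-reset emails per user per minute.
$sent = RateLimiter::attempt(
    "reset-email:{$user->id}",
    5,
    fn () => Mail::to($user)->send(new ResetPassword($user)),
    60,
);

if (! $sent) {
    // Over the limit: attempt() returns false without running the closure.
}
```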
And if you need guaranteed message delivery for pub/sub, look at a proper message queue (SQS, RabbitMQ) instead of Redis pub/sub.
But for most Laravel applications that are beyond the "single server" stage? Redis is already in your stack for caching. You might as well use the other 90% of what it can do.