Using Redis as a cache and queue backend is table stakes. You add CACHE_DRIVER=redis and QUEUE_CONNECTION=redis and mostly forget about it. The moment you start treating Redis as actual primary storage for something, that changes. Suddenly you need to know which data structure to pick, how to batch operations without killing performance, and exactly what "atomic" means in each context.
This post is the mental model I wish I had before building anything serious on top of Redis.
The Command Alphabet
Redis commands are not arbitrary. The prefix (or lack of one) tells you which data structure you're working with. The suffix tells you the variant of the operation. Once you internalize the naming, the docs make a lot more sense.
Strings (no prefix)
The default type. A key maps to a single value.
SET user:123:name "Alice"
GET user:123:name
INCR user:123:login_count
SETNX session:abc "payload" # Set if Not eXists
SETEX session:abc 3600 "payload" # Set with EXpiry (seconds)
Use strings for simple key-value lookups, counters, and flags. They're the building blocks.
Hashes (H prefix)
A key maps to a map of field-value pairs. Think of it like an object or a row.
HSET user:123 name "Alice" email "alice@example.com" plan "pro"
HGET user:123 name
HMGET user:123 name email
HGETALL user:123
HINCRBY user:123 credits 10
HDEL user:123 plan
Use hashes when you need to store and update multiple fields on the same entity without reserializing the whole thing. If you're storing a user object and you need to increment a counter on it, HINCRBY beats GET, deserialize, increment, serialize, SET.
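For instance, updating one field inside a serialized blob takes a read-modify-write cycle, while the hash version is a single atomic command (the key and field names here are illustrative):

```
# String blob: two round trips, plus a race window between them
GET user:123:json                       # fetch, deserialize, bump credits in app code...
SET user:123:json "{...credits: 15}"    # ...reserialize, write back

# Hash: one atomic command, no race
HINCRBY user:123 credits 10
```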
Sorted Sets (Z prefix)
A key maps to a set of unique members, each with a floating-point score. Members are ordered by score.
ZADD leaderboard 1500.0 "user:42"
ZADD leaderboard 2300.5 "user:17"
ZRANGE leaderboard 0 -1 # All members, lowest score first
ZRANGEBYSCORE leaderboard 1000 2000
ZREVRANGEBYSCORE leaderboard +inf -inf LIMIT 0 10 # Top 10
ZSCORE leaderboard "user:42"
ZINCRBY leaderboard 50 "user:42"
ZREM leaderboard "user:42"
Sorted sets are the most powerful structure Redis offers. You can use the score as a timestamp for time-ordered data, as a rank for leaderboards, or as any float you want to order or range-query by. The ZRANGEBYLEX variant lets you do lexicographic range queries when all members have the same score.
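A sketch of that lexicographic variant, assuming every member was added with score 0 (the key name is made up). `[` marks an inclusive bound and `(` an exclusive one:

```
ZADD autocomplete 0 "apple" 0 "apricot" 0 "banana"
ZRANGEBYLEX autocomplete [ap (b   # everything from "ap" (inclusive) up to "b" (exclusive)
# -> "apple", "apricot"
```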
Sets (S prefix)
A key maps to an unordered collection of unique strings.
SADD user:123:tags "admin" "beta"
SREM user:123:tags "beta"
SMEMBERS user:123:tags
SCARD user:123:tags # Cardinality (count)
SISMEMBER user:123:tags "admin"
SUNION user:123:tags user:456:tags
SINTER user:123:tags user:456:tags
Use sets for membership tracking. If you need to know whether something belongs to a group, or you need set math (union, intersection, difference), reach for a set.
Lists (L prefix)
A key maps to an ordered list (internally a quicklist: a linked list of compact packed nodes). Push and pop from either end.
RPUSH jobs:pending "job-abc"
LPUSH jobs:pending "job-xyz" # Prepend
RPOP jobs:pending
LPOP jobs:pending
LRANGE jobs:pending 0 -1
LLEN jobs:pending
Lists are what Laravel's Redis queue driver uses under the hood (alongside sorted sets for delayed and reserved jobs). You probably won't need them directly if you're using Laravel's queue, but they're the right structure when you need a first-in-first-out or last-in-first-out collection.
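As a sketch, a minimal FIFO queue is just a push on one end and a pop on the other (the queue name is illustrative):

```
RPUSH jobs:pending "job-1" "job-2"   # append to the tail
LPOP jobs:pending                    # -> "job-1" (first in, first out)
BLPOP jobs:pending 5                 # blocking pop: waits up to 5 seconds for a job
```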
Common Suffixes
Once you know the prefixes, the suffixes snap into place:
| Suffix | Meaning |
|---|---|
| REV | Reversed order |
| RANGE | Return a slice |
| SCORE | The float value associated with a sorted set member |
| BY | Filter by a condition |
| LEX | Lexicographic (alphabetical) ordering |
| EX | Expiry in seconds |
| NX | Only if Not eXists |
| XX | Only if eXists |
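Several of these suffixes also show up as options on plain SET, which is how you build a simple lock; a sketch (the key and value are illustrative):

```
SET lock:batch:42 "worker-a" NX EX 10   # acquire: OK if absent, (nil) if someone holds it
SET lock:batch:42 "worker-a" XX EX 10   # refresh: only succeeds if the lock already exists
```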
Pipelines
Every Redis command is a round trip: your app sends the command, waits for Redis to process it, gets the response. When you need to send 50 commands, those round trips add up.
A pipeline batches multiple commands into a single network write. Redis processes them in order and returns all the results together.
$results = Redis::pipeline(function ($pipe) use ($batchIds) {
foreach ($batchIds as $id) {
$pipe->hgetall("batch:{$id}");
}
});
// $results is an array indexed in the same order as the commands
foreach ($results as $index => $hash) {
// process $hash
}
Pipelines are not atomic. If one command fails, the others still execute. The result array will have a mix of successful responses and error responses at their respective indexes. Redis processes them sequentially but doesn't treat the block as a unit. The performance win comes purely from reducing round trips, not from any transactional guarantee.
Transactions
When you need atomicity, use MULTI/EXEC. In Laravel that's Redis::transaction():
Redis::transaction(function ($tx) use ($batchId, $jobId) {
$tx->hincrby("batch:{$batchId}", 'failed_jobs', 1);
$tx->sadd("batch:{$batchId}:failed_ids", $jobId);
$tx->hset("batch:{$batchId}", 'finished_at', now()->toIso8601String());
});
All three commands execute as a single unit. No other client can interleave commands between them. One nuance: Redis transactions don't roll back. If a queued command hits a runtime error (say, a wrong-type operation), the remaining commands still run. The guarantee is about interleaving, not all-or-nothing execution.
Here's the critical caveat: you cannot read a value inside a transaction and act on it conditionally. Commands inside MULTI/EXEC are queued, not executed immediately. If you call HGET inside a transaction, you get back a placeholder, not the actual value. By the time EXEC runs, the queue fires, but you've already missed your window to branch on the result.
// This does NOT work as you'd expect
Redis::transaction(function ($tx) use ($batchId) {
$pendingJobs = $tx->hget("batch:{$batchId}", 'pending_jobs'); // returns a queued placeholder
if ($pendingJobs === 0) { // always false
$tx->hset("batch:{$batchId}", 'finished_at', now()->toIso8601String());
}
});
When you need atomicity AND conditional logic, you need Lua.
Lua Scripts
Lua scripts run atomically on the Redis server. The entire script executes without interruption, and because it runs server-side, you can read a value and conditionally write based on it in one atomic operation.
$script = <<<'LUA'
local pending = redis.call('HGET', KEYS[1], 'pending_jobs')
if tonumber(pending) == 0 then
redis.call('HSET', KEYS[1], 'finished_at', ARGV[1])
return 1
end
return 0
LUA;
$result = Redis::eval(
$script,
1,
"batch:{$batchId}",
now()->toIso8601String()
);
The arguments to eval are: the script, the number of keys, then the keys, then any additional arguments. Inside the script, KEYS[1] maps to the first key argument and ARGV[1] maps to the first non-key argument.
Lua is harder to debug than a transaction. Errors surface as generic failures and the script runs opaquely. Reserve it for cases where you genuinely need the combination of atomicity and conditional logic. When a transaction covers your use case, prefer the transaction.
Locking and Atomicity in Laravel
Single Redis commands are inherently atomic. INCR, HSET, ZADD each execute atomically without any extra work. The problem only arises when you need multiple commands to behave atomically together, or when you need to read a value and then write conditionally based on it.
For the latter case in PHP code (not inside a Lua script), use Cache::lock():
$lock = Cache::lock("batch:{$batchId}:lock", 10);
if ($lock->get()) {
try {
$pending = (int) Redis::hget("batch:{$batchId}", 'pending_jobs');
if ($pending === 0) {
Redis::hset("batch:{$batchId}", 'finished_at', now()->toIso8601String());
}
} finally {
$lock->release();
}
}
Under the hood, Cache::lock() uses SET key value NX EX seconds. The NX ensures only one process acquires the lock at a time. It's not as efficient as LUA (you're still doing multiple round trips), but it's much easier to read and debug.
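In redis-cli terms, the lock boils down to something like this (the key name and tokens are illustrative, and the real cache key also carries Laravel's prefix):

```
SET batch:42:lock "token-1" NX EX 10   # first process: OK
SET batch:42:lock "token-2" NX EX 10   # second process: (nil), the lock is held
```

On release, Laravel runs a small Lua script that deletes the key only if the stored token matches the caller's, so one process can't release another's lock.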
Laravel Prefixes
By default, Laravel prefixes every Redis key with laravel_database_. So when you call Redis::set('user:123', 'Alice'), the actual key in Redis is laravel_database_user:123.
This is configured in config/database.php:
'redis' => [
'client' => env('REDIS_CLIENT', 'phpredis'),
'options' => [
'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
],
// connections...
],
This matters when you're debugging in redis-cli. If you run KEYS * and don't see your key, the prefix is why. Run KEYS laravel_database_* and you'll find it. It also matters when multiple apps share a Redis instance. Each app's prefix acts as a namespace, which prevents collisions.
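One caveat: KEYS walks the entire keyspace in a single blocking call, which is fine locally but risky on a busy shared instance. SCAN does the same job incrementally:

```
SCAN 0 MATCH laravel_database_* COUNT 100
# -> a new cursor plus one page of matching keys;
#    repeat with the returned cursor until it comes back as 0
```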
Jobs vs. Cache: Use Separate Connections
Mixing queue jobs and cache in the same Redis database is a bad idea. Not because keys will collide (they won't), but because the data profiles are completely different. Cache entries have TTLs and you'll flush them periodically. Queue jobs are transient but should not be flushed mid-processing. If you run Cache::flush() and the queue lives in the same database, you've just evicted all your queued jobs.
Use separate Redis databases or separate connections for each concern:
// config/database.php
'redis' => [
'cache' => [
'url' => env('REDIS_URL'),
'host' => env('REDIS_HOST', '127.0.0.1'),
'database' => env('REDIS_CACHE_DB', '1'),
],
'queue' => [
'url' => env('REDIS_URL'),
'host' => env('REDIS_HOST', '127.0.0.1'),
'database' => env('REDIS_QUEUE_DB', '2'),
],
],
// config/cache.php
'default' => 'redis',
'stores' => [
'redis' => [
'driver' => 'redis',
'connection' => 'cache',
],
],
// config/queue.php
'connections' => [
'redis' => [
'driver' => 'redis',
'connection' => 'queue',
'queue' => env('REDIS_QUEUE', 'default'),
],
],
Redis ships with 16 logical databases (0 through 15) by default, each a separate keyspace. FLUSHDB on database 1 does not touch database 2. That isolation is worth the slight config overhead.
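You can verify the isolation in redis-cli:

```
SELECT 1     # the cache database
FLUSHDB      # wipes cache keys only
SELECT 2     # the queue database
DBSIZE       # queue keys are still all there
```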
In the next post, I'll put all of this together: sorted sets, hashes, sets, pipelines, and transactions to build a Redis-backed replacement for Laravel's database batch repository.