Database transactions feel deceptively simple. Wrap some queries in DB::transaction(), and either everything commits or everything rolls back. Atomic, done.
Except it's not that simple. I've shipped transaction-related bugs to production at least three times, and each one was a different flavor of "I didn't realize that could happen." Here's the subtle stuff that the docs don't emphasize enough.
The Basics (Quick Refresher)
The straightforward case works exactly like you'd expect:
DB::transaction(function () {
    $order = Order::create([
        'user_id' => auth()->id(),
        'total' => $this->calculateTotal(),
    ]);

    foreach ($this->items as $item) {
        $order->lines()->create([
            'product_id' => $item['product_id'],
            'quantity' => $item['quantity'],
            'price' => $item['price'],
        ]);
    }

    $order->payment()->create([
        'amount' => $order->total,
        'status' => 'pending',
    ]);
});
If any of those queries fail, the whole thing rolls back. No orphaned order lines, no payment records pointing at orders that don't exist. Great.
But here's where people get burned.
Gotcha #1: Jobs Dispatched Inside Transactions
This is the one that got me. Consider this:
DB::transaction(function () {
    $user = User::create($validatedData);

    SendWelcomeEmail::dispatch($user);

    $user->profile()->create(['bio' => '']);
});
Looks reasonable, right? Create a user, dispatch a welcome email, create their profile. The problem is that the job gets pushed to the queue the moment dispatch() is called; it doesn't wait for the transaction to commit. If your queue worker picks up that job before the transaction commits (which absolutely happens under load), the job tries to find a user that doesn't exist yet.
The fix is afterCommit():
DB::transaction(function () {
    $user = User::create($validatedData);

    SendWelcomeEmail::dispatch($user)->afterCommit();

    $user->profile()->create(['bio' => '']);
});
Or you can set $afterCommit = true on the job class itself, which I'd honestly recommend as a default for most jobs. You can also set it globally in your queue config. I've started doing that on every new project because the number of times you actually want a job to run before the transaction commits is vanishingly small.
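Both options look roughly like this. The $afterCommit property and the after_commit connection option are Laravel's documented names; the job class and connection shown are just placeholders:

```php
// Option 1: opt in on the job class itself.
class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Laravel holds this job until any open transaction commits.
    public $afterCommit = true;
}

// Option 2: make it the default for the whole connection
// (config/queue.php).
'redis' => [
    'driver' => 'redis',
    // ...
    'after_commit' => true,
],
```

With the global option set, a job that genuinely needs to run before commit can opt back out with beforeCommit() at the dispatch site.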
The same issue applies to events. If you fire an event inside a transaction and a listener dispatches a queued job... same problem. The event fires, the listener pushes the job, the job runs, the transaction hasn't committed yet.
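Queued listeners take the same flag as jobs, so the fix looks identical. A minimal sketch, assuming a hypothetical UserRegistered event:

```php
// A queued listener can defer until the surrounding transaction
// commits, using the same $afterCommit property as queued jobs.
class SendAdminNotification implements ShouldQueue
{
    public $afterCommit = true;

    public function handle(UserRegistered $event): void
    {
        // By the time this runs, $event->user has been committed
        // and is visible to the queue worker's connection.
    }
}
```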
Gotcha #2: Nested Transactions and Savepoints
Laravel supports nested transactions, but they might not work the way you think. When you nest DB::transaction() calls, Laravel uses a transaction counter. Only the outermost transaction actually issues a BEGIN and COMMIT. Inner "transactions" use savepoints.
This matters when exceptions happen. If an inner transaction fails:
DB::transaction(function () {
    $team = Team::create(['name' => 'Acme']);

    try {
        DB::transaction(function () use ($team) {
            $team->members()->create(['user_id' => 999]); // foreign key violation
        });
    } catch (\Throwable $e) {
        // The inner transaction rolled back to a savepoint.
        // The outer transaction is still alive.
        Log::warning("Couldn't add member, continuing...");
    }

    $team->update(['setup_complete' => true]); // This still works
});
The savepoint gets rolled back, but the outer transaction continues. That's usually what you want. But if you're using a database driver that doesn't support savepoints, or if you've disabled them, the inner failure rolls back everything. I've seen this bite people when switching between SQLite (for tests) and MySQL (in production) because the behavior is subtly different.
Also worth knowing: if you call DB::rollBack() manually inside a nested transaction, you're rolling back the savepoint, not the whole transaction. If you want to blow up the whole thing, throw an exception and let it propagate.
Gotcha #3: Deadlocks
Deadlocks happen when two transactions are each waiting for a lock the other one holds. MySQL will detect this and kill one of the transactions, which surfaces in Laravel as a QueryException with error code 1213.
Here's a real one I dealt with. We had an endpoint that transferred credits between users:
// Request A: Transfer from User 1 to User 2
DB::transaction(function () {
    $sender = User::lockForUpdate()->find(1);   // Locks User 1
    $receiver = User::lockForUpdate()->find(2); // Waits for User 2's lock

    $sender->decrement('credits', 100);
    $receiver->increment('credits', 100);
});

// Request B (simultaneous): Transfer from User 2 to User 1
DB::transaction(function () {
    $sender = User::lockForUpdate()->find(2);   // Locks User 2
    $receiver = User::lockForUpdate()->find(1); // Waits for User 1's lock - DEADLOCK
});
Request A locks User 1 and waits for User 2. Request B locks User 2 and waits for User 1. Neither can proceed. MySQL kills one of them.
The classic fix is to always lock rows in a consistent order. Sort by ID, lock the lower ID first:
DB::transaction(function () use ($senderId, $receiverId) {
    $ids = collect([$senderId, $receiverId])->sort()->values();

    $users = User::lockForUpdate()
        ->whereIn('id', $ids)
        ->orderBy('id')
        ->get();

    $sender = $users->firstWhere('id', $senderId);
    $receiver = $users->firstWhere('id', $receiverId);

    $sender->decrement('credits', 100);
    $receiver->increment('credits', 100);
});
Now both requests lock in the same order: User 1 first, then User 2. No deadlock.
Laravel's DB::transaction() also accepts a second argument for retry attempts. DB::transaction($callback, 3) will retry the whole closure up to 3 times if a deadlock occurs. I always set this on any transaction that involves lockForUpdate().
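The attempts argument goes at the end of the call, after the closure:

```php
// Retry the whole closure up to 3 times if the database reports a
// deadlock. Everything inside must be safe to re-run from scratch,
// since a retry replays the entire closure.
DB::transaction(function () use ($senderId, $receiverId) {
    // ... locking reads and credit updates as above ...
}, 3);
```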
Gotcha #4: Long-Running Transactions
I once had a transaction that called an external API. Don't do this. The transaction held locks on rows for the entire duration of the HTTP request, sometimes several seconds. Under load, every other request touching those rows would queue up behind it, and response times went through the roof.
The rule of thumb: keep transactions as short as possible. Do your external work first, gather all the data you need, and then open the transaction for just the database writes. If you need to do something after the writes (like notify an external service), do it after the commit.
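One way to restructure, sketched with a hypothetical payment gateway client (PaymentGateway and OrderConfirmed are illustrative names, not Laravel APIs):

```php
// 1. External work first, outside any transaction.
//    No rows are locked while we wait on the network.
$charge = $paymentGateway->charge($card, $amount);

// 2. A short transaction covering only the database writes.
$order = DB::transaction(function () use ($charge, $amount) {
    $order = Order::create(['total' => $amount]);

    $order->payment()->create([
        'amount'    => $amount,
        'charge_id' => $charge->id,
        'status'    => 'paid',
    ]);

    return $order;
});

// 3. Post-commit side effects last.
OrderConfirmed::dispatch($order);
```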
Gotcha #5: Observers and Model Events
Eloquent fires model events (created, updated, etc.) inside the transaction if you're inside one. Any observer listening to those events runs inside the transaction too. If your observer does something slow, or dispatches a job (see Gotcha #1), or throws an exception, it affects the transaction.
I've started using DB::afterCommit() in observers when the observer's work isn't essential to the transaction's integrity. It keeps the transaction lean and avoids the "job ran before commit" problem.
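A minimal sketch of that pattern in an observer; the cache invalidation is just an example of non-essential work:

```php
class OrderObserver
{
    public function created(Order $order): void
    {
        // Runs after the surrounding transaction commits. If no
        // transaction is open, the callback runs immediately.
        DB::afterCommit(function () use ($order) {
            Cache::forget("user.{$order->user_id}.order_count");
        });
    }
}
```

If the transaction rolls back, the callback never runs, which is exactly what you want for side effects tied to data that was never committed.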
What I Do Now
After getting burned enough times, here's where I've landed:
- Default to afterCommit for all queued jobs. Opt out when you need to, not in.
- Never call external APIs inside a transaction.
- If you're using lockForUpdate(), always lock in a consistent order and set retry attempts.
- Keep transactions short. Gather data first, then write.
- Test your transaction behavior with concurrent requests, not just unit tests. A race condition won't show up in a test that runs everything sequentially.
Transactions are powerful, but the happy path is easy and the edge cases are brutal. Knowing where the edges are saves you a lot of 2am debugging.