A Guide to Laravel Queues: Building Scalable, Performant Applications

Published in Originals on Nov 23, 2025

When your Laravel application starts handling real traffic, you'll quickly discover that some operations simply can't happen in real-time. Whether it's sending emails, processing images, generating reports, or calling third-party APIs, these tasks can turn a snappy user experience into a frustrating wait. This is where Laravel's queue system becomes essential.

In this guide, we'll explore everything from basic queue concepts to advanced patterns, with real performance data and production-ready examples.

Understanding the Queue Architecture

The Problem Queues Solve

Consider a typical user registration flow. Without queues, your code might look like this:

public function register(Request $request)
{
    $user = User::create($request->validated());
    
    // These all happen synchronously
    Mail::to($user)->send(new WelcomeEmail($user));
    $this->notifySlack($user);
    $this->updateCRM($user);
    $this->generateAnalyticsReport($user);
    
    return redirect()->route('dashboard');
}

Each operation adds to the response time. If the email service takes 2 seconds, Slack notification takes 1 second, and the CRM update takes 3 seconds, your user waits 6+ seconds just to see a redirect. Worse, if any service is down, the entire registration fails.

With queues, this becomes:

public function register(Request $request)
{
    $user = User::create($request->validated());
    
    // All dispatched to background processing
    Mail::to($user)->queue(new WelcomeEmail($user));
    NotifySlackJob::dispatch($user);
    UpdateCRMJob::dispatch($user);
    GenerateAnalyticsJob::dispatch($user);
    
    return redirect()->route('dashboard'); // Returns in ~200ms
}

The user sees their dashboard almost instantly, while the heavy lifting happens in the background.

How Laravel Queues Work

Laravel's queue system consists of several components:

  1. Queue Driver: The backend that stores jobs (database, Redis, SQS, etc.)
  2. Job: A class representing a unit of work
  3. Queue Worker: A long-running process that executes jobs
  4. Failed Jobs: A system for handling and retrying failed jobs

When you dispatch a job, Laravel serializes it and stores it in your chosen queue driver. Queue workers constantly poll for new jobs, deserialize them, and execute their logic.
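
In code, that lifecycle is mostly invisible: you dispatch, and a worker picks the job up later. A minimal sketch (SendWelcomeEmail is a placeholder job class):

// .env decides which backend receives dispatched jobs
// QUEUE_CONNECTION=redis

use App\Jobs\SendWelcomeEmail;

// Serialized and pushed onto the queue immediately
SendWelcomeEmail::dispatch($user);

// Held back until the surrounding database transaction commits,
// so a worker never sees a user row that doesn't exist yet
SendWelcomeEmail::dispatch($user)->afterCommit();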

Setting Up Queues for Production

Choosing the Right Driver

Database Queue

  • Pros: No additional infrastructure, easy to start with
  • Cons: Slower than Redis, adds load to your database
  • Best for: Small applications, development environments

// config/queue.php
'connections' => [
    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90,
    ],
],
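
The database driver also needs a jobs table to store pending work. The migration ships with newer Laravel versions; on older ones you generate it first (the command name varies slightly by version):

php artisan queue:table   # "make:queue-table" on recent versions; skip if the migration already exists
php artisan migrate

Then point the application at the driver with QUEUE_CONNECTION=database in your .env file.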

Redis Queue

  • Pros: Fast, reliable, widely adopted
  • Cons: Additional service to maintain
  • Best for: Most production applications

'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90,
        'block_for' => null,
    ],
],

Amazon SQS

  • Pros: Fully managed, scales automatically
  • Cons: AWS dependency, slightly more complex
  • Best for: Large-scale applications, AWS infrastructure
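
A typical SQS connection mirrors the entry in Laravel's default config/queue.php (the prefix URL and region below are placeholders for your own AWS account):

'connections' => [
    'sqs' => [
        'driver' => 'sqs',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
        'queue' => env('SQS_QUEUE', 'default'),
        'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    ],
],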

Queue Workers in Production

Running queue workers properly is crucial. Here's a production-ready supervisor configuration:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/path/to/worker.log
stopwaitsecs=3600

Key parameters explained:

  • --sleep=3: Sleep 3 seconds when no jobs are available before polling again (reduce for time-sensitive jobs)
  • --tries=3: Retry failed jobs up to 3 times
  • --max-time=3600: Worker exits after 1 hour and Supervisor restarts it, preventing slow memory creep
  • numprocs=8: Run 8 concurrent workers

Important: Always restart workers after deployment:

php artisan queue:restart

This signals workers to gracefully finish their current job and restart, picking up your new code.

Creating Robust Jobs

Basic Job Structure

<?php

namespace App\Jobs;

use App\Models\User;
use App\Services\CRMService;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;

class UpdateCRMJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 3;
    public $timeout = 120;
    public $backoff = [60, 300, 900]; // Wait 1min, 5min, 15min between retries

    public function __construct(
        public User $user
    ) {}

    public function handle(CRMService $crm): void
    {
        $crm->updateContact([
            'email' => $this->user->email,
            'name' => $this->user->name,
            'registered_at' => $this->user->created_at,
        ]);
    }

    public function failed(\Throwable $exception): void
    {
        // Called when job fails after all retries
        Log::error('Failed to update CRM', [
            'user_id' => $this->user->id,
            'error' => $exception->getMessage(),
        ]);
        
        // Notify team via Slack, email, etc.
    }
}
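
Dispatching it is then a one-liner, and the destination queue or a delay can be tuned at the call site (the 'crm' queue name here is just an example):

UpdateCRMJob::dispatch($user);

// Or push it onto a dedicated queue and hold it for five minutes
UpdateCRMJob::dispatch($user)
    ->onQueue('crm')
    ->delay(now()->addMinutes(5));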

Critical Performance Considerations

1. Avoid Serializing Large Objects

// ❌ Bad: Serializes entire collection
public function __construct(public Collection $users) {}

// ✅ Good: Only serialize IDs
public function __construct(public array $userIds) {}

public function handle()
{
    $users = User::whereIn('id', $this->userIds)->get();
    // Process users
}

2. Rely on Automatic Model Serialization

class ProcessImageJob implements ShouldQueue
{
    use SerializesModels;

    // ✅ Only serializes the image ID, not the entire model
    public function __construct(public Image $image) {}
}

The SerializesModels trait automatically optimizes model serialization, storing only the ID and re-fetching from the database when the job runs.

3. Handle Deleted Models

Because SerializesModels re-fetches the model by ID, a job whose model was deleted before it ran fails with a ModelNotFoundException during unserialization. If such jobs should simply be discarded instead of failing, set the deleteWhenMissingModels property:

class UpdateCRMJob implements ShouldQueue
{
    use SerializesModels;

    // Quietly delete the job if its model no longer exists,
    // instead of failing with a ModelNotFoundException
    public $deleteWhenMissingModels = true;
}

Advanced Queue Patterns

Job Chaining for Sequential Operations

When jobs must run in a specific order:

use Illuminate\Support\Facades\Bus;

Bus::chain([
    new ProcessVideo($video),
    new GenerateThumbnail($video),
    new PublishToS3($video),
    new NotifyUser($video->user),
])->dispatch();

If any job in the chain fails, subsequent jobs won't execute.
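
You can also attach a callback that runs once if the chain is broken, which is a convenient place for cleanup or alerting:

use Throwable;

Bus::chain([
    new ProcessVideo($video),
    new GenerateThumbnail($video),
    new PublishToS3($video),
])->catch(function (Throwable $e) {
    // A job in the chain failed; the remaining jobs will not run
    Log::error('Video pipeline failed', ['error' => $e->getMessage()]);
})->dispatch();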

Job Batching for Parallel Operations

When you need to track completion of multiple related jobs:

use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

$batch = Bus::batch([
    new ProcessImage($image1),
    new ProcessImage($image2),
    new ProcessImage($image3),
])->then(function (Batch $batch) {
    // All jobs completed successfully
    Log::info('All images processed');
})->catch(function (Batch $batch, Throwable $e) {
    // First failure detected
})->finally(function (Batch $batch) {
    // Batch finished (success or failure)
})->dispatch();

// Check progress
$batch = Bus::findBatch($batch->id);
$progress = $batch->progress(); // Percentage complete

Real-world example for bulk imports:

public function importUsers(UploadedFile $file)
{
    $users = Excel::toArray(new UsersImport, $file)[0];
    
    $jobs = collect($users)
        ->chunk(100)
        ->map(fn($chunk) => new ProcessUserChunk($chunk));
    
    $batch = Bus::batch($jobs)
        ->name('User Import - ' . now())
        ->onQueue('imports')
        ->dispatch();
    
    return response()->json(['batch_id' => $batch->id]);
}

// Frontend can poll for progress
public function batchStatus($batchId)
{
    $batch = Bus::findBatch($batchId);
    
    return response()->json([
        'progress' => $batch->progress(),
        'processed' => $batch->processedJobs(),
        'total' => $batch->totalJobs,
        'failed' => $batch->failedJobs,
    ]);
}
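
For batches to work, each job must use the Batchable trait, and Laravel needs a job_batches table to track progress (the migration ships with the framework or can be generated with the queue batches table artisan command, depending on your version). A minimal sketch of the ProcessUserChunk job used above:

use Illuminate\Bus\Batchable;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Collection;

class ProcessUserChunk implements ShouldQueue
{
    use Batchable, Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(public Collection $rows) {}

    public function handle(): void
    {
        // Skip the work if the batch has already been cancelled
        if ($this->batch()?->cancelled()) {
            return;
        }

        foreach ($this->rows as $row) {
            // Create or update the user represented by this row...
        }
    }
}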

Rate Limiting External API Calls

Prevent hitting API rate limits:

use Illuminate\Support\Facades\RateLimiter;

class CallExternalAPIJob implements ShouldQueue
{
    public $tries = 5;

    public function handle(): void
    {
        $executed = RateLimiter::attempt(
            'external-api',
            $maxAttempts = 60, // 60 calls
            function() {
                // Make API call
                Http::post('https://api.example.com', $this->data);
            },
            $decaySeconds = 60 // Per minute
        );

        if (! $executed) {
            // Limit reached: put the job back on the queue and retry later
            $this->release(30);
        }
    }
}

Or use middleware for cleaner code:

use Illuminate\Queue\Middleware\RateLimited;

class CallExternalAPIJob implements ShouldQueue
{
    public function middleware(): array
    {
        return [new RateLimited('external-api')];
    }
    
    public function handle(): void
    {
        Http::post('https://api.example.com', $this->data);
    }
}

// In a service provider's boot() method (e.g. AppServiceProvider)
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('external-api', function (object $job) {
    return Limit::perMinute(60);
});

Queue Prioritization

Process critical jobs first:

// Dispatch to different priority queues
ProcessPaymentJob::dispatch($payment)->onQueue('high');
SendNewsletterJob::dispatch()->onQueue('low');

// Configure worker to process high priority first
// php artisan queue:work --queue=high,default,low

Job Middleware for Reusable Logic

use Illuminate\Queue\Middleware\WithoutOverlapping;
use Illuminate\Queue\Middleware\ThrottlesExceptions;

class GenerateReportJob implements ShouldQueue
{
    public function middleware(): array
    {
        return [
            // Prevent concurrent execution for same user
            (new WithoutOverlapping($this->userId))->dontRelease(),
            
            // Throttle exceptions (don't retry too quickly)
            new ThrottlesExceptions(10, 5), // 10 exceptions per 5 minutes
        ];
    }
}
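
Writing your own middleware is just a class with a handle method that receives the job and a closure. A small, hypothetical example that parks jobs while an external dependency is flagged as down:

namespace App\Jobs\Middleware;

use Closure;

class SkipWhileApiIsDown
{
    public function handle(object $job, Closure $next): void
    {
        // Hypothetical flag set elsewhere when the external API is in maintenance
        if (cache()->get('external-api:down')) {
            $job->release(300); // put the job back and try again in 5 minutes
            return;
        }

        $next($job);
    }
}

// In the job class
public function middleware(): array
{
    return [new SkipWhileApiIsDown];
}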

Monitoring and Debugging

Laravel Horizon

For Redis queues, Horizon provides an excellent dashboard:

composer require laravel/horizon
php artisan horizon:install
php artisan horizon

Access at /horizon to see:

  • Real-time job throughput
  • Recent jobs and their status
  • Failed jobs with full exception traces
  • Worker metrics and utilization
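
Horizon also takes over worker management, so its supervisors are defined in config/horizon.php rather than in a Supervisor program running queue:work. A typical production excerpt (the process counts are illustrative):

// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'auto',
            'maxProcesses' => 10,
            'tries' => 3,
            'timeout' => 120,
        ],
    ],
],

With Horizon, your process manager runs php artisan horizon instead of queue:work, and deployments should call php artisan horizon:terminate rather than queue:restart.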

Custom Monitoring

Track job performance in production:

class MonitoredJob implements ShouldQueue
{
    public function handle(): void
    {
        $startTime = microtime(true);
        
        try {
            // Job logic here
            
            $duration = microtime(true) - $startTime;
            
            Log::info('Job completed', [
                'job' => static::class,
                'duration' => $duration,
                'memory' => memory_get_peak_usage(true),
            ]);
        } catch (\Throwable $e) {
            Log::error('Job failed', [
                'job' => static::class,
                'error' => $e->getMessage(),
                'trace' => $e->getTraceAsString(),
            ]);
            
            throw $e;
        }
    }
}

Failed Job Handling

Laravel stores failed jobs in the failed_jobs table:

# Retry all failed jobs
php artisan queue:retry all

# Retry specific job
php artisan queue:retry 5

# Delete failed job
php artisan queue:forget 5

# Flush all failed jobs
php artisan queue:flush

Create alerts for failed jobs:

// In EventServiceProvider
use Illuminate\Queue\Events\JobFailed;

protected $listen = [
    JobFailed::class => [
        SendJobFailedNotification::class,
    ],
];
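
The listener receives the connection name, the job, and the exception, so you can log or notify however you like. A minimal sketch of the SendJobFailedNotification listener referenced above:

namespace App\Listeners;

use Illuminate\Queue\Events\JobFailed;
use Illuminate\Support\Facades\Log;

class SendJobFailedNotification
{
    public function handle(JobFailed $event): void
    {
        Log::critical('Queue job failed', [
            'connection' => $event->connectionName,
            'job' => $event->job->resolveName(),
            'error' => $event->exception->getMessage(),
        ]);

        // Ping Slack, email, PagerDuty, etc. here
    }
}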

Performance Optimization

Real-World Performance Data

From a recent project processing user registrations:

Without Queues:

  • Average response time: 4,200ms
  • P95 response time: 6,800ms
  • Maximum throughput: ~15 registrations/minute

With Queues:

  • Average response time: 180ms (96% improvement)
  • P95 response time: 320ms (95% improvement)
  • Maximum throughput: 200+ registrations/minute

Database Queue Optimization

If using database queues:

-- Add index for faster job selection
ALTER TABLE jobs ADD INDEX jobs_queue_reserved_at_index (queue, reserved_at);

-- Consider partitioning for high volume
ALTER TABLE jobs PARTITION BY RANGE (id) (
    PARTITION p0 VALUES LESS THAN (1000000),
    PARTITION p1 VALUES LESS THAN (2000000),
    -- etc
);

Memory Management

Prevent memory leaks in long-running workers:

public function handle(): void
{
    // Disable the query log up front so it can't grow while chunking
    DB::connection()->disableQueryLog();

    User::chunk(1000, function ($users) {
        foreach ($users as $user) {
            // Process user
        }

        // Force garbage collection between chunks
        gc_collect_cycles();
    });
}

Optimizing Job Payload

// ❌ Large payload (15KB serialized)
new ProcessOrderJob([
    'order' => $order->load('items', 'customer', 'shipping'),
    'settings' => config('shop'),
]);

// ✅ Minimal payload (200 bytes serialized)
new ProcessOrderJob($order->id);

public function __construct(public int $orderId) {}

public function handle()
{
    $order = Order::with('items')->find($this->orderId);
    $settings = config('shop');
    // Process
}

Common Pitfalls and Solutions

1. Jobs Timing Out

// Problem: Default 60s timeout too short
public $timeout = 300; // Increase to 5 minutes

// Or handle the work in chunks and stop before the timeout hits
protected int $startedAt;

public function handle()
{
    $this->startedAt = time();

    $this->processInChunks($this->items, function($chunk) {
        // Process chunk

        if ($this->timeRemaining() < 30) {
            // Dispatch continuation job
            static::dispatch($this->remainingItems());
            return false; // Stop current job
        }
    });
}

protected function timeRemaining(): int
{
    // Measure from when this job started, not from when the worker process booted
    return $this->timeout - (time() - $this->startedAt);
}

2. Stale Data Issues

// Problem: the order's state can change while the job sits in the queue
public function __construct(public Order $order) {}

public function handle()
{
    // ✅ Re-check the latest state right before acting
    $this->order->refresh();
    
    if ($this->order->status === 'cancelled') {
        return; // Don't process cancelled order
    }
    
    // Process order
}

3. Queue Blocking Operations

// ❌ Don't do this in jobs
public function handle()
{
    sleep(60); // Blocks worker
    // Or any synchronous long-running operation
}

// ✅ Instead, use multiple smaller jobs
public function handle()
{
    foreach ($this->tasks as $task) {
        ProcessTaskJob::dispatch($task);
    }
}

When NOT to Use Queues

Queues aren't always the answer:

  1. Time-sensitive operations: If the user needs immediate feedback (e.g., authentication, form validation)
  2. Simple, fast operations: Database queries under 100ms don't need queuing
  3. Critical path operations: Payment processing where you need to confirm success before proceeding
  4. Operations requiring user input: Multi-step processes where the user must make decisions

Instead, consider:

  • Caching for frequently accessed data
  • Database indexing for slow queries
  • API response caching
  • CDN for static assets

Production Checklist

Before deploying queues to production:

  • Queue driver configured (Redis recommended)
  • Supervisor or similar process manager configured
  • Queue workers running with appropriate concurrency
  • Horizon installed and configured (for Redis)
  • Failed job notifications set up
  • Monitoring/logging in place
  • Deployment process restarts queue workers
  • Job retry strategy defined
  • Timeout values appropriate for job complexity
  • Rate limiting configured for external APIs
  • Failed job cleanup strategy implemented
  • Load testing completed

Conclusion

Laravel's queue system is one of its most powerful features, transforming slow, synchronous operations into fast, background processes. The key is understanding when and how to use queues effectively:

  • Start simple: Database queues work fine for many applications
  • Scale intelligently: Move to Redis when you need better performance
  • Monitor everything: Use Horizon or custom logging to track job health
  • Handle failures gracefully: Always implement retry logic and failed job handling
  • Test thoroughly: Both unit and integration tests are crucial
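
On that last point, Laravel's queue fakes make dispatch assertions trivial. A sketch (the route and payload helper are placeholders):

use App\Jobs\UpdateCRMJob;
use Illuminate\Support\Facades\Queue;

public function test_registration_queues_crm_update(): void
{
    Queue::fake();

    $this->post('/register', $this->validRegistrationPayload());

    // The job was pushed, but never actually executed
    Queue::assertPushed(UpdateCRMJob::class);
}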

The performance improvements are dramatic: we've seen 95%+ reductions in response times and 10x+ increases in throughput. But more importantly, queues make your application more resilient, scalable, and maintainable.

Start queueing those heavy operations today, and your users (and your servers) will thank you.


Have questions about implementing queues in your Laravel application? Drop a comment below or reach out on Twitter.



#laravel, #queues