Bull — Job Manager for Node.js

Sathish
Feb 13, 2022 · 4 min read

There are already a few blogs and plenty of documentation on Bull for Node.js, but I would love to share my own experience of how we replaced RabbitMQ with Bull.

A friend of mine works on a health and fitness app whose backend is written in Node.js. They were using RabbitMQ for job processing and needed to handle around 500K+ jobs every day for processing user activities, sending notifications to users and much more. Some of the notifications failed to deliver, and there was no proper retry mechanism to reprocess them. They were also using node-schedule to schedule jobs. He approached me to help overcome this challenge.

I have always been fascinated by Resque in Ruby. It is a Redis-based job manager that provides features like job retries, scheduling, hooks, priority jobs and much more. I realised we needed a job manager rather than just a message queue to solve the above problem, and my search ended with

Bull — The fastest, most reliable, Redis-based queue for Node.

We were convinced that Bull was the right solution to the above problem because it is rich in features:

  • Minimal CPU usage due to a polling-free design.
  • Robust design based on Redis.
  • Delayed jobs.
  • Schedule and repeat jobs according to a cron specification.
  • Rate limiter for jobs.
  • Retries.
  • Priority.
  • Concurrency.
  • Pause/resume — globally or locally.
  • Multiple job types per queue.
  • Threaded (sandboxed) processing functions.
  • Automatic recovery from process crashes.

Let’s discuss the POC and how we implemented it in production.

Bull provides a very simple mechanism to create queues and publish jobs to them.

const Queue = require('bull');
const queue = new Queue('queue1', { redis: { port: 6379, host: '127.0.0.1' } });

We just created an object of the Queue class by passing the queue name and the Redis connection details. This made us wonder whether each queue establishes a separate Redis connection. Fortunately, Bull provides a way to reuse one Redis connection across all queues. Here is the reference link for the same.
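For reference, here is a minimal sketch of that pattern, assuming Bull's createClient option and the ioredis client (queue names and connection details are illustrative):

const Queue = require('bull');
const Redis = require('ioredis');

// Shared connections reused by every queue.
const client = new Redis('redis://127.0.0.1:6379');
const subscriber = new Redis('redis://127.0.0.1:6379');

const opts = {
  createClient: (type) => {
    switch (type) {
      case 'client':
        return client;      // normal Redis commands
      case 'subscriber':
        return subscriber;  // pub/sub events
      default:
        return new Redis('redis://127.0.0.1:6379'); // blocking client needs its own connection
    }
  },
};

const queue1 = new Queue('queue1', opts);
const queue2 = new Queue('queue2', opts);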

Next, to submit a job to the queue:

queue.add({ data: 'job data' }, { attempts: 2, removeOnComplete: true, removeOnFail: true });

The queue’s add() method accepts the job data as a JSON parameter along with optional configuration for that job. Here, I would like to highlight a key feature: each job can have its own configuration, even though it uses the same queue.

Example: we have two jobs, Job1 and Job2. Job1 should be retried in case of failure but Job2 should not. That is possible: just pass the relevant configuration while adding each job to the queue, as in the sketch below.
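As a rough sketch (job payloads and option values are hypothetical):

// Job1: retried up to 3 times, with a 5-second backoff between attempts.
queue.add({ type: 'job1', payload: 'data' }, { attempts: 3, backoff: 5000 });

// Job2: a single attempt, never retried.
queue.add({ type: 'job2', payload: 'data' }, { attempts: 1 });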

Here are the optional configuration parameters for jobs (a combined example follows the list):

  • priority: number; ranges from 1 (highest priority) to MAX_INT (lowest priority).
  • delay: number; An amount of milliseconds to wait until this job can be processed.
  • attempts: number; The total number of attempts to try the job until it completes.
  • repeat: RepeatOpts; Repeat job according to a cron specification.
  • backoff: number | BackoffOpts; Backoff setting for automatic retries if the job fails, default strategy: `fixed`.
  • lifo: boolean; if true, adds the job to the right of the queue instead of the left (default false)
  • timeout: number; The number of milliseconds after which the job should fail with a timeout error
  • jobId: number | string; Override the job ID — by default, the job ID is a unique integer, but you can use this setting to override it.
  • removeOnComplete: boolean | number | KeepJobs; If true, removes the job from redis when it successfully completes.
  • removeOnFail: boolean | number | KeepJobs; If true, removes the job from redis when it fails after all attempts.
  • stackTraceLimit: number; Limits the amount of stack trace lines that will be recorded in the stacktrace.

For more details on job options, refer here.
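As promised above, here is a sketch combining a few of these options; the cron expression, delays and counts are only illustrative:

// A repeatable job: runs every day at 08:00, retried up to 3 times with exponential backoff.
queue.add(
  { data: 'daily digest' },
  {
    repeat: { cron: '0 8 * * *' },
    attempts: 3,
    backoff: { type: 'exponential', delay: 60000 },
    removeOnComplete: true,
  }
);

// A one-off delayed, high-priority job.
queue.add({ data: 'reminder' }, { delay: 60000, priority: 1 });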

Bull also provides addBulk() to submit multiple jobs at once; refer here.
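A sketch of what that looks like, with hypothetical payloads; each entry carries its own data and options:

queue.addBulk([
  { data: { userId: 1 }, opts: { attempts: 2 } },
  { data: { userId: 2 }, opts: { attempts: 2 } },
  { data: { userId: 3 }, opts: { delay: 5000 } },
]);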

jobManager.js

const Queue = require('bull');
const queue = new Queue('queue1', { redis: { port: 6379, host: 'redis' } });
queue.add({ data: 'job data' }, { attempts: 2, removeOnComplete: true, removeOnFail: true });

So far, we have discussed submitting jobs to the queue. Next, we will discuss processing jobs, i.e. job workers/subscribers.

In the job workers too, we create a queue object and pass the Redis connection details. To process jobs:

worker.js

const Queue = require('bull');
const queue = new Queue('queue1', { redis: { port: 6379, host: '127.0.0.1' } });
queue.process(10, (job) => {
  console.log(job.data);
  return Promise.resolve();
});

process() accepts two parameters: the concurrency, i.e. the maximum number of jobs this worker processes in parallel, and a callback function that actually processes each job.

If the callback returns a resolved promise, Bull assumes the job has been processed successfully. For failed jobs, return a rejected promise (or throw an error).
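To keep track of outcomes, Bull also emits local events on the queue; a small sketch (handler bodies are illustrative):

queue.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed`, result);
});

queue.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed after ${job.attemptsMade} attempt(s):`, err.message);
});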

We tried processing more than 500K jobs using Bull. We were impressed with the performance, the retry-on-failure behaviour, the scheduling, and the options to remove job data from Redis.

There are some UIs available as well to monitor Bull queues and jobs and collect stats. Follow this link to get more details.
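One such UI is bull-board; a rough sketch of wiring it into an Express app (the @bull-board package APIs differ across versions, so treat this as an approximation):

const express = require('express');
const Queue = require('bull');
const { createBullBoard } = require('@bull-board/api');
const { BullAdapter } = require('@bull-board/api/bullAdapter');
const { ExpressAdapter } = require('@bull-board/express');

const queue = new Queue('queue1', { redis: { port: 6379, host: '127.0.0.1' } });

// Mount the monitoring UI under /admin/queues.
const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');
createBullBoard({ queues: [new BullAdapter(queue)], serverAdapter });

const app = express();
app.use('/admin/queues', serverAdapter.getRouter());
app.listen(3000);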

Here is the GitHub URL with the full implementation using Bull.

Some of the lessons learnt:

There is nothing called “the best” in this world. Three decades back, the Ambassador was the best car; today, Mercedes rules the world. Tomorrow, it can be some other.

RabbitMQ is obviously one of the best message queues available, but Bull is exceptional and meets the end requirements. So, choose the right tool for the purpose instead of just using the popular ones.
