
Events need to travel from producers to consumers, and they do not teleport. You need infrastructure in between: message brokers. Understanding the difference between queues and topics -- and knowing which broker fits your integration pattern -- is essential to building event-driven systems that actually work.

This lesson focuses on how you use these tools from an application perspective: how to send, receive, route, and handle failures. (The infrastructure concerns -- deployment, capacity planning, replication -- belong to system design.)

AI pitfall
Ask AI "should I use Kafka or RabbitMQ?" and it will almost always recommend Kafka, because Kafka appears more frequently in blog posts and training data. But Kafka's complexity is overkill for most applications. If you process fewer than 10,000 events per second and do not need replay, RabbitMQ or SQS is simpler and cheaper to operate.

01

Queues: point-to-point delivery

A queue is a line. Messages go in one end and come out the other. Each message is delivered to exactly one consumer, even if multiple consumers are reading from the same queue.

// Producer: send a message to a queue
async function enqueueEmail(emailData: EmailPayload) {
  await queue.send('email-queue', {
    to: emailData.to,
    subject: emailData.subject,
    body: emailData.body
  });
}

// Consumer 1: reads from the queue
queue.consume('email-queue', async (message) => {
  await sendEmail(message.to, message.subject, message.body);
  message.ack(); // Acknowledge: remove from queue
});

// Consumer 2: also reads from the same queue
// But each message goes to EITHER Consumer 1 OR Consumer 2, not both
queue.consume('email-queue', async (message) => {
  await sendEmail(message.to, message.subject, message.body);
  message.ack();
});

Queues are perfect for work distribution: you have a pool of workers and you want to spread the load evenly. If one worker is busy, the next message goes to another worker.

Key behaviors:

  • Each message is delivered to exactly one consumer
  • Messages are removed from the queue after acknowledgement
  • If a consumer crashes before acknowledging, the message goes back to the queue
  • Consumers compete for messages (competing consumers pattern)
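These behaviors can be seen in a tiny in-memory sketch. The InMemoryQueue class below is illustrative only, not a real broker API; it delivers each message to exactly one consumer, round-robin:

```typescript
type Handler = (msg: string) => void;

// Hypothetical in-memory queue: each message goes to exactly one consumer.
class InMemoryQueue {
  private messages: string[] = [];
  private consumers: Handler[] = [];
  private next = 0; // round-robin index

  send(msg: string): void {
    this.messages.push(msg);
    this.dispatch();
  }

  consume(handler: Handler): void {
    this.consumers.push(handler);
    this.dispatch();
  }

  private dispatch(): void {
    while (this.messages.length > 0 && this.consumers.length > 0) {
      const msg = this.messages.shift()!;
      const consumer = this.consumers[this.next % this.consumers.length];
      this.next++;
      consumer(msg); // delivered to exactly one consumer, never both
    }
  }
}

const received: Record<string, string[]> = { a: [], b: [] };
const q = new InMemoryQueue();
q.consume((m) => received.a.push(m));
q.consume((m) => received.b.push(m));
['m1', 'm2', 'm3', 'm4'].forEach((m) => q.send(m));
// Four messages in, four messages out, split between the two consumers.
console.log(received.a.length + received.b.length); // 4
```

A real broker adds durability and acknowledgement on top of this, but the delivery semantics are the same: consumers compete, and each message is consumed once.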

02

Topics: publish/subscribe

A topic broadcasts messages to all subscribers. Every subscriber gets a copy of every message.

// Producer: publish to a topic
async function publishOrderEvent(order: Order) {
  await topic.publish('order-events', {
    type: 'OrderPlaced',
    data: { orderId: order.id, total: order.total, customerId: order.customerId }
  });
}

// Subscriber 1: Email service gets the event
topic.subscribe('order-events', 'email-service', async (message) => {
  await sendOrderConfirmation(message.data.customerId, message.data.orderId);
});

// Subscriber 2: Analytics service ALSO gets the same event
topic.subscribe('order-events', 'analytics-service', async (message) => {
  await trackPurchase(message.data.orderId, message.data.total);
});

// Subscriber 3: Inventory service ALSO gets it
topic.subscribe('order-events', 'inventory-service', async (message) => {
  await reserveStock(message.data.orderId);
});

Topics are ideal when multiple services care about the same event and you do not want the producer to know about any of them.

03

Queues vs topics

| Feature | Queue | Topic |
| --- | --- | --- |
| Delivery | One consumer per message | All subscribers per message |
| Use case | Work distribution, task processing | Event broadcasting, notifications |
| Consumer relationship | Competing (load balancing) | Independent (each gets a copy) |
| Message lifecycle | Removed after consumption | Retained per broker policy |
| Adding consumers | Shares the existing load | Each new consumer gets all messages |
| Example | "Send this email" | "Order was placed" |

Good to know
The most common mistake with topics is treating them like queues. If you subscribe two instances of the same service to a topic (for load balancing), both instances get every message, doubling your processing. Use consumer groups (Kafka) or competing consumers on a queue bound to the topic (RabbitMQ) to load-balance instead.
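Here is a sketch of what consumer-group semantics buy you. GroupedTopic is a made-up in-memory class that loosely models Kafka's behavior: subscriptions sharing a group name load-balance, while different groups each receive every message:

```typescript
type Handler = (msg: string) => void;

// Hypothetical topic with Kafka-style consumer groups (illustration only).
class GroupedTopic {
  private groups = new Map<string, { handlers: Handler[]; next: number }>();

  subscribe(group: string, handler: Handler): void {
    const g = this.groups.get(group) ?? { handlers: [], next: 0 };
    g.handlers.push(handler);
    this.groups.set(group, g);
  }

  publish(msg: string): void {
    // Every group gets a copy; within a group, only one handler fires.
    for (const g of this.groups.values()) {
      g.handlers[g.next % g.handlers.length](msg);
      g.next++;
    }
  }
}

const emailHits: string[] = [];
const analyticsHits: string[] = [];
const t = new GroupedTopic();
// Two instances of the email service share one group: load-balanced, no doubling.
t.subscribe('email', (m) => emailHits.push(m));
t.subscribe('email', (m) => emailHits.push(m));
// Analytics is its own group: it gets every message.
t.subscribe('analytics', (m) => analyticsHits.push(m));

['e1', 'e2'].forEach((m) => t.publish(m));
// emailHits contains each message once; analyticsHits also contains both.
```

If the two email instances had each subscribed as independent subscribers instead of as one group, every order would trigger two confirmation emails.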

04

The fan-out pattern

Fan-out combines topics and queues. You publish to a topic, and each subscriber has its own queue. This gives you broadcasting (every service gets the event) plus reliability (each service processes at its own pace).

Producer
   |
   v
[Topic: order-events]
   |         |          |
   v         v          v
[Queue:   [Queue:    [Queue:
 email]    analytics]  inventory]
   |         |          |
   v         v          v
Email     Analytics   Inventory
Service   Service     Service

// Fan-out setup (conceptual)
// 1. Create a topic
const topic = broker.createTopic('order-events');

// 2. Each service has its own queue subscribed to the topic
const emailQueue = broker.createQueue('email-processing');
const analyticsQueue = broker.createQueue('analytics-processing');
const inventoryQueue = broker.createQueue('inventory-processing');

topic.bindQueue(emailQueue);
topic.bindQueue(analyticsQueue);
topic.bindQueue(inventoryQueue);

// 3. Each service consumes from its own queue independently
// If analytics is slow, it doesn't block email or inventory
emailQueue.consume(async (msg) => {
  await sendConfirmation(msg.data);
  msg.ack();
});

The key benefit: if the analytics service goes down for an hour, its queue accumulates messages. When it comes back up, it processes the backlog. The email and inventory services are completely unaffected.

Edge case
The same backlog that makes fan-out resilient is also a risk: a service returning from an hour-long outage faces a burst of accumulated messages. Make sure your consumers can handle burst processing, and set an appropriate maximum queue depth so the backlog cannot grow without bound.
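A depth limit can be sketched like this. BoundedQueue is illustrative; real brokers expose the same idea as a queue length limit, TTL, or retention policy:

```typescript
// Hypothetical depth-limited queue: once maxDepth is reached, new messages
// are rejected so the backlog cannot grow without bound.
class BoundedQueue<T> {
  private items: T[] = [];
  constructor(private maxDepth: number) {}

  send(msg: T): boolean {
    if (this.items.length >= this.maxDepth) {
      return false; // rejected: caller can retry, drop, or dead-letter
    }
    this.items.push(msg);
    return true;
  }

  receive(): T | undefined {
    return this.items.shift();
  }

  get depth(): number {
    return this.items.length;
  }
}

const q = new BoundedQueue<string>(3);
const results = ['a', 'b', 'c', 'd'].map((m) => q.send(m));
// The fourth send is rejected because the queue is full.
console.log(results); // [true, true, true, false]
```

What to do with rejected messages (retry, drop, or route to a dead-letter queue) is a policy decision that depends on how important each event stream is.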

05

Message routing

Sometimes you don't want every subscriber to get every message. Routing lets you filter messages based on their content or metadata.

// RabbitMQ-style routing with routing keys
// Publish with a routing key
await exchange.publish('orders.placed', orderEvent);
await exchange.publish('orders.cancelled', cancelEvent);
await exchange.publish('payments.completed', paymentEvent);

// Email service: only wants order events
emailQueue.bindTo(exchange, 'orders.*');

// Analytics service: wants everything
analyticsQueue.bindTo(exchange, '#');

// Refund service: only wants cancellations
refundQueue.bindTo(exchange, 'orders.cancelled');

This keeps services from receiving (and ignoring) events they do not care about.
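The pattern semantics above ('*' matches exactly one dot-separated word, '#' matches zero or more) can be sketched with a small recursive matcher. matchesRoutingKey is a hypothetical helper for illustration, not part of any client library:

```typescript
// Recursive matcher over dot-separated words, mimicking AMQP topic patterns:
// '*' consumes exactly one word, '#' consumes zero or more words.
function matches(pattern: string[], key: string[]): boolean {
  if (pattern.length === 0) return key.length === 0;
  const [p, ...rest] = pattern;
  if (p === '#') {
    // '#' matches zero words (skip it) or one word (consume and stay on '#').
    return matches(rest, key) || (key.length > 0 && matches(pattern, key.slice(1)));
  }
  if (key.length === 0) return false;
  return (p === '*' || p === key[0]) && matches(rest, key.slice(1));
}

function matchesRoutingKey(pattern: string, key: string): boolean {
  return matches(pattern.split('.'), key.split('.'));
}

console.log(matchesRoutingKey('orders.*', 'orders.placed'));      // true
console.log(matchesRoutingKey('#', 'payments.completed'));        // true
console.log(matchesRoutingKey('orders.cancelled', 'orders.placed')); // false
```

Note that 'orders.*' does not match 'orders.placed.eu': '*' consumes exactly one word, so multi-level keys need '#' or a more specific pattern.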

06

Comparing messaging systems

From an integration perspective, the broker you choose affects how you write your application code. Here is how the three most common options compare.

| Aspect | RabbitMQ | Kafka | SQS + SNS |
| --- | --- | --- | --- |
| Model | Queues + exchanges with flexible routing | Append-only log with consumer groups | SQS (queue) + SNS (topic), composable |
| Ordering | Per-queue FIFO | Per-partition ordering | FIFO queues available (SQS FIFO) |
| Message retention | Removed after ack | Retained for configurable period (days/weeks) | Removed after ack (SQS), no retention (SNS) |
| Replay | No (message gone after consumption) | Yes (consumers can re-read from any offset) | No |
| Routing | Powerful (topic, direct, fanout, headers exchanges) | Topic-based only | SNS filters (attribute-based) |
| Consumer model | Push (broker delivers to consumers) | Pull (consumers poll partitions) | Pull (SQS) + push (SNS to Lambda/HTTP) |
| Best integration fit | Complex routing, varied message patterns | Event sourcing, audit trails, high throughput | Serverless, AWS-native, low ops burden |
| Complexity for dev | Moderate (exchanges, bindings, ack modes) | Higher (partitions, offsets, consumer groups) | Low (managed, simple API) |

RabbitMQ in practice

RabbitMQ shines when you need flexible message routing. Its exchange system lets you implement point-to-point, pub/sub (a messaging pattern where senders publish events to a channel and any number of listeners receive them in real time), and content-based routing all within the same broker.

// RabbitMQ: publish to an exchange with routing
import amqp from 'amqplib';

const connection = await amqp.connect('amqp://localhost');
const channel = await connection.createChannel();

// Create a topic exchange
await channel.assertExchange('orders', 'topic', { durable: true });

// Publish with routing key
channel.publish('orders', 'order.placed', Buffer.from(JSON.stringify({
  orderId: 'ord-123',
  total: 59.98
})));

// Consumer: bind queue to exchange with pattern
await channel.assertQueue('email-notifications', { durable: true });
await channel.bindQueue('email-notifications', 'orders', 'order.*');

channel.consume('email-notifications', (msg) => {
  if (msg === null) return; // Broker cancelled the consumer
  const event = JSON.parse(msg.content.toString());
  console.log('Processing:', event);
  channel.ack(msg);
});

Kafka in practice

Kafka treats every event as a permanent record in an ordered log. Consumers track their position (offset) and can re-read events from any point.

// Kafka: produce and consume from a topic
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: ['localhost:9092'] });
const producer = kafka.producer();
const consumer = kafka.consumer({ groupId: 'email-service' });

// Produce
await producer.connect();
await producer.send({
  topic: 'order-events',
  messages: [{
    key: 'ord-123',  // Partition key: same order always goes to same partition
    value: JSON.stringify({ type: 'OrderPlaced', orderId: 'ord-123', total: 59.98 })
  }]
});

// Consume
await consumer.connect();
await consumer.subscribe({ topic: 'order-events', fromBeginning: false });
await consumer.run({
  eachMessage: async ({ topic, partition, message }) => {
    const event = JSON.parse(message.value.toString());
    console.log(`Partition ${partition}, Offset ${message.offset}:`, event);
  }
});

The partition key is critical: messages with the same key always land in the same partition, which guarantees ordering per key. This means all events for order ord-123 arrive in order.
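Conceptually, the partitioner is just a stable hash of the key modulo the partition count. This sketch uses FNV-1a for illustration; Kafka's default partitioner actually uses murmur2, but the property is the same:

```typescript
// Stable key-to-partition mapping: hashing the key and taking it modulo the
// partition count means the same key always lands on the same partition.
function partitionFor(key: string, numPartitions: number): number {
  // FNV-1a hash, chosen here only for brevity (Kafka uses murmur2).
  let hash = 2166136261;
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return Math.abs(hash) % numPartitions;
}

const p1 = partitionFor('ord-123', 6);
const p2 = partitionFor('ord-123', 6);
// Same key, same partition: all events for ord-123 stay in one ordered stream.
console.log(p1 === p2); // true
```

The flip side: changing the number of partitions remaps keys to different partitions, which is why partition counts are usually chosen generously up front.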

SQS + SNS in practice

AWS's managed services require zero broker management. SNS handles fan-out (topic), SQS handles queuing, and you compose them together.

// SNS + SQS: fan-out with managed services
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs';

const sns = new SNSClient({});
const sqs = new SQSClient({});

// Publish to SNS topic (fans out to all subscribed queues)
await sns.send(new PublishCommand({
  TopicArn: 'arn:aws:sns:us-east-1:123456789:order-events',
  Message: JSON.stringify({ type: 'OrderPlaced', orderId: 'ord-123' }),
  MessageAttributes: {
    eventType: { DataType: 'String', StringValue: 'OrderPlaced' }
  }
}));

// Consume from SQS queue (subscribed to the SNS topic)
const response = await sqs.send(new ReceiveMessageCommand({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789/email-queue',
  MaxNumberOfMessages: 10,
  WaitTimeSeconds: 20  // Long polling
}));

for (const msg of response.Messages || []) {
  // SNS wraps the original payload in an envelope, so we parse twice
  const event = JSON.parse(JSON.parse(msg.Body).Message);
  await processEvent(event);
  await sqs.send(new DeleteMessageCommand({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789/email-queue',
    ReceiptHandle: msg.ReceiptHandle
  }));
}

07

Choosing the right broker

Ask yourself these questions:

  1. Do you need replay? If yes, Kafka. If no, RabbitMQ or SQS.
  2. Do you need complex routing? If yes, RabbitMQ. If no, any option works.
  3. Are you on AWS and want zero ops? SQS + SNS.
  4. Do you need strict per-key ordering? Kafka (partitions) or SQS FIFO.
  5. Is throughput (the number of events the system must handle per unit of time) the primary concern? Kafka handles millions of events per second.

There is no universally best broker. The right choice depends on your integration requirements, team expertise, and operational capacity.