Understanding the Event Loop in Node.js (Without the CS Degree)

Posted November 17, 2025 by Karol Polakowski

Node.js runs JavaScript on a single thread, but it can handle thousands of concurrent connections without creating thousands of threads. That magic comes from the event loop — a small runtime mechanism that schedules callbacks, promises, timers, and I/O so your app feels concurrent without you needing a Computer Science degree to reason about it.


What the event loop actually is (in plain terms)

Think of your Node process as a one-lane highway (the main thread) with a traffic controller (the event loop). Work items — function calls, timers, finished I/O operations, promise callbacks — are queued and the controller picks what to run next. Only one thing runs at a time on the main thread. While the main thread is busy doing something (for example, a CPU-heavy loop), nothing else can run.

The key components to understand are:

  • Call stack: where synchronous code executes immediately.
  • Callback queues: where callbacks go when they’re ready. Broadly there are macrotask queues (timers, setImmediate, I/O callbacks) and a microtask queue (promise callbacks and process.nextTick).
  • The event loop: pulls tasks from queues and executes them in a defined order.

Why microtasks vs macrotasks matter

Microtasks (promise callbacks and process.nextTick) run to completion before the event loop moves on to the next macrotask. That means if you queue something with Promise.resolve().then or process.nextTick, it will run before any setTimeout(…, 0) or setImmediate callback, even one that was scheduled earlier.

Example (order matters):

console.log('start');

setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));

Promise.resolve().then(() => console.log('promise'));
process.nextTick(() => console.log('nextTick'));

console.log('end');

A typical output is:

start
end
nextTick
promise
timeout
immediate

Explanation:

  • 'start' and 'end' run synchronously on the call stack.
  • process.nextTick and promise callbacks are microtasks, so they run before the event loop moves on to any macrotask. The nextTick queue is drained first, which is why 'nextTick' prints before 'promise'.
  • setTimeout and setImmediate are macrotasks. When they are scheduled from the main module (rather than inside an I/O callback), their relative order can vary between runs, which is why the output above is only "typical".

A simplified view of the loop’s phases

You don’t need to memorize every internal phase, but it’s helpful to know the common ones you care about:

  • timers phase: expired setTimeout and setInterval callbacks run here.
  • I/O callbacks & poll: callbacks from completed non-blocking I/O operations run here.
  • check: setImmediate callbacks run here (see the sketch after this list).
  • close callbacks: socket .on('close') handlers, etc.
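
To see two of these phases interact, schedule a 0 ms timer and a setImmediate from inside an I/O callback. There the order is deterministic: the check phase (setImmediate) comes right after the poll phase, while the timer has to wait for the timers phase of the next loop iteration. A minimal sketch:

const fs = require('fs');

// Reading this file puts us inside an I/O (poll-phase) callback.
fs.readFile(__filename, () => {
  setTimeout(() => console.log('timeout (timers phase, next iteration)'), 0);
  setImmediate(() => console.log('immediate (check phase, this iteration)'));
});
// Prints 'immediate …' before 'timeout …' every time.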

Between each macrotask, Node drains the microtask queue (promises and process.nextTick). That means microtasks can starve macrotasks if you keep queuing microtasks.
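
To see that starvation in action, keep re-queuing process.nextTick callbacks: the 0 ms timer cannot fire until the nextTick queue finally empties. A contrived sketch (not something to ship):

let spins = 0;

setTimeout(() => console.log(`timer ran only after ${spins} nextTick callbacks`), 0);

(function spin() {
  // Each callback queues another one; the whole chain drains before the timer runs.
  if (spins++ < 1e6) process.nextTick(spin);
})();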

libuv and the thread pool — where blocking happens off the main thread

Node uses libuv under the hood. Not everything is single-threaded:

  • Most network I/O is non-blocking and handled by the OS; callbacks are scheduled on the event loop when results are ready.
  • Some operations run on a libuv thread pool (default size 4): dns.lookup, most fs module calls (fs.readFile and friends), zlib compression, and CPU-heavy crypto functions such as crypto.pbkdf2 and crypto.randomBytes.

You can change the pool size with the UV_THREADPOOL_SIZE environment variable (set it before the process starts; max around 128, use carefully).
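
You can observe the default pool size with a small experiment: queue six identical crypto.pbkdf2 calls and watch the completion times. The exact numbers will vary by machine, but roughly the first four finish together and the last two noticeably later, because only four libuv threads are available by default.

const crypto = require('crypto');

const start = Date.now();
for (let i = 1; i <= 6; i++) {
  // pbkdf2 is CPU-heavy and runs on the libuv thread pool, not the main thread.
  crypto.pbkdf2('password', 'salt', 500000, 64, 'sha512', () => {
    console.log(`pbkdf2 #${i} done after ${Date.now() - start} ms`);
  });
}

Running the same script with UV_THREADPOOL_SIZE=8 lets all six run in parallel.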


Common pitfalls and how to avoid them

1) Blocking the event loop

Synchronous, CPU-intensive work blocks the event loop and makes your server unresponsive.

Bad:

// This blocks the event loop for a long time
function expensiveSync(n) {
  let s = 0;
  for (let i = 0; i < n; i++) s += i;
  return s;
}

setTimeout(() => console.log('timeout fired'), 0);
console.log('start');
console.log(expensiveSync(5e8));
console.log('end');
// 'timeout fired' appears only after everything above completes: the 0 ms timer
// had to wait for the blocking loop to finish.

Better options:

  • Move CPU work to worker threads (worker_threads) or separate processes.
  • Break the work into chunks and yield back to the event loop with setImmediate or setTimeout(…, 0). (Both approaches are shown under Practical examples below.)

2) Misunderstanding ordering of async constructs

Promise.then callbacks and the code after an await are microtasks. process.nextTick callbacks run even earlier: Node drains the nextTick queue before the promise queue. A setTimeout(…, 0) callback can never run before a microtask that is already queued.
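
For example, the code after an await is queued as a microtask, so it still beats a 0 ms timer that was scheduled first (a small sketch):

async function demo() {
  console.log('before await');              // runs synchronously when demo() is called
  await null;                               // everything below is queued as a microtask
  console.log('after await (microtask)');
}

setTimeout(() => console.log('timeout (macrotask)'), 0);
demo();
console.log('sync end');

// Typical output: before await, sync end, after await (microtask), timeout (macrotask)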

3) Overloading the libuv thread pool

If you fire off many fs or crypto operations at once (or anything else backed by the libuv thread pool), you can exhaust the pool and delay unrelated tasks that need it. Increase UV_THREADPOOL_SIZE or batch/queue the work.
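
One way to batch the work is a tiny concurrency limiter: keep at most a handful of thread-pool-heavy tasks in flight and start the next one as each finishes. A minimal sketch (error handling omitted; a small library such as p-limit can do this for you):

// Runs at most `limit` promise-returning thunks at a time.
function limitConcurrency(tasks, limit) {
  return new Promise(resolve => {
    const results = new Array(tasks.length);
    let running = 0, index = 0, done = 0;
    function next() {
      while (running < limit && index < tasks.length) {
        const i = index++;
        running++;
        tasks[i]().then(value => {
          results[i] = value;
          running--;
          if (++done === tasks.length) resolve(results);
          else next();
        });
      }
    }
    if (tasks.length === 0) resolve(results);
    else next();
  });
}

// Usage: wrap each call in a thunk so it only starts when the limiter allows it.
// (`files` is a hypothetical array of file paths.)
// limitConcurrency(files.map(f => () => fs.promises.readFile(f)), 4)
//   .then(buffers => console.log('read', buffers.length, 'files'));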

Practical examples

Using worker_threads for CPU-bound tasks

// main.js: offload a CPU-bound job to a worker thread and await its result
const { Worker } = require('worker_threads');

function runWorker(data) {
  return new Promise((resolve, reject) => {
    const w = new Worker('./worker.js'); // worker.js is sketched below
    w.postMessage(data);                 // send the job to the worker
    w.on('message', resolve);            // the worker's reply resolves the promise
    w.on('error', reject);
    w.on('exit', code => {
      if (code !== 0) reject(new Error('Worker stopped with exit code ' + code));
    });
  });
}

(async () => {
  console.log('main start');
  const result = await runWorker({ iterations: 1e9 });
  console.log('worker finished', result);
})();

This moves heavy computations off the main thread so your server can keep responding to incoming requests.
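
The worker file itself isn't shown above; a minimal worker.js to pair with this runner might look like the following (the summing loop is just a stand-in for your real CPU-bound work):

// worker.js: runs on its own thread, so this loop never blocks the main event loop
const { parentPort } = require('worker_threads');

// Using once() lets the worker exit on its own after it has replied.
parentPort.once('message', ({ iterations }) => {
  let sum = 0;
  for (let i = 0; i < iterations; i++) sum += i;
  parentPort.postMessage(sum);   // resolves the promise in main.js
});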

Breaking work into chunks

function doChunkedWork(items) {
  const chunk = 1000;
  function processBatch(start) {
    const end = Math.min(start + chunk, items.length);
    for (let i = start; i < end; i++) {
      // process items[i]
    }
    if (end < items.length) setImmediate(() => processBatch(end));
  }
  processBatch(0);
}

This pattern keeps the main thread responsive by yielding between batches.
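
As a quick usage sketch (the array here is just made-up data), you can hand it a large list and the process stays free to handle other callbacks between batches:

doChunkedWork(Array.from({ length: 1e6 }, (_, i) => i));
console.log('chunked work scheduled; timers and I/O callbacks still get a turn');

If callers need to know when the work is finished, you can have processBatch resolve a promise after the final chunk.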

Debugging and diagnosing event loop issues

  • Use node --trace-deprecation to get stack traces for deprecation warnings, or --trace-uncaught to see where an uncaught exception was thrown.
  • For performance and CPU profiling, use node --inspect with Chrome DevTools, or tools like Clinic.js (clinic doctor, clinic flame).
  • To detect event-loop blocking, measure event loop lag with a small timer (process.hrtime) or libraries like blocked-at or event-loop-lag (a minimal probe is sketched below).
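
The "small timer" approach from the last bullet can be as simple as an interval that measures how late its own callback runs. A minimal lag probe (the 100 ms interval and 50 ms threshold are arbitrary choices for illustration):

const INTERVAL_MS = 100;
let last = process.hrtime.bigint();

setInterval(() => {
  const now = process.hrtime.bigint();
  // How much later than scheduled did this callback actually run?
  const lagMs = Number(now - last) / 1e6 - INTERVAL_MS;
  if (lagMs > 50) console.warn(`event loop lag: ${lagMs.toFixed(1)} ms`);
  last = now;
}, INTERVAL_MS);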

Rules of thumb for day-to-day development

  • Keep the main thread non-blocking: offload CPU-heavy work.
  • Use async/await and promises for readability — but remember they are microtasks under the hood.
  • Use process.nextTick sparingly; it can starve I/O if abused.
  • Prefer setImmediate for work you want to run after I/O phases.
  • Be mindful of the libuv thread pool when doing heavy fs/crypto operations.

Summary

You don’t need a CS degree to be effective with Node’s event loop. Think of it as a single worker that picks tasks from queues: synchronous code blocks it, promises run as microtasks before macrotasks, and libuv handles many I/O details behind the scenes. With a few practical patterns — chunking work, using worker threads, and understanding microtask vs macrotask ordering — you can write fast, responsive Node applications.