Event Loop Distilled

Ryan Zheng
3 min read · Jun 12, 2020

Before we can understand what the event loop is, we have to understand what problem the event loop is trying to solve.

Let’s first describe the problem with an example.

Example:

There are two modes for reading a file: synchronous and asynchronous. When we open a file and read its content synchronously, the current thread is blocked until the read completes.

Synchronous read: The OS kernel puts the thread into a wait state. When the kernel finishes the read and copies the data into the file descriptor’s buffer, it wakes the thread up so it can continue running.

Asynchronous read: When the OS kernel sees that the read on this file descriptor is asynchronous, it returns immediately and the current thread continues executing. The kernel may queue the read request internally, but that detail does not matter for us.
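
As a quick illustration, this is how the two modes look in Node.js (the file name example.txt is just a placeholder):

const fs = require('fs');

// Synchronous read: this call blocks the thread until the whole file is read.
const data = fs.readFileSync('example.txt', 'utf8');
console.log('sync read finished, length:', data.length);

// Asynchronous read: the call returns immediately; the callback runs later,
// once the event loop learns that the data is ready.
fs.readFile('example.txt', 'utf8', (err, contents) => {
  if (err) throw err;
  console.log('async read finished, length:', contents.length);
});

console.log('readFile has returned, but its callback has not run yet');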

In a server application, the server has to serve many clients, which means many socket connections are open at the same time. The older way of handling this is to use one thread per socket connection, but the number of threads a process may create is limited, and each thread costs memory and other resources.

These costs include the thread’s data structures, its stack memory, context switching, and so on. The modern way is to make each socket asynchronous, put all the sockets in one place, and repeatedly check them, essentially asking each one “Do you have data available?”. Each OS offers its own way of checking a set of sockets: on Linux there are select, epoll, and others; on Windows the mechanism is different.
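
This is why a single Node.js thread can serve many connections at once: every socket is watched by the event loop instead of by its own thread. A minimal sketch (the port 8080 is arbitrary):

const net = require('net');

// One process, one main JavaScript thread, many concurrent connections:
// each socket is registered with the event loop, and the callbacks below
// run whenever the OS reports that a socket has data.
const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    socket.write(chunk); // a simple echo
  });
});

server.listen(8080, () => {
  console.log('echo server listening on port 8080');
});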

Pseudo Code

global List<Pair<FileDescriptor, Callback>> queue;

class EventLoop {
  run() {
    while (queue.isNotEmpty()) {
      event = queue.next();
      if (event.fd.ready()) {
        event.callback(event.fd.data);
      }
    }
  }
}

The whole idea is like the pseudocode above, but server sockets normally live for a long time, so we need some way to keep the while loop from burning CPU all the time. How this is done depends on the system calls available: many of them offer the ability to wait on many file descriptors at once and only return when one of those file descriptors has data.

wait(events: Set<FileDescriptor>) {
  /* When one file descriptor has data, wait returns.
     While waiting, it does not keep spinning the CPU. */
}

In Node.js, libuv is used to create the event loop. The event loop is basically a place where the <FileDescriptor, Callback> pairs are registered.

The event loop has a run function which waits for one of those file descriptors to become ready. When a descriptor becomes ready, the corresponding callback is scheduled.
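
Seen from JavaScript, registering a callback on a socket is conceptually adding such a <FileDescriptor, Callback> pair to libuv’s loop. Here is a small client for the echo server sketched above (localhost:8080 is just the assumed address):

const net = require('net');

const socket = net.connect(8080, 'localhost');

// Conceptually, this registers a <FileDescriptor, Callback> pair with the loop:
// the callback is scheduled when the loop sees the socket become readable.
socket.on('data', (chunk) => {
  console.log('echoed back:', chunk.toString());
  socket.end();
});

socket.on('connect', () => {
  socket.write('hello');
});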

V8 Engine

V8 is the engine that executes the JavaScript functions. I didn’t read the V8 source code, but the runtime around it (the browser, or libuv in Node.js) drives the same kind of event loop we described above to handle events. That is why a long-running callback blocks the event loop: in our model, the loop checks the file descriptors, and when it sees data on one of them it calls the corresponding callback. If that callback runs for a long time, the event loop thread is busy executing it and cannot go on to check the other events.

There are also two kinds of task queues, macrotasks and microtasks. In each iteration of the event loop, only one macrotask is executed, but the microtask queue is drained completely before the next macrotask runs.
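
Both points are easy to observe in Node.js. First, blocking the loop with synchronous work delays a zero-delay timer by about two seconds:

setTimeout(() => console.log('timer fired'), 0);

const start = Date.now();
while (Date.now() - start < 2000) {
  // Long-running synchronous work: the event loop cannot run any other
  // callback until this finishes, so the timer above fires ~2s late.
}
console.log('blocking work done');

And, in a separate run, the macrotask/microtask ordering:

console.log('script start');                      // synchronous

setTimeout(() => console.log('setTimeout'), 0);   // macrotask

Promise.resolve()
  .then(() => console.log('promise 1'))           // microtask
  .then(() => console.log('promise 2'));          // microtasks are drained completely

console.log('script end');                        // synchronous

// Output: script start, script end, promise 1, promise 2, setTimeout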
