What is an event loop?
You’ve probably heard this term before, but maybe you didn’t fully understand the meaning and purpose behind it. As a new web developer, the almighty event loop will be your best friend at work for many years to come…unless you have a generous co-worker who brings in gourmet coffee every morning.
Event loops are really the bread and butter of your application. They tie everything together from the moment you receive a page request until the web server returns a response to the browser.
Event loops are part of what we call asynchronous programming. Imagine User #1 types in the URL “www.google.com” into their browser’s address bar. The browser reaches out to the server that handles the request (yes I’ve skipped a few steps that aren’t relevant to this example) for the index page of “www.google.com.” The web server needs to query several database tables in order to generate the content for that page. This could take several additional seconds depending on the database’s workload, performance capabilities, and other factors.
Let’s also imagine that during that time User #2’s web browser also sends a request for “www.google.com.” What do you think would happen next?
The answer is… it depends on if the web server is handling the requests synchronously or asynchronously. Do yourself a huge favour and memorize these terms until you can spell them backwards.
“Sync” vs. “Async”
If the web server is configured to handle the requests synchronously, User #2’s web browser will have to wait until the server has completed and responded to User #1’s request.
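Here’s a toy sketch of that waiting in Python (not a real web server – the `handle_request` function and the 0.1-second `time.sleep` simply stand in for a slow database query). Notice that the total time is the *sum* of both requests, because User #2 can’t start until User #1 is done:

```python
import time

def handle_request(user: str) -> str:
    # Simulate a slow database query for the page content.
    time.sleep(0.1)
    return f"response for {user}"

# Synchronous handling: requests are served one after another.
start = time.monotonic()
results = [handle_request("User #1"), handle_request("User #2")]
elapsed = time.monotonic() - start

# elapsed is roughly 0.2s here: the two 0.1s "queries" add up.
```

Scale that to thousands of users and you can see why waiting in a single line doesn’t work.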
We would all be viewing web pages at a snail’s pace (and probably much, much slower) if we were restricted to always waiting on hold. Thankfully the web server compensates for this behaviour by starting up another system process (think of a process as a separate memory space or execution space) to handle other requests concurrently. User #2’s request would get handled by a different process, and if a “User #3” came along before User #2’s request was completed, you’d see yet another process launched to handle this new request…and the chain would continue and continue and continue.
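A quick sketch of that concurrency, with the caveat that I’m using a Python thread pool as a lightweight stand-in for the separate system processes described above (the idea – one worker per request – is the same). The three simulated requests now overlap, so the total time is close to a single request rather than three:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user: str) -> str:
    # Simulate a slow database query for the page content.
    time.sleep(0.1)
    return f"response for {user}"

users = ["User #1", "User #2", "User #3"]

# One worker per request, so no user waits behind another.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(handle_request, users))
elapsed = time.monotonic() - start

# elapsed is roughly 0.1s: the three requests ran concurrently.
```

Each worker, of course, still costs memory and CPU – which is exactly the limit the next paragraph runs into.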
It wouldn’t continue indefinitely though. A web server is like any other computer – it has a finite amount of RAM and processing power. And that’s where things get interesting:
If the server cannot handle another request because it has run out of memory and/or processing capacity, it will start responding to requests with an error code…or it might just “hang” on the request – letting the web browser spin its status wheel for an indefinite amount of time.
You might ask “So what’s the problem then?” Good question. If the web server is smart enough to not waste the user’s time keeping their web browser’s request on hold, then shouldn’t it be able to handle thousands or millions of requests simultaneously? Sure…if you’re ok with always having to invest in bigger and better hardware.
Bigger, better, faster, and in greater quantities. I’m sure you’ve seen pics of data centers and server farms.
Here’s where Mr. Asynchronous comes to save the day! No really. You should see how much he’s saving both startups and Fortune 500 companies in IT expenditures. So what does he offer? It’s quite simple…and affordable.
Instead of “blocking” the web server’s process for a single request, the server places every request in a queue – handling them as they come in, and returning to existing requests (such as those waiting on data from a database or another source) when it’s ready to send a response back to the web browser. This type of behaviour is called “non-blocking,” and it is implemented inside of an event loop.
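Here’s a minimal sketch of that non-blocking style using Python’s `asyncio` event loop (again, the `await asyncio.sleep(0.1)` is just a stand-in for waiting on a database). A single process handles all three requests: each time one request is waiting on its “query,” the loop switches to another:

```python
import asyncio
import time

async def handle_request(user: str) -> str:
    # "await" hands control back to the event loop while the
    # simulated database query is in flight, so other requests
    # can make progress in the meantime.
    await asyncio.sleep(0.1)
    return f"response for {user}"

async def main():
    # The event loop interleaves all three requests in ONE process.
    return await asyncio.gather(
        handle_request("User #1"),
        handle_request("User #2"),
        handle_request("User #3"),
    )

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start

# elapsed is roughly 0.1s, not 0.3s - no extra processes needed.
```

Same concurrency as the process-per-request approach, but without paying for a separate memory space per user.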
Mr. Asynchronous has allowed us to maximize the efficiency of existing hardware by using fewer resources to handle requests. Yay!
Stay tuned for Part 2. Please feel free to leave a comment or suggestion below.