
Essence of Node.js

Last Updated : 31 Oct, 2019

Node.js (or simply Node) has a small core group of modules, commonly referred to as the Node Core. These modules implement the public Node API, and it is this API that we use to write our applications.

Some examples of the modules present in the Node Core are:

  • To work with file systems, we have the fs module.
  • For networking, we have the http module.
  • For OS-specific information, we have the os module.
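
For instance, here is a minimal sketch using the os module from the list above (the file name core-modules.js is just our suggestion; the fs module appears in the examples later in this article):

    // Pull in the os core module - no installation needed
    const os = require("os");
      
    // Print some OS-specific information
    console.log("Platform:", os.platform());
    console.log("Free memory (bytes):", os.freemem());
    console.log("Home directory:", os.homedir());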

Like these, there are dozens of modules in the Node Core, and most of them exist to support Node's main use case. Node handles its I/O operations mainly with callbacks, events, and streams, so you need to understand these three concepts.

Before we start talking about these concepts, please make sure you have Node.js installed. If you have trouble installing it, you can refer to our installation guide.

Callbacks: Callbacks are one of the most important fundamentals you need to understand to master Node. Let's first see why we need callbacks and how they work in Node.

Traditional web servers work synchronously: when a request is sent to the server, the server processes the request and serves the response. During this processing period, other I/O operations have to wait for the current process to finish; only then can another request be processed. We call this blocking I/O, as new requests are blocked until the current process finishes.

Node has a non-blocking I/O model because Node is asynchronous by design. A server made with Node receives a request, processes it, and returns the response just like a traditional server, but it can also perform other tasks while the request is being processed.

Example: Create a new folder, then create a learn-callback.js file and a name.txt file inside it. Our goal is to print a customized hello and a loop pattern to the terminal. Put your name in name.txt and save the file. We have “GeeksforGeeks” in our file.

  • Traditional server’s synchronous version:




    // Tell node we need to work with filesystem
    const fs = require("fs");
      
    // Read the file contents "synchronously" in
    // string (utf-8) encoding
    const fileContents = fs.readFileSync("name.txt", "utf-8");
      
    // Print to console
    console.log("Hello, ", fileContents);
      
    // Print pattern
    for (let i = 0; i < 5; i++) console.log(i);

    
    

    Output:

    Hello, GeeksforGeeks
    0
    1
    2
    3
    4
    
  • The Node asynchronous version: Open up your terminal in the directory where your files are saved. Run the code using node learn-callback.js and observe the output. We will get to the explanation in a moment, but first, look at the Node version.




    // Tell node we need to work with filesystem
    const fs = require("fs");
      
    // Read the file contents "asynchronously" in
    // string (utf-8) encoding
    fs.readFile("name.txt", "utf-8", (error, fileContents) => {
        if (error) 
            return error;
        else 
            console.log("Hello, ", fileContents);
    });
      
    // Print the pattern
    for (let i = 0; i < 5; i++) 
        console.log(i);

    
    

    Output:

    0
    1
    2
    3
    4
    Hello, GeeksforGeeks
    

Explanation: Run the code using node learn-callback.js. Do you notice a difference in the outputs? It's due to the non-blocking model of Node. In the synchronous version, we first see the hello, then the pattern: we fire a request to read the name.txt file, the file is read, hello is printed, and then the pattern is printed. In the synchronous model, execution is sequential, i.e. in top-to-bottom order.

In the Node’s asynchronous version, when we fire a request to read the file, the file starts processing but in this case, our program can do other tasks simultaneously while the node is reading the file. This saves computing resources and makes Node I/O operations extremely fast.

In the asynchronous version above, fs.readFile tells Node to read the name.txt file in utf-8 encoding. The third argument to fs.readFile is a callback: a function that executes when a particular process has finished. While the file is being read, Node is free, so it executes the code right after the fs.readFile call, which happens to be a loop in our case. The loop therefore runs during the reading process, and we get the pattern in the terminal. When the reading finishes, the callback executes and the hello is printed after the pattern.

So callbacks are functions that execute later in time, after a process has finished, and in that period Node is free to do other tasks. Keep in mind that callbacks are not a special feature of Node; they are built into JavaScript. Node just uses them smartly to achieve its non-blocking I/O nature.
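
To see that a callback is just ordinary JavaScript, here is a minimal sketch with no Node-specific API at all (greet and its arguments are made up for illustration):

    // greet() accepts a callback and invokes it when its work is done
    function greet(name, callback) {
        const message = "Hello, " + name;
        // Simulate a slow task; hand the result to the callback later
        setTimeout(() => callback(message), 1000);
    }
      
    greet("GeeksforGeeks", (message) => console.log(message));
      
    // This line runs first; the callback fires about a second later
    console.log("Waiting for the greeting...");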

Events: You can think of an event as ‘when X happens, do Y’. In this analogy, ‘X’ is an event that is emitted by Node and ‘Y’ is a listener that waits for the ‘X’ signal to do its job. Let's write a small program to grasp this concept.

  • Example: This example illustrates events. Run this code and see if you get the correct output.




    // Require "events"; give us access to EventEmitter class
    // EventEmitter class has all the event related methods in it
    const EventEmitter = require("events");
      
    // Create an instance of the EventEmitter class
    const ourEmitter = new EventEmitter();
      
    // Create an event listener - listens for the "GfG opened" event
    // Event listeners always keep its ear open; it never sleeps
    // Means it'll keep on listening for the event throughout the code
    // It'll execute the callback function when "GfG opened" event is emitted
    ourEmitter.on("GfG opened", (error) => {
        if (error) 
            return error;
        else 
            console.log("Let's learn computer science concepts.");
    });
      
    // Emit event or send a signal that "GfG opened" has happened
    ourEmitter.emit("GfG opened");

    
    

    Output:

    Let's learn computer science concepts.
    
  • Explanation: When you emit the “GfG opened” event, the event listener executes its callback function, which prints a message to the console. Now let's see what happens when we put ourEmitter.emit(“GfG opened”); before the event listener.

  • Program where ourEmitter.emit(“GfG opened”); is placed before the event listener:




    ...
    // Emit "GfG opened"
    ourEmitter.emit("GfG opened");
      
    // Create an event listener
    ourEmitter.on("GfG opened", () => {
        console.log("Let's learn computer science concepts.");
    });
    ...

    
    

    Output: Nothing is printed. The Node events documentation explains why:

    "When the EventEmitter object emits an event, all of the functions attached to 
    that specific event are called synchronously"
  • It means that when Node emits the “GfG opened” event, it checks whether anyone is listening for that event at that moment. Here Node doesn't know about the listener yet, because the listener is registered after the emit call; emit cannot check for listeners attached later, since it runs synchronously. So the order of code is important when you are dealing with events. The rule of thumb: first listen, then emit. First create a listener, then emit the event.

Events are useful for creating game servers that need to know when players connect, disconnect, move, shoot, die, and so on. Events are also heavily used in building chat rooms, where you want to broadcast messages to listeners, as the sketch below shows.
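
Here is a minimal sketch of that broadcast idea, assuming a made-up "message" event and chatRoom emitter; note that any arguments passed to emit are handed to every listener:

    const EventEmitter = require("events");
      
    // A hypothetical chat room; each listener stands in for a user
    const chatRoom = new EventEmitter();
      
    chatRoom.on("message", (from, text) => {
        console.log("User 1 sees -", from + ":", text);
    });
      
    chatRoom.on("message", (from, text) => {
        console.log("User 2 sees -", from + ":", text);
    });
      
    // Both listeners receive the same broadcast
    chatRoom.emit("message", "Geek", "Hello, room!");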

Streams: There are two approaches to reading and writing data: buffered and streamed. In the buffered approach, all of the data has to be read before the writing process can start. Streams are much more efficient: a stream reads a chunk of data, and while the next chunk is being read, another stream can already be writing the previous chunk. So Node.js handles data asynchronously, doing these tasks in parallel.

Streams also come with piping. Piping takes the output of one stream and sends it to another stream, where it becomes that stream's input. This gives amazing powers to Node. Streams are time-efficient because they don't waste time waiting for all the reading to happen at once; reading and writing go on at the same time, asynchronously, as they say.
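
A minimal piping sketch, reusing the name.txt file from the callback example (the copy's file name is our own choice):

    const fs = require("fs");
      
    // The read stream's output becomes the write stream's input
    fs.createReadStream("name.txt")
        .pipe(fs.createWriteStream("name-copy.txt"));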

Streams are also space-efficient (they save memory). Suppose we have to read a 100 MB file and write it somewhere, and we have a 50 MB buffer. The buffered approach needs to read the whole file before it can start writing, so our buffer overflows as soon as the reading passes the 50 MB mark.

But with streams we can read a 50 MB chunk of data, write that data, and then clear the buffer before proceeding to read the next 50 MB. So there is no overflow in the streams case.
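
A sketch of that chunked reading, assuming a hypothetical big-file.bin; the highWaterMark option sets the chunk size so the whole file never sits in memory at once:

    const fs = require("fs");
      
    // Read in 50 MB chunks instead of buffering the entire file
    const reader = fs.createReadStream("big-file.bin", {
        highWaterMark: 50 * 1024 * 1024 // chunk size in bytes
    });
      
    reader.on("data", (chunk) => {
        console.log("Read a chunk of", chunk.length, "bytes");
    });
      
    reader.on("end", () => console.log("Finished reading"));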


