
Create a Video and Audio Recorder with JavaScript MediaRecorder API


WebRTC is widely used to access the device camera and microphone and to stream video or audio media in the browser. In many cases, though, we need to record that stream for later use (for example, the user might want to download the recording). In that case, we can use the MediaRecorder API to record the media stream.

In this article, we will build a basic Video and Audio Recorder website using plain JavaScript and the MediaRecorder API.

The Project Description: The website we are building will have:

  • A select option to let the users choose what type of media (audio or video with audio) to record.
  • If the user chooses to record video, the browser will ask for permission to access the device camera and microphone. If the user allows it, then:
    • A video element will display the camera media stream
    • The “Start Recording” button will start the recording
    • The “Stop Recording” button will stop the recording.
    • When the recording is complete, a new video element containing the recorded media gets displayed.
    • A link to let the users download the recorded video.
  • If the user chooses to record only audio, the browser will ask for permission to access the microphone. If the user allows it, then:
    • The “Start Recording” button will start the recording
    • The “Stop Recording” button will stop the recording
    • When the recording is done, a new audio element containing the recorded audio gets displayed.
    • A link is given to let the users download the recorded audio.

So, let’s first set up our simple HTML page —

index.html




<!DOCTYPE html>
<html lang="en">
  
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" 
        content="IE=edge">
    <meta name="viewport" content=
        "width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="index.css">
    <title>Video & Audio Recorder</title>
</head>
  
<body>
    <h1> Video & Audio Recorder </h1>
    <label for="media">
        Select what you want to record:
    </label>
  
    <select id="media">
        <option value="choose-an-option"
            Choose an option
        </option>
        <option value="vid">Video</option>
        <option value="aud">Audio</option>
    </select>
  
    <div class="display-none" id="vid-recorder">
        <h3>Record Video </h3>
        <video autoplay id="web-cam-container" 
            style="background-color: black;">
            Your browser doesn't support 
            the video tag
        </video>
  
        <div class="recording" id="vid-record-status">
            Click the "Start video Recording" 
            button to start recording
        </div>
  
        <!-- This button will start the video recording -->
        <button type="button" id="start-vid-recording" 
            onclick="startRecording(this, 
            document.getElementById('stop-vid-recording'))">
            Start video recording
        </button>
  
        <!-- This button will stop the video recording -->
        <button type="button" id="stop-vid-recording" 
            disabled onclick="stopRecording(this, 
            document.getElementById('start-vid-recording'))">
            Stop video recording
        </button>
  
        <!--The video element will be created using 
            JavaScript and contains recorded video-->
        <!-- <video id="recorded-video"  controls>
            Your browser doesn't support the video tag
        </video> -->
  
        <!-- The below link will let the
             users download the recorded video -->
        <!-- <a href="" > Download it! </a> -->
    </div>
  
    <div class="display-none" id="aud-recorder">
        <h3> Record Audio</h3>
  
        <div class="recording" id="aud-record-status">
            Click the "Start Recording" 
            button to start recording
        </div>
  
        <button type="button" id="start-aud-recording" 
            onclick="startRecording(this, 
            document.getElementById('stop-aud-recording'))">
            Start recording 
        </button>
  
        <button type="button" id="stop-aud-recording" 
            disabled onclick="stopRecording(this, 
            document.getElementById('start-aud-recording'))">
            Stop recording
        </button>
  
        <!-- The audio element will contain the 
            recorded audio and will be created 
            using Javascript -->
        <!-- <audio id="recorded-audio" 
            controls></audio> -->
  
        <!-- The below link will let the users 
             download the recorded audio -->
        <!-- <a href="" > Download it! </a> -->
    </div>
  
    <script src="index.js"></script>
</body>
  
</html>


Output:

If you check index.html carefully, you will see that the video and audio tags are not given any source; we will add the sources later using JavaScript. For now, we have a select element that lets the user choose the type of media they want to record. The first video element inside the “vid-recorder” div will contain the webcam stream, and the commented-out video element will contain the recorded video. Notice that only the latter has the “controls” attribute, since the first video element only displays the live stream and doesn’t need any controls.

Inside the “aud-recorder” div, we have two buttons to start and stop the recording; the commented-out audio element will contain the recorded audio.

Now, let’s add some CSS to the HTML page —

index.css




body {
    text-align: center;
    color: green;
    font-size: 1.2em;
}
  
.display-none {
    display: none;
}
  
.recording {
    color: red;
    background-color: rgb(241 211 211);
    padding: 5px;
    margin: 6px auto;
    width: fit-content;
}
  
video {
    background-color: black;
    display: block;
    margin: 6px auto;
    width: 420px;
    height: 240px;
}
  
audio {
    display: block;
    margin: 6px auto;
}
  
a {
    color: green;
}


Output:

For now, we have added the “display-none” class to both the “vid-recorder” and “aud-recorder” divs, because we want to display only the recorder that matches the user’s choice.

Now, let’s implement the logic to display only the user-selected recorder using JavaScript:

index.js




const mediaSelector = document.getElementById("media");
let selectedMedia = null;
  
// Handler function to handle the "change" event
// when the user selects some option
mediaSelector.addEventListener("change", (e) => {
    selectedMedia = e.target.value;
    document.getElementById(
      `${selectedMedia}-recorder`).style.display = "block";
    document.getElementById(
      `${otherRecorder(selectedMedia)}-recorder`)
      .style.display = "none";
});
  
function otherRecorder(selectedMedia) {
    return selectedMedia === "vid" ? "aud" : "vid";
}


Output: When the user selects “video”, the following video recorder is displayed —

Similarly, when the user selects the “audio” option, the audio recorder gets displayed —

The above code displays only the user-selected recorder, i.e. audio or video. We have added a “change” event listener to the mediaSelector element: when the value of the select element changes, it emits a “change” event, which is handled by the given callback function. The callback changes the CSS “display” property of the selected media recorder to “block” and of the other recorder to “none”.

Accessing the webcam and microphone: The WebRTC getUserMedia API lets you access the device camera and microphone. The getUserMedia() method returns a Promise that resolves to a MediaStream containing the requested media content (a stream of media tracks). It takes a MediaStreamConstraints object as a parameter, which defines the constraints that the resulting media stream should match.

const mediaStreamConstraints = {
   audio: true,
   video: true
};
// The above MediaStreamConstraints object 
// specifies that the resulting media must have
// both the video and audio media content or tracks.

// The mediaStreamConstraints object is passed to 
// the getUserMedia method
navigator.mediaDevices.getUserMedia(mediaStreamConstraints)
.then( resultingMediaStream => {
   // Code to use the received media stream
});

When the getUserMedia method is invoked, the browser prompts the user for permission to use the device camera and microphone. If the user allows it, the Promise returned by getUserMedia resolves to the resulting media stream; otherwise, it rejects with a NotAllowedError. In the above code, the received media stream contains both video and audio data.
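If you want to handle a denied permission gracefully, you can catch the rejected Promise and inspect the error name. The snippet below is a minimal sketch of such handling; the alert message and logging are illustrative additions and are not part of the final project code:

navigator.mediaDevices.getUserMedia(mediaStreamConstraints)
.then(resultingMediaStream => {
   // Code to use the received media stream
})
.catch(error => {
   // The Promise rejects with a NotAllowedError when the
   // user (or a browser policy) denies access
   if (error.name === "NotAllowedError") {
       alert("Camera/microphone access was denied.");
   } else {
       console.error("getUserMedia failed:", error);
   }
});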

So, add the below lines of code to the index.js file:

index.js




const mediaSelector = 
    document.getElementById("media");
  
// Added code
const webCamContainer = document
    .getElementById('web-cam-container');
  
let selectedMedia = null;
  
/* Previous code 
...
Added code */
  
const audioMediaConstraints = {
    audio: true,
    video: false
};
  
const videoMediaConstraints = {
    // or you can set audio to false 
    // to record only video
    audio: true,
    video: true
};
  
function startRecording(thisButton, otherButton) {
  
    navigator.mediaDevices.getUserMedia(
            selectedMedia === "vid"
            videoMediaConstraints : 
            audioMediaConstraints)
        .then(mediaStream => {
            // Use the mediaStream in 
            // your application
  
            // Make the mediaStream global
            window.mediaStream = mediaStream;
  
            if (selectedMedia === 'vid') {
  
                // Remember to use the "srcObject" 
                // attribute since the "src" attribute 
                // doesn't support media stream as a value
                webCamContainer.srcObject = mediaStream;
            }
  
            document.getElementById(
                `${selectedMedia}-record-status`)
                .innerText = "Recording";
            thisButton.disabled = true;
            otherButton.disabled = false;
        });
  
}
  
function stopRecording(thisButton, otherButton) {
  
    // Stop all the tracks in the received 
    // media stream i.e. close the camera
    // and microphone
    window.mediaStream.getTracks().forEach(track => {
        track.stop();
    });
  
    document.getElementById(
        `${selectedMedia}-record-status`)
        .innerText = "Recording done!";
          
    thisButton.disabled = true;
    otherButton.disabled = false;
}


The startRecording function invokes the navigator.mediaDevices.getUserMedia() method to access the device camera and microphone, disables the “start recording” button, and enables the “stop recording” button. The stopRecording function, in turn, closes the camera and microphone by calling the stop() method of each media track in the media stream, disables the “stop recording” button, and enables the “start recording” button.

Implementing the Recorder: Until now, we have only accessed the webcam and microphone; we haven’t done anything to record the media yet.

To record a media stream, we first need to create an instance of MediaRecorder (an interface for recording media streams) using the MediaRecorder constructor.

The MediaRecorder constructor takes two parameters —

  • stream: A stream is a flow of data (of any type). In this article, we will use a MediaStream, which is a stream of media (video, audio, or both) tracks.
  • options (optional): An object containing specifications about the recording. You can set the MIME type of the recorded media, the audio bitrate, the video bitrate, etc. The MIME type is a standard that describes the format of the recorded media file (for example, the MIME types “audio/webm” and “video/mp4” indicate a WebM audio file and an MP4 video file respectively).

Syntax:

const mediaRecorder = new MediaRecorder(
    stream, { mimeType: "audio/webm" });

The above line of code creates a new MediaRecorder instance that records the given stream as a WebM audio file.
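Browsers differ in which MIME types they can actually record, so it is worth checking a type with the static MediaRecorder.isTypeSupported() method before passing it to the constructor. The following is only a sketch; the candidate MIME types and bitrates are illustrative choices and are not used in the project code below:

// Pick the first MIME type this browser can record
const candidateTypes = [
    "video/webm;codecs=vp9",
    "video/webm",
    "video/mp4"
];
const supportedType = candidateTypes.find(
    type => MediaRecorder.isTypeSupported(type));

const mediaRecorder = new MediaRecorder(stream, {
    // If supportedType is undefined, the browser
    // falls back to its own default format
    mimeType: supportedType,
    audioBitsPerSecond: 128000,   // ~128 kbps audio
    videoBitsPerSecond: 2500000   // ~2.5 Mbps video
});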

So, modify your index.js file:

index.js file




/* Previous code 
... */
  
function startRecording(thisButton, otherButton) {
  
    navigator.mediaDevices.getUserMedia(
        selectedMedia === "vid" ?
        videoMediaConstraints :
        audioMediaConstraints)
  
    .then(mediaStream => {
  
        /* New code */
        // Create a new MediaRecorder 
        // instance that records the 
        // received mediaStream
        const mediaRecorder = 
            new MediaRecorder(mediaStream);
  
        // Make the mediaStream global
        window.mediaStream = mediaStream;
  
        // Make the mediaRecorder global
        // New line of code
        window.mediaRecorder = mediaRecorder;
  
        if (selectedMedia === 'vid') {
  
            // Remember to use the srcObject 
            // attribute since the src attribute 
            // doesn't support media stream as a value
            webCamContainer.srcObject = mediaStream;
        }
  
        document.getElementById(
            `${selectedMedia}-record-status`)
            .innerText = "Recording";
  
        thisButton.disabled = true;
        otherButton.disabled = false;
    });
}
  
/* Remaining code 
...*/


When the startRecording() function is invoked, it now also creates a MediaRecorder instance to record the received mediaStream. Next, we need to actually use that instance. MediaRecorder provides some useful methods (a short sketch of how they fit together follows this list):

  • start(): When this method is invoked, the MediaRecorder instance starts recording the given media stream. It optionally takes a “timeslice” argument (in milliseconds); if specified, the media is recorded in separate chunks of that duration.
  • pause(): When invoked, pauses the recording.
  • resume(): When invoked after the pause() method, resumes the recording.
  • stop(): When invoked, stops the recording and fires a “dataavailable” event containing the final Blob of saved data.
  • requestData(): When invoked, requests a Blob containing the data saved so far.
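Here is a rough sketch of how these methods fit together; the one-second timeslice is an illustrative choice and is not used in the project code:

// Record in 1-second chunks, so a "dataavailable"
// event fires roughly every second
mediaRecorder.start(1000);

// Pause and later resume the recording
mediaRecorder.pause();
mediaRecorder.resume();

// Request a Blob with the data recorded so far,
// without stopping the recorder
mediaRecorder.requestData();

// Stop recording; this fires a final "dataavailable"
// event followed by a "stop" event
mediaRecorder.stop();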

Similarly, MediaRecorder also provides some useful event handlers (a combined sketch of wiring them up follows this list):

  • ondataavailable: Event handler for the “dataavailable” event. Whenever timeslice milliseconds of media data have been recorded (if timeslice was specified), or when the recording stops (if it was not), the MediaRecorder emits a “dataavailable” event carrying the recorded Blob. The data can be obtained from the “data” property of the event:
mediaRecorder.ondataavailable = ( event ) => {
  const recordedData = event.data;
}
  • onstop: Event Handler for the “stop” event emitted by MediaRecorder. This event is emitted when the MediaRecorder.stop() method is called or when the corresponding MediaStream is stopped.
  • onerror: Handler for the “error” event, which is emitted whenever an error occurs while using the MediaRecorder. The “error” property of the event contains the details of the error:
mediaRecorder.onerror = ( event ) => {
  console.log(event.error);
}
  • onstart: Handler for the “start” event, which is emitted when the MediaRecorder starts recording.
  • onpause: Handler for the “pause” event, which is emitted when the recording is paused.
  • onresume: Handler for the “resume” event, which is emitted when the recording is resumed after being paused.
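As promised, a small sketch of wiring these handlers together; the console messages are placeholders and are not part of the project code:

mediaRecorder.onstart = () => console.log("Recording started");
mediaRecorder.onpause = () => console.log("Recording paused");
mediaRecorder.onresume = () => console.log("Recording resumed");
mediaRecorder.onstop = () => console.log("Recording stopped");
mediaRecorder.onerror = (event) =>
    console.error("Recorder error:", event.error);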

Now, we need to use some of these methods and event handlers to make our project work.

index.js




const mediaSelector = document.getElementById("media");
  
const webCamContainer =
    document.getElementById("web-cam-container");
  
let selectedMedia = null;
  
// This array stores the recorded media data
let chunks = [];
  
// Handler function to handle the "change" event
// when the user selects some option
mediaSelector.addEventListener("change", (e) => {
  
    // Takes the current value of the mediaSelector
    selectedMedia = e.target.value;
  
    document.getElementById(
        `${selectedMedia}-recorder`)
            .style.display = "block";
  
    document.getElementById(
            `${otherRecorderContainer(
            selectedMedia)}-recorder`)
        .style.display = "none";
});
  
function otherRecorderContainer(
    selectedMedia) {
  
    return selectedMedia === "vid"
        "aud" : "vid";
}
  
// This constraints object tells 
// the browser to include only 
// the audio Media Track
const audioMediaConstraints = {
    audio: true,
    video: false,
};
  
// This constraints object tells 
// the browser to include
// both the audio and video
// Media Tracks
const videoMediaConstraints = {
  
    // or you can set audio to
    // false to record
    // only video
    audio: true,
    video: true,
};
  
// When the user clicks the "Start 
// Recording" button this function
// gets invoked
function startRecording(
    thisButton, otherButton) {
  
    // Access the camera and microphone
    navigator.mediaDevices.getUserMedia(
        selectedMedia === "vid"
        videoMediaConstraints :
        audioMediaConstraints)
        .then((mediaStream) => {
  
        // Create a new MediaRecorder instance
        const mediaRecorder = 
            new MediaRecorder(mediaStream);
  
        //Make the mediaStream global
        window.mediaStream = mediaStream;
        //Make the mediaRecorder global
        window.mediaRecorder = mediaRecorder;
  
        mediaRecorder.start();
  
        // Whenever (here when the recorder
        // stops recording) data is available
        // the MediaRecorder emits a "dataavailable" 
        // event with the recorded media data.
        mediaRecorder.ondataavailable = (e) => {
  
            // Push the recorded media data to
            // the chunks array
            chunks.push(e.data);
        };
  
        // When the MediaRecorder stops
        // recording, it emits "stop"
        // event
        mediaRecorder.onstop = () => {
  
            /* A Blob is a File-like object.
            In fact, the File interface is
            based on Blob: File inherits the
            Blob interface and extends it to
            support files on the user's
            system. The Blob constructor takes
            the array of media data chunks as
            its first parameter and constructs
            a Blob of the type given as the
            second parameter. */
            const blob = new Blob(
                chunks, {
                    // Use a container type that matches
                    // what the browser actually recorded
                    // (typically WebM when no mimeType
                    // was specified for the recorder)
                    type: selectedMedia === "vid" ?
                        "video/webm" : "audio/webm"
                });
            chunks = [];
  
            // Create a video or audio element
            // that stores the recorded media
            const recordedMedia = document.createElement(
                selectedMedia === "vid" ? "video" : "audio");
            recordedMedia.controls = true;
  
            // You cannot directly set the Blob as
            // the source of the video or audio element.
            // Instead, you need to create a URL for the
            // Blob using the URL.createObjectURL() method.
            const recordedMediaURL = URL.createObjectURL(blob);
  
            // Now you can use the created URL as the
            // source of the video or audio element
            recordedMedia.src = recordedMediaURL;
  
            // Create a download button that lets the 
            // user download the recorded media
            const downloadButton = document.createElement("a");
  
            // Setting the download attribute (to the
            // desired file name) makes the browser
            // download the recorded media when the
            // user clicks the link.
            downloadButton.download = "Recorded-Media";
  
            downloadButton.href = recordedMediaURL;
            downloadButton.innerText = "Download it!";
  
            downloadButton.onclick = () => {
  
                /* After the download, revoke the created
                URL using the URL.revokeObjectURL() method
                to avoid a possible memory leak. The browser
                automatically revokes object URLs when the
                document is unloaded, but it is still good
                practice to revoke them yourself. */
                URL.revokeObjectURL(recordedMediaURL);
            };
  
            document.getElementById(
                `${selectedMedia}-recorder`).append(
                recordedMedia, downloadButton);
        };
  
        if (selectedMedia === "vid") {
  
            // Remember to use the srcObject
            // attribute since the src attribute
            // doesn't support media stream as a value
            webCamContainer.srcObject = mediaStream;
        }
  
        document.getElementById(
                `${selectedMedia}-record-status`)
                .innerText = "Recording";
  
        thisButton.disabled = true;
        otherButton.disabled = false;
    });
}
  
function stopRecording(thisButton, otherButton) {
  
    // Stop the recording
    window.mediaRecorder.stop();
  
    // Stop all the tracks in the 
    // received media stream
    window.mediaStream.getTracks()
    .forEach((track) => {
        track.stop();
    });
  
    document.getElementById(
            `${selectedMedia}-record-status`)
            .innerText = "Recording done!";
    thisButton.disabled = true;
    otherButton.disabled = false;
}


Output:

Suppose the user selects the audio recorder:

Now, if the user clicks the “Start recording” button:

And when the “stop recording” button is clicked —

It displays the recorded audio and gives a link to download the recorded audio.

So, what does the startRecording() function do?

  • It invokes the navigator.mediaDevices.getUserMedia() method and accesses the device camera or microphone or both. This method returns a Promise that resolves to a MediaStream.
  • After receiving the MediaStream, it creates an instance of MediaRecorder that can record the given MediaStream, makes both the MediaStream and the MediaRecorder global so that we can use them outside the startRecording function —
window.mediaStream = mediaStream;
window.mediaRecorder  = mediaRecorder;
  • Starts the recording of the given MediaStream by calling the MediaRecorder.start() method.
mediaRecorder.start();
  • Defines the event handlers for the created MediaRecorder. The “dataavailable” event handler pushes the recorded data to the chunks array (an array of recorded media data Blobs). The “stop” event handler function:
    • Creates a new Blob from the chunks array using the Blob() constructor.
    • Re-initializes the chunks array
    • Creates a URL for the created Blob using URL.createObjectURL() method.
    • Sets the source of the newly created “video”/“audio” element to the created URL.
    • Creates a link to download the recorded media and revokes the created URL after the link is clicked using URL.revokeObjectURL() method.
  • Disables the “start recording” button and enables the “stop recording” button.

What does the stopRecording() function do?

  • Stops the recording by calling the MediaRecorder.stop() method.
  • Stops all the media tracks of the MediaStream, i.e. closes the camera and the microphone.
  • Disables the “stop recording” button and enables the “start recording” button.

