Record Videos with MediaStream Recording API

The MediaStream Recording API makes it possible to capture the data generated by a MediaStream or HTMLMediaElement object for analysis, processing, or saving to disk. The API has just one main interface, MediaRecorder, which takes the media data from a MediaStream and delivers it to your application for processing. For context, Nimbus, a Google Chrome extension, uses the MediaStream Recording API to record video with audio and save the recording file to disk. In this post, we will build an application that lets us record videos with JavaScript and the MediaStream Recording API.

To follow along with this tutorial, you’ll need the following:

  • JavaScript and React Knowledge
  • A code editor (preferably VS Code)
  • Live Server extension on your code editor

The complete code and demo are on CodeSandbox.

Let’s start by creating a folder; I’ll call mine Video Recording App. Open the folder in your code editor and create index.html, main.js, and styles.css files. Add these lines of code to index.html:

<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title> Video Recording app</title>
        <link rel="stylesheet" href="styles.css">
    </head>
    
    <body>
        <div class="container">
            <div class="preview">
                <h2>Preview</h2>
                <video id="preview" width="160" height="120" autoplay muted></video>
                <button id="startButton" class="button">
                    Start
                </button>
            </div>
            <div class="recorded">
                <h2>Recording</h2>
                <video id="recording" width="160" height="120" controls></video>
                <button id="stopButton" class="button">
                    Stop
                </button>
            </div>
            <a id="downloadButton" class="button">
                Download
            </a>
        </div>
        <div id="log"></div>
    
        <script src="main.js"></script>
    </body>
</html>

Here, we have a container with two divs. The first div holds a video element with autoplay and muted set, along with a Start button with an id of startButton. Clicking the button will request camera and microphone access, then display and record the live stream in the video element; we’ll write that logic soon. The second div also has a video element, which will play back the recorded video. Its controls attribute enables the browser’s media controls, including play, pause, and volume. We will use the Stop button to stop recording the video. There is also a Download button for saving the recording. Finally, the div with an id of log will display status messages on the page.

Let’s go ahead and write the logic. Add these lines of code to your main.js:

//main.js
const preview = document.getElementById("preview");
const recording = document.getElementById("recording");
const startButton = document.getElementById("startButton");
const stopButton = document.getElementById("stopButton");
const downloadButton = document.getElementById("downloadButton");
const logElement = document.getElementById("log");
const recordingTimeMS = 10000;

Here, we define some global variables. The most notable is recordingTimeMS: this is the maximum recording length we set for our videos, so when it reaches 10 seconds, the video automatically stops recording. Add these lines of code:

function log(msg) {
    logElement.innerHTML += msg + "\n";
}

function wait(delayInMS) {
    return new Promise(resolve => setTimeout(resolve, delayInMS));
}

function formatBytes(bytes, decimals = 2) {
    if (bytes === 0) return '0 Bytes';
    const k = 1024;
    const dm = decimals < 0 ? 0 : decimals;
    const sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
}

Here, we create three utility functions. The log function writes messages to the web page. The wait function returns a Promise that resolves once the specified number of milliseconds has elapsed. Finally, formatBytes converts a size in bytes to kilobytes, megabytes, and so on.
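As a quick sanity check of that conversion, here is how formatBytes behaves for a few byte counts (the function is repeated here so the snippet is self-contained and runnable on its own):

```javascript
// Same helper as above: i picks the largest power of 1024 that fits,
// and the byte count is divided by 1024^i before rounding.
function formatBytes(bytes, decimals = 2) {
    if (bytes === 0) return '0 Bytes';
    const k = 1024;
    const dm = decimals < 0 ? 0 : decimals;
    const sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
}

console.log(formatBytes(512));     // "512 Bytes"
console.log(formatBytes(1536));    // "1.5 KB"
console.log(formatBytes(1048576)); // "1 MB"
```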

Let’s move on to write the function that starts recording a video. Add these lines of code:

//main.js

function startRecording(stream, lengthInMS) {
    let recorder = new MediaRecorder(stream);
    let data = [];
    recorder.ondataavailable = event => data.push(event.data);
    recorder.start();
    log(recorder.state + " for " + (lengthInMS / 1000) + " seconds...");
    let stopped = new Promise((resolve, reject) => {
        recorder.onstop = resolve;
        recorder.onerror = event => reject(event.name);
    });
    
    let recorded = wait(lengthInMS).then(
        () => recorder.state == "recording" && recorder.stop()
    );
    
    return Promise.all([
        stopped,
        recorded
    ])
    .then(() => data);
}

Here, the startRecording() function takes two parameters: a stream (the MediaStream to record from) and the length of the recording in milliseconds. First, we instantiate a MediaRecorder (remember, this is the interface of the MediaStream Recording API) to handle recording the input stream, and assign an empty array to the data variable. The ondataavailable event fires whenever the MediaRecorder delivers media data to our application; the event carries a Blob containing that data, which we push into the array. recorder.start() starts the recording process. Next, we create a Promise called stopped that resolves when the onstop event fires and rejects when the onerror event fires. We then create another Promise called recorded, which stops the recorder once the assigned number of milliseconds has elapsed, provided it is still recording. Finally, we return a Promise that is fulfilled when both stopped and recorded resolve; on resolution, it yields the recorded data. Below the startRecording() function, add these lines of code:
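If the Promise choreography feels dense, the same pattern can be sketched with plain timers. The fakeRecording function below is a hypothetical stand-in (not part of the app) that pushes a string where the real code pushes media Blobs:

```javascript
// "stopped" resolves when the (fake) recorder finishes and a chunk is saved;
// "recorded" is the timer that would have called recorder.stop().
// Promise.all waits for both before handing back the collected data.
function wait(delayInMS) {
    return new Promise(resolve => setTimeout(resolve, delayInMS));
}

function fakeRecording(lengthInMS) {
    let data = [];
    const stopped = wait(lengthInMS).then(() => data.push("chunk"));
    const recorded = wait(lengthInMS);
    return Promise.all([stopped, recorded]).then(() => data);
}

fakeRecording(50).then(chunks => console.log(chunks)); // logs ["chunk"]
```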

//main.js
function stop(stream) {
    stream.getTracks().forEach(track => track.stop());
}

This function stops every track on the input stream, which ends the recording; if the camera is on, it turns it off too.

What happens when you click on the Start button? Add these lines of code to implement the functionality:

//main.js

startButton.addEventListener("click", function () {
    navigator.mediaDevices.getUserMedia({
        video: true,
        audio: true
    }).then(stream => {
        preview.srcObject = stream;
        preview.captureStream = preview.captureStream || preview.mozCaptureStream;
        return new Promise(resolve => preview.onplaying = resolve);
    }).then(() => startRecording(preview.captureStream(), recordingTimeMS))
        .then(recordedChunks => {
            let recordedBlob = new Blob(recordedChunks, { type: "video/webm" });
            recording.src = URL.createObjectURL(recordedBlob);
            downloadButton.href = recording.src;
            downloadButton.download = "RecordedVideo.webm";
            log(`Your video is ${formatBytes(recordedBlob.size)}`);
            console.log(recordedBlob.size)
        })
        .catch(log);
}, false);

Let’s go through the moving parts of this code snippet. First, we ask the user for permission to access the camera and microphone using navigator.mediaDevices.getUserMedia. getUserMedia returns a Promise; on resolution, we assign the input stream to the preview video’s srcObject, which displays the camera feed in the <video id="preview"> box. After that, a new Promise resolves when the preview video starts to play. When it does, we invoke startRecording() and pass two arguments: the stream captured from the preview video as the source media to record, and recordingTimeMS as the number of milliseconds of media to record. Next, we combine the recordedChunks (the array of media data Blobs) into a single Blob with a MIME type of video/webm. We then set the src attribute of the recorded video using [URL.createObjectURL()](https://docs.w3cub.com/dom/url/createobjecturl), which creates a URL that references the Blob. We assign the newly created URL to the href attribute of the download button, and set its download attribute so that clicking the button saves a RecordedVideo.webm file. Finally, we log the size of the recording using formatBytes.
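The Blob and object-URL steps can be tried in isolation. In this sketch, plain strings stand in for the recorded chunks (Blob accepts strings as parts too); the blob: URL is what ends up in the video’s src and the download link’s href. Blob and URL.createObjectURL are also available as globals in Node 18+, so this runs outside the browser:

```javascript
// Combine chunks into a single Blob, then mint a blob: URL for it.
const chunks = ["part1", "part2"];
const recordedBlob = new Blob(chunks, { type: "video/webm" });

console.log(recordedBlob.size); // 10 (total bytes across all parts)
console.log(recordedBlob.type); // "video/webm"

const url = URL.createObjectURL(recordedBlob);
console.log(url.startsWith("blob:")); // true

// When the URL is no longer needed, revoke it to free the memory it holds.
URL.revokeObjectURL(url);
```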

Finally, let’s add the functionality for when we click on the stop button. Add these lines of code:

//main.js
stopButton.addEventListener("click", function () {
    stop(preview.srcObject);
}, false);

Go ahead and add these lines of CSS to your styles.css:

/* styles.css */
@import url('https://fonts.googleapis.com/css2?family=Karla:wght@300;400;500&display=swap');
    
body {
    width: 1200px;
    margin: auto;
    font-family: 'Karla', sans-serif;
    background: #9ebded;
}
    
.container {
    display: grid;
    grid-template-columns: 1fr 1fr;
    grid-gap: 2rem;
    place-content: center;
    height: 100vh;
    background: #9ebded;
}

.preview, .recorded {
    margin: auto;
}

video {
    width: 100%;
    height: 100%;
}

button, .button {
    padding: 0.5rem 3rem;
    background: #fff;
    border: none;
    font-size: 1rem;
    cursor: pointer;
}

#startButton, h2 {
    margin-left: 2rem
}

#downloadButton {
    margin: auto;
}

#log {
    color: #fff;
    margin-bottom: 3rem;
}

Awesome! Now start your app by running Live Server in your code editor. Navigate to your browser; you should see something like this:

https://www.dropbox.com/s/29tmoy3shhxuk0m/videoRecord.webm?dl=0

I had already granted audio and video permissions, so I didn’t get the prompt in this video.

Let’s go ahead and implement the recording app with React. We will be using a React library [**use-screen-recorder**](https://github.com/ishan-chhabra/use-screen-recorder) that wraps the MediaStream Recording API nicely into a Hook.

//javascript
    
import * as React from "react";
import useScreenRecorder from "use-screen-recorder";

export default function MediaController() {
    const {
        blobUrl,
        pauseRecording,
        resetRecording,
        resumeRecording,
        startRecording,
        status,
        stopRecording,
    } = useScreenRecorder();
    
    return (
        <div>
            <video src={blobUrl} controls autoPlay />
            <small>Status: {status}</small>
            <button onClick={startRecording}>Start Recording</button>
            <button onClick={stopRecording}>Stop Recording</button>
            <button onClick={pauseRecording}>Pause Recording</button>
            <button onClick={resumeRecording}>Resume Recording</button>
            <button onClick={resetRecording}>Reset Recording</button>
        </div>
    );
}

Check this example to see how to use this hook with React.

In this tutorial, we learned about the MediaStream Recording API, and we went on to build a JavaScript media recorder application using the API. We also got to see how to implement similar functionality in React using the use-screen-recorder hook. I hope you’ve learned something new from this.

Happy Coding!
