Recording video interviews using NuxtJS

Since the pandemic began, in-person meetings have become less common. Interviews, too, have mostly shifted to video-conferencing services such as Zoom. The challenge is that scheduling calls is difficult with over 50 candidates for a single position. Let us explore how we can create self-managed, recorded video interviews that candidates complete on their own.

HTML, CSS, and JavaScript knowledge is essential to follow along with this tutorial. Vue.js knowledge is a plus but not a hard requirement.

The completed project is available on CodeSandbox.

You can find the full codebase on my GitHub.

Nuxt.js is a Vue.js framework that boosts productivity through its simplicity. We will use it to build our project.

To get started, ensure you have either Yarn or npm v5.2+/v6.1+ installed. Open a terminal in your preferred working directory and run one of the following commands:

yarn create nuxt-app nuxtjs-video-interviews
# OR
npx create-nuxt-app nuxtjs-video-interviews
# OR
npm init nuxt-app nuxtjs-video-interviews

A set of questions will help customize the installation. Here are the options we selected for our project:

Project name: nuxtjs-video-interviews
Programming language: JavaScript
Package manager: Yarn
UI framework: Tailwind CSS
Nuxt.js modules: N/A
Linting tools: N/A
Testing frameworks: None
Rendering mode: Universal (SSR/SSG)
Deployment target: Server (Node.js hosting)
Development tools: N/A

Once setup is complete, start the development server with yarn dev or npm run dev. The app will be accessible at http://localhost:3000.

We will store the interview videos on Cloudinary, a powerful media platform with a comprehensive set of SDKs and APIs. If you do not have an account yet, you may sign up here.

@nuxtjs/cloudinary is the recommended Nuxt.js plugin for Cloudinary. Let's add it to our project:

yarn add @nuxtjs/cloudinary
# OR
npm install @nuxtjs/cloudinary

Next, add @nuxtjs/cloudinary in the modules section of the nuxt.config.js file:

// nuxt.config.js
export default {
    ...
    modules:[
        '@nuxtjs/cloudinary'
    ],
    ...
}

Finally, add the cloudinary section to nuxt.config.js to configure the module.

// nuxt.config.js
export default {
    ...
    cloudinary:{
        cloudName: process.env.NUXT_ENV_CLOUDINARY_CLOUD_NAME
    }
}

The cloudName is obtained from the process's environment variables, values we place in a separate file that is not committed to our code repository. To set up NUXT_ENV_CLOUDINARY_CLOUD_NAME, we will create a .env file and load our environment variables there:

touch .env

You can find your cloud name on your Cloudinary dashboard.

# .env
NUXT_ENV_CLOUDINARY_CLOUD_NAME=<your-cloudinary-cloud-name>

The first thing we want to do is build the HTML for our questions and the submit form. Let's add the necessary code to the template section of our pages/index.vue file:

<!-- pages/index.vue -->
<template>
  <div class="m-20">
    <h1>Submit your interview</h1>
    <h2>For the best experience, use either Firefox or Chrome</h2>
    <div v-for="(question,index) in questions" :key="index">
      <h3>{{index+1}}. {{question.question}}</h3>
      <div>
        <button type="button">Record answer</button>
      </div>
    </div>
    <div>
      <form  @submit.prevent="submit">
        <div>
          <div>
            <input required v-model="interviewee" type="text" placeholder="Enter your name...">
          </div>
          <button type="submit">Submit interview</button>
        </div>
      </form>
    </div>
  </div>
</template>

Let us now add the questions and the interviewee variable to our page state in the script section.

<!-- pages/index.vue -->
<script>
export default {
  data(){
    return {
      interviewee:null,
      questions:[
        {
          question: "Tell me something about yourself.",
          recording:false,
          recorder:null,
          recordedChunks:[],
          answer:null,
          uploading:false
        },
        {
          question: "How did you hear about this position?",
          recording:false,
          recorder:null,
          recordedChunks:[],
          answer:null,
          uploading:false
        },
        {
          question: "Why do you want to work here?",
          recording:false,
          recorder:null,
          recordedChunks:[],
          answer:null,
          uploading:false
        },
      ]
    }
  }
}
</script>

The above will now render the basic HTML needed for our app to run.

To record the video, we will be interacting with the Media Devices API. We initialize the video and the audio, store an instance of the recorder, pass the stream to a visible video element and store the recorded chunks.

When recording is stopped, we create a video file from the chunks and save it as the answer. The answer is rendered in a different visible video element. Let us add the HTML to support this.

<!-- pages/index.vue -->
<template>
  <div>
    ...
    <div v-for="(question,index) in questions" :key="index">
      <h3>{{index+1}}. {{question.question}}</h3>
      <div class="mx-10">
        <video v-if="!question.recording && question.answer" :src="question.answer" controls></video>
        <!-- "controls" is a boolean attribute, so controls="false" would still show them; we mute the live preview to avoid audio feedback -->
        <video v-else-if="question.recording" :id="`player-${index}`" muted></video>
        <button v-if="!question.recording" @click="recordAnswer(index)" type="button">
          {{question.answer ? 'Record again' : 'Record answer'}}
        </button>
        <button v-if="question.recording" @click="questions[index].recorder.stop()">
          Stop Recording
        </button>
      </div>
    </div>
    ...
  </div>
</template>

Within the script section, we add the methods portion of the code.

<!-- pages/index.vue -->
<script>
export default {
  data(){
    return {
      // interviewee and questions as defined above
      ...
    }
  },
  methods:{
    recordAnswer(index){
      this.questions[index].recording = true;
      // Reset any previous take so "Record again" starts fresh
      this.questions[index].recordedChunks = [];
      navigator.mediaDevices.getUserMedia({ audio: true, video: true }).then(
        stream => this.handleRecordingSuccess(index,stream)
      );
    },
    handleRecordingSuccess(index,stream){
      this.questions[index].recorder = new MediaRecorder(stream);
      document.getElementById(`player-${index}`).srcObject = stream;
      document.getElementById(`player-${index}`).play();
      this.questions[index].recorder.addEventListener(
        'dataavailable',
        e => this.handleDataAvailable(index,e)
      );

      this.questions[index].recorder.addEventListener(
        'stop', 
        () => this.handleRecordingStopped(index,stream)
      );

      this.questions[index].recorder.start();
    },
    handleDataAvailable(index, e){
      if (e.data.size > 0) {
        this.questions[index].recordedChunks.push(e.data);
      }
    },
    handleRecordingStopped(index,stream){
      stream.getTracks().forEach(track => track.stop());
      this.questions[index].recording = false;
      document.getElementById(`player-${index}`).pause();
      document.getElementById(`player-${index}`).srcObject = null;
      // The type option belongs in the Blob constructor, not in createObjectURL
      this.questions[index].answer = URL.createObjectURL(new Blob(this.questions[index].recordedChunks, {type: 'video/mp4'}));
    },
    ... 
  }
}
</script>

To keep our code simple, we split the JavaScript logic across multiple methods. This makes the code easier to maintain.

Once the recording is done and the answer has been saved, we can submit the responses. Before we submit, we need to get the user's name, so let us create an HTML form for this purpose. While the form is submitting, we also want to show the user a friendly loader, so we add an SVG spinner.

<!-- pages/index.vue -->
<template>
  ...
  <form @submit.prevent="submit">
    <div>
      <div>
        <input required v-model="interviewee" type="text" placeholder="Enter your name...">
      </div>
      <button type="submit" :disabled="submitting">
       <!-- Loader to show when uploading -->
        <svg v-if="submitting" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
          <circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
          <path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
        </svg>
        <!-- End of loader -->
        {{submitting ? 'Uploading' : 'Submit interview'}}
      </button>
    </div>
  </form>
  ...
</template>

Once the form is submitted, we check whether all the questions have been answered. If they have, we upload all the videos to a folder named after the interviewee. For easy reviewing, we attach the question as a context variable to each upload. Note that the code below references an upload preset named default-preset; create an unsigned upload preset with that name in your Cloudinary settings, or substitute one of your own.

The upload data has to be Base64-encoded. This is why we use the blobToBase64 method to read the blob and return its contents as a Base64 data URL.

<!-- pages/index.vue -->
<script>
export default {
  data(){
    return {
      ...
      interviewee:null,
      submitting:false,
    }
  },
  methods:{
    blobToBase64(blob) {
      return new Promise((resolve, _) => {
        const reader = new FileReader();
        reader.onloadend = () => resolve(reader.result);
        reader.readAsDataURL(blob);
      });
    },
    submit(){
      if(this.questions.filter(question => question.answer === null).length){
        alert("Some questions have not been answered. Answer all questions before submitting");
        return;
      }
      this.submitting = true;
      Promise.all(this.questions.map(async (question,index) => {
        this.questions[index].uploading=true;
        const blob = new Blob(question.recordedChunks);
        const base64 = await this.blobToBase64(blob);
        await this.$cloudinary.upload(
          base64, 
          {
            public_id: `Question-${index+1}`,
            folder: `nuxtjs-video-interviews/${this.interviewee}`,
            upload_preset: "default-preset",
            context:`question=${question.question}`
          }
        );
        this.questions[index].uploading=false;
      })).then(() => {
        this.submitting = false;
        alert("Upload successful, thank you.");
      }).catch(() => {
        this.submitting = false;
        alert("Upload failed. Please try again.");
      });
    }
  }
}
</script>

With the above code, we can now interview our app users and save their responses. To read more about how to use the Media Devices API, feel free to review its documentation. Also, check out Cloudinary's API documentation for more about how you can interact with the service.
