
Simplifying Video Content Delivery

Video is an increasingly important component of websites and apps, as consumer demand for, and consumption of, video content continues to grow. It’s not only for wired devices, though. More and more, consumers are accessing video where they want it, when they want it, on whatever device happens to be nearby.
While this may be convenient for consumers, the varying screen sizes, resolutions and bandwidth limitations create significant challenges for developers who want to ensure an optimal experience for their viewers. As a result, developers need to rethink their approach to video transcoding so that video works on a variety of screens while maintaining an optimal user experience, reducing costs and minimizing bandwidth demands.
Case in point: by 2019, consumer Internet video traffic will account for 80 percent of all consumer Internet traffic, according to Cisco’s Visual Networking Index. And, increasingly, consumers are using their mobile devices to view videos. In the IAB Mobile Video Consumption survey, 35 percent of viewers said they are watching more video on their smartphones compared to last year, with many of them noting that they’ll spend five minutes or more watching videos from their mobile device. Of those using smartphones for video, 68 percent indicated that they watch on iOS devices. The iPad accounts for even more video consumption, comprising 86 percent of tablet viewing, followed by the Samsung Galaxy Tablet, Kindle Fire and Microsoft Surface in the single digits.
Before the proliferation of devices from which consumers could access video content, it was relatively simple to pre-transcode video into every format you needed. That approach no longer scales, so Web and app developers are moving toward live transcoding to ensure users get the right video format for the device they’re using.
But live transcoding is a time-consuming, delicate process, and if it’s not done right, there will be a noticeable loss in quality. If you do a lot of in-house transcoding, you’ll likely have to purchase expensive, complex software and manage and configure the encoding settings yourself, which may not be part of the average developer’s expertise. You’ll also have to reduce file sizes so videos are easier to distribute over mobile networks and less expensive for consumers who often don’t have unlimited bandwidth plans, and keep both standard-definition and high-definition versions available so that streaming sessions can adapt to the available bandwidth. And then you’ll have to worry about storage: keeping videos in a variety of formats to meet all users’ needs requires a lot of capacity, and requires servers to be provisioned in advance, another task that is typically outside the scope and expertise of most Web designers.
To address these challenges, developers are moving toward adaptive bitrate streaming and HTTP Live Streaming (HLS). While in the past most video streaming technologies used protocols such as RTP with RTSP, today’s adaptive streaming technologies are almost exclusively based on HTTP and designed to work efficiently over large distributed HTTP networks such as the Internet. Adaptive bitrate streaming works by detecting a user’s bandwidth and CPU capacity in real time and adjusting the quality of the video stream accordingly. It requires an encoder that can encode a single source video at multiple bit rates, and the result is very little buffering, a fast start time and a good user experience. HLS is an HTTP-based media streaming communications protocol developed by Apple. Like MPEG-DASH, it breaks the overall stream into a sequence of small HTTP-based file downloads, each loading one short chunk of an overall, potentially unbounded, transport stream. As the stream is played, the client may select from alternate streams containing the same material encoded at a variety of data rates, allowing the streaming session to adapt to the available bandwidth.
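To make the mechanism concrete, here is a minimal sketch (in Python, independent of any particular product) that writes a hypothetical HLS master playlist pointing at three renditions of the same video. The rendition playlists and segments themselves would be produced separately by an encoder such as ffmpeg, and the file names, bitrates and resolutions below are assumptions chosen for the example.

```python
# Sketch only: write a hypothetical HLS master playlist that references three
# renditions of the same video at different bitrates and resolutions. The
# rendition playlists (480p.m3u8, etc.) and their segments are assumed to be
# produced separately by an encoder.
RENDITIONS = [
    # (bandwidth in bits/s, resolution, rendition playlist) -- assumed values
    (800_000, "842x480", "480p.m3u8"),
    (1_400_000, "1280x720", "720p.m3u8"),
    (2_800_000, "1920x1080", "1080p.m3u8"),
]

def write_master_playlist(path: str = "master.m3u8") -> None:
    """The player reads this playlist first, then keeps requesting segments
    from whichever rendition best fits the bandwidth it currently measures."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
    for bandwidth, resolution, playlist in RENDITIONS:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(playlist)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    write_master_playlist()
```

Because the client simply downloads small files over HTTP and can switch renditions at segment boundaries, startup is fast and rebuffering is rare even when network conditions change mid-stream.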
When addressing these complex issues, there are a number of key capabilities to keep in mind: allowing users to upload videos, transcoding and modifying them to ensure an optimal user experience, and applying fine-grained control over codecs and bitrate.
  • Choosing the correct format – the most appropriate format (MP4, OGV, FLV, WebM) must be specified for optimal viewing across various resolutions and aspect ratios, and for delivery to any type of browser, laptop or mobile device.
  • Responsive Design – your video content must be optimized for delivery on different screen sizes, or for fitting into a graphic design, by dynamically resizing and cropping the video on the fly. Doing so also ensures that the video is not stretched or distorted when adapting to different devices.
  • Video Setting Adjustments – adjust properties of video files, such as quality, bitrate and video codec, on the fly to ensure optimal viewing for all of your users; a minimal transcoding sketch follows this list. Adjusting video quality alone can achieve significant bandwidth savings. Common Web codecs include HEVC/H.265, H.264, MPEG-4, Theora, VP9, VP8, VP6 and WMV.
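As a minimal illustration of such adjustments, the sketch below shells out to ffmpeg (assuming it is installed) to produce a few renditions of one source file at different sizes and bitrates; the preset names, resolutions and bitrates are illustrative assumptions, not recommendations.

```python
# Minimal sketch: produce several renditions of one source video with ffmpeg.
# Assumes ffmpeg is installed; the source file and preset values are
# illustrative placeholders only.
import subprocess

SOURCE = "input.mp4"  # hypothetical source file

PRESETS = {
    # name: (scale filter, video bitrate, audio bitrate) -- assumed values
    "mobile_sd": ("scale=-2:480", "800k", "96k"),
    "tablet_hd": ("scale=-2:720", "1400k", "128k"),
    "desktop_hd": ("scale=-2:1080", "2800k", "128k"),
}

def transcode(name: str, scale: str, v_bitrate: str, a_bitrate: str) -> None:
    """Encode one rendition: resize, then set H.264 video and AAC audio bitrates."""
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-vf", scale,
        "-c:v", "libx264", "-b:v", v_bitrate,
        "-c:a", "aac", "-b:a", a_bitrate,
        f"{name}.mp4",
    ], check=True)

if __name__ == "__main__":
    for name, (scale, v_br, a_br) in PRESETS.items():
        transcode(name, scale, v_br, a_br)
```

In practice, a managed service performs this kind of work per request, so you don’t have to pre-generate and store every combination of format, size and bitrate yourself.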

Cloudinary can of course assist in doing all of this easily and on the fly, alongside many other features, such as creating thumbnails from videos and applying various transformations and adjustments to video and audio settings to enhance your visitors’ viewing experiences.
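For instance, delivery URLs along these lines could request different variants of the same uploaded video on the fly. This is a sketch only: the cloud name and public ID are placeholders, and the exact transformation parameters should be checked against Cloudinary’s video transformation documentation.

```python
# Sketch only: Cloudinary-style delivery URLs built on the fly. The cloud name,
# public ID and exact transformation parameters are placeholders and should be
# verified against Cloudinary's video transformation documentation.
CLOUD_NAME = "demo"      # placeholder account name
PUBLIC_ID = "my_video"   # placeholder video public ID
BASE = f"https://res.cloudinary.com/{CLOUD_NAME}/video/upload"

# Resize and crop to fill a 640x360 slot in a responsive layout (assumed params).
responsive_url = f"{BASE}/w_640,h_360,c_fill/{PUBLIC_ID}.mp4"

# Lower the quality setting to save bandwidth on constrained connections.
low_bandwidth_url = f"{BASE}/q_50/{PUBLIC_ID}.mp4"

# Request a WebM version for browsers that prefer it by changing the extension.
webm_url = f"{BASE}/{PUBLIC_ID}.webm"

# Generate a still thumbnail from the video by asking for an image format.
thumbnail_url = f"{BASE}/w_300,h_200,c_fill/{PUBLIC_ID}.jpg"

print(responsive_url, low_bandwidth_url, webm_url, thumbnail_url, sep="\n")
```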

You can read more about Cloudinary’s video management solution here, and specifically about transcoding, quality & bit rate control, video codec settings and more here.

Video has become an integral part of virtually every website and app, and it will only grow in importance as consumers seek out more video content. If you’re still pre-transcoding video or working with an in-house transcoding system, now is the time to rethink how you upload, transform, manage and store videos so you can deliver an optimal experience to all of your users.
