MIT’s new AI can keep streaming video from buffering
Buffering and pixelation are the scourge of streaming video. They ruin the experience for viewers, rob advertisers of revenue as those viewers tune out, and cause technical headaches for the streaming services that have to engineer around them. But a new neural network AI from MIT CSAIL may be just what the internet needs for velvety smooth streaming.
The video you’re streaming isn’t arriving at your computer in one complete chunk. That would take entirely too much bandwidth. Instead, the data is chopped up into smaller pieces and sent sequentially. To keep the picture quality sufficient, sites like YouTube rely on adaptive bitrate (ABR) algorithms to decide the resolution at which each piece will play. ABRs generally come in two styles: those that measure how fast the network can transmit data, and those that work to maintain a sufficient buffer of video ahead of the current playback position.
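To make that concrete, here’s a rough sketch of how those two heuristics might pick a bitrate for the next chunk of video. The bitrate ladder, thresholds and function names are invented for illustration; this isn’t any real player’s code.

```python
# Illustrative sketch of the two ABR styles described above.
# The bitrate ladder and thresholds are made up for this example.

BITRATES_KBPS = [300, 750, 1200, 2850, 4300]  # available encodings of each chunk

def rate_based_choice(measured_throughput_kbps):
    """Rate-based ABR: pick the highest bitrate the measured throughput can sustain."""
    feasible = [b for b in BITRATES_KBPS if b <= measured_throughput_kbps]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

def buffer_based_choice(buffer_seconds, low=5.0, high=20.0):
    """Buffer-based ABR: pick a bitrate from how much video is already buffered;
    a fuller buffer means the player can risk a higher bitrate."""
    if buffer_seconds <= low:
        return BITRATES_KBPS[0]
    if buffer_seconds >= high:
        return BITRATES_KBPS[-1]
    frac = (buffer_seconds - low) / (high - low)   # map the buffer level onto the ladder
    return BITRATES_KBPS[int(frac * (len(BITRATES_KBPS) - 1))]

print(rate_based_choice(1500))    # -> 1200 (highest rung under 1500 kbps)
print(buffer_based_choice(12.0))  # -> 750 (buffer is still on the low side)
```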
If the rate-based algorithm fails, the video suffers pixelation as the system drops the bitrate to ensure playback doesn’t stop. But skip too far ahead and you cause havoc for the buffer-based system, which then has to freeze playback while it loads both the new chunk of video and the buffer ahead of it. The two ABRs are essentially addressing two sides of the same overarching issue, but neither is fully capable of solving it. That’s where AI comes in.
There has actually already been some research into this issue. A team from Carnegie Mellon recently developed a “model predictive control” (MPC) scheme that attempts to predict how network conditions will change over time and make optimization decisions based on that model. The problem with that approach, however, is that it will only ever be as good as the model itself, which makes it ill-suited to networks that see sudden or drastic changes in traffic.
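In rough terms, an MPC-style controller scores candidate bitrate plans against its throughput forecast and commits to the first step of the best one. The sketch below is a toy version of that idea; the quality-of-experience (QoE) weights and constants are assumptions, not the CMU system’s actual parameters.

```python
from itertools import product

# Toy sketch of the model-predictive-control idea: forecast throughput over a
# short horizon, score every candidate bitrate plan, and use the first step of
# the best one. The QoE weights below are illustrative assumptions.

BITRATES = [300, 750, 1200, 2850, 4300]   # kbps
CHUNK_SECONDS = 4.0
REBUFFER_PENALTY = 4.3                    # assumed cost per second of stalling

def predicted_qoe(plan, throughput_kbps, buffer_s):
    """Score a sequence of bitrate choices under a fixed throughput forecast."""
    qoe, last = 0.0, None
    for bitrate in plan:
        download_s = bitrate * CHUNK_SECONDS / throughput_kbps
        stall_s = max(0.0, download_s - buffer_s)                 # playback freezes
        buffer_s = max(0.0, buffer_s - download_s) + CHUNK_SECONDS
        qoe += bitrate / 1000.0                                   # reward quality
        qoe -= REBUFFER_PENALTY * stall_s                         # penalize stalls
        if last is not None:
            qoe -= abs(bitrate - last) / 1000.0                   # penalize flip-flopping
        last = bitrate
    return qoe

def mpc_choice(throughput_kbps, buffer_s, horizon=3):
    """Pick the next chunk's bitrate by searching all plans over the horizon."""
    best_plan = max(product(BITRATES, repeat=horizon),
                    key=lambda p: predicted_qoe(p, throughput_kbps, buffer_s))
    return best_plan[0]

print(mpc_choice(throughput_kbps=1500.0, buffer_s=8.0))  # settles on a sustainable mid-ladder rate
```

The catch the researchers point to is visible even here: the controller is only as good as its throughput forecast, so a sudden swing in network conditions invalidates the whole plan.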
CSAIL’s AI, dubbed Pensieve, doesn’t rely on a model. Instead, it uses machine learning to figure out when (and under what conditions) to switch between rate-based and buffer-based ABRs. Like other reinforcement learning systems, Pensieve uses rewards and penalties to weight the results of each trial. Over time, the system tunes its behavior to consistently earn the highest reward. Interestingly, since the rewards can be adjusted, the whole system can be tuned to behave however we want.
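That tunability comes down to how the reward is defined. Here’s a minimal sketch of such a reward, assuming it trades picture quality against stalls and bitrate switches; the weights are illustrative, not Pensieve’s published values.

```python
# Sketch of the tunable reward idea: each chunk decision is scored on picture
# quality, stalling and bitrate switching, and shifting the weights changes
# what the trained system prioritizes. Weights here are illustrative only.

def reward(bitrate_kbps, rebuffer_s, prev_bitrate_kbps,
           quality_weight=1.0, rebuffer_weight=4.3, smoothness_weight=1.0):
    """Higher rebuffer_weight trains a stall-averse agent;
    higher quality_weight trains one that chases resolution."""
    return (quality_weight * bitrate_kbps / 1000.0
            - rebuffer_weight * rebuffer_s
            - smoothness_weight * abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0)

# The same half-second stall costs far more under a stall-averse profile:
print(reward(2850, 0.5, 1200))                        # balanced profile
print(reward(2850, 0.5, 1200, rebuffer_weight=10.0))  # stall-averse profile
```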
“Our system is flexible for whatever you want to optimize it for,” MIT professor Mohammad Alizadeh said in a statement. “You could even imagine a user personalizing their own streaming experience based on whether they want to prioritize rebuffering versus resolution.” The team trained the neural network on just a month’s worth of downloaded video content, yet it achieved the same resolution quality as the MPC system with 10 to 30 percent less rebuffering.
We could eventually see this technology adopted by the likes of YouTube and Netflix, but first the MIT team hopes to apply the AI to VR. “The bitrates you need for 4K-quality VR can easily top hundreds of megabits per second, which today’s networks simply can’t support,” Alizadeh said. “We’re excited to see what systems like Pensieve can do for things like VR. This is really just the first step in seeing what we can do.”