In this article, posted July 25 on the Videonet website, Rémi Beaudouin, VP of Marketing at ATEME, writes about the transformation of video delivery through NFV and SDN.
Digital television became a practical proposition around 20 years ago, but at that time the only way to encode and multiplex it involved dedicated hardware. Standard processors simply did not have sufficient power.
Since then we have moved from digital broadcasting to video delivery across multiple platforms. Systems architects started out building headends from hardware-based encoders, and the tendency has been to continue in this way. The result is that most networks involve a large variety of proprietary hardware appliances, each with its own rack requirements, power supplies and so on.
As standard processors have gained power, so it has become practical to use them inside our dedicated encoding appliances. But the temptation remains, as existing devices reach their end of life, to simply replace one dedicated, proprietary box with a newer version of the same thing. Now is the time to break out of this cycle.
The solution lies in virtualisation: implementing the functions of an encoder in software, which in turn can run, when necessary, on a virtual machine in a data centre. This is now entirely practical, and it allows us to define systems by functionality rather than by what dedicated boxes do. In turn, that gives us much greater flexibility to define and vary our workflows and processes to reflect consumer demand and commercial opportunities.
We can call this design philosophy network function virtualisation (NFV). This method of linking software functions to achieve whatever we need at any given moment now also extends to storage, giving us complete flexibility to scale our operations up and down as we need to.
First, this delivers real commercial benefits. Capex is reduced, because we are simply building, or extending, a server farm based on standardised IT hardware and simple ethernet connectivity. We are not building racks of dedicated hardware with complex SDI cabling. The hardware is lower cost, because we can take advantage of the commoditisation of the IT industry. And building high levels of redundancy means much less investment than duplication of dedicated hardware.
The opex view also changes. For many applications, software will be licensed by use, so there is a direct link between operating costs and revenues. A virtualised environment will have a much smaller physical footprint than one using dedicated appliances, and consequently power and air conditioning costs will be reduced. And support costs can be shared across the whole data centre, not just that part of it which has to deal with dedicated broadcast and delivery technology.
The result is a new elasticity for all operations, which can be accurately costed per service. Proposed new services can be precisely checked for commercial viability and brought to market very quickly, perhaps gaining an edge over a competitor.
Software-defined networking
One of the new phrases to have entered the broadcast lexicon is software-defined networking (SDN). An established technique in other fields of IT, it simply means separating functionality from control. Whereas with traditional hardware the way the devices were interconnected defined the way processes were completed, with SDN the control layer selects the required software packages to complete any task.
This makes SDN and NFV natural partners. NFV allows us to virtualise the functionality; SDN provides the control and monitoring layer that issues instructions, prioritises the use of processor cores, and manages storage, all the while maintaining high availability and high reliability.
One of the potential traps is that the management of virtual machines and their processes originally depended upon proprietary software, which we would prefer to avoid. OpenStack, an open source virtualisation framework, started to emerge in 2010, allowing specialists to build architectures that meet the practical requirements of best-of-breed installations. Configuration is achieved through a web-based dashboard, command-line tools or, most commonly, a RESTful API.
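As a flavour of what configuration through that RESTful API looks like, here is a minimal sketch that authenticates against OpenStack's identity service (Keystone) and boots a compute instance through Nova. The endpoint URLs, credentials and resource IDs are placeholders, not values from any real installation.

```python
# Minimal sketch: driving an OpenStack deployment over its RESTful API.
# All URLs, credentials and UUIDs below are illustrative placeholders.
import requests

KEYSTONE = "https://openstack.example.com:5000/v3"
NOVA = "https://openstack.example.com:8774/v2.1"

# 1. Authenticate against Keystone and obtain a scoped token.
auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {"name": "headend-admin",
                                  "domain": {"id": "default"},
                                  "password": "secret"}},
        },
        "scope": {"project": {"name": "video-headend",
                              "domain": {"id": "default"}}},
    }
}
resp = requests.post(f"{KEYSTONE}/auth/tokens", json=auth_body)
token = resp.headers["X-Subject-Token"]

# 2. Ask Nova (the compute service) to boot a new encoder instance.
server_body = {
    "server": {
        "name": "encoder-instance-01",
        "imageRef": "IMAGE_UUID",     # placeholder image ID
        "flavorRef": "FLAVOR_UUID",   # placeholder flavour ID
        "networks": [{"uuid": "NETWORK_UUID"}],
    }
}
resp = requests.post(f"{NOVA}/servers",
                     headers={"X-Auth-Token": token},
                     json=server_body)
print(resp.status_code, resp.json()["server"]["id"])
```

The same pattern applies to any other function the control layer needs to deploy or tear down, which is what makes an API-driven headend straightforward to automate.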
While this architecture is widely used in the wider networking industry, leading-edge companies like ATEME are implementing it in the video headend, both for OTT and for traditional broadcast delivery. Our architecture uses ATEME Management System (AMS), an OpenStack solution, as the control layer, with the virtualisation layer provided by as many Titan devices as required. Titan builds on another open-source technology: Docker containers.
Docker containers provide several advantages over virtual machines, not least that they are very lightweight (typically just a few megabytes) because they do not need to include a full operating system. That means a process can typically be spun up in less than a second, and they do not impose a CPU penalty. And because Docker is open source software, it is free to use.
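As an illustration of how lightweight that is in practice, the sketch below starts a containerised job with the Docker SDK for Python and waits for it to finish. The image name, command and volume paths are placeholders, not anything taken from the Titan product.

```python
# Minimal sketch: running a containerised encoding job with the Docker SDK
# for Python. Image name, command and paths are illustrative placeholders.
import docker

client = docker.from_env()

# Start a container in the background; it shares the host kernel, so there is
# no guest operating system to boot and start-up takes well under a second.
container = client.containers.run(
    "example/encoder:latest",                        # placeholder image
    command=["encode", "--input", "/media/in.ts",
             "--output", "/media/out.ts"],
    volumes={"/srv/media": {"bind": "/media", "mode": "rw"}},
    detach=True,
)

# Wait for the job to finish, collect its logs, then clean up.
result = container.wait()
print(result, container.logs().decode())
container.remove()
```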
The other fundamental advantage of using OpenStack principles is that it makes AMS and its functionality immediately ready for the cloud. Third-party cloud service providers rely heavily on OpenStack, and AMS provides transparent interoperability, deploying functions, assigning IP addresses and providing redundancy.
In use
To demonstrate the efficiency of this virtualised framework, consider a real-world example: the transcoding of a UHD asset. For something at feature length, this could take a day to transcode at high quality on a standalone device. That means tying up expensive hardware for a very long time, which will impact other workflows; and you may not have a day to wait before you need to show the content.
In a virtualised environment the AMS parses the input file and starts as many encoder instances as it requires, or as many as are made available by the system's business rules. Each instance is responsible for encoding one segment of the content. After the first pass, the AMS analyses the results and launches the instances for the second encoding pass at the required quality level and bitrate. Finally, the AMS aggregates all the encoded segments into a single output asset.
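ATEME has not published the internals of AMS, but the segment-and-parallelise workflow described above can be sketched generically as follows. The helper functions (split_into_segments, launch_encoder_instance, choose_targets, concatenate_segments) are hypothetical placeholders standing in for the real scheduling, encoding and packaging steps.

```python
# Generic sketch of segment-parallel, two-pass transcoding orchestration.
# This illustrates the workflow described above, not AMS itself; the helper
# functions are hypothetical placeholders that do no real work here.
from concurrent.futures import ThreadPoolExecutor


def split_into_segments(asset_path, segment_seconds=60):
    """Placeholder: parse the input file and cut it into fixed-length segments."""
    return []


def launch_encoder_instance(segment, pass_number, target=None):
    """Placeholder: drive one encoder instance (local, virtualised or cloud)."""
    return None


def choose_targets(first_pass_stats):
    """Placeholder: derive per-segment quality level and bitrate from pass one."""
    return []


def concatenate_segments(encoded_segments, output_path):
    """Placeholder: aggregate the encoded segments into a single output asset."""
    return None


def transcode(asset_path, output_path, max_instances=100):
    segments = split_into_segments(asset_path, segment_seconds=60)

    # Each worker thread drives one (possibly remote) encoder instance,
    # so the orchestrator itself stays lightweight.
    with ThreadPoolExecutor(max_workers=max_instances) as pool:
        # First pass: analyse every segment in parallel.
        stats = list(pool.map(
            lambda seg: launch_encoder_instance(seg, pass_number=1), segments))

        # Decide a quality level and bitrate for each segment.
        targets = choose_targets(stats)

        # Second pass: encode every segment to its assigned target in parallel.
        encoded = list(pool.map(
            lambda pair: launch_encoder_instance(pair[0], pass_number=2,
                                                 target=pair[1]),
            zip(segments, targets)))

    # Stitch the encoded segments back into a single output asset.
    concatenate_segments(encoded, output_path)
```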
The speed increase will obviously depend upon the number of parallel instances that can be launched. Practical experience shows that, for content segmented into one-minute chunks, you can get a hundredfold increase in encoding speed, bringing what was a 24-hour-plus timescale down to around 15 minutes. And of course some or all of those instances could be in the cloud, giving your architecture the elasticity it needs for workflows with high processor demand.
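The arithmetic behind that figure is easy to check:

```python
# Sanity check of the speed-up quoted above: a hundredfold increase takes a
# 24-hour serial transcode down to roughly a quarter of an hour of wall-clock time.
serial_hours = 24        # single-device transcode time for the asset
speedup = 100            # parallel encoder instances working on one-minute chunks
parallel_minutes = serial_hours * 60 / speedup
print(parallel_minutes)  # -> 14.4, i.e. around 15 minutes
```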
This elasticity is matched by commercial flexibility. ATEME virtualised headend systems are available as:
- a perpetual licence – the traditional product purchase model
- a rental – particularly useful for pop-up channels, for increasing capacity around major events such as the Olympics, or to cover a major archive digitisation project
- a pay-per-use model, based on content minutes processed or gigabytes output.
It is clear to everyone that the challenges – and particularly the volumes – of transcoding will continue to rise exponentially, due to ever more platforms and services; new content formats like 360° virtual reality and Ultra HD in 4K and 8K; and the pressing need to protect archives. Virtualisation gives the flexibility broadcasters and service providers need to meet the peaks in demand, and the technology to maximise its benefits is proven, readily available and affordable.
Rémi Beaudouin
Rémi Beaudouin is VP of Marketing at ATEME