VidTrans20 Annual Conference & Exposition
February 25 - 27, 2020
Los Angeles, California


Synopses of Presentations (alphabetical by presenter)

 > Texas A&M University IP Production Case Study
Zack Bacon – TAMU & Anup Mehta - Cisco

This session will use Texas A&M University's new production studios as a case study to present the architecture, best practices, and lessons learned from a real live sports production using SMPTE 2110-enabled IP infrastructure. The session will cover the business drivers for migrating to SMPTE 2110-based infrastructure at TAMU's new production facility. We will include details of the planning process and the considerations behind the overall architecture, including the spine-and-leaf topology, bandwidth management, and connectivity of endpoints. The presentation will highlight decision points regarding sizing of the infrastructure, future scaling of capacity, and the criteria used to select IP signaling protocols. We will include details on the PTP implementation, including grandmaster selection and the types of PTP clocks used to achieve timing distribution throughout the facility. We will also discuss our observations on building the skill sets needed to set up and manage such an environment. The presentation will wrap up with a review of the path to virtualization in this environment and the considerations for accommodating virtualization requirements on top of the same media fabric carrying SMPTE 2110.
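
As a rough illustration of the bandwidth-management planning mentioned above (an editorial sketch, not material from the talk), the snippet below estimates the per-stream bit rate of an uncompressed SMPTE ST 2110-20 HD signal and how many such streams would fit on a 100 GbE spine/leaf link. The 1080p59.94 format and the 8% packet-overhead factor are assumptions.

    # Illustrative sketch: rough per-stream bandwidth for uncompressed
    # SMPTE ST 2110-20 video, useful when sizing spine/leaf links.
    # Assumes active-picture payload plus a nominal packet-overhead factor.

    def st2110_20_video_bps(width, height, fps, bits_per_pixel=20, overhead=1.08):
        """bits_per_pixel=20 corresponds to 10-bit 4:2:2 sampling.
        'overhead' is a rough allowance for RTP/UDP/IP/Ethernet headers."""
        payload = width * height * bits_per_pixel * fps
        return payload * overhead

    if __name__ == "__main__":
        per_stream = st2110_20_video_bps(1920, 1080, 59.94)
        print(f"1080p59.94 stream ~ {per_stream / 1e9:.2f} Gb/s")
        # Number of such streams a 100 GbE uplink could carry before
        # accounting for audio, ancillary data, and headroom.
        print(f"Streams per 100 GbE link ~ {int(100e9 // per_stream)}")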

 > Time distribution over WAN for ATSC 3.0 Single Frequency Networks
Magnus Danielson – Net Insight

Distribution of time over a WAN presents challenges, as most WAN infrastructure has very poor support for timing. In such situations, traditional methods have not been successful in delivering solutions adapted to the problem.

The presentation will describe how the Time Transfer method supports mission-critical services and large-scale distribution networks with the delivery of GNSS-independent TAI/UTC time. This approach provides the accuracy and precision needed for real-time wide-area network applications, including those based on SMPTE ST 2110 and LTE-TDD. The solution is at the same time more resilient than satellite-based systems and more accurate and scalable than traditional network synchronization methods.

As transmission frequencies are a regulated and limited resource, the SFN mode of operation is of vital importance to the longevity of digital terrestrial transmission standards such as ATSC 3.0.

ATSC 3.0 SFN operations require highly accurate synchronous transmission on the same frequency. Consequently, a much stricter performance level is required from the synchronization reference equipment. A typical solution uses a non-network-based system such as a GNSS (Global Navigation Satellite System) receiver as the UTC source reference at the transmission site.
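
To illustrate why SFN timing budgets are so tight (an editorial example, not from the presentation): a transmitter timing error translates directly into an equivalent path-length error of c x delta-t, which consumes part of the receiver's delay-spread and guard-interval budget.

    # Illustrative sketch: how transmitter timing error maps to an equivalent
    # path-length error in an SFN.

    C = 299_792_458.0  # speed of light, m/s

    def timing_error_to_metres(delta_t_seconds):
        return C * delta_t_seconds

    if __name__ == "__main__":
        for err_us in (0.1, 1.0, 10.0):
            km = timing_error_to_metres(err_us * 1e-6) / 1000
            print(f"{err_us:5.1f} us timing error ~ {km:.2f} km of path offset")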

 > NMOS – What is it and Why should I care?
Jed Deame - Nextera Video

SMPTE ST 2110 has emerged as a key technology enabling flexible and scalable Video over IP, but until recently there was no common control system. Spearheaded by AMWA, the NMOS family of specifications initially defined IS-04 (Registration & Discovery) and IS-05 (Connection Management), which were a great start toward a standardized control system and met the needs of many users.

The latest advancements in NMOS, including IS-08 (Audio Mapping), IS-09 (System Discovery), BCP-002 (Grouping) and BCP-003 (Security), take NMOS to a new level, surpassing the level of control provided in SDI while also adding a layer of security that has been sorely needed in control systems for quite some time.

This overview will explain NMOS in clear and simple language and provide an update on the importance of the latest features.
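
For readers new to NMOS, a minimal sketch of what IS-05 connection management looks like on the wire is shown below: a controller PATCHes the receiver's staged endpoint with the chosen sender's SDP and requests immediate activation. The host name, UUIDs, and SDP file here are placeholders; the snippet is illustrative rather than taken from the presentation.

    # Sketch of an AMWA IS-05 connection request (hypothetical host, IDs,
    # and SDP file): PATCH the receiver's "staged" endpoint, then the Node
    # activates the new transport parameters immediately.

    import json
    import urllib.request

    NODE = "http://receiver-node.example:80"        # hypothetical Node API host
    RECEIVER_ID = "REPLACE-WITH-RECEIVER-UUID"      # hypothetical receiver UUID
    SDP_TEXT = open("sender.sdp").read()            # SDP describing the chosen sender

    payload = {
        "sender_id": "REPLACE-WITH-SENDER-UUID",    # hypothetical sender UUID
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
        "transport_file": {"data": SDP_TEXT, "type": "application/sdp"},
    }

    req = urllib.request.Request(
        f"{NODE}/x-nmos/connection/v1.0/single/receivers/{RECEIVER_ID}/staged",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())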

 > Proposal for Programmatic Configuration of Media Devices with NETCONF/YANG
Thomas Edwards - Disney

Configuration of media devices has traditionally been provided through web GUIs, proprietary command lines, or specialized control GUI applications. The move to IP-based media devices is increasing the amount of configuration required (such as media flow IP addresses, UDP ports, RTP payload types, PTP domains, audio channel configuration, etc.). Also, many media users would prefer automated, programmatic configuration rather than manual point-and-click. The networking industry has developed the NETCONF protocol as a common mechanism to manage the configuration of network devices. NETCONF can programmatically reveal device configuration capabilities and allows for candidate and startup configurations, as well as transactional commits with rollback if required. NETCONF XML payloads are defined by the YANG modeling language. YANG models are produced by standards development organizations, user organizations, and vendors. There is a rich ecosystem of NETCONF and YANG tools, including editing systems, validators, client and server base models, and libraries for several programming languages. Can NETCONF/YANG be the basis for reliable programmatic media device configuration?
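
As an illustrative sketch (not taken from the proposal itself), the snippet below shows what programmatic configuration of a media receiver might look like using the open-source Python ncclient library: edit the candidate datastore, then commit the transaction. The device address, credentials, YANG namespace, and leaf names are hypothetical.

    # Sketch of NETCONF-based configuration with the "ncclient" library.
    # The target device, credentials, and the YANG module/namespace used in
    # the payload are hypothetical placeholders.

    from ncclient import manager

    CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <receiver xmlns="urn:example:media-device">   <!-- hypothetical namespace -->
        <flow>
          <multicast-address>239.1.1.10</multicast-address>
          <udp-port>5004</udp-port>
          <ptp-domain>127</ptp-domain>
        </flow>
      </receiver>
    </config>
    """

    with manager.connect(host="gateway.example", port=830,
                         username="admin", password="secret",
                         hostkey_verify=False) as m:
        m.edit_config(target="candidate", config=CONFIG)
        m.commit()   # transactional apply; discard_changes() would roll back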

 > Dynamic Media System Architecture - Creating truly dynamic media facilities
Brad Gilmer – Gilmer & Associates

While the transition to IP has resulted in products and systems that have met some user requirements for more dynamic, flexible facilities, a more IT-centric approach will be necessary in the future.

Attendees will be introduced to a high-level overview of the Dynamic Media System Architecture, built squarely on current best practices in the IT domain. They will learn about key functionality in the Media Service Provisioning layer, and about how facilities may be built "just-in-time" from a service layer using the Media System Constructor.

It is hoped that an interactive discussion at the end of this presentation may outline areas of activity for the VSF.

 > Unlocking The Value of Media Over IP: Networked PTP Time Delivery
Richard Hoptroff - Hoptroff London Ltd

Distributing media over IP is fast, cheap and flexible compared to SDI. The price paid is that, being asynchronous, arrival time is not guaranteed. Resynchronization is needed to realign the different media streams when they arrive.

Installing local time synchronization equipment at every point in the delivery chain quickly becomes costly. Hoptroff London learned from supplying time to the financial services industry that networked time feeds offer a significant cost advantage over local grandmaster clock hardware.

Our work distributing PTP on a global basis to financial services is presented. Quality of service is compared to SMPTE/JT-NM specifications and is shown to achieve the required 1 µs within facilities and 1 ms between facilities. Synchronization software for cloud servers at the receiving end is also discussed.

Resiliency is paramount: failsafe protection at every point in the time delivery chain, and the skills required to support it, are discussed, including mixed terrestrial and satellite sources and redundant time delivery all the way to the devices that require synchronization.
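
For context (an editorial addition, not part of the talk), the offset and mean path delay that PTP delivers over the network come from the standard IEEE 1588 two-way timestamp exchange:

    # The basic IEEE 1588 two-way exchange.
    # t1 = Sync sent by master, t2 = Sync received by slave,
    # t3 = Delay_Req sent by slave, t4 = Delay_Req received by master.
    # Assumes a symmetric network path; asymmetry appears directly as offset error.

    def ptp_offset_and_delay(t1, t2, t3, t4):
        offset = ((t2 - t1) - (t4 - t3)) / 2.0        # slave clock minus master clock
        mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, mean_path_delay

    if __name__ == "__main__":
        # Hypothetical timestamps, in seconds.
        off, delay = ptp_offset_and_delay(100.000000, 100.000150,
                                          100.000300, 100.000430)
        print(f"offset = {off*1e6:.1f} us, mean path delay = {delay*1e6:.1f} us")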

 > XR - P2MP coherent optical subcarrier aggregation technology
Antti Kankkunen - Infinera

XR optics is a new technology that surmounts the inherent limitations of traditional point-to-point optical transmission solutions. XR optics enables a single transceiver (for example, a 400GE switch port) to generate numerous lower-speed Nyquist subcarriers that can be independently steered to different destinations (for example, 16x25GE) using a cost-effective splitter/combiner optical infrastructure and standard form-factor pluggables (QSFP-DD/QSFP/SFP/OSFP). Media networks are inherently hub-and-spoke, with a large number of spoke devices (cameras, microphones, multiviewers, video switchers) connecting to a smaller number of hub devices (Ethernet switches). However, the technology used to build these networks has been point-to-point, with transceivers of the same speed required at each end. This mismatch results in a large number of inefficiently used optical transceivers and router ports. XR optics allows network architects to dramatically reduce the number of transceivers in the network and eliminate the need for costly intermediate aggregation switches. Attendees will learn the underlying technological basis of XR optics and how it can improve TCO for media networks.
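
A back-of-envelope illustration of the transceiver-count argument (editorial, with assumed numbers): for a hub serving 16 spokes, point-to-point optics need one hub transceiver per spoke link, while an XR-style hub needs a single high-speed module whose subcarriers are split toward the spokes.

    # Transceiver counts for a hub-and-spoke network, illustrative only.

    def point_to_point(spokes):
        hub_transceivers = spokes          # one per spoke link at the hub
        spoke_transceivers = spokes
        return hub_transceivers + spoke_transceivers

    def xr_hub_and_spoke(spokes):
        hub_transceivers = 1               # e.g. one 400G module sourcing 16x25G subcarriers
        spoke_transceivers = spokes
        return hub_transceivers + spoke_transceivers

    if __name__ == "__main__":
        n = 16
        print(f"point-to-point: {point_to_point(n)} transceivers")
        print(f"XR hub-and-spoke: {xr_hub_and_spoke(n)} transceivers")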

 > JT-NM Tested August 2019: What Is In It For You? Results, Methodologies, and Plans for the Future
Ievgen Kostiukevych – European Broadcasting Union

The presentation explains the second iteration of JT-NM Tested, delivered by the authors of the program themselves, including the new test plans, testing procedures and methodologies, results, overall findings, and plans for the future of the program.

The presentation will be co-presented with Willem Vermost (VRT/EBU).

 > Native IP decoding MPEG-TS video to Uncompressed IP (and vice versa) on COTS hardware
Kieran Kunhya – OBE.TV

The advent of uncompressed IP video such as SMPTE 2110 running on COTS hardware allows for significant workflow improvements in facilities where large amounts of contribution video are handled: for example, reducing the rack space of encoders/decoders by a factor of 10 and being able to scale up and down at will, in a similar fashion to cloud services. However, there are major software engineering challenges related to creating SMPTE 2110 streams in software. These challenges, and how they can be solved, will be discussed in detail.
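
One way to appreciate the software challenge (an illustrative calculation, not from the talk): a single HD ST 2110-20 stream requires packets to be emitted at a sustained rate of hundreds of thousands per second, with only a few microseconds between packets. The 1400-byte payload size assumed below is illustrative.

    # Rough estimate of the packet rate and inter-packet gap a software sender
    # must sustain for one HD ST 2110-20 stream.

    def pacing(width, height, fps, bits_per_pixel=20, payload_bytes=1400):
        bits_per_frame = width * height * bits_per_pixel
        packets_per_frame = bits_per_frame / (payload_bytes * 8)
        packets_per_second = packets_per_frame * fps
        gap_us = 1e6 / packets_per_second
        return packets_per_second, gap_us

    if __name__ == "__main__":
        pps, gap = pacing(1920, 1080, 59.94)
        print(f"~{pps/1e6:.2f} Mpackets/s, one packet every ~{gap:.2f} us")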

This presentation will also cover an example of a well-known consumer brand which has recently entered sports video broadcasting.

 > Machine Learning driven Variable Frame-Rate for Production and Broadcast Applications
Jiri Matela - Comprimato

DVB has proposed to introduce Ultra-High Definition services in three phases. The second-phase (UHD-1 Phase 2) specification includes several new features such as High Dynamic Range (HDR) and High Frame Rate (HFR). Several studies have shown that HFR (100+ fps) enhances perceptual quality and that this quality enhancement is content-dependent.

On the other hand, HFR raises several challenges for the production and transmission chains including higher bandwidth and file sizes, codec complexity increase and bit-rate overhead, which may delay or even prevent its deployment in the broadcast ecosystem.

This presentation proposes a Variable Frame Rate (VFR) approach based on machine learning techniques to determine the lowest frame-rate that preserves the perceived video quality of the HFR video. This approach enables significant storage and bit-rate savings as well as complexity reductions at both encoder and decoder sides.
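
The talk's actual model is not described in this synopsis; purely as a hypothetical illustration of a per-segment VFR decision, the sketch below picks the lowest candidate frame rate whose predicted quality loss stays under a threshold, with a simple stand-in predictor in place of a trained machine-learning model.

    # Hypothetical per-segment variable-frame-rate decision (illustrative only).

    CANDIDATE_RATES = [25, 50, 100]   # frames per second, lowest first

    def predicted_quality_loss(segment_motion, rate):
        # Stand-in predictor: more motion and a lower frame rate imply more loss.
        return segment_motion * (100 - rate) / 100.0

    def choose_rate(segment_motion, max_loss=0.3):
        for rate in CANDIDATE_RATES:
            if predicted_quality_loss(segment_motion, rate) <= max_loss:
                return rate
        return CANDIDATE_RATES[-1]

    if __name__ == "__main__":
        for motion in (0.1, 0.5, 0.9):    # normalised motion score per segment
            print(f"motion={motion:.1f} -> {choose_rate(motion)} fps")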

 > RCLP: Inexpensive Synchronization on Cheap Networks
Marc Levy – Macnica

In the broader world of Pro AV, there are many circumstances where IP video systems are expected to run on unmanaged networks. PTP is too expensive and complicated to implement for many customers, and this hurts adoption.

 > The 2110 Audio Muddle – Managing Mono Audio inside Multi-Channel Streams
John Mailhot - Imagine Communications

Broadcasters want the ability to manipulate audio with mono-channel granularity, but they also value the systemic simplicity of keeping logical mixes and content together inside multi-channel streams. Audio sending and receiving devices each have their own constraints and capabilities, and sometimes there is no common ground. How can systems be built under these conditions? Here’s how.

 > SMPTE 2110 diagnostic and monitoring in real-time encoding systems
Jovo Miskin - Synamedia

The media processing industry has been accustomed to the reliability of baseband inputs which never or very rarely fail. The SDI over IP standards (SMPTE 2022-6 and SMPTE 2110) have introduced new challenges for end-to-end monitoring and troubleshooting. When a video encoder reports an error, it is key to be able to identify the root cause of the issue including potentially tracing it to the input. Having good tools to monitor the inputs and processing stages at various points in the system can be of great value. This presentation will cover the usage of Grafana to accomplish this task focusing on the diagnostics and monitoring of SDIoIP streams as they enter a real-time encoder.
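
One common way to feed such Grafana dashboards (not necessarily the presenter's setup) is to export per-input counters and gauges from the monitoring probe as Prometheus metrics that Grafana then scrapes and graphs; the metric and input names below are hypothetical.

    # Expose per-input SDIoIP health metrics for Grafana via Prometheus.
    # Metric names, input labels, and values are illustrative placeholders.

    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    RTP_SEQ_ERRORS = Counter(
        "st2110_rtp_sequence_errors_total",
        "RTP sequence discontinuities seen on an ST 2110 input",
        ["input"],
    )
    STREAM_BITRATE = Gauge(
        "st2110_input_bitrate_bps",
        "Measured bitrate of an ST 2110 input",
        ["input"],
    )

    if __name__ == "__main__":
        start_http_server(9100)    # Prometheus scrapes this endpoint for Grafana
        while True:
            # In a real probe these values would come from packet inspection.
            STREAM_BITRATE.labels(input="encoder-1-video").set(2.6e9 + random.uniform(-1e6, 1e6))
            if random.random() < 0.01:
                RTP_SEQ_ERRORS.labels(input="encoder-1-video").inc()
            time.sleep(1)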

 > Bit-Rate Evaluation of Compressed HDR using SL-HDR1
Ciro Noronha, Ph.D. – Cobalt Digital

From a signal standpoint, what differentiates High Dynamic Range (HDR) content from Standard Dynamic Range (SDR) content is the mapping of the pixel samples to actual colors and light intensity. Video compression encoders and decoders (of any type) are agnostic to that – the encoder will take a signal, compress it, and at the other side, the decoder will re-create something that is “about the same” as the signal fed to the encoder.

This paper focuses on the required bit rates to produce a final HDR signal over a compressed link. We compare encoding SMPTE-2084 PQ HDR signals directly versus using SL-HDR1 to generate an SDR signal plus dynamic metadata. The comparison is done objectively by comparing the PSNR of the decoded signal. The SMPTE-2084 HDR signal is used as a reference at a fixed bit rate, and the bit rate of the SL-HDR1 encoded signal is varied until it matches the PSNR, over a range of source material. The evaluation is done for both AVC (H.264) and HEVC (H.265). This is similar to the work by Touze and Kerkhof published in 2017, but using commercial equipment.
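
For reference (an editorial addition), the objective metric used in the comparison is PSNR; a minimal sketch of its computation for 10-bit samples (peak value 1023) is:

    # PSNR between a reference frame and a decoded frame, for 10-bit samples.

    import numpy as np

    def psnr(reference, decoded, peak=1023.0):
        ref = reference.astype(np.float64)
        dec = decoded.astype(np.float64)
        mse = np.mean((ref - dec) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(peak * peak / mse)

    if __name__ == "__main__":
        # Synthetic example: a random 10-bit frame plus small perturbations.
        rng = np.random.default_rng(0)
        ref = rng.integers(0, 1024, size=(1080, 1920), dtype=np.int32)
        noisy = np.clip(ref + rng.integers(-4, 5, size=ref.shape), 0, 1023)
        print(f"PSNR = {psnr(ref, noisy):.1f} dB")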

 > IP production - matters of space and time
Andrew Rayner – Nevion

As IP production becomes ‘business as usual’, we have been addressing the challenges we face in getting systems that scale and work in a timely manner!

Some production facilities are now at a scale of the order of hundreds of thousands of devices and connections. Further, with the increase in federation of multiple facilities, these numbers get even bigger. We are also getting to grips with the different issues pertaining to timing in all-IP production workflows – and the bits that have room for improvement. This presentation reviews where we have got to as an industry and some of the fun still to come.

 > Live TS over IP – Live Lessons Learned
Adi Rozenberg – VideoFlow

Live network streaming and the problems we have learned about over the last 20 years: the presentation will explain the problems and the methods for dealing with them (bandwidth, link failure, wrong RTT, long-haul streaming).

For example, I will show results of streaming over DOCSIS 3.0, streaming in Africa, streaming over Ka-band satellite, and satellite replacement with an open internet service.

 > High Throughput JPEG 2000 for Entertainment Imaging
Mike Smith – Wavelet Consulting LLC

High Throughput JPEG 2000 (HTJ2K) is a new royalty-free image compression standard published in 2019 that enhances JPEG 2000 by replacing its slow arithmetic block coder with a fast block coder. The resulting speedup (e.g. > 30x for lossless coding) has great potential for existing as well as new workflows throughout the entertainment industry. The highly parallel and extensively vectorizable block coding algorithm in HTJ2K opens the door to processing state-of-the-art high-resolution, high-frame-rate content on lower-performance hardware, such as commonly available PCs. HTJ2K leverages the great flexibility of the existing JPEG 2000 framework to facilitate compression of almost any type of image data, including both integer and floating-point image data. This paper provides an overview of the HTJ2K algorithm and describes how HTJ2K can be used to accelerate workflows that currently use JPEG 2000 Part 1, such as D-Cinema, IMF, and broadcast contribution networks. Additionally, the compression efficiency and throughput of HTJ2K are examined in comparison to other codecs.

 > Media Platform Automation
Steven Soenens – Skyline Communications

Managing media flows across hybrid on-premises and multi-cloud environments comes with a lot of pitfalls. In this session, we will discuss key technical challenges and a range of solutions to address them:

  • Monitoring SMPTE ST 2022 and SMPTE ST 2110 flows: how to add metadata to IP flows and use it to drive workloads
  • There is a myriad of media flow routing paradigms for on-prem and public clouds: how to use SDN control (multicast routing), source-based switching, destination-based switching, or all of the above to build the most effective media pipelines
  • Capacity planning for networking, storage and compute nodes: how to build a deterministic IT infrastructure for media
  • Security is a key consideration for anyone migrating to IP and public cloud: how to increase security for media-centric ICT systems
  • Connecting on-prem and cloud functions is very much about integration of functions from many vendors and creators: how to cope with multi-vendor functions while open standards are still in the making
  • Workloads are not only about configuring network functions: how to create a true environment for infrastructure and services deployment and orchestration

 > IPMX: Open Standards Media Over IP for Pro AV
Andrew Starks - Macnica

The Alliance for IP Media Solutions (AIMS) has been actively promoting SMPTE 2110 and NMOS within the pro AV industry at InfoComm and ISE, and demand for open standards is a hot topic on the tradeshow floor. The AIMS Pro AV Working Group’s proposed roadmap for Pro AV features open standards and specifications collectively called IPMX: Internet Protocol Media Experience.

We'll discuss the roadmap, the standards and specifications it includes, and the benefits that expanding the reach of the VSF into Pro AV would have.
