
What you should ask when searching for storage

Looking to add storage to your post studio? Who isn't these days? Jonathan Abrams, chief technical engineer at New York City's Nutmeg Creative, was kind enough to put together a list of questions that can help anyone in their quest for the storage solution that best fits their needs.

Here are some questions that customers should ask a storage manufacturer.

What is your stream count at RAID-6?
The storage manufacturer should have stream count specifications available for both Avid DNx and Apple ProRes at varying frame rates and raster sizes. Use this information to help determine which product best fits your environment.
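The manufacturer's published numbers are what count, but you can sanity-check them with simple arithmetic: divide the array's sustained throughput by the data rate of the codec you cut with, and leave headroom for RAID-6 rebuilds and bursty clients. Below is a minimal sketch of that back-of-the-envelope math; the array throughput and per-codec data rates are illustrative assumptions, not any vendor's specifications.

```python
# Rough sanity check: how many realtime streams fit in a given sustained
# throughput? All figures below are illustrative assumptions, not vendor specs.

CODEC_MBPS = {                              # approximate data rates, Mb/s
    "Avid DNxHD 145 (1080i59.94)": 145,
    "Apple ProRes 422 (1080p29.97)": 117,
    "Apple ProRes 422 HQ (1080p29.97)": 176,
}

def max_streams(sustained_throughput_mbps, codec_mbps, headroom=0.7):
    """Streams that fit, leaving headroom for RAID rebuilds and bursty clients."""
    return int((sustained_throughput_mbps * headroom) // codec_mbps)

array_throughput_mbps = 8000   # assumed sustained RAID-6 read rate (8 Gb/s)
for codec, rate in CODEC_MBPS.items():
    print(f"{codec}: ~{max_streams(array_throughput_mbps, rate)} streams")
```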

How do I connect my clients to your storage?  
Gigabit Ethernet (copper)? 10 Gigabit Ethernet (50-micron fiber)? Fibre Channel (FC)? These are listed in ascending order of cost and performance. Combined with the answer to the question above, this narrows down which of the manufacturer's products fits your environment.

Can I use whichever network switch I want to and know that it will work, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?
If you are using a Mac with Thunderbolt ports, then you will need a network adapter, such as a Promise SANLink2 10G SFP+ for your shared storage connection. Also ask, “Can I use any Thunderbolt network adapter, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?”

If you are an Avid Media Composer user, ask, "Does your storage present itself to Media Composer as if it were Avid shared storage?"
This allows the first person who opens a Media Composer project to obtain a lock on a bin. Other clients can open the same project, though they will not have write access to that bin.

What is covered by support? 
Make certain that both the hardware (chassis and everything inside of it) and the software (client and server) are covered by support. This includes major version upgrades to the server and client software (e.g., v11 to v12). You do not want your storage manufacturer to announce a new software version at NAB 2018 and then find out that the upgrade is not covered by your support contract and must be purchased separately.

For how many years will you be able to replace all of the hardware parts?
Will the storage manufacturer replace any part within three years of your purchase, provided that you have an active support contract? Will they charge you less for support if they cannot replace failed components during that year's support contract? A variation of this question is, "What is your business model?" If the storage manufacturer will only guarantee availability of all components for three years, then their business model is based on you buying another server from them in three years. Are you prepared to be locked into that upgrade cycle?

Are you using custom components that I cannot source elsewhere?
If you continue using your storage beyond the date when the manufacturer can replace a failed part, is the failed part a custom part that was only sold to the manufacturer of your storage? Is the failed part one that you may be able to find used or refurbished and swap out yourself?

What is the penalty for not renewing support? Can I purchase support incidents on an as-needed basis?
How many as-needed purchases does it take before you realize, "We should have renewed support instead"? If you cannot purchase support on an as-needed basis, then ask what the penalty is for reinstating support. This information helps you determine your risk tolerance and whether there is a date in the future when you can say, "We did not incur a financial loss by taking that risk."

Main Image: Nutmeg Creative's Jonathan Abrams with the company's 80 TB of EditShare storage and two spare drives. Photo Credit: Larry Closs

BoxCast offers end-to-end live streaming

By Jonathan Abrams

My interest in BoxCast originated with their social media publishing capabilities (Facebook Live, YouTube Live, Twitter). I met with Gordon Daily (CEO/co-founder) and Sam Brenner (VP, marketing) during this year's NAB Show.

BoxCast’s focus is on end-to-end live streaming and simplifying the process through automation. At the originating, or transmit (XMT), end is either a physical encoder or a software encoder. The two physical encoders are BoxCaster and BoxCaster Pro. The software encoders are Broadcaster and Switcher (for iDevices). The BoxCaster can accept either a 1080p60 (HDMI) or CVBS video input. Separate audio can be connected using two RCA inputs. The BoxCaster Pro ($990, shipping Q3) can accept a 4Kp60 input (12G-SDI or HDMI 2.0a) with High Dynamic Range (HDR10). If you are not using embedded audio, there are two combination XLR/TRS inputs.

Both the BoxCaster and BoxCaster Pro use the H.264 (AVC) codec, while the BoxCaster Pro can also use the H.265 (HEVC) codec, which provides roughly twice the compression efficiency of H.264 (AVC). BoxCast uses Amazon Web Services (AWS) as its cloud. The encoder output is uploaded to the cloud using the BoxCast Flow protocol (patent pending), which mitigates lost packets using content-aware forward error correction (FEC), protocol diversity (UDP and/or TCP), adaptive recovery, encryption and link-quality adjustment for bandwidth flow control. Their FEC implementation does not have an impact on latency. Upload takes place via either Ethernet or Wi-Fi (802.11ac, 2×2 MIMO).

The cloud is where distribution and transcoding take place, using BoxCast's proprietary transcoding architecture. It is also where you can record your event and keep it for either a month or a year, depending upon which monthly cloud offering you subscribe to. Both recordings and streams can be encrypted using BoxCast's custom, proprietary solution.
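To see why forward error correction matters for a live stream, consider the simplest possible scheme: send one parity packet for each small group of packets so that any single loss in the group can be rebuilt at the receiver without waiting for a retransmission. The sketch below illustrates that general idea only; BoxCast Flow's content-aware FEC is proprietary and far more sophisticated than this XOR-parity toy.

```python
# Toy XOR-parity FEC: one parity packet per group lets the receiver rebuild
# any single lost packet in that group without a retransmission round trip.
# This illustrates the general principle only, not BoxCast Flow itself.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group):
    """XOR all equal-length payloads in the group into one parity packet."""
    return reduce(xor_bytes, group)

def recover(received, parity):
    """Rebuild a group in which at most one packet arrived as None (lost)."""
    missing = [i for i, pkt in enumerate(received) if pkt is None]
    if len(missing) > 1:
        raise ValueError("single-parity FEC can only repair one loss per group")
    if missing:
        present = [pkt for pkt in received if pkt is not None]
        received[missing[0]] = reduce(xor_bytes, present, parity)
    return received

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # equal-length payloads
parity = make_parity(group)
damaged = [b"pkt0", None, b"pkt2", b"pkt3"]    # packet 1 lost in transit
print(recover(damaged, parity))                # packet 1 is restored
```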

At the receiving end (RCV) is an embedded player if you are not using Facebook Live or YouTube Live.


Jonathan Abrams is Chief Technical Engineer at NYC’s Nutmeg Creative.

Life is but a Streambox

By Jonathan Abrams

My interest in Streambox originated with their social media publishing capabilities (Facebook Live, YouTube Live, Twitter). I was shuttled to an unsecured, disclosed location (Suite 28140 at The Venetian) for a meeting with Tony Taylor (business development manager) and Bob Hildeman (CEO), where they were conducting user-focused presentations within a quiet and relaxing setting.

The primary use for Streambox in post production is live editorial and color review. Succinctly, it's WebEx for post. A majority of large post production facilities use Streambox for live review services. It allows remote editorial and color grading over the public Internet at mezzanine quality.

The process starts with either a software or hardware encoder. With the software encoder, you need to have your own I/O. As Bob mentioned this, he reached for a Blackmagic Design Mini Converter. The software encoder is limited to 8 bits. They also have two hardware encoders that each occupy 1 RU: one works with 4K video, and a new one shipping in June uses a new version of their codec and works with 2K video. The 2K encoder will likely receive a software upgrade eventually that enables it to work with 4K. All of their hardware encoders operate at 10 bits with 4:2:2 sampling and have additional post-specific features, including genlock, frame sync, encryption and IFB audio talkback. Post companies offering remote color grading services use a hardware encoder.

Streambox uses a proprietary ACT (Advanced Compression Technology) L3/L4 codec and LDMP (Low Delay Multi Path) protocol. For HD and 2K contribution over the public Internet, their claim is that the ACT-L3/L4 codec is more bandwidth- and picture-quality-efficient than H.264 (AVC), H.265 (HEVC) and JPEG2000. The codec's low and, most importantly, sustained latency comes from the use of LDMP video transport. The software and hardware decoders have about two seconds of latency, while the web output (browser) latency is 10 seconds. You can mix and match encoders and decoders. Put another way, you could use a hardware encoder and a software decoder.

TCP (Transmission Control Protocol), which is used for HTTP data transfer, is designed to have the receiving device confirm with the sender that it received each packet. That acknowledgment and retransmission overhead reduces how much bandwidth you have available for data transmission.

With FEC, playback can show artifacts (macro blocking, buffering) when network saturation becomes problematic. This does not generally affect lower-bandwidth streams that use a caching topology for network delivery, but for persistent streaming of video over 4Mbps the problem becomes apparent because of the large bandwidth needed for high-quality contribution content. UDP (User Datagram Protocol) eliminates TCP's overhead at the cost of undelivered packets being unrecoverable. Streambox uses UDP to send its data, and the decoder can detect and request lost packets. This keeps the transmission overhead low while eliminating lost packets. If you do have to limit your bandwidth, you can set a bitrate ceiling and not have to account for overhead. Streambox supports AES 128-bit encryption as an add-on, and the key length can be higher (192 or 256 bits).
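The decoder-driven recovery described above can be pictured as sequence-numbered datagrams over UDP with a negative-acknowledgment (NACK) path back to the sender: the receiver spots gaps in the sequence and asks for only those packets again. The receive-side sketch below shows that general pattern; it is an illustration of the concept, and the addresses, port numbers and packet layout are assumptions, not Streambox's LDMP implementation.

```python
# Receive-side sketch of NACK-based loss recovery over UDP: each datagram
# carries a sequence number; gaps are detected and re-requested from the
# sender. Illustrative only -- not Streambox's LDMP protocol.
import socket
import struct

RECV_ADDR = ("0.0.0.0", 5004)           # assumed local port for media packets
SENDER_ADDR = ("198.51.100.10", 5005)   # assumed sender address for NACKs

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(RECV_ADDR)

expected_seq = 0
buffered = {}                           # seq -> payload, awaiting playout

while True:
    packet, _ = sock.recvfrom(2048)
    seq = struct.unpack("!I", packet[:4])[0]   # 4-byte sequence header
    buffered[seq] = packet[4:]

    # Any gap between what we expected and what just arrived is a loss:
    # ask the sender to resend only those sequence numbers.
    for missing in range(expected_seq, seq):
        if missing not in buffered:
            sock.sendto(b"NACK" + struct.pack("!I", missing), SENDER_ADDR)

    expected_seq = max(expected_seq, seq + 1)
```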

Streambox Cloud allows the encoder to connect to the geographically closest cloud out of 10 sites available and have the data travel in the cloud until it reaches what is called the last mile to the decoder. All 10 cloud sites use Amazon Web Services, and two of those cloud sites also use Microsoft Azure. The cloud advantage in this situation is the use of global transport services, which minimize the risk of bandwidth loss while retaining quality.

Streambox has a database-driven service called Post Cloud that is evolving from broadcast-centric roots. It is effectively a v1 system, with post-specific reports and functionality added and broadcast-specific options stripped away. This is also where publishing to Facebook Live, YouTube Live and Twitter happens. After providing your live publishing credentials, Streambox manages the transcoding for the selected service. The publishing functionality does not prevent select users from establishing higher quality connections. You can have HQ streams to hardware and software decoders running simultaneously with a live streaming component.

The cloud effectively acts as a signal router to multiple destinations. Streamed content can be recorded and encrypted. Other cloud functionality includes realtime stitching of Ricoh Theta S camera outputs for 360º video.


Jonathan Abrams is Chief Technical Engineer at NYC’s Nutmeg Creative.

What does Fraunhofer Digital Media Alliance do? A lot!

By Jonathan Abrams

While the vast majority of the companies with exhibit space at NAB are for-profit, there is one non-profit that stands out. With a history of providing ubiquitous technology to the masses since 1949, Fraunhofer focuses on applied research and developments that end up — at some point in the near future — as practical products or ready-for-market technology.

In terms of revenue, one-third of their funding supports basic research, while the remaining two-thirds is applied toward industry projects and comes directly from private companies. Their business model is focused on contract research and licensing of technologies. They have sold first prototypes and work with distributors, though Fraunhofer always keeps the rights to continue development.

What projects were they showcasing at NAB 2016 that have real-world applications in the near future? You may have heard about the Lytro camera. Fraunhofer Digital Media Alliance member Fraunhofer IIS has been taking a camera-agnostic approach to their work with light-field technology. Their goal is to make this technology available for many different camera set-ups, and they were proving it with a demo of their multi-cam light-field plug-in for The Foundry's Nuke. After capturing a light field, users can perform framing correction and relighting, including changes to angles, depth and the creation of point clouds.

The Nuke plug-in (see our main image) allows the user to create virtual lighting (relighting) and interactive lighting. Light-field data also allows for depth estimation (called depth maps) and is useful for mattes and secondary color correction. Similar to Lytro, focus pulling can be performed with this light-field plug-in. Why Nuke? That is what their users requested. Even though Nuke is an OFX host, the Fraunhofer IIS light field plug-in only works within Nuke. As for using this light-field plug-in outside of Nuke, I was told that “porting to Mac should be an easy task.” Hopefully that is an accurate statement, though we will have to wait to find out.

DCP
Fraunhofer IIS has its hand in other parts of production and post as well. The last two steps of most projects are the creation of deliverables and their delivery. If you need to create and deliver a DCP (Digital Cinema Package), then easyDCP may be for you.

This project began in 2008, when creating a DCP was far less familiar to most users than it is today and correctly making one required deep expertise in the specifications. Small- to medium-sized post companies, in particular, benefit from the easy-to-use easyDCP suite. The engineers of Fraunhofer IIS also worked on the DCI specifications for digital cinema, so they are experienced in integrating all of the important DCP features into this software.

The demo I saw indicated that the JPEG2000 encode was as fast as 108fps! In 2013, Fraunhofer partnered with both Blackmagic and Quantel to make this software available to the users of those respective finishing suites. The demo I saw used a Final Cut Pro X project file and the Creator+ version, since that version supports encryption. Avid Media Composer users will have to export their sequence and import it into Resolve to use easyDCP Creator. Amazingly, this software works as far back as Mac OS X Leopard. IMF creation and playback can also be done with the easyDCP software suite.

VR/360
VR and 360-degree video were prominent at NAB, and the institutes of the Fraunhofer Digital Media Alliance are involved in this as well, having worked on live streaming and surround sound as part of a project with the Berlin Symphony Orchestra.

Fraunhofer had a VR demo pod at the ATSC 3.0 Consumer Experience (in South Hall Upper) — I tried it and the sound did track with my head movement. Speaking of ATSC 3.0, it calls for an immersive audio codec. Each country or geographic region that adopts ATSC 3.0 can choose to implement either Dolby AC-4 or MPEG-H, the latter of which is the result of research and development by Fraunhofer, Technicolor and Qualcomm. South Korea announced earlier this year that they will begin ATSC 3.0 (UHDTV) broadcasting in February 2017 using the MPEG-H audio codec.

From what you see to what you hear, from post to delivery, the Fraunhofer Digital Media Alliance has been involved in the process.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let's consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer, which maps each sample to the nearest of a finite set of levels represented by those bits. The number of possible levels is 2^X, where X is the number of bits available, so as the bit depth used to represent a finite range increases, the levels get closer together and the quantization error shrinks. A 10-bit signal has four times as many possible encoded values as an 8-bit signal (1,024 versus 256). This difference in bit depth does not equate to dynamic range: it is the same range of values, quantized more accurately as the number of bits increases.
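To make that concrete, here is a minimal sketch (assuming a normalized 0-to-1 signal range) showing that adding bits shrinks the step between adjacent code values while the range itself stays the same.

```python
# Same normalized signal range at several bit depths: more bits means more
# levels and a smaller step between adjacent code values, not a wider range.
full_range = 1.0   # assumed normalized range, identical at every bit depth

for bits in (8, 10, 12):
    levels = 2 ** bits
    step = full_range / (levels - 1)
    print(f"{bits:>2}-bit: {levels:>5} levels, step = {step:.6f}")
```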

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system specifically for this application, which has been standardized as SMPTE ST 2084, the standard for an electro-optical transfer function (EOTF) built on a perceptual quantizer (PQ). It builds on work done in the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ curve allows video to be encoded and displayed with a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a characteristic of human vision: our eyes are less sensitive to changes in highlights than they are to changes in shadows.
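For reference, the PQ encode curve (the inverse EOTF) published in SMPTE ST 2084 can be computed directly. The sketch below uses the constants from the standard; the final full-range 12-bit mapping is a simplified illustration, not a broadcast-legal quantization.

```python
# Minimal sketch of the SMPTE ST 2084 (PQ) encode curve (inverse EOTF).
# Constants come from the published standard; the 12-bit mapping shown
# here is a naive full-range illustration, not a broadcast-legal one.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits):
    """Map absolute luminance (0..10,000 nits) to a 0..1 PQ signal value."""
    y = max(nits, 0.0) / 10000.0
    num = C1 + C2 * y ** M1
    den = 1.0 + C3 * y ** M1
    return (num / den) ** M2

for nits in (0.1, 1, 100, 1000, 10000):
    code = round(pq_encode(nits) * 4095)   # naive full-range 12-bit code
    print(f"{nits:>7} nits -> PQ {pq_encode(nits):.4f} -> 12-bit code {code}")
```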

Previous display systems, referred to as SDR or Standard Dynamic Range, are usually 8 bits. Even at 10 bits, SD and HD video is specified to be displayed at a maximum output of 100 nits using a gamma curve. Dolby Vision has a nit range that is 100 times greater than what we have been typically seeing from a video display.

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses incrementally more bandwidth when compared to a signal that contains only SDR video.  For on-demand services, this dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source will be used to identify whether or not the TV you are using is capable of SDR, HDR10 or Dolby Vision.

Broadcasting live events in Dolby Vision is currently a challenge, for reasons beyond HDTV's inability to support the different signal. The challenge lies in adapting the Dolby Vision process for live broadcasting. Dolby is working on these issues, but it is not proposing an entirely new system for Dolby Vision at live events. Some signal paths will be replaced, though the infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of R&D at Dolby has gone into Dolby Vision. While Dolby Vision has some competition in the HDR war from Technicolor and Philips (Prime) and from the BBC and NHK (Hybrid Log Gamma, or HLG), it does have an advantage in that several TV models from both LG and Vizio are already Dolby Vision compatible. If Dolby's continued investment in R&D for solving the issues related to live broadcast results in a solution that broadcasters can successfully implement, it may become the de facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

Dolby Audio at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for the company’s offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. In this post, the focus will be on Dolby’s audio technologies.

Why would Dolby create AC-4? Dolby AC-3 is over 20 years old, and as a function of its age, it does not do new things well. What are those new things and how will Dolby AC-4 elevate your audio experience?

First, let’s define some acronyms, as they are part of the past and present of Dolby audio in broadcasting. OTA stands for Over The Air, as in what you can receive with an antenna. ATSC stands for Advanced Television Systems Committee, an organization based in the US that standardized HDTV (ATSC 1.0) in the US 20 years ago and is working to standardize Ultra HDTV broadcasts as ATSC 3.0. Ultra HD is referred to as UHD.

Now, some math. Dolby AC-3, which is used with ATSC 1.0, uses up to 384 kbps for 5.1 audio. Dolby AC-4 needs only 128 kbps for 5.1 audio. That increased coding efficiency, along with a maximum bit rate of 640 kbps, leaves 512 kbps to work with. What can be done with that extra 512 kbps?

If you are watching sporting events, Dolby AC-4 allows broadcasters to provide you with the option to select which audio stream you are listening to. You can choose which team’s audio broadcast to listen to, listen to another language, hear what is happening on the field of play, or listen to the audio description of what is happening. This could be applicable to other types of broadcasts, though the demos I have heard, including one at this year’s NAB Show, have all been for sporting events.

Dolby AC-4 allows the viewer to select from three types of dialog enhancement: none, low and high. The dialog enhancement processing is done at the encoder, where it runs a sophisticated dialog identification algorithm and then creates a parametric description that is included as metadata in the Dolby AC-4 bit stream.

What if I told you that after implementing everything I described above in a Dolby AC-4 bit stream, there were still bits available for other audio content? It is true, and Dolby AC-4 is what allows Dolby Atmos, a next-generation, rich and complex object-audio system, to be inside ATSC 3.0 audio streams in the US. At my NAB demo, I heard a clip of Mad Max: Fury Road, which was mixed in Dolby Atmos, from a Yamaha sound bar. I perceived elements of the mix coming from places other than the screen, even though the sound bar was where all of the sound waves originated. Whatever is being done with psychoacoustics to make the experience of surround sound from a sound bar possible, it is convincing.

The advancements in both the coding and presentation of audio have applications beyond broadcasting. The next challenge that Dolby is taking on is mobile. Dolby’s audio codecs are being licensed to mobile applications, which allows them to be pushed out via apps, which in turn removes the dependency from the mobile device’s OS. I heard a Dolby Atmos clip from a Samsung mobile device. While the device had to be centered in front of me to perceive surround sound, I did perceive it.

Years of R&D at Dolby have yielded efficiencies in coding and new ways of presenting audio that will elevate your experience, from home theater to mobile and, once broadcasters adopt ATSC 3.0, Ultra HDTV.

Check out my coverage of Dolby’s Dolby Vision offerings at NAB as well.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

Meet Nutmeg Post’s Chief Technical Engineer Jonathan Abrams

NAME: Jonathan Abrams

COMPANY: Nutmeg Post (They are also on Facebook and Twitter.)

CAN YOU DESCRIBE YOUR COMPANY?
Nutmeg is a full-service creative, marketing and promotions resource. We have a history of post-production specialization that spans more than three decades, and we have now expanded our services to include all phases of creative, production and post — from concept to completion.

WHAT’S YOUR JOB TITLE?
Chief Technical Engineer

WHAT DOES THAT ENTAIL?
Working alongside two other people to design and implement new systems, upgrade and …