Category Archives: Digging Deeper

Mozart in the Jungle

The colorful dimensions of Amazon’s Mozart in the Jungle

By Randi Altman

How do you describe Amazon’s Mozart in the Jungle? Well, in its most basic form it’s a comedy about the changing of the guard — or maestro — at the New York Philharmonic, and the musicians who make up that orchestra. When you dig deeper you get a behind-the-scenes look at the back-biting and crazy that goes on in the lives and heads of these gifted artists.

Timothy Vincent

Based on the novel Mozart in the Jungle: Sex, Drugs, and Classical Music by oboist Blair Tindall, the series — which won the Golden Globe last year and was nominated this year — has shot in a number of locations over its three seasons, including Mexico and Italy.

Since its inception, Mozart in the Jungle has been finishing in 4K and streaming in both SDR and HDR. We recently reached out to Technicolor’s senior color timer, Timothy Vincent, who has been on the show since the pilot to find out more about the show’s color workflow.

Did Technicolor have to gear up infrastructure-wise for the show’s HDR workflow?
We were doing UHD 4K already and were just getting our HDR workflows worked out.

What is the workflow from offline to online to color?
The dailies are done in New York based on the Alexa K1S1 709 LUT. (Technicolor On-Location Services handled dailies out of Italy, and Technicolor PostWorks in New York.) After the offline and online, I get the offline reference made with the dailies so I can look at it if I have a question about what was intended.

If someone was unsure about watching in HDR versus SDR, what would you tell them?
The emotional feel of both the SDR and the HDR is the same. That is always the goal in the HDR pass for Mozart. One of the experiences that is enhanced in the HDR is the depth of field and the three-dimensional quality you gain in the image. This really plays nicely with the feel in the landscapes of Italy, the stage performances where you feel more like you are in the audience, and the long streets of New York just to name a few.

Mozart in the Jungle

When I’m grading the HDR version, I’m able to retain more highlight detail than I was in the SDR pass. For someone who has not yet been able to experience HDR, I would actually recommend that they watch an episode of the show in SDR first and then in HDR so they can see the difference between them. At that point they can choose what kind of viewing experience they want. I think that Mozart looks fantastic in both versions.

What about the “look” of the show? What kind of direction were you given?
We established the look of the show based on conversations and collaboration in my bay. It has always been a filmic look with soft blacks and yellow warm tones as the main palette for the show. Then we added in a fearlessness to take the story in and out of strong shadows. We shape the look of the show to guide the viewers to exactly the story that is being told and the emotions that we want them to feel. Color has always been used as one of the storytelling tools on the show. There is a realistic beauty to the show.

What was your creative partnership like with the show’s cinematographer, Tobias Datum?
I look forward to each episode and discovering what Tobias has given me as palette and mood for each scene. For Season 3 we picked up where we left off at the end of Season 2. We had established the look and feel of the show and only had to account for a large portion of Season 3 being shot in Italy, making sure to capture the different quality of light and the warmth and beauty of that country. We did this by playing with natural warm skin tones and the contrast of light and shadow he was creating for the different moods and locations. The same can be said for the two episodes in Mexico in Season 2. I know now what Tobias likes and can make decisions I’m confident he will like.

Mozart in the Jungle

From a director and cinematographer’s point of view, what kind of choices does HDR open up creatively?
It depends on if they want to maintain the same feel of the SDR or if they want to create a new feel. If they choose to go in a different direction, they can accentuate the contrast and color more with HDR. You can keep more low-light detail while being dark, and you can really create a separate feel to different parts of the show… like a dream sequence or something like that.

Any workflow tricks/tips/trouble spots within the workflow or is it a well-oiled machine at this point?
I have actually changed the way I grade my shows based on the evolution of this show. My end results are the same, but I learned how to build grades that translate to HDR much more easily and consistently.

Do you have a color assistant?
I have a couple of assistants that I work with who help me with prepping the show, getting proxies generated, color tracing and some color support.

What tools do you use — monitor, software, computer, scope, etc.?
I am working on Autodesk Lustre 2017 on an HP Z840, while monitoring on both a Panasonic CZ950 and a Sony X300. I work on Omnitek scopes off the downconverter to 2K. The show is shot on both Alexa XT and Alexa Mini, framing for 16×9. All finishing is done in 4K UHD for both SDR and HDR.

Anything you would like to add?
I would only say that everyone should be open to experiencing both SDR and HDR and giving themselves that opportunity to choose which they want to watch and when.

Digging Deeper: Fraunhofer’s Dr. Siegfried Foessel

By Randi Altman

If you’ve been to NAB, IBC, AES or regional conferences involving media and entertainment technology, you have likely seen Fraunhofer exhibiting or heard one of their representatives speaking on a panel.

Fraunhofer first showed up on my radar years ago at an AES show in New York City when they were touting the new MP3 format, which they created. From that moment on, I’ve made it a point to keep up on what Fraunhofer has been doing in other areas of the industry, but for some, what Fraunhofer is and does is a mystery.

We decided to help with that mystery by throwing some questions at Dr. Siegfried Foessel of Fraunhofer IIS’ Department of Moving Picture Technologies.

Can you describe Fraunhofer?
Fraunhofer-Gesellschaft is an organization for applied research that has 67 institutes and research units at locations throughout Germany. At present, it employs around 24,000 people, the majority of them qualified scientists and engineers, who work with an annual research budget of more than 2.1 billion euros.

More than 70 percent of the Fraunhofer-Gesellschaft’s research revenue is derived from contracts with industry and from publicly financed research projects. Almost 30 percent is contributed by the German federal and Länder governments in the form of base funding. This enables the institutes to work ahead on solutions to problems that will become relevant to industry and society within the next five to ten years.

How did it all begin? Is it a think tank of sorts? Tell us about Fraunhofer’s business model.
The Fraunhofer-Gesellschaft was founded in 1949 and is a recognized non-profit organization that takes its name from Joseph von Fraunhofer (1787–1826), the illustrious Munich researcher, inventor and entrepreneur. Its focus was clearly defined: to do application-oriented research and to develop future-relevant key technologies. Through their research and development work, the Fraunhofer Institutes help to reinforce the competitive strength of the economy. They do so by promoting innovation, strengthening the technological base, improving the acceptance of new technologies and helping to train the urgently needed future generation of scientists and engineers.

What is Fraunhofer IIS?
The Fraunhofer Institute for Integrated Circuits IIS is an application-oriented research institution for microelectronic and IT system solutions and services. With the creation of MP3 and the co-development of AAC, Fraunhofer IIS has achieved worldwide recognition. In close cooperation with partners and clients, the institute provides research and development services in the following areas: audio and multimedia, imaging systems, energy management, IC design and design automation, communication systems, positioning, medical technology, sensor systems, safety and security technology, supply chain management and non-destructive testing. About 880 employees conduct contract research for industry, the service sector and public authorities.

Fraunhofer IIS partners with companies as well as public institutions?
We develop, implement and optimize processes, products and equipment until they are ready for use in the market. Flexible interlinking of expertise and capacities enables us to meet extremely broad project requirements and complex system solutions. We do contracted research for companies of all sizes. We license our technologies and developments. We work together with partners in publicly funded research projects or carry out commercial and technical feasibility studies.

IMF transcoding.

What is the focus of Fraunhofer IIS’ Department of Moving Picture Technologies?
For more than 15 years, our Department Moving Picture Technologies has driven developments for digital cinema and broadcast solutions focused on imaging systems, post production tools, formats and workflow solutions. The department was chosen by the Digital Cinema Initiatives (DCI) to develop and implement the first certification test plan for digital cinema as the main reference for all systems in this area. As a leader in the ISO standardization committee for digital cinema within JPEG, my team and I are driving standardization for JPEG 2000 and formats such as DCP and the Interoperable Master Format (IMF).

We also work together with SMPTE and other standardization bodies worldwide. Among the department’s most respected developments are the Arri D20/D21 camera, the easyDCP post production suite for DCP and IMF creation and playback, and the latest results of our multi-camera/light-field technology work.

What are some of the things you are working on and how does that work find its way to post houses and post pros?
The engineers and scientists of the Department Moving Picture Technologies are working on tools and workflow solutions for new media file formats like IMF to enable smooth integration and use in existing workflows and to optimize performance and quality. As an example, we always enhance and augment the features available through the post production easyDCP suite. The team discusses and collaborates with customers, industry partners and professionals in the post production and digital cinema industries to identify the “most wanted and needed” requirements.

easyDCP

We preview new technologies and present developments that meet these requirements or facilitate process steps. Examples include accelerating IMF or DCP creation using an approach based on hybrid JPEG 2000 functionality, or introducing a media asset management tool for DCP/IMF or dailies. We present our ideas, developments and results at exhibitions such as NAB, the HPA Tech Retreat and IBC, as well as SMPTE conferences and plugfests all around the world.

Together with distribution partners who sell products like easyDCP, Fraunhofer IIS licenses those developments and puts them into the market. The team always looks for customer feedback on its developments, and that feedback is supported by a very active community.

Who are some of your current customers and partners?
We have more than 1,500 post houses as customers, managed by our licensing partner easyDCP GmbH. Nearly all of the Hollywood studios and post houses on all continents are our customers. We also work together with integration partners like Blackmagic and Quantel. Most of the names of our partners in the contract research area are confidential, but to name some partners from the past and present: Arri, DCI, IHSE GmbH.

Which technologies are available for license now?
• Tools for creation and playback of DCPs and IMPs, as standalone tools and for integration into third party tools
• Tools for quality control of DCPs and IMPs
• Tools for media asset management of DCPs and IMPs
• Plug-ins for light-field-processing and depth map generation
• Codecs for mezzanine compression of images

Lightfield tech

What are you working on now that people should know about?
We are developing new tools and plug-ins for bringing lightfield technology to the movie industry to enhance creative opportunities. This includes system aspects in combination with existing post tools. We are chairing and actively participating in ad hoc groups for lightfield-related standardization efforts in the JPEG/MPEG Joint Adhoc Group for digital representations of light/sound fields for immersive media applications (see https://jpeg.org/items/20160603_pleno_report.html).

We are also working together with DIN on a proposal to standardize digital long-term archive formats for movies. Basic work is done with German archives and service providers at DIN NVBF3 and together with CST from France at SMPTE with IMF App#4. Furthermore, we are developing mezzanine image compression formats for the transmission of video over IP in professional broadcast environments and GPU accelerated tools for creation and playback of JPEG 2000 code streams.

How do you pick what you will work on?
The employees at Fraunhofer IIS are very creative people. Through observation of the market, research in joint projects and cooperation with universities, ideas are created and evaluated. Employees and our student scientists discuss with industry partners what might be possible in the near future and which ideas have the greatest potential. Selected ideas are then evaluated with respect to business opportunities and transformed into internal projects or proposed as research projects. Our employees are tasked with working much like our eponym, Joseph von Fraunhofer, as researchers, inventors and entrepreneurs — all at the same time.

What other “hats” do you wear in the industry?
As mentioned earlier, Fraunhofer is involved in standardization bodies and industry associations. For example, I chair the Systems Group within ISO SC29WG1 (JPEG) and the post production group within ISO TC36 (Cinematography). I am also a SMPTE governor (EMEA and Central and South America region) and a SMPTE fellow, along with supporting SMPTE conferences as a program committee member.

Currently, I am president of the German Society Fernseh- und Kinotechnische Gesellschaft (FKTG) and am involved in associations like EDCF and ISDCF. Additionally, I’m a speaker for the German VDE/ITG society in the area of media technology. Last, but not least, I chair the German standardization body at DIN for NVBF3 and advise the German federal film board on questions related to new technical challenges in the film industry.

Digging Deep: Sony intros the PXW-FS7 II camera

By Daniel Rodriguez

At a press event in New York City a couple of weeks ago, Sony unveiled the long-rumored follow-up to its extremely successful PXW-FS7 — the Sony PXW-FS7 II. With the new FS7 II, Sony dives deeper into the mid-level cinematographer/videographer market that it firmly established with the FS100, FS700, FS7 and the more recent FS5.

Knowing it is competing with similarly priced cameras from other brands, Sony has built upon a line that fulfills most technical and ergonomic needs. Sony prides itself on listening to videographers and cinematographers who make requests and suggestions from first-hand field experience, and it’s clear the company has continued to listen.

New Features
The Sony FS7 II might be the first camera where you can feel Sony’s deep care and consideration for those who have used the FS7 extensively. Although the body and overall design might seem nearly identical to the original FS7, the FS7 II makes subtle but important ergonomic improvements to the camera’s design.

Improving on its E-mount design, Sony has introduced a lever-locking mechanism much like the way a PL mount functions. Unlike a PL mount, the new lever lock rotates counter-clockwise, but it provides a massive amount of support, especially since there is a secondary latch that prevents you from accidentally turning the lever back. The mount has been tested to support the same weight as traditional PL mounts, and larger cinema zooms can be easily mounted without the need for a lens support. Due to its short flange distance, Sony’s E-mount has become very popular with users for adapting almost all stills and cinema lenses to Sony cameras, and with this added support there is reduced risk and concern when adding lens adapters.

The camera body’s corners and edges have all been rounded out, giving users much more comfortable control of the camera. This is especially helpful for handheld use when the camera might be pressed up against someone’s body or under their arm. Considering things like operating below the underarm and at the waist, Sony has redesigned the arm grip, and most of the body, to be tool-less. The arm grip no longer requires tools to be adjusted and now uses two knobs for easy adjustments. This saves much-needed time and maximizes comfort.

The viewfinder can now be extended further in either direction with a longer rod, which benefits left-eye-dominant operators. The microphone holder is no longer permanently attached to the other side of the rod, so it can be moved to the left side of the camera to allow viewing the monitor to the right of the camera, or removed altogether. Sony has also made the viewfinder collapsible for those who’d rather just view the monitor. The viewfinder rod is now square-shaped to keep the viewfinder horizontally aligned with the camera’s balance. This addresses a problem where operators believed their framing was crooked because of how the viewfinder was aligned, even when the camera was perfectly balanced.

Sony really kept the smaller suggestions in mind by making the memory card slots protrude more than on the original FS7. This allows loaders to more easily access the memory cards should they be wearing something that inhibits their grip, like gloves. The camera is also compatible with the newer G-series XQD cards, which boast impressive 440MB/s read and 400MB/s write speeds, allowing FS7 II users to quickly dump their footage in the field without worrying about running out of useable memory cards.

Straight out of the box, the FS7 II can do internal 4K DCI (4096×2160) without the need for upgrades or HDMI output. This 4K can be captured in nearly every codec, whether XAVC, ProRes 422 HQ or RAW, with the option of HyperGammas, S-Log3 or basic Rec. 709. RAW output is available, but as with its siblings, an external recorder is still required. The FS7 II will also be capable of recording Sony’s version of compressed RAW, XOCN, which allows 16-bit 3:1 recording to an external recorder. Custom 3D LUTs can still be uploaded into the camera, which allows more of a cinematographer’s touch than factory presets.

Electronic Internal Variable ND
The most exciting feature of the Sony FS7 II — and the one that really separates this camera from the FS7 — is the introduction of an Electronic Internal Variable ND. The feature originally appeared in the FS5, but the new options the FS7 II offers over the FS5 make this a very promising camera and an improvement over its older sibling.

Oftentimes with similarly priced cameras, or ones that offer the same options, there is either no internal ND or limited internal ND control that is either too much or not enough when it comes to exposure control. The term variable ND is also approached with caution by videographers and cinematographers concerned about color shifts and infrared pollution, but Sony has addressed these concerns by placing an IR cut filter over the sensor. This way, no level of ND will introduce color shifts or infrared pollution. It’s also easy to break the bank buying IR NDs to prevent infrared pollution, and the constant swapping of ND filters can cost valuable time, which could also lead you to open or close your f-stop to compensate.

Compromising your F-stop is often an unfortunate reality when shooting — indoors or outdoors — and it’s extremely exciting to have a feature that allows you to adjust your exposure flawlessly without worrying about having the right ND level or adjusting your F-stop to compensate. It’s also exciting to know that you can adjust the ND filter without having to see a literal filter rotate in front of your image. The Electronic Variable ND can be adjusted from the grip as well, so you can essentially ride the iris without having to touch your F-stop and risk your depth of field being inconsistent.

As with most modern-day lenses that lack manual iris control, riding the iris is simply out of the question due to mechanical “clicked” irises and the very obvious exposure shift when changing the f-stop on one of these lenses. This is eliminated by letting the variable ND do all the work and allowing you to leave your f-stop untouched. The Electronic Variable ND in manual mode allows you to smoothly transition between 0.6ND and 2.1ND in one-third increments.
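
As a point of reference (my own arithmetic, not from Sony’s documentation), ND density converts to stops of light loss by dividing by log10(2), roughly 0.3, so that range works out to about two to seven stops:

```python
import math

# Convert neutral-density optical density to stops of light loss.
# Transmission = 10^(-density); stops = log2(1 / transmission).
def nd_to_stops(density):
    return density / math.log10(2)

for density in (0.6, 0.9, 1.2, 1.5, 1.8, 2.1):
    print(f"ND {density:.1f} is roughly {nd_to_stops(density):.1f} stops")

# ND 0.6 ~ 2.0 stops ... ND 2.1 ~ 7.0 stops, the FS7 II's stated manual range
```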

Recording in BT.2020
Another exciting new addition to the FS7 II is the ability to record in BT.2020 (more commonly known as Rec. 2020) internally in UHD. While this might seem excessive to some, considering this camera is still a step below its siblings, the F55 and F65, for productions where HDR deliverables are required, providing the option to shoot Rec. 2020 futureproofs this camera for years to come, especially as Rec. 2020 monitoring and projection become the norm. Companies like Netflix usually request an HDR deliverable for their original programs, so despite the FS7 II not being on the same level as the F55/F65, it shows it can deliver the same level of quality.

While the camera can’t boast a global shutter like its bigger sibling, the F55, the FS7 II does show a very capable rolling shutter with little to no skewing. In the FS7 II’s case, keeping a rolling shutter is preferable: for a camera that leans slightly toward the commercial/videography end of cinematography, it’s better to retain the native ISO of 2000 and the full 14 stops than to gain a global shutter, whose cost in much-needed dynamic range is easy to overlook.

This exclusion of a global shutter keeps the native ISO of the FS7 II at 2000, the same as the previous FS7. Retaining this native ISO puts the FS7 II above many similarly priced video cameras, whose native ISOs usually sit at 800. While the FS7 II may not be a low-light beast like the Sony a7S/a7S II, the ability to do internal 4K DCI, shoot higher frame rates and record 10-bit 422 HQ (and even RAW) greatly outweighs this loss in exposure.

The SELP18110G 18-110 F4.0 Servo Zoom
Alongside the FS7 II, Sony has announced a new zoom lens to be released with the camera. Building off what it introduced with the Sony FE PZ 28-135mm F4 G, the 18-110mm F4 is a very powerful lens optically and the perfect companion to the FS7 II. The lens is sharp to the edges; doesn’t drop focus while zooming in and out; has no breathing whatsoever; has quiet internal zoom, iris and focus control; internal stabilization; and a 90-second zoom crawl from end to end. The lens covers Super 35mm and APS-C-sized sensors and retains a constant f/4 throughout the focal range.

Its multi-coating allows for high contrast and low flaring, with circular bokeh to give truly cinematic images. Despite its size, the lens weighs only 2.4 pounds, a weight easily supported by the FS7 II’s lever-locking E-mount. Though it isn’t an extremely fast lens, paired with a camera like the FS7 II, which has a native ISO of 2000, the 18-110mm F4 should prove to be a very useable lens in the field as well as in narrative work.

Final Impressions
This camera is very specifically designed for camerapersons who either have a very small camera team or shoot as individuals. Many of the new features, big and small, are great additions for making any project go down smoothly and nearly effortlessly. While its bigger siblings the F55 and F65 will still dominate major motion picture production and commercial work, this camera has all its corners covered to fill the freelance videographer/cinematographer’s needs.

Indie films, short films, smaller commercial and videography work will no doubt find this camera hugely beneficial and about as headache-free as possible. Speed and efficiency are often the biggest advantages on smaller productions, and this camera easily handles and facilitates the most overlooked aspects of video production.

The specs are hard to pass up when discussing the Sony FS7 II. A camera that does internal 4K DCI with the option of high frame rates at 10-bit 422 HQ, 14 stops of dynamic range and the option to shoot in S-Log3 or one of the many HyperGammas for faster deliverables should immediately excite any videographer/cinematographer. Many cinematographers making feature or short films have grown accustomed to shooting RAW, and unless they rent or buy the external recorder, they will be unable to do so with this camera. But given the high data rates of the internal codecs, it’s hard to argue that much is lost: despite a few minor features being absent, the internal video retains a massive amount of information.

This camera truly delivers on nearly every ergonomic and technical need, and by anticipating future display formats with Rec. 2020, Sony shows it is very conscious of future-proofing this camera. The physical improvements show that Sony is very open and eager to hear suggestions and first-hand experiences from FS7 users, and no doubt any suggestions on the FS7 II will be taken into account.

The Electronic Variable ND is easily the best feature of the camera, since so much time in the field will be saved by not having to swap NDs, and the ability to shift through increments between the standard ND levels will be hugely beneficial for getting your exposure right. Being able to adjust exposure mid-shot without having filters come between you and the image will be a great feature for those shooting outdoors or working events where the lighting is uneven. Speed cannot be emphasized enough, and such a massively advantageous feature just keeps cutting time from whatever production you’re working on.

Pairing the camera with the new 18-110mm F4 makes a great package for location shooting, since you will be covered for nearly every focal length and have a sharp lens with servo zooming, internal stabilization and low flaring. The lens might be off-putting to some narrative filmmakers, since it only opens to f/4.0 and isn’t fast by other lens standards, but with its quality and attention to optical performance, it should be considered seriously alongside other lenses that aren’t quite cinema glass but have been used heavily in the narrative world. With the native ISO of 2000, one should be able to shoot comfortably wide open or closed down with proper lighting, and for films done mostly in natural light this lens should be highly considered.

Oftentimes when choosing a camera, the biggest question isn’t what the camera has but what it will cost. Since Sony isn’t discontinuing the original FS7, the FS7 II will be more expensive, and when you factor in BP-U60 batteries and XQD cards, the price will only climb. Despite this, one must always consider the price of storage and power when upgrading a camera system. More powerful cameras will no doubt require faster cards and bigger power supplies, so these costs must be seen as investments.

While XQD cards might seem pricey to some, especially those more familiar with buying and using SD cards, I consider jumping into the XQD world a necessary step in developing your video capabilities. High-speed media such as XQD and CFast cards are becoming the norm in higher-end digital cinema, which is especially relevant when the FS7 II is being heavily considered.

Compromise is often expected in any level of production, be it technically, logistically or artistically. After getting an impression of what the FS7 II can provide and facilitate in any production scenario I feel this is one of the few cameras that will take away feelings of compromise from what you as a user can provide.

The FS7 II will be available in January 2017 for an estimated street price of $10,000 (body only) and $13,000 for the camcorder with 18-110mm power zoom lens kit.


Daniel Rodriguez is a cinematographer and photographer living in New York City. Check out his work here. Dan took many of the pictures featured in this article.


Capturing the Olympic spirit for Coke

By Randi Altman

There is nothing like the feeling you get from a great achievement, or spending time with people who are special to you. This is the premise behind Coke’s Gold Feelings commercial out of agency David. The spot, which aired on broadcast television and via social media and exists in 60-, 30- and 15-second iterations, features Olympic athletes at the moment of winning. Along with the celebratory footage, there are graphics featuring quotes about winning and an update of the iconic Coke ribbon.

The agency brought in Lost Planet, Black Hole’s parent company, for graphics, editing and final finishing. Lost Planet provided editing while Black Hole provided graphics and finishing.

Tim Vierling

Still feeling the Olympic spirit, we reached out to Black Hole producer Tim Vierling to find out more.

How early did you get involved in the project?
Black Hole became involved early on in the offline edit, when the team was initially conceptualizing how to integrate graphics. We worked with the agency creatives to lay out the supers and helped determine which approach would be best.

How far along was it in terms of the graphics at that point?
Whereas the agency established the print portion of the creative beforehand, much of the animation was undiscovered territory. For the end tag, Black Hole animated various iterations of the Coke ribbon wiping onto screen and carefully considered how this would interact with each subject in the end shots.

We then had to update the existing disc animation to complement the new and improved/iconic Coke ribbon. The titles/supers that appear throughout the spot were under constant scrutiny — from tracking to kerning to font type. We held to a rule that type could never cross over an athlete’s face, which led to some clever thinking. Black Hole’s job was to locate the strongest moments to highlight and rotoscope various body parts of the athletes, having them move over and behind the titles throughout the spot.

What was the most challenging part of the project?
Olympics projects tend to have a lot of moving parts, and there were some challenges caused by licensing issues, forcing us to adapt to an unusually high amount of editorial changes. This, in turn, resulted in constant rotoscoping. Often a new shot didn’t work well with the previous supers, so they were changing as frequently as the edit. This forced us to push the schedule, but in the end we delivered something we’re really proud of.

What tools did you use?
Adobe After Effects and Photoshop, Imagineer Mocha and Autodesk Flame were all used for finishing and graphics.

A question for Lost Planet’s assistant editor Steven san Miguel: What direction were you given on the edit?
The spots were originally boarded with supers on solid backgrounds, but Lost Planet editors Kimmy Dube and Max Koepke knew this wouldn’t really work for a 60-second. It was just too much to read and not enough footage. Max was the first one to suggest a level of interactivity between the footage and the type, so from the very beginning we were working with Black Hole to lay out the type and roto the footage. This started before the agency even sat down with us. And since the copy and the footage were constantly changing there had to be really close communication between Lost Planet and Black Hole.

Early on the agency provided YouTube links for footage they used in their pitch video. We scoured the YouTube Olympic channel for more footage, and as the spot got closer to being final, we would send the clips to the IOC (International Olympic Committee) and they would provide us with the high-res material.

Check out the spot!


Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let’s consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer. The quantizer determines which bits are active or on (value = 1) and which are inactive or off (value = 0). As the bit depth used to represent a finite range increases, so does the precision of each possible value, which directly reduces quantization error. The number of possible values is 2^X, where X is the number of bits available. A 10-bit signal has four times as many possible encoded values as an 8-bit signal. This difference in bit depth does not equate to dynamic range; it is the same range of values with a degree of quantization accuracy that increases as the number of bits used increases.
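
To make that concrete, here is a small Python sketch (my own illustration, not from Dolby’s materials) of how the number of code values, and the resulting step size across a fixed range, scales with bit depth:

```python
# Code-value count and linear quantization step vs. bit depth.
# Illustrative only: real video uses narrow-range code values and
# non-linear transfer functions, not a straight linear mapping.

def code_values(bits):
    """Number of possible encoded values for a given bit depth."""
    return 2 ** bits

def step_size(bits, full_range=100.0):
    """Linear quantization step across a fixed range (here 0-100 nits)."""
    return full_range / (code_values(bits) - 1)

for bits in (8, 10, 12):
    print(f"{bits}-bit: {code_values(bits):>5} values, "
          f"step of about {step_size(bits):.4f} nits over a 0-100 nit range")

# 8-bit:    256 values, step of about 0.3922 nits
# 10-bit:  1024 values (4x the 8-bit count), step of about 0.0978 nits
# 12-bit:  4096 values, step of about 0.0244 nits
```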

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system specifically for this application; it has since been standardized as SMPTE ST-2084, SMPTE’s standard for an electro-optical transfer function (EOTF) based on a perceptual quantizer (PQ). This builds on work done in the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ process allows video to be encoded and displayed with a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a human visual characteristic: our eyes are less sensitive to changes in highlights than to changes in shadows.
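
For readers who want to see the math, the PQ curve itself is public. Below is a minimal Python sketch of the ST-2084 EOTF and its inverse using the constants published in the standard; the function names are mine, and this is an illustration rather than Dolby’s implementation:

```python
# SMPTE ST-2084 perceptual quantizer (PQ), using the published constants.
# pq_eotf maps a normalized code value (0.0-1.0) to absolute luminance in
# nits (cd/m^2); pq_inverse_eotf goes the other way.

M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875
PEAK = 10000.0           # nits

def pq_eotf(code):
    """Decode a normalized PQ code value to luminance in nits."""
    e = code ** (1.0 / M2)
    return PEAK * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)

def pq_inverse_eotf(nits):
    """Encode luminance in nits to a normalized PQ code value."""
    y = (nits / PEAK) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

print(round(pq_inverse_eotf(100), 3))  # ~0.508: SDR peak sits near mid-signal
print(round(pq_eotf(1.0)))             # 10000: full code value is 10,000 nits
```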

Previous display systems, referred to as SDR or Standard Dynamic Range, are usually 8 bits. Even at 10 bits, SD and HD video is specified to be displayed at a maximum output of 100 nits using a gamma curve. Dolby Vision has a nit range that is 100 times greater than what we have been typically seeing from a video display.

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses incrementally more bandwidth when compared to a signal that contains only SDR video.  For on-demand services, this dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source will be used to identify whether or not the TV you are using is capable of SDR, HDR10 or Dolby Vision.

Broadcasting live events using Dolby Vision is currently a challenge for reasons beyond HDTV not being able to support the different signal. The challenge is due to some issues with adapting the Dolby Vision process for live broadcasting. Dolby is working on these issues, but Dolby is not proposing a new system for Dolby Vision at live events. Some signal paths will be replaced, though the infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of R&D at Dolby has gone into Dolby Vision. While Dolby Vision has some competition in the HDR war from Technicolor and Philips (Prime) and from the BBC and NHK (Hybrid Log-Gamma, or HLG), it has an advantage in that several TV models from both LG and Vizio are already Dolby Vision compatible. If Dolby’s continued investment in R&D for solving the issues related to live broadcast results in a solution that broadcasters can successfully implement, it may become the de facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

Quick Chat: Cut + Run’s Jay Nelson on editing ‘The Bronze’

Who doesn’t like the story of someone overcoming a physical injury in sport and succeeding? (Think Curt Schilling’s bloody ankle during the 2004 World Series.) It’s how legends are made, but what happens after the applause has stopped and the reporters stop requesting interviews? Well, this is the premise of the new comedy The Bronze, from director Bryan Buckley.

The film shines a light on gymnast Hope Ann Greggory (Melissa Rauch), whose performance on a ruptured Achilles during the Olympics clinched a bronze medal for the US team — but things went downhill from there. In the years since capturing the medal, she’s still living in her father’s basement, still wearing her Team USA gym suit and sporting some crazy bangs, a ponytail and a scrunchie. She spends most days at the mall enjoying her minor celebrity while being unpleasant and rude. All of that changes when she is asked to coach her hometown’s newest gymnastics prodigy.

Jay Nelson

Director Buckley called on Cut + Run’s Jay Nelson to edit The Bronze, from Sony Pictures Classics. We reached out to LA-based Nelson, who used Avid Media Composer on the film, to find out more about the workflow and how he collaborated with the director.

How did you get involved in the film?
I had been working with Bryan for a couple of years, and he had been developing the idea with Melissa and Winston Rauch for about six months and he asked me if I’d want to be involved. He gave me the script, but I didn’t really need to read it — if Bryan asks if you want to do a film with him, you do it. Then I read the script and I thought it was hilarious and bold.

What are some things you enjoy about working with Buckley?
He is always available for you, no matter how busy he is. Also, he covers exactly what I need to make an edit great, which makes my job a heck of a lot easier. We have a really amazing shorthand with each other. We have the same taste in comedy. But my favorite part about working with Bryan is that I am constantly learning from him, and not just about filmmaking… about life. And we laugh a hell of a lot.

Can you talk about any challenges during the editing process?
The approval process was very long. We had to answer to a lot of masters. I showed an edit a week after they finished shooting, then we spent six months revising that cut. The hardest part about the revisions was shaving the last four minutes out of the film. It was a very painful process getting it to 90 minutes.

How was it to premiere at Sundance?
Exhilarating. I’ve submitted four films to Sundance over the years and none of them ever made the cut for one reason or another. It’s always a roll of the dice; there are so many factors that contribute to a film’s success with the review process. To finally be there after all these years and experience seeing a first run of the film with a massive crowd was truly incredible. And to see lines of people just to be on the waiting list to get in was total vindication for all the work we put into it.

What’s the biggest lesson you learned?
The lessons I learned on this film weren’t so much about the process of making a film, but rather the process of bringing a film to market. Just making a great movie doesn’t mean a film is going to have success. It was almost 16 months from the time we premiered at Sundance to the final release of The Bronze, and a lot of stuff happened during that time. Relativity went out of business, then Sony Classics rescued the film, and then there were several delays pertaining to the release date.

I say it on every film I do — there are no guarantees. If you’re going to do a film, you gotta be willing to do it for the love of making a picture. Success is not imminent. In the end, I’m really proud of The Bronze, and proud we were able to share it with a wide audience. I think it’s going to have a great long life down the road. I think that sex scene alone will be kept in a hall of fame of some sort (laughs). That is the great thing about making movies: you have the opportunity to create something that can stay around after you’re gone.

If you could compete in the Olympics, your sport would be?
I always dreamed of winning a gold in hockey. It certainly wouldn’t be gymnastics. After sitting in an editing chair for as long as I have been, maybe I’d be better off pursuing curling or something like that.

——–
Check out The Bronze’s trailer.

Digging Deeper: NASA TV UHD executive producer Joel Marsden

It’s hard to deny the beauty of images of Earth captured from outer space. And NASA and partner Harmonic agree, boldly going where no one has gone before — creating NASA TV UHD, the first non-commercial consumer UHD channel in North America. Leveraging the resolution of ultra high definition, the channel gives viewers a front row seat to some gorgeous views captured from the International Space Station (ISS), other current NASA missions and remastered historical footage.

We recently reached out to Joel Marsden, executive producer of NASA TV UHD, to find out how this exciting new endeavor reached “liftoff.”

Joel Marsden

This was obviously a huge undertaking. How did you get started and how is the channel set up?
The new channel was launched with programming created from raw video footage and imagery supplied by NASA. Since that time, Harmonic has also shot and contributed 4K footage, including video of recent rocket launches. They provide the end-to-end UHD video delivery system and post production services while managing operations. It’s all hosted at a NASA facility managed by Encompass Digital Media in Atlanta, which is home to the agency’s satellite and NASA TV hubs.

Like the current NASA TV channels, and on the same transponder, NASA TV UHD is transmitted via the SES AMC-18C satellite, in the clear, with a North American footprint. The channel is delivered at 13.5Mbps, as compared with many of the UHD demo channels in the industry, which have required between 50 and 100 Mbps. NASA’s ability to minimize bandwidth use is based on a combination of encoding technology from Harmonic in conjunction with the next-generation H.265 HEVC compression algorithm.

Can you talk about how the footage was captured and how it got to you for post?
When the National Aeronautics and Space Act of 1958 was created, one of the legal requirements of NASA was to keep the public apprised of its work in the most efficient means possible and with the ultimate goal of bringing everyone on Earth as close as possible to being in space. Over the years, NASA has used imagery as the primary means of demonstration. The group in charge of these efforts, the NASA Imagery Experts Program, provides the public with a wide array of digital television, web video and still images based on the agency’s activities. Today, NASA’s broadcast offerings via NASA TV include an HD consumer channel, an HD media channel and an SD education channel.

In 2015, the agency introduced NASA TV UHD. Naturally, NASA archives provide remastered footage from historical missions and shots from NASA’s development and training processes, all of which are used for production of broadcast programming. In fact, before the agency launched NASA TV, it had already begun production of its own documentary series, based on footage collected during missions.

Just five or six years ago, NASA also began documenting major events in 4K resolution or higher. The agency has been using 6K Red Dragon digital cinema cameras for some time. NASA TV UHD video content is sourced from high-resolution images and video generated on the ISS, Hubble Space Telescope and other current NASA missions. The raw content files are then sent to Harmonic for post.

Can you walk us through the workflow?
Raw video files are mailed on physical discs or sent via FTP from a variety of NASA facilities to Harmonic’s post studio in San Jose and stored on the Harmonic MediaGrid system, which supports an edit-in-place workflow with Final Cut Pro and other third-party editing tools.

During the content processing phase, Harmonic uses Adobe After Effects to paint out dead pixels that result from the impact of cosmic radiation on camera sensors. They have built bad-pixel maps that they use in post production to remove the distracting white dots from the picture. The detail of UHD means that the footage also shows scratches on the windows of the ISS through which the camera is shooting, but these are left in for authenticity.
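
As a generic illustration of how a bad-pixel map can be applied (a sketch of the general idea, not Harmonic’s actual pipeline), flagged pixels are typically replaced with a statistic of their surviving neighbors:

```python
import numpy as np

# Generic bad-pixel repair sketch; not Harmonic's tools or workflow.
def repair_dead_pixels(frame, bad_pixel_mask, radius=1):
    """Replace flagged pixels with the median of their local neighborhood.

    frame:          2D (grayscale) or 3D (height x width x channels) array
    bad_pixel_mask: boolean array, True where a sensor pixel is dead or hot
    """
    fixed = frame.copy()
    height, width = bad_pixel_mask.shape
    for y, x in zip(*np.nonzero(bad_pixel_mask)):
        y0, y1 = max(y - radius, 0), min(y + radius + 1, height)
        x0, x1 = max(x - radius, 0), min(x + radius + 1, width)
        patch = frame[y0:y1, x0:x1]
        good = ~bad_pixel_mask[y0:y1, x0:x1]   # ignore other flagged pixels
        fixed[y, x] = np.median(patch[good], axis=0)
    return fixed
```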

 

A Blackmagic DaVinci Resolve is used to color grade footage, and Maxon Cinema 4D Studio is used to create animations of images. Final Cut Pro X and Adobe Creative Suite are used to set the video to music and add text and graphics, along with the programming name, logo and branding.

Final programs are then transferred in HD back to the NASA teams for review, and in UHD to the Harmonic team in Atlanta to be loaded onto the Spectrum X for playout.

————

You can check out NASA TV’s offerings here.


‘Late Night with Seth Meyers’ associate director of post Dan Dome

This long-time editor talks about his path to late night television

By Randi Altman

You could say that editing runs through Dan Dome’s veins. Dome, associate director of post at Late Night with Seth Meyers, started in the business in 1994 when he took a job as a tape operator at National Video Industries (NVI) in New York.

Dome grew up around post — his dad, Art, was a linear videotape editor at NVI, working on Shop Rite spots and programming for a variety of other clients. Art had previously edited commercials for such artists as Kiss and was awarded a gold record for Kiss Alive 2. Dome loved to go in and watch his dad work. “I saw that there were a lot of machines and I knew he put videos together, but I was completely clueless to what the real process was.”

Dome’s first job at NVI was working in the centralized machine room as a tape operator. “I learned how to read a waveform monitor and a vectorscope, how to patch up Betacam SP, 1-inch and D2 machines to linear edit rooms, insert stages, graphics and audio suites. I also learned how to change the timings of the switcher through a proc amp — the nuts and bolts.”

This process proved to be invaluable. “Being able to have an understanding of signal flow on the technical side helped a ton in my career,” he explains. “A lot of post jobs are super technical. You’ve got to know the software and you’ve got to know the computers and machines; those were the fundamentals I learned in the machine room. I had to learn it all.”

While at NVI, nonlinear editing via the Avid Media Composer came on the scene. Dome took every advantage to learn this new way of working. After his 4pm-to-midnight shift as tape op, he would stay in the Avid rooms learning all he could about the software. He also befriended an editor who rented space at NVI. Christian Giornelli allowed Dome to shadow him on the Media Composer and, later, assist on different projects.

After a time, he became comfortable on the Avid. “I was working as a tape operator and editing at night at NVI, cutting promos and show reels. The first professional editing job I had was cutting the Neil Peart Test For Echo instructional drum video. Shortly after editing that video, I left NVI to pursue freelance work.”

A year into freelancing, Dome began working for the NBC promo department, handling a little bit of everything, including cutting promos for various shows and sales tapes. During this time, freelance work also brought him to Broadway Video, MSNBC, MTV and VH1 — he was steadily building up a nice resume.

Let’s find out more about his path and how he landed at Late Night with Seth Meyers...

When you and I were first in contact, you were out in LA working on the Conan O’Brien show on TBS.
Yes. My early work with NBC’s promo department led me to NBC’s post team, and they started booking me on gigs for Dateline, The Today Show and, every now and then, as an editor on Late Night With Conan O’Brien. I developed a great relationship with the writers and the other editors on Conan. When the transition to HD happened, they chose to use a nonlinear system, Avid DS, which I learned. That helped me work on the first HD season of Saturday Night Live through the 2008 season.

Late Night With Conan O’Brien had two primary show editors and another editor cutting remote packages. I started falling into that group more and more, and close to the time Conan was taking over for Jay Leno on The Tonight Show, two of the editors retired. Another editor and I ended up seeing Conan’s Late Night off the air.

During that time, I developed a great relationship with the associate director and mentioned that I wouldn’t mind moving out to California if they needed an editor. They did. I started working with Conan on The Tonight Show and continued on when he went over to do Conan on TBS. All in all, I was out in LA for almost five years.

How did you end up back in New York and working on Seth Meyers?
While I did enjoy California, I got a little homesick. I heard that Seth was taking over for Late Night from Jimmy Fallon and threw my hat in the ring for that show.

Let’s talk about Seth’s show. Did you help to set up the post workflow?
Late Night with Seth Meyers was the third show I’d launched as a lead editor — there was Conan’s Tonight Show, then Conan’s show on TBS and then Late Night with Seth. It was great to be at Late Night from the ground up, and I’m now there with the title of associate director/lead editor.

I worked with our engineers at NBC on folder structure on the SAN, what our workflow was going to be, what NLE we were going to use, what plug-ins we needed — we worked very closely on workflow and how we were going to deliver the show to air and the web.

A lot of those systems had already been in place, but there are always new technologies to consider. We went through the whole workflow and got it as streamlined as possible.

You are using Adobe Premiere, can you tell us why that was the right tool for this show?
Well, Final Cut 7 wasn’t going to grow any more, and I wasn’t convinced about Final Cut X. If we went with Avid, we’d need all the Avid-approved gear. We already knew we were going to be on Macs, and we would use AJA Kona cards with Premiere. We based this show’s post model off some of the other shows already using Premiere.

Do you use other parts of the Adobe suite?
The entire post team is using Creative Cloud. I edit, and I have an editor, Devon Schwab, and an assistant, Tony Dolezal. We’re primarily working in Premiere, Audition and Media Encoder. Our graphics artists are in Illustrator, Photoshop and After Effects. Every now and then we editors will dip into After Effects if we need to rotoscope something out, or we’ll use Mocha Pro for motion tracking when something in the show has to be censored or if we are making mattes for color grading.

You guys are live-to-tape — could you walk us through that?
We shoot the show live-to-tape between 6:30pm and 7:30pm. During the first act I’m watching the show as well as listening to the director, the production AD and the control room from my edit suite. If there are camera ISO fixes that need to be addressed, I’m hearing that from the director. If there are any issues with standards, like a word has to be bleeped or content has to be removed, I’m getting those notes from the producers and from the lawyers.

Tony, Dan and Devon.

As soon as the first act is done, my assistant stops ingest and then starts it back up again, so now I have act one: seven ISO cameras and one program record. The program record file has the show as it’s cut for the audience, so all the graphics are already baked into it, and it’s a 5.1 mix coming from our audio rooms. I bring those eight QuickTime files into Premiere through an app called Easy Find and start laying the show out.

I try and finish all that needs to be done in the first act by the time the second act of the show is done being ingested. Once all six acts are done, we’ll have a good idea if the show’s over or under in time. If it’s over, we figure out what we are going to cut. If it’s a little bit under, let’s say 20 or 30 seconds, then we may decide to run credits that night.

So taping is done by 7:30?
Yes. At that point the director, the show producers, segment producers and writers come down. We start editing the entire show together for air. By then I’ve already built the main project for the show to be edited. I then save a version of my project for my editor and my assistant editor and assign acts for them to edit.

How many do you cut personally?
I’ll usually end up doing three out of the six acts. My editor will do two interview acts, and my assistant will do one, usually the musical act. As the show is being put together for air, I keep track of the show time on an Excel spreadsheet. There’s a lot of communication among us during this time.

Once I do have the show close to time, I start sending individual acts to the Broadcast Operations Center at NBC, so they can start their QC process. That’s between 8:00pm and 8:15pm. As they are getting the six acts and they’ve begun to QC them, I release my timing sheet so they can confirm the show is on time. It’s 41 minutes and 20 seconds, and they get it ready to go to what they call a “composite” after QC. They composite the show between 10:30pm and 11:30pm with all the commercials put in. I’m completely done for the night when the show hits the air at 12:35am… if there have been no emergencies.

Taking a step back, how do you cut the pre-packaged bits?
Usually those go to my editor Devon. He will be editing and mixing audio, and I will be doing the color grade — all within Premiere. If it’s a two- or three-camera shoot, I’ll get a look established for the A, B and C cameras and have the segment director give notes on the color grade. Once the grade is approved, Devon can then just apply the color to the finished piece. Sometimes we are finessing pre-tapes right up until show record time at 6:30pm.

One recent color grade I did, that Devon edited, was a pre-taped piece called Reasonable Max, which was about Seth’s deleted scenes from the film Mad Max: Fury Road.

Anything you want to add before we wrap up?
I feel very lucky to have had all these experiences in the TV business. I want to thank my dad for introducing me to it and all the people who helped me get where I am today. The most talented people in the business staff all the shows that I have been lucky enough to work on. Watch Late Night With Seth Meyers, weeknights at 12:35 on NBC!


Digging Deeper: Endcrawl co-founder John ‘Pliny’ Eremic

By Randi Altman

Many of you might know John “Pliny” Eremic, a fixture in New York post. When I first met Pliny he was CTO and director of post production at Offhollywood. His post division was later spun off and sold to Light Iron, which was in turn acquired by Panavision.

After Offhollywood, Pliny moved to HBO as a workflow specialist, but he is also the co-founder — with long-time collaborator Alan Grow — of Endcrawl.com, a cloud-based tool for creating end titles for film and television.

Endcrawl has grown significantly over the last year and counts both modest indies and some pretty high-end titles as customers. I figured it was a good time to dig a bit deeper.

How did Endcrawl come about?
End titles were always a huge thorn in my side when I was running the post boutique. The endless, manual revision process is so time intensive that a number of major post houses flat-out refuse to offer this service any more. So, I started hacking on Endcrawl to scratch my own itch.

Both you and your co-founder Alan are working media professionals. Can you talk about how this affected the tool and its evolution?
Most filmmakers aren’t hackers; most coders never made a movie. As a result, many of this industry’s tools are built by folks who are incredibly smart but may lack first-hand post and filmmaking experience. I’ve felt that pain a lot.

Endcrawl is built by filmmakers for filmmakers. We have deep, first-hand experience with file-based specs and formats (DCI, IMF, AS-02), so our renders are targeted at these industry-standard delivery specifications. Occasionally we’re even able to steer customers away from a bad workflow decision.

How is this different than other end credit tools in the world?
For starters we offer unlimited renders.

Why unlimited renders?
This was a mantra from day one. There’s always “one last fix.” A typical indie feature with Endcrawl will keep making revisions six to 12 months after calling it final. That’s where a flat rate with unlimited do-overs comes in very handy. I’ve seen productions start with a $2-3k quote from a designer, and end up with a $6-10k bill. That’s just for the end credits. We’re not interested in dinging you for overages. It’s a flat rate, so render away.

What else differentiates Endcrawl?
Endcrawl is a cloud tool that’s designed to manage the end titles process only — that is its reason for being. So speed, affordability and removing workflow hassles are our goals.

How do people traditionally do end titles?
Typically there are three options. One is using a title designer. This option costs a lot and they might want to charge you overages after your 89th revision.

There are also do-it-yourself options using products from Adobe or Autodesk, and while these are great tools, the process is extremely time consuming for this use — I’d estimate 40-plus hours of human labor.

Finally, there are affordable plug-ins, but they deliver, in my opinion, cheap-looking results.

Do you need to be a designer to use Endcrawl?
No. We’ve made it so our typography is good-looking right out of the box. After hundreds of projects, we’ve spent a lot of time thinking about what does and does not work typographically.

Do you have tips for these non-designers regarding typography?
I could write a book. In fact, we are about to publish a series of articles on this topic, but I’ll give you a few:

• Don’t rely on “classic” typefaces like Helvetica and Futura. Nice on large posters, but lousy on screen in small point sizes.

• Lean toward typefaces with an upright stress — meaning more condensed fonts — which will allow you to make better use of horizontal space. This in turn preserves vertical space, resulting in a smoother scroll.

• Avoid “light” and “ultralight” fonts, or typefaces with a high stroke contrast. Those tend to shimmer quite a bit when in motion. Pick a typeface that has a large variety of designed weights and stick to medium, semibold and bold.

• Make sure your font has strong glyph support for those grips named Bjørn Sæther Løvås and Hansína Þórðardóttir.

Do people have to download the product?
Endcrawl runs right in your web browser. There is nothing to download or install.

What about compatibility?
Our render engine outputs uncompressed DPX, all the standard QuickTime formats, H.264 and PDFs. By far the most common final deliverable is 10-bit DPX, which we typically turn around inside of one hour. The preview renders come in minutes. And the render engine is on-demand, 24/7.

 

How has the product evolved since you first came to market?
Our “lean startup” was a script attached to a Google Doc. We did our first 20 to 30 projects that way. We saw a lot of validation, especially around the speed and ease of the service.

Year one, we had a customer with four films at Sundance. He completed all of his end titles in three days, with many revisions and renders in between. He’s finished over 20 projects with us now.

Since then, Alan has architected a highly optimized cloud render engine. Endcrawl still integrates with Google Docs for collaboration, but that is now connected to a powerful Web UI controlling layout and realtime preview.

How do people pay for Endcrawl?
On the free tier, we provide free and unlimited 1K preview renders in H.264. For $499, a project can upgrade to unlimited, uncompressed DPX renders. We are currently targeting feature films, but we will be deploying more pricing tiers for other types of projects — think episodic and shorts — in 2016.

What films have used the tool?
Some recent titles include Spike Lee’s Chi-Raq and Oliver Stone’s Snowden. Our customers run the gamut from $50K Kickstarter movies to $100 million studio franchises. (I can’t name most of those studio features because several title houses run all of their end credits through us as a white-label service.)

Some 2016 Sundance titles include Spa Night, Swiss Army Man, Tallulah and The Bad Kids. Some of my personal favorites are Beasts of No Nation, A Most Violent Year, The Family Fang, Meadowland, The Adderall Diaries and Video Game High School.

What haven’t I asked that is important?
We’re about to roll out 4K. We’ve “unofficially” supported 4K on a few pilot projects like Beasts of No Nation and War Room, but it’s about to be available to everyone.

Also, we have a pretty cool Twitter account @Endcrawl, which you should definitely follow.