
postPerspective names NAB Impact Award MVPs and winners

NAB is a bear. Anyone who has attended this show can attest to that. But through all the clutter, postPerspective set out to find the best of the best for our Impact Awards. So we turned to a panel of esteemed industry pros (to whom we are very grateful!) to cast their votes on what they thought would be most impactful to their day-to-day workflows, and those of their colleagues.

In addition to our Impact Award winners, this year we are also celebrating two pieces of technology that not only caused a big buzz around the show, but are also bringing things a step further in terms of technology and workflow: Blackmagic’s DaVinci Resolve 15 and Apple’s ProRes RAW.

With ProRes RAW, Apple has introduced a new, high-quality video recording codec that has already been adopted by three competing camera vendors — Sony, Canon and Panasonic. According to Mike McCarthy, one of our NAB bloggers and regular contributors, “ProRes RAW has the potential to dramatically change future workflows if it becomes even more widely supported. The applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.”

Fairlight’s audio tools are now embedded in the new Resolve 15.

With Resolve 15, Blackmagic has launched the product further into a wide range of post workflows, and they haven’t raised the price. This standalone app — which comes in a free version — provides color grading, editing, compositing and even audio post, thanks to the DAW Fairlight, which is now built into the product.

These two technologies are Impact Award winners, but our judges felt they stood out enough to be called postPerspective Impact Award MVPs.

Our other Impact Award winners are:

• Adobe for Creative Cloud

• Arri for the Alexa LF

• Codex for Codex One Workflow and ColorSynth

• FilmLight for Baselight 5

• Flanders Scientific for the XM650U monitor

• Frame.io for the All New Frame.io

• Shift for their new Shift Platform

• Sony for their 8K CLED display

In a sea of awards surrounding NAB, the postPerspective Impact Awards stand out, and are worth waiting for, because they are voted on by working post professionals.

Flanders Scientific’s XM650U monitor.

“All of these technologies from NAB are very worthy recipients of our postPerspective Impact Awards,” says Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually have an impact on workflows as well as the ability to make users’ working lives easier and their projects better. This year we have honored 10 different products that span the production and post pipeline.

“We’re very proud of the fact that companies don’t ‘submit’ for our awards,” continues Altman. “We’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We feel it makes our awards quite special.”

With our Impact Awards, postPerspective is also hoping to give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows.

postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at SIGGRAPH 2018.

Atomos at NAB offering ProRes RAW recorders

Atomos is at this year’s NAB showing support for ProRes RAW, a new format from Apple that combines the performance of ProRes with the flexibility of RAW video. The ProRes RAW update will be available free for the Atomos Shogun Inferno and Sumo 19 devices.

Atomos devices are currently the only monitor recorders to offer ProRes RAW, with realtime recording from the sensor output of Panasonic, Sony and Canon cameras.

The new upgrade brings ProRes RAW and ProRes RAW HQ recording, monitoring, playback and tag editing to all owners of an Atomos Shogun Inferno or Sumo 19 device. Once installed, it will allow the capture of RAW images in up to 12-bit RGB — direct from many of our industry’s most advanced cameras onto affordable SSD media. ProRes RAW files can be imported directly into Final Cut Pro 10.4.1 for high-performance editing, color grading and finishing on Mac laptop and desktop systems.
Eight popular cine cameras with a RAW output — including the Panasonic AU-EVA1, Varicam LT, Sony FS5/FS7 and Canon C300mkII/C500 — will be supported, with more to follow.

With this ProRes RAW support, filmmakers can work easily with RAW – whether they are shooting episodic TV, commercials, documentaries, indie films or social events.

Shooting ProRes RAW preserves maximum dynamic range, with a 12-bit depth and wide color gamut — essential for HDR finishing. The new format, which is available in two compression levels — ProRes RAW and ProRes RAW HQ — preserves image quality with low data rates and file sizes much smaller than uncompressed RAW.

Through ProRes RAW, Atomos recorders allow for increased flexibility in captured frame rates and resolutions. Atomos devices can record ProRes RAW at up to 2K at 240 frames per second, or 4K at up to 120 frames per second. Higher resolutions, such as 5.7K from the Panasonic AU-EVA1, are also supported.

AtomOS 9, the Atomos operating system, gives users filming tools that allow them to work efficiently and creatively with ProRes RAW on portable devices. Fast connections in and out and advanced HDR screen processing mean every pixel is accurately and instantly available for on-set creative playback and review. Pull the SSD out and dock it to your Mac over Thunderbolt 3 or USB-C 3.1 for immediate, super-fast post production.

Download the AtomOS 9 update for Shogun Inferno and Sumo 19 at www.atomos.com/firmware.


B&H expands its NAB footprint to target multiple workflows

By Randi Altman

In a short time, many in our industry will be making the pilgrimage to Las Vegas for NAB. They will come (if they are smart) with their comfy shoes, Chapstick and the NAB Show app, and plot a course for the most efficient way to see all they need to see.

NAB is a big show that spans a large footprint, and typically companies showing their wares need to pick a hall — Central, South Lower, South Upper or North. This year, however, The Studio-B&H made some pros’ lives a bit easier by adding a booth in South Lower in addition to their usual presence in Central Hall.

B&H’s business and services have grown, so it made perfect sense to Michel Suissa, managing director at The Studio-B&H, to grow their NAB presence to include many of the digital workflows the company has been servicing.

We reached out to Suissa to find out more.

This year B&H and its Studio division are in the South Lower. Why was it important for you guys to have a presence in both the Central and South Halls this year?
The Central Hall has been our home for a long time and it remains our home with our largest footprint, but we felt we needed to have a presence in South Hall as well.

Production and post workflows merge and converge constantly, and we need to be knowledgeable in both. The simple fact is that we serve all segments of our industry, not just image acquisition and camera equipment. Our presence in image- and data-centric workflows has grown by leaps and bounds.

This world is a familiar one for you personally.
That’s true. The post and VFX worlds are very dear to me. I was an editor, Flame artist and colorist for 25 years. This background certainly plays a role in expanding our reach and services to these communities. The Studio-B&H team is part of a company-wide effort to grow our presence in these markets. From a business standpoint, the South Hall attendees are also our customers, and we needed to show we are here to assist and support them.

What kind of workflows should people expect to see at both your NAB locations?
At the South Hall, we will present a whole range of solutions to show the breadth and diversity of what we have to offer. That includes VR post workflow, color grading, animation and VFX, editing and high-performance Flash storage.

In addition to the new booth in South Hall, we have two in Central. One is for B&H’s main product offerings, including our camera shootout, which is a pillar of our NAB presence.

This Studio-B&H booth features a digital cinema and broadcast acquisition technology showcase, including hybrid SDI/IP switching, 4K studio cameras, a gyro-stabilized camera car, the most recent full-frame cinema cameras, and our lightweight cable cam, the DynamiCam.

Our other Central Hall location is where our corporate team can discuss all business opportunities with new and existing B2B customers.

How has The Studio-B&H changed along with the industry over the past year or two?
We have changed quite a bit. With our services and tools, we have reinvented our image, from equipment provider to solution provider.

Our services now range from system design to installation and deployment. One of the more notable examples is our recent collaboration with HBO Sports on World Championship Boxing. The Studio-B&H team was instrumental in deploying our DynamiCam system to cover several live fights in different venues and integrating with NEP’s mobile production team. This is part of an entirely new type of service — something the company had never offered its customers before. It is a true game-changer for our presence in the media and entertainment industry.

What do you expect the “big thing” to be at NAB this year?
That’s hard to say. Markets are in transition with a number of new technology advancements: machine learning and AI, cloud-based environments, momentum for the IP transition, AR/VR, etc.

On the acquisition side, full frame/large sensor cameras have captured a lot of attention. And, of course, HDR will be everywhere. It’s almost not a novelty anymore. If you’re not taking advantage of HDR, you are living in the past.


Red’s new Gemini 5K S35 sensor offers low-light and standard mode

Red Digital Cinema’s new Gemini 5K S35 sensor for its Red Epic-W camera leverages dual-sensitivity modes, allowing shooters to use standard mode for well-lit conditions or low-light mode for darker environments.

In low-light conditions, the Gemini 5K S35 sensor allows for cleaner imagery with less noise and better shadow detail. Camera operators can easily switch between modes through the camera’s on-screen menu with no downtime.

The Gemini 5K S35 sensor offers an increased field of view at 2K and 4K resolutions compared to the higher-resolution Red Helium sensor. In addition, the sensor’s 30.72mm x 18mm dimensions allow for greater anamorphic lens coverage than with Helium or Red Dragon sensors.

“While the Gemini sensor was developed for low-light conditions in outer space, we quickly saw there was so much more to this sensor,” explains Jarred Land, president of Red Digital Cinema. “In fact, we loved the potential of this sensor so much, we wanted to evolve it for broader appeal. As a result, the Epic-W Gemini now sports dual-sensitivity modes. It still has the low-light performance mode, but also has a default, standard mode that allows you to shoot in brighter conditions.”

Built on the compact DSMC2 form factor, this new camera and sensor combination captures 5K full-format motion at up to 96fps, along with data speeds of up to 275MB per second. Additionally, it supports Red’s IPP2 enhanced image processing pipeline in-camera. Like all of Red’s DSMC2 cameras, the Epic-W can record Redcode RAW and Apple ProRes or Avid DNxHD/HR simultaneously, and it adheres to Red’s “Obsolescence Obsolete” program, which allows current Red owners to upgrade their technology as innovations are unveiled. It also lets them move between camera systems without having to purchase all new gear.

Starting at $24,500, the new Red Epic-W with Gemini 5K S35 sensor is available for purchase now. Weapon Carbon Fiber and Red Epic-W 8K customers will have the option to upgrade to the Gemini sensor at a later date.


Sony to ship Venice camera this month, adds capabilities

Sony’s next-gen CineAlta motion picture camera Venice, which won a postPerspective Impact Award at IBC2017, will start shipping this month. As previously announced, V1.0 features support for full-frame 24x36mm recording. In addition, and as a result of customer feedback, Sony has added several new capabilities, including a Dual Base ISO mode. With 15+ stops of exposure latitude, Venice will support an additional High Base ISO of 2500 using the sensor’s physical attributes. This takes advantage of Sony’s sensor for low-light performance with high dynamic range — from 6 stops over to 9 stops under 18% middle gray.

This new capability increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses or where content needs to be graded in HDR, while maintaining maximum shadow detail. An added benefit within Venice is its built-in 8-step optical ND filter servo mechanism. This can emulate different ISO operating points when in High Base ISO 2500, and it also maintains the extremely low noise characteristics of the Venice sensor.

Venice also features new color science designed to offer a soft tonal film look, with shadows and mid-tones having a natural response and the highlights preserving the dynamic range.

Sony has also developed the Venice camera menu simulator. This tool is designed to give camera operators an opportunity to familiarize themselves with the camera’s operational workflow before using Venice in production.

Features and capabilities planned to be available later this year as free firmware upgrades in Version 2 include:
• 25p in 6K full-frame mode
• False Color (moved from Version 3 to Version 2)

Venice has an established workflow with support from Sony’s RAW Viewer 3 and third-party vendors, including FilmLight Baselight 5, DaVinci Resolve 14.3 and Assimilate Scratch 8.6, among others. Sony continues to work closely with all relevant third parties on workflows including editing, grading, color management and dailies.

Another often-requested feature is support for high frame rates, which Sony is working to implement and make available at a later date.

Venice features include:
• True 36x24mm full frame imaging based on the photography standard that goes back 100 years
• Built-in 8-step optical ND filter servo mechanism
• Dual Base ISO mode, with High Base ISO 2500
• New color science for appealing skin tones and graceful highlights – out of the box
• Aspect ratio freedom: Full frame 3:2 (1.5:1), 4K 4:3 full height anamorphic, spherical 17:9, 16:9.
• Lens mount with 18mm flange depth opens up tremendous lens options (PL lens mount included)
• 15+ stops of exposure latitude
• User-interchangeable sensor that requires removal of just six screws
• 6K resolution (6048 x 4032) in full frame mode


Seasoned pros and young talent team on short films

By James Hughes

In Los Angeles on a Saturday morning, a crew of 10 students from Hollywood High School — helmed by 17-year-old director Celine Gimpirea — was transforming a corner of Calvary Cemetery into a movie set. In The Box, a boy slips inside a cardboard box and finds himself transported to other realms. On this well-manicured lawn, among rows of flat, black granite grave markers, are rows of flat, black camera cases holding Red cameras, DIT stations, iPads and MacBook Pros.

Gimpirea’s crew is one of three teams of filmmakers involved in a month-long filmmaking workshop connecting creative pros with emerging talent. The teams worked with tools from Apple, including the MacBook Pro, iMac and Final Cut Pro X, as well as the Red Raven camera for shooting. LA-based independent filmmaking collective We Make Movies provided post supervision. They used a workflow very similar to that of the feature film Whiskey Tango Foxtrot, which was shot on Red and edited in FCP X.

In the documentary La Buena Muerte, produced by instructors from the Mobile Film Classroom, a non-profit that provides digital media workshops to youth in under-resourced communities, the filmmakers examine mortality and family bonds surrounding the Day of the Dead, the Mexican holiday honoring lost loved ones. And in The Dancer, director Krista Amigone channels her background in theater to tell a personal story about a dancer confronting the afterlife.

Krista Amigone

During a two-week post period, teams received feedback from a rotating cast of surprise guests and mentors from across the industry, each a professional working in the field of film and television production.

Among the first mentors to view The Dancer was Sean Baker, director of 2017’s critically acclaimed The Florida Project and the 2015 feature Tangerine, shot entirely on iPhone 5S. Baker, who edits his own films, surveyed clips from Amigone’s shoot. Each take had been marked with the Movie Slate app on an iPad, which automatically stores and logs the timecode data. Together, they discussed Amigone’s backstory as well. A stay-at-home mother of a three-year-old daughter, she is no stranger to maximizing time and resources. She not only served as writer and director, but also star and choreographer.

Meanwhile, the La Buena Muerte crew, headed by executive producer Manon Banta, were editing their piece. Reviewing the volume of interviews and B-roll, all captured by cinematographer Elle Schneider on the 4.5K Red Raven camera, initially felt like a daunting task. Fortunately, their metadata was automatically organized after being imported straight into Final Cut Pro X from Shot Notes X and Lumberjack, along with the secondary source audio via Sync-N-Link X, which spared days of hand syncing.

Perhaps the most constructive feedback about story structure came from TJ Martin, director of LA92 and Undefeated, the Oscar-winner for Best Documentary Feature in 2012, which director Jean Balest has used as teaching material in the Mobile Film Classroom. Midway through the cut, Martin was struck by a plot point he felt required precision placement up front: A daughter is introduced while presiding over a conceptual art altar alongside her mother, who reveals she’s coping with her own pending death after a stage four cancer diagnosis.

Reshoots were vital to The Box. The dream world Gimpirea created — she cites Christopher Nolan’s Inception as an influence — required some clarification. During a visit from Valerie Faris, the Oscar-nominated co-director of Little Miss Sunshine and Battle of the Sexes, Gimpirea listened intently as she offered advice for pickup shots. Faris urged Gimpirea to keep the story focused on the point of view of her young lead during his travels. “There’s a lot told in his body and seeing him from behind,” Faris said. “In some ways, I’m more with him when I’m traveling behind him and seeing what he’s seeing.”

Celine Gimpirea

Gimpirea’s collaborative nature was evident throughout post. She was helped out by Antonio Manriquez, a video production teacher at Hollywood High, as well as her crew. Kais Karram was the film’s assistant director, and twin brother Zane was cinematographer. The brothers’ athleticism was an asset on-set, particularly during a day-long shoot in Griffith Park where they executed numerous tracking shots behind the film’s fleet-footed star as he navigated a walkway they had cleared of park visitors.

The selection of music was crucial, particularly for Amigone. For her main theme, she wanted a sound reminiscent of John Coltrane’s “After the Rain” and Claude Debussy’s “Clair de Lune.” She chose an original nocturne by John Mickevich, a composer and fellow member of the collective We Make Movies. The collective’s founder/CEO, Sam Mestman, is also the CEO of LumaForge, developer of the Jellyfish Mobile — a “portable cloud,” as he put it — which, along with two MacBook Pros, was storing and syncing Amigone’s footage on location. Mestman believes “post should live on set.” As proof, a half-day of work for the editing team was done before the dance studio shoot had even wrapped.

During his mentor visit, Aaron Kaufman, director and longtime producing partner of filmmaker Robert Rodriguez, encouraged the teams not to be precious about losing shots in service of story. The documentary team certainly heeded this advice, as did Gimpirea, who cut a whole scene shot at Calvary Cemetery from her film.

As the project was winding down, Gimpirea reflected on her experience. “Knowing all the possibilities that I have in post now, it allows me to look completely differently at production and pre-production, and to pick out, more precisely, what I want,” she said.

Main Image: Shooting with the Red Raven at the Calvary Cemetery.


James Hughes is a writer and editor based in Chicago.


Panavision Hollywood names Dan Hammond VP/GM

Panavision has named Dan Hammond, a longtime industry creative solutions technologist, as vice president and general manager of Panavision Hollywood. He will be responsible for overseeing daily operations at the facility and working with the Hollywood team on camera systems, optics, service and support.

Hammond is a Panavision veteran, who worked at the company between 1989 and 2008 in various departments, including training, technical marketing and sales. Most recently he was at Production Resource Group (PRG), expanding his technical services skills. He is active with industry organizations, and is an associate member of the American Society of Cinematographers (ASC), as well as a member of the Academy of Television Arts and Sciences (ATAS) and Association of Independent Commercial Producers (AICP).


Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher-motion content at reduced resolution. It records dual 190-degree fish-eye perspectives, encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application, Fusion Studio.

Internally, the unit records dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.
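
To make that sort-and-pair step concrete, here is a minimal Python sketch of the matching logic Fusion Studio automates. The GPFR/GPBK filename prefixes and card paths are assumptions for illustration, not GoPro’s documented layout, so check how your own cards label their clips.

# Hypothetical sketch of pairing front-lens and back-lens clips from the
# two MicroSD cards by clip number, so each pair can be stitched together.
# The GPFR/GPBK prefixes and folder paths are assumptions; verify your cards.
from pathlib import Path

def pair_fusion_clips(front_card: Path, back_card: Path) -> dict:
    """Return {clip_id: (front_file, back_file)} for clips found on both cards."""
    fronts = {f.stem[4:]: f for f in front_card.glob("GPFR*.MP4")}
    backs = {b.stem[4:]: b for b in back_card.glob("GPBK*.MP4")}
    return {cid: (fronts[cid], backs[cid]) for cid in fronts.keys() & backs.keys()}

if __name__ == "__main__":
    pairs = pair_fusion_clips(Path("E:/front_card"), Path("F:/back_card"))
    for cid, (front, back) in sorted(pairs.items()):
        print(f"clip {cid}: {front.name} + {back.name}")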

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.
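
Until the app adds filename control, a few lines of scripting can handle that rename pass in bulk after rendering. This is only a sketch under assumptions: the export folder, extension and naming scheme below are hypothetical, so point them at your own setup.

# Hypothetical batch rename for rendered exports (Fusion Studio itself
# offers no output-filename control, as noted above).
from pathlib import Path

exports = Path("D:/fusion_exports")  # wherever your renders land
project = "ropes_course"             # hypothetical project name

for i, clip in enumerate(sorted(exports.glob("*.mov")), start=1):
    new_name = f"{project}_{i:03d}{clip.suffix}"
    clip.rename(clip.with_name(new_name))
    print(f"{clip.name} -> {new_name}")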

Setting | Recorded Resolution (Per Lens) | Processed Resolution (Equirectangular)
5Kp30   | 2704×2624                      | 4992×2496
3Kp60   | 1568×1504                      | 2880×1440
Stills  | 3104×3000                      | 5760×2880

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution, and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option, since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same source frame can be sampled three times, shifting the image as the viewer moves their head. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography is helpful here because it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post, and a triple fisherman’s knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick-release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, in order to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.

When exporting an MP4, set Channels to 4.0 in the audio panel and check the “Audio is Ambisonics” box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization,” when processing the raw recorded audio for ambisonic exports, so you only have to match the changes you make in your Premiere sequence.
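
For the curious, the pan (yaw) matching described above has simple math behind it: in first-order B-format, a yaw rotation only mixes the X and Y components, while W (the omni channel) and Z (height) pass through untouched. Here is a small illustrative sketch in Python, assuming a W, X, Y, Z channel ordering; real files and Premiere’s conventions may differ.

# Illustrative only: yaw-rotate a first-order ambisonic (B-format) signal.
# Assumes channels are ordered W, X, Y, Z; actual files may differ.
import numpy as np

def rotate_yaw(bformat: np.ndarray, degrees: float) -> np.ndarray:
    """bformat: (n_samples, 4) array of W, X, Y, Z. Returns a rotated copy."""
    w, x, y, z = bformat.T
    psi = np.radians(degrees)
    x2 = x * np.cos(psi) - y * np.sin(psi)  # only X and Y mix under yaw
    y2 = x * np.sin(psi) + y * np.cos(psi)
    return np.stack([w, x2, y2, z], axis=1)

# A source at the listener's right (Y = -1) swings around to the front
# (X = +1) after a +90-degree counterclockwise yaw:
frame = np.array([[0.707, 0.0, -1.0, 0.0]])
print(rotate_yaw(frame, 90.0).round(3))  # approx. [[0.707  1.  0.  0.]]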

While I could have intercut the footage from both settings into a single 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were recorded at 5Kp30 and which at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Mercy Christmas director offers advice for indie filmmakers

By Ryan Nelson

After graduating from film school at The University of North Carolina School of the Arts, I was punched in the gut. I had driven into Los Angeles mere hours after the last day of school ready to set Hollywood on fire with my thesis film. But Hollywood didn’t seem to know I’d arrived. A few months later, Hollywood still wasn’t knocking on my door. Desperate to work on film sets and learn the tools of the trade, I took a job as a grip. In hindsight, it was a lucky accident. I spent the next few years watching some of the industry’s most successful filmmakers from just a few feet away.

Like a sponge, I soaked in every aspect of filmmaking that I could from my time on the sets of The Avengers, Real Steel, Spider-Man 3, Bad Boys II, Seven Psychopaths, Smokin’ Aces and a slew of Adam Sandler comedies. I spent hours working, watching, learning and judging. How are they blocking the actors in this scene? What sort of cameras are they using? Why did they use that light? When do you move the camera? When is it static? When I saw the finished films in theaters, I ultimately asked myself, did it all work?

During that same time, I wrote and directed a slew of my own short films. I tried many of the same techniques I’d seen on set. Some of those attempts succeeded and some failed.

Recently, the stars finally aligned and I directed my first feature-length film, Mercy Christmas, from a script I co-wrote with my wife, Beth Levy Nelson. After five years of writing, fundraising, production and post production, the movie is finished. We made the movie outside the Hollywood system, using crowdfunding, generous friends and loving family members to compile enough cash to make the ultra-low-budget version of the Mercy Christmas screenplay.

I say low budget because, financially, it was. But thanks to my time on set, years of practice and much trial and error, the finished film looks and feels like much more than it cost.

Mercy Christmas, by the way, follows Michael Briskett, who meets the perfect woman and sees his ideal Christmas dream come true when she invites him to her family’s holiday celebration. Michael’s dream shatters, however, when he realizes that he will be the Christmas dinner. The film is currently on iTunes.

My experience working professionally in the film business while I struggled to get my shot at directing taught me many things. I learned over those years that a mastery of the techniques and equipment used to tell stories for film was imperative.

The stories I gravitate towards tend to have higher concept set pieces. I really enjoy combining action and character. At this point in my career, the budgets are more limited. However, I can’t allow financial restrictions to hold me back from the stories I want to tell. I must always find a way to use the tools available in their best way.

Ryan Nelson with camera on set.

Two Cameras
I remember an early meeting with a possible producer for Mercy Christmas. I told him I was planning to shoot two cameras. The producer chided me, saying it would be a waste of money. Right then, I knew I didn’t want to work with that producer, and I didn’t.

Every project I do now and in the future will be two cameras. And the reason is simple: it would be a waste of money not to use two cameras. On a limited budget, two cameras offer twice the coverage. Yes, understanding how to shoot two cameras is key, but it’s also simple to master. Cross coverage is not conducive to lower-budget lighting, so stacking the cameras on a single piece of coverage gives you a medium shot and a close shot at the same time. Or, for instance, when shooting the wide master shot, you can also get a medium master shot to give the editor another option to break away to while building a scene.

In Mercy Christmas, we have a fight scene that consists of seven minutes of screen time. It’s a raucous fight that covers three individual fights happening simultaneously. We scheduled three days to shoot the fight. Without two cameras it would have taken more days to shoot, and we definitely didn’t have more days in the budget.

Of course, two camera rentals and camera crews are budget concerns, so the key is to find a lower-budget but high-quality camera. For Mercy Christmas, we chose the Canon C300 Mark II. We found the image to be fantastic, and I was very happy with the final result. You can also save money by renting only one lens package to use for both cameras.

Editing
Good camera coverage doesn’t mean much without an excellent editor. Our editor for Mercy Christmas, Matt Evans, is a very good friend and also very experienced in post. Like me, Matt started at the bottom and worked his way up. Along the way, he worked on many studio films as apprentice editor, first assistant editor and finally editor. Matt’s preferred tool is Avid Media Composer. He’s incredibly fast and understands every aspect of the system.

Matt’s technical grasp is superb, but his story sense is the real key. Matt’s technique is a fun thing to witness. He approaches a scene by letting the footage tell him what to do on a first pass. Soaking in the performances with each take, Matt finds the story that the images want to tell. It’s almost as if he’s reading a new script based on the images. I am delighted each time I can watch Matt’s first pass on a scene. I always expect to see something I hadn’t anticipated. And it’s a thrill.

Color Grading
Another aspect that should be budgeted into an independent film is professional color grading. No, your editor doing color does not count. A professional post house with a professional color grader is what you need. I know this seems exorbitant for a small-budget indie film, but I’d highly recommend planning for it from the beginning. We budgeted color grading for Mercy Christmas because we knew it would take the look to professional levels.

Color grading is not only a tool for the cinematographer; it’s a godsend for the director as well. First and foremost, it can save a shot, making a preferred take that has an inferior look actually become a usable take. Second, I believe strongly that color is another tool for storytelling. An audience can be as moved by color as by music. Every detail coming to the audience is information they’ll process to understand the story. I learned very early in my career how shots I saw created on set were accentuated in post by color grading. We used Framework, a post house in Los Angeles, on Mercy Christmas. The colorist was David Sims, who did the color and conform in DaVinci Resolve 12.

In the end, my struggle over the years did gain me one of my best tools: experience. I’ve taken the time to absorb all the filmmaking I’ve been surrounded by. Watching movies. Working on sets. Making my own.

After all that time chasing my dream, I kept learning, refining my skills and honing my technique. For me, filmmaking is a passion, a dream and a job. All of those elements made me the storyteller I am today and I wouldn’t change a thing.

On Hold: Making an indie web series

By John Parenteau

On Hold is an eight-episode web series, created and co-written by myself and Craig Kuehne, about a couple of guys working at the satellite office of an India-based technology firm. They have little going for themselves except each other, and that’s not saying much. Season 1 is available now, and we are in prepro on Season 2.

While I personally identify as a filmmaker, I’ve worn a wide range of hats in the entertainment industry since graduating from USC School of Cinematic Arts in the late ‘80s. As a visual effects supervisor, I’ve been involved in projects as diverse as Star Trek: Voyager and Hunger Games. I have also filled management roles at companies such as Amblin Entertainment, Ascent Media, Pixomondo and Shade VFX.

That’s me in the chair, conferring on setup.

It was with my filmmaker hat on that I recently partnered with Craig, a long-time veteran of visual effects, whose credits include Westworld and Game of Thrones. We thought it might be interesting to share our experiences as we ventured into live-action production.

It’s not unique that Craig and I want to be filmmakers. I think most industry professionals, who are not already working as directors or producers, strive to eventually reach that goal. It’s usually the reason people like us get into the business in the first place, and what many of us continue to pursue. Often we’ve become successful in another aspect of entertainment and found it difficult to break out of those “golden handcuffs.” I know Craig and I have both felt that way for years, despite having led fairly successful lives as visual effects pros.

But regardless of our successes in other roles, we still identify ourselves as filmmakers, and at some point, you just have to make the big push or let the dream go. I decided to live by my own mantra that “filmmakers make film.” Thus, On Hold was born.

Why the web series format, you might ask? With so many streaming and online platforms focused on episodic material, doing a series would show we are comfortable with the format, even if ours was a micro-version of a full series. We had, for years, talked about doing a feature film, but that type of project takes so many resources and so much coordination. It just seemed daunting in a no-budget scenario. The web series concept allows us to produce something that resembles a marketable project, essentially on little or no budget. In addition, the format is easily recreated for an equally low budget, so we knew we could do a second season of the show once we had done the first.

This is Craig, pondering a shot.

The Story
We have been friends for years, and the idea for the series came from both our friendship and our own lives. Who hasn’t felt, as they were getting older, that maybe some of the life choices they made might not have been the best? That can be a serious topic, but we took a comedic angle, looking for the extremes. Our main characters, Jeff (Jimmy Blakeney) and Larry (Paul Vaillancourt), are subtle reflections of us (Craig is Jeff, the somewhat over-thinking, obsessive nerd, and I’m Larry, a bit of a curmudgeon who can take himself way too seriously), but they quickly took on a life of their own, as did the rest of the cast. We added in Katy (Brittney Bertier), their over-energetic intern; Connie (Kelly Keaton), Jeff’s bigger-than-life sister; and Brandon (Scott Rognlien), the creepy and not-very-bright boss. The chemistry just clicked. They say casting is key, and we certainly discovered that on this project. We were very lucky to find the actors we did, and they played off of each other perfectly.

So what does it take to do a web series? First off, writing was key. We spent a few months working out the overall storyline of the first season and then homed in on the basic outlines of each episode. We actually worked out a rough overall arc of the show itself, deciding on a four-season project, which gave us a target to aim for. It was just some basic imagery for an ultimate ending of the show, but it helped keep us focused and helped drive the structure of the early episodes. We split up writing duties, each working on alternate episodes and then sharing scripts with each other. We tried to be brutally honest; it was important that the show reflect both of our views. We spent many nights arguing over certain moments in each episode, both very passionate about the storyline.

In the end, we could see we had something good; we just needed to add our talented actors to make it great.

On Hold

The Production
We shot on a Blackmagic Cinema Camera, which was fairly new at that point. I wanted the flexibility of different lenses but a high-resolution, high-quality picture. I had never been thrilled with standard DSLR cameras, so I thought the Blackmagic camera would be a good option. To top it off, I could get one for free — always a deciding factor at our budget level. We ended up shooting with a single Canon zoom lens that Craig had, and for the most part it worked fine. I can’t tell you how important the “glass” you shoot with can be. If we’d had the budget, I would have rented some nice Zeiss lenses or something equally professional, and the quality of the image reflects the lack of budget. But the beauty of the Blackmagic Cinema Camera is that it shoots such a nice image already, and at such a high resolution, that we knew we would have some flexibility in post. We recorded in Apple ProRes.

As a DP, I have shot everything from PBS documentaries to music videos, commercials and EPKs (a.k.a. behind-the-scenes projects), and have had the luxury of working with a load of gear and, at times, just a single light. At USC Film School, my alma mater, you learn to work with what you have, so I learned early to adapt my style to the gear on hand. I ended up using a single lighting kit (a Lowel DP three-head kit), which worked fine. Shooting comedy is always more about static angles and higher-key lighting, and my limited kit made that easily accessible. I would usually lift the ambience in the room by bouncing a light off a wall or ceiling area off camera, then use bounce cards on C-stands to give some source light from the top/side, complementing but not competing with the existing fluorescents in the office. The bigger challenges were when we shot toward the windows. The bright sunlight outside, even with the blinds closed, was a challenge, but we creatively scheduled those shots for early or late in the day.

Low-budget projects are always an exercise in inventiveness and flexibility, mostly by the crew. We had a few people helping off and on, but ultimately it came down to the two of us wearing most of the hats and our associate producer, Maggie Jones, filling in the gaps. She handled the SAG paperwork, some AD tasks, ordered lunch and even operated the boom microphone. That left me shooting all but one episode, while we alternated directing episodes. We shot an episode a day, using a friend’s office on the weekends for free. We made sure we created shot lists ahead of time, so I could see what he had in mind when I shot Craig’s episodes, but also so he could act as a backup check on my list when I was directing.

The Blackmagic camera at work.

One thing about SAG — we decided to go with the guild’s new media contract for our actors. Most of them were already SAG, and while they most likely would have been fine shooting such a small project non-union, we wanted them to be comfortable with the work. We also wanted to respect the guild. Many people complain that working under SAG, especially at this level, is a hassle, but we found it to be exactly the opposite. The key is keeping up with the paperwork each day you shoot. Unless you are working incredibly long hours, or plan to abuse your talent (not a good idea regardless), it’s fairly easy to remain compliant. Maggie managed the daily paperwork and ensured we broke for lunch as per the requirements. Other than that, it was a non-issue.

The Post
Much like our writing and directing, Craig and I split editorial tasks. We both cut on Apple Final Cut Pro X (he with pleasure, me begrudgingly), and shared edits with each other. It was interesting to note differences in style. I tended to cut long, letting scenes breathe. Craig, a much better editor than I, had snappier cuts that moved quicker. This isn’t to say my way didn’t work at times, but it was a nice balance as we made comments on each other’s work. You can tell my episodes are a bit longer than his, but I learned from the experience and managed to shorten my episodes significantly.

I did learn another lesson, one called “killing your darlings.” In one episode, we had a scene where Jeff enjoyed a box of donuts, fishing through them to find the fruit-filled one he craved. The process of him licking each one and putting it back, or biting into a few and spitting out pieces, was hilarious on set, but in editorial I soon learned that too much of a good thing can be bad. Craig persuaded me to trim the scene, and I realized quickly that having one strong beat is just as good as several.

We had a variety of issues with other areas of post, but with no budget we could do little about them. Our “mix” consisted of adjusting levels in our timeline. Our DI amounted to a little color correction. While we were happy with the end result, we realized quickly that we want to make season two even better.

On Hold

The Lessons
A few things pop out as areas needing improvement. First of all, shooting a comedy series with a great group of improv comedians mandates at least two cameras. Both Craig and I, as directors, would do improv takes with the actors after getting the “scripted version,” but some of that material was not usable, since cutting between different improv takes from a single-camera shoot is nearly impossible. We also realized the importance of a real sound mixer on set. Our single-mic mono tracks, run by our unprofessional hands, definitely needed some serious fixing in post. Simply having more experienced hands would have made our days more efficient as well.

For post, I certainly wanted to use newer tools, and we called in some favors for finishing. A confident color correction really makes the image cohesive, and even a rudimentary audio mix can remove many sound issues.

All in all, we are very proud of our first season of On Hold. Despite the technical issues and challenges, what really came together was the performances, and, ultimately, that is what people are watching. We’ve already started development on Season 2, which we will start shooting in January 2018, and we couldn’t be more excited.

The ultimate lesson we’ve learned is that producing a project like On Hold is not as hard as you might think. Sure, it has its challenges, but what part of entertainment isn’t a challenge? As Tom Hanks says in A League of Their Own, “It’s supposed to be hard. If it wasn’t hard, everyone would do it.” Well, this time, the hard work was worth it, and it has inspired us to continue on. Ultimately, isn’t that the point of it all? Whether making films for millions of dollars or no-budget web series, the point is making stuff. That’s what makes us filmmakers.