Category Archives: Cameras

Winners: IBC2017 Impact Awards

postPerspective has announced the winners of our postPerspective Impact Awards from IBC2017. All winning products reflect the latest version of the product, as shown at IBC.

The postPerspective Impact Award winners from IBC2017 are:

• Adobe for Creative Cloud
• Avid for Avid Nexis Pro
• Colorfront for Transkoder 2017
• Sony Electronics for Venice CineAlta camera

Seeking to recognize debut products and key upgrades with real-world applications, the postPerspective Impact Awards are determined by an anonymous judging body made up of industry pros. The awards honor innovative products and technologies for the post production and production industries that will influence the way people work.

“All four of these technologies are very worthy recipients of our first postPerspective Impact Awards from IBC,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category. You’ll notice that our awards from IBC span the entire pro pipeline, from acquisition to on-set dailies to editing/compositing to storage.

“As IBC falls later in the year, we are able to see where companies are driving refinements to really elevate workflow and enhance production. So we’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We’re very proud of that fact, and it makes our awards quite special.”

IBC2017 took place September 15-19 in Amsterdam. postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at the 2018 NAB Show.

Red intros Monstro 8K VV, a full-frame sensor

Red Digital Cinema has a new cinematic full-frame sensor for its Weapon cameras called the Monstro 8K VV. Monstro evolves beyond the Dragon 8K VV sensor with improvements in image quality including dynamic range and shadow detail.

This newest camera and sensor combination, Weapon 8K VV, offers full-frame lens coverage, captures 8K full-format motion at up to 60fps, produces ultra-detailed 35.4 megapixel stills and delivers incredibly fast data speeds — up to 300MB/s. And like all of Red’s DSMC2 cameras, Weapon shoots simultaneous RedCode RAW and Apple ProRes or Avid DNxHD/HR recording. It also adheres to the company’s Obsolescence Obsolete — its operating principle that allows current Red owners to upgrade their technology as innovations are unveiled and move between camera systems without having to purchase all new gear.

The new Weapon is priced at $79,500 (for the camera brain), with upgrades for carbon fiber Weapon customers available for $29,500. Monstro 8K VV will replace the Dragon 8K VV in Red’s line-up, and customers who had previously placed an order for a Dragon 8K VV sensor are being offered the new sensor now. New orders will start being fulfilled in early 2018.

Red has also introduced a service offering for all carbon fiber Weapon owners called Red Armor-W. Red Armor-W offers enhanced and extended protection beyond Red Armor, and also includes one sensor swap each year.

According to Red president Jarred Land, “We put ourselves in the shoes of our customers and see how we can improve how we can support them. Red Armor-W builds upon the foundation of our original extended warranty program and includes giving customers the ability to move between sensors based upon their shooting needs.”

Additionally, Red has made its enhanced image processing pipeline (IPP2) available in-camera with the company’s latest firmware release (V.7.0) for all cameras with Helium and Monstro sensors. IPP2 offers a completely overhauled workflow experience, featuring enhancements such as smoother highlight roll-off, better management of challenging colors, an improved demosaicing algorithm and more.

GoPro intros Hero6 and its first integrated 360 solution, Fusion

By Mike McCarthy

Last week, I traveled to San Francisco to attend GoPro’s launch event for its new Hero6 and Fusion cameras. The Hero6 is the next logical step in the company’s iteration of action cameras, increasing the supported frame rates to 4Kp60 and 1080p240, as well as adding integrated image stabilization. The Fusion, on the other hand, is a totally new product for them: an action-cam for 360-degree video. GoPro has developed a variety of other 360-degree video capture solutions in the past, based on rigs using many of their existing Hero cameras, but Fusion is their first integrated 360-video solution.

While the Hero6 is available immediately for $499, the Fusion is expected to ship in November for $699. While we got to see the Fusion and its footage, most of the hands-on aspects of the launch event revolved around the Hero6. Each of the attendees was provided a Hero6 kit to record the rest of the day’s events. My group was given a ride on the RocketBoat through the San Francisco Bay. This adventure took advantage of a number of features of the camera, including the waterproofing, the slow motion and the image stabilization.

The Hero6

The big change within the Hero6 is the inclusion of GoPro’s new custom-designed GP1 image processing chip. This allows the camera to process and encode higher frame rates, and enables image stabilization at many frame-rate settings. The camera itself is physically similar to the previous generations, so all of your existing mounts and rigs will still work with it. It is an easy swap to upgrade the Karma drone with the new camera, and the drone also got a few software improvements. It can now automatically track the controller with the camera to keep the user in the frame while the drone is following or stationary. It can also fly a circuit of 10 waypoints for repeatable shots, and, overcoming a limitation I didn’t know existed, it can now look “up.”

There were fewer precise details about the Fusion. It is stated to be able to record a 5.2K video sphere at 30fps and a 3K sphere at 60fps. This presumably refers to the circumference of the sphere in pixels, and therefore the width of an equirectangular output. That would lead us to conclude that each individual fisheye recording is about 2,600 pixels wide, plus a little overlap for the stitch. (In this article, GoPro’s David Newman details how the company arrives at 5.2K.)
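As a quick sanity check on that math, here is a back-of-the-envelope calculation in Python. The numbers are my own assumptions (a 5,200-pixel equirectangular width and a guessed stitch overlap), not figures published by GoPro.

```python
# Back-of-the-envelope check of the Fusion resolution claim. These numbers are
# my own assumptions, not figures published by GoPro.

equirect_width = 5200      # assumed width of the 360-degree equirectangular frame, in pixels
degrees_per_lens = 180     # each fisheye covers roughly half the sphere
overlap_degrees = 10       # guessed extra coverage on each lens for stitching

pixels_per_degree = equirect_width / 360
per_lens_width = pixels_per_degree * (degrees_per_lens + overlap_degrees)

print(f"{pixels_per_degree:.1f} px per degree")   # ~14.4
print(f"~{per_lens_width:.0f} px per fisheye")    # ~2,744, i.e. about 2,600 plus overlap
```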

GoPro Fusion for 360

The sensors are slightly laterally offset from one another, allowing the camera to be thinner and decreasing the parallax shift at the side seams, but adding a slight offset at the top and bottom seams. If the camera is oriented upright, those seams are the least important areas in most shots. They also appear to have a good solution for hiding the camera support pole within the stitch, based on the demo footage they were showing. It will be interesting to see what effect the Fusion camera has on the “culture” of 360 video. It is not the first affordable 360-degree camera, but it will definitely bring 360 capture to new places.

A big part of the equation for 360 video is the supporting software and the need to get the footage from the camera to the viewer in a usable way. GoPro acquired Kolor, maker of Autopano Video Pro, a few years ago to support image stitching for their larger 360-video camera rigs, so certain pieces of the underlying software ecosystem for a 360-video workflow are already in place. The desktop solution for processing the 360 footage will be called Fusion Studio, and it is listed as coming soon on their website.

They have a pretty slick demonstration of flat image extraction from the video sphere, which they are marketing as “OverCapture.” This allows a cellphone to pan around the 360 sphere, which is pretty standard these days, but by recording that view in realtime it can output standard flat videos from the 360 sphere. This is a much simpler and more intuitive approach to virtual cinematography than trying to control the view with angles and keyframes in a desktop app.
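For those curious what that kind of flat extraction involves under the hood, here is a minimal sketch of the general technique: a rectilinear (gnomonic) reprojection of an equirectangular frame. This is not GoPro’s OverCapture code, just an illustration of the idea; it assumes NumPy and Pillow, and the file names are placeholders.

```python
import numpy as np
from PIL import Image

def extract_flat_view(equi, yaw_deg, pitch_deg, fov_deg=90, out_w=1280, out_h=720):
    """Sample a rectilinear (flat) view out of an equirectangular frame.
    equi: HxWx3 uint8 array in equirectangular projection."""
    eq_h, eq_w = equi.shape[:2]

    # Build a grid of rays for a virtual pinhole camera looking down +x.
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    xs = np.arange(out_w) - out_w / 2 + 0.5
    ys = np.arange(out_h) - out_h / 2 + 0.5
    px, py = np.meshgrid(xs, ys)
    rays = np.stack([np.full_like(px, f), px, -py], axis=-1)   # (x fwd, y right, z up)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by the requested yaw (around z) and pitch (around y).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    rot_yaw = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                        [np.sin(yaw),  np.cos(yaw), 0],
                        [0, 0, 1]])
    rot_pitch = np.array([[np.cos(pitch), 0, -np.sin(pitch)],
                          [0, 1, 0],
                          [np.sin(pitch), 0, np.cos(pitch)]])
    rays = rays @ (rot_yaw @ rot_pitch).T

    # Convert ray directions to longitude/latitude, then to source pixels.
    lon = np.arctan2(rays[..., 1], rays[..., 0])           # -pi..pi
    lat = np.arcsin(np.clip(rays[..., 2], -1, 1))          # -pi/2..pi/2
    src_x = ((lon / (2 * np.pi) + 0.5) * eq_w).astype(int) % eq_w
    src_y = ((0.5 - lat / np.pi) * eq_h).astype(int).clip(0, eq_h - 1)
    return equi[src_y, src_x]

# Usage: pull a 90-degree view looking 30 degrees to the right, slightly up.
frame = np.asarray(Image.open("equirect_frame.jpg"))  # hypothetical stitched frame
flat = extract_flat_view(frame, yaw_deg=30, pitch_deg=10)
Image.fromarray(flat).save("flat_view.jpg")
```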

The OverCapture workflow should result in a flat video with a strong fisheye look, similar to more traditional GoPro shots, due to the similar lens characteristics. There are a variety of possible approaches to handling that fisheye look. GoPro’s David Newman explained to me some of the solutions he has been working on to re-project GoPro footage onto a sphere, in order to reframe or alter the field of view in a virtual environment. Based on their demo reel, it looks like they also have some interesting tools coming for the unique functionality that 360 makes available to content creators, using various 360 projections for creative purposes within a flat video.

GoPro Software
On the software front, GoPro has also been developing tools to help its camera users process and share their footage. One of the inherent issues of action-camera footage is that there is basically no trigger discipline. You hit record long before anything happens, and then get back to the camera after the event in question is over. I used to get one-hour roll-outs that had 10 seconds of usable footage within them. The same is true when recording many attempts to do something before one of them succeeds.

Remote control of the recording process has helped with this a bit, but regardless you end up with tons of extra footage that you don’t need. GoPro is working on software tools that use AI and machine learning to sort through your footage and find the best parts automatically. The next logical step is to start cutting together the best shots, which is what Quikstories in their mobile app is beginning to do. As someone who edits video for a living, and is fairly particular and precise, I have a bit of trouble with the idea of using something like that for my videos, but for someone to whom the idea of “video editing” is intimidating, this could be a good place to start. And once the tools get to a point where their output can be trusted, automatically sorting footage could make even very serious editing a bit easier when there is a lot of potential material to get through. In the meantime though, I find their desktop tool Quik to be too limiting for my needs and will continue to use Premiere to edit my GoPro footage, which is the response I believe they expect of any professional user.

There are also a variety of new camera mount options available, including small extendable tripod handles in two lengths, as well as a unique “Bite Mount” (pictured, left) for POV shots. It includes a colorful padded float in case it pops out of your mouth while shooting in the water. The tripods are extra important for the forthcoming Fusion, to support the camera with minimal obstruction of the shot. And I wouldn’t recommend using the Fusion on the Bite Mount, unless you want a lot of head in the shot.

Ease of Use
Ironically, as someone who has processed and edited hundreds of hours of GoPro footage, and even worked for GoPro for a week on paper (as an NAB demo artist for Cineform during their acquisition), I don’t think I had ever actually used a GoPro camera. The fact that at this event we were all handed new cameras with zero instructions and expected to go out and shoot is a testament to how confident GoPro is that their products are easy to use. I didn’t have any difficulty with it, but the engineer within me wanted to know the details of the settings I was adjusting. Bouncing around with water hitting you in the face is not the best environment for learning how to do new things, but I was able to use pretty much every feature the camera had to offer during that ride with no prior experience. (Obviously I have extensive experience with video, just not with GoPro usage.) And I was pretty happy with the results. Now I want to take it sailing, skiing and other such places, just like a “normal” GoPro user.

I have pieced together a quick highlight video of the various features of the Hero6:


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content in order to render a custom perspective based on the position of the viewer’s head, there is also a “video” version of VR, called 360 Video. The difference between “Full VR” and “360 Video” is that while both allow you to look around in every direction, 360 Video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 Video can still offer a very immersive experience and arguably better visuals, since they aren’t being rendered on the fly. 360 Video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360 to dual 180-degree lens options like the Gear 360, Nikon KeyMission and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into the much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a format fairly specific to that camera, and those files will need to be processed before they can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees each, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in files with a 2:1 aspect ratio. These are encoded as JPEG (7776×3888 stills) or HEVC (3840×1920 video) at 30Mb/s and saved onto a microSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, a variety of other functions become available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.
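Because both image circles land side by side in one 2:1 frame, pulling the two fisheye views apart for a quick look is just a crop down the middle. The snippet below is my own illustration using Pillow, not part of Samsung’s tooling, and the file name is hypothetical.

```python
from PIL import Image

# My own illustration: the Gear 360 saves both fisheye image circles side by
# side in a single 2:1 frame, so separating the front and back views is a
# simple crop down the middle.
frame = Image.open("gear360_still.jpg")     # e.g. a 7776x3888 JPEG still
w, h = frame.size                           # width is twice the height (2:1)
front = frame.crop((0, 0, w // 2, h))       # left image circle
back = frame.crop((w // 2, 0, w, h))        # right image circle
front.save("front_fisheye.jpg")
back.save("back_fisheye.jpg")
```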

Pricier cameras

One of the big challenges of shooting with any 360 camera is avoiding getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with, which are pictured below, and they are far less intrusive in the recorded images. Beyond the camera support, there is obviously the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
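The equirectangular convention itself is straightforward: longitude maps linearly to the horizontal axis and latitude to the vertical axis, which is why a full 360x180-degree sphere lands in a 2:1 frame. Here is a small sketch of that mapping (my own illustration):

```python
# My own illustration of the equirectangular convention: longitude maps linearly
# to x and latitude maps linearly to y, so a 360x180-degree sphere fills a 2:1 frame.
def sphere_to_equirect(lon_deg, lat_deg, width, height):
    """Map a direction (longitude/latitude in degrees) to pixel coordinates."""
    x = (lon_deg / 360.0 + 0.5) * width      # -180..180 degrees -> 0..width
    y = (0.5 - lat_deg / 180.0) * height     # +90 (straight up) -> top row
    return x, y

def equirect_to_sphere(x, y, width, height):
    """Inverse mapping: pixel coordinates back to longitude/latitude in degrees."""
    lon_deg = (x / width - 0.5) * 360.0
    lat_deg = (0.5 - y / height) * 180.0
    return lon_deg, lat_deg

print(sphere_to_equirect(0, 0, 3840, 1920))   # center of the frame: (1920.0, 960.0)
print(equirect_to_sphere(0, 0, 3840, 1920))   # top-left corner: (-180.0, 90.0)
```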

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins to dedicated stitching applications like Kolor AVP and Orah’s VideoStitch Studio to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, aside from VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.
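Those “few changes to the metadata” generally mean tagging the finished file as spherical so players treat it as a 360 video rather than a distorted flat clip. One common route is Google’s open-source spatial-media metadata injector; the call below is a hedged sketch of its inject usage as I recall it from the project’s README (check the flags against your installed version), wrapped in Python only for convenience.

```python
import subprocess

# Hedged sketch: after encoding the equirectangular edit as ordinary H.264/H.265,
# inject spherical metadata so players recognize it as 360 video. This follows
# the "-i" (inject) usage of Google's open-source spatial-media tool as I recall
# it; verify the invocation against the version you have installed.
subprocess.run(
    ["python", "spatialmedia", "-i", "my_360_edit.mp4", "my_360_edit_injected.mp4"],
    check=True,
)
```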

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Blackmagic’s new Ultimatte 12 keyer with one-touch keying

Building on the 40-year heritage of its Ultimatte keyer, Blackmagic Design has introduced the Ultimatte 12 realtime hardware compositing processor for broadcast-quality keying, adding augmented reality elements into shots, working with virtual sets and more. The Ultimatte 12 features new algorithms and color science, enhanced edge handling, greater color separation and color fidelity and better spill suppression.

The 12G-SDI design gives Ultimatte 12 users the flexibility to work in HD and switch to Ultra HD when they are ready. Sub-pixel processing is said to boost image quality and textures in both HD and Ultra HD. The Ultimatte 12 is also compatible with most SD, HD and Ultra HD equipment, so it can be used with existing cameras.

With Ultimatte 12, users can create lifelike composites and place talent into any scene, working with both fixed cameras and static backgrounds or automated virtual set systems. It also enables on-set previs in television and film production, letting actors and directors see the virtual sets they’re interacting with while shooting against a green screen.
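For readers unfamiliar with what a keyer actually does, the toy example below shows the basic idea in Python: derive a matte from how green each foreground pixel is, then blend the talent over a background plate. It is only a rough illustration of the concept and nothing like Ultimatte’s algorithms; the file names are placeholders, and both plates are assumed to be the same resolution.

```python
import numpy as np
from PIL import Image

# Toy green-screen key, just to illustrate the basic idea a hardware keyer
# automates. This is a rough example, nothing like Ultimatte's algorithms;
# file names are placeholders and both plates are assumed to match in size.
fg = np.asarray(Image.open("talent_greenscreen.png").convert("RGB")).astype(np.float32)
bg = np.asarray(Image.open("virtual_set.png").convert("RGB")).astype(np.float32)

r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
spill = g - np.maximum(r, b)                   # how much green exceeds the other channels
matte = np.clip(1.0 - spill / 80.0, 0.0, 1.0)  # 1 = keep foreground, 0 = show background
comp = fg * matte[..., None] + bg * (1.0 - matte[..., None])

Image.fromarray(comp.astype(np.uint8)).save("composite.png")
```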

Here are a few more Ultimatte 12 features:

  • For augmented reality, on-air talent typically interacts with glass-like computer-generated charts, graphs, displays and other objects with colored translucency. Adding tinted, translucent objects is very difficult with a traditional keyer, and the results don’t look realistic. Ultimatte 12 addresses this with a new “realistic” layer compositing mode that can add tinted objects on top of the foreground image and key them correctly.
  • One-touch keying technology analyzes a scene and automatically sets more than 100 parameters, simplifying keying as long as the scene is well-lit and the cameras are properly white-balanced. With one-touch keying, operators can pull a key accurately and with minimum effort, freeing them to focus on the program with fewer distractions.
  • Ultimatte 12’s new image processing algorithms, large internal color space and automatic internal matte generation let users work on different parts of the image separately with a single keyer.
  • For color handling, Ultimatte 12 has new flare, edge and transition processing to remove backgrounds without affecting other colors. The improved flare algorithms can remove green tinting and spill from any object — even dark shadow areas or through transparent objects.
  • Ultimatte 12 is controlled via Ultimatte Smart Remote 4, a touch-screen remote device that connects via Ethernet. Up to eight Ultimatte 12 units can be daisy-chained together and connected to the same Smart Remote, with physical buttons for switching and controlling any attached Ultimatte 12.

Ultimatte 12 is now available from Blackmagic Design resellers.


Sony adds 36×24 full-frame camera to CineAlta line

Sony has introduced Venice, the company’s first full-frame digital motion picture camera system and the newest addition to its CineAlta lineup. It is designed to expand the filmmaker’s creative freedom through immersive, large-format, full-frame capture, producing filmic imagery with natural skin tones, elegant highlight handling and wide dynamic range.

Venice was officially unveiled on September 6 to American Society of Cinematographers (ASC) members and a range of other industry pros. Sony also screened the first footage shot with Venice: The Dig, a short film produced in anamorphic, written and directed by Joseph Kosinski and shot by Academy Award-winning cinematographer Claudio Miranda, ASC.

The new sensor.

“We really went back to the drawing board for this one,” says Peter Crithary, marketing manager, Sony Electronics. “It is our next-generation camera system, a ground-up development initiative encompassing a completely new image sensor. We carefully considered key aspects such as form factor, ergonomics, build quality, ease of use, a refined picture and painterly look — with a simple, established workflow. We worked in close collaboration with film industry professionals. We also considered the longer-term strategy by designing a user-interchangeable sensor that is as quick and simple to swap as removing four screws, and can accommodate different shooting scenarios as the need arises.”

Venice features a newly developed 36x24mm full-frame sensor to meet the demands of feature filmmaking. Full frame offers the advantages of compatibility with a wide range of lenses, including anamorphic, Super 35mm, spherical and full-frame PL mount lenses for a greater range of expressive freedom with shallow depth of field. The lens mount can also be changed to support E-mount lenses for shooting situations that require smaller, lighter and wider lenses. User-selectable areas of the image sensor allow shooting in Super 35mm 4-perf. Future firmware upgrades are planned to allow the camera to handle 36mm-wide 6K resolution. Fast image scan technology minimizes “Jello” effects.

A new color management system with an ultra-wide color gamut gives users more control and greater flexibility in working with images during grading and post production. Venice also has more than 15 stops of latitude to handle challenging lighting situations from low light to harsh sunlight with a gentle roll-off handling of highlights.

Venice uses Sony’s 16-bit RAW/X-OCN via the AXS-R7 recorder, and 10-bit XAVC workflows. The new camera is also compatible with current and upcoming CineAlta camera hardware accessories, including the DVF-EL200 full-HD OLED viewfinder, AXS-R7 recorder, AXS-CR1 and high-speed Thunderbolt-enabled AXS-AR1 card reader, using established AXS and SxS memory card formats.

Venice has a fully modular and intuitive design with functionality refined to support simple and efficient on-location operation. It is the film industry’s first camera with a built-in stage glass ND filter system, making the shooting process efficient and streamlining camera setup. The camera is designed for easy operation with an intuitive control panel placed on the assistant and operator sides of the camera. A 24-V power supply input/output and LEMO connector allow use of many standard camera accessories designed for use in harsh environments.

Users can customize Venice by enabling the features needed, matched to their individual production requirements. Optional licenses will be available in permanent, monthly and weekly durations to expand the camera’s capabilities, with new features including 4K anamorphic and full frame sold separately.

The Venice CineAlta digital motion picture camera system is scheduled to be available in February 2018.


Agent of Sleep: The making of a spec commercial

By Jennifer Walden

Names like Jason Bourne and James Bond make one think “eternal sleep,” not merely a “restful” one. That’s what makes director/producer/writer Stephen Vitale’s spec commercial for Tempur-Pedic mattresses so compelling. Like a mad scientist crossing a shark with a sheep, Vitale combines an energetic spy/action film aesthetic with the sleepy world of mattress advertising for Agent of Sleep.

Vitale originally pitched the idea to a different mattress brand. “That brand passed, and I decided they were silly to, so I made the spot that exists on spec and chose to use Tempur-Pedic as the featured brand instead. I hear Tempur-Pedic really enjoyed the spot.”

In Agent of Sleep, two assailants fight their way up a stairwell and into a sun-dappled apartment where their altercation eventually leads into a bedroom and onto a comfy (albeit naked) mattress. One assailant applies a choke hold to the other but his grip loosens as he falls fast asleep. The other assailant lies down beside the first and promptly falls asleep too.

LA-based Vitale drew inspiration from Bourne and Bond films. He referenced fight scenes from Haywire, John Wick and Mission Impossible too. “Mostly all of them have a version of the action sequence in Agent of Sleep — a visceral, intimate fight between spies/hired guns that ends with one of them getting choked out. It was about distilling this trope, dropping a viewer right into the middle of it to grab them and immediately establishing visuals that would tap into the familiarity they have with the setup.”

Once the spy/action foundation was in place, Vitale (who is pictured shooting in our main image) added tropes from mattress ads to his concept, like choosing a warmly lit, serene apartment and ending the spot with a couple lying comfortably on a bare mattress as a narrator shares product information. “The spies are bursting into what would be the typical setting for a mattress ad and they upend all of its elements. The visuals reflect that trajectory.”

To achieve the desired cinematic look, Vitale chose the Arri Alexa Mini with Cooke anamorphic lenses, and shot in a wide aspect ratio of 2.66:1 — wider than standard CinemaScope. “My cinematographer David Bolen and I felt like it gave the confined sets and the close-range fist fight a bigger scope and pushed the piece further away from the look of an ad.”

They shot in a practical location and dressed it to replicate the bedrooms shown in actual Tempur-Pedic product images. As for smashing through the bedroom wall, that wasn’t part of the plan but it did add to the believability of the fight. “That was an accidental alteration to the location,” jokes Vitale.

The handheld camera movement up front adds to the energy of the fight, and Vitale framed the shots to clearly show who is throwing the punch and how hard it landed. “I tried to design longer takes and find angles that created a dance between the camera and the amazing fight work from Yoshi Sudarso and Cory DeMeyers.”

In contrast, the spot ends with steady, smooth shots that exude a calm feeling. Vitale says, “We used a jib and sticks for the end shots because I wanted it to be as tranquil and still as possible to play up the joke.”

Production sound was captured with a Røde NTG-2 boom mic onto a Zoom H5 recorder. The vocalizations from the two spies on-set, i.e. their breaths and efforts, were all used in post. Vitale, who handled the sound design and final mix, says, “I would use alt audio takes and drop in grunts and impact reactions to shots that needed a boost. The main goal was that it felt kinetic throughout and that the fight sounded really visceral. A lot of punch sounds were layered with other sound effects to avoid them feeling canned, and I also did Foley for different moments in the spot to help fill it out and give it a more natural sound.”

The Post
Vitale also handled picture editing using Apple Final Cut Pro 7, which worked out perfectly for him. Editing the spot was pretty straightforward, since he had designed a solid plan for the shoot and didn’t need to cover extra shots and setups. “I usually only shoot what I know I will use,” he says. “The one shot I didn’t use was an insert of the glass the woman drops, shattering on the floor. So structurally, it was easy to find. The rest was about keeping cuts tight, making sure the longer takes didn’t drag and the quicker cuts were still clear and exciting to watch.”

Vitale worked with colorist Bryan Smaller, who uses Blackmagic Resolve. They agreed that fully committing to the action film aesthetic, playing with contrast levels and grain to keep the image gritty and grounded, was the best way of not letting the audience in on the joke until the end. “For the stairwell and hallway, we leaned into the green and orange hues of those respective locations. The apartment has a bit of a teal hue to it and has a much more organic feel, which again was to help transition the spies and the audience into the mattress ad world, so to speak,” explains Vitale.

The icing on the cake was composer Patrick Sullivan’s action film-style score. “He did a great job of bringing the audience into the action and creating tension and excitement. We’ve been friends since elementary school and played in a band together, so we can find what’s working and what’s not pretty quickly. He’s one of my most consistent collaborators, in various aspects of post production, and he always brings something special to the project.”


Jennifer Walden is a New Jersey-based writer. Follow her at @audiojeney on Twitter.

Review: Polaroid Cube+

By Brady Betzel

There are a lot of options out there for outdoor, extreme sports cameras — GoPro is the first that comes to mind with their Hero line, but even companies like Garmin have their own versions that are gaining traction in the niche action-camera market. Polaroid has been trying its hand at lots of product markets lately, from camera sliders to monopods and even video cameras with the Polaroid Cube+.

I’m a big fan of GoPro cameras, but one thing that might keep people away is the price. So what if you want something that will record video and take still pictures at a lower cost? That’s where the Polaroid Cube+ fits in. It’s a cube-shaped HD camera that is not much larger than a few sugar cubes. It can film HD video (technically 720p at 30, 60 or 120fps; 1080p at 30 or 60fps; or 1440p at 30fps), as well as take still images at four megapixels interpolated into eight megapixels.

Right off the bat you’ll read “4MP interpolated into 8MP,” which really means it’s a 4MP camera sensor that uses some sort of algorithm, like bicubic interpolation, to blow up your image with a minimal amount of quality loss. Think of it this way — if you are viewing images on your smartphone, you probably won’t see a lot of problems except for your image being a little soft. Other than that tricky bit of word play (which is not uncommon among camera manufacturers), the Cube+ has a decent retail price at just $150.
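If you’re curious what that kind of interpolated blow-up amounts to, here is a minimal sketch using Pillow’s bicubic resampling. This is just my own illustration, not Polaroid’s in-camera processing, and the file name and exact scale factor are assumptions.

```python
import math
from PIL import Image

# My own illustration of an interpolated blow-up, not Polaroid's in-camera
# processing: double the pixel count of a 4MP frame with bicubic resampling.
# The file gets bigger; the real detail does not.
photo = Image.open("cube_plus_still.jpg")          # hypothetical 4MP source frame
scale = math.sqrt(2)                               # 2x the pixels = sqrt(2) per side
upscaled = photo.resize(
    (round(photo.width * scale), round(photo.height * scale)),
    resample=Image.BICUBIC,
)
upscaled.save("cube_plus_still_8mp.jpg")
```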

In my mind, this is a camera that can be used as an educational tool for young filmmakers or for a filmmaker who wants to get a really sneaky b-roll shot in a tight space without paying a high cost. The sound quality isn’t great, but it’s good for reference when syncing cameras together or in an emergency when there is no other audio recording.

Inside the box you get the Cube+ in black, red or teal; a microUSB cable to charge and connect the Cube+ to your computer; a user guide; and an 8GB microSD card. There is a WiFi button, a power/record button and a back cover. Your microSD card lives under the back cover, and the connection for the microUSB cable can be found there as well.

The Cube+ has WiFi built in, so you can access the camera from your Android phone or iPhone, control the camera and its settings, or even browse the camera’s contents. You must have the Polaroid app to control the Cube+’s camera settings; otherwise it will default to whatever you set last. To start filming or taking pictures, you hold the power button for three seconds to turn the camera on. You click the button on the top twice to start recording video, then click once more to end the recording. You click just once to take a picture.

The Cube+ films through a 124-degree lens that has a fisheye look like many wide-angle action cams. According to Polaroid, the Cube+ has image stabilization built in, but I still found the footage to be shaky. It’s possible the video would be even shakier without it, but my footage needed some stabilization work in post.

In my opinion, what really sets this camera apart from other action cameras, besides the price point, is the magnet inside the camera that allows you to stick it to anything magnetic without buying additional accessories. Others should consider adding that to their lineup too.

I took the Cube+ to the Santa Barbara Zoo with one of my sons recently and wasn’t afraid to give it to him to film or take pictures with. Since it is splash proof, it can even get a little wet without ruining it. Again, I really love the ability to mount the Cube+ to almost anything with its magnet on the bottom, which is pretty strong. We were riding the train around the zoo, and I stuck it to the train rail without a worry of it falling off. But I did notice when using it that the magnet did get pretty warm, as in it would border on being too hot to touch. Just something to keep in mind if you let kids use it.

In the end, the Polaroid Cube+ is not on the quality level of the GoPro Hero5 Session, but it might be good for someone filming for the first time who doesn’t want to spend a lot of money. And at $150, it might be a good b-roll camera when used in conjunction with your phone’s camera.

You can check out more about the Polaroid Cube+ in its user manual.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Fangs Film Gear’s Wolf Packs and Panther lens bags

By Brady Betzel

Summertime is the perfect time to make sure you have the right gear bags to throw your cameras, memory cards and lenses in. Fangs Film Gear is a brand from the company Release the Hounds Studios. They also have other products, like Ground Control color correction LUTs, Wave Brigade royalty-free sound effects and ambience, and a podcast called Video Dogpound.

I found out about them when I was watching some music videos for inspiration and slowly fell down a rabbit hole that led me to a great organization called Heart Support. It’s basically a group that lends a helping hand to people having a hard time. I found co-owner Casey Faris’ YouTube page, where he has some awesome and easily digestible video editing and color correction tutorials — mainly on Blackmagic Design Resolve. You can find his co-owner Dan Bernard’s YouTube page here. They were promoting some of their LUTs from Ground Control and also some gear bags, which I’m now reviewing.

Fangs Film Gear Panther Lens Bags
These are black weather-resistant drawstring lens bags lined with lens-quality micro fiber cloth. They come in three sizes: small for $19.99, medium for $22.99 and large for $24.99. You can also buy one of each for $64.99. Once you touch them you will immediately feel the durability on the outside but the softness a lens demands on the inside. When using these bags you will always have a great lens cloth nearby.

Not only have I been toting around my lenses in these bags, but they’ve made great GoPro carrying bags since the GoPro’s lens is constantly exposed. The small bag works with a compact DSLR lens and is perfect for something like the Nifty Fifty Canon 50mm lens; it’s about the size of an iPhone 7 Plus or my Samsung Galaxy S8+. Even the Blackmagic Pocket Cinema Camera fits well — the bag measures 5×7 inches when lying flat. The medium bag measures 6×8 inches and is good for multiple GoPros or a medium-sized lens like my Micro Four Thirds Lumix 14-140. The large measures 6.5×9.25 inches and is obviously great for a longer lens, but in a rush I sometimes keep my Blackmagic Pocket Cinema Camera in it with the 14-140 lens attached. All the bags are weather resistant, meaning you can splash some water on them and it won’t get through. However, since they are drawstring bags, water can still get in through the top.

If you are looking for a GoPro-specific gear bag they also carry something called The Viper, a GoPro-focused sling bag. And if you are a DJI Mavic owner, they sell a two-pack of Panther bags that will fit the remote and Mavic — it looks essentially like the small and large Panther bags.

Fangs Film Gear Tactical Production Organizers
These are called Wolf Packs but I like to call them sweet dad bags. Not only do they have a practical production purpose, but they are great for dads who have to carry baby stuff around but want a little more stylish look.

So first, the production purpose of the Wolf Packs. It’s really genius and simple: one side is green for your charged batteries or unused memory cards, and the red side is for depleted batteries and used memory cards. No more worrying about which cards have been used, or having to try and label a bunch of microSD cards with some gaffer’s tape. Now for the dad use of the Wolf Packs — green for the clean diapers and red for the used diapers! If you’ve ever used cloth diapers you may be a little more familiar with this technique.

The Wolf Packs are ultra durable and haven’t shed a stitch since I’ve used them in production scenarios, and even Disneyland dad scenarios. The zippers are extremely sturdy, but what impressed me the most were the included carabiner and the carabiner grommet on the Wolf Packs themselves. The grommet is very high quality and won’t rip. The carabiner itself isn’t rock-climbing grade, but it will do for almost any situation you will need it for. The clip makes these bags easy to attach almost anywhere, especially to my backpack.

Inside the Wolf Pack is a durable fabric that isn’t the same as the Panther Lens bags, so do not clean your lenses with these! The pockets are made to stand up to the abuse of throwing batteries in and out all day long. I would love to see one of these with the microfiber lining like the Panther bags, but I also see the benefit of using the two separately. The Wolf Packs break down like this: the small is 6.5×5.5 inches for $29.99, the medium is 8.25×7 inches for $34.99 and the large is 9×9 inches for $39.99. You can also purchase all three for $99.99.

Summing Up
I’ve definitely put these bags through the wringer over an extended period of time to make sure they will hold up. I am particularly concerned about things like zippers, stitching and cinches, and just a month or so of testing won’t give you a great sample. So over multiple months I’ve taken the Panther Lens Bags and Wolf Packs hiking in the Simi Valley mountains (running into rattlesnakes along the way), loaded with GoPros, batteries, memory cards, lenses, Blackmagic Pocket Cinema Cameras and much more.

I even lightly dropped some of the bags into the water with GoPros and BMPCCs inside and found no damage. I’ve really come to love the lens bags, especially since I know I always have a quick lens-cleaning cloth with me. The Wolf Packs are something I constantly keep with me: great for shoots where I need to change out batteries and memory cards, but also great for kid snacks, chapstick and sunscreen. Without hesitation I would order these again; the fabric and stitching are top notch. I had my wife, who really likes to sew and make clothing, take a look at them, and she was really impressed with the Wolf Packs… so much so that there is now one missing.

Check them out at their website www.fangsfilmgear.com, Twitter @FangsFilmGear and their main company site. Finally, if you are interested in some positivity you should check out www.heartsupport.com.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Quick Look: Jaunt One’s 360 camera

By Claudio Santos

To those who have been following the virtual reality market from the beginning, one very interesting phenomenon is how the hardware development seems to have outpaced both the content creation and the software development. The industry has been in a constant state of excitement over the release of new and improved hardware that pushes the capabilities of the medium, and content creators are still scrambling to experiment and learn how to use the new technologies.

One of the products of this tech boom is the Jaunt One camera. It is a 360 camera that was developed with an explicit focus on addressing the many production complexities that plague real-life field shooting. What do I mean by that? Well, the camera quickly disassembles and allows you to replace a broken camera module. After all, when you’re across the world and the elephant that is standing in your shot decides to play with the camera, it is quite useful to be able to quickly swap parts instead of having to replace the whole camera or send it in for repair from the middle of the jungle.

Another of the main selling points of the Jaunt One camera is the streamlined cloud finishing service they provide. It takes the content creator all the way from shooting on set through stitching, editing, onlining and preparing the different deliverables for all the different publishing platforms available. The pipeline is also flexible enough to allow you to bring your footage in and out of the service at any point so you can pick and choose what services you want to use. You could, for example, do your own stitching in Nuke, AVP or any other software and use the Jaunt cloud service to edit and online these stitched videos.

The Jaunt One camera takes a few important details into consideration, such as the synchronization of all of the shutters in the lenses. This prevents stitching abnormalities in fast moving objects that are captured in different moments in time by adjacent lenses.

The camera doesn’t have an internal ambisonic microphone, but the cloud service supports ambisonic recordings made in a dual system or Dolby Atmos. It was interesting to notice that one of the toolset apps they released was the Jaunt Slate, a tool that allows for easy slating on all the cameras (without having to run around the camera like a child, clapping repeatedly) and is meant to automate the synchronization of the separate audio recordings in post.
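Automated sync of that sort usually comes down to lining up a common reference sound across the recordings. Below is a toy sketch of the general idea using cross-correlation in Python; it is my own illustration, not Jaunt’s implementation, and the synthetic “clap” stands in for real slate audio.

```python
import numpy as np

# Toy sketch of automated slate sync: find the offset between two recordings of
# the same slate sound by locating the peak of their cross-correlation.
def find_offset_samples(reference, other):
    """Return how many samples 'other' lags behind 'reference'."""
    corr = np.correlate(other, reference, mode="full")
    return int(np.argmax(corr) - (len(reference) - 1))

# Usage with synthetic audio: the same click placed at two different positions.
sr = 48000
click = np.hanning(256)
ref = np.zeros(sr // 4); ref[1000:1256] += click       # clap at sample 1,000
other = np.zeros(sr // 4); other[7000:7256] += click   # same clap at sample 7,000
print(find_offset_samples(ref, other))                 # 6000 samples = 0.125s at 48kHz
```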

The Jaunt One camera shows that the market is maturing past its initial DIY stage and the demand for reliable, robust solutions for higher budget productions is now significant enough to attract developers such as Jaunt. Let’s hope tools such as these encourage more and more filmmakers to produce new content in VR.