
Category Archives: IBC

I was an IBC virgin

By Martina Nilgitsalanont

I recently had the opportunity to attend the IBC show in Amsterdam. My husband, Mike Nuget, was asked to demonstrate workflow and features of FilmLight’s Baselight software, and since I was in between projects — I’m an assistant editor on Showtime’s Billions and will start on Season 4 in early October — we turned his business trip into a bit of a vacation as well.

Although I’ve worked in television for quite some time, this was my first trip to an industry convention, and what an eye opener it was! The breadth and scope of the exhibit halls, the vendors, the attendees and all the fun tech equipment that gets used in the film and television industry took my breath away (dancing robotic cameras??!!). My husband attempted to prepare me for it before we left the states, but I think you have to experience it to fully appreciate it.

Since I edit on Media Composer, I stopped by Avid’s booth to see what new features they were showing off, and while I saw some great new additions, I was most tickled when one of the questions I asked stumped the coders. They took note of what I was asking the feature to do, and let me know, “We’ll work on that.” I’ll be keeping an eye out!

Of course, I spent some time over at the FilmLight booth. It was great chatting with the folks there and getting to see some of Baselight’s new features. And since Mike was giving a demonstration of the software, I got to attend some of the other demos as well. It was a real eye opener as to how much time and effort goes into color correction, whether it’s on a 30-second commercial, documentary or feature film.

Another booth I stopped by was Cinedeck, over at the Launchpad. I got a demo of their CineXtools, and I was blown away. How many times do we receive a finished master (file) that we find errors in? With this software, instead of making the fixes and re-exporting (and QCing) a brand-new file, you can insert the fixes and be done! You can remap audio tracks if they’re incorrect, or even fix an incorrect closed caption. This is, I’m sure, a pretty watered-down explanation of some of the things the CineX software is capable of, but I was floored by what I was shown. Why more finishing houses aren’t aware of this is beyond me. It seems like it would be a huge time saver for the operator(s) who need to make the fixes.

Amsterdam!
Since we spent the week before the convention in Amsterdam, Mike and I got to do some sightseeing. One of our first stops was the Van Gogh Museum, which was very enlightening and had an impressive collection of his work. We took a canal cruise at night, which offered a unique vantage point of the city. And while the city is beautiful during the day, it’s simply magical at night — whether by boat or simply strolling through the streets — with the warm glow from living rooms and streetlights reflected in the water below.

One of my favorite things was a food tour in the Jordaan district, where we were introduced to a fantastic shop called Jwo Lekkernijen. They sell assorted cheeses, delectable deli meats, fresh breads and treats. Our prime focus while in Amsterdam was to taste the cheese, so we made a point of revisiting later in the week so that we could delight in some of the best sandwiches EVER.

I could go on and on about all our wanderings (Red Light District? Been there. Done that. Royal Palace? Check.), but I’ll keep it short and say that Amsterdam is definitely a city that should be explored fully. It’s a vibrant and multicultural metropolis, full of warm and friendly people, eager to show off and share their heritage with you. I’m so glad I tagged along!

AJA Introduces Kona 5 with 12G-SDI I/O

At IBC 2018, AJA debuted Kona 5, a new eight-lane PCIe 3.0 video and audio I/O card supporting 12G-SDI I/O and HDMI 2.0 monitoring/output for workstations or Thunderbolt 3-connected chassis. Kona 5 supports 4K/UltraHD and HD high frame rate, deep color and HDR workflows over one cable. For developers, AJA’s SDK offers support for Kona 5 multi-channel 12G-SDI I/O, enabling multiple 4K streams of input or output.

The Kona 5 capture and output card is interoperable with standard tools such as Adobe Premiere Pro, Apple Final Cut Pro X and Avid Media Composer, using AJA’s Mac OS and Windows drivers and application plug-ins. The card supports simultaneous capture with pass-through monitoring when using 12G-SDI and offers HDMI 2.0 output for connecting to the latest displays.

“With today’s audiences expecting the highest quality content, high resolution, high frame rate and deep color are quickly becoming the norm across broadcast and post workflows, prompting the need for faster, more efficient approaches,” says AJA president Nick Rashby. “Kona 5 combines the flexibility of AJA’s Io 4K Plus into a desktop I/O solution with a more powerful feature set.”

Kona 5 feature highlights include:

• 12G-SDI I/O and HDMI 2.0 monitoring/output for 4K, UltraHD, 2K, HD and SD with HFR support up to 4K 60p at YUV 10-bit 4:2:2 and support for RGB 12-bit 4:4:4 up to 4K 30p
• 4x bi-directional 12G-SDI ports and 1x reference in, on robust HD-BNC connectors, with HD-BNC to full-sized BNC cables included
• 16-channel embedded audio on SDI, 8-channel embedded audio on HDMI
• 8-channel AES audio I/O, LTC I/O, and RS-422 serial control via supplied break-out cable
• 10-bit downstream keyer in hardware, supporting up to 4K resolution
• Compatibility with Adobe Premiere Pro, Apple Final Cut Pro X, Avid Media Composer, Telestream Wirecast, AJA Control Room and others
• AJA SDK compatibility, offering advanced features including multi-channel 4K I/O
• Three-year international warranty


Xytech intros mobile UI, REST APIs for MediaPulse at IBC

Xytech, maker of the MediaPulse facility management software, has introduced a new user interface that extends MediaPulse to a wider range of users and expands support for multiple devices.

The new MediaPulse UI provides custom screens tailored to the needs of operations staff, producers, facility managers, field crews and freelancers. The goal of the new UI, according to Xytech, is to increase efficiency and consistency for the entire organization and speed media workflows.

“Our new MediaPulse Mobile UI is designed to provide a personalized interface for all team members,” explains Greg Dolan of Xytech. “This is the beginning of a crucial strategy for Xytech as we expand our technology from the hands of operational and financial users to all users in the enterprise.”

Xytech has also announced the latest release of the MediaPulse Development Platform. The platform provides integrations with all systems through an API library now supporting REST calls. Triggered messaging, parameter-based alerts and automated report delivery are all included with the new release.


Adobe updates Creative Cloud

By Brady Betzel

You know it’s almost fall when pumpkin spice lattes are back and Adobe announces its annual updates. At this year’s IBC, Adobe had a variety of updates to its Creative Cloud line of apps. From more info on its new editing platform Project Rush to the addition of Characterizer to Character Animator — there are a lot of updates, so I’m going to focus on a select few that I think really stand out.

Project Rush

I use Adobe Premiere quite a lot these days; it’s quick and relatively easy to use and will work with pretty much every codec in the universe. In addition, the Dynamic Link between Adobe Premiere Pro and Adobe After Effects is an indispensable feature in my world.

With the 2018 fall updates, Adobe Premiere will be closer to a color tool like Blackmagic’s Resolve with the addition of new hue saturation curves in the Lumetri Color toolset. In Resolve, these are some of the most important aspects of the color corrector, and I think that will be the same for Premiere. From Hue vs. Sat, which can help isolate a specific color and desaturate it, to Hue vs. Luma, which can help add or subtract brightness values from specific hues and hue ranges — these new color correcting tools further Premiere’s venture into true professional color correction. These new curves will also be available inside of After Effects.
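
For readers who haven't used these curves, here is a rough sketch in Python of the idea behind a Hue vs. Sat adjustment: saturation is scaled only for pixels whose hue falls in a selected range. This is purely illustrative and is not Lumetri's actual math; the thresholds are made up, a real implementation uses a smooth curve rather than a hard cutoff, and hue wraparound is ignored.

```python
import colorsys

def hue_vs_sat(rgb, hue_center=0.33, hue_width=0.08, sat_scale=0.2):
    """Scale saturation only for pixels whose hue is near hue_center (all 0-1)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    if abs(h - hue_center) < hue_width:   # pixel falls in the selected hue range
        s *= sat_scale                    # pull its saturation down
    return colorsys.hsv_to_rgb(h, s, v)

# A green pixel is desaturated; a red one passes through untouched.
print(hue_vs_sat((0.1, 0.8, 0.2)))   # green: much grayer
print(hue_vs_sat((0.8, 0.1, 0.1)))   # red: unchanged
```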

After Effects features many updates, but my favorites are the ability to access depth matte data of 3D elements and the addition of the new JavaScript engine for building expressions.

There is one update that runs across both Premiere and After Effects that seems to be a sleeper update. The improvements to motion graphics templates, if implemented correctly, could be a time and creativity saver for both artists and editors.

AI
Adobe, like many other companies, seems to be diving heavily into the “AI” pool, which is amazing, but… with great power comes great responsibility. I realize others might not feel this way, but sometimes I don’t want all the work done for me. With new features like Auto Lip Sync and Color Match, editors and creators of all kinds shouldn’t lose sight of the forest for the trees. I’m not telling people to ignore these features, but asking that they put a few minutes into discovering how the color of a shot was matched, so they can fix it if something goes wrong. You don’t want to be the editor who says, “Premiere did it,” without a solution to fix it when it goes wrong.

What Else?
I would love to see Adobe take a stab at digging up the bones of SpeedGrade and integrating that into the Premiere Pro world as a new tab. Call it Lumetri Grade, or whatever. A page with a more traditional colorist layout and clip organization would go a long way.

In the end, there are plenty of other updates to Adobe’s 2018 Creative Cloud apps; you can read Adobe’s blog to learn about the rest.


Presenting at IBC vs. NAB

By Mike Nuget

I have been lucky enough to attend NAB a few times over the years, both as an onlooker and as a presenter. In 2004, I went to NAB for the first time as an assistant online editor, mainly just tagging along with my boss. It was awesome! It was very overwhelming and, for the most part, completely over my head. I loved seeing things demonstrated live by industry leaders. I felt I was finally a part of this crazy industry that I was new to. It was sort of a rite of passage.

Twelve years later, Avid asked me to present on the main stage. Knowing that I would be one of the demo artists that other people would sit down and watch — as I had done just 12 years earlier — was beyond anything I thought I would do back when I first started. The demo showed the Avid and FilmLight collaboration between the Media Composer and the Baselight color system, two of my favorite systems to work on. (Watch Mike’s presentation here.)

Thanks to my friend and now former co-worker Matt Schneider, who also presented alongside me, I had developed a very good relationship with the Avid developers and some of the people who run the Avid booth at NAB. And at the same time, the FilmLight team was quickly being put on my speed dial and that relationship strengthened as well.

This past NAB, Avid once again asked me to come back and present on the main stage about Avid Symphony Color and FilmLight’s Baselight Editions plug-in for Avid, but this time I would get to represent myself and my new freelance career change — I had just left my job at Technicolor-Postworks in New York a few weeks prior. I thought that since I was now a full-time freelancer this might be the last time I would ever do this kind of thing. That was until this past July, when I got an email from the FilmLight team asking me to present at IBC in Amsterdam. I was ecstatic.

Preparing for IBC was similar enough as far as my demo went, but I was definitely more nervous than I was at NAB. I think there were two reasons. First, I was presenting in front of many different people in an international setting. Even though I am from the melting pot of NYC, it is a different and interesting feeling being surrounded by so many different nationalities all day long, and pretty much being the minority. On a personal note, I loved it. My wife and I love traveling, and to us this was an exciting chance to be around people from other cultures. On a business level, I guess I was a little afraid that my fast-talking New Yorker side would lose some people, and I didn’t want that to happen.

The second thing was that this was the first time that I was presenting strictly for FilmLight and not Avid. I have been an Avid guy for over 15 years. It’s my home, it’s my most comfortable system, and I feel like I know it inside and out. I discovered Baselight in 2012, so to be presenting in front of FilmLight people, who might have been using their systems for much longer, was a little intimidating.

When I walked into the room, they had set up a full-on production, with spotlights, three cameras, a projector… the nerves rushed once again. The demo was standing room only. Sometimes when you are doing presentations, time seems to fly by, so I am not sure I remember every minute of the 50-minute presentation, but I do remember at one point within the first few minutes my voice actually trembled, which internally I thought was funny, because I do not tend to get nervous. So instead of fighting it, I actually just said out loud, “Sorry guys, I’m a little nervous here,” then took a deep breath, gathered myself and fell right into my routine.

I spent the rest of the day watching the other FilmLight demos and running around the convention again saying hello to some new vendors and goodbye to those I had already seen, as Sunday was my last day at the show.

That night I got to hang out with the entire FilmLight staff for dinner and some drinks. These guys are hilarious; what a great tight-knit family vibe they have. At one point they even started to label each other: the uncle, the crazy brother, the funny cousin. I can’t thank them enough for being so kind and welcoming. I kind of felt like a part of the family for a few days, and it was tremendously enjoyable and appreciated.

Overall, IBC felt similar enough to NAB, but with a nice international twist. I definitely got lost more since the layout is much more confusing than NAB’s. There are 14 halls!

I will say that the “relaxing areas” at IBC are much better than NAB’s! There is a sandy beach to sit on, a beautiful canal to sit by while having a Heineken (of course) and the food trucks were much, much better.

I do hope I get to come back one day!


Mike Nuget (known to most as just “Nuget”) is a NYC-based colorist and finishing editor. He recently decided to branch out on his own and become a freelancer after 13 years with Technicolor-Postworks. He has honed a skill set across multiple platforms, including FilmLight’s Baselight, Blackmagic’s Resolve, Avid and more. 


IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I’ve been traveling to IBC, I’ve tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is about leveraging the power and flexibility of cloud computing. But those revolutions haven’t fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year, though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merge between telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film, as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering by the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and their new HDSRP rendering allows for incredible realism, even when being rendered fast enough for 60+ frames per second. In order to create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, where less detailed models were used for games than were used for film CGI shots, to preserve realtime performance. So, the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.

NCam

Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using their system based on the Unreal game engine. The NCam system provides realtime tracking data to specify the position and angle of a freely moving physical camera. This data was being fed to an Unreal game engine, which was then adding in animated digital objects. They were also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was then being passed back to Unreal to be used as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the real-world scene.

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard-wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
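
As a toy illustration of that principle (my own sketch, not tied to any product mentioned here), the few lines of Python below start with a model that knows nothing and repeatedly nudge a single weight whenever its output disagrees with the training data:

```python
# The program is not hard-wired: it compares its output to known data
# and adjusts itself whenever the output is wrong.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and correct outputs
weight, rate = 0.0, 0.1                      # start knowing nothing

for _ in range(50):                          # pass over the data repeatedly
    for x, target in data:
        error = target - weight * x          # how wrong was the output?
        weight += rate * error * x           # nudge the model toward correct

print(round(weight, 3))  # converges to ~2.0, the rule hidden in the data
```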

Real-world applications are already prevalent and, in our industry, are largely related to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automated editing highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd. This finds the highlight moment. Another algorithm rewinds from that to find the logical beginning of that moment, such as the pass forward, the beginning of the volley etc. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlight package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
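
To make the shape of that process concrete, here is a sketch of the pipeline in Python. Every function, threshold and number below is a hypothetical stand-in of mine, not IBM Watson's API; it just mirrors the cheer-detection, rewind and ranking steps described above:

```python
def find_crowd_cheers(loudness, threshold=0.8):
    """Return timestamps (seconds) where crowd loudness spikes."""
    return [t for t, level in enumerate(loudness) if level > threshold]

def rewind_to_play_start(cheer_at, lead_in=8):
    """Back up from the cheer to the logical start of the play."""
    return max(0, cheer_at - lead_in)

def build_highlights(loudness, score_changes, max_clips=3):
    clips = []
    for t in find_crowd_cheers(loudness):
        start = rewind_to_play_start(t)
        pivotal = t in score_changes          # did the score change here?
        clips.append((start, t, pivotal))
    # Score-changing moments first, then the rest, up to max_clips.
    clips.sort(key=lambda c: c[2], reverse=True)
    return clips[:max_clips]

# One "minute" of fake crowd loudness, with cheers at 20s and 45s.
loudness = [0.2] * 60
loudness[20], loudness[45] = 0.9, 0.95
print(build_highlights(loudness, score_changes={45}))
# -> [(37, 45, True), (12, 20, False)]
```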

BBC R&D was talking about their trials to have DL systems control cameras at sporting events, as they could be trained to follow the “two thirds” framing rule and to spot moments of excitement that justified close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and need Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, but not necessarily, the only defence for mere humans to keep a hand in.

That said, at IBC2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system is tasked to emulate media people, surely it would create digital alcohol and spend hours talking nonsense, rather than try to take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.


Quantum upgrades Xcellis scale-out storage with StorNext 6.2, NVMe tech

Quantum has made enhancements to its Xcellis scale-out storage appliance portfolio with an upgrade to StorNext 6.2 and the introduction of NVMe storage. StorNext 6.2 bolsters performance for 4K and 8K video while enhancing integration with cloud-based workflows and global collaborative environments. NVMe storage significantly accelerates ingest and other aspects of media workflows.

Quantum’s Xcellis scale-out appliances provide high performance for increasingly demanding applications and higher resolution content. Adding NVMe storage to the Xcellis appliances offers ultra-fast performance: 22 GB/s single-client, uncached streaming bandwidth. Excelero’s NVMesh technology in combination with StorNext ensures all data is accessible by multiple clients in a global namespace, making it easy to access and cost-effective to share Flash-based resources.

Xcellis provides cross-protocol locking for shared access across SAN, NFS and SMB, helping users share content across both Fibre Channel and Ethernet.

With StorNext 6.2, Quantum now offers an S3 interface to Xcellis appliances, allowing them to serve as targets for applications designed to write to RESTful interfaces. This allows pros to use Xcellis as either a gateway to the cloud or as an S3 target for web-based applications.
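
Because the interface is S3-compatible, any standard S3 client should be able to treat an Xcellis system as a target. As a sketch, the snippet below uses the stock boto3 client; the endpoint URL, bucket name and credentials are placeholders of mine, not Quantum values:

```python
import boto3

# Point a standard S3 client at the appliance's RESTful endpoint
# (hypothetical URL and credentials, for illustration only).
s3 = boto3.client(
    "s3",
    endpoint_url="https://xcellis.example.local:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write a finished render into the appliance exactly as you would to S3.
s3.put_object(Bucket="dailies", Key="ep101/reel1.mov",
              Body=open("reel1.mov", "rb"))
```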

Xcellis environments can now be managed with a new cloud monitoring tool that enables Quantum’s support team to monitor critical customer environmental factors, speed time to resolution and ultimately increase uptime. When combined with Xcellis Web Services — a suite of services that lets users set policies and adjust system configuration — overall system management is streamlined.

Available with StorNext 6.2, enhanced FlexSync replication capabilities enable users to create local or remote replicas of multitier file system content and metadata. With the ability to protect data for both high-performance systems and massive archives, users now have more flexibility to protect a single directory or an entire file system.

StorNext 6.2 lets administrators provide defined and enforceable quotas and implement quality of service levels for specific users, and it simplifies reporting of used storage capacity. These new features make it easier for administrators to manage large-scale media archives efficiently.

The new S3 interface and NVMe storage option are available today. The other StorNext features and capabilities will be available by December 2018.

mLogic at IBC with four new storage solutions

mLogic will be at partner booths during IBC showing four new products: the mSpeed Pro, mRack Pro, mShare MDC and mTape SAS.

The mLogic mSpeed Pro (pictured) is a 10-drive RAID system with integrated LTO tape. This hybrid storage solution provides high-speed hard drive access to media for coloring, editing and VFX, while also providing an extended, long-term archive of content to LTO tape, which promises more than 30 years of media preservation.

mSpeed Pro supports multiple RAID levels, including RAID-6 for the ultimate in fault tolerance. It connects to any Linux, macOS, or Windows computer via a fast 40Gb/second Thunderbolt 3 port. The unit ships with the mLogic Linear Tape File System (LTFS) Utility, a simple drag-and-drop application that transfers media from the RAID to the LTO.

The mLogic mSpeed Pro will be available in 60TB, 80TB and 100TB capacities with an LTO-7 or LTO-8 tape drive. Pricing starts at $8,999.

The mRack Pro is a 2U rack-mountable archiving solution that features full-height LTO-8 drives and Thunderbolt 3 connectivity. Full-height (FH) LTO-8 drives offer numerous benefits over their half-height counterparts, including:
– Having larger motors that move media faster
– Working more optimally in LTFS (Linear Tape File System) environments
– Providing increased mechanical reliability
– Being a better choice for high-duty cycle workloads
– Having a lower operating temperature

The mRack Pro is available with one or two LTO-8 FH drives. Pricing starts at $7,999.

mLogic’s mShare is a metadata controller (MDC) with PCIe switch and embedded Storage Area Network (SAN) software, all integrated in a single compact rack-mount enclosure. Designed to work with mLogic’s mSAN Thunderbolt 3 RAID, the unit can be configured with Apple Xsan or Tiger Technology Tiger Store software. With mShare and mSAN, collaborative workgroups can be configured over Thunderbolt at a fraction of the cost of traditional SAN solutions. Pricing TBD.

Designed for archiving media in Linux and Windows environments, mTape SAS is a desktop LTO-7 or LTO-8 drive that ships bundled with a high-speed SAS PCIe adapter to install in host computers. The mTape SAS can also be bundled with Xendata Workstation 6 archiving software for Windows. Pricing starts at $3,399.


Winners: IBC2017 Impact Awards

postPerspective has announced the winners of our postPerspective Impact Awards from IBC2017. All winning products reflect the latest version of the product, as shown at IBC.

The postPerspective Impact Award winners from IBC2017 are:

• Adobe for Creative Cloud
• Avid for Avid Nexis Pro
• Colorfront for Transkoder 2017
• Sony Electronics for Venice CineAlta camera

Seeking to recognize debut products and key upgrades with real-world applications, the postPerspective Impact Awards are determined by an anonymous judging body made up of industry pros. The awards honor innovative products and technologies for the post production and production industries that will influence the way people work.

“All four of these technologies are very worthy recipients of our first postPerspective Impact Awards from IBC,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category. You’ll notice that our awards from IBC span the entire pro pipeline, from acquisition to on-set dailies to editing/compositing to storage.

“As IBC falls later in the year, we are able to see where companies are driving refinements to really elevate workflow and enhance production. So we’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We’re very proud of that fact, and it makes our awards quite special.”

IBC2017 took place September 15-19 in Amsterdam. postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at the 2018 NAB Show.

Xytech launches MediaPulse Managed Cloud at IBC

Facility management software provider Xytech has introduced a cloud and managed services offering, MediaPulse Managed Cloud. Hosted in Microsoft Azure, MediaPulse Managed Cloud is a secure platform offering full system management.

MediaPulse Managed Cloud is available through any web browser and compatible with iOS, Android and Windows mobile devices. The new managed services handle most administrative functions, including daily backups, user permissions and screen layouts. The offering is available with several options, including a variety of language packs, allowing for customization and localization.

Slated for shipping in October, MediaPulse Managed Cloud is compliant with European privacy laws and enables secure data transmission across multiple geographies.

Xytech debuted MediaPulse Managed Cloud at IBC2017. The show was the company’s first as a member of the Advanced Media Workflow Association, a community-driven forum focused on advancing business-driven solutions for networked media workflows.

Blackmagic’s new Ultimatte 12 keyer with one-touch keying

Building on the 40-year heritage of its Ultimatte keyer, Blackmagic Design has introduced the Ultimatte 12 realtime hardware compositing processor for broadcast-quality keying, adding augmented reality elements into shots, working with virtual sets and more. The Ultimatte 12 features new algorithms and color science, enhanced edge handling, greater color separation and color fidelity and better spill suppression.

The 12G-SDI design gives Ultimatte 12 users the flexibility to work in HD and switch to Ultra HD when they are ready. Sub-pixel processing is said to boost image quality and textures in both HD and Ultra HD. The Ultimatte 12 is also compatible with most SD, HD and Ultra HD equipment, so it can be used with existing cameras.

With Ultimatte 12, users can create lifelike composites and place talent into any scene, working with both fixed cameras and static backgrounds or automated virtual set systems. It also enables on-set previs in television and film production, letting actors and directors see the virtual sets they’re interacting with while shooting against a green screen.

Here are a few more Ultimatte 12 features:

  • For augmented reality, on-air talent typically interacts with glass-like computer-generated charts, graphs, displays and other objects with colored translucency. Adding tinted, translucent objects is very difficult with a traditional keyer, and the results don’t look realistic. Ultimatte 12 addresses this with a new “realistic” layer compositing mode that can add tinted objects on top of the foreground image and key them correctly.
  • One-touch keying technology analyzes a scene and automatically sets more than 100 parameters, simplifying keying as long as the scene is well-lit and the cameras are properly white-balanced. With one-touch keying, operators can pull a key accurately and with minimum effort, freeing them to focus on the program with fewer distractions.
  • Ultimatte 12’s new image processing algorithms, large internal color space and automatic internal matte generation let users work on different parts of the image separately with a single keyer.
  • For color handling, Ultimatte 12 has new flare, edge and transition processing to remove backgrounds without affecting other colors. The improved flare algorithms can remove green tinting and spill from any object — even dark shadow areas or through transparent objects.
  • Ultimatte 12 is controlled via Ultimatte Smart Remote 4, a touch-screen remote device that connects via Ethernet. Up to eight Ultimatte 12 units can be daisy-chained together and connected to the same Smart Remote, with physical buttons for switching and controlling any attached Ultimatte 12.

Ultimatte 12 is now available from Blackmagic Design resellers.

Adobe intros updates to Creative Cloud, including Team Projects

Later this year, Adobe will be offering new capabilities within its Adobe Creative Cloud video tools and services. This includes updates for VR/360, animation, motion graphics, editing, collaboration and Adobe Stock. Many of these features are powered by Adobe Sensei, the company’s artificial intelligence and machine learning framework. Adobe will preview these advancements at IBC.

The new capabilities coming later this year to Adobe Creative Cloud for video include:
• Access to motion graphics templates in Adobe Stock and through Creative Cloud Libraries, as well as usability improvements to the Essential Graphics panel in Premiere Pro, including responsive design options for preserving spatial and temporal relationships.
• Character Animator 1.0 with changes to core and custom animation functions, such as pose-to-pose blending, new physics behaviors and visual puppet controls. Adobe Sensei will help improve lip-sync capability by accurately matching mouth shape with spoken sounds.
• Virtual reality video creation with a dedicated viewing environment in Premiere Pro. Editors can experience the deeply engaging qualities of content, review their timeline and use keyboard-driven editing for trimming and markers while wearing the same VR head-mounts as their audience. In addition, audio can be determined by orientation or position and exported as ambisonics audio for VR-enabled platforms such as YouTube and Facebook. VR effects and transitions are now native and accelerated via the Mercury playback engine.
• Improved collaborative workflows with Team Projects on the Local Area Network with managed access features that allow users to lock bins and provide read-only access to others. Formerly in beta, the release of Team Projects will offer smoother workflows hosted in Creative Cloud and the ability to more easily manage versions with auto-save history.
• Flexible session organization, multi-take workflows and continuous playback while editing in Adobe Audition. Powered by Adobe Sensei, auto-ducking has been added to the Essential Sound panel, automatically adjusting levels by type: dialogue, background sound or music.

Integration with Adobe Stock
Adobe Stock is now offering over 90 million assets, including photos, illustrations and vectors. Customers now have access to over 4 million HD and 4K video assets directly within their Creative Cloud video workflows and can search and scrub assets in Premiere Pro.

Also coming later this year are hundreds of professionally created motion graphics templates for Adobe Stock. Additionally, motion graphics artists will be able to sell their own templates for Premiere Pro through Adobe Stock. Earlier this year, Adobe added editorial and premium collections from Reuters, USA Today Sports, Stocksy and 500px.

LumaForge offering support for shared projects in Adobe Premiere

LumaForge, which designs and sells high-performance servers and shared storage appliances for video workflows, will be at IBC this year showing full support for new collaboration features in Adobe Premiere Pro CC. When combined with LumaForge’s Jellyfish or ShareStation post production servers, the new Adobe features — including multiple open projects and project locking — allow production groups and video editors to work more effectively with shared projects and assets. This is something that feature film and TV editors have been asking for from Adobe.

Project locking allows multiple users to work with the same content. In a narrative workflow, an editing team can divide their film into shared projects per reel or scene. An assistant editor can get to work synchronizing and logging one scene, while the editor begins assembling another. Once the assistant editor is finished with their scene, the editor can refresh their copy of the scene’s Shared Project and immediately see the changes.

An added benefit of using Shared Projects on productions with large amounts of footage is the significantly reduced load time of master projects. When a master project is broken into multiple shared project bins, footage from those shared projects is only loaded once that shared project is opened.

“Adobe Premiere Pro facilitates a broad range of editorial collaboration scenarios,” says Sue Skidmore, partner relations for Adobe Professional Video. “The LumaForge Jellyfish shared storage solution complements and supports them well.”

All LumaForge Jellyfish and LumaForge ShareStation servers will support the Premiere Pro CC collaboration features for both Mac OS and Windows users, connecting over 10Gb Ethernet.

Check out their video on the collaboration here.

Chatting up IBC’s Michael Crimp about this year’s show

Every year, many from our industry head to Amsterdam for the International Broadcasting Convention. With IBC’s start date coming fast, what better time for the organization’s CEO, Michael Crimp, to answer questions about the show, which runs from September 15-19.

IBC is celebrating its 50th anniversary this year. How will you celebrate?
In addition to producing a commemorative book, and our annual party, IBC is starting a new charitable venture, supporting an Amsterdam group that provides support through sport for disadvantaged and disabled children. If you want to play against former Ajax players in our Saturday night match, bid now to join the IBC All-Stars.

It’s also about keeping the conversation going. We are 50 years on and have a huge amount to talk about — from Ultra HD to 5G connectivity, from IP to cyber security.

How has IBC evolved over the past 10 years?
The simple answer is that IBC has evolved along with the industry, or rather IBC has strived to identify the key trends which will transform the industry and ensure that we are ahead of the curve.

Looking back 10 years, digital cinema was still a work in progress: the total transition we have now seen was just beginning. We had dedicated areas focused on mobile video and digital signage, things that we take for granted today. You can see the equivalents in IBC2017, like the IP Showcase and all the work done on interoperability.

Five years ago we started our Leaders’ Summit, the behind-closed-doors conference for CEOs from the top broadcasters and media organizations, and it has proved hugely successful. This year we are adding two more similar, invitation-only events, this time aimed at CTOs. We have a day focusing on cyber security and another looking at the potential for 5G.

We are also trying a new business matchmaking venue this year, the IBC Startup Forum. Working with Media Honeypot, we are aiming to bring startups and scale-ups together with the media companies that might want to use their talents and the investors who might back the deals.

Will IBC and annual trade shows still be relevant in another 50 years?
Yes, I firmly believe they will. Of course, you will be able to research basic information online — and you can do that now. We have added to the online resources available with our IBC365 year-round online presence. But it is much harder to exchange opinions and experiences that way. Human nature dictates that we learn best from direct contact, from friendly discussions, from chance conversations. You cannot do that online. It is why we regard the opportunity to meet old friends and new peers as one of the key parts of the IBC experience.

What are some of the most important decisions you face in your job on a daily basis?
IBC is an interesting business to head. In some ways, of course, my job as CEO is the same as the head of any other company: making sure the staff are all pulling in the same direction, the customers are happy and the finances are secure. But IBC is unlike any other business because our focus is on spreading and sharing knowledge, and because our shareholders are our customers. IBC is organized by the industry for the industry, and at the top of our organization is the Partnership Board, which contains representatives of the six leading professional and trade bodies in the industry: IABM, IEEE, IET, RTS, SCTE and SMPTE.

Can you talk a bit about the conference?
One significant development from that first IBC 50 years ago is the nature of the conference. The founders were insistent that an exhibition needed a technical conference, and in 1967 it was based solely on papers outlining the latest research.

Today, the technical papers program still forms the centerpiece of the conference. But today our conference is much broader, speaking to the creative and commercial people in our community as well as the engineering and operational.

This year’s conference is subtitled “Truth, Trust and Transformation,” and has five tracks running over five days. Session topics range from the deeply technical, like new codec design, to fake news and alternative facts. Speakers range from Alberto Duenas, the principal video architect at chipmaker ARM, to Dan Danker, the product director at Facebook.

How are the attendees and companies participating in IBC changing?
The industry is so much broader than it once was. Consumers used to watch television, because that was all that the technology could achieve. Today, they expect to choose what they want to watch, when and where they want to watch it, and on the device and platform which happen to be convenient at the time.

As the industry expands, so does the IBC community. This year, for example, we have the biggest temporary structure we have ever built for an IBC, to house Hall 14, dedicated to content everywhere.

Given that international travel can be painful, what should those outside the EU consider?
Amsterdam is, in truth, a very easy place for visitors in any part of the world to reach. Its airport is a global hub. The EU maintains an open attitude and a practical approach to visas when required, so there should be no barriers to anyone wanting to visit IBC.

The IBC Innovation Awards are always a draw. Can you comment on the calibre of entries this year?
When we decided to add the IBC Innovation Awards to our program, our aim was to reflect the real nature of the industry. We wanted to reward the real-world projects, where users and technology partners got together to tackle a real challenge and come up with a solution that was much more than the sum of its parts.

Our finalists range from a small French-language service based in Canada to Google Earth; from a new approach to transmitters in the USA to an online service in India; and from Asia’s biggest broadcaster to the Spanish national railway company.

The Awards Ceremony on Sunday night is always one of my highlights. This year there is a special guest presenter: the academic and broadcaster Dr. Helen Czerski. The show lasts about an hour and is free to all IBC visitors.

What are the latest developments in adding capacity at IBC?
There is always talk of the need to move to another venue, and of course as a responsible business we keep this continually under review. But where would we move to? There is nowhere that offers the same combination of exhibition space, conference facilities and catering and networking under one roof. There is nowhere that can provide the range of hotels at all prices that Amsterdam offers, nor its friendly and welcoming atmosphere.

Talking of hotels, visitors this year may notice a large building site between Hall 12 and the station. This will be a large on-site hotel, scheduled to be open in time for IBC in 2019.

And regulars who have resigned themselves to walking around the hoardings covering up the now not-so-new underground station will be pleased to hear that the North-South metro line is due to open in July 2018. Test trains are already running, and visitors to IBC next year will be able to speed from the centre of the city in under 10 minutes.

As you mentioned earlier, the theme for IBC2017 is “Truth, Trust and Transformation.” What is the rationale behind this?
Everyone has noticed that the terms “fake news” and “alternative facts” are ubiquitous these days. Broadcasters have traditionally been the trusted brand for news: is the era of social media and universal Internet access changing that?

It is a critical topic to debate at IBC, because the industry’s response to it is central to its future, commercially, as well as technically. Providing true, accurate and honest access to news (and related genres like sport) is expensive and demanding. How do we address this key issue? Also, one of the challenges of the transition to IP connectivity is the risk that the media industry will become a major target for malware and hackers. As the transport platform becomes more open, the more we need to focus on cyber security and the intrinsic design of safe, secure systems.

OTT and social media delivery is sometimes seen as “disruptive,” but I think that “transformative” is the better word. It brings new challenges for creativity and business, and it is right that IBC looks at them.

Will VR and AR be addressed at this year’s conference?
Yes, in the Future Zone, and no doubt on the show floor. Technologies in this area are tumbling out, but the business and creative case seems to be lagging behind. We know what VR can do, but how can we tell stories with it? How can we monetize it? IBC can bring all the sides of the industry together to dig into all the issues. And not just in debate, but by seeing and experiencing the state of the art.

Cyber security and security breaches are becoming more frequent. How will IBC address these challenges?
Cyber security is such a critical issue that we have devoted a day to it in our new C-Tech Forum. Beyond that, we have an important session on cyber security on Friday in the main conference with experts from around the world and around the industry debating what can and should be done to protect content and operations.

Incidentally, we are also looking at artificial intelligence and machine learning, with conference sessions in both the technology and business transformation strands.

What is the Platform Futures — Sport conference aiming to address?
Platform Futures is one of the strands running through the conference. It looks at how the latest delivery and engagement technologies are opening new opportunities for the presentation of content.

Sport has always been a major driver – perhaps the major driver – of innovation in television and media. For many years now we have had a sport day as part of the conference. This year, we are dedicating the Platform Futures strand to sport on Sunday.

The stream looks at how new technology is pushing boundaries for live sports coverage; the increasing importance of fan engagement; and the phenomenon of “alternative sports formats” like Twenty20 cricket and Rugby 7s, which provide lucrative alternatives to traditional competitions. It will also examine the unprecedented growth of eSports, and the exponential opportunities for broadcasters in a market that is now pushing towards the half-billion-dollar size.


IBC 2016: VR and 8K will drive M&E storage demand

By Tom Coughlin

While attending the 2016 IBC show, I noticed some interesting trends, cool demos and new offerings. For example, while flying drones were missing, VR goggles were everywhere; IBM was showing 8K video editing using flash memory and magnetic tape; the IBC itself featured a fully IP-based video studio showing the path to future media production using lower-cost commodity hardware with software management; and it became clear that digital technology is driving new entertainment experiences and will dictate the next generation of content distribution, including the growing trend to OTT channels.

In general, IBC 2016 featured the move to higher resolution and more immersive content. On display throughout the show was 360-degree video for virtual reality, as well as 4K and 8K workflows. Virtual reality and 8K are driving new levels of performance and storage demand, and these are just some of the ways that media and entertainment pros are increasing the size of video files. Nokia’s Ozo was just one of several multi-camera content capture devices on display for 360-degree video.

Besides multi-camera capture technology and VR editing, the Future Tech Zone at IBC included even larger 360-degree video display spheres than at the 2015 event. These were from Puffer Fish (pictured right). The smaller-sized spherical display was touch-sensitive so you could move your hand across the surface and cause the display to move (sadly, I didn’t get to try the big sphere).

IBM had a demonstration of a 4K/8K video editing workflow using the IBM FlashSystem and IBM Enterprise tape storage technology, which was a collaboration between the IBM Tokyo Laboratory and IBM’s Storage Systems division. This work was done to support the move to 4K/8K satellite broadcasts in Japan by 2018 and the delivery of 8K video streams of the 2020 Tokyo Olympic Games. The combination of flash memory storage for working content and tape for inactive content is referred to as FLAPE (flash and tAPE).

The graphic below shows a schematic of the 8K video workflow demonstration.

The argument for FLAPE appears to be that flash performance is needed for editing 8K content, while magnetic tape provides low-cost storage for the 8K content, which may require more than 18TB for an hour of raw content (depending upon the sampling and frame rate). Note that magnetic tape is often used for archiving of video content, so this is a rather unusual application. The IBM demonstration, plus discussions with media and entertainment professionals at IBC, indicate that with the declining costs of flash memory and the performance demands of 8K, 8K workflows may finally drive increased demand for flash memory in post production.
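
That 18TB figure is easy to sanity-check. Assuming one plausible sampling choice (uncompressed 8K at 10-bit 4:2:2 and 60fps; other choices scale the result up or down), the arithmetic works out like this:

```python
# Back-of-envelope math for the "18TB per hour" figure, under assumed
# (hypothetical but plausible) sampling values.
width, height = 7680, 4320            # 8K UHD frame size
bits_per_pixel = 20                   # 10-bit 4:2:2 = 10 + 5 + 5 bits
fps = 60

bytes_per_frame = width * height * bits_per_pixel / 8
tb_per_hour = bytes_per_frame * fps * 3600 / 1e12
print(f"{tb_per_hour:.1f} TB/hour")   # ~17.9 TB, roughly the 18TB cited
```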

Avid was promoting their Nexis file system, the successor to ISIS. The company uses SSDs for metadata, but generally flash isn’t used for actual editing yet. They agreed that as flash costs drop, flash could find a role for higher resolution and richer media. Avid has embraced open source for their code and provides free APIs for their storage. The company sees a hybrid of on-site and cloud storage for many media and entertainment applications.

EditShare announced a significant update to its XStream EFS Shared Storage Platform (our main image). The update provides non-disruptive scaling to over 5PB with millions of assets in a single namespace. The system provides a distributed file system with multiple levels of hardware redundancy and reduced downtime. An EFS cluster can be configured with a mix of capacity and performance, with SSDs for high-data-rate content and SATA HDDs for cost-efficient, high-capacity storage — 8TB HDDs have been qualified for the system. The latest release expands optimization support for file-per-frame media.

The IBC IP Interoperability Zone, produced with the cooperation of AIMS and the IABM, was showing a complete IP-based studio (pictured right). The zone brings to life the work of the JT-NM (the Joint Task Force on Networked Media, a combined initiative of AMWA, EBU, SMPTE and VSF) and the AES on a common roadmap for IP interoperability. Central to the IBC Feature Area was a live production studio, based on the technologies of the JT-NM roadmap, that Belgian broadcaster VRT has been using daily on-air all this summer as part of the LiveIP Project, a collaboration between VRT, the European Broadcasting Union (EBU) and LiveIP’s 12 technology partners.

Summing Up
IBC 2016 showed some clear trends to more immersive, richer content with the numerous displays of 360-degree and VR content and many demonstrations of 4K and even 8K workflows. Clearly, the trend is for higher-capacity, higher-performance workflows and storage systems that support this workflow. This is leading to a gradual move to use flash memory to support these workflows as the costs for flash go down. At the same time, the move to IP-based equipment will lead to lower-cost commodity hardware with software control.

Storage analyst Tom Coughlin is president of Coughlin Associates. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide. He also publishes the Digital Storage Technology Newsletter and the Digital Storage in Media and Entertainment Report.

My first trip to IBC

By Sophia Kyriacou

When I was asked by the team at Maxon to present my work at their IBC stand this year, I jumped at the chance. I’m a London-based working professional with 20 years of experience as a designer and 3D artist, but I had never been to an IBC. My first impression of the RAI convention center in Amsterdam was that it’s super huge and easy to get lost in for days. But once I found the halls relevant to my interests, the creative and technical buzz hit me like heat in the face when disembarking from a plane in a hot humid summer. It was immediate, and it felt so good!

The sounds and lights were intense. I was surrounded by booths with basslines of audio vibrating against the floor, changing as you walked along. It was a great atmosphere; so warm and friendly.

My first Maxon presentation was on day two of IBC — it was a show-and-tell of three award-winning and nominated sequences I created for the BBC in London and one for Noon Visual Creatives. As a Cinema 4D user, it was great to see the audience at the stand captivated by my work, and knowing it was streamed live to a large global audience made it even more exciting.

The great thing about IBC is that it’s not only about companies shouting about their new toys. I also saw how it brings passionate pros from all over the world together — people you would never meet in your usual day-to-day work life. I met people from all over the globe and made new friends. Everyone appeared to share the same or similar experience, which was wonderful.

The great thing about having the first presentation of the day at Maxon was that I could take a breather and look around the show. I also sat in on a Dell Precision/Radeon Technologies roundtable event one afternoon. That was a really interesting meeting. We were a group of pros from varied disciplines within the industry. It was great to talk about what hardware works, what doesn’t work and how it could all get better. I don’t work in a realtime area, but I do know what I would like to see as someone who works in 3D. It was incredibly interesting, and everyone was so welcoming. I thoroughly enjoyed it.

Sunday evening, I went over to the SuperMeet — such an energetic and friendly vibe. The stage demos were very interesting. I was particularly taken with the fayIN tracker plug-in for Adobe After Effects. It appears to be a very effective tool, and I will certainly look into purchasing it. The new Adobe Premiere features look fantastic as well.

Everything about my time at IBC was so enjoyable. I went back to London buzzing, and am already looking forward to next year’s IBC show.

Sophia Kyriacou is a London-based broadcast designer and 3D artist who splits her time working as a freelancer and for the BBC.

IBC: Surrounded by sound

By Simon Ray

I came to the 2016 IBC Show in Amsterdam at the start of a period of consolidation at Goldcrest in London. We had just gone through three years of expansion, upgrading, building and installing. Our flagship Dolby Atmos sound mixing theatre finished its first feature, Jason Bourne, and the DI department recently upgraded to offer 4K and HDR.

I didn’t have a particular area to research at the show, but there were two things that struck me almost immediately on arrival: the lack of drones and the abundance of VR headsets.

Goldcrest’s Atmos mixing stage.

360 audio is an area I knew a little about, and we did provide a binaural DTS Headphone X mix at the end of Jason Bourne, but there was so much more to learn.

Happily, my first IBC meeting was with Fraunhofer, where I was updated on some of the developments they have made in production, delivery and playback of immersive and 360 sound. Of particular interest was their Cingo technology. This is a playback solution that lives in devices such as phones and tablets and can already be found in products from Google, Samsung and LG. This technology renders 3D audio content onto headphones and can incorporate head movements. That means a binaural render that gives spatial information to make the sound appear to be originating outside the head rather than inside, as can be the case when listening to traditionally mixed stereo material.

For feature films, for example, this might mean taking the 5.1 home theatrical mix and rendering it into a binaural signal to be played back on headphones, giving the listener the experience of always sitting in the sweet spot of a surround sound speaker set-up. Cingo can also support content with a height component, such as 9.1 and 11.1 formats, and add that into the headphone stream as well to make it truly 3D. I had a great demo of this and it worked very well.

I was impressed that Fraunhofer had also created a tool for creating immersive content: a plug-in called Cingo Composer, available in both VST and AAX formats, so it runs in Pro Tools, Nuendo and other DAWs to aid the creation of 3D content. For example, content can be mixed and automated in an immersive soundscape and then rendered into an FOA (First Order Ambisonics, or B-format) 4-channel file that can accompany 360-degree video on VR headsets with headtracking.
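
As a rough illustration of what an FOA file contains, here is a minimal sketch of the standard first-order ambisonic panning equations: a mono source at a given azimuth and elevation encoded into the four B-format channels (W, X, Y, Z). This is textbook ambisonics with FuMa weighting, not Fraunhofer's actual Cingo Composer code, and the function name is my own.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order ambisonics (B-format, FuMa weighting).

    W is the omnidirectional pressure signal; X, Y and Z are the
    front/back, left/right and up/down figure-of-eight components.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))     # omni, attenuated per FuMa convention
    x = mono * np.cos(az) * np.cos(el)  # front-back
    y = mono * np.sin(az) * np.cos(el)  # left-right
    z = mono * np.sin(el)               # up-down
    return np.stack([w, x, y, z])

# One second of a 440 Hz tone placed 90 degrees to the left, at ear level.
t = np.linspace(0, 1, 48000, endpoint=False)
b_format = encode_foa(np.sin(2 * np.pi * 440 * t), azimuth_deg=90, elevation_deg=0)
print(b_format.shape)  # (4, 48000): the 4-channel FOA file mentioned above
```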

After Fraunhofer, I went straight to DTS to catch up with what they were doing. We had recently completed some immersive DTS:X theatrical, home theatrical and, as mentioned above, headphone mixes using the DTS tools, so I wanted to see what was new. There were some nice updates to the content creation tools, players and renderers and a great demo of the DTS decoder doing some live binaural decoding and headtracking.

With immersive and 3D audio being the exciting new things, there were other interesting products on display relating to this area. In the Future Zone, Sennheiser was showing their Ambeo VR mic (see picture, right). This is an ambisonic microphone with four capsules arranged in a tetrahedron, which together make up the A-format. They also provide a proprietary A-to-B-format encoder that can run as a VST or AAX plug-in on Mac and Windows to process the outputs of the four microphones into the W, X, Y, Z signals (the B-format).

From the B-Format it is possible to recreate the 3D soundfield, but you can also derive any number of first-order microphones pointing in any direction in post! The demo (with headtracking and 360 video) of a man speaking by the fireplace was recorded just using this mic and was the most convincing of all the binaural demos I saw (heard!).
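
The A-to-B conversion at the heart of this is simple matrix math. Below is a sketch of the classic tetrahedral-array equations; this is the well-known textbook conversion, not Sennheiser's proprietary encoder, which also corrects for capsule spacing and frequency response.

```python
def a_to_b_format(flu, frd, bld, bru):
    """Convert tetrahedral A-format capsule samples to B-format (W, X, Y, Z).

    Capsule naming: front-left-up, front-right-down, back-left-down,
    back-right-up. Works per sample (or elementwise on arrays).
    """
    w = flu + frd + bld + bru  # omnidirectional pressure
    x = flu + frd - bld - bru  # front minus back
    y = flu - frd + bld - bru  # left minus right
    z = flu - frd - bld + bru  # up minus down
    return w, x, y, z

# A signal arriving from the front-left-up direction dominates FLU,
# so it lands positive in X, Y and Z as expected.
print(a_to_b_format(1.0, 0.1, 0.1, 0.1))  # (1.3, 0.9, 0.9, 0.9)
```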

Still in the Future Zone, for creating brand-new content I visited the makers of the Spatial Audio Toolbox, which is similar to Fraunhofer's Cingo Composer. B-Com's Spatial Audio Toolbox contains VST plug-ins (soon to be AAX) that enable you to create an HOA (higher order ambisonics) encoded 3D sound scene from standard mono, stereo or surround sources (using HOA Pan) and then listen to this sound scene on headphones (using Render Spk2Bin).
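
For context on what "higher order" buys you: an ambisonic scene of order N carries (N+1)² channels, so each step up in order adds spatial resolution at the cost of channel count. The formula is standard ambisonics theory, not something taken from B-Com's documentation.

```python
# Channel count for a full-sphere ambisonic scene of order N is (N + 1) ** 2.
# Order 1 is the 4-channel FOA/B-format discussed earlier.
for order in range(1, 5):
    print(f"order {order}: {(order + 1) ** 2} channels")
# order 1: 4 channels (W, X, Y, Z)
# order 2: 9 channels
# order 3: 16 channels
# order 4: 25 channels
```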

The demo we saw at the stand was impressive and included headtracking. The plug-ins themselves were running on a Pyramix on the Merging Technologies stand in Hall 8. It was great to get my hands on some “live” material and play with the 3D panning and hear the effect. It was generally quite effective, particularly in the horizontal plane.

I found all this binaural and VR stuff exciting. I am not sure exactly how, or if, it might fit into a film workflow, but it was a lot of fun playing! The idea of rendering a 3D soundfield into a binaural signal has been around for a long time (I even dedicated months of my final year at university to a project on that very subject), but with mixed success. It is exciting to see that today's mobile devices now contain the processing power to render the binaural signal on the fly. Combine that with VR video and headtracking, and the ability to feed that information into the rendering process, and you have an offering that is very impressive when demonstrated.

I will be interested to see how content creators, specifically in the film area, use this (or don’t). The recreation of the 3D surround sound mix over 2-channel headphones works well, but whether headtracking gets added to this or not remains to be seen. If the sound is matched to video that’s designed for an immersive experience, then it makes sense to track the head movements with the sound. If not, then I think it would be off-putting. Exciting times ahead anyway.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

IBC: Blackmagic buys Fairlight and Ultimatte

Before every major trade show, we at postPerspective play a little game. Who is Blackmagic going to buy this time? Well, we didn’t see this coming, but it’s cool. Ultimatte and Fairlight are now owned by Blackmagic.

Ultimatte makes broadcast-quality, realtime blue- and greenscreen removal hardware that is used in studios to seamlessly composite reporters and talk show hosts into virtual sets.

Ultimatte was founded in 1976 and has won an Emmy for its realtime compositing technology, as well as an Oscar and a Lifetime Achievement Award from the Academy of Motion Picture Arts and Sciences.

“Ultimatte’s realtime blue- and greenscreen compositing solutions have been the standard for 40 years,” says Blackmagic CEO Grant Petty. “Ultimatte has been used by virtually every major broadcast network in the world. We are thrilled to bring Ultimatte and Blackmagic Design together, and are excited about continuing to build innovative products for our customers.”

Fairlight creates professional digital audio products for live broadcast event production, film and television post, as well as immersive 3D audio mixing and finishing. “The exciting part about this acquisition is that it will add incredibly high-end professional audio technology to Blackmagic Design’s video products,” says Petty.

New Products
Teranex AV: A new broadcast-quality standards converter designed specifically for AV professionals. Teranex AV features 12G-SDI and HDMI 2.0 inputs, outputs and loop-through, along with AV-specific features such as low latency, a still store, freeze frame and HiFi audio inputs for professionals working on live, staged presentations and conferences. Teranex AV will be available in September for $1,695 from Blackmagic resellers.

New Video Assist 4K update: A major new update for Blackmagic Video Assist 4K customers that improves DNxHD and DNxHR support and adds false color monitoring, expanded focus options and new screen rotation features. It will be available for download from the Blackmagic website next week, free of charge, for all Blackmagic Video Assist 4K customers.

DeckLink Mini Monitor 4K and Mini Recorder 4K: New DeckLink Mini Monitor 4K and DeckLink Mini Recorder 4K PCIe capture cards that include all the features of the HD DeckLink models but now have Ultra HD and HDR (high dynamic range) features. Both models support all SD, HD and Ultra HD formats up to 2160p30. DeckLink Mini 4K models are available now from Blackmagic resellers for $195 each.

DaVinci Resolve 12.5.2: The latest version of Resolve is available as a free download from Blackmagic’s site. It adds support for additional Ursa Mini camera metadata, color space tags on QuickTime export, Fusion Connect for Linux, advanced filtering options and more.

IBC: Thoughts on Dolby and Nokia

By Zak Tucker

Strolling the halls of IBC in Amsterdam this past week, I found a lot of interesting tools and tech. Here are just a few thoughts about a couple of companies I visited.

Dolby
On Picture: Dolby is presenting its PQ workflow, which enables both HDR and SDR deliverables seamlessly. Recognizing that there will be a real transition period as consumers adopt HDR home viewing environments, Dolby has written algorithms that detect the native specs of each Dolby-enabled monitor, interpret the intent of the PQ-graded color and translate it to that specific display. In demos, the HDR media is noticeably more vibrant, with true-to-life colors represented more accurately than in traditional SDR, and even the SDR that Dolby derives from the HDR master is more vibrant and sharp than a traditionally mastered SDR version.
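
For the curious, PQ is the perceptual quantizer transfer function standardized as SMPTE ST 2084. The sketch below implements just that standardized curve, mapping a code value to absolute luminance; Dolby's display-mapping algorithms that sit on top of it are proprietary and not shown.

```python
# PQ (SMPTE ST 2084) EOTF: normalized 0-1 code value -> luminance in nits.
M1 = 2610 / 16384       # 0.1593017578125
M2 = 2523 / 4096 * 128  # 78.84375
C1 = 3424 / 4096        # 0.8359375
C2 = 2413 / 4096 * 32   # 18.8515625
C3 = 2392 / 4096 * 32   # 18.6875

def pq_eotf(code):
    e = code ** (1 / M2)
    return 10000 * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

# Unlike a fixed gamma, PQ code values map to absolute light levels:
for v in (0.25, 0.5, 0.75, 1.0):
    print(f"code {v:.2f} -> {pq_eotf(v):8.1f} nits")
# code 0.50 lands around 92 nits; code 1.00 is the 10,000-nit ceiling.
```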

On Sound: Dolby is pressing forward with its home immersive sound experience. Through its soundbar and associated subwoofer, Dolby is producing a home Atmos experience that is quite compelling. The system can also work with additional speakers installed by home users, and Dolby's home Atmos dynamically adjusts to various home speaker installations.

Nokia OZO
Nokia has developed and delivered a purpose-built VR camera that records both picture and sound. The form factor, no bigger than a person's head, is clean and small, addressing a common complaint about VR rigs, which tend to be large and overly obtrusive — often an issue with talent, for example, when capturing a live event such as a concert. The camera is capable of north of 4K resolution, and the current stitched deliverable is a 4K, 3D, VR file. The accompanying software can perform both a fast auto-stitch and a higher-quality stitch. The software is also capable of taking a live stream from the VR camera and transmitting it, stitched, to a platform such as YouTube in real time. In the demo, the stitching was quite seamless.

Zak Tucker is president and co-founder of Harbor Picture Company in New York.

Boris FX merges with GenArts

Boris FX, maker of Boris Continuum Complete, has inked a deal to acquire visual effects plug-in developer GenArts, whose high-end plug-in line includes Sapphire. Sapphire has been used in at least one of each year’s VFX Oscar-nominated films since 1996. This acquisition follows the 2015 addition of Imagineer Systems, developer of Academy Award-winning planar tracking tool Mocha. Sapphire will continue to be developed and sold in its current form alongside Boris Continuum Complete (BCC) and Mocha Pro.

“We are excited to announce this strategic merger and welcome the Sapphire team to the Boris FX/Imagineer group,” says owner Boris Yamnitsky. “This acquisition makes Boris FX uniquely positioned to serve editors and effects artists with the industry’s leading tools for motion graphics, broadcast design, visual effects, image restoration, motion tracking and finishing — all under one roof. Sapphire’s suite of creative plug-ins has been used to design many of the last decades’ most memorable film images. Sapphire perfectly complements BCC and mocha as essential tools for professional VFX and we look forward to serving Sapphire’s extremely accomplished users.”

“Equally impressive is the team behind the technology,” continues Yamnitsky. “Key GenArts staff from engineering, sales, marketing and support will join our Boston office to ensure the smoothest transition for customers. Our shared goal is to serve our combined customer base with useful new tools and the highest quality training and technical support.”


EditShare launches Flow Story at IBC, promotes Peter Lambert

At IBC, EditShare is launching its new Flow Story, a professional remote editing application. A module of the EditShare Flow media asset management solution, Flow Story offers advanced proxy editing and roundtrip workflow support with professional editing features and functions courtesy of the Lightworks NLE engine.

Flow Story allows users to work remotely with secure access to on-premises storage and media assets via an Internet connection. Flow Story lets users assemble content, add voiceovers and collaborate with other NLEs for finishing, delivery or playout of packages. Direct access to on-premises storage accelerates content exchange within the safety of a secure network.

Feature Highlights
• Wide Format Support — Flow Story supports hundreds of formats, including ProRes, Avid DNxHD, AVC-Intra and XDCAM, through to 4K and beyond, such as Red R3D, XAVC, Cinema DNG and DPX. As well as working with low-resolution proxy files, users can import and publish many popular formats to the EditShare storage server.
• Voiceover — Simple-to-use VO tools let users finalize packages at their desk or out in the field. Flow Story auto-detects and enables any connected audio input device. Users can upload newly created voiceover files and clips they have created locally.
• Edit While Capture — Flow Story’s Edit While Capture feature allows any format (including Long GOP) to be accessed during recording using EditShare Flow MAM or Geevs Ingest servers. This is ideal for fast turnaround environments such as live events and sports highlights.
• Realtime collaboration — When connected to any EditShare Flow database, Flow Story offers realtime collaboration with other Flow users, such as Flow Browse and AirFlow. Projects, clips, sequences, markers and metadata are all updated and synchronized in realtime.
• NLE Integration — Flow Story supports industry-standard NLEs (and DAWs) such as Avid Media Composer, Adobe Premiere, Blackmagic DaVinci Resolve and Avid Pro Tools. A creative hub, Flow Story facilitates collaboration among editors through AAF, an interchange file format that advances round-trip workflows.
• Work Offline — Flow Story is purpose-built with remote editing in mind. While you only need a regular Internet connection to access your content, even that is not always available. Flow Story can work in a standalone mode, letting users continue working on existing Flow projects on the move; projects are then synchronized over the Internet.
• Advanced realtime effects, including Color, Titles and DVEs — Using the power of the graphics card, all the realtime effects can be played back remotely or locally without the need for rendering or flattening.
• Third-party integration with Audio Network — Browse the selection of Audio Network music directly from within Flow Story. Stream MP3 audio files directly over sequences, add search criteria that best suit requirements, then register or sign in to purchase directly. The full-quality track is then downloaded and available within the project.

In other EditShare news, Peter Lambert has been named worldwide sales director. An industry business development executive with more than 25 years of experience, including his start at the BBC as an audio engineer, Lambert most recently held the director of sales position for EditShare’s APAC region.

“Since coming on board to manage our Asia Pacific regional business, Peter has been instrumental in rebuilding the channel and has been a steady advocate for building up our technical, sales and administrative staff in the region. Peter has brought order and stability to our business in the region, and largely as a result of his efforts we have seen substantial growth and stronger client relations,” says Andy Liebman, CEO, EditShare. Lambert, who will be responsible for the company’s overall sales strategy and reseller partner program, takes up the role effective immediately.

Panasonic and Codex team on VariCam Pure targeting episodic TV, features

At IBC in Amsterdam, Panasonic is showing its new cinema-ready version of the VariCam 35, featuring a jointly developed Codex recorder capable of uncompressed 4K RAW acquisition.

The VariCam Pure is the latest addition to the company’s family of pro cinematography products. A co-production between Panasonic and Codex, it couples the existing VariCam 35 camera head with a new Codex V-RAW 2.0 recorder, suited for episodic television shows and feature films.

The V-RAW 2.0 recorder attaches directly to the back of the VariCam 35 camera head. As a result, the camera retains the same Super 35 sensor, 14+ stops of latitude and dual native 800/5000 ISO as the original VariCam 35.

Panasonic VariCam Pure

“The new VariCam Pure camera system records pure, uncompressed RAW up to 120 fps onto the industry-standard Codex Capture Drive 2.0 media, already widely used by many camera systems, post facilities and studios,” said Panasonic senior product manager Steven Cooperman. “There is significant demand for uncompressed RAW recording in the high-end market. The modular concept of the VariCam has enabled us to meet this demand. We’ve also listened to feedback from cinematographers and camera operators and ensured that the VariCam Pure is rugged, compact and lightweight, weighing just 11 pounds.”

Codex will provide a dailies and archiving workflow through its Production Suite. In addition, the Codex Virtual File System means users can transfer many file formats, including Panasonic V-RAW, Apple ProRes and Avid DNxHR.

Along with the original camera negative, frame-accurate metadata (such as lens and CDL data) can also be captured, streamlining production and post, and delivering time and cost savings.

The V-RAW 2.0 recorder for VariCam Pure is scheduled for release in December 2016 with a suggested list price of $30,000.

Timecode’s :Pulse for multicamera sync and control now available

Timecode Systems, which makes wireless technologies for sharing timecode and metadata, has made its :Pulse multicamera sync and control product available for purchase.

Powered by the company’s robust Blink RF protocol, the :Pulse offers wireless sync and remote device control capability in one product. Used in its simplest form, the :Pulse is a highly accurate timecode, genlock and word clock generator with an integrated RF transceiver to ensure solid synchronization with zero drift between timecode sources.
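
For readers newer to sync, timecode is essentially a frame counter dressed up as a clock, which is why drift between free-running generators matters so much. A minimal sketch of the conversion follows; this is generic non-drop-frame math, nothing specific to the :Pulse.

```python
def frames_to_timecode(frame_count, fps=25):
    """Convert an absolute frame count to HH:MM:SS:FF (non-drop-frame)."""
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = frame_count // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# Two cameras that disagree by even a handful of frames after a day's
# shoot will no longer line up in the edit, hence a shared RF reference
# rather than jam-syncing once and hoping.
print(frames_to_timecode(90_125))  # 01:00:05:00 at 25 fps
```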

As well as being a hub for timecode and metadata exchange, it’s also a center for wirelessly controlling devices on multicamera shoots. With a :Pulse set as the timecode master unit, users can activate the device’s integral Wi-Fi or add a wired connection to the Ethernet port to open the free, multiplatform Blink Hub app on their smartphones, tablets or laptops.

Enabled by the Blink RF protocol, the Blink Hub app allows users to not only centrally monitor and control all Timecode Systems timecode sources on set, but also any compatible camera and audio equipment to which they are connected.

Timecode Systems has already developed a bespoke remote device control solution for Sound Devices 6-Series mixer/recorders and is working on adding to the :Pulse the capability to control GoPro, Arri and Red cameras remotely via the Blink Hub app.

“With the production of the SyncBac Pro, our embedded timecode sync accessory for GoPro cameras, now in full flow, we’re very close to launching remote control of Hero4 Silver and Black cameras,” says CEO Paul Scurrell. “Using either the :Pulse’s Wi-Fi or a wired Ethernet connection into the :Pulse, SyncBac Pro users will be able to connect their GoPro Hero4 Black and Silver cameras to the Blink Hub app. This, among other things, unlocks the capability to put a GoPro to sleep remotely and then start recording again from the app when the action starts again. It’s a great way to save the camera’s battery life when it’s gear-mounted or rigged somewhere inaccessible.”

IBC Report: Making high-resolution panoramic video

By Tom Coughlin

Higher-resolution content is becoming the norm in today's media workflows, but pixel count is not the only element that is changing. In addition to pixel density, the image's bit depth, color gamut, frame rates and even the number of simultaneous streams of video will be important. The 2015 IBC in Amsterdam painted a clear picture of a future that includes UHD 4K and 8K video, as well as virtual reality, as the path to more immersive video and entertainment experiences.

NHK, a pioneer in 8K video hardware and infrastructure development, has given more details on its introduction of this higher-resolution format. It will start test broadcasts of its 8K technology in 2016, followed by significant satellite video transmission in 2018 and widespread deployment in 2020, in time for the Tokyo Olympic Games. The company is looking at using HEVC compression to put a 72Gb/s video stream with 22.2-channel audio into a 100Mb/s delivery channel.
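
It is worth spelling out just how aggressive that is. Using the figures NHK quoted, the raw stream has to shrink by a factor of several hundred:

```python
raw_bitrate = 72e9       # 72 Gb/s uncompressed 8K stream (NHK's figure)
channel_bitrate = 100e6  # 100 Mb/s delivery channel
print(f"required compression: roughly {raw_bitrate / channel_bitrate:.0f}:1")
# required compression: roughly 720:1
```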

In the Technology Zone at the IBC there were displays of virtual reality and 8K video developments (mostly by NHK), as well as multiple-camera set-ups for creating virtual reality video and various ways to use panoramic video. Sphericam 2 is a Kickstarter-funded product that provides 60-frames-per-second 4K video capture for creating VR content. This six-camera device is compact and can be placed on a stick and used like a selfie camera to capture a 360-degree view.

Sphericam 2

At the 2015 Google Developers Conference, GoPro demonstrated a 360-degree camera rig (our main image) using 16 GoPro cameras to capture panoramic video. At the IBC, GoPro displayed a more compact 360 Hero six-camera rig for 3D video capture.

In the Technology Zone, Al Jazeera had an eight-camera rig for 4K video capture (made using a 3D printer) and was using software to create panoramic videos. There are many such videos on YouTube; when viewed on a smartphone, the accelerometer creates a reference point around which the viewer can look at the panoramic scene. The Kolor software actually provides a number of different ways to view the captured content.

Eight-camera rig at the Al Jazeera stand.

Many VR viewing devices use special split-screen displays, or smartphones showing a split-screen image while the phone's accelerometers give the sense of being surrounded by the viewed image (as with Google Cardboard), but there are other ways to create an immersive experience. As mentioned earlier, panoramic videos with a single (or split-screen) view are available on YouTube. There are also spherical display devices where the still or video image can be rotated by moving your hand across the sphere, like the one shown below.

Higher-resolution content is becoming mainstream, with 4K TVs set to be the majority of sets sold within the next few years. 8K video production, pioneered by NHK and others in Japan, could be the next 4K by the start of the next decade, driving even more realistic content capture along with higher-bandwidth, higher-capacity post.

Multi-camera content is also growing in popularity to support virtual reality games and other applications. This growth is enabled by the proliferation of low-cost, high-resolution cameras and sophisticated software that combines the video from these cameras to create a panoramic video and virtual reality experience.

The trends toward higher resolution, combined with a greater color gamut, higher frame rates and greater color depth, will transform video experiences by the next decade, leading to new requirements for storage, networking and processing in video production and display.

Dr. Tom Coughlin, president of Coughlin Associates, has over 35 years in the data storage industry. Coughlin is also the founder and organizer of the annual Storage Visions Conference, a partner to the International Consumer Electronics Show, as well as the Creative Storage Conference.

Mocha now plug-in for NLEs, BCC 10 integrates Mocha 5

The big news from Boris FX/Imagineer at IBC this year was that the soon-to-be-released Mocha Pro planar tracking and roto masking technology will be available as a plug-in for Avid, Adobe and OFX. This brings all of the tools from Mocha Pro to these NLEs — no more workarounds needed. This Mocha Pro 5 plug-in, which will be available in a month, incorporates a new effects panel for integrated keying, grain, sharpening and skin smoothing as well as new Python scripting support and more.

“Avid editors have always asked us for the Mocha planar tracking tools on their timeline. Now with the Imagineer/Boris FX collaboration, we are bringing the full Mocha Pro to Avid,” explains Ross Shain, CMO at BorisFX/Imagineer. “Media Composer and Symphony users will be able to handle more complex effects and finishing tasks, without importing/exporting footage. Just drop the Mocha Pro plug-in on your clip and you immediately have access to the same powerful tracking, stabilization and object removal tools used in high-end feature film visual effects.”

The availability of this plug-in coincides with the Mocha Pro 5 release.

In other company news, Boris FX's upcoming Boris Continuum Complete (BCC) 10 will have Mocha planar tracking and masking embedded inside every BCC 10 plug-in, where it can be used to isolate areas of an effect with Mocha masks. The first version to ship will be BCC 10 for Avid in a few weeks.

Besides integrating Mocha Pro 5, BCC 10 will also offer a new Beauty Studio skin-retouching filter, new 3D titling and animation tools, import of Maxon Cinema 4D models, new image restoration filters, new transitions and more host support.

Bluefish444 bundling Scratch 8 with its Epoch 4K Neutron

Bluefish444, which makes uncompressed 4K/2K/HD/SD SDI video cards, has released a software bundle consisting of the Epoch 4K Neutron SDI/HDMI solution and Assimilate Scratch 8, a realtime digital intermediate system.

The Epoch 4K Neutron has a new half-height form factor that allows for integration into a broader range of chassis, including low-profile servers, small form factor (SFF) computers and low-profile Thunderbolt expansion chassis. The full-height shield option allows for integration in more traditional workstation computers and meets additional I/O requirements like AES/EBU, and also provides RS422 machine control and domestic analogue audio monitoring. In addition, the solution supports 3G SDI I/O configurations to allow for 4K SDI workflows. An HDMI mini-connector enables a 4K/2K/HD/SD HDMI monitoring preview and allows for color-critical monitoring on consumer HDMI displays supporting Deep Color.

Epoch 4K Neutron Turbo

Other features of the Epoch 4K Neutron/Scratch 8 bundle include cross-platform Windows and Mac OS X support; 4K 30p HDMI monitoring; 8-bit/10-bit/12-bit SDI monitoring and 4K/2K/HD/SD mastering and monitoring; stereoscopic SDI output; 12-bit precision color space conversions; eight-channel AES digital audio I/O; and stereo analogue audio monitoring. The solutions are compatible with Thunderbolt 2 expansion chassis offered by Bluefish444-qualified third-party partners.

postPerspective met with Bluefish444's Tom Lithgow at IBC. He gave us a rundown of the bundle.

IBC 2015 Blog: Audio offerings

By Simon Ray

Last year, I wrote about the potential of a couple of audio products, namely the DAD audio interfaces and the Avid S6. I have had the opportunity to put both of these products into an install at the recently completed 7.1 theatrical mix room (Theatre 2) at Goldcrest in London.

The DAD lived up to its promise and is the cornerstone of this install. With its comprehensive I/O and routing, it makes the setup of a “mixing in the box” room a reality — and one that can handle large theatrical mixing.

The Avid S6 is also a great product that has come a long way since the first buggy prototype was unveiled at IBC two years ago. The recent V.2 release added the last few items needed to make it a fully formed mixing tool. The response so far has been very positive, but, as always, there are a few more “requests” from the mixers to make it even more flexible.

There is never that much in the way of audio represented at IBC in comparison with video, but I was encouraged to see that many of the audio companies that were at the show were incorporating audio-over-IP into their products.

Dante is the audio-over-IP solution from Audinate. I was initially skeptical about using this in a critical situation, such as in our new Theatre 2, but we wanted an all-digital (up to the output of the amps) monitoring chain. Dante allowed us to achieve this, and it has the flexibility to carry the larger channel counts required for Atmos down a single cable, all controlled within a single application.
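
The single-cable claim is easy to sanity-check with raw bandwidth math. Assuming uncompressed 24-bit/48kHz audio (typical for Dante networks) and ignoring packet overhead, even an Atmos-sized channel count fits comfortably inside gigabit Ethernet:

```python
def audio_bandwidth_mbps(channels, bit_depth=24, sample_rate=48_000):
    """Raw payload bandwidth for uncompressed multichannel audio, in Mb/s.

    Ignores packet and header overhead, so real network load is a bit higher.
    """
    return channels * bit_depth * sample_rate / 1e6

for ch in (2, 16, 64, 128):
    print(f"{ch:3d} channels: {audio_bandwidth_mbps(ch):6.1f} Mb/s")
#   2 channels:    2.3 Mb/s
#  16 channels:   18.4 Mb/s
#  64 channels:   73.7 Mb/s
# 128 channels:  147.5 Mb/s -- still well under a gigabit link
```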

We looked to minimize the use of Dante in this room; the install required only a point-to-point connection between the DAD audio interface we are using as the main monitoring router and the unit controlling the signal distribution, DSP, amps and speakers. In order to investigate the possibilities that audio-over-IP offers, we also installed a secondary network that we could use to test the reliability of Dante without affecting the running of the theatre itself.

It quickly became apparent that Dante opens up a wide range of possibilities. It is simple to route, appears very reliable and will save on expensive multicore cabling installation around the building. As such, it was encouraging to see a number of exhibitors display “Dante Enabled” signs around the halls, including at SSL. I always like to visit SSL’s stand even if it is mostly a nostalgic trip down memory lane for me… remembering their great analogue mixing consoles of 20 years ago and my time working in music studios.

SSL Network I-O MADI-Bridge

SSL’s MADI to Dante bridge and their Dante-enabled Stageboxes — SB 8.8 and SB i16.

SSL had a number of Dante-enabled products that looked interesting to me, including a MADI-to-Dante bridge, all of which were developed for their new System T broadcast platform.

Immersive Audio
Finally, I attended a session in the Auditorium where a panel discussed the current situation around immersive audio. This included overviews of Dolby Atmos, Auro 3D and DTS:X, as well as interesting input from a rerecording mixer (Gilbert Lake, who recently finished Mission Impossible 5, which sounds incredible), a representative of the cinema owners and a representative from SMPTE, which has a working group trying to develop interoperable standards to allow competing technologies to coexist in the market.

The cinema owners are concerned about investing in one format only for another to become the standard. To quote one panel member, “No one wants to own an HD-DVD when the world has gone Blu-ray.”

Mixing all content for all formats is not realistic either, as it would be too costly in both time and money. In the UK, at least, the only format that seems to be taking off is Dolby Atmos, with a number of facilities having invested in it. There is still a disappointingly small number of screens showing films in Dolby Atmos in the UK, and with the ones that do, it can often be hard to find out on which screen and when.

There was an interesting discussion after the presentations that seemed to boil down to two things:

1. The playback system needs to faithfully reproduce the artistic intent that was realized on the mix stage. This may be problematic if the format the rerecording mixer mixed in is different to the playback format. Simply: if you mix in Atmos, play back in Atmos. There was much discussion about whether the open standards that SMPTE is trying to set would help the situation, with no real clear word about whether they actually would.

2. Until there is some sort of definitive standard or guidelines as to which system will be used, the cinemas are reluctant to commit time and, more importantly, money to upgrading their theaters.

As sound in the cinema is invariably awful due to poorly maintained systems, whatever format is decided upon, it cannot come soon enough.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

IBC: AJA intros Corvid HEVC, more

At IBC 2015, AJA introduced Corvid HEVC, a 4K and multi-channel HEVC encoding card. AJA also launched a new range of openGear-compatible video and audio rack-frame cards and version 12.3 of the company’s drivers and software for the KONA, T-Tap and Io line of video and audio input/output devices.

As the latest addition to AJA’s developer program, Corvid HEVC (pictured above) is a PCIe 2.0 eight-lane video encoder card providing realtime, low-latency HEVC encoding at 4K, 1080p HD and lower resolutions. Corvid HEVC supports HEVC Main and Main10 profiles, 8- or 10-bit 4:2:0 and 4:2:2, and bit rates for streaming and contribution quality. Development partners can use AJA’s SDK to integrate Corvid HEVC directly into their Windows and Linux applications for a variety of use cases. In addition to HEVC encoding, audio and metadata are captured and included in the encoded file.

For use in standard openGear frames or AJA’s new OG-3 frame, the new openGear-compatible cards from AJA include the OG-1×9-SDI-DA, a 1×9 SDI re-clocking distribution amplifier; the OG-FIBER-2R, a two-channel fiber to SDI converter; and the OG-FIBER-2T, a 2-channel SDI to fiber converter. The new OG-3-FR is a 2RU, 20-slot openGear frame that can support any openGear-compatible card.

AJA System Test

AJA’s 12.3 software adds new closed captioning options for KONA 4 and Io 4K, new output support for Telestream Wirecast and Linux retail drivers for use with apps such as Shotgun Software’s RV and The Foundry’s Nuke. This release also includes the new AJA System Test 2.0, a cross-platform application for Mac or PC with a redesigned user interface. System Test measures disk and PCIe performance, and this version adds a new system-report creation tool.

IBC 2015 Blog: Rainy days but impressive displays, solutions

By Robert Keske

While I noted in my first post that we were treated to beautiful weather in Amsterdam during the first days of IBC 2015, the weather on day four was not quite as nice… it was full of rain and thunderstorms, the latter of which were heard eerily throughout the RAI Exhibition Centre.

CLIPSTER

The next-gen Clipster

I spent day three exploring content delivery and automation platforms.

Rohde & Schwarz’s next-gen Clipster is finally here and is a standout — built on an entirely new hardware platform. It’s seamless, simplified, faster and looks to have a hardware and software future that will not require a forklift upgrade. 

Colorfront, also a leader in on-set dailies solutions, has hit the mark with its Transkoder product. The new HDR mathematical node is nothing less than impressive, which is exactly what you'd expect from Colorfront engineering.

Colorfront Transkoder

UHD and HDR were also at the forefront of the show as the need for higher-quality content continues to grow, and I spent day four examining these emerging display and delivery technologies. Both governments and corporate entities are leading the global community toward delivery of UHD to households starting in 2015, so I was especially interested in seeing how display and content providers would be raising the standards in display tech.

Sony, Samsung and Panasonic (our main image) all showcased impressive results to support UHD and HDR, and I’m looking forward to seeing what further developments and improvements the industry has to offer for both professional and consumer adoption.

Overall, while it seemed like a smaller show this year, I've been impressed by the quality of technology on display. IBC never fails to deliver a showcase of imagination and innovation, and this year was no different.

New York-based Robert Keske is CIO/CTO at Nice Shoes (@NiceShoesOnline).

IBC 2015 Blog: HDR displays

By Simon Ray

It was an interesting couple of days in Amsterdam. I was hoping to get some more clarity on where things were going with the High Dynamic Range concept in both professional and consumer panels, as well as delivery mechanisms to get it to the consumers. I am leaving IBC knowing more, but no nearer a coherent idea as to exactly where this is heading.

I initially visited Dolby to get an update on Dolby Vision (our main image), see where they were with the technology and, most importantly, get my reserved tickets for the screening of Fantastic Four in the Auditorium (laser projection and Dolby Atmos). It all sounded very positive, with news of a number of consumer panel manufacturers being close to releasing Dolby Vision-capable TVs (for example, Vizio with their Reference Series panel) and of streaming services like VUDU streaming Dolby Vision HDR content, although just in the USA to begin with. I also had my first look at a Dolby “Quantum Dot” HDR display panel, which did look good and surely has the best name of any tech out here.

There are other HDR offerings out there, with Amazon Prime having announced in August that it will stream HDR content in the UK, though not initially in the Dolby Vision format (HDR video is available with the Amazon Instant Video app for Samsung SUHD TVs like the JS9000, JS9100 and JS9500 series and selected LG TVs in the G9600 and G9700 series), and the “big” TV manufacturers have launched, or are about to launch, HDR panels. So far so good.

Pro HDR Monitors
Things got a bit more vague when I started looking into HDR-equipped professional panels for color correction. There were only two I could find at the show: Sony had an impressive HDR-ready panel connected to a FilmLight Baselight tucked away on its large stand in Hall 12, and Canon had an equally impressive prototype display tucked away in Hall 11 connected to an SGO Mistika. Both displays had different brightness specs and gamma options.

Canon

When I asked other manufacturers about their HDR panels, the response was the same: “We are going to wait until the specifications are finalized before committing to an HDR monitor.” This leads me to think it is a bad time to be buying a monitor. You are either going to buy an HDR monitor now, which may not match the final specifications, or a non-HDR monitor that is likely to be superseded in the near future.

Another thing I noticed was that the professional HDR panels were all being shown off in a carefully lit environment (or as carefully as a trade show allows) to give them the best opportunity to make an impact. Any ambient light getting into the viewing environment will detract from the benefits of the HDR display's increased dynamic range and brightness, which I imagine might be a problem in the average living room. I hope this does not reduce this technology's chance of making an impact, because it is great to see images with seemingly more depth and quality to them. As a representative on the Sony stand said, “It feels more immersive — I am so much more engaged in the picture.”

Sony

Dolby
The problem of the ambient light was also picked up on in an interesting talk in the Auditorium as part of the “HDR: From zero to infinity” series. There were speakers from iMax, Dolby, Barco and Sony talking about the challenges of bringing HDR to the cinema. I had come across the idea of HDR in cinema from Dolby through their “Dolby Cinema” project, which brings together HDR picture and immersive sound with Dolby Atmos.

I am in the process of building a theatre to mix theatrical soundtracks in Dolby Atmos, but despite the exciting opportunities Atmos offers sound teams, in the UK at least the takeup by cinemas is slow. One of the best things about Dolby Atmos for me is that if you go to see a film in Atmos, you know the speaker system is going to be of a certain standard, otherwise Dolby would not have given it Atmos status. For too long, cinemas have been allowed to let their speaker systems wear down to the point where they become unlistenable. If these new initiatives can give cinemas an opportunity to reinvest in their equipment and get a return on that investment (the various financial implications and challenges, and who would meet these costs, were all discussed), it could be a chance to stop the rot and improve the cinemagoing experience. And, importantly for us in post, it gives us an exciting, high benchmark to aim for when working on films.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

IBC 2015 Blog: Searching for eye candy in an 8K LED screen

By Tim Spitzer

Having come in on a red eye from New York and done my first few meetings, I wandered through the exhibition halls looking for eye candy because my brain was functioning on way too few cylinders. The most eye-popping thing I saw was a gigantic curved 8K LED cinema screen at the AOTO booth in Hall 9. The detail and look were very engaging and gave a sense of what high-quality non-projection displays have to offer. (Please note: The blown-out highlights are my iPhone, not the screen.) I was impressed.

In the Emerging Technologies showcase, Professor Marek Domanski of Poznan University of Technology in Poland was demonstrating a “Free-Viewpoint” television system. Based on a wirelessly synchronized nine-camera set-up, the system allows viewers to navigate around a scene from virtual viewpoints extrapolated from the camera views. Each viewer navigates independently from a user terminal. The technology can be used for monoscopic, stereoscopic or autostereoscopic displays.


The technology, although in its infancy, has obvious value for medical imaging and interactive training, sports replay and analysis, and performance viewing. It is an interesting combo of real and virtual space without eyewear (except on stereoscopic displays).

Finally, Sweden's Flowcine had some extraordinarily lightweight grip and stabilization rigs. Their combination of the Serene (which removes step bounce), Gravity and Puppeteer was beautifully machined: eye candy! Watching their combo rig in action was amazing; the cameraman looked like he had an exoskeleton.

Outside the show, I had an amazing Dutch meal at La Falote, Roelof Hartstraat 26. The chef and owner, Peter, is a wonderful cook as well as a raconteur who makes you feel very welcome.

This was the best meal I have ever had during an IBC.

Tim Spitzer, principal of post production services company Timescape, LLC, has been a fixture in the New York post production landscape, establishing digital lab services, from dailies through digital intermediate finishing, film scanning, library restorations and digital finishing, for a wonderful worldwide roster of socially conscious and visionary filmmakers.

IBC: Autodesk to release Extension 1 for Flame 2016 line

Autodesk will soon release Extension 1 for its Flame 2016 family of 3D VFX software, which includes Autodesk Flame, Autodesk Flare, Autodesk Lustre and Autodesk Flame Assist. Inspired by user feedback, Autodesk added workflow improvements, new creative tools and a performance boost. Flame 2016 Extension 1 will be available to subscription customers on September 23.

Highlights of the Flame 2016 Extension 1 release are:
– Connected Conform: A new, unified media management approach to sharing, sorting and syncing media across different sequences for faster finishing in Flame Premium, Flame and Flare. New capabilities include shared sources, source sequence, shots sequence, shared segment syncing and smart replace.
– Advanced Performance: Realtime, GPU-accelerated debayering of Red and ArriRaw source media using high-performance Nvidia K6000 or M6000 graphics cards. The performance boost allows artists to begin work instantly in Flame Premium, Flame, Flare and Lustre.
– GMask Tracer: New to Flame Premium, Flame and Flare, this feature simplifies VFX creation with spline-based shape functionality and a chroma-keying algorithm.
– User-Requested Features: Proxy workflow enhancements, new batch context views, refined cache status, full-screen views, redesigned tools page and more.

Sony’s new PXW-FS5 camera, the FS7’s little brother

By Robert Loughlin

IBC is an incredibly exciting time of year for gearheads like me, but simultaneously frustrating if you can't make it over to Amsterdam to see the tech in person. So when I was asked if I wanted to see what Sony was going to display at IBC before the trade show, I jumped at the chance.

I was treated to a great breakfast in the Sony Clubhouse, at the top of their building on Madison Avenue, surrounded by startling views of Manhattan and Long Island to the East. After a few minutes of chitchatting with the other writers, we were invited into a conference room to see what Sony had to show. They started by outlining what they believed their strengths were, and where they see themselves moving in the near future.

They stressed that they have tools for all corners of the market, from the F65 to the A7, and that these tools have been used in all ranges of environmental conditions — from extreme cold to scorching heat. Sony was very proud of the fact that they had a tool for almost any application you could think of. Sony’s director of digital imaging, Francois Gauthier, explained that if you started with the question, “What is my deliverable?” — meaning cinema, TV or web — Sony would have a solution for you. Yet, despite that broad range of product coverage, Sony felt that there was a missing piece in there, particularly between the FS7 and their cheaper A7 series of DSLRs. That’s where the PXW-FS5 comes in.

The FS5
The FS5 is a brand-new camera that struck me as the FS7's little brother. It sports a native 4K Super 35mm sensor, and we were told it's the same 12-million-pixel Exmor sensor as the FS7. It records XAVC-L as well as AVCHD codecs, in S-Log 3, to dual SD card slots. The FS5 can also record high frame rates for both realtime recording and overcranking. The sensor itself is rated at EI 3200 with a dynamic range of about 14 stops. Internal recording is 8-bit 4:2:0 at 4K (HD is 10-bit 4:2:2), but you can go out to an external recorder to get 10-bit 4K over the HDMI 2.0 port in the back. The camera also has one SDI port, but that only supports HD. You can record proxies simultaneously to the second SD card slot (though only when recording XAVC-L), and either have both slots sync up or have individual record triggers for each. There is a 2K sensor crop mode as well that will let you either extend your lens or use lenses designed for smaller image formats (like 16mm).

Controls on the side of the FS5

Product manager Juan Martinez stressed the power of the electronics inside, clocking boot time at less than five seconds, and mentioned that it is incredibly power-efficient (about two hours on the BP-U30, the smallest-capacity battery). He added that the camera doesn't need to reboot if you're changing recording formats — you just set it and you're done.

The camera also has a new “Advanced Auto Focus” technology that can use facial recognition to track a subject. In addition to focus tools, the FS5 also has something called “Clear Image Zoom.” Clear Image Zoom is a way to blow up your picture — virtually extending the length of your lens — by first maximizing the optical zoom of the glass, then cleanly enlarging the image digitally. You can do this up to 2x, but it can be paired with the 2K sensor crop to get even more length out of your lens. The FS5 also has a built-in variable ND tool. There’s a dial on the side of the camera that lets you adjust iris to 1/100th of a stop, allowing the operator to do smooth iris pulls. Additionally, the camera has a silver knob on the front that allows you to assign up to three custom ND/iris values that you can quickly switch between.

In terms of design, it looks almost identical to the FS7, just shrunken down a bit. It has similar lines, but has the footprint and depth of the Canon C1/3/500, just a bit shorter. It's a tiny camera. In like fashion, it's also incredibly light. It weighs about two pounds — the magnesium body has something to do with that. It's something I could easily hold in my hand all day. Its size and weight certainly make using this camera on gimbals and medium-sized drones very attractive. The remote operation applications become even more attractive with the FS5's built-in wireless streaming capability. You can stream the image to a computer, wireless streaming hardware (like Teradek) or your smartphone with Sony's app. However, you can get higher bit rates out of the stream by going over the Ethernet port on the back. Both Ethernet and wireless streaming are 720p. With the wireless capability, you can also connect to an FTP server, enabling you to push media directly from the field (provided you have the uplink available).

It’s also designed to work really well in your hand. The camera comes with a side grip that’s easily repositionable thanks to an easily reachable release lever: just release the lever, and the grip is free to rotate. The grip fit perfectly in my palm, with controls either just under where my fingers naturally fell or within easy reach. The buttons include the standard remote buttons, like zoom and start/stop, but also a user-definable button and a corresponding joystick for quick access to menus.

Top: the handgrip in hand. Bottom: button map

The grip is mounted very close to the camera body, in order to optimize the center of gravity while holding it. The camera is small and light enough that while holding it this way without the top handle and LCD viewfinder it’s reminiscent of holding a Handicam. However, if you have a long lens, or a similar setup where the center of gravity alters significantly, and need to move the grip up, you can remove it and mount an ARRI rosette plate (sold separately).

The FS5, without top handle or LCD viewfinder

The camera also comes with a top handle that has GPS built-in, mounting points for the LCD viewfinder, an XLR input, and a Multi Interface hot-shoe mount. The handle also has its own stereo microphone built into the front, but the camera itself can only record two channels of audio.

Sony has positioned this camera to fall between DSLRs and the FS7. The MSRP is $6,699 for the body only, or $7,299 with a kit lens (18-105mm). Actual street prices will be lower, so the FS5 should fit comfortably between the two. Sony envisions this as its “grab and go” camera, ideal for remote documentary and unscripted TV or even web series. The camera is small, light and maneuverable enough to certainly be that. They wanted a camera that would be unintimidating to a non-professional, and I think they achieved that. However, without things like genlock/timecode connections, and with its E-mount lens mount, this camera is less ideal for cinema applications. There are other cameras around the same price point that are better suited for cinema (Blackmagic, Red Scarlet), so that's totally fine. This camera definitely has its DNA deeply rooted in the camcorder days of yore, and it will feel right at home with someone shooting and producing content for documentaries and TV. They showed a brief clip of footage, and it looked sharp with rich colors. I still tend to favor the color coming out of the Canon C series over the FS5, but it's still solid footage. Projected availability is November 2015. For a full breakdown of specs, visit www.sony.com/fs5.

Sony PSZ-RA6T

However, that wasn’t all Sony showed. The FS5 is pretty neat, but I was much more excited about the other thing Sony brought out. Tucked away in a corner of the room, where they had put an FS5 in a “studio” set-up, was a little download station. Centered around a MacBook Pro, the simple station had a Thunderbolt card reader and an offload drive. The PSZ-RA drive is a brand-new product from Sony, and I’m almost more excited about this little piece of hardware than I am about the new camera. It’s a small, two-disk RAID that comes in 4TB and 6TB options. It’s similar to G-Tech’s popular G-RAIDs, with one notable exception: this thing is ruggedized. Imagine a LaCie Rugged the size and shape of a G-RAID (but without that awful orange — this is Sony-gray). The disks inside are buffered; it’s rated to be dropped from about a foot and can safely be tilted four inches in any direction. It supports RAID-0, RAID-1 and JBOD. To me, set to RAID-1, it’s the perfect on-set shuttle drive. It even has a handle on top!

Overall, I saw a couple of really exciting things from Sony, and while I think a lot of people are really going to like the FS5, I’m dying to get the PSZ-RA drives on set.

Robert Loughlin is a post production professional specializing in dailies workflows as an Outpost Technician at Light Iron New York, and an all-around tech-head.

IBC: Adobe upgrades Creative Cloud and Primetime

Adobe is adding new features to Adobe Creative Cloud, including support for Ultra HD (UHD), color-technology improvements and new touch workflows. In addition, Adobe Primetime, one of eight solutions inside Adobe Marketing Cloud, will extend its delivery and monetization capabilities for HTML5 video and offer new tools for pay-TV providers that make TV Everywhere authentication easier and more streamlined.

New video technology coming soon to Creative Cloud includes tools that will streamline workflows for broadcasters and media companies. They are:

  • Comprehensive native format support for editing 4K-to-8K footage in Premiere Pro CC.
  • Continued color advancements with support for High Dynamic Range (HDR) workflows in Premiere Pro CC.
  • Improved color fidelity and color adjustments in After Effects CC, as well as deeper support for ARRI RAW, Rec. 2020 and other Ultra HD and HDR formats.
  • A touch environment with Premiere Pro CC, After Effects CC and Character Animator optimized for Microsoft Surface Pro, Windows 8 tablets or Apple trackpad devices.
  • Remix, a new feature in Audition CC that adjusts the duration of a song to match video content. Remix automatically rearranges music to any duration while maintaining musicality and structure, creating custom tracks to fit storytelling needs.
  • Updated support for Creative Cloud Libraries across CC desktop video tools, powered by Adobe CreativeSync. Now, assets will instantly appear in After Effects and Premiere Pro.
  • Destination Publishing, a single-action solution in Adobe Media Encoder for rendering and delivering content to popular social platforms, will now support Facebook.
  • Adobe Anywhere, a workflow collaboration platform, can be deployed as either a multilocation streaming solution or a single-location collaboration-only version.

Primetime, Adobe’s multiscreen TV platform, is also getting an upgrade to support OTT and direct-to-consumer offerings. The upgrade includes:

  • Ability to deliver HTML5 content across mobile browsers and additional connected devices, extending its reach and monetization capabilities.
  • An instant-on capability that pre-fetches video content inside an app to start playback in less than a second, speeding the startup time for video-on-demand and live streams by 300 and 500 percent, respectively.
  • Support for Dolby AC-3 to enable high-impact, cinema-quality sound on virtually all desktops and connected devices.
  • Support for the OAUTH 2.0 protocol to make it easier for consumers to access their favorite pay-TV content. Pay-TV providers can enable frictionless TV Everywhere with home-based authentication and offer longer authentication sessions that require users to log in only once per device.
  • New support for OTT and TV Everywhere measurement — including a broad variety of user-engagement metrics — in Adobe Analytics, a tool that is integrated with the Primetime TVSDK.

IBC: iZotope announces RX Post Production Suite and RX 5 Audio Editor

Audio technology company iZotope, Inc. has unveiled its new RX Post Production Suite, a set of tools that enable professionals to edit, mix, and deliver their audio, as well as RX 5 Audio Editor, an update to the company’s RX platform.

The new RX Post Production Suite contains products aimed at each stage of the audio post production workflow, including audio repair and editing, mix enhancement and final delivery. It includes the RX 5 Advanced Audio Editor, RX Final Mix, RX Loudness Control and Groove3, as well as the customer's choice of 50 free sound effects from Pro Sound Effects.

The new RX 5 Audio Editor and RX 5 Advanced Audio Editor are designed to repair and enhance common problematic production audio while speeding up workflows that currently require either multiple manual editing passes, or a non-intuitive collection of tools from different vendors. RX 5’s new Instant Process tool lets editors “paint out” unwanted sonic elements directly on the spectral display with a single mouse gesture. The new Module Chain allows users to define a custom chain of processing (e.g. De-click, De-noise, De-reverb, EQ Match, Leveler, Normalize) and then save that chain as a preset so that multiple processes can be recalled and applied in a single click for repetitive tasks.
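
The Module Chain idea, capturing an ordered list of processes once and re-applying it with one click, is essentially function composition saved as a preset. Here is a generic sketch of that pattern; it is illustrative only, with made-up stand-in functions, and is not iZotope's actual API.

```python
from typing import Callable, List

AudioProcess = Callable[[list], list]  # stand-in: a process maps samples to samples

def make_chain(modules: List[AudioProcess]) -> AudioProcess:
    """Compose an ordered list of processes into one reusable preset."""
    def chain(audio: list) -> list:
        for module in modules:
            audio = module(audio)
        return audio
    return chain

# Hypothetical stand-ins for a De-click -> De-noise -> Normalize chain:
def de_click(audio):  # pretend: interpolate over clicks
    return audio

def de_noise(audio):  # pretend: attenuate the noise floor
    return [s * 0.9 for s in audio]

def normalize(audio):  # scale so the loudest sample hits full scale
    peak = max(abs(s) for s in audio)
    return [s / peak for s in audio]

dialogue_preset = make_chain([de_click, de_noise, normalize])
print(dialogue_preset([0.1, -0.4, 0.25]))  # whole chain applied in one call
```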

For Pro Tools/RX 5 workflows, RX Connect has been enhanced to support individual Pro Tools clips and crossfades with any associated handles so that processed audio returns “in place” to the Pro Tools timeline.

RX 5 Advanced also includes a new De-plosive module that minimizes plosives from letters such as p, t, k and b, in which strong blasts of air create a massive pressure change at the microphone element, impairing the sound. In addition, the Leveler module has been enhanced with breath and “ess” (sibilance) detection for increased accuracy when performing faster-than-realtime leveling.

IBC: G-Tech adds four new products to Evolution Series

G-Technology has added four new products to its Evolution (ev) Series, an ecosystem of docking stations and interchangeable and expandable external hard drives and accessories. The new products include the G-Speed Studio XL with two ev Series bay adapters, the ev Series Reader Red Mini-Mag edition, the G-Dock ev Solo and the ev Series FireWire adapter.

The G-Speed Studio XL with two ev Series bay adapters is an eight-bay Thunderbolt 2 storage solution that comes with six enterprise-class hard drives and two integrated ev Series bay adapters for greater capacity and performance. The integrated bay adapters accommodate all ev Series drive modules, enabling cross-functionality with other products in the Evolution Series. Configurable in RAID-0, -1, -5, -6 and -10, it supports multistream compressed 4K workflows with extremely large volumes of data at transfer rates of up to 1,200 MB/sec, and it can daisy chain via dual Thunderbolt 2 ports.
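
For a rough sense of what those RAID choices mean for usable space, here is a back-of-the-envelope calculation in Python; the 8TB drive size is a hypothetical example, and real-world usable capacity will be somewhat lower after formatting overhead.

    # Back-of-the-envelope usable capacity for an n-drive RAID set.
    # Illustrative only; real usable space is lower after formatting.
    def usable_tb(n_drives: int, tb_per_drive: float, level: str) -> float:
        effective_drives = {
            "RAID-0": n_drives,        # pure striping, no redundancy
            "RAID-1": n_drives / 2,    # mirrored pairs
            "RAID-5": n_drives - 1,    # one drive's worth of parity
            "RAID-6": n_drives - 2,    # two drives' worth of parity
            "RAID-10": n_drives / 2,   # striped mirrors
        }
        return effective_drives[level] * tb_per_drive

    # e.g. the six main-bay drives, assuming a hypothetical 8TB each:
    for level in ("RAID-0", "RAID-5", "RAID-6", "RAID-10"):
        print(level, usable_tb(6, 8, level), "TB usable")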

Designed to optimize a Red camera workflow, the ev Series Reader Red Mini-Mag edition uses high-performance connectivity for fast Red footage transfers and backup. Users can transfer content quickly from a Red Mini-Mag media card onto any G-Dock ev docking station or G-Speed Studio XL with ev Series bay adapters. The ev Series all-terrain case, which is watertight, adds protection when shooting on the go.

For those who already have several G-Drive ev modules, the G-Dock ev Solo is a USB 3.0 docking solution for shared environments, including studios, labs and classrooms. Users can transfer, edit and back up an existing Evolution Series hard drive module by inserting it into the G-Dock ev Solo. When paired with the G-Drive ev, G-Drive ev Raw, G-Drive ev 220 or G-Drive ev SSD, the G-Dock ev Solo can store up to 2TB of data and transfer content at rates of up to 400 MB/sec.

Finally, the new ev Series FireWire adapter attaches to an ev Series drive, allowing connection to an existing FireWire 800 port. Users can connect a G-Drive ev Raw, G-Drive ev, G-Drive ev 220 or G-Drive ev SSD to a computer via one of two FireWire 800 ports or daisy chain them.

The G-Speed Studio XL with two ev Series bay adapters, the ev Series Reader Red Mini-Mag edition and the G-Dock ev Solo will be available in October. The ev Series FireWire adapter is available now.

IBC: EditShare showing XStream shared storage

EditShare, a provider of shared storage and media management solutions, is demonstrating its recently released XStream ST model at IBC2015. XStream ST is EditShare’s entry-level, high-performance shared storage solution with integrated Flow Production Asset Management and Ark backup and archiving. Designed to give creative teams performance, collaboration features and fault tolerance at a lower price point, XStream ST includes NLE project sharing, ingest, transcode, remote editorial, and backup and archive tools in a single platform.

The system is designed for easy expandability, allowing facilities to scale as business and production needs grow. Users can connect additional XStream ST storage to their networks and manage it all as one integrated system. XStream ST is fault-tolerant, with built-in RAID-6 as well as redundant power supplies, fans and OS drives. With no per-seat storage client licenses, users can connect multiple NLEs and creative clients. The system includes Flow Production Media Asset Management and AirFlow for remote editing and “review and approve” workflows. Users can archive projects to disk or LTO for secure backup of assets using Ark (disks and tape library sold separately).

IBC: Tangent showing entry-level Ripple panel

For those who have envied Tangent’s color grading panels but couldn’t justify the investment because grading isn’t their main role, Tangent is developing an affordable option.

Tangent’s Ripple is the company’s new entry-level panel, designed for the occasional colorist, editor or student. Ripple features three tracker balls that speed up primary grading. It is lightweight, with a footprint small enough to sit beside your keyboard and mouse without getting in the way.

Tangent will be at IBC with pre-production prototypes, and the company says the design may still change before the panel goes on sale in early 2016 at an estimated price of $350 US.

Like all the panels from Tangent, it’s supported by the company’s Mapper software, which means you can customize what the controls do with any software that supports the Mapper. Ripple is already compatible with any grading software that uses the Tangent Hub — Resolve, Nucoda, Scratch, SpeedGrade and others. You can also use Ripple with the other panels from the Element range, including the element-Vs tablet app, so you can expand its functionality.

A rundown of the features:
• Three tracker balls with dials for masters.
• High-resolution optical pick-ups for the balls and dials.
• Independent reset buttons for the balls and dials.
• Programmable A and B buttons.
• USB powered with integral cable.

One of our reviewers, working editor Brady Betzel, is eager to give it a look. “When cutting side projects, sizzle reels or any other type of multimedia, I always want to color correct the footage, but it gets tedious without a set of panels like the Tangent Elements. Just doing some quick, superficial color correcting might not justify the price tag, but the Tangent Ripple makes it affordable for everyone who dabbles in color correcting. I am really looking forward to playing with it, and at the lowest price for a panel it might just be the ticket for lots of editors and VFX artists.”

Atomos offering lightweight Ninja Assassin for 4K/UHD

Atomos, maker of the established high-end Shogun, has added the Ninja Assassin to its product line. The Assassin records 4K UHD and 1080 60p and is a 10-bit 4:2:2 recording solution for Apple Final Cut Pro X, Avid Media Composer and Adobe Premiere Pro workflows. Atomos describes it as a lightweight and affordable add-on to existing DSLR, mirrorless, video and cinema cameras. It’s available now.

The Ninja Assassin offers the screen size, screen resolution, advanced recording capability and scopes of the company’s premium Shogun model, but without the 12G/6G/3G-SDI connectivity, RAW recording functionality, built-in conversion, Genlock and balanced XLR audio connections. The main benefits are a 10 percent weight reduction (to 430g) and a $1,295 (US) price point, including soft case, SSD caddy and AC adaptor.

The Assassin targets 4K DSLM cameras such as the Sony a7S and a7R II, Canon XC10 and Panasonic GH4. It has HDMI-focused audio/video connections and ships with a brand-new red Armor Bumper for increased protection.

Key features include:
• Recording of more accurate, higher-resolution color (4:2:2, 10-bit) direct to visually lossless editing formats.
• No recording time limits.
• Professional shot setup on a calibrated high-resolution 7-inch monitor with more than 320 pixels per inch.
• Anamorphic de-squeeze — a good companion for Panasonic’s GH4 and affordable anamorphic lenses/adaptors.
• Easy-to-use pro monitoring tools, including focus peaking assist, 1:1 and 2:1 zoom with smooth image pan and scan, False Color (skin tones), Zebra and Waveform/Vectorscopes for in-depth image analysis.
• Pre-roll cache recording of up to 8 seconds of HD or 2 to 3 seconds of 4K (see the ring-buffer sketch after this list).
• Video timelapse with up to 10 different sequences, speed ramp and scheduled start and end times over 24 hours.
• 3D LUTs allow creation of a specific signature look. The 50:50 split / LUT on / LUT off view allows users to compare effects and make creative decisions on the fly.
• Playback for instant review and editing on the fly with a choice of 10 tags in both record and playback mode.
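
Pre-roll caching like this is typically built on a circular buffer that is always recording, so frames captured just before the record button was pressed are preserved. Here is a minimal Python sketch of the idea; the frame rate and buffer depth are illustrative, and this is in no way Atomos’ firmware.

    from collections import deque

    FPS = 30                 # hypothetical frame rate
    PRE_ROLL_SECONDS = 8     # the HD pre-roll depth quoted above

    # A ring buffer keeps only the most recent N frames; older frames fall
    # off the front automatically, so memory use stays bounded.
    cache = deque(maxlen=FPS * PRE_ROLL_SECONDS)

    def on_new_frame(frame):
        cache.append(frame)  # runs continuously, even before recording starts

    def start_recording():
        # On record, the cached pre-roll frames are written out first,
        # then live frames follow.
        return list(cache)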

Free public beta of Fusion 8 now available for Mac and PC

The public beta of the free version of Blackmagic’s Fusion 8, the company’s visual effects and motion graphics software, is now available for download from the Blackmagic Design website for both Mac OS X and Windows.

A beta for the paid version, Fusion 8 Studio, which adds stereoscopic 3D tools and is designed for multi-user workgroups and larger studios, will be available shortly. However, current Fusion Studio customers can download the public beta for the free version of Fusion 8 and start using it today.

This public beta is also the first-ever Mac-compatible release of Fusion, which was previously a Windows-only product. Projects can be moved easily between the Mac and Windows versions, so customers can work on the platform of their choice.

In the six months since Fusion 8 was launched at NAB, the user interface has seen many improvements and now has a more modern look. More are planned as the Fusion engineering team continues to work with the visual effects community.

Featuring a node-based interface, Fusion makes it easy to build high-end visual effects compositions very quickly. Nodes are small icons that represent effects, filters and other image processing operations that can be connected together in any order to create unlimited visual effects. Nodes are laid out logically like a flow chart, so customers won’t waste time hunting through nested stacks of confusing layers with filters and effects. With a node-based interface, it’s easy to see and adjust any part of a project in Fusion by clicking on a node.
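
To make the node-graph idea concrete, here is a toy version in Python: each node wraps one image operation, and chaining nodes in different orders produces different results. This is a generic illustration of node-based compositing, not Fusion’s actual API.

    # Toy node graph: each node is one image operation with upstream inputs.
    # A generic illustration of the concept, not Blackmagic Fusion's API.

    class Node:
        def __init__(self, name, op, *inputs):
            self.name, self.op, self.inputs = name, op, inputs

        def render(self):
            # Evaluate upstream nodes first, flow-chart style, then apply this op.
            return self.op(*(n.render() for n in self.inputs))

    source = Node("Loader", lambda: "frame")
    blurred = Node("Blur", lambda img: f"blur({img})", source)
    graded = Node("ColorCorrect", lambda img: f"grade({img})", blurred)

    print(graded.render())  # grade(blur(frame))

Because every operation is its own addressable node, adjusting one step means clicking that node rather than digging through nested layers.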

With a massive toolset consisting of hundreds of built-in tools, customers can pull keys, track objects, rotoscope, retouch images, animate titles, create amazing particle effects and much more, all in a true 3D workspace. Fusion can also import 3D models, point cloud data, cameras or even entire 3D scenes from Maya, 3ds Max or LightWave and render them seamlessly with other elements. Deep pixel tools can be used to add volumetric fog, lighting and reflection mapping of rendered objects using world position passes so customers can create amazing atmospheric effects that render in seconds, instead of hours.

Fusion has been used on thousands of feature film and television projects, including Thor, Edge of Tomorrow, The Hunger Games trilogy, White House Down, Battlestar Galactica and others.

Forbidden to demo Forscene’s virtualized post workflow at IBC

Forbidden Technologies, maker of the editing software Forscene, will be at IBC in Amsterdam showing its end-to-end virtualized workflow for posting and distributing video content. The hardware-independent solution is enabled by Forscene’s integration with the Microsoft Azure cloud-computing platform.

The workflow runs the Forscene ingest server as a virtual machine on the Microsoft Azure platform, transcoding and ingesting live broadcast streams into Forscene accounts seconds behind the live feed. Video editors can then create subclips or full highlights packages in Forscene’s NLE from anywhere. Once the edit is complete, they can drop the sequence back onto Azure for faster-than-realtime conforming and distribution.
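
In outline, that is a three-stage pipeline: cloud ingest/transcode, remote editing, and cloud conform/distribution. The Python sketch below shows the general shape of such an orchestration; every function and name in it is hypothetical, since Forbidden has not published this workflow as a code-level API.

    # Hypothetical outline of the ingest -> edit -> conform pipeline.
    # None of these functions are a real Forscene or Azure API.

    def ingest_live_stream(stream_url: str, account: str) -> str:
        """Cloud VM transcodes a live feed into editing proxies, seconds behind live."""
        return f"proxy media for {account} from {stream_url}"

    def edit_highlights(proxy_media: str) -> dict:
        """An editor cuts subclips or a highlights package against the proxies."""
        return {"sequence": "highlights.edl", "source": proxy_media}

    def conform_and_publish(edit: dict) -> str:
        """The cloud conforms the sequence faster than realtime, then distributes it."""
        return f"published {edit['sequence']}"

    proxies = ingest_live_stream("rtmp://race-feed.example.com", account="demo")
    cut = edit_highlights(proxies)
    print(conform_and_publish(cut))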

IBC attendees can experience the virtualized workflow by competing in a simulated car race, editing the race footage in Forscene and then sharing the video on social media — without needing any Forscene hardware.