Category Archives: IBC

I was an IBC virgin

By Martina Nilgitsalanont

I recently had the opportunity to attend the IBC show in Amsterdam. My husband, Mike Nuget, was asked to demonstrate workflow and features of FilmLight’s Baselight software, and since I was in between projects — I’m an assistant editor on Showtime’s Billions and will start on Season 4 in early October — we turned his business trip into a bit of a vacation as well.

Although I’ve worked in television for quite some time, this was my first trip to an industry convention, and what an eye-opener it was! The breadth and scope of the exhibit halls, the vendors, the attendees and all the fun tech equipment that gets used in the film and television industry took my breath away (dancing robotic cameras??!!). My husband attempted to prepare me for it before we left the States, but I think you have to experience it to fully appreciate it.

Since I edit on Media Composer, I stopped by Avid’s booth to see what new features they were showing off, and while I saw some great new additions, I was most tickled when one of the questions I asked stumped the coders. They took a note of what I was asking of the feature, and let me know, “We’ll work on that.” I’ll be keeping an eye out!

Of course, I spent some time over at the FilmLight booth. It was great chatting with the folks there and getting to see some of Baselight’s new features. And since Mike was giving a demonstration of the software, I got to attend some of the other demos as well. It was a real eye-opener as to how much time and effort goes into color correction, whether it’s on a 30-second commercial, documentary or feature film.

Another booth I stopped by was Cinedeck, over at the Launchpad. I got a demo of their CineXtools, and I was blown away. How many times do we receive a finished master (file) that we find errors in? With this software, instead of making the fixes and re-exporting (and QCing) a brand-new file, you can insert the fixes and be done! You can remap audio tracks if they’re incorrect, or even fix an incorrect closed caption. This is, I’m sure, a pretty watered-down explanation of some of the things the CineX software is capable of, but I was floored by what I was shown. How more finishing houses aren’t aware of this is beyond me. It seems like it would be a huge time saver for the operators who need to make the fixes.

Amsterdam!
Since we spent the week before the convention in Amsterdam, Mike and I got to do some sightseeing. One of our first stops was the Van Gogh Museum, which was very enlightening and had an impressive collection of his work. We took a canal cruise at night, which offered a unique vantage point of the city. And while the city is beautiful during the day, it’s simply magical at night — whether by boat or simply strolling through the streets — with the warm glow from living rooms and streetlights reflected in the water below.

One of my favorite things was a food tour in the Jordaan district, where we were introduced to a fantastic shop called Jwo Lekkernijen. They sell assorted cheeses, delectable deli meats, fresh breads and treats. Our prime focus while in Amsterdam was to taste the cheese, so we made a point of revisiting later in the week so that we could delight in some of the best sandwiches EVER.

I could go on and on about all our wanderings (Red Light District? Been there. Done that. Royal Palace? Check.), but I’ll keep it short and say that Amsterdam is definitely a city that should be explored fully. It’s a vibrant and multicultural metropolis, full of warm and friendly people, eager to show off and share their heritage with you. I’m so glad I tagged along!

AJA Introduces Kona 5 with 12G-SDI I/O

At IBC 2018, AJA debuted Kona 5, a new eight-lane PCIe 3.0 video and audio I/O card supporting 12G-SDI I/O and HDMI 2.0 monitoring/output for workstations or Thunderbolt 3-connected chassis. Kona 5 supports 4K/UltraHD and HD high frame rate, deep color and HDR workflows over one cable. For developers, AJA’s SDK offers support for Kona 5 multi-channel 12G-SDI I/O, enabling multiple 4K streams of input or output.

The Kona 5 capture and output card is interoperable with standard tools such as Adobe Premiere Pro, Apple Final Cut Pro X and Avid Media Composer, using AJA’s Mac OS and Windows drivers and application plug-ins. The card supports simultaneous capture with pass-through monitoring when using 12G-SDI and offers HDMI 2.0 output for connecting to the latest displays.

“With today’s audiences expecting the highest quality content, high resolution, high frame rate and deep color are quickly becoming the norm across broadcast and post workflows, prompting the need for faster, more efficient approaches,” says AJA president Nick Rashby. “Kona 5 combines the flexibility of AJA’s Io 4K Plus into a desktop I/O solution with a more powerful feature set.”

Kona 5 feature highlights include:

• 12G-SDI I/O and HDMI 2.0 monitoring/output for 4K, UltraHD, 2K, HD and SD with HFR support up to 4K 60p at YUV 10-bit 4:2:2 and support for RGB 12-bit 4:4:4 up to 4K 30p
• 4x bi-directional 12G-SDI ports and 1x reference in, on robust HD-BNC connectors, with HD-BNC to full-sized BNC cables included
• 16-channel embedded audio on SDI, 8-channel embedded audio on HDMI
• 8-channel AES audio I/O, LTC I/O, and RS-422 serial control via supplied break-out cable
• 10-bit downstream keyer in hardware, supporting up to 4K resolution
• Compatibility with Adobe Premiere Pro, Apple Final Cut Pro X, Avid Media Composer, Telestream Wirecast, AJA Control Room and others
• AJA SDK compatibility, offering advanced features including multi-channel 4K I/O
• Three-year international warranty


Xytech intros mobile UI, REST APIs for MediaPulse at IBC

Xytech, maker of the MediaPulse facility management software, has introduced a new user interface that extends MediaPulse to a wider range of users and expands support for multiple devices.

The new MediaPulse UI provides custom screens tailored to the needs of operations staff, producers, facility managers, field crews and freelancers. The goal of the new UI, according to Xytech, is to increase efficiency and consistency for the entire organization and speed media workflows.

“Our new MediaPulse Mobile UI is designed to provide a personalized interface for all team members,” explains Greg Dolan of Xytech. “This is the beginning of a crucial strategy for Xytech as we expand our technology from the hands of operational and financial users to all users in the enterprise.”

Xytech has also announced the latest release of the MediaPulse Development Platform. The platform provides integrations with all systems through an API library that now supports REST calls. Triggered messaging, parameter-based alerts and automated report delivery are all included in the new release.


Adobe updates Creative Cloud

By Brady Betzel

You know it’s almost fall when pumpkin spice lattes are back and Adobe announces its annual updates. At this year’s IBC, Adobe had a variety of updates to its Creative Cloud line of apps. From more info on its new editing platform Project Rush to the addition of Characterizer to Character Animator — there are a lot of updates, so I’m going to focus on a select few that I think really stand out.

Project Rush

I use Adobe Premiere quite a lot these days; it’s quick and relatively easy to use and will work with pretty much every codec in the universe. In addition, the Dynamic Link between Adobe Premiere Pro and Adobe After Effects is an indispensable feature in my world.

With the 2018 fall updates, Adobe Premiere will be closer to a color tool like Blackmagic’s Resolve with the addition of new hue saturation curves in the Lumetri Color toolset. In Resolve these are some of the most important aspects of the color corrector, and I think that will be the same for Premiere. From Hue vs. Sat, which can help isolate a specific color and desaturate it, to Hue vs. Luma, which can help add or subtract brightness values from specific hues and hue ranges — these new color correcting tools further Premiere’s venture into true professional color correction. These new curves will also be available inside of After Effects.

After Effects features many updates, but my favorites are the ability to access depth matte data of 3D elements and the addition of the new JavaScript engine for building expressions.

There is one update that runs across both Premiere and After Effects that seems to be a sleeper update. The improvements to motion graphics templates, if implemented correctly, could be a time and creativity saver for both artists and editors.

AI
Adobe, like many other companies, seems to be diving heavily into the “AI” pool, which is amazing, but… with great power comes great responsibility. I realize others might not feel this way, but sometimes I don’t want all the work done for me. With new features like Auto Lip Sync and Color Match, editors and creators of all kinds should not lose the forest for the trees. I’m not telling people to ignore these features, just asking that they put a few minutes into discovering how the color of a shot was matched, so they can fix it if something goes wrong. You don’t want to be the editor who says, “Premiere did it,” and not have a good solution when something breaks.

What Else?
I would love to see Adobe take a stab at digging up the bones of SpeedGrade and integrating that into the Premiere Pro world as a new tab. Call it Lumetri Grade, or whatever? A page with a more traditional colorist layout and clip organization would go a long way.

In the end, there are plenty of other updates to Adobe’s 2018 Creative Cloud apps, and you can read their blog to find out about other updates.


Presenting at IBC vs. NAB

By Mike Nuget

I have been lucky enough to attend NAB a few times over the years, both as an onlooker and as a presenter. In 2004, I went to NAB for the first time as an assistant online editor, mainly just tagging along with my boss. It was awesome! It was very overwhelming and, for the most part, completely over my head. I loved seeing things demonstrated live by industry leaders. I felt I was finally a part of this crazy industry that I was new to. It was sort of a rite of passage.

Twelve years later, Avid asked me to present on the main stage. Knowing that I would be one of the demo artists that other people would sit down and watch — as I had done just 12 years earlier — was beyond anything I thought I would do back when I first started. The demo showed the Avid and FilmLight collaboration between the Media Composer and the Baselight color system, two of my favorite systems to work on.

Thanks to my friend and now former co-worker Matt Schneider, who also presented alongside me, I had developed a very good relationship with the Avid developers and some of the people who run the Avid booth at NAB. At the same time, the FilmLight team was quickly being put on my speed dial, and that relationship strengthened as well.

This past NAB, Avid once again asked me to come back and present on the main stage about Avid Symphony Color and FilmLight’s Baselight Editions plug-in for Avid, but this time I would get to represent myself and my new freelance career change — I had just left my job at Technicolor-Postworks in New York a few weeks prior. I thought that since I was now a full-time freelancer this might be the last time I would ever do this kind of thing. That was until this past July, when I got an email from the FilmLight team asking me to present at IBC in Amsterdam. I was ecstatic.

Preparing for IBC was similar enough as far as my demo went, but I was definitely more nervous than I was at NAB. I think there were two reasons. First, I was presenting in front of many different people in an international setting. Even though I am from the melting pot of NYC, it is a different and interesting feeling being surrounded by so many different nationalities all day long, and pretty much being the minority. On a personal note, I loved it. My wife and I love traveling, and to us this was an exciting chance to be around people from other cultures. On a business level, I guess I was a little afraid that my fast-talking New Yorker side would lose some people, and I didn’t want that to happen.

The second thing was that this was the first time that I was presenting strictly for FilmLight and not Avid. I have been an Avid guy for over 15 years. It’s my home, it’s my most comfortable system, and I feel like I know it inside and out. I discovered Baselight in 2012, so to be presenting in front of FilmLight people, who might have been using their systems for much longer, was a little intimidating.

When I walked into the room, they had set up a full-on production, with spotlights, three cameras, a projector… the nerves rushed back once again. The demo was standing room only. Sometimes when you are doing presentations, time seems to fly by, so I am not sure I remember every minute of the 50-minute presentation, but I do remember at one point within the first few minutes my voice actually trembled, which internally I thought was funny, because I do not tend to get nervous. So instead of fighting it, I actually just said out loud, “Sorry guys, I’m a little nervous here,” then took a deep breath, gathered myself, and fell right into my routine.

I spent the rest of the day watching the other FilmLight demos and running around the convention again saying hello to some new vendors and goodbye to those I had already seen, as Sunday was my last day at the show.

That night I got to hang out with the entire FilmLight staff for dinner and some drinks. These guys are hilarious; what a great tight-knit family vibe they have. At one point they even started to label each other: the uncle, the crazy brother, the funny cousin. I can’t thank them enough for being so kind and welcoming. I kind of felt like a part of the family for a few days, and it was tremendously enjoyable and appreciated.

Overall, IBC felt similar enough to NAB, but with a nice international twist. I definitely got lost more since the layout is much more confusing than NAB’s. There are 14 halls!

I will say that the “relaxing areas” at IBC are much better than NAB’s! There is a sandy beach to sit on, a beautiful canal to sit by while having a Heineken (of course) and the food trucks were much, much better.

I do hope I get to come back one day!


Mike Nuget (known to most as just “Nuget”) is a NYC-based colorist and finishing editor. He recently decided to branch out on his own and become a freelancer after 13 years with Technicolor-Postworks. He has honed a skill set across multiple platforms, including FilmLight’s Baselight, Blackmagic’s Resolve, Avid and more. 


IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I’ve been traveling to IBC, I’ve tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is to leverage the power and flexibility of cloud computing. But those revolutions haven’t fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merge between telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film, as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering by the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and its new HDRP (High Definition Render Pipeline) allows for incredible realism, even when rendering fast enough for 60-plus frames per second. In order to create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, where games used less detailed models than film CGI shots did, to preserve realtime performance. So, the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.


Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using their system based on the Unreal game engine. The NCam system provides realtime tracking data to specify the position and angle of a freely moving physical camera. This data was being fed to an Unreal game engine, which was then adding in animated digital objects. They were also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was then being passed back to Unreal to be used as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the real-world scene.

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard-wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
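That self-correcting loop can be boiled down to a toy sketch. This is purely illustrative — the one-parameter model, data and learning rate are all invented for the example; real deep learning stacks millions of such adjustable parameters into neural networks trained on vast datasets:

```python
# Minimal illustration of "adjust itself if the output is not correct":
# a one-parameter model nudged toward the right answer by its own errors.

def train(samples, lr=0.1, epochs=100):
    """Fit y = w * x by repeatedly correcting w from its own mistakes."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y     # compare output to the known data
            w -= lr * error * x   # adjust the model to shrink the error
    return w

# The algorithm discovers the rule (y = 2x) from examples alone.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 3))  # close to 2.0
```

Nothing here was hard-wired to multiply by two; the rule emerged from the data, which is the essential difference from a conventional program.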

Real-world applications are already prevalent and, in our industry, largely relate to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to those of others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automated editing highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd. This finds the highlight moment. Another algorithm rewinds back from that to find the logical beginning of that moment, such as the pass forward, the beginning of the volley etc. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlight package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
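As a thought experiment, the rewind-from-the-cheer pipeline described above can be sketched in a few lines. Everything here is hypothetical — the function name, thresholds and the idea of a per-second audio-energy track are my own simplification, not IBM’s actual Watson implementation:

```python
# Hypothetical sketch of the highlight pipeline: find a crowd-noise peak
# in an audio-energy track, then rewind to the quiet start of the play.

def find_highlights(energy, cheer_level=0.8, quiet_level=0.3):
    """energy: per-second loudness values normalized to 0..1.
    Returns (start, peak) index pairs for each detected highlight."""
    highlights = []
    i = 0
    while i < len(energy):
        if energy[i] >= cheer_level:                # "listen" for cheering
            start = i
            while start > 0 and energy[start - 1] > quiet_level:
                start -= 1                          # rewind to the quiet moment
            highlights.append((start, i))
            while i < len(energy) and energy[i] >= cheer_level:
                i += 1                              # skip the rest of this cheer
        else:
            i += 1
    return highlights

track = [0.1, 0.2, 0.5, 0.6, 0.9, 0.95, 0.4, 0.1]
print(find_highlights(track))  # one highlight: builds from second 2, peaks at 4
```

The real system layers further signals on top, such as weighting each detected moment by its effect on the score, but the find-the-cheer-then-rewind shape is the core idea.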

BBC R&D was talking about their trials to have DL systems control cameras at sporting events, as they could be trained to follow the “rule of thirds” framing convention and to spot moments of excitement that justified close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and need Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, but not necessarily, the only defence for mere humans to keep a hand in.

That said, at IBC 2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system is tasked to emulate media people, surely it would create digital alcohol and spend hours talking nonsense, rather than try and take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.


Quantum upgrades Xcellis scale-out storage with StorNext 6.2, NVMe tech

Quantum has made enhancements to its Xcellis scale-out storage appliance portfolio with an upgrade to StorNext 6.2 and the introduction of NVMe storage. StorNext 6.2 bolsters performance for 4K and 8K video while enhancing integration with cloud-based workflows and global collaborative environments. NVMe storage significantly accelerates ingest and other aspects of media workflows.

Quantum’s Xcellis scale-out appliances provide high performance for increasingly demanding applications and higher resolution content. Adding NVMe storage to the Xcellis appliances offers ultra-fast performance: 22 GB/s single-client, uncached streaming bandwidth. Excelero’s NVMesh technology in combination with StorNext ensures all data is accessible by multiple clients in a global namespace, making it easy to access and cost-effective to share Flash-based resources.

Xcellis provides cross-protocol locking for shared access across SAN, NFS and SMB, helping users share content across both Fibre Channel and Ethernet.

With StorNext 6.2, Quantum now offers an S3 interface to Xcellis appliances, allowing them to serve as targets for applications designed to write to RESTful interfaces. This allows pros to use Xcellis as either a gateway to the cloud or as an S3 target for web-based applications.
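As a rough illustration of what an S3 target enables, here is a sketch of an application writing to an S3-compatible endpoint. The endpoint URL, bucket name and key helper are invented for the example, and this is not Quantum code — any S3 client that accepts a custom endpoint (boto3 is shown) would work the same way:

```python
# Hypothetical example: an application treating an S3-compatible
# appliance as its RESTful write target. Endpoint and bucket are made up.

def object_key(project, filename):
    """Build a predictable object key so assets stay grouped by project."""
    return f"{project}/{filename}"

def upload(path, project, bucket="media-archive",
           endpoint="https://xcellis.example.com"):
    import boto3  # deferred import; credentials come from the environment
    s3 = boto3.client("s3", endpoint_url=endpoint)
    with open(path, "rb") as f:
        s3.put_object(Bucket=bucket, Key=object_key(project, path), Body=f)

# e.g. upload("reel_01.mxf", "show-103") writes the file to
# media-archive/show-103/reel_01.mxf on the S3-compatible target.
```

Because the interface is standard S3, the same code can point at the appliance today and at a public cloud bucket tomorrow by changing only the endpoint.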

Xcellis environments can now be managed with a new cloud monitoring tool that enables Quantum’s support team to monitor critical customer environmental factors, speed time to resolution and ultimately increase uptime. When combined with Xcellis Web Services — a suite of services that lets users set policies and adjust system configuration — overall system management is streamlined.

Available with StorNext 6.2, enhanced FlexSync replication capabilities enable users to create local or remote replicas of multitier file system content and metadata. With the ability to protect data for both high-performance systems and massive archives, users now have more flexibility to protect a single directory or an entire file system.

StorNext 6.2 lets administrators provide defined and enforceable quotas and implement quality of service levels for specific users, and it simplifies reporting of used storage capacity. These new features make it easier for administrators to manage large-scale media archives efficiently.

The new S3 interface and NVMe storage option are available today. The other StorNext features and capabilities will be available by December 2018.

 


mLogic at IBC with four new storage solutions

mLogic will be at partner booths during IBC, showing four new products: the mSpeed Pro, mRack Pro, mShare MDC and mTape SAS.

The mLogic mSpeed Pro (pictured) is a 10-drive RAID system with integrated LTO tape. This hybrid solution provides high-speed hard drive access to media for coloring, editing and VFX, while also providing long-term archiving of content to LTO tape, which promises more than 30 years of media preservation.

mSpeed Pro supports multiple RAID levels, including RAID-6 for the ultimate in fault tolerance. It connects to any Linux, macOS, or Windows computer via a fast 40Gb/second Thunderbolt 3 port. The unit ships with the mLogic Linear Tape File System (LTFS) Utility, a simple drag-and-drop application that transfers media from the RAID to the LTO.

The mLogic mSpeed Pro will be available in 60TB, 80TB and 100TB capacities with an LTO-7 or LTO-8 tape drive. Pricing starts at $8,999.

The mRack Pro is a 2U rack-mountable archiving solution that features full-height LTO-8 drives and Thunderbolt 3 connectivity. Full-height (FH) LTO-8 drives offer numerous benefits over their half-height counterparts, including:
– Having larger motors that move media faster
– Working more optimally in LTFS (Linear Tape File System) environments
– Providing increased mechanical reliability
– Being a better choice for high-duty cycle workloads
– Having a lower operating temperature

The mRack Pro is available with one or two LTO-8 FH drives. Pricing starts at $7,999.

mLogic’s mShare is a metadata controller (MDC) with PCIe switch and embedded Storage Area Network (SAN) software, all integrated in a single compact rack-mount enclosure. Designed to work with mLogic’s mSAN Thunderbolt 3 RAID, the unit can be configured with Apple Xsan or Tiger Technology Tiger Store software. With mShare and mSAN, collaborative workgroups can be configured over Thunderbolt at a fraction of the cost of traditional SAN solutions. Pricing TBD.

Designed for archiving media in Linux and Windows environments, mTape SAS is a desktop LTO-7 or LTO-8 drive that ships bundled with a high-speed SAS PCIe adapter to install in host computers. The mTape SAS can also be bundled with XenData Workstation 6 archiving software for Windows. Pricing starts at $3,399.


Winners: IBC2017 Impact Awards

postPerspective has announced the winners of our postPerspective Impact Awards from IBC2017. All winning products reflect the latest version of the product, as shown at IBC.

The postPerspective Impact Award winners from IBC2017 are:

• Adobe for Creative Cloud
• Avid for Avid Nexis Pro
• Colorfront for Transkoder 2017
• Sony Electronics for Venice CineAlta camera

Seeking to recognize debut products and key upgrades with real-world applications, the postPerspective Impact Awards are determined by an anonymous judging body made up of industry pros. The awards honor innovative products and technologies for the post production and production industries that will influence the way people work.

“All four of these technologies are very worthy recipients of our first postPerspective Impact Awards from IBC,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category. You’ll notice that our awards from IBC span the entire pro pipeline, from acquisition to on-set dailies to editing/compositing to storage.

“As IBC falls later in the year, we are able to see where companies are driving refinements to really elevate workflow and enhance production. So we’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We’re very proud of that fact, and it makes our awards quite special.”

IBC2017 took place September 15-19 in Amsterdam. postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at the 2018 NAB Show.

Xytech launches MediaPulse Managed Cloud at IBC

Facility management software provider Xytech has introduced a cloud and managed services offering, MediaPulse Managed Cloud. Hosted in Microsoft Azure, MediaPulse Managed Cloud is a secure platform offering full system management.

MediaPulse Managed Cloud is available through any web browser and compatible with iOS, Android and Windows mobile devices. The new managed services handle most administrative functions, including daily backups, user permissions and screen layouts. The offering is available with several options, including a variety of language packs, allowing for customization and localization.

Slated for shipping in October, MediaPulse Managed Cloud is compliant with European privacy laws and enables secure data transmission across multiple geographies.

Xytech debuted MediaPulse Managed Cloud at IBC2017. The show was the company’s first as a member of the Advanced Media Workflow Association, a community-driven forum focused on advancing business-driven solutions for networked media workflows.