Category Archives: VR

Behind the Title: Left Field Labs ECD Yann Caloghiris

NAME: Yann Caloghiris

COMPANY: Left Field Labs (@LeftFieldLabs)

CAN YOU DESCRIBE YOUR COMPANY?
Left Field Labs is a Venice, California-based creative agency dedicated to applying creativity to emerging technologies. We create experiences at the intersection of strategy, design and code for our clients, who include Google, Uber, Discovery and Estée Lauder.

But it’s how we go about our business that has shaped who we have become. Over the past 10 years, we have consciously moved away from the traditional agency model and have grown by deepening our expertise, sourcing exceptional talent and, most importantly, fostering a “lab-like” creative culture of collaboration and experimentation.

WHAT’S YOUR JOB TITLE?
Executive Creative Director

WHAT DOES THAT ENTAIL?
My role is to drive the creative vision across our client accounts, as well as our own ventures. In practice, that can mean anything from providing insights for ongoing work to proposing creative strategies to running ideation workshops. Ultimately, it’s whatever it takes to help the team flourish and push the envelope of our creative work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably that I learn more now than I did at the beginning of my career. When I started, I imagined that the executive CD roles were occupied by seasoned industry veterans, who had seen and done it all, and would provide tried and tested direction.

Today, I think that cliché is out of touch with what’s required from agency culture and where the industry is going. Sure, some aspects of the role remain unchanged — such as being a supportive team lead or appreciating the value of great copy — but the pace of change is such that the role often requires both the ability to leverage past experience and the willingness to accept that a new paradigm is emerging and that assumptions need to be adjusted.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team, and the excitement that comes from workshopping the big ideas that will anchor the experiences we create.

WHAT’S YOUR LEAST FAVORITE?
The administrative parts of a creative business are not always the most fulfilling. Thankfully, tasks like timesheeting, expense reporting and invoicing are becoming less exhausting thanks to better predictive tools and machine learning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The early hours of the morning, usually when inspiration strikes — when we haven’t had to deal with the unexpected day-to-day challenges that come with managing a busy design studio.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be somewhere at the intersection between an artist, like my mum was, and an engineer, like my dad. There is nothing more satisfying than applying art to an engineering challenge, or vice versa.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school in France, and there wasn’t much room for anything other than school and homework. When I got my Baccalaureate, I decided that, from that point onward, whatever I did would be fun, deeply engaging and at a place where being creative was an asset.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently partnered with ad agency RK Venture to craft a VR experience for the New Mexico Department of Transportation’s ongoing ENDWI campaign, which immerses viewers into a real-life drunk-driving scenario.

ENDWI

To best communicate and tell the human side of this story, we turned to recent breakthroughs in volumetric capture and 3D scanning. Working with Microsoft’s Mixed Reality Capture Studio, we were able to bring every detail of an actor’s performance to life with volumetric performance capture in a way that previous techniques could not.

Bringing a real actor’s performance into a virtual experience is a game changer because of the emotional connection it creates. For ENDWI, the combination of rich immersion with compelling non-linear storytelling proved to affect the participants at a visceral level — with the goal of changing behavior further down the road.

Throughout this past year, we partnered with the VMware Cloud Marketing Team to create a one-of-a-kind immersive booth experience for VMworld Las Vegas 2018 and Barcelona 2018 called Cloud City. VMware’s cloud offering needed a distinct presence to foster a deeper understanding and greater connectivity between brand, product and customers stepping into the cloud.

Cloud City

Our solution was Cloud City, a destination merging future-forward architecture, light, texture, sound and interactions with VMware Cloud experts to give consumers a window into how the cloud, and more specifically how VMware Cloud, can be an essential solution for them. VMworld is the brand’s premier engagement, where hands-on learning helped showcase its cloud offerings. Cloud City garnered 4000-plus demos, which led to a 20% lead conversion in 10 days.

Finally, for Google, we designed and built a platform for hosting online events anywhere in the world: Google Gather. For its first release, teams across Google, including Android, Cloud and Education, used Google Gather to reach and convert potential customers across the globe. With hundreds of events to date, the platform now reaches enterprise decision-makers at massive scale, spanning far beyond what has been possible with traditional event marketing, management and hosting.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Recently, a friend and I shot and edited a fun video homage to the original technology boom-town: Detroit, Michigan. It features two cultural icons from the region, an original big block ‘60s muscle car and some gritty electro beats. My four-year-old son thinks it’s the coolest thing he’s ever seen. It’s going to be hard for me to top that.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Human flight, the Internet and our baby monitor!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram, Twitter, Medium and LinkedIn.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Where to start?! Music has always played an important part in my creative process and in the joy I derive from what we do. I have day-long playlists curated around what I’m trying to achieve during that time. Being able to influence how I feel when working on a brief is essential — it helps set me in the right mindset.

Sometimes, it might be film scores when working on visuals, jazz to design a workshop schedule or techno to dial up productivity when doing expenses.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Spend time with my kids. They remind me that there is a simple and unpretentious way to look at life.

Lucid and EYS3D partner on VR180 depth camera module

EYS3D Microelectronics Technology, the company behind embedded camera modules in some top-tier AR/VR headsets, has partnered with AI startup Lucid. Lucid will power EYS3D’s next-generation depth-sensing camera module, Axis. This means that a single, small, handheld device can capture accurate 3D depth maps with up to a 180-degree field of view at high resolution, allowing content creators to scan, reconstruct and output precise 3D point clouds.

This new camera module, which was demoed for the first time at CES, will give developers, animators and game designers a way to transform the physical world into a virtual one, ramping up content for 3D, VR and AR, all with superior performance in resolution and field of view at a lower cost than some technologies currently available.

A device that captures the environment exactly as you perceive it, but enhances it with precise depth and distance information, could help eliminate the boundaries between what you see in the real world and what you can create in the VR and AR world. This is what the Lucid-powered EYS3D Axis camera module aims to bring to content creators, who gain the “super power” of transforming anything in their vision into a 3D object or scene that others can experience, interact with and walk through.

What was previously possible only with eight to 16 high-end DSLR cameras and expensive software or depth sensors is now combined into one tiny camera module with stereo lenses paired with IR sensors. Axis will cover up to a 180-degree field of view while providing millimeter-accurate 3D in point cloud or depth map format. This device provides a simple plug-and-play experience through USB 3.1 Gen1/2 and supported Windows and Linux software suites, allowing users to further develop their own depth applications such as 3D reconstructing an entire scene, scanning faces into 3D models or just determining how far away an object is.
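
The announcement doesn’t publish the Axis SDK, but the core step the module enables — turning a per-pixel depth map into a 3D point cloud — is standard geometry. Below is a minimal Python sketch of that back-projection under an assumed pinhole model with invented intrinsics; a real 180-degree device would need a fisheye model, and none of the names here come from the actual Axis software.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an Nx3 XYZ point cloud.

    fx, fy, cx, cy are pinhole intrinsics -- placeholder values below,
    not published specs for the Axis module. A true 180-degree FOV
    would require a fisheye model instead of this simple pinhole.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Hypothetical 720p depth frame with invented intrinsics:
depth = np.random.uniform(0.2, 5.0, (720, 1280)).astype(np.float32)
cloud = depth_to_point_cloud(depth, fx=700.0, fy=700.0, cx=640.0, cy=360.0)
print(cloud.shape)  # (N, 3) points in camera space, ready for meshing
```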

Lucid’s AI-enhanced 3D/depth solution, known as 3D Fusion Technology, is currently deployed in many devices, such as 3D cameras, robots and mobile phones, including the Red Hydrogen One, which just launched through AT&T and Verizon nationwide.

EYS3D’s new depth camera module powered by Lucid will be available in Q3 2019.


From artist to AR technologist: What I learned along the way

By Leon Hui

As an ARwall co-founder and chief technology officer (CTO), I manage all things relating to technology for the company. This includes overseeing software and technology development, design, engineering, IT, troubleshooting and everything in between. When we launched the company, I single-handedly developed the critical pieces of technology required to realize the ARwall concept.

I came into augmented reality (AR) as a game development software engineer, and that plays a big role in how I approach this new medium. Stepping into ARwall, it became my job to produce artistic realtime graphics for AR backdrops and settings, while also pursuing technological advancements that will move the AR industry forward.

Rene Amador presents in front of an ARwall screen. The TV monitor in the foreground shows the camera’s perspective.

CEO Rene Amador and I found that the best way to make sure the company retained its artistic values was to bring on highly talented artists, coders and engineers with diverse skill sets in both art and tech. It’s our mission to not let the scales tip one way or the other, and to focus on bringing in both artistic and tech talent.

With the continuing convergence of entertainment and technology, it is vital for a creative technology company to continue to advance, while maintaining and nurturing artistic integrity.

Here is what we have learned along the way in striking this balance:

Diversify Your Hiring
Going into AR, or any other immersive field, it is very important that one understands realtime graphics.

So, while it’s useful for my company to hire engineers that have graphics and coding backgrounds — as many game engineers do — it’s still crucial to hire for the individual strengths of both tech and art. At ARwall, our open roles could be combined for one gifted individual, or isolated with an emphasis on either artistry or coding, for those with specialties.

Because we are dealing with high-quality realtime graphics, the ARwall team would be similar to the team profiles of any AAA game studio. We never deviated from an artistic trajectory — we just brought technology along for the ride. We think of talent recruitment as a crucial process in our advancement and always have our eyes out for our next game developer to fill roles ranging from technical, environment, material and character artist to graphics, game engine and generalist engineer.

Expand Your Education
If someone with a background in film or TV post production came to work in a new tech industry, like AR, they would need to expand their own education. It’s challenging, but not impossible. While my company’s current emphasis is on game developers and CG artists, the backgrounds of fellow co-founders Rene Amador, Eric Navarrette and Jocelyn Hsu span ad agencies, television digital development, post production and beyond.

Jocelyn Hsu on an XR set, a combination of physical set pieces with the CG set extension running in the background.

There are a variety of toolsets and concepts left to learn, including: the software development life cycle; Microsoft Project or Hansoft; Agile methodology; the definition of “realtime graphics” and how it works; the top-dog game engine tools, including Unity and Unreal Engine 4; and digital asset creation pipelines for game engines, among others.

The transition is largely based on one’s game development background but, of course, there is always a learning curve when entering a new industry.

Focus on the Balance
We understand that the core of a “technology company,” as we bill ourselves, is still the foundational technology. However, depending on the type of technology, companies need staffers that have a high-level mastery of the technology in order to demonstrate its full potential to others. It just happens that with AR technology there is an inherently visual aspect, which translates to a need for superior artistry in unison with the precise technology.

For AR technology to show well and look appealing, high-quality artistry is very much needed. This can be a difficult balance to maintain if focus or purpose is lost. For ARwall, we aim to hire talent that excels at art or engineering, or both.

ARwall expanded its offerings to stake its claim as a technology company, but built on each founder’s roots as artists, engineers and producers. Tech and art aren’t mutually exclusive; rather, with focus, education and time to search for the right talent, technology companies can excel at invention and keep their creative edge, all at once.


Leon Hui brings to the team 20-plus years of technical experience as a software engineer focused on realtime 3D graphics, VR/AR and systems architecture. He has held lead/senior technical roles on 15 shipped AAA titles as a veteran of top developers including EA, Microsoft Studios and Konami Digital Entertainment. He was previously a technical director at Skydance Interactive. ARwall is based in Burbank.

 


Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lay’s brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.

Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko


Nvidia intros Turing-powered Titan RTX

Nvidia has introduced its new Nvidia Titan RTX, a desktop GPU that provides the kind of massive performance needed for creative applications, AI research and data science. Driven by the new Nvidia Turing architecture, Titan RTX — dubbed T-Rex — delivers 130 teraflops of deep learning performance and 11 GigaRays of raytracing performance.

Turing features new RT Cores to accelerate raytracing, plus new multi-precision Tensor Cores for AI training and inferencing. These two engines — along with more powerful compute and enhanced rasterization — will help speed the work of developers, designers and artists across multiple industries.

Designed for computationally demanding applications, Titan RTX combines AI, realtime raytraced graphics, next-gen virtual reality and high-performance computing. It offers the following features and capabilities:
• 576 multi-precision Turing Tensor Cores, providing up to 130 Teraflops of deep learning performance
• 72 Turing RT Cores, delivering up to 11 GigaRays per second of realtime raytracing performance
• 24GB of high-speed GDDR6 memory with 672GB/s of bandwidth — two times the memory of previous-generation Titan GPUs — to fit larger models and datasets
• 100GB/s Nvidia NVLink, which can pair two Titan RTX GPUs to scale memory and compute
• Performance and memory bandwidth sufficient for realtime 8K video editing
• VirtualLink port, which provides the performance and connectivity required by next-gen VR headsets

Titan RTX provides multi-precision Turing Tensor Cores for breakthrough performance from FP32, FP16, INT8 and INT4, allowing faster training and inference of neural networks. It offers twice the memory capacity of previous-generation Titan GPUs, along with NVLink to allow researchers to experiment with larger neural networks and datasets.
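
The 672GB/s figure quoted above is consistent with the GDDR6 configuration Nvidia has published elsewhere for Turing-class boards — a 384-bit bus at 14Gbps per pin — though neither number appears in this announcement, so treat both as assumptions in this quick back-of-the-envelope check:

```python
# Sanity check of the quoted memory bandwidth.
# Assumed inputs (Turing-class GDDR6 specs, not stated in this article):
bus_width_bits = 384   # memory bus width
pin_rate_gbps = 14     # effective data rate per pin, Gbit/s

bandwidth_gb_per_s = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(bandwidth_gb_per_s)  # 672.0, matching the spec sheet

# NVLink pairing: two 24GB Titan RTX cards can pool memory.
print(2 * 24)  # 48GB addressable across an NVLink pair
```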

Titan RTX accelerates data analytics with RAPIDS. RAPIDS open-source libraries integrate seamlessly with the world’s most popular data science workflows to speed up machine learning.

Titan RTX will be available later in December in the US and Europe for $2,499.


Storage for Interactive, VR

By Karen Moltenbrey

Every vendor in the visual effects and post production industries relies on data storage. However, studios working on new media or hybrid projects, which generate far more content in general, not only need a reliable solution, they need one that can handle terabytes upon terabytes of data.

Here, two companies in the VR space discuss the storage solutions that serve their business requirements.

Lap Van Luu

Magnopus
Located in downtown Los Angeles, Magnopus creates amazing VR and AR experiences. While a fairly new company — it was founded in 2013 — its staff has an extensive history in the VFX and games industries, with Academy Award winners among its founders. So, there is no doubt that the group knows what it takes to create compelling content.

It also knows the necessity of a reliable storage solution, one that can handle the large amounts of data generated by an AR or VR project. At Magnopus, the crew uses a custom-built solution leveraging Supermicro architecture. As Magnopus CTO Lap Van Luu points out, they are using an SSG-6048R-E1CR60N 4U chassis that the studio populates with two storage tiers: the cache read-and-write layer is NVMe, while the second tier is SAS. Both are in a RAID-10 configuration, with 1TB of NVMe and 500TB of raw SAS storage.

“This setup allows us to scale to a larger workforce and meet the demands of our artists,” says Luu. “We leverage faster NVMe Flash and larger SAS for the bulk of our storage requirements.”

Over the past 20 years, before Magnopus, Luu worked at companies with all kinds of storage systems, including those from NetApp, BlueArc and Isilon, as well as custom builds using ZFS, FreeNAS, Microsoft Windows Storage Spaces and Hadoop configurations. However, since Magnopus opened, it has only switched to a bigger and faster version of its original setup, which started as a custom Supermicro system with 400GB of SSD and 250TB of SAS in the same configuration.

“We went with this configuration because as we were moving more into realtime production than traditional VFX, the need for larger renderfarms and storage IO demands dropped dramatically,” says Luu. “We also knew that we wanted to leverage smart caching due to the cost of Flash storage dropping to a reasonable price point. It was the ideal situation to be in. We were starting a new company with a less-demanding infrastructure with newer technology that was cheaper, faster and better overall.”

Nevertheless, choosing a specific solution was not a decision that was made lightly. “When you move away from your premier storage solution providers, there is always a concern for scalability and reliability. When working in realtime production, the concern to re-render elements wasn’t a factor of hours or days, but rather seconds and minutes. It was important for us to have redundant backups. But for the cost saving on storage, we could easily get mirrored servers and still be saving a significant amount of money.”

Luu knew the studio wanted to leverage Flash caching, so the big question was, how much Flash was necessary to meet the demands of its artists and processing farm? The processing farm was mainly used to generate textures and environments that were imported into a realtime engine, such as Unity or Unreal Engine. To this end, Magnopus had to find out who offered a caching solution that was as hands-off as possible and invisible to all the users. “LSI, now Avago, had a solution with the RAID controller called CacheCade, which dealt with all the caching,” he says. “All you had to do was set up some preferences and the RAID controller would take care of the rest.”

However, CacheCade had a size limit on the caching layer of 512GB, so the studio had to do some testing to see if it would ever exceed that, and in a rare situation it did, says Luu. “But it was never a worry because behind the Flash cache was a 60-drive SAS RAID-10 configuration.”
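
Luu doesn’t describe the test itself, but the sizing logic is straightforward: add up the hot working sets that hit the server on a typical day and compare the total against CacheCade’s 512GB ceiling. A hypothetical Python sketch, with invented workload numbers rather than Magnopus’ real ones:

```python
# Hypothetical cache-sizing check against CacheCade's 512GB limit.
CACHE_LIMIT_GB = 512

# Invented daily hot working sets (GB) -- illustrative only.
workloads = {
    "texture_bakes": 180,
    "environment_assets": 140,
    "farm_scratch": 120,
    "artist_sessions": 90,
}

hot_set_gb = sum(workloads.values())
print(f"hot set: {hot_set_gb}GB vs cache: {CACHE_LIMIT_GB}GB")
if hot_set_gb > CACHE_LIMIT_GB:
    # Overflow reads fall through to the 60-drive SAS RAID-10 tier,
    # which is why exceeding the cache was never a worry.
    print(f"overflow served from SAS: {hot_set_gb - CACHE_LIMIT_GB}GB")
```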

As Luu explains, when working with VFX, IOPS (IO operations per second) is always the biggest issue due to the heavy demand from certain types of applications. “VFX work and compositing can typically drive any storage solution to a grinding halt when you have a renderfarm taxing the production storage from your artists,” he explains. However, realtime development IO demands are significantly less, since the assets are created in a DCC application but imported into a game engine, where processing occurs in realtime and locally. So storing all those traditional VFX elements is not necessary, and the overall storage capacity dropped to one-tenth of what was required for VFX, Luu points out.

And since Magnopus has a Flash-based cache layer that is large enough to meet the company’s IO demands, it does not have to leverage localization to reduce the IO demand on the main production server; as a result, the user gets immediate server response. It also means that all data within the pipeline resides on the company’s main production server — where the company starts and ends any project.

“Magnopus is a content-focused technology company,” Luu says. “All our assets and projects that we create are digital. Storage is extremely important because it is the lifeblood of everything we create. The storage server can be the difference between a user focusing on creative content creation, with the infrastructure invisible, and the frustration of constantly being blocked and delayed by hardware. Enabling everyone to work as efficiently as possible allows for the best results and products for our clients and customers.”

Light Sail VR
Light Sail VR is a Hollywood-based VR boutique that is a pioneer in cinematic virtual reality storytelling. Since its founding three years ago, the studio has been producing a range of interactive, 360- and 180-degree VR content, including original work and branded pieces for Google, ABC, GoPro and Paramount.

Matt Celia on set for Speak of the Devil.

Because Light Sail VR is a unique but small company, employees often have to wear a number of hats. For instance, co-founder Robert Watts is executive producer and handles many of the logistical issues. His partner, Matthew Celia, is creative director and handles more of the technical aspects of the business. So when it comes to managing the company’s storage needs, Celia is the guy. And, having a reliable system that keeps things running smoothly is paramount, as he is also juggling shoots and post-production work. No one can afford delays in production and post, but for a small company, it can be especially disastrous.

Light Sail VR does not simply dabble in VR; it is what the company does exclusively. Most of the projects thus far have been live action, though the group started its first game engine work this year. When the studio produced a piece with GoPro in the first year of its founding, it was on a sneakernet of G-Drives from G-Technology, “and I was going crazy!” says Celia. “VR is fantastic, but it’s very data-intensive. You can max out a computer’s processing very easily, and the render times are extraordinarily long. There’s a lot of shots to get through because every shot becomes a visual effects shot with either stitching, rotoscoping or compositing needed.”

He continues: “I told Robert [Watts] we needed to get a shared storage server so if I max out one computer while I’m working, I can just go to another computer and keep working, rather than wait eight to 10 hours for a render to finish.”

The Speak of the Devil shoot.

Celia had been dialed into the post world for some time. “Before diving into the world of VR, I was a Final Cut guy, and the LumaForge guys and [founder] Sam Mestman were people I always respected in the industry,” he says. So, Celia reached out to them with a cold call and explained that Light Sail VR was doing virtual reality, an uncharted, pioneering new thing, and was going to need a lot of storage — and needed it fast. “I told them, ‘We want to be hooked up to many computers, both Macs and PCs, and don’t want to deal with file structures and those types of things.’”

Celia points out that they are an independent and small boutique, so finding something that was cost effective and reliable was important. LumaForge responded with a solution called Jellyfish Mobile, geared for small teams and on-set work or portable office environments. “I think we got the 30TB NAS server that has four 10Gb Ethernet connections.” That enabled Light Sail VR to hook up the system to all its computers, “and it worked,” he adds. “I could work on one shot, hit render, and go to another computer and continue working on the next shot and hit render, then kind of ping-pong back and forth. It made our lives a lot easier.”

Light Sail VR has since graduated to the larger-capacity Jellyfish Rack system, which is a 160TB solution (expandable up to 1 petabyte).

The storage is located in Light Sail VR’s main office and is hooked up to its computers. The filmmakers shoot in the field and, if on location, download the data to drives, which they transport back to the office and load onto the server. Then they transcode all the media to Avid DNx. (VR is captured in H.264 format, which is not edit-friendly due to the high-res frame size.)
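
The article doesn’t say which DNx flavor or transcoder Light Sail VR uses, so purely as an illustration of that H.264-to-mezzanine step, here is a sketch that drives ffmpeg’s DNxHR encoder from Python; the file names and the DNxHR HQ profile choice are assumptions, not the studio’s actual settings.

```python
import subprocess

# Illustrative H.264 -> DNxHR transcode for editorial.
cmd = [
    "ffmpeg",
    "-i", "stitched_360_master.mp4",  # H.264 master from the camera/stitch
    "-c:v", "dnxhd",                  # ffmpeg's DNxHD/DNxHR encoder
    "-profile:v", "dnxhr_hq",         # resolution-independent HQ profile
    "-pix_fmt", "yuv422p",
    "-c:a", "pcm_s16le",              # uncompressed audio for the edit
    "stitched_360_master_dnxhr.mov",
]
subprocess.run(cmd, check=True)
```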

Currently, Celia is in New York, having just wrapped the 20th episode of original content for Refinery29, a media company focused on young women that produces editorial and video programming, live events and social, shareable content delivered across major social media platforms, and covers a variety of categories from style to politics and more. Eight of the episodes are currently in various stages of the post pipeline, due to come out later this year. “And having a solid storage server has been a godsend,” Celia says.

The studio backs up locally onto Seagate drives for archival purposes and sometimes employs G-Technology drives for on-set work. “We just got this new G-Tech SSD that’s 2TB. It’s been great for use on set because having an SSD and downloading all the cards while on set makes your wrap process so much faster,” Celia points out.

Lately, Light Sail VR has been shooting a lot of VR-180, which requires two 64GB cards per camera — one for the right eye and one for the left. But when the team shoots with the Yi Halo, the next-gen 3D 360-degree Google Jump camera, it uses 17 64GB cards. “That’s a lot of data,” says Celia. “You can have a really bad day if you have really bad drives.”

The studio’s previous solution operated via Thunderbolt 1 in a RAID-5. It only worked on a single machine and was not cross-platform. As the studio made the transition over to PC from Mac to take advantage of better hardware capable of supporting VR playback, that solution was just not practical. They also needed a solution that was plug and play, so they could just pop it into a 10Gb Ethernet connection — they did not want fiber, “which can get expensive.”

The Light Sail team.

“I just wanted something very simple that was cross-platform and could handle what we were doing, which is, by the way, 6K or 8K stereo at 60 frames per second – these workloads are larger than most feature films,” Celia says. “So, we needed a lot of storage. We needed it fast. We needed it to be shared.”

However, while Celia searched for a system, one thing became clear to him: The solutions were technical. “It seemed like I would have to be my own IT department.” And, that was just one more hat he did not want to have to wear. “At LumaForge, they are independent filmmakers. They understood what I was trying to do immediately, and were willing to go on that journey with us.”

Says Celia, “I always call hard drives or storage the underwear of the post production world because it’s the thing you hate spending a lot of money on, but you really need it to perform and work.”

Main Image: Magnopus


Karen Moltenbrey is a long-time VFX and post writer.


Behind the Title: Lobo EP for Europe, Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing and hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now, as creative EP for Europe, I am also responsible for streamlining information from pre-production to post production between all production parties for a more efficient and prosperous outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to handle all the production peeps, agencies and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, the Adobe suite (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.


Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.
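
Panasas hasn’t published the placement policy’s internals, but the routing rule it describes can be sketched conceptually: metadata to NVMe, small files to SSD, large files to HDD. A toy Python illustration, with the small-file cutoff invented for the example:

```python
# Conceptual sketch of PanFS 8-style tier placement. The 64KB cutoff
# is invented for illustration; PanFS's real policy is internal.
SMALL_FILE_LIMIT = 64 * 1024  # bytes, assumed threshold

def place(kind: str, size_bytes: int) -> str:
    if kind == "metadata":
        return "NVMe"  # low-latency metadata operations
    if size_bytes <= SMALL_FILE_LIMIT:
        return "SSD"   # small files are IOPS-bound
    return "HDD"       # large files are streaming-bandwidth-bound

print(place("metadata", 512))       # NVMe
print(place("file", 4 * 1024))      # SSD
print(place("file", 2 * 1024**3))   # HDD
```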

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.


Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer developers greater efficiency, performance and stability for those working on any platform.

Unreal Engine 4.21 adds even more mobile optimizations to both Android and iOS, up to 60% speed increases when cooking content and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit and enhanced material translation for Autodesk 3ds Max, enabling more efficient design review and iteration.

Here is more about the key features:
Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.
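
Epic’s plugin documentation (not this announcement) describes wiring a custom graph in through the net driver settings; a minimal sketch of that DefaultEngine.ini hook follows, with the game module and class name as placeholders rather than real project values:

```ini
; DefaultEngine.ini -- assumes a hypothetical UMyReplicationGraph
; subclass exists in a (hypothetical) MyGame module.
[/Script/OnlineSubsystemUtils.IpNetDriver]
ReplicationDriverClassName="/Script/MyGame.MyReplicationGraph"
```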

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurators or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via a web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: In addition to the optimized cooking process, low-level code avoids performing unnecessary file system operations, and cooker timers have been streamlined.

Gauntlet Automation Framework (Early Access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle-tested for months in the process of optimizing Fortnite, and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.

Satore Tech tackles post for Philharmonia Orchestra’s latest VR film

The Philharmonia Orchestra in London debuted its latest VR experience at Royal Festival Hall alongside the opening two concerts of the Philharmonia’s new season. Satore Tech completed VR stitching for the Mahler 3: Live From London film. This is the first project completed by Satore Tech since it was launched in June of this year.

The VR experience placed users at the heart of the Orchestra during the final 10 minutes of Mahler’s Third Symphony, which was filmed live in October 2017. The stitching project was completed by creative technologist/SFX/VR expert Sergio Ochoa, who leads Satore Tech. The company used SGO Mistika technology to post the project, software Ochoa helped develop during his time at SGO, where he was creative technologist and CEO of the French division.

Luke Ritchie, head of innovation and partnerships at the Philharmonia Orchestra, says, “We’ve been working with VR since 2015; it’s a fantastic technology for connecting new audiences with the Orchestra in an entirely new way. VR allows you to sit at the heart of the Orchestra, and our VR experiences can transform audiences’ preconceptions of orchestral performance — whether they’re new to classical music or are die-hard fans.”

It was a technically demanding project for Satore Tech to stitch together, as the concert was filmed live, in 360 degrees, with no retakes, using Google’s latest Jump Odyssey VR camera. This meant that Ochoa was working with four to five different depth layers at any one time. The amount of fast movement also meant the resolution of the footage needed to be up-scaled from 4K to 8K to ensure it was suitable for the VR platform.
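
SGO Mistika handled the actual stitch and up-res; purely to illustrate what a 4K-to-8K up-scale of equirectangular footage involves, here is a sketch of the equivalent operation in ffmpeg driven from Python, with the file names, target size and filter choice all assumed rather than taken from Satore Tech’s pipeline:

```python
import subprocess

# Illustrative 4K -> 8K up-scale of equirectangular VR footage.
cmd = [
    "ffmpeg",
    "-i", "mahler3_stitch_4k.mov",
    "-vf", "scale=7680:3840:flags=lanczos",  # 2:1 equirect frame at 8K
    "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ mezzanine
    "mahler3_stitch_8k.mov",
]
subprocess.run(cmd, check=True)
```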

“The guiding principle for Satore Tech is that we aspire to constantly push the boundaries, both in terms of what we produce and the technologies we develop to achieve that vision,” explains Ochoa. “It was challenging given the issues that arise with any live recording, but the ambition and complexity are what make it such a suitable initial project for us.”

Satore Tech’s next project is currently in development in Mexico, using experimental volumetric capture techniques with some of the world’s most famous dancers. It is slated for release early next year.