
The Other Art Fair: Brands as benefactors for the arts

By Tom Westerlin

Last weekend, courtesy of Dell, I had the opportunity to attend The Other Art Fair, presented by Saatchi Art here in New York City. My role at Nice Shoes is creative director for VR/AR/360, and I was interested to see how the worlds of interactive and traditional art would intersect. I was also curious about the role brands like Dell would play, as I feel that with the transition from traditional advertising to branded content, brands have emerged as benefactors for the arts.

It was great to see so many artists represented, all of whom had created such high-quality work, and unlike other art shows I’ve attended, everything felt affordable and accessible. Art is often priced out of reach for the average person, and here was an opportunity to get to know artists, learn about their process and possibly walk away with a beautiful piece to bring into the home.

The curators and sponsors created a very welcoming, jovial atmosphere. Kids had an area where they could draw on the walls, and adults had access to a bar area and lounge where they could converse (I suppose adults could have drawn there as well, but some needed a drink or two to loosen up). The human body was also a canvas as there was an artist offering tattoos. Overall, the organizers created an infectious, creative vibe.

A variety of artists were represented. Traditional paintings, photography, collage, sculpture, neon and VR were all on display in the same space. Seeing VR and digital art amongst traditional art was very encouraging. I’ve encountered bits of this at other shows, but in those instances everything felt cordoned off. At The Other Art Fair, every medium felt as if it were being displayed on equal ground, and, in some cases, the lines between physical and digital art were blurred.

Samsung had framed displays that looked like physical paintings. Their high-quality monitors sat flat on the wall, framed and indistinguishable from physical art.

Dell’s 8K monitor looked amazing. The resolution was so high and the pixel density so tight that it seemed perfect for displaying a high-resolution photo at 100%. I’d be curious to see how galleries take advantage of monitors like these. Traditionally, prints of photographs would be shown, but monitors like these offer new potential for showcasing vivid texture, detail and composition.

Although I didn’t walk out with a painting that night, I did come away with the desire to keep my eye on a number of artists — in particular, Glen Gauthier, Paul Richard, Laura Noel and Beth Radford. They all stood out to me.

As the lines between art and advertising blur, there are always new opportunities for brands and artists to come together to create stunning content, and I expect many brands, agencies, and creative studios to engage these artists in the near future.

Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio brings together expertise in entertainment and technology, combining feature-film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in developing a language native to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality before they have seen its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using that to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on Soundcloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.


Sonic Union adds Bryant Park studio targeting immersive, broadcast work

New York audio house Sonic Union has launched a new studio and creative lab. The uptown location, which overlooks Bryant Park, will focus on emerging spatial and interactive audio work, as well as continued work with broadcast clients. The expansion is led by principal mix engineer/sound designer Joe O’Connell, who co-founded and helmed the sound company Blast before teaming up with original Sonic Union founders/mix engineers Michael Marinelli and Steve Rosen. Their staff will work out of both the Union Square and Bryant Park locations.

In other staffing news, mix engineer Owen Shearer advances to also serve as technical director, with an emphasis on VR and immersive audio. Former Blast EP Carolyn Mandlavitz has joined as Sonic Union Bryant Park studio director. Executive creative producer Halle Petro, formerly senior producer at Nylon Studios, will support both locations.

The new studio, which features three Dolby Atmos rooms, was created and developed by Ilan Ohayon of IOAD (Architect of Record), with architectural design by Raya Ani of RAW-NYC. Ani also designed Sonic’s Union Square studio.

“We’re installing over 30 of the new ‘active’ JBL System 7 speakers,” reports O’Connell. “Our order includes some of the first of these amazing self-powered speakers. JBL flew a technician from Indianapolis to personally inspect each one on site to ensure it will perform as intended for our launch. Additionally, we created our own proprietary mounting hardware for the installation as JBL is still in development with their own. We’ll also be running the latest release of Pro Tools (12.8) featuring tools for Dolby Atmos and other immersive applications. These types of installations really are not easy as retrofits. We have been able to do something really unique, flexible and highly functional by building from scratch.”

Working as one team across two locations, this emerging creative audio production arm will also include a roster of talent outside of the core staff engineering roles. The team will now be integrated to handle non-traditional immersive VR, AR and experiential audio planning and coding, in addition to casting, production music supervision, extended sound design and production assignments.

Main Image Caption: (L-R) Halle Petro, Steve Rosen, Owen Shearer, Joe O’Connell, Adam Barone, Carolyn Mandlavitz, Brian Goodheart, Michael Marinelli and Eugene Green.

 


postPerspective Impact Award winners from SIGGRAPH 2017

Last April, postPerspective announced the debut of our Impact Awards, celebrating innovative products and technologies for the post production and production industries that will influence the way people work. We are now happy to present our second set of Impact Awards, celebrating the outstanding offerings presented at SIGGRAPH 2017.

Now that the show is over, and our panel of VFX/VR/post pro judges has had time to decompress, dig out and think about what impressed them, we are happy to announce our honorees.

And the winners of the postPerspective Impact Award from SIGGRAPH 2017 are:

  • Faceware Technologies for Faceware Live 2.5
  • Maxon for Cinema 4D R19
  • Nvidia for OptiX 5.0  

“All three of these technologies are very worthy recipients of our first postPerspective Impact Awards from SIGGRAPH,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that define the leading edge of technology while producing tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category.

“While SIGGRAPH’s focus is on VFX, animation, VR/AR and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production. We’ve tapped real-world users in these areas to vote for our Impact Awards, and they have determined what tools might be most impactful to their day-to-day work. That’s what makes our awards so special.”

There were many new technologies and products at SIGGRAPH this year, and while only three won an Impact Award, our judges felt there were other updates people should know about as well.

Blackmagic Design’s Fusion 9 was certainly turning heads, and Nvidia’s VRWorks 360 Video was called out as well. Chaos Group also caught our judges’ attention with V-Ray for Unreal Engine 4.

Stay tuned for future Impact Award winners in the coming months — voted on by users for users — from IBC.


Red’s Hydrogen One: new 3D-enabled smartphone

In its always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS system that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360-video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.


Lenovo’s ‘Transform’ event: IT subscriptions and AR

By Claudio Santos

Last week I had the opportunity to attend Lenovo’s “Transform” event, in which the company unveiled its newest releases as well as its plans for the near future. I must say they had quite the lineup ready.

The whole event was divided into two tracks: “Datacenters” and “PC and Smart Devices.” Each focused on its own products and markets, but a single idea permeated all of the day’s announcements: what Lenovo calls the “Fourth Revolution,” the company’s term for the next step in the integration between devices and the cloud. Its vision is that 5G mobile Internet will soon be available, allowing devices to seamlessly connect to the cloud on the go and, more importantly, to always stay connected.

While there were many interesting announcements throughout the day, I will focus on the two that seem most relevant to post facilities.

The first is what Lenovo is calling “PC as a service.” They want to sell the bulk of companies’ IT hardware and support needs as subscription-based deals, and that would be awesome! Why? Well, it’s simply a fact of life now that post production happens almost exclusively with the aid of computer software (sorry if you’re still one of the few cutting film by hand; this article won’t be that interesting for you).

Having to choose, buy and maintain computers for our daily work takes a lot of research and, most notably, time. Between software updates, managing different licenses and subscriptions, and hunting down weird quirks of the system, a lot of time is taken away from more important tasks such as editing or client relationships. When you throw a server and a local network into the mix, it becomes a hefty job that takes a lot of maintenance.

That’s why bigger facilities employ IT specialists to deal with all that. But many post facilities aren’t big enough to employ a full-time IT person, nor are their needs complex enough to warrant the investment.

Lenovo sees this as an opportunity to simplify the role of the IT department by selling subscriptions that include the hardware, the software and all the necessary support (including a help desk) to keep the systems running without having to invest in a large IT department. More importantly, the subscription would be flexible. During periods in which you need more stations/support, you can increase the scope of the subscription, then shrink it again when demand lowers, freeing you from absorbing the cost of machines and software that would otherwise just sit around unused.

I see one big problem in this vision: Lenovo plans to start the service with a minimum of 1,000 seats per deal. That is far, far more staff than most post facilities have, and at that scale it would probably be worth hiring a specialist who can also help you automate your workflow and develop customized tools for your projects. It is nonetheless an interesting approach, and I hope to see it trickle down to smaller clients as it solidifies into a feasible model.

AR
The other announcement that should interest post facilities is Lenovo’s interest in the AR market. As many of you might know, augmented reality is projected to be an even bigger market than its more popular cousin, virtual reality, largely due to its more professional application possibilities.

Lenovo has been investing in AR and has partnered with Metavision to experiment with the technology and work toward real work-environment offerings. Besides the hand gestures that are always emphasized in AR promo videos, one very simple use case seems to be in Lenovo’s sights, and it’s one I hope to see become marketable very soon: workspace expansion. Instead of needing three or four different monitors to accommodate our ever-growing number of windows and displays while working, with AR we will be able to place windows anywhere around us, essentially giving us a giant spherical display. A very simple problem with a very simple solution, but one that I believe would increase the productivity of editors by a considerable amount.

We should definitely keep an eye on Lenovo as it embarks on this new quest for high-efficiency business solutions, because that’s exactly what the post production industry finds itself in need of right now.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.


Adobe acquires Mettle’s SkyBox tools for 360/VR editing, VFX

Adobe has acquired all SkyBox technology from Mettle, a developer of 360-degree and virtual reality software. As more media and entertainment companies embrace 360/VR, there is a need for seamless, end-to-end workflows for this new and immersive medium.

The SkyBox toolset is designed exclusively for post production in Adobe Premiere Pro CC and Adobe After Effects CC and complements Adobe Creative Cloud’s existing 360/VR cinematic production technology. Adobe will integrate SkyBox plugin functionality natively into future releases of Premiere Pro and After Effects.

To further strengthen Adobe’s leadership in 360-degree and virtual reality, Mettle co-founder Chris Bobotis will join Adobe, bringing more than 25 years of production experience to his new role.

“We believe making virtual reality content should be as easy as possible for creators. The acquisition of Mettle SkyBox technology allows us to deliver a more highly integrated VR editing and effects experience to the film and video community,” says Steven Warner, VP of digital video and audio at Adobe. “Editing in 360/VR requires specialized technology, and as such, this is a critical area of investment for Adobe, and we’re thrilled Chris Bobotis has joined us to help lead the charge forward.”

“Our relationship started with Adobe in 2010 when we created FreeForm for After Effects, and has been evolving ever since. This is the next big step in our partnership,” says Bobotis, now director, professional video at Adobe. “I’ve always believed in developing software for artists, by artists, and I’m looking forward to bringing new technology and integration that will empower creators with the digital tools they need to bring their creative vision to life.”

Introduced in April 2015, SkyBox was the first plugin to leverage Mettle’s proprietary 3DNAE technology, and its success quickly led to additional development of 360/VR plugins for Premiere Pro and After Effects.

Today, Mettle’s plugins have been adopted by companies such as The New York Times, CNN, HBO, Google, YouTube, Discovery VR, DreamWorks TV, National Geographic, Washington Post, Apple and Facebook, as well as independent filmmakers and YouTubers.


Technicolor Experience Center launches with HP Mars Home Planet

By Dayna McCallum

Technicolor’s Tim Sarnoff and Marcie Jastrow oversaw the official opening of the Technicolor Experience Center (TEC), with the help of HP’s Sean Young and Rick Champagne, on June 15. The kickoff event also featured the announcement that TEC is teaming up with HP to develop HP Mars Home Planet, an experimental VR experience to reinvent life on Mars for one million humans.

The purpose-built TEC space is located in Blackwelder creative park, a business district in Culver City designed specifically for the needs of creative and media companies. The center, dedicated to bringing artists and scientists together to explore immersive media, covers almost 27,000 square feet, with 3,000 square feet dedicated to motion capture. The TEC serves as a hub connecting Technicolor’s creative houses and research labs across the globe with technology partners such as HP; an R&D team from France even made an appearance during the event via a remote demo.

Sarnoff, Technicolor deputy CEO and president of production services, said, “The TEC is about realizing the aspirations of all the players who are part of the nascent immersive ecosystem we work in, from content creation, to content distribution and content consumption. Designing and delivering immersive experiences will require a massive convergence of artistic, technological and economic talent. They will have to come together productively. That is why the TEC has been formed. It is designed to be a practical place where we take theoretical constructs and move systematically to tactical implementation through a creative and dynamic process of experimentation.”

The HP Mars Home Planet project is a global, immersive media collaboration uniting engineers, architects, designers, artists and students to design an urban area on Mars in a VR environment. The project will be built on terrain from Fusion’s “Mars 2030” game, which draws on NASA research, imagery and expertise. In addition to HP, Fusion and TEC, partners include Nvidia, Unreal Engine, Autodesk and HTC Vive. Additional details will be released at SIGGRAPH 2017.

Young, worldwide segment manager for product development and AEC for HP Inc., said of the Mars project, “To ensure fidelity and professional-grade quality and a fantastic end-user experience, the TEC is going to oversee the virtual reality development process of the work that is going to be done by collaborators from all over the world. It is an incredible opportunity for anybody from anywhere in the world that is interested in VR to work with Technicolor.”


VR Audio — Differences between A Format and B Format

By Claudio Santos

A Format and B Format. What is the difference between them after all? Since things can get pretty confusing, especially with such non-descriptive nomenclature, we thought we’d offer a quick reminder of what each is in the spatial audio world.

A Format and B Format are two analog audio standards that are part of the ambisonics workflow.

A Format is the raw recording of the four individual cardioid capsules in an ambisonics microphone. Since each microphone model arranges its capsules at slightly different positions and distances, A Format is somewhat specific to the microphone that recorded it.

B Format is the standardized format derived from the A Format. The first channel carries the amplitude information of the signal, while the other channels determine the directionality through phase relationships between each other. Once you get your sound into B Format you can use a variety of ambisonic tools to mix and alter it.
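As a rough illustration of that derivation, here is a minimal sketch in Python. It assumes an idealized tetrahedral capsule layout (the classic FLU/FRD/BLD/BRU arrangement); real microphones also apply per-capsule calibration filters that a bare sum-and-difference matrix like this omits.

```python
import numpy as np

# Idealized first-order A-to-B conversion for a tetrahedral mic.
# Capsule names: FLU = front-left-up, FRD = front-right-down,
# BLD = back-left-down, BRU = back-right-up.
def a_to_b(flu, frd, bld, bru):
    w = flu + frd + bld + bru   # omnidirectional amplitude channel
    x = flu + frd - bld - bru   # front-back
    y = flu - frd + bld - bru   # left-right
    z = flu - frd - bld + bru   # up-down
    return np.stack([w, x, y, z])
```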

It’s worth remembering that the B Format also has a few variations on the standard itself; the most important to understand are Channel Order and Normalization standards.

Ambisonics in B Format consists of four channels of audio — one channel carries the amplitude signal while the others represent the directionality in a sphere through phase relationships. Since this can only be achieved by the combination between the channels, it is important that:

– The channels follow a known order
– The relative level between the amplitude channel and the others is known, so they can be properly combined

Each of these characteristics has a few variations, with the most notable being:

– Channel Order: the Furse-Malham standard and the ACN standard
– Normalization (level): the MaxN standard and the SN3D standard

The combination of these variations results in two different B Format standards:
– Furse-Malham – older standard that is still supported by a variety of plug-ins and other ambisonic processing tools
– AmbiX – modern standard that has been widely adopted by distribution platforms such as YouTube

Regardless of the format you deliver your ambisonics file in, it is vital to keep track of the standards you are using throughout your chain and make the necessary conversions when appropriate. Otherwise, rotations and mirrors will end up in the wrong direction and the whole sound sphere will break down into a mess.
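For a concrete sense of what such a conversion involves, here is a minimal first-order sketch in Python. Treat it as an illustration rather than a production tool: at first order the directional channels share the same scale in MaxN and SN3D, so the conversion is just a channel reorder plus a gain change on W.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def fuma_to_ambix(fuma):
    """Convert a first-order B Format stream from FuMa
    (channels W, X, Y, Z with MaxN weighting) to AmbiX
    (ACN order W, Y, Z, X with SN3D weighting)."""
    w, x, y, z = fuma                       # FuMa channel order
    return np.stack([w * SQRT2, y, z, x])   # ACN order, SN3D levels
```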


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

VR audio terms: Gaze Activation v. Focus

By Claudio Santos

Virtual reality brings a lot of new terminology to the post process, and we’re all having a hard time agreeing on the meaning of everything. It’s tricky because clients and technicians sometimes have different understandings of the same term, which is a guaranteed recipe for headaches in post.

Two terms that I’ve seen confused a few times in the spatial audio realm are Gaze Activation and Focus. They are similar enough to be put in the same category, but different enough that most of the time you have to choose completely different tools and distribution platforms depending on which technology you want to use.

Field of view

Focus
Focus is what the Facebook Spatial Workstation calls this technology, but it is a tricky one to name. As you may know, ambisonics represents a full sphere of audio around the listener. Players like YouTube and Facebook (which uses ambisonics inside its own proprietary .tbe format) can dynamically rotate this sphere so the relative positions of the audio elements stay accurate to the direction the audience is looking. But the sounds don’t change noticeably in level depending on where you are looking.
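That rotation is computationally cheap: for a first-order stream it is just a small matrix applied per sample. Here is a minimal Python sketch; the sign of the angle depends on the player’s coordinate convention, so take it as illustrative rather than definitive.

```python
import numpy as np

def rotate_yaw(w, x, y, z, theta):
    """Rotate a first-order ambisonic sound field around the
    vertical axis by theta radians (e.g., countering head yaw).
    W (amplitude) and Z (height) pass through untouched; only
    the horizontal components mix."""
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    return w, x_rot, y_rot, z
```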

If we take a step back and think about “surround sound” in the real world, it actually makes perfect sense. A hair clipper isn’t particularly louder when it’s in front of our eyes as opposed to when it’s trimming the back of our head. Nor can we ignore the annoying person loudly talking on their phone on the bus by simply looking away.

But for narrative construction, it can be very effective to emphasize what your audience is looking at. That opens up possibilities, such as presenting the viewer with simultaneous yet completely unrelated situations and letting them choose which one to pay attention to simply by looking in the direction of the chosen event. Keep in mind that in this case, all events are happening simultaneously and will carry on even if the viewer never looks at them.

This technology is not currently supported by YouTube, but it is possible in the Facebook Spatial Workstation with the use of high Focus Values.

Gaze Activation
When we talk about focus, the key thing to keep in mind is that all the events happen regardless of the viewer looking at them or not. If instead you want a certain sound to only happen when the viewer looks at a certain prop, regardless of the time, then you are looking for Gaze Activation.

This concept is much more akin to game audio than to film sound because of the interactivity element it presents. Essentially, you are using the direction of the gaze, and potentially the length of the gaze (if you want your viewer to look in a direction for x amount of seconds before something happens), as a trigger for sound/video playback.
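As a sketch of the underlying trigger logic (not any particular engine’s API; the angle threshold, dwell time and play_sound() hook are all hypothetical), the per-frame check might look like this:

```python
import numpy as np

DWELL_SECONDS = 2.0                      # required gaze hold (made up)
COS_THRESHOLD = np.cos(np.radians(15))   # gaze within 15 degrees of target

def play_sound():
    print("triggered!")  # stand-in for the engine's real playback call

def update_gaze_trigger(gaze_dir, target_dir, dt, state):
    """Call once per frame with unit vectors for the viewer's gaze
    and the direction of the prop. Fires once after the gaze has
    stayed on the prop for DWELL_SECONDS."""
    if np.dot(gaze_dir, target_dir) >= COS_THRESHOLD:
        state["dwell"] += dt
        if state["dwell"] >= DWELL_SECONDS and not state["fired"]:
            state["fired"] = True
            play_sound()
    else:
        state["dwell"] = 0.0   # looking away resets the timer
    return state
```

Gating the timer on the dot product between the two directions is what makes the event gaze-driven; a plain timer would fire regardless of where the viewer is looking.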

This is very useful if you want to make it impossible for your audience to miss something because they were looking in the “wrong” direction. Think of a jump scare in a horror experience. It’s not very scary if you’re looking in the opposite direction, is it?

This is currently only supported if you build your experience in a game engine or as an independent app with tools such as InstaVR.

Both concepts are very closely related and I expect many implementations will make use of both. We should all keep an eye on the VR content distribution platforms to see how these tools will be supported and make the best use of them in order to make 360 videos even more immersive.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.