Category Archives: VR

Nvidia intros Turing-powered Titan RTX

Nvidia has introduced its new Nvidia Titan RTX, a desktop GPU that provides the kind of massive performance needed for creative applications, AI research and data science. Driven by the new Nvidia Turing architecture, Titan RTX — dubbed T-Rex — delivers 130 teraflops of deep learning performance and 11 GigaRays per second of raytracing performance.

Turing features new RT Cores to accelerate raytracing, plus new multi-precision Tensor Cores for AI training and inferencing. These two engines — along with more powerful compute and enhanced rasterization — will help speed the work of developers, designers and artists across multiple industries.

Designed for computationally demanding applications, Titan RTX combines AI, realtime raytraced graphics, next-gen virtual reality and high-performance computing. It offers the following features and capabilities:
• 576 multi-precision Turing Tensor Cores, providing up to 130 Teraflops of deep learning performance
• 72 Turing RT Cores, delivering up to 11 GigaRays per second of realtime raytracing performance
• 24GB of high-speed GDDR6 memory with 672GB/s of bandwidth — two times the memory of previous-generation Titan GPUs — to fit larger models and datasets
• 100GB/s Nvidia NVLink, which can pair two Titan RTX GPUs to scale memory and compute
• Performance and memory bandwidth sufficient for realtime 8K video editing
• VirtualLink port, which provides the performance and connectivity required by next-gen VR headsets

Titan RTX provides multi-precision Turing Tensor Cores for breakthrough performance at FP32, FP16, INT8 and INT4 precision, allowing faster training and inference of neural networks. It offers twice the memory capacity of previous-generation Titan GPUs, along with NVLink, to allow researchers to experiment with larger neural networks and datasets.
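
As a rough illustration of what multi-precision training looks like in practice, here is a minimal sketch using PyTorch's automatic mixed precision. This is not code Nvidia ships with the card; the model, data and numbers are toy placeholders.

    import torch

    # Minimal mixed-precision training step: eligible ops run in FP16 on the
    # Tensor Cores while master weights stay in FP32.
    model = torch.nn.Linear(1024, 1024).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 underflow

    for step in range(100):
        x = torch.randn(64, 1024, device="cuda")
        target = torch.randn(64, 1024, device="cuda")
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():  # casts eligible ops to FP16
            loss = torch.nn.functional.mse_loss(model(x), target)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()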

Titan RTX accelerates data analytics with RAPIDS. RAPIDS open-source libraries integrate seamlessly with the world’s most popular data science workflows to speed up machine learning.
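
To make that concrete, here is a minimal sketch of the RAPIDS pattern: cuDF mirrors the familiar pandas API but executes on the GPU. The file name and column names below are invented for illustration.

    import cudf

    gdf = cudf.read_csv("transactions.csv")  # loads straight into GPU memory
    summary = gdf.groupby("customer_id").agg({"amount": "mean"})
    print(summary.head())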

Titan RTX will be available later in December in the US and Europe for $2,499.

Storage for Interactive, VR

By Karen Moltenbrey

Every vendor in the visual effects and post production industries relies on data storage. However, studios working on new media or hybrid projects, which generate far more content in general, not only need a reliable solution, they need one that can handle terabytes upon terabytes of data.

Here, two companies in the VR space discuss the storage solutions that serve their business requirements.

Lap Van Luu

Magnopus
Located in downtown Los Angeles, Magnopus creates amazing VR and AR experiences. While a fairly new company — it was founded in 2013 — its staff has an extensive history in the VFX and games industries, with Academy Award winners among its founders. So, there is no doubt that the group knows what it takes to create compelling content.

It also knows the necessity of a reliable storage solution, one that can handle the large amounts of data generated by an AR or VR project. At Magnopus, the crew uses a custom-built solution leveraging Supermicro architecture. As Magnopus CTO Lap Van Luu points out, they are using an SSG-6048R-E1CR60N 4U chassis that the studio populates with two storage tiers: the cache read-and-write layer is NVMe, while the second tier is SAS. Both are in a RAID-10 configuration, with 1TB of NVMe and 500TB of SAS raw storage.

“This setup allows us to scale to a larger workforce and meet the demands of our artists,” says Luu. “We leverage faster NVMe Flash and larger SAS for the bulk of our storage requirements.”

Before Magnopus, Luu spent 20 years working at companies with all kinds of storage systems, including those from NetApp, BlueArc and Isilon, as well as custom builds of ZFS, FreeNAS, Microsoft Windows Storage Spaces and Hadoop configurations. Since Magnopus opened, however, it has only switched to a bigger and faster version of its original setup, which began as a custom Supermicro system with 400GB of SSD and 250TB of SAS in the same configuration.

“We went with this configuration because as we were moving more into realtime production than traditional VFX, the need for larger renderfarms and storage IO demands dropped dramatically,” says Luu. “We also knew that we wanted to leverage smart caching due to the cost of Flash storage dropping to a reasonable price point. It was the ideal situation to be in. We were starting a new company with a less-demanding infrastructure with newer technology that was cheaper, faster and better overall.”

Nevertheless, choosing a specific solution was not a decision that was made lightly. “When you move away from your premier storage solution providers, there is always a concern about scalability and reliability. When working in realtime production, the time to re-render elements wasn’t a matter of hours or days, but rather seconds and minutes. It was important for us to have redundant backups. But with the cost savings on storage, we could easily get mirrored servers and still save a significant amount of money.”

Luu knew the studio wanted to leverage Flash caching, so the big question was, how much Flash was necessary to meet the demands of its artists and processing farm? The processing farm was mainly used to generate textures and environments that were imported into a realtime engine, such as Unity or Unreal Engine. To this end, Magnopus had to find out who offered a caching solution that was as hands-off as possible and invisible to all the users. “LSI, now Avago, had a solution with the RAID controller called CacheCade, which dealt with all the caching,” he says. “All you had to do was set up some preferences and the RAID controller would take care of the rest.”

However, CacheCade had a 512GB size limit on the caching layer, so the studio had to do some testing to see if it would ever exceed that, and in rare situations it did, says Luu. “But it was never a worry because behind the Flash cache was a 60-drive SAS RAID-10 configuration.”

As Luu explains, when working in VFX, IOPS (IO operations per second) is always the biggest issue due to the heavy demand from certain types of applications. “VFX work and compositing can typically drive any storage solution to a grinding halt when you have a renderfarm taxing the production storage from your artists,” he explains. However, realtime development IO demands are significantly less, since the assets are created in a DCC application but imported into a game engine, where processing occurs in realtime and locally. So storing all those traditional VFX elements is not necessary, and the overall storage capacity required dropped to one-tenth of what VFX demanded, Luu points out.
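
A back-of-envelope sketch of Luu's point, with invented numbers (the article gives none): a renderfarm generates sustained IO against shared storage all day, while a realtime team mostly pulls assets once and then works locally.

    # Hypothetical figures for illustration only.
    render_nodes = 200
    frames_per_node_per_min = 2
    exr_frame_mb = 50  # assumed multi-channel EXR frame size
    farm_mb_per_s = render_nodes * frames_per_node_per_min * exr_frame_mb / 60.0
    print(f"renderfarm sustained load: ~{farm_mb_per_s:.0f} MB/s, all day")

    artists = 30
    asset_import_mb = 500  # game-engine asset pulled once, then cached locally
    print(f"realtime team worst-case burst: ~{artists * asset_import_mb} MB, then near-idle")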

And since Magnopus has a Flash-based cache layer that is large enough to meet the company’s IO demands, it does not have to leverage localization to take IO demand off the main production server; as a result, the user gets immediate server response. It also means that all data within the pipeline resides on the company’s main production server — where the company starts and ends any project.

“Magnopus is a content-focused technology company,” Luu says. “All our assets and projects that we create are digital. Storage is extremely important because it is the lifeblood of everything we create. The storage server can be the difference between a user focusing on creative content creation, with the infrastructure invisible, and the frustration of constantly being blocked and delayed by hardware. Enabling everyone to work as efficiently as possible allows for the best results and products for our clients and customers.”

Light Sail VR
Light Sail VR is a Hollywood-based VR boutique that is a pioneer in cinematic virtual reality storytelling. Since its founding three years ago, the studio has been producing a range of interactive, 360- and 180-degree VR content, including original work and branded pieces for Google, ABC, GoPro and Paramount.

Matt Celia on set for Speak of the Devil.

Because Light Sail VR is a unique but small company, employees often have to wear a number of hats. For instance, co-founder Robert Watts is executive producer and handles many of the logistical issues. His partner, Matthew Celia, is creative director and handles more of the technical aspects of the business. So when it comes to managing the company’s storage needs, Celia is the guy. And, having a reliable system that keeps things running smoothly is paramount, as he is also juggling shoots and post-production work. No one can afford delays in production and post, but for a small company, they can be especially disastrous.

Light Sail VR does not simply dabble in VR; it is what the company does exclusively. Most of the projects thus far have been live action, though the group started its first game engine work this year. When the studio produced a piece with GoPro in its first year, it was on a sneakernet of G-Drives from G-Technology, “and I was going crazy!” says Celia. “VR is fantastic, but it’s very data-intensive. You can max out a computer’s processing very easily, and the render times are extraordinarily long. There are a lot of shots to get through because every shot becomes a visual effects shot, with stitching, rotoscoping or compositing needed.”

He continues: “I told Robert [Watts] we needed to get a shared storage server so if I max out one computer while I’m working, I can just go to another computer and keep working, rather than wait eight to 10 hours for a render to finish.”

The Speak of the Devil shoot.

Celia had been dialed into the post world for some time. “Before diving into the world of VR, I was a Final Cut guy, and the LumaForge guys and [founder] Sam Mestman were people I always respected in the industry,” he says. So, Celia reached out to them with a cold call and explained that Light Sail VR was doing virtual reality, an uncharted, pioneering new thing, and was going to need a lot of storage — and needed it fast. “I told them, ‘We want to be hooked up to many computers, both Macs and PCs, and don’t want to deal with file structures and those types of things.’”

Celia points out that they are an independent and small boutique, so finding something that was cost effective and reliable was important. LumaForge responded with a solution called Jellyfish Mobile, geared for small teams and on-set work or portable office environments. “I think we got the 30TB NAS server that has four 10Gb Ethernet connections.” That enabled Light Sail VR to hook up the system to all its computers, “and it worked,” he adds. “I could work on one shot, hit render, and go to another computer and continue working on the next shot and hit render, then kind of ping-pong back and forth. It made our lives a lot easier.”

Light Sail VR has since graduated to the larger-capacity Jellyfish Rack system, which is a 160TB solution (expandable up to 1 petabyte).

The storage is located in Light Sail VR’s main office and is hooked up to its computers. The filmmakers shoot in the field and, if on location, download the data to drives, which they transport back to the office and load onto the server. Then, they transcode all the media to DNX. (VR is captured in H.264 format, which is not user friendly for editing due to the high-res frame size.)
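
The article doesn't say which tool Light Sail VR uses for that transcode, but as a sketch, an ffmpeg-based version of the step might look like this; the paths and the DNxHR profile choice are assumptions.

    import subprocess

    def transcode_to_dnx(src: str, dst: str) -> None:
        # H.264 camera original in, DNxHR QuickTime out for editorial.
        subprocess.run([
            "ffmpeg", "-i", src,
            "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # DNxHR scales to large VR frames
            "-c:a", "pcm_s16le",  # uncompressed audio for editing
            dst,
        ], check=True)

    transcode_to_dnx("A001_stitched.mp4", "A001_stitched.mov")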

Currently, Celia is in New York, having just wrapped the 20th episode of original content for Refinery29, a media company focused on young women. Refinery29 produces editorial and video programming, live events and social, shareable content delivered across major social media platforms, covering categories from style to politics and more. Eight of the episodes are currently in various stages of the post pipeline, due to come out later this year. “And having a solid storage server has been a godsend,” Celia says.

The studio backs up locally onto Seagate drives for archival purposes and sometimes employs G-Technology drives for on-set work. “We just got this new G-Tech SSD that’s 2TB. It’s been great for use on set because having an SSD and downloading all the cards while on set makes your wrap process so much faster,” Celia points out.

Lately, Light Sail VR is shooting a lot of VR-180, requiring two 64GB cards per camera — one for the right eye and one for the left eye. But when they are shooting with the Yi Halo next-gen 3D 360-degree Google Jump camera, they use 17 64GB cards. “That’s a lot of data,” says Celia. “You can have a really bad day if you have really bad drives.”

The studio’s previous solution operated via Thunderbolt 1 in a RAID-5. It only worked on a single machine and was not cross-platform. As the studio made the transition over to PC from Mac to take advantage of better hardware capable of supporting VR playback, that solution was just not practical. They also needed a solution that was plug and play, so they could just pop it into a 10Gb Ethernet connection — they did not want fiber, “which can get expensive.”

The Light Sail team.

“I just wanted something very simple that was cross-platform and could handle what we were doing, which is, by the way, 6K or 8K stereo at 60 frames per second – these workloads are larger than most feature films,” Celia says. “So, we needed a lot of storage. We needed it fast. We needed it to be shared.”

However, while Celia searched for a system, one thing became clear to him: The solutions were technical. “It seemed like I would have to be my own IT department.” And, that was just one more hat he did not want to have to wear. “At LumaForge, they are independent filmmakers. They understood what I was trying to do immediately, and were willing to go on that journey with us.”

Says Celia, “I always call hard drives or storage the underwear of the post production world because it’s the thing you hate spending a lot of money on, but you really need it to perform and work.”

Main Image: Magnopus


Karen Moltenbrey is a long-time VFX and post writer.


Behind the Title: Lobo EP for Europe, Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing, hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now as creative EP for Europe, I am also responsible for streamlining information from pre-production to post production between all production parties for a more efficient and prosperous sales outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to have to handle all the production peeps, agencies, and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, the Adobe suite (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.


Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.


Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer developers greater efficiency, performance and stability on any platform.

Unreal Engine 4.21 adds even more mobile optimizations to both Android and iOS, up to 60% speed increases when cooking content and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit and enhanced material translation for Autodesk 3ds Max, enabling more efficient design review and iteration.

Here is more about the key features:
Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurations or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: In addition to the optimized cooking process, low-level code avoids performing unnecessary file system operations, and cooker timers have been streamlined.

Gauntlet Automation Framework (Early Access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle-tested for months in the process of optimizing Fortnite, and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.
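
Since the jacketing command is exposed to Python, a hypothetical editor script might look like the sketch below. 'import unreal' is the real editor scripting module, but the option and library names here are assumptions; check the Unreal Python API reference for the exact spelling in your engine version.

    import unreal

    # Assumed names for illustration -- verify against the Unreal Python docs.
    options = unreal.JacketingOptions()  # assumed options object
    options.accuracy = 0.5  # assumed field: voxel accuracy for occlusion testing
    actors = unreal.EditorLevelLibrary.get_selected_level_actors()  # editor API
    unreal.MeshProcessingLibrary.apply_jacketing(actors, options)  # assumed entry point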

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.


Satore Tech tackles post for Philharmonia Orchestra’s latest VR film

The Philharmonia Orchestra in London debuted its latest VR experience at Royal Festival Hall alongside the opening two concerts of the Philharmonia’s new season. Satore Tech completed VR stitching for the Mahler 3: Live From London film. It is the first project Satore Tech has completed since launching in June of this year.

The VR experience placed users at the heart of the Orchestra during the final 10 minutes of Mahler’s Third Symphony, which was filmed live in October 2017. The stitching project was completed by creative technologist/SFX/VR expert Sergio Ochoa, who leads Satore Tech. The company used SGO Mistika technology to post the project, which Ochoa helped develop during his time at that company — he was creative technologist and CEO of SGO’s French division.

Luke Ritchie, head of innovation and partnerships at the Philharmonia Orchestra, says, “We’ve been working with VR since 2015; it’s a fantastic technology to connect new audiences with the Orchestra in an entirely new way. VR allows you to sit at the heart of the Orchestra, and our VR experiences can transform audiences’ preconceptions of orchestral performance — whether they’re new to classical music or are a die-hard fan.”

It was a technically demanding project for Satore Tech to stitch together, as the concert was filmed live, in 360 degrees, with no retakes, using Google’s latest Jump Odyssey VR camera. This meant that Ochoa was working with four to five different depth layers at any one time. The amount of fast movement also meant the resolution of the footage needed to be upscaled from 4K to 8K to ensure it was suitable for the VR platform.

“The guiding principle for Satore Tech is we aspire to constantly push the boundaries, both in terms of what we produce and the technologies we develop to achieve that vision,” explains Ochoa. “It was challenging given the issues that arise with any live recording, but the ambition and complexity are what make it such a suitable initial project for us.”

Satore Tech’s next project is currently in development in Mexico, using experimental volumetric capture techniques with some of the world’s most famous dancers. It is slated for release early next year.


Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and learn how the work was created from the artists themselves. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.

SIGGRAPH 2018


DeepMotion’s Neuron cloud app trains digital characters using AI

DeepMotion has launched presales for DeepMotion Neuron, the first tool for completely procedural, physical character animation. The cloud application trains digital characters to develop physical intelligence using advanced artificial intelligence (AI), physics and deep learning. With guidance and practice, digital characters can now achieve adaptive motor control just as humans do, in turn allowing animators and developers to create more lifelike and responsive animations than those possible using traditional methods.

DeepMotion Neuron is a behavior-as-a-service platform that developers can use to upload and train their own 3D characters, choosing from hundreds of interactive motions available via an online library. Neuron will enable content creators to tell more immersive stories by adding responsive actors to games and experiences. By handling large portions of technical animation automatically, the service also will free up time for artists to focus on expressive details.

DeepMotion Neuron is built on techniques identified by researchers from DeepMotion and Carnegie Mellon University who studied the application of reinforcement learning to the growing domain of sports simulation, specifically basketball, where real-world human motor intelligence is at its peak. After training and optimization, the researchers’ characters were able to perform interactive ball-handling skills in real-time simulation. The same technology used to teach digital actors how to dribble can be applied to any physical movement using Neuron.

DeepMotion Neuron’s cloud platform is slated for release in Q4 of 2018. During the DeepMotion Neuron prelaunch, developers and animators can register on the DeepMotion website for early access and discounts.


Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies for software — including AI frameworks and libraries — with compute, networking and storage. Dell EMC’s portfolio of services includes consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music for brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for the Ad Council and on Resistance Radio for Amazon Studios’ The Man in the High Castle, which collectively won multiple Cannes Lion, Clio and One Show awards, as well as garnering two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.