
2nd-gen AMD Ryzen Threadripper processors

At the SIGGRAPH show, AMD announced the availability of its 2nd-generation AMD Ryzen Threadripper 2990WX processor with 32 cores and 64 threads. These new AMD Ryzen Threadripper processors are built on the 12nm “Zen+” x86 processor architecture. Second-gen AMD Ryzen Threadripper processors support the most I/O and are compatible with existing AMD X399 chipset motherboards via a simple BIOS update, offering builders a broad choice for designing the ultimate high-end desktop or workstation PC.

The 32-core/64-thread Ryzen Threadripper 2990WX and the 24-core/48-thread Ryzen Threadripper 2970WX are purpose-built for prosumers who crave raw compute power to dispatch the heaviest workloads. AMD says the 2nd-gen Ryzen Threadripper 2990WX offers up to 53 percent faster multithreaded performance and up to 47 percent more rendering performance for creators than Intel’s Core i9-7980XE.

This new AMD Ryzen Threadripper X series comes with higher base and boost clocks for users who need high performance. The 16 cores and 32 threads in the 2950X model offer up to 41 percent more multithreaded performance than the Core i9-7900X.

Additional performance and value come from:
• AMD StoreMI technology: All X399 platform users will now have free access to AMD StoreMI technology, enabling configured PCs to load files, games and applications from a high-capacity hard drive at SSD-like read speeds.
• Ryzen Master Utility: Like all AMD Ryzen processors, the 2nd-generation AMD Ryzen Threadripper CPUs are fully unlocked. With the updated AMD Ryzen Master Utility, AMD has added new features, such as fast core detection both on die and per CCX; advanced hardware controls; and simple, one-click workload optimizations.
• Precision Boost Overdrive (PBO): A new performance-enhancing feature that allows multithreaded boost limits to be raised by tapping into extra power delivery headroom in premium motherboards.

With a simple BIOS update, all 2nd-generation AMD Ryzen Threadripper CPUs are supported by a full ecosystem of new motherboards and all existing X399 platforms. Designs are available from top motherboard manufacturers, including ASRock, ASUS, Gigabyte and MSI.

The 32-core, 64-thread AMD Ryzen Threadripper 2990WX is available now from global retailers and system integrators. The 16-core, 32-thread AMD Ryzen Threadripper 2950X processor is expected to launch on August 31, and the AMD Ryzen Threadripper 2970WX and 2920X models are slated for launch in October.

Quick Chat: Joyce Cox talks VFX and budgeting

Veteran VFX producer Joyce Cox has a long and impressive list of credits to her name. She got her start producing effects shots for Titanic and from there went on to produce VFX for Harry Potter and the Sorcerer’s Stone, The Dark Knight and Avatar, among many others. Along the way, Cox perfected her process for budgeting VFX for films and became a go-to resource for many major studios. She realized that the practice of budgeting VFX could be done more efficiently if there was a standardized way to track all of the moving parts in the life cycle of a project’s VFX costs.

With a background in the finance industry, combined with extensive VFX production experience, she decided to turn her process and best practices into a solution for other filmmakers. That has evolved into a new web-based app called Curó, which targets visual effects budgeting from script to screen. It will debut at Siggraph in Vancouver this month.

Ahead of the show, we reached out to find out more about her VFX producer background and her path to becoming the maker of a product designed to make other VFX pros’ lives easier.

You got your big break in visual effects working on the film Titanic. Did you know that it would become such an iconic landmark film for this business while you were in the throes of production?
I recall thinking the rough cut I saw in the early stage was something special, but had no idea it would be such a massive success.

Were there contacts made on that film that helped kickstart your career in visual effects?
Absolutely. It was my introduction into the visual effects community and offered me opportunities to learn the landscape of digital production and develop relationships with many talented, inventive people. Many of them I continued to work with throughout my career as a VFX producer.

Did you face any challenges as a woman working in below-the-line production in those early days of digital VFX?
It is a bit tricky. Visual effects is still a primarily male-dominated arena, and it is a highly competitive environment. I think what helped me navigate the waters is my approach. My focus is always on what is best for the movie.

Was there anyone from those days that you would consider a professional mentor?
Yes. I credit Richard Hollander, a gifted VFX supervisor/producer, with exposing me to the technology and methodologies of visual effects: how to conceptualize a VFX project and understand all the moving parts. I worked with Richard on several projects producing the visual effects within digital facilities. Those experiences served me well when I moved to working on the production side, navigating the balance between the creative agenda, the approved studio budgets and the facility resources available.

You’ve worked as a VFX producer on some of the most notable studio effects films of all time, including X-Men 2, The Dark Knight, Avatar and The Jungle Book. Was there a secret to your success or are you just really good at landing top gigs?
I’d say my skills lie more in doing the work than finding the work. I believe I continued to be offered great opportunities because those I’d worked for before understood that I facilitated their goals of making a great movie. And that I remain calm while managing the natural conflicts that arise between creative desire and financial reality.

Describe what a VFX producer does exactly on a film, and what the biggest challenges are of the job.
This is a tough question. During pre-production, working with the director, VFX supervisor and other department heads, the VFX producer breaks down the movie into the digital assets (creatures, environments, matte paintings, etc.) that need to be created, estimates how many visual effects shots are needed to achieve the creative goals, and determines the VFX production crew required to support the project. Since no one knows exactly what will be needed until the movie is shot and edited, it is all theory.

During production, the VFX producer oversees the buildout of the communications, data management and digital production schedule that are critical to success. Also during production, the VFX producer evaluates what is being shot and tries to forecast potential changes to the budget or schedule.

Starting in production and going through post, the focus is on getting the shots turned over to digital facilities to begin work. This is challenging in that creative or financial changes can delay moving forward with digital production, compressing the window of time within which to complete all the work for release. Once everything is turned over, the focus switches to getting all the shots completed and delivered for the final assembly.

What film did you see that made you want to work in visual effects?
Truthfully, I did not have my sights set on visual effects. I’ve always had a keen interest in movies and wanted to produce them. It was really just a series of unplanned events, and I suppose my skills at managing highly complex processes drew me further into the world of visual effects.

Did having a background in finance help in any particular way when you transitioned into VFX?
Yes, before I entered production, I spent a few years working in the finance industry. That experience has been quite helpful and perhaps gave me a bit of a leg up in understanding the finances of filmmaking and the ability to keep track of highly volatile budgets.

You pulled out of active production in 2016 to focus on a new company. Tell me about Curó.
Because of my background in finance and accounting, one of the first things I noticed when I began working in visual effects was, unlike production and post, the lack of any unified system for budgeting and managing the finances of the process. So, I built an elaborate system of worksheets in Excel that I refined over the years. This design and process served as the basis for Curó’s development.

To this day, the entire visual effects community manages its finances, which can amount to tens, if not hundreds, of millions of dollars in spend, with spreadsheets. Add to that the fact that everyone’s document designs are different, which makes the job of collaborating on, interpreting and managing facility bids unwieldy, to say the least.

Why do you think the industry needs Curó, and why is now the right time? 
Visual effects is the fastest growing segment of the film industry, as demonstrated in the screen credits of VFX-heavy films. The majority of studio projects are these tent-pole films, which lean heavily on visual effects. The volatility of visual effects finances can be managed more efficiently with Curó, and the language of VFX financial management across the industry would benefit greatly from a unified system.

Who’s been beta testing Curó, and what’s in store for the future, after its Siggraph debut?
We’ve had a variety of beta users over the past year. In addition to Sony and Netflix, a number of freelance VFX producers and supervisors, as well as VFX facilities, have beta access.

The first phase of the Curó release focuses on the VFX producers and studio VFX departments, providing tools for initial breakdown and budgeting of digital and overhead production costs. After Siggraph we will be continuing our development, focusing on vendor bid packaging, bid comparison tools and management of a locked budget throughout production and post, including the accounting reports, change orders, etc.

We are also talking with visual effects facilities about developing a separate but connected module for their internal granular bidding of human and technical resources.

 

Maxon intros Cinema 4D Release 20

Maxon will be at Siggraph this year showing Cinema 4D Release 20 (R20), the next iteration of its 3D design and animation software. Release 20 introduces high-end features for VFX and motion graphics artists, including node-based materials, volume modeling, CAD import and an evolution of the MoGraph toolset.

Maxon expects Cinema 4D Release 20 to be available this September for both Mac and Windows operating systems.

Key highlights in Release 20 include:
Node-Based Materials – This feature provides new possibilities for creating materials — from simple references to complex shaders — in a node-based editor. With more than 150 nodes to choose from, each performing a different function, artists can combine nodes to easily build complex shading effects. Users new to a node-based material workflow can still rely on Cinema 4D’s standard Material Editor interface, which automatically creates the corresponding node material in the background. Node-based materials can be packaged into assets with user-defined parameters exposed in an interface similar to Cinema 4D’s Material Editor.

MoGraph Fields – New capabilities in this procedural animation toolset offer an entirely new way to define the strength of effects by combining falloffs — from simple shapes, to shaders or sounds to objects and formulas. Artists can layer Fields atop each other with standard mixing modes and remap their effects. They can also group multiple Fields together and use them to control effectors, deformers, weights and more.

CAD Data Import – Popular CAD formats can be imported into Cinema 4D R20 via drag-and-drop. A new scale-based tessellation interface allows users to adjust detail to build amazing visualizations. STEP, SolidWorks, JT, Catia V5 and IGES formats are supported.

Volume Modeling – Users can create complex models by adding or subtracting basic shapes in Boolean-type operations using Cinema 4D R20’s OpenVDB-based Volume Builder and Mesher. They can also procedurally build organic or hard-surface volumes using any Cinema 4D object, including new Field objects. Volumes can be exported in sequenced .vdb format for use in any application or render engine that supports OpenVDB (a minimal .vdb read/write sketch follows these highlights).

ProRender Enhancements — ProRender in Cinema 4D R20 extends the GPU-rendering toolset with key features including subsurface scattering, motion blur and multipasses. Also included are Metal 2 support, an updated ProRender core, out-of-core textures and other architectural enhancements.

Core Technology Modernization — As part of the transition to a more modern core in Cinema 4D, R20 comes with substantial API enhancements, the new node framework, further development on the new modeling framework and a new UI framework.
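
Since the exported volumes are standard OpenVDB files, any .vdb-aware tool can pick them up downstream. As a rough illustration of that interchange, here is a hypothetical sketch using the open-source pyopenvdb bindings rather than anything Cinema 4D-specific; the file and grid names are arbitrary:

```python
# Hypothetical sketch using the open-source pyopenvdb bindings; not Cinema 4D code.
import pyopenvdb as vdb

# Build a simple level-set sphere grid and write it out as one frame of a .vdb sequence.
grid = vdb.createLevelSetSphere(radius=10.0, center=(0.0, 0.0, 0.0), voxelSize=0.25)
grid.name = "surface"
vdb.write("volume.0001.vdb", grids=[grid])

# Any OpenVDB-aware application or renderer can read the same file back.
grids, metadata = vdb.readAll("volume.0001.vdb")
print([g.name for g in grids])
```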

During Siggraph, Maxon will have guest artists presenting at its booth each day of the show. Presentations will be livestreamed on C4DLive.com.

 

 

SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year’s conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz’s realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver’s rich production-centric communities are leading the way in film and VFX production for television and onscreen films. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Due to the increase in the amount of new technology that has debuted in the computer graphics marketplace over this past year, almost one quarter of this year’s 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many to be “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Timecop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will also be a 25th-anniversary showing of the original Jurassic Park, hosted by Steve “Spaz” Williams, a digital artist who worked on that film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have Rob Bredow, head of ILM and senior VP/executive creative director, deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury made up of industry professionals. The jury’s selections then become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel, made up mostly of subcommittee members, who watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today wouldn’t have been possible without the research and creation of Ivan Sutherland’s 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered to be the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year, new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage are determined.

Each year, the conference chair oversees the program chairs, and each of the program chairs then becomes part of a jury process — this helps to ensure the best program with the most industries represented from across all disciplines.

In the rare case a program committee feels they are missing something key in the industry, they can try to curate a panel in, but we still require that that panel be reviewed by subject matter experts before it would be considered for final acceptance.

 

Disney Animation legend Floyd Norman keynoting SIGGRAPH 2017

Floyd Norman, the first African-American animator to work for Walt Disney Animation Studios, has been named SIGGRAPH 2017’s keynote speaker. The keynote session featuring Norman will be presented as a fireside chat, allowing attendees the opportunity to hear a Disney legend discuss his life and career in an intimate setting. SIGGRAPH 2017 will be held July 30-August 3 in Los Angeles.

Norman was the subject of a 2016 documentary called Floyd Norman: An Animated Life from filmmakers Michael Fiore and Erik Sharkey. The film covers Norman’s life story and includes interviews with voice actors and former colleagues.

Norman was hired as the first African-American animator at Walt Disney Studios in 1956 and was later hand-picked by Walt Disney himself to join the story team on The Jungle Book. After Walt’s death, Norman left Disney to start his own company, Vignette Films, and produce films on the subject of black history for high schools. He and his partners would later work with Hanna-Barbera to animate the original Fat Albert TV special, Hey, Hey, Hey, It’s Fat Albert, as well as the opening title sequence for the TV series Soul Train.

Norman returned to Disney in the 1980s to work in their publishing department, and in 1998 moved to the story department to work on Mulan. After all this, an invite to the Bay Area in the late ‘90s became a career highlight when Norman began working with leaders in the next wave of animation — Pixar and Steve Jobs — adding Toy Story 2 and Monsters, Inc. to his film credits.

Though he technically retired at the age of 65 in 2000, Norman is not one to quit and chose, instead, to occupy an open cubicle at Disney Publishing Worldwide for the last 15 years. As he puts it, “I just won’t leave.”

While not on staff, Norman’s proximity to other Disney personnel has led him to pick up freelance work and continue his impact on animation as both an artist and a mentor. As to his future plans, he says, “I plan to die at the drawing board!

“I’ve been fascinated by computer graphics since I purchased my first computer. I began attending SIGGRAPH when a kiosk was all Pixar could afford,” he says. “Since then, I’ve had the pleasure of working for this fine company and being a part of this amazing technology as it continues to mature. I’ve also enjoyed sharing insights I’ve garnered over the years in this fantastic industry. In recent years, I’ve spoken at several universities and even Apple. Creative imagination and technological innovation have always been a part of my life, and I’m delighted to share my enthusiasm with the fans at SIGGRAPH this year.”

Images courtesy of Michael Fiore Films

Jim Hagarty Photography

Blue Sky Studios’ Mikki Rose named SIGGRAPH 2019 conference chair

Mikki Rose has been named conference chair of SIGGRAPH 2019. A fur technical director at Greenwich, Connecticut-based Blue Sky Studios, Rose chaired the Production Sessions during SIGGRAPH 2016 this past July in Anaheim and has been a longtime volunteer and active member of SIGGRAPH for the last 15 years.

Rose has worked on such films as The Peanuts Movie and Hotel Transylvania. She refers to herself as a “CG hairstylist” due to her specialization in fur at Blue Sky Studios — everything from hair to cloth to feathers and even vegetation. She studied general CG production in college and holds BS degrees in Computer Science and Digital Animation from Middle Tennessee State University, as well as an MFA in Digital Production Arts from Clemson University. Prior to Blue Sky, she lived in California and held positions with Rhythm & Hues Studios and Sony Pictures Imageworks.

“I have grown to rely on each SIGGRAPH as an opportunity for renewal of inspiration in both my professional and personal creative work. In taking on the role of chair, my goal is to provide an environment for those exact activities to others,” said Rose. “Our industries are changing and developing at an astounding rate. It is my task to incorporate new techniques while continuing to enrich our long-standing traditions.”

SIGGRAPH 2019 will take place in Los Angeles from July 29 to August 2, 2019.


Main Image: SIGGRAPH 2016 — Jim Hagarty Photography

A look at the new AMD Radeon Pro SSG card

By Dariush Derakhshani

My first video card review was on the ATI FireGL 8800 more than 14 years ago. It was one of the first video cards that could support two monitors with only one card, which to me was a revolution. Up until then I had to jam two 3DLabs Oxygen VX1 cards in my system (one AGP and the other PCI) and wrestle them to handle OpenGL with Maya 4.0 running on two screens. It was either that or sit in envy as my friends taunted me with their two screen setups, like waving a cupcake in front of a fat kid (me).

Needless to say, two cards were not ideal, and the 128MB ATI FireGL 8800 was a huge shift in how I built my own systems from then on. Fourteen years later, I’m fatter, balder and have two 27-inch HP screens sitting on my desk (one at 4K) that are always hungry for new video cards. I run multiple applications at once, and I demand to push around a lot of geometry as fast as possible. And now, I’m even rendering a fair amount on the GPU, so my video card is ever more the centerpiece of my home-built rigs.

So when I stopped by AMD’s booth at SIGGRAPH 2016 in Anaheim recently, I was quite interested in what AMD’s John Swinimer had to say about the announcements the company was making at the show. (AMD acquired ATI in 2006.)

First, I’m just going to jump right into what got me the most wide-eyed, and that is the announcement of the AMD Radeon Pro SSG. This professional card mates a 1TB SSD to the frame buffer of the video card, giving you a huge boost in how much the GPU system can load into memory. Keep in mind that professional card frame buffers range from about 4GB in entry-level cards up to 24-32GB in super high-end cards, so 1TB is a huge number to be sure.

One of the things that slows down GPU rendering the most is having to flush and reload textures from the card’s frame buffer, so the idea of having a 1TB frame buffer is intriguing, to say the least (i.e., a lot of drooling). In its press release, AMD mentions that “8K raw video timeline scrubbing was accelerated from 17 frames per second to a stunning 90+ frames per second” in the first demonstration of the Radeon Pro SSG.

Details are still forthcoming, but two PCIe 3.0 M.2 slots on the SSG card can get us up to 1TB of frame buffer. But the question is, how fast will it be? In traditional SSDs, M.2 enjoys a large bandwidth advantage over regular SATA drives as long as it can access the PCIe bus directly. Things are different if the SSG card is an island unto itself, with the storage bandwidth contained entirely on the card, so it’s unclear how the M.2 bus on the SSG card will do in communicating with the GPU directly. I tend to doubt we’ll see the same performance in bandwidth between GDDR5 memory and an on-board M.2 card, but only real-world testing will be able to suss that out.
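
For a rough sense of the gap in question, here’s a back-of-envelope comparison. These are theoretical peak figures I’m assuming for illustration (PCIe 3.0 x4 per M.2 slot and a typical 256-bit GDDR5 configuration), not measurements of the SSG:

```python
# Back-of-envelope peak-bandwidth comparison; assumed theoretical figures, not SSG measurements.

PCIE3_GBPS_PER_LANE = 0.985            # usable PCIe 3.0 bandwidth per lane, in GB/s

m2_slot   = 4 * PCIE3_GBPS_PER_LANE    # one PCIe 3.0 x4 M.2 slot (~3.9 GB/s)
ssg_slots = 2 * m2_slot                # the SSG's two M.2 slots combined (~7.9 GB/s)
gddr5_256 = (256 / 8) * 7              # 256-bit bus at a typical 7Gbps per pin (~224 GB/s)

print(f"One M.2 slot:       {m2_slot:6.1f} GB/s")
print(f"Two M.2 slots:      {ssg_slots:6.1f} GB/s")
print(f"256-bit GDDR5 card: {gddr5_256:6.1f} GB/s")
```

Even with generous assumptions, the on-card flash sits well over an order of magnitude behind graphics memory on paper, which is why the biggest wins should come from datasets that simply don’t fit in a conventional frame buffer.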

But I believe we’ll immediately see great speed improvements in GPU rendering of huge datasets, since the SSG will circumvent the offloading and reloading times between the GPU and CPU memories, as well as potentially boosting multi-frame GPU rendering of CG scenes. But in cases where the graphics sub-system doesn’t need to load more than a dozen or so GBs of data, onboard GDDR5 memory will certainly still have an edge in communication speed with the GPU.

So, needless to say, but I’m going to say it anyway, I am very much looking forward to slapping one of these into my rig to see GPU render times, as well as operability using large datasets in Maya and 3ds Max. And as long as the Radeon Pro SSG can avoid hitting up the CPU and main system memory, GPU render gains should be quite large on the whole.

Wait, There’s More
On to other AMD announcements at the show: the affordable Radeon Pro WX lineup (due in the fourth quarter of 2016), which refreshes the FirePro-branded line. The Radeon Pro WX cards are based on AMD’s RX consumer cards (like the RX 480), but with higher-level professional driver support and certification with professional apps. The end goal of professional work is stability as well as performance, and AMD promises a great dedicated support system around its Radeon Pro line to give us professionals the warm and fuzzies we always need over consumer-level cards.

The top-of-the-line Radeon Pro WX7100 features 8GB of memory on a 256-bit bus and workstation-class performance at less than $1,000, and I believe it replaces the FirePro W8100. This puts the four-simultaneous-display-capable WX7100 in line to compete with the Nvidia Quadro M4000 card in pricing at least, if not in specs as well. But it’s hard to say where the WX7100 will sit with performance. I do hope it’s somewhere in between the Quadro M4000 and the $1,800 M5000 card. It’s difficult to answer that based on paper specs, as the number of (OpenCL) compute units and the number of CUDA cores are hard to compare directly.

The 8GB Radeon Pro WX5100 and 4GB WX4100 round out the new announcements from SIGGRAPH 2016, putting them in line to compete somewhere between the 8GB Quadro M4000 and the 4GB M2000 and K1200 cards in performance. It seems, though, that AMD’s top of the line will still be the $3,400+ FirePro W9100 with 16GB of memory, though a 32GB version is also available.

I have always thought AMD offered a really good price-to-performance ratio, and it seems like the Radeon Pro WX line will continue that tradition. I look forward to benchmarking these cards in real-world CG use.

Dariush Derakhshani is a professor and VFX supervisor in the Los Angeles area and author of Maya and 3ds Max books and videos. He is bald and has flat feet.

Pixar open sources Universal Scene Description for CG workflows

Pixar Animation Studios has released Universal Scene Description (USD) as an open source technology in order to help drive innovation in the industry. Used for the interchange of 3D graphics data through various digital content creation tools, USD provides a scalable solution for the complex workflows of CG film and game studios. 

With this initial release, Pixar is opening up its development process and providing code used internally at the studio.

“USD synthesizes years of engineering aimed at integrating collaborative production workflows that demand a constantly growing number of software packages,” says Guido Quaroni, VP of software research and development at Pixar.

 USD provides a toolset for reading, writing, editing and rapidly previewing 3D scene data. With many of its features geared toward performance and large-scale collaboration among many artists, USD is ideal for the complexities of the modern pipeline. One such feature is Hydra, a high-performance preview renderer capable of interactively displaying large data sets.
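
As a small taste of what that toolset looks like from Python — this is a minimal sketch, and the file name and prim paths are arbitrary examples rather than anything shipped with the release — a stage can be authored, saved and traversed in a few lines:

```python
# Minimal sketch of authoring and reading a USD stage with the pxr Python bindings.
# The file name and prim paths here are arbitrary examples.
from pxr import Usd, UsdGeom

# Create a new stage backed by a .usda file on disk.
stage = Usd.Stage.CreateNew("helloWorld.usda")

# Define a transform with a sphere beneath it, and author an attribute value.
UsdGeom.Xform.Define(stage, "/hello")
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")
sphere.GetRadiusAttr().Set(2.0)

# Save the root layer, then reopen the stage and walk its prims.
stage.GetRootLayer().Save()
stage = Usd.Stage.Open("helloWorld.usda")
for prim in stage.Traverse():
    print(prim.GetPath())
```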

“With USD, Hydra, and OpenSubdiv, we’re sharing core technologies that can be used in filmmaking tools across the industry,” says George ElKoura, supervising lead software engineer at Pixar. “Our focus in developing these libraries is to provide high-quality, high-performance software that can be used reliably in demanding production scenarios.”

Along with USD and Hydra, the distribution ships with USD plug-ins for some common DCCs, such as Autodesk’s Maya and The Foundry’s Katana.

 To prepare for open-sourcing its code, Pixar gathered feedback from various studios and vendors who conducted early testing. Studios such as MPC, Double Negative, ILM and Animal Logic were among those who provided valuable feedback in preparation for this release.

Today: AMD/Radeon event at SIGGRAPH introducing Capsaicin graphics tech

At the SIGGRAPH 2016 show, AMD will webcast a live showcase of new creative graphics solutions during its “Capsaicin” event for content creators. Taking place today at 6:30pm PDT, it’s hosted by Radeon Technologies Group’s SVP and chief architect Raja Koduri.

The Capsaicin event at SIGGRAPH will showcase advancements in rendering and interactive experiences. The event will feature:
▪ Guest speakers sharing updates on new technologies, tools and workflows.
▪ The latest in virtual reality with demonstrations and technology announcements.
▪ Next-gen graphics products and technologies for both content creation and consumption, powered by the Polaris architecture.

A realtime video webcast of the event will be accessible from the AMD channel on YouTube, where a replay of the webcast can be accessed a few hours after the conclusion of the live event. It will be available for one year after the event.

For more info on the Capsaicin event and live feed, click here.

Vicon at SIGGRAPH with two new motion tracking cameras

Vicon, which makes precision motion tracking systems and match-moving software, will be at SIGGRAPH this year showing its two new camera families, Vero and Vue. The new offerings join Vicon’s flagship camera, Vantage.

Vero is a range of high-def, synchronized optical video cameras for providing realtime video footage and 3D overlay in motion capture. Designed as an economical system for many types of applications, the Vero range includes a custom 6-12 mm variable focus lens that delivers an optimized field of view, as well as 2.2 megapixel resolution at 330Hz.

With these features, users can capture fast sport movements and multiple actors, drones or robots with low latency. The range also includes a 1.3 megapixel camera. Vero is compatible with existing Vicon T-series, Bonita and Vantage cameras as well as Vicon’s Control app, which allows users to calibrate the system and make adjustments on the fly.

With HD resolution and variable focal lengths, the Vicon Vue camera incorporates a sharp video image into the motion capture volume. It also enables seamless calibration between optical and video volumes, ensuring the optical and video views are aligned to capture fine details.