Unleashing Creativity in Entertainment


Shot on location at Intel’s activation in Park City, UT, the Variety/Intel series offers a deeper look at the convergence of creativity and engineering in entertainment. Key Intel leaders and top independent filmmakers come together to showcase how Intel is inspiring the future of film.

The Future of Storytelling

Answering the Unanswerable

Hot Ticket at Sundance

Artificial intelligence, virtual reality, augmented reality and other technological innovations are changing the way stories can be told. At the Sundance Film Festival in Park City, eager filmmakers lined up on Main Street to experience Intel's latest breakthroughs. They experienced worlds beyond our solar system, were captured in three dimensions and transformed in real time.

The Revolution Will Be All Around You

Intel is transcending today’s 2D cameras and creating the next generation of storytelling, empowering viewers to go inside the action

By Chris Morris

The dome over the Intel Studios' stage holds numerous cameras for 360-degree capture of the action below.

When it comes to capturing viewers’ attention, the sports and entertainment worlds seem to be on the verge of a paradigm shift—and Intel’s betting that it can lead the charge to that new era.

“We’re taking the industry to the next level of interactive content,” says Diego Prilusky, general manager of Intel Studios, an initiative the company says will transform immersive and interactive media.

Intel Studios focuses on the production of virtual reality (VR), augmented reality (AR) and other types of cutting-edge content. It’s designed to capture actors and objects in volumetric 3D, letting viewers see things from a near infinite number of angles.

“Our question is: How do you engage [audiences]?” says Prilusky. “We’re looking at how to produce content that will enable a deeper immersion. We enable you to move within the [scene]. This is the unlimited point of view we’re offering.”

Intel is working to build awareness of its True View and True VR technologies among other technology companies and filmmakers, hosting volumetric content capture demos at CES in Las Vegas and at the Sundance Film Festival in Park City, UT. It also recently announced the launch of Intel Studios’ facility in Los Angeles, which features the world’s largest volumetric capture stage.

The volumetric capture technology works like this: Multiple cameras, each with multiple lenses, offer a 360-degree view of a scene, whether it’s the action at the upcoming Winter Olympic Games in PyeongChang or a subtle character moment for a film. Audiences can then choose the camera position they want and zoom in from any angle.
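The "choose your own camera position" idea described above can be illustrated with a toy sketch. Everything here is hypothetical for illustration only; the camera count, ring layout, and selection logic are not details of Intel's actual rig:

```python
# Hypothetical sketch: with cameras spaced evenly around a ring,
# pick the physical camera nearest a viewer's requested viewing angle.
# The camera count and geometry are illustrative, not Intel's real setup.

def nearest_camera(requested_azimuth_deg: float, num_cameras: int) -> int:
    """Return the index of the ring camera closest to the requested angle."""
    spacing = 360.0 / num_cameras          # angular gap between cameras
    return round(requested_azimuth_deg / spacing) % num_cameras

# A viewer asking for a 93-degree viewpoint on a 36-camera ring
# (10 degrees between cameras) lands on the camera at 90 degrees:
cam = nearest_camera(93.0, 36)
```

In a real volumetric pipeline the system interpolates a full 3D reconstruction between captures rather than snapping to a single physical camera, which is what makes the "near infinite number of angles" possible.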

Such “virtual camera” capability has been available for some time to animators and videogame developers, who work with computer-generated imagery. Now Intel is bringing it to live-action filmmaking.

Intel’s approach is platform-agnostic: its tech can play on any VR, AR or mixed-reality headset available today and, more importantly, on those still in development. Today’s headsets are relatively primitive, nowhere near the potential they’re expected to reach in the years to come, and innovations are already arriving quickly.

So instead of focusing on hardware, Intel says it is focusing on the big data component, determining the best ways it can process the enormous amounts of data the system gathers.

How much data? Shooting one minute of an NFL game, True View gathers 3 terabytes of information. In just 15 minutes of shooting, it collects the data equivalent of all of the text in the Library of Congress.
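Taking the article's figures at face value, the raw capture volume is easy to check with quick arithmetic (the numbers below are the article's; the script is just a back-of-the-envelope tally):

```python
# Back-of-the-envelope check of the quoted True View figures.
# 3 TB of data per minute of capture is the article's number.
tb_per_minute = 3
minutes = 15

total_tb = tb_per_minute * minutes  # 45 TB for a 15-minute shoot
print(f"{minutes} minutes of capture -> {total_tb} TB of raw data")
```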

Intel is deliberately leaving space for the technology to evolve, as that’s what its partners in the entertainment industry want for now. Ted Schilowitz, futurist-in-residence at Paramount, explains: “We’re at an early enough stage with this that it’s best to keep the most open mind possible, and try not to lock into anything that will make a presumption about where it will go or what story it will tell.”

Schilowitz says what’s appropriate now is “full exploration mode.”

“This isn’t going to be next year,” he says. “This is going to take generations of technology, experimentation and market finding. I think we’re on a 10-year curve.”

A decade may sound like a long time, but consider YouTube, which Google bought just over 11 years ago for $1.65 billion. Today, it’s a leading video platform where people watch over 1 billion hours of content each day.

Prilusky says filmmakers are realizing this kind of immersive storytelling will hold audiences’ attention more than today’s media can. “It’s something that has very few pioneers today,” he says. “We believe at Intel that we are getting to the point where it’s no longer just a small, independent exploratory segment of the industry.”

Immersive storytelling presents a challenge to Hollywood: Filmmakers have to invent and master a new way to shoot content and craft a narrative. But Schilowitz says that’s an essential step as the entertainment industry moves forward.

“We’re relying on the past to look at the future [right now] and that’s not going to do,” he says. Instead, he says, the industry must look forward as it plans for the future. “I think this could be as big or bigger than anything that has come before it.”

However, he says, once content creators get a sense of the potential of True View and True VR, they’re likely going to be very enthusiastic about being able to offer a storytelling experience that’s more active as well as more immersive than today’s entertainment.

“What we’re trying to do is create a better illusion, a better magic trick,” says Schilowitz. “We’re trying to fool our eyes, our brain and our senses. If this is the next possible step in how to create better magic, then you’re going to get big producers and directors to latch onto this.”

Intel Powers Push For Perfect Digital Humans

Ziva Dynamics relies on chipmaker’s advanced features to ready a new generation of faster, more flexible digital actors

By R. Roosevelt

To quickly render detailed simulations of human and animal flesh, fat and muscle, Ziva Dynamics software draws on advanced features of Intel chips such as the Math Kernel Library, originally designed for defense and industrial applications.

Imagine an action superstar who’s past doing his own stunts but wants audiences to think he’s still surfing airplanes. Fans should still believe it’s his derring-do they see onscreen, but he’s no longer risking life and limb for the sake of the shot.

Effects software from Ziva Dynamics builds digital actors from the inside out. Instead of digitally layering the star’s face frame by frame over a stunt performer—an imperfect and arduous process—filmmakers can use Ziva’s software to build a fully rendered figure that behaves like real flesh, skin and bone: one that can bend and stretch and, if required, scale a skyscraper. Even better, Ziva’s software can render digital actors either very quickly, when animators need to review and adjust the character, or at feature quality when the time comes.

“Historically in visual effects, you’re creating shots,” says Ziva chief operating officer Michael Smit. “Now we’re creating the virtual talent.”

Ziva was founded by animator James Jacobs, a veteran of Weta Digital who won a Scientific and Engineering Academy Award while with Weta for his advances in tissue simulation. In the five years since, Intel’s advances in chip design have sped up complex animation, making possible precise simulations of the human body that were once out of reach.

“Intel hardware is absolutely essential to what we do,” he says. “All of our customers run on Intel hardware. Ziva is configured around Intel architecture and optimized for Intel Xeon Scalable processors, which allows them to do things like running 20 variant shots in parallel in one day. It’s the best choice for the domain we’re working in.”

Ziva takes advantage of Intel chips’ Math Kernel Library, which was developed for math-intensive tasks like missile guidance and nuclear power plant controls. It proved just the thing for squishier math problems like jiggling flesh and stretching skin. “We haven’t found anything that would be better for the job,” says Jacobs.

Slice a Ziva figure down the middle and you can see fat and muscle, the varied textures of the human body. Once a figure is built, “classes” in yoga and calisthenics teach the creation how to move, with machine learning handling as much as 80% of the design labor. “Every day we get closer to that ability to reproduce reality in a compelling, realistic way,” Jacobs says. “The limitations now are really only the creative limitations of the storyteller.”

Intel and Ziva keep an ongoing conversation going about the next generation of chip technology. “In the past, achieving photo-realism in interactive content such as games and VR was too costly and time-intensive for most independent filmmakers and content creators,” said Lisa Spelman, vice president and general manager of Intel Xeon Products and Data Center Marketing at Intel. “When we designed the new Intel Xeon Scalable processors, we really focused on accelerating performance for demanding workloads like content creation. Now, with these new processors and Ziva’s machine learning approach to character creation, content creators can push the boundaries and open up a whole world that we haven’t seen in immersive media before.”

Jacobs feels Ziva is close to creating digital actors flawless enough to pass for human for a few shots. What about a character who could carry an entire movie if that superstar retired? “I think we’re still a few years off from that,” Jacobs sighs. “We’re our own biggest critics and that’s what keeps us pushing the technology forward.”