VFX News and Information

The future of VFX and CGI

Visual effects have come a long way in the 124 years since we first set eyes on the motion picture. Today, the world’s most talented VFX studios create films and TV shows that seamlessly blend the digital image with the analog, allowing artists to make their wildest dreams come to life on the big screen. The full maturation of the tools used by creators working on the cutting edge of the industry (computer-generated worlds, real-time previewing of motion capture, cloud-based rendering, and hyper-realistic digital humans) has laid the groundwork for the media revolution we’re about to experience as 360-degree virtual reality goes mainstream. This is a look at the near-term future of visual effects.

In order to understand where we’re headed, let’s look at where we’ve come from. [Clip from Jurassic Park] Many of today’s premier VFX artists were hooked by iconic scenes like this one from Jurassic Park: aha! moments that opened their minds to the power of mixing computer-generated imagery with live-action footage. Today, nearly 25 years after Spielberg's T-Rex changed the game, every single film on the list of the 50 highest-grossing movies of all time either relied heavily on visual effects or was completely animated. But what might be even more telling is that 25 of them were released in the last five years. That’s an indication of how reliant the big studios are on VFX, and of how much we, the public, love to see visual effects take us to worlds we otherwise couldn’t visit.



The king of VFX is director James Cameron. For both Avatar and Titanic — the two highest-grossing films of all time — Cameron pioneered a number of new techniques that heavily influenced the blockbuster movie industry: a motion capture stage six times larger than any before it, a way to preview shots in the virtual world in real time while motion capturing them on set, and a technique to enable full performance capture using small cameras mounted right in front of the actors' faces to collect facial expressions and, importantly, eye movement. This allowed Zoe Saldana to breathe emotional life into her Neytiri character in a way that was compelling to the audience. Full performance capture has since become an industry standard. But while it's become commonplace to bring characters like the Na’vi, Maz Kanata, Caesar, or the Hulk to life using facial capture technology, using it — or any other CG technique — to make a believable, fully digital human is another story. We are getting closer to crossing the uncanny valley. While it’s still cheaper to hire Tom Hanks than to recreate him digitally, we’re a long way past the waxiness of the CG characters he brought to life in The Polar Express. So far, the most lifelike use of digital humans was Rogue One’s resurrection of Carrie Fisher as Princess Leia and Peter Cushing as Grand Moff Tarkin. But Industrial Light & Magic had hours of reference footage of both actors, mapped the digital recreations of their faces onto full performances by other actors, and even found a lifecast of Peter Cushing’s face from a previous role.


[Artist] "This was gold for us because it was Peter Cushing as he appeared at a certain time in his life." Neither Leia's nor Tarkin’s cameo was very long or involved much movement, and they still didn’t quite look human. But this technology could be about to take another leap. Last week, Weta Digital finally began post-production on the four sequels to Avatar, which are slated for December 2020, 2021, 2024, and 2025 releases. And Cameron says he wants the films to set a new high bar. [Quote from director James Cameron] “What Joe Letteri and Weta Digital bring to these stories is impossible to quantify. Since we made Avatar, Weta continued to prove themselves as doing the best CG animation, the most human, the most alive, the most photorealistic effects in the world. And of course, that now means I can push them to take it even further.” But what really needs this push isn’t traditional films like Avatar; it’s the newest form of entertainment: virtual reality. Because effects studios can make literally anything else look photorealistic, creating believable digital humans seems to be the last big hurdle to creating feature-length films for VR. Why do I say that? Well, visual effects are the most expensive part of filmmaking, and that expense is multiplied for live-action shots in VR, where VFX need to be applied to an entire 360-degree environment. Plus, there aren’t anywhere near enough paying customers for virtual reality content right now to justify a VR-only live-action film with a budget in the tens of millions of dollars, let alone hundreds of millions. So to help bridge this huge gap and demonstrate what’s possible, Google created a series of short films called Spotlight Stories, each made to be viewed in 360 degrees. But — in the entire two years of the project — only one of them has been live action: the five-minute film HELP.


For a first-time effort it’s a pretty entertaining experience, although it does feel more like a ride at Universal Studios than a film. As the first VR experience to use cinema-quality effects in a live-action setting, HELP did break new ground and inspired the invention of some entirely new filmmaking techniques. One challenge was the sheer amount of data created during the shoot, which used four RED cameras each filming in 6K resolution. Factor in the data and rendering time of the monster and all the effects, and you begin to understand why the near-term future of effects-driven VR filmmaking will likely be entirely CG, without any actual real-world cameras.

But whether it’s VR, 3D, or just effects-heavy hi-res 2D media, VFX houses need increasingly powerful machines to render their massive files. The solution is cloud computing. The company Atomic Fiction saw this trend early on and became the first studio to render the effects for an entire big-budget Hollywood film — 2012’s Flight — entirely in the cloud. Since then, it’s scaled up its operation to take on even bigger projects rendered entirely in the cloud, like the opening, action-packed freeway battle scene in Deadpool, which required Atomic to create the entire surrounding city from scratch. It also created the effects for the effects-filled Star Trek Beyond. [Kevin Baillie] “But I just want to show you what rendering in the cloud actually looks like. So this is a scene from Star Trek Beyond here in Maya. I hit the 'submit job' button. It uploads any files that it needs to into the cloud. It started 36 computers, they all just did the work that we wanted it to do. This whole process in real life takes about four minutes. And here is our little marauder friend coming out of this spaceship.” Compare that to the 12 hours it took to render each frame of the T-Rex Jeep chase scene in Jurassic Park. The progress we’ve made in 25 years is staggering.

We’re also seeing the effect this explosion in computing power is having on TV. The two most popular shows, Game of Thrones and The Walking Dead, both rely heavily on VFX. One innovation — used by Game of Thrones to create the epic war scene in the Battle of the Bastards — is the animation and artificial intelligence software MASSIVE, first designed for Peter Jackson’s Lord of the Rings trilogy. It showcases the potential impact AI could make on the future of VFX.
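The core idea behind crowd tools like MASSIVE is agent-based simulation: instead of animating thousands of soldiers by hand, each digital extra gets a simple "brain" that reacts to its goal and its neighbors, and the battle emerges from those local decisions. The sketch below is a toy Python illustration of that idea only; it is not MASSIVE's actual brain model, and the steering weights, agent count, and goal point are arbitrary assumptions for the example.

```python
import numpy as np

RNG = np.random.default_rng(7)
N_AGENTS = 200                    # toy army size (arbitrary)
GOAL = np.array([100.0, 0.0])     # point every agent marches toward
AVOID_RADIUS = 2.0                # how close counts as "too close" to a neighbor
STEPS = 500
DT = 0.1

# Start the agents scattered around the origin with zero velocity.
pos = RNG.normal(scale=10.0, size=(N_AGENTS, 2))
vel = np.zeros((N_AGENTS, 2))

for _ in range(STEPS):
    # Brain rule 1: steer toward the goal (unit direction per agent).
    to_goal = GOAL - pos
    steer = to_goal / (np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9)

    # Brain rule 2: push away from neighbors inside AVOID_RADIUS.
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise offsets
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    too_close = dist < AVOID_RADIUS                   # self is included, but its
    repulse = (diff / dist[..., None] * too_close[..., None]).sum(axis=1)  # zero offset adds nothing

    # Combine the two rules and integrate one step of motion.
    vel = 0.9 * vel + DT * (1.0 * steer + 2.0 * repulse)
    pos = pos + DT * vel

print("mean distance to goal:", np.linalg.norm(GOAL - pos, axis=1).mean())
```

In a production tool the per-agent brain is far richer and drives actual character animation rather than moving points, but the emergent-behavior principle is the same: simple local rules, convincing large-scale battles.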


But the most fundamental change in the near-term future of visual effects involves the most crucial element in the entire filmmaking process: the camera itself. Lytro Cinema — developed by former Stanford University researcher Ren Ng — is a camera system that uses light field technology. A light field camera not only captures the intensity of light in a scene, which is the information captured by conventional cameras, but also records the direction the light rays are travelling in space. [Jon Karafin] "Lytro Cinema offers the ability to focus anywhere in your scene. You have the infinite ability to focus and create any aperture or any depth of field. With depth screen it's as if you had a green screen for every object, but it's not just limited to any one object, it's anywhere in space." So, what’s the catch? The amount of data captured by the camera is huge. [Lytro employee] “Lytro Cinema is the highest resolution video sensor ever designed. Every frame has 755 RAW megapixels, 16 stops of dynamic range, and we can record at up to 300 frames per second.” [Bryce] The adoption of light field cameras will also accelerate thanks to the cloud. That’s exactly why Lytro partnered with The Foundry, the company behind the popular VFX software Nuke. Together, they created Elara, which centralizes every part of the post-production pipeline in the cloud: the files, the software, the computing, all of it. This allows editors to access and work on every aspect of a project from their web browser.
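To see how recording ray direction makes "focus anywhere" possible, consider the classic shift-and-add approach to synthetic refocusing: treat the light field as a grid of sub-aperture views, shift each view in proportion to its offset from the center of the aperture, and average. The sketch below is a minimal illustration of that idea; the 8x8 grid of views, the array shapes, and the alpha refocus parameter are assumptions made for the example, not Lytro's actual processing pipeline.

```python
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Synthetic refocus by shift-and-add.

    light_field: array of shape (U, V, H, W), one grayscale image per (u, v)
                 sub-aperture position. alpha selects the virtual focal plane
                 (alpha = 0 keeps the original focus).
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the aperture
            # center (integer-pixel approximation), then accumulate.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Example: an 8x8 grid of 512x512 views, refocused at two different depths.
lf = np.random.rand(8, 8, 512, 512)
near = refocus(lf, alpha=1.5)
far = refocus(lf, alpha=-1.5)
```

It also makes the data problem obvious: every frame stores many views of the scene instead of one image, which is why a 755-megapixel sensor running at up to 300 frames per second pushes the files, and the computing, toward the cloud.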


So that’s where we’re headed: massive amounts of visual effects applied to media captured without concern for green screens or much worry about lighting, created by completely mobile teams collaborating on projects in real time from all around the world. As the infrastructure of the Internet, and the speeds it can deliver, catches up to the current power of our computers, more and more creators will gain the ability to create and innovate, while also exploring and pushing the boundaries of virtual reality. Thanks for watching. What innovation in visual effects has you most excited for what’s to come? If you enjoyed this video, give it a like and subscribe so you can join us on the regular expeditions we’ll be taking into the future of all sorts of different industries. Until next time, for TDC, I’m Bryce Plank.
