Check out cutting-edge renderer Mitsuba 3
Tuesday, July 26th, 2022 | Posted by Jim Thacker
Research-oriented rendering system Mitsuba 3 can be compiled either for conventional forward rendering or for inverse rendering – taking a 2D image and reconstructing a 3D scene that matches it.
Its source code can be compiled into variants showcasing various cutting-edge rendering technologies, including two focused on inverse rendering.
Rather than taking a scene and rendering a 2D image from it, they take a 2D image and generate a 3D scene matching it.
The original Mitsuba was a highly modular rendering framework – its core could be extended with over 100 different plugins – showcasing experimental techniques not then available in commercial tools.
Take a 2D image and generate a 3D scene matching it
In Mitsuba 3, there are four default variants: two for conventional forward rendering, and two for inverse rendering – taking a 2D image and reconstructing the properties of a 3D scene that matches it.
You can see a nice summary in the video above: as well as conventional geometry, Mitsuba can reconstruct volumetrics, and even surfaces that would generate particular caustic lighting patterns.
Unlike differentiable rendering libraries such as PyTorch3D and TensorFlow Graphics, Mitsuba uses ray tracing rather than rasterisation; and unlike neural approaches, the reconstruction is physically based.
As a consequence, the results “aren’t tied to Mitsuba, and can be processed by many other tools”, including DCC applications like Blender and 3ds Max.
Needs some tech savvy to get the most out of it
Like its predecessors, Mitsuba 3 is a way to try cutting-edge rendering techniques before they make their way into production tools.
If you want to try it for yourself, be aware that it really is a research tool: as well as computer graphics, it’s intended for image analysis in fields like astronomy, microscopy and medical imaging.
Rendering can be done either on the GPU – Mitsuba uses the CUDA API, so you will need a compatible Nvidia GPU – or on the CPU, via LLVM.
Licensing and system requirements
Mitsuba 3 is compatible with Windows, Linux and macOS and requires Python 3.8+. For GPU rendering, you will need an Nvidia RTX GPU. It installs from the command line: installation instructions are available in the online documentation.
The source code is freely available: the licence is a custom copyright notice.
Tags: 3ds max, Blender, caustics, caustics reconstruction, CPU rendering, differentiable rendering, download, Dr Jit, experimental rendering techniques, forward rendering, free, generated 3D scene matching 2D image, GPU rendering, inverse rendering, LLVM, Mitsuba, Mitsuba 3, neural network, new rendering technologies, NVIDIA, open source, Python, PyTorch3D, reconstruct 3D scene from render, rendering framework, RTX, Source Code, spectral rendering, system requirements, TensorFlow Graphics, volumetric, volumetric reconstruction, Wenzel Jakob