Omniverse Audio2Face now generates facial blendshapes
Wednesday, January 5th, 2022 | Posted by Jim Thacker
Nvidia has released Omniverse Audio2Face 2021.3.2, the latest version of its experimental free AI-based software for generating facial animation from audio sources.
The release adds the option to generate a set of facial blendshapes spanning a wide range of expressions for a custom head model, then export them in USD format for editing in software like Maya or Blender.
Generate automatic lip-sync and facial animation for 3D characters from audio files
First released last year, Audio2Face is an AI-trained tool for generating facial animation for a 3D character from audio sources: either offline recordings of speech, or a live audio feed.
New options for controlling facial animation via a blendshape-based workflow
In the initial release, the only way to modify the animation Audio2Face generates was via post-processing parameters, but Nvidia has since begun implementing an alternative workflow based on facial blendshapes.
To that workflow, Audio2Face 2021.3.2 adds the option to generate a set of blendshapes for a custom head model.
The video above shows the software being used to transfer a set of 46 ready-made blendshapes covering a standard range of facial expressions from the stock Audio2Face head to a custom head.
The transfer process can be controlled by adjusting correspondence points, which identify the locations of the same facial features on the two head models.
The process preserves UVs, making it possible to reuse the original facial textures.
The resulting set of blendshapes can then be exported in USD format, making it possible to edit individual facial shapes in DCC applications capable of importing USD files, like Maya or Blender.
Other new features: new Streaming Audio Player, support for MetaHumans
Other new features in Audio2Face 2021.3.2 include a Streaming Audio Player, for streaming audio data into the software from external sources like text-to-speech applications via the gRPC protocol.
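The Streaming Audio Player consumes audio pushed to it in chunks over gRPC rather than from a file on disk. As a rough illustration of the pattern involved, the sketch below splits raw PCM audio into fixed-duration chunks the way a streaming client would before sending them to such a service. This is a conceptual, stdlib-only sketch: the chunk duration and the `chunk_pcm` helper are illustrative assumptions, not part of Audio2Face's actual gRPC API.

```python
# Conceptual sketch only: splitting raw PCM audio into fixed-duration
# chunks, the general pattern a streaming client would use before pushing
# audio to a service over gRPC. chunk_pcm() is a hypothetical helper, not
# part of the Audio2Face API.

def chunk_pcm(pcm: bytes, sample_rate: int, chunk_ms: int = 100,
              sample_width: int = 2, channels: int = 1):
    """Yield successive chunks of raw PCM, each covering chunk_ms of audio."""
    bytes_per_chunk = sample_rate * sample_width * channels * chunk_ms // 1000
    for start in range(0, len(pcm), bytes_per_chunk):
        yield pcm[start:start + bytes_per_chunk]

# One second of silent 16-bit mono audio at 16 kHz, for demonstration.
sample_rate = 16000
pcm = b"\x00\x00" * sample_rate  # 16,000 samples x 2 bytes each

chunks = list(chunk_pcm(pcm, sample_rate))
print(len(chunks))  # 10 chunks of 100 ms each
```

In a real client, each chunk would become the payload of one streamed gRPC message, letting the receiving application begin animating before the full recording (or text-to-speech output) is available.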
Since we last wrote about the software, Audio2Face 2021.3.1 added the option to use Audio2Face animations on MetaHumans: 3D characters generated with Epic Games’ free MetaHuman Creator app.
You can see the workflow for transferring facial animation data from Audio2Face to a MetaHuman inside Unreal Engine via the Omniverse Unreal Engine 4 connector in this video.
Pricing and system requirements
Omniverse Audio2Face is available for Windows 10. It requires an Nvidia RTX GPU: the firm recommends a GeForce RTX 3070 or RTX A4000 or higher. All of the Omniverse tools are free to individual artists.
Tags: AI-based, AI-trained, animation, Audio2Face, Blender, blendshape, export facial blendshapes, facial animation, facial shapes, free, game asset, game character, generate facial animation from audio file, generate facial blendshapes, lip sync, machinima, Maya, MetaHuman, MetaHuman Creator, new features, NVIDIA, Omniverse, Omniverse Audio2Face, Omniverse Audio2Face 2021.0, Omniverse Audio2Face 2021.3.1, Omniverse Audio2Face 2021.3.2, Omniverse Machinima, system requirements, UE4, Universal Scene Description, Unreal Engine, USD, USDSkel, UVs