PoseTalk: Text-and-Audio-based Pose Control and Motion Refinement for One-Shot Talking Head Generation

Jun Ling, Yiwen Wang, Han Xue, Rong Xie, Li Song
Shanghai Jiao Tong University

Key features of our PoseTalk. Our method synthesizes talking face videos from a source image, driving audio, and driving poses. The driving poses can be taken from the fixed pose of the source image, reference poses from other talking videos, or poses generated from text and audio.

Abstract

While previous audio-driven talking head generation (THG) methods generate head poses from the driving audio, the generated poses or lips either fail to match the audio well or are not editable. In this study, we propose PoseTalk, a THG system that freely generates lip-synchronized talking head videos with controllable head poses conditioned on text prompts and audio. The core insight of our method is to use head pose to connect visual, lingual, and audio signals. First, we propose to generate poses from both audio and text prompts, where the audio offers short-term variations and rhythmic correspondence of the head movements, while the text prompts describe the long-term semantics of head motions. To achieve this, we devise a Pose Latent Diffusion (PLD) model that generates motion latents from text prompts and audio cues in a pose latent space. Second, we observe a loss-imbalance problem: the lip region contributes less than 4% of the total reconstruction loss, which is driven by both pose and lip motion, so optimization leans toward head movements rather than lip shapes. To address this issue, we propose a refinement-based learning strategy that synthesizes natural talking videos with two cascaded networks, CoarseNet and RefineNet. CoarseNet estimates coarse motions to produce animated images in novel poses, while RefineNet focuses on finer lip motions by progressively estimating them from low to high resolutions, yielding improved lip synchronization. Experiments demonstrate that our pose prediction strategy achieves better pose diversity and realism than text-only or audio-only conditioning, and that our video generator outperforms state-of-the-art methods in synthesizing talking videos with natural head motions.
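To make the loss-imbalance observation concrete, here is a minimal PyTorch sketch (not the paper's training code) that measures how much of an unreduced L1 reconstruction loss falls inside a lip-region mask; the tensor shapes, the rectangular mask, and the helper name lip_loss_fraction are assumptions used only for illustration.

```python
import torch

def lip_loss_fraction(pred, target, lip_mask):
    """Fraction of the total (unreduced) L1 reconstruction loss that falls
    inside the lip region. pred/target: (B, C, H, W); lip_mask: (B, 1, H, W)."""
    per_pixel = (pred - target).abs()
    total = per_pixel.sum()
    lip = (per_pixel * lip_mask).sum()
    return (lip / total.clamp(min=1e-8)).item()

# Toy example: random frames and a hypothetical rectangular lip mask; in
# practice the mask would be derived from facial landmarks.
pred = torch.rand(2, 3, 256, 256)
target = torch.rand(2, 3, 256, 256)
lip_mask = torch.zeros(2, 1, 256, 256)
lip_mask[:, :, 180:220, 96:160] = 1.0
# With an unweighted pixel loss, the lip share is roughly the lip area
# fraction of the frame, which is why it stays at only a few percent.
print(f"lip-region share of loss: {lip_loss_fraction(pred, target, lip_mask):.1%}")
```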

Method


Overview of our pose diffusion and talking face video generation. (a) During training, the pose latent diffusion model operates in the pose latent space learned by a VAE; the denoising process is conditioned on the text embedding, diffusion timesteps, and audio features. (b) Given a source image, audio features, and extracted or predicted pose/gaze features, the video generator progressively estimates finer motions and produces lip-synced talking videos. (c) Our inference pipeline consists of two modules: 1) the pose generation module generates diverse poses guided by the audio features and text prompts; 2) the refinement-based video generator synthesizes lip-synchronized talking videos given the input audio features, poses, and source frame. Note that the poses can also be sampled from template poses.
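To summarize how the pieces fit together at inference time, the self-contained PyTorch sketch below mirrors panel (c): the pose latent diffusion model denoises a pose latent conditioned on text and audio, the pose VAE decodes it into a pose sequence, and the cascaded CoarseNet/RefineNet produce the video. All class definitions, interfaces, and tensor shapes are simplified stand-ins for illustration, not the released PoseTalk API.

```python
import torch
import torch.nn as nn

# Stub modules so the sketch runs end-to-end. The real PoseTalk networks are
# far more elaborate; these only mirror the interfaces sketched in panel (c).
class PoseVAE(nn.Module):
    latent_dim = 16
    def decode(self, z):
        # Map a pose latent sequence to a 6-DoF head-pose sequence (toy decoder).
        return torch.tanh(z[..., :6])

class PoseLatentDiffusion(nn.Module):
    def denoise_step(self, z, t, text_emb, audio):
        # Placeholder denoiser: nudge the latent toward a text/audio-dependent value.
        cond = text_emb.mean() + audio.mean()
        return z - 0.02 * (z - cond)

class CoarseNet(nn.Module):
    def forward(self, src, poses, audio):
        # Produce coarsely animated frames in the target poses (toy: repeat source).
        frames = src.unsqueeze(1).repeat(1, poses.shape[1], 1, 1, 1)
        return frames, poses

class RefineNet(nn.Module):
    def forward(self, coarse_frames, coarse_motion, audio):
        # Refine lip motion on top of the coarse result (toy: identity).
        return coarse_frames

@torch.no_grad()
def generate_talking_video(source_img, audio_feats, text_emb,
                           pld, pose_vae, coarse_net, refine_net, num_steps=50):
    """Inference flow of panel (c): sample a pose latent with PLD conditioned on
    text and audio, decode it to a pose sequence, then run the cascaded
    coarse-to-fine video generator."""
    z = torch.randn(1, audio_feats.shape[1], pose_vae.latent_dim)  # noisy pose latent
    for t in reversed(range(num_steps)):
        z = pld.denoise_step(z, t, text_emb=text_emb, audio=audio_feats)
    poses = pose_vae.decode(z)                                     # head-pose sequence
    coarse_frames, coarse_motion = coarse_net(source_img, poses, audio_feats)
    return refine_net(coarse_frames, coarse_motion, audio_feats)

# Toy inputs: one 256x256 source image, 100 audio frames, one text embedding.
video = generate_talking_video(
    torch.rand(1, 3, 256, 256), torch.rand(1, 100, 80), torch.rand(1, 512),
    PoseLatentDiffusion(), PoseVAE(), CoarseNet(), RefineNet())
print(video.shape)  # torch.Size([1, 100, 3, 256, 256])
```

The call order reflects the refinement-based strategy described above: coarse pose-driven motion is estimated first, and lip-focused refinement is applied on top of it.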

Video Results

Comparisons

PoseTalk achieves high lip-sync quality and comparable or better visual quality compared to state-of-the-art methods.

The impact of Motion Refinement

The driving audio is not paired with the source identity.

The impact of Text Prompts

The driving audio is not paired with the source identity.

Experimental Results
