ConsistentAvatar: Learning to Diffuse Fully Consistent Talking Head Avatar with Temporal Guidance


ACMMM 2024 Oral

Abstract

Diffusion models have shown impressive potential for talking head generation. Although they achieve plausible appearance and talking effects, existing methods still suffer from temporal, 3D, or expression inconsistency due to error accumulation and the inherent limitations of single-image generation. In this paper, we propose ConsistentAvatar, a novel framework for fully consistent and high-fidelity talking avatar generation. Instead of directly applying multi-modal conditions to the diffusion process, our method first learns to model the temporal representation that keeps adjacent frames stable. Specifically, we propose a Temporally-Sensitive Detail (TSD) map containing high-frequency features and contours that vary significantly along the time axis. Using a temporally consistent diffusion module, we learn to align the TSD of the initial result with that of the ground-truth video frame. The final avatar is generated by a fully consistent diffusion module conditioned on the aligned TSD, the rough head normal, and an emotion prompt embedding. We find that the aligned TSD, which represents the temporal patterns, constrains the diffusion process to generate a temporally stable talking head. Furthermore, its reliable guidance complements the inaccuracy of other conditions, suppressing accumulated error while improving consistency in various aspects. Extensive experiments demonstrate that ConsistentAvatar outperforms state-of-the-art methods in appearance, 3D, expression, and temporal consistency.

Methodology

Pipeline overview

ConsistentAvatar begins with the highly efficient INSTA method, using its outputs as initial results (Stage 1). To address temporal consistency, we introduce the Temporally-Sensitive Detail (TSD) map, derived through Fourier transformation. Extracting TSD from both the coarse RGB output of INSTA and the target video frame, we develop a temporal consistency diffusion model that aligns the input TSD with the accurate one (Stage 2). Subsequently, we employ the coarse normal output of INSTA as a 3D-aware condition and introduce an emotion selection module to generate emotion embeddings for each frame. By integrating the aligned TSD, normal, and emotion embeddings as conditioning factors, we propose a fully consistent diffusion model that generates the final avatars (Stage 3).
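As a concrete illustration of the TSD extraction step, the sketch below high-pass filters an image in the Fourier domain so that only high-frequency detail such as edges and fine contours remains. This is a minimal PyTorch sketch under assumed conventions; the cutoff_ratio hyper-parameter and the circular filter shape are illustrative choices, not values specified by the paper.

import torch


def extract_tsd(rgb: torch.Tensor, cutoff_ratio: float = 0.1) -> torch.Tensor:
    """Keep only high-frequency detail of an image via a Fourier high-pass filter.

    Illustrative stand-in for the paper's TSD map. `cutoff_ratio` is a
    hypothetical hyper-parameter, not taken from the paper.
    rgb: (B, C, H, W) image tensor.
    """
    _, _, H, W = rgb.shape
    # 2D FFT per channel, with the zero-frequency component shifted to the center.
    freq = torch.fft.fftshift(torch.fft.fft2(rgb), dim=(-2, -1))

    # Circular low-frequency mask around the center; invert it to keep high frequencies.
    yy, xx = torch.meshgrid(
        torch.arange(H, device=rgb.device),
        torch.arange(W, device=rgb.device),
        indexing="ij",
    )
    cy, cx = H // 2, W // 2
    radius = cutoff_ratio * min(H, W)
    high_pass = (((yy - cy) ** 2 + (xx - cx) ** 2) > radius ** 2).float()

    # Remove low frequencies and return to the image domain.
    return torch.fft.ifft2(torch.fft.ifftshift(freq * high_pass, dim=(-2, -1))).real

Along the same lines, the following sketch shows one way the Stage 3 conditions (aligned TSD map, coarse normal map, and per-frame emotion embedding) could be fused into a single spatial condition for the denoising network. The channel widths and the FiLM-style modulation are assumptions of this sketch, not the paper's exact design.

import torch
import torch.nn as nn


class ConditionFusion(nn.Module):
    """Fuse aligned TSD, coarse normals, and an emotion embedding into one
    spatial condition map (hypothetical layer sizes, for illustration only)."""

    def __init__(self, emo_dim: int = 512, cond_channels: int = 64):
        super().__init__()
        # Spatial conditions: TSD (3 channels) + normals (3 channels) -> feature map.
        self.spatial = nn.Conv2d(6, cond_channels, kernel_size=3, padding=1)
        # Emotion embedding -> per-channel scale and shift.
        self.emo_to_film = nn.Linear(emo_dim, 2 * cond_channels)

    def forward(self, tsd, normal, emo_emb):
        feat = self.spatial(torch.cat([tsd, normal], dim=1))      # (B, C, H, W)
        scale, shift = self.emo_to_film(emo_emb).chunk(2, dim=1)  # (B, C) each
        return feat * (1 + scale[..., None, None]) + shift[..., None, None]


# The fused map would then condition the denoising network at every
# diffusion step (assumed interface, for illustration only).
cond = ConditionFusion()(torch.rand(1, 3, 256, 256),
                         torch.rand(1, 3, 256, 256),
                         torch.rand(1, 512))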

Comparisons with Other Methods

Qualitative video comparisons (three sequences), each panel showing: Ground Truth, Our Method, DiffusionRig, INSTA, and EDTalk.

Citation

If you found this repo helpful to your work, please consider citing us:

@misc{yang2024consistentavatarlearningdiffusefully,
      title={ConsistentAvatar: Learning to Diffuse Fully Consistent Talking Head Avatar with Temporal Guidance}, 
      author={Haijie Yang and Zhenyu Zhang and Hao Tang and Jianjun Qian and Jian Yang},
      year={2024},
      eprint={2411.15436},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.15436},
}