SVAD: From Single Image to 3D Avatar via Synthetic Data Generation with Video Diffusion and Data Augmentation

SECERN AI
CVPR 2025 Workshop

Given a single image, we generate high-fidelity 3D avatars using synthetic data from video diffusion and data augmentation, maintaining identity consistency across novel poses and viewpoints while enabling real-time rendering.


Abstract

Creating high-quality animatable 3D human avatars from a single image remains a significant challenge in computer vision due to the inherent difficulty of reconstructing complete 3D information from a single viewpoint. Current approaches face a clear limitation: 3D Gaussian Splatting (3DGS) methods produce high-quality results but require multiple views or video sequences, while video diffusion models can generate animations from single images but struggle with consistency and identity preservation. We present SVAD, a novel approach that addresses these limitations by leveraging the complementary strengths of existing techniques. Our method generates synthetic training data through video diffusion, enhances it with identity preservation and image restoration modules, and utilizes this refined data to train 3DGS avatars. Comprehensive evaluations demonstrate that SVAD outperforms state-of-the-art (SOTA) single-image methods in maintaining identity consistency and fine details across novel poses and viewpoints, while enabling real-time rendering. Through our data augmentation pipeline, we overcome the dependency on dense monocular or multi-view training data typically required by traditional 3DGS approaches. Extensive quantitative and qualitative comparisons show our method achieves superior performance across multiple metrics against baseline models. By effectively combining the generative power of diffusion models with both the high-quality results and rendering efficiency of 3DGS, our work establishes a new approach for high-fidelity avatar generation from a single image input.


Pipeline


Problem Description

Starting from a single input image, our method generates pose-conditioned animations through video diffusion. However, directly using these frames yields poor results with inconsistent identity and details. These challenges are addressed through our data augmentation pipeline, producing high-fidelity animatable 3D avatars.
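The three stages above (video diffusion, refinement, 3DGS training) can be sketched as a minimal pipeline skeleton. All function names and data shapes here are illustrative stand-ins, not the authors' actual code or any real library API:

```python
# Hypothetical sketch of the SVAD data-augmentation pipeline.
# Each stage is a stub standing in for a full model.

def video_diffusion(image, poses):
    """Stand-in: generate one pose-conditioned frame per target pose.

    In the real pipeline these raw frames suffer identity drift
    and detail loss, which is why refinement follows.
    """
    return [{"pose": p, "frame": image} for p in poses]

def refine(frames, identity_ref):
    """Stand-in: identity preservation + image restoration modules."""
    return [dict(f, identity=identity_ref, restored=True) for f in frames]

def train_3dgs(frames):
    """Stand-in: fit a 3D Gaussian Splatting avatar to refined frames."""
    return {"avatar": "3dgs", "num_training_frames": len(frames)}

def svad(image, poses, identity_ref):
    raw = video_diffusion(image, poses)   # synthetic but inconsistent frames
    refined = refine(raw, identity_ref)   # restore identity and fine detail
    return train_3dgs(refined)            # dense supervision replaces multi-view capture

avatar = svad("input.png", poses=["walk", "turn", "wave"], identity_ref="id_embed")
```

The key design point is that the diffusion stage only has to supply *quantity* (dense pose coverage), while the refinement stage supplies *quality* (identity and detail), so the 3DGS trainer sees data comparable to a real monocular capture.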

Video

BibTeX

@inproceedings{SVAD,
    title={SVAD: From Single Image to 3D Avatar via Synthetic Data Generation with Video Diffusion and Data Augmentation},
    author={Choi, Yonwoo},
    booktitle={CVPR Workshop},
    year={2025}
}