GeneMAN: Generalizable Single-Image 3D Human Reconstruction from Multi-Source Human Data

1Shanghai AI Laboratory, 2Peking University, 3Nanyang Technological University, 4Shanghai Jiao Tong University
*Equal Contribution, Corresponding Author


GeneMAN is a generalizable framework for single-image 3D human reconstruction, built on a collection of multi-source human data. Given a single in-the-wild image of a person, GeneMAN can reconstruct a high-quality 3D human model, regardless of the person's clothing, pose, or body proportions in the given image (e.g., a full-body, half-body, or close-up shot).

Abstract

Given a single in-the-wild human photo, it remains a challenging task to reconstruct a high-fidelity 3D human model. Existing methods face difficulties including a) the varying body proportions captured by in-the-wild human images; b) diverse personal belongings within the shot; and c) ambiguities in human postures and inconsistency in human textures. In addition, the scarcity of high-quality human data intensifies the challenge. To address these problems, we propose a Generalizable image-to-3D huMAN reconstruction framework, dubbed GeneMAN, built upon a comprehensive multi-source collection of high-quality human data, including 3D scans, multi-view videos, single photos, and our generated synthetic human data. GeneMAN comprises three key modules. 1) Without relying on parametric human models (e.g., SMPL), GeneMAN first trains a human-specific text-to-image diffusion model and a view-conditioned diffusion model, serving as GeneMAN's 2D and 3D human priors for reconstruction, respectively. 2) With the help of the pretrained human prior models, the Geometry Initialization-&-Sculpting pipeline is leveraged to recover high-quality 3D human geometry from a single image. 3) To achieve high-fidelity 3D human textures, GeneMAN employs the Multi-Space Texture Refinement pipeline, consecutively refining textures in the latent and pixel spaces. Extensive experimental results demonstrate that GeneMAN generates high-quality 3D human models from a single input image, outperforming prior state-of-the-art methods. Notably, GeneMAN exhibits much better generalizability on in-the-wild images, often yielding high-quality 3D human models in natural poses with common items, regardless of the body proportions in the input images.

Overview


Overview of the Multi-Source Human Dataset and Our GeneMAN Pipeline. We have constructed a multi-source human dataset comprising 3D scans, videos, 2D images, and synthetic data. This dataset is utilized to train human-specific 2D and 3D prior models, which provide generalizable geometric and texture priors for our GeneMAN framework. Through geometry initialization, sculpting, and multi-space texture refinement in GeneMAN, we achieve high-fidelity 3D human body reconstruction from single in-the-wild images.
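
To make the prior-training step concrete, the sketch below shows one denoising training step for a human-specific text-to-image prior. This page does not specify GeneMAN's base model or training recipe; the sketch assumes a Stable-Diffusion-style latent diffusion backbone loaded via Hugging Face diffusers, with image batches pooled from the multi-source dataset.

import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

BASE = "runwayml/stable-diffusion-v1-5"  # assumed backbone, not confirmed by the paper
vae = AutoencoderKL.from_pretrained(BASE, subfolder="vae").eval()
text_encoder = CLIPTextModel.from_pretrained(BASE, subfolder="text_encoder").eval()
tokenizer = CLIPTokenizer.from_pretrained(BASE, subfolder="tokenizer")
unet = UNet2DConditionModel.from_pretrained(BASE, subfolder="unet")  # fine-tuned on human data
scheduler = DDPMScheduler.from_pretrained(BASE, subfolder="scheduler")

def train_step(pixels, captions):
    """One denoising step; pixels in [-1, 1], drawn from scans/videos/photos/synthetic data."""
    with torch.no_grad():
        latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
        ids = tokenizer(captions, padding="max_length", truncation=True,
                        max_length=tokenizer.model_max_length,
                        return_tensors="pt").input_ids
        text_emb = text_encoder(ids).last_hidden_state
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
    return F.mse_loss(pred, noise)  # standard epsilon-prediction objective

The view-conditioned 3D prior would be trained analogously, with relative camera parameters added to the conditioning (in the spirit of Zero123-style models).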

Method


Geometry Initialization & Sculpting. During the geometry reconstruction stage, we initialize template-free geometry with NeRF, incorporating the GeneMAN 2D and 3D priors through SDS losses. Alongside this diffusion-based guidance, a reference loss ensures alignment with the input image. We then convert the NeRF into a DMTet representation for high-resolution refinement, guided by pretrained human-specific normal- and depth-adapted diffusion models.

Multi-Space Texture Refinement. In the texture generation stage, we propose multi-space texture refinement to optimize textures in both the latent space and the pixel space. We first generate coarse textures via multi-view texturing and iteratively refine them in latent space. Detailed textures are then obtained by optimizing the UV map in pixel space with a 2D prior-based ControlNet.
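
To make the geometry-stage guidance concrete, below is a minimal sketch of an SDS loss as used to distill a frozen diffusion prior into the NeRF, together with the combined objective including the reference loss. The weights (lam_2d, lam_3d, lam_ref), the timestep range, and the guidance scale are illustrative assumptions, not values from the paper; the 3D prior term would additionally condition on the relative camera pose.

import torch
import torch.nn.functional as F

def sds_loss(unet, scheduler, latents, cond_emb, uncond_emb, cfg_scale=50.0):
    """Score Distillation Sampling against a frozen diffusion prior."""
    b = latents.shape[0]
    t = torch.randint(50, 950, (b,), device=latents.device)  # illustrative timestep range
    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)
    with torch.no_grad():  # the prior is frozen; gradients flow only into `latents`
        eps = unet(torch.cat([noisy] * 2), torch.cat([t] * 2),
                   encoder_hidden_states=torch.cat([uncond_emb, cond_emb])).sample
        eps_uncond, eps_cond = eps.chunk(2)
        eps = eps_uncond + cfg_scale * (eps_cond - eps_uncond)  # classifier-free guidance
    w = (1.0 - scheduler.alphas_cumprod.to(latents.device)[t]).view(-1, 1, 1, 1)
    grad = w * (eps - noise)
    target = (latents - grad).detach()  # reparameterized so d(loss)/d(latents) == grad
    return 0.5 * F.mse_loss(latents, target, reduction="sum")

# Combined objective for one NeRF optimization step (weights are assumptions):
# loss = (lam_2d * sds_loss(unet_2d, sched, latents, txt_emb, neg_emb)
#         + lam_3d * sds_loss(unet_3d, sched, latents, view_emb, neg_emb)
#         + lam_ref * (F.mse_loss(render_rgb, ref_rgb) + F.mse_loss(render_mask, ref_mask)))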

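The pixel-space phase of the texture refinement can be sketched as direct optimization of the UV map against ControlNet-refined renders. Everything below (the resolutions, the identity controlnet_refine stub, the random UV grid) is a placeholder: in practice the refined target would come from a ControlNet-conditioned diffusion pipeline (e.g., diffusers' StableDiffusionControlNetImg2ImgPipeline conditioned on rendered normals or depth), and the per-view UV grids from a rasterizer such as nvdiffrast.

import torch
import torch.nn.functional as F

def controlnet_refine(view_rgb, cond=None):
    # Placeholder: run a few img2img denoising steps of a ControlNet-conditioned
    # diffusion model on the rendered view; the identity keeps the sketch runnable.
    return view_rgb

texture = torch.nn.Parameter(torch.rand(1, 3, 1024, 1024))  # stand-in for the coarse UV map
uv_grid = torch.rand(1, 512, 512, 2) * 2 - 1                # stand-in rasterized per-pixel UVs
opt = torch.optim.Adam([texture], lr=1e-2)

for step in range(200):
    # Differentiable "render": sample the UV map at this view's rasterized UV coordinates.
    rgb = F.grid_sample(texture, uv_grid, align_corners=False)
    target = controlnet_refine(rgb.detach())  # diffusion-refined view, detached from the graph
    loss = F.mse_loss(rgb, target)            # pixel-space loss back-propagates into the UV map
    opt.zero_grad()
    loss.backward()
    opt.step()
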
Experiment

Quantitative Comparison


Qualitative Comparison


More Qualitative Results

Geometric Comparison on In-the-Wild Data

Comparison on In-the-Wild Data

Comparison on CAPE Data