Phidias : A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion

Intern at Shanghai AI Lab *Equal Contribution
1City University of Hong Kong, 2Shanghai AI Lab, 3Chinese University of Hong Kong, 4S-Lab, Nanyang Technological University

Phidias supports reference-augmented image-to-3D, text-to-3D, and 3D-to-3D generation, where the 3D reference can be obtained via retrieval or specified by users.

More Results

Phidias generates high-quality 3D assets in just a few seconds given an input 3D reference.

Abstract

In 3D modeling, designers often use an existing 3D model as a reference to create new ones. This practice has inspired the development of Phidias, a novel generative model that uses diffusion for reference-augmented 3D generation. Given an image, our method leverages a retrieved or user-provided 3D reference model to guide the generation process, thereby enhancing the generation quality, generalization ability, and controllability. Our model integrates three key components: 1) meta-ControlNet that dynamically modulates the conditioning strength, 2) dynamic reference routing that mitigates misalignment between the input image and 3D reference, and 3) self-reference augmentations that enable self-supervised training with a progressive curriculum. Collectively, these designs result in a clear improvement over existing methods. Phidias establishes a unified framework for 3D generation using text, image, and 3D conditions with versatile applications.

Approach Overview


Given one concept image, we aim to leverage an additional 3D reference model to alleviate the 3D inconsistency and geometric ambiguity issues that exist in 3D generation. The 3D reference model can either be provided by the user or retrieved from a large 3D database, depending on the application. The overall pipeline of Phidias, shown above, involves two stages: (1) reference-augmented multi-view generation and (2) sparse-view 3D reconstruction.
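The two-stage flow above can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: every function name is hypothetical, and the diffusion and reconstruction stages are replaced by dummy stubs that only show how the data flows between them.

```python
import numpy as np

# Hypothetical sketch of the two-stage Phidias pipeline.
# All names are illustrative; real stages would be neural networks.

def retrieve_reference(concept_image: np.ndarray) -> np.ndarray:
    """Stand-in for retrieving a 3D reference from a large 3D database
    (here: a dummy 32^3 voxel grid)."""
    return np.zeros((32, 32, 32))

def multiview_generation(image: np.ndarray, reference: np.ndarray,
                         n_views: int = 4) -> list:
    """Stage 1: reference-augmented multi-view generation.
    A real system runs a diffusion model conditioned on both the
    concept image and the 3D reference; here we return dummy views."""
    return [np.zeros_like(image) for _ in range(n_views)]

def sparse_view_reconstruction(views: list) -> np.ndarray:
    """Stage 2: sparse-view 3D reconstruction from the generated
    views (dummy voxel output)."""
    return np.zeros((32, 32, 32))

def generate_3d(concept_image: np.ndarray,
                reference: np.ndarray = None) -> np.ndarray:
    # The reference can be user-provided or retrieved automatically.
    if reference is None:
        reference = retrieve_reference(concept_image)
    views = multiview_generation(concept_image, reference)
    return sparse_view_reconstruction(views)

image = np.zeros((256, 256, 3))
model_3d = generate_3d(image)
print(model_3d.shape)  # (32, 32, 32)
```

The key design point the sketch captures is that the 3D reference enters only stage 1, where it conditions the multi-view generator; stage 2 then reconstructs geometry purely from the generated sparse views.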

More Applications

Interactive Generation with Coarse Guidance

Using Phidias, you can continually adjust the geometry of generated 3D models by using manually created coarse 3D shapes as reference models.

High-Fidelity 3D Completion

Phidias precisely predicts and fills in the missing parts of an incomplete 3D model while maintaining the integrity and details of the original.


Comparisons

Qualitative comparisons with state-of-the-art methods on image-to-3D generation. Phidias generates 3D models with higher quality and better controllability (page 1). Results on page 2 show that Phidias generalizes better to inputs with atypical viewpoints.

Demo Video

BibTeX

@article{wang2024phidias,
  title={Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion},
  author={Zhenwei Wang and Tengfei Wang and Zexin He and Gerhard Hancke and Ziwei Liu and Rynson W.H. Lau},
  eprint={2409.11406},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  year={2024},
  url={https://arxiv.org/abs/2409.11406},
}