3DShape2VecSet: A 3D shape representation for neural fields and generative diffusion models

B Zhang, J Tang, M Niessner, P Wonka - ACM Transactions on Graphics …, 2023 - dl.acm.org
We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for
generative diffusion models. Our shape representation can encode 3D shapes given as …

SDF‐StyleGAN: Implicit SDF‐Based StyleGAN for 3D Shape Generation

X Zheng, Y Liu, P Wang, X Tong - Computer Graphics Forum, 2022 - Wiley Online Library
We present a StyleGAN2‐based deep learning approach for 3D shape generation, called
SDF‐StyleGAN, with the aim of reducing visual and geometric dissimilarity between …

Learning generative vision transformer with energy-based latent space for saliency prediction

J Zhang, J Xie, N Barnes, P Li - Advances in Neural …, 2021 - proceedings.neurips.cc
Vision transformer networks have shown superiority in many computer vision tasks. In this
paper, we take a step further by proposing a novel generative vision transformer with latent …

3DILG: Irregular latent grids for 3D generative modeling

B Zhang, M Nießner, P Wonka - Advances in Neural …, 2022 - proceedings.neurips.cc
We propose a new representation for encoding 3D shapes as neural fields. The
representation is designed to be compatible with the transformer architecture and to benefit …

Deep generative models on 3D representations: A survey

Z Shi, S Peng, Y Xu, A Geiger, Y Liao… - arXiv preprint arXiv …, 2022 - arxiv.org
Generative models aim to learn the distribution of observed data by generating new
instances. With the advent of neural networks, deep generative models, including variational …

Joint-MAE: 2D-3D joint masked autoencoders for 3D point cloud pre-training

Z Guo, R Zhang, L Qiu, X Li, PA Heng - arXiv preprint arXiv:2302.14007, 2023 - arxiv.org
Masked Autoencoders (MAE) have shown promising performance in self-supervised
learning for both 2D and 3D computer vision. However, existing MAE-style methods can only …

Learning energy-based prior model with diffusion-amortized MCMC

P Yu, Y Zhu, S Xie, XS Ma, R Gao… - Advances in Neural …, 2023 - proceedings.neurips.cc
Latent space EBMs, also known as energy-based priors, have drawn growing interest in
the field of generative modeling due to their flexibility in formulation and strong modeling …

BEEF: Bi-compatible class-incremental learning via energy-based expansion and fusion

FY Wang, DW Zhou, L Liu, HJ Ye, Y Bian… - The eleventh …, 2022 - drive.google.com
Neural networks suffer from catastrophic forgetting when sequentially learning tasks phase-
by-phase, making them inapplicable in dynamically updated systems. Class-incremental …

Generative PointNet: Deep energy-based learning on unordered point sets for 3D generation, reconstruction and classification

J Xie, Y Xu, Z Zheng, SC Zhu… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We propose a generative model of unordered point sets, such as point clouds, in the form
of an energy-based model, where the energy function is parameterized by an input …

A tale of two flows: Cooperative learning of Langevin flow and normalizing flow toward energy-based model

J Xie, Y Zhu, J Li, P Li - arXiv preprint arXiv:2205.06924, 2022 - arxiv.org
This paper studies the cooperative learning of two generative flow models, in which the two
models are iteratively updated based on the jointly synthesized examples. The first flow …