Artificial intelligence in the creative industries: a review

N Anantrasirichai, D Bull - Artificial Intelligence Review, 2022 - Springer
This paper reviews the current state of the art in artificial intelligence (AI) technologies and
applications in the context of the creative industries. A brief background of AI, and …

4D Gaussian splatting for real-time dynamic scene rendering

G Wu, T Yi, J Fang, L Xie, X Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Representing and rendering dynamic scenes has been an important but challenging task.
In particular, when accurately modeling complex motions, high efficiency is usually hard to guarantee …

Generative novel view synthesis with 3D-aware diffusion models

ER Chan, K Nagano, MA Chan… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a diffusion-based model for 3D-aware generative novel view synthesis from as
few as a single input image. Our model samples from the distribution of possible renderings …

MERF: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes

C Reiser, R Szeliski, D Verbin, P Srinivasan… - ACM Transactions on …, 2023 - dl.acm.org
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However,
existing radiance field representations are either too compute-intensive for real-time …

MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures

Z Chen, T Funkhouser, P Hedman… - Proceedings of the …, 2023 - openaccess.thecvf.com
Neural Radiance Fields (NeRFs) have demonstrated an amazing ability to synthesize
images of 3D scenes from novel views. However, they rely upon specialized volumetric …

DynIBaR: Neural dynamic image-based rendering

Z Li, Q Wang, F Cole, R Tucker… - Proceedings of the …, 2023 - openaccess.thecvf.com
We address the problem of synthesizing novel views from a monocular video depicting a
complex dynamic scene. State-of-the-art methods based on temporally varying Neural …

F²-NeRF: Fast neural radiance field training with free camera trajectories

P Wang, Y Liu, Z Chen, L Liu, Z Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
This paper presents a novel grid-based NeRF called F²-NeRF (Fast-Free-NeRF) for novel
view synthesis, which enables arbitrary input camera trajectories and only costs a few …

Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction

C Sun, M Sun, HT Chen - … of the IEEE/CVF conference on …, 2022 - openaccess.thecvf.com
We present a super-fast convergence approach to reconstructing the per-scene radiance
field from a set of images that capture the scene with known poses. This task, which is often …
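
As a side note on this family of entries: per-scene radiance field reconstruction of the kind described here is typically posed as fitting a representation to the standard NeRF volume rendering model. A minimal sketch in LaTeX; the symbols (C, r, sigma, c, T) follow the common formulation and are not taken from this particular paper:

    C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
    \qquad T(t) = \exp\!\left(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\right)

In the discretized form used for optimization, rays r come from the posed training images, and the scene representation (a voxel grid in this case) is fit by minimizing the difference between the rendered color C(r) and the observed pixel color.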

MVImgNet: A large-scale dataset of multi-view images

X Yu, M Xu, Y Zhang, H Liu, C Ye… - Proceedings of the …, 2023 - openaccess.thecvf.com
Being data-driven is one of the most iconic properties of deep learning algorithms. The birth
of ImageNet drives a remarkable trend of "learning from large-scale data" in computer vision …

Robust dynamic radiance fields

YL Liu, C Gao, A Meuleman… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dynamic radiance field reconstruction methods aim to model the time-varying structure and
appearance of a dynamic scene. Existing methods, however, assume that accurate camera …