A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2024 - Springer
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …
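
A concrete instance helps pin down what test-time adaptation means in this setting. The sketch below shows one representative recipe frequently discussed in this line of work, entropy minimization on unlabeled test batches while updating only normalization parameters (in the spirit of Tent); it is a minimal PyTorch illustration under that assumption, and configure_for_tta / adapt_step are hypothetical helper names, not anything from the survey.

    import torch
    import torch.nn as nn

    def configure_for_tta(model: nn.Module):
        """Freeze all weights except normalization affine parameters, a common
        choice for online test-time adaptation."""
        model.train()  # BatchNorm layers then use test-batch statistics
        params = []
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                for p in (m.weight, m.bias):
                    if p is not None:
                        p.requires_grad_(True)
                        params.append(p)
            else:
                for p in m.parameters(recurse=False):
                    p.requires_grad_(False)
        return params

    def adapt_step(model, x, optimizer):
        """One adaptation step on an unlabeled test batch: minimize prediction entropy."""
        logits = model(x)
        log_p = logits.log_softmax(dim=1)
        entropy = -(log_p.exp() * log_p).sum(dim=1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
        return logits.detach()

Typical usage: opt = torch.optim.SGD(configure_for_tta(model), lr=1e-3), then call adapt_step(model, batch, opt) on each incoming test batch and keep the returned logits as the predictions for that batch.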

State of the Art in Dense Monocular Non‐Rigid 3D Reconstruction

E Tretschk, N Kairanda, M BR, R Dabral… - Computer Graphics …, 2023 - Wiley Online Library
3D reconstruction of deformable (or non‐rigid) scenes from a set of monocular 2D
image observations is a long‐standing and actively researched area of computer vision and …

DynIBaR: Neural dynamic image-based rendering

Z Li, Q Wang, F Cole, R Tucker… - Proceedings of the …, 2023 - openaccess.thecvf.com
We address the problem of synthesizing novel views from a monocular video depicting a
complex dynamic scene. State-of-the-art methods based on temporally varying Neural …

NoPe-NeRF: Optimising neural radiance field with no pose prior

W Bian, Z Wang, K Li, JW Bian… - Proceedings of the …, 2023 - openaccess.thecvf.com
Training a Neural Radiance Field (NeRF) without pre-computed camera poses is
challenging. Recent advances in this direction demonstrate the possibility of jointly …
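
The "jointly" here refers to optimizing the camera poses together with the radiance field itself. The sketch below illustrates that generic joint setup only, not the paper's specific formulation: learnable per-frame axis-angle poses, a toy radiance field, and a photometric loss that backpropagates into both. TinyNeRF, render, and train_step are names invented for this illustration.

    import torch
    import torch.nn as nn

    def hat(v):
        """Skew-symmetric matrix of a 3-vector, built with stack so gradients flow."""
        z = torch.zeros((), dtype=v.dtype, device=v.device)
        return torch.stack([torch.stack([z, -v[2], v[1]]),
                            torch.stack([v[2], z, -v[0]]),
                            torch.stack([-v[1], v[0], z])])

    def axis_angle_to_R(w):
        """Rodrigues' formula: axis-angle vector -> rotation matrix."""
        theta = w.norm() + 1e-8
        K = hat(w / theta)
        I = torch.eye(3, dtype=w.dtype, device=w.device)
        return I + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

    class TinyNeRF(nn.Module):
        """Toy radiance field: 3D point -> (RGB, density)."""
        def __init__(self, hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 4))

        def forward(self, pts):
            out = self.mlp(pts)
            return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

    def render(nerf, rays_o, rays_d, near=0.1, far=4.0, n_samples=64):
        """Plain volume rendering along each ray."""
        t = torch.linspace(near, far, n_samples, device=rays_o.device)
        pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]
        rgb, sigma = nerf(pts)
        alpha = 1 - torch.exp(-sigma * (far - near) / n_samples)
        T = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1 - alpha[:, :-1] + 1e-10], dim=1), dim=1)
        return ((alpha * T)[..., None] * rgb).sum(dim=1)

    # Per-frame pose parameters are optimized jointly with the field weights.
    num_frames = 30
    rot = nn.Parameter(torch.zeros(num_frames, 3))    # axis-angle per frame
    trans = nn.Parameter(torch.zeros(num_frames, 3))  # translation per frame
    nerf = TinyNeRF()
    opt = torch.optim.Adam([{'params': nerf.parameters(), 'lr': 5e-4},
                            {'params': [rot, trans], 'lr': 1e-3}])

    def train_step(frame_idx, dirs_cam, target_rgb):
        """dirs_cam: (N, 3) ray directions in camera space; target_rgb: (N, 3) pixel colors."""
        R = axis_angle_to_R(rot[frame_idx])
        rays_d = dirs_cam @ R.T                        # rotate rays into world space
        rays_o = trans[frame_idx].expand_as(rays_d)    # camera center as ray origin
        loss = ((render(nerf, rays_o, rays_d) - target_rgb) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

Giving the pose parameters and the field weights separate learning rates is a common choice when the two parameter sets have very different scales and sensitivities.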

Test-time training with masked autoencoders

Y Gandelsman, Y Sun, X Chen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Test-time training adapts to a new test distribution on the fly by optimizing a model for each
test input using self-supervision. In this paper, we use masked autoencoders for this one …
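
A minimal sketch of that per-sample procedure, under assumed module names (encoder, recon_head, and cls_head are hypothetical, and the patch-zeroing mask is a crude stand-in for real MAE masking): a copy of the encoder is optimized on a masked-reconstruction loss for the single test image, and the adapted copy is then used for the prediction.

    import copy
    import torch
    import torch.nn.functional as F

    def test_time_train(encoder, recon_head, cls_head, x,
                        steps=10, lr=1e-3, mask_ratio=0.75, patch=16):
        """Adapt a copy of `encoder` to one test image x of shape (1, C, H, W) with a
        masked-reconstruction objective, then classify with the adapted encoder."""
        enc = copy.deepcopy(encoder)            # per-sample copy; the original stays untouched
        opt = torch.optim.SGD(enc.parameters(), lr=lr)
        B, C, H, W = x.shape
        for _ in range(steps):
            # zero out a random fraction of patches (stand-in for MAE-style masking)
            keep = (torch.rand(B, 1, H // patch, W // patch, device=x.device) > mask_ratio).float()
            keep = F.interpolate(keep, size=(H, W), mode='nearest')
            recon = recon_head(enc(x * keep))
            masked = 1 - keep
            loss = ((recon - x) ** 2 * masked).sum() / (masked.sum() * C + 1e-8)
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            return cls_head(enc(x))             # prediction from the adapted encoder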

DROID-SLAM: Deep visual SLAM for monocular, stereo, and RGB-D cameras

Z Teed, J Deng - Advances in neural information …, 2021 - proceedings.neurips.cc
We introduce DROID-SLAM, a new deep learning based SLAM system. DROID-SLAM
consists of recurrent iterative updates of camera pose and pixelwise depth through a Dense …
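
The toy sketch below mirrors only that loop structure, a learned operator applied recurrently, with each application revising the per-pixel depth and the camera pose; it is not DROID-SLAM's architecture (in the full system the two updates are coupled through a differentiable dense bundle-adjustment step), and UpdateOperator, refine, and residual_fn are names invented here.

    import torch
    import torch.nn as nn

    class UpdateOperator(nn.Module):
        """Hypothetical recurrent update block: from a hidden state and the current
        reprojection residual, emit increments for pixelwise depth and a 6-DoF pose."""
        def __init__(self, hidden=64):
            super().__init__()
            self.gru = nn.GRUCell(2, hidden)        # input: per-pixel 2D residual
            self.depth_head = nn.Linear(hidden, 1)  # per-pixel depth increment
            self.pose_head = nn.Linear(hidden, 6)   # pooled 6-DoF pose increment

        def forward(self, h, residual):
            h = self.gru(residual, h)               # h: (N, hidden), residual: (N, 2)
            d_depth = self.depth_head(h).squeeze(-1)
            d_pose = self.pose_head(h).mean(dim=0)  # pool pixelwise evidence into one pose update
            return h, d_depth, d_pose

    def refine(update_op, residual_fn, depth, pose, iters=8, hidden=64):
        """residual_fn is a hypothetical callable returning the (N, 2) reprojection
        error of the current (depth, pose) estimate; depth is (N,), pose is (6,)."""
        h = torch.zeros(depth.shape[0], hidden)
        for _ in range(iters):
            h, d_depth, d_pose = update_op(h, residual_fn(depth, pose))
            depth = depth + d_depth
            pose = pose + d_pose
        return depth, pose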

NeRFPlayer: A streamable dynamic scene representation with decomposed neural radiance fields

L Song, A Chen, Z Li, Z Chen, L Chen… - … on Visualization and …, 2023 - ieeexplore.ieee.org
Freely exploring a real-world 4D spatiotemporal space in VR has been a long-term
quest. The task is especially appealing when only a few or even a single RGB camera …

Progressively optimized local radiance fields for robust view synthesis

A Meuleman, YL Liu, C Gao… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present an algorithm for reconstructing the radiance field of a large-scale scene from a
single casually captured video. The task poses two core challenges. First, most existing …

SceneScape: Text-driven consistent scene generation

R Fridman, A Abecasis, Y Kasten… - Advances in Neural …, 2024 - proceedings.neurips.cc
We present a method for text-driven perpetual view generation: synthesizing long-term
videos of various scenes solely given an input text prompt describing the scene and camera …

NerfingMVS: Guided optimization of neural radiance fields for indoor multi-view stereo

Y Wei, S Liu, Y Rao, W Zhao, J Lu… - Proceedings of the …, 2021 - openaccess.thecvf.com
In this work, we present a new multi-view depth estimation method that utilizes both
conventional SfM reconstruction and learning-based priors over the recently proposed …
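
One common way such priors guide the optimization of a radiance field is by restricting where each ray is sampled; the sketch below shows only that generic mechanism, not necessarily the paper's exact scheme. depth_guided_samples and its inputs (a per-ray depth prior with an uncertainty band, e.g. from a depth network adapted on the SfM output) are assumptions of this illustration.

    import torch

    def depth_guided_samples(depth_prior, uncertainty, n_samples=32, t_min=0.1, t_max=6.0):
        """Place ray samples inside a per-ray interval centred on a depth prior.
        depth_prior, uncertainty: (N,) per-ray depth estimate and confidence band.
        Returns sample distances t of shape (N, n_samples)."""
        near = (depth_prior - uncertainty).clamp(min=t_min)
        far = (depth_prior + uncertainty).clamp(max=t_max)
        u = torch.linspace(0.0, 1.0, n_samples, device=depth_prior.device)
        t = near[:, None] + (far - near)[:, None] * u[None, :]
        # stratified jitter so training sees the whole interval, not fixed depths
        t = t + (far - near)[:, None] / n_samples * torch.rand_like(t)
        return t

Concentrating samples around the prior focuses the model's capacity near the likely surface, which is the usual motivation for depth-guided sampling.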