Emergent correspondence from image diffusion

L Tang, M Jia, Q Wang, CP Phoo… - Advances in Neural …, 2023 - proceedings.neurips.cc
Finding correspondences between images is a fundamental problem in computer vision. In
this paper, we show that correspondence emerges in image diffusion models without any …

DrivingGaussian: Composite Gaussian splatting for surrounding dynamic autonomous driving scenes

X Zhou, Z Lin, X Shan, Y Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present DrivingGaussian, an efficient and effective framework for surrounding dynamic
autonomous driving scenes. For complex scenes with moving objects, we first sequentially …

Dynamic 3D Gaussians: Tracking by persistent dynamic view synthesis

J Luiten, G Kopanas, B Leibe, D Ramanan - arXiv preprint arXiv …, 2023 - arxiv.org
We present a method that simultaneously addresses the tasks of dynamic scene novel-view
synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements. We …

Tracking and mapping in medical computer vision: A review

A Schmidt, O Mohareri, S DiMaio, MC Yip… - Medical Image …, 2024 - Elsevier
As computer vision algorithms increase in capability, their applications in clinical systems
will become more pervasive. These applications include: diagnostics, such as colonoscopy …

DynMF: Neural motion factorization for real-time dynamic view synthesis with 3D Gaussian splatting

A Kratimenos, J Lei, K Daniilidis - European Conference on Computer …, 2025 - Springer
Accurately and efficiently modeling dynamic scenes and motions is a challenging task due
to temporal dynamics and motion complexity. To address these …

VideoFlow: Exploiting temporal cues for multi-frame optical flow estimation

X Shi, Z Huang, W Bian, D Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
We introduce VideoFlow, a novel optical flow estimation framework for videos. In contrast to
previous methods that learn to estimate optical flow from two frames, VideoFlow concurrently …

VideoSwap: Customized video subject swapping with interactive semantic point correspondence

Y Gu, Y Zhou, B Wu, L Yu, JW Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Current diffusion-based video editing primarily focuses on structure-preserved editing by
utilizing various dense correspondences to ensure temporal consistency and motion …

Dense optical tracking: connecting the dots

G Le Moing, J Ponce, C Schmid - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Recent approaches to point tracking are able to recover the trajectory of any scene point
through a large portion of a video despite the presence of occlusions. They are however too …

4DGen: Grounded 4D content generation with spatial-temporal consistency

Y Yin, D Xu, Z Wang, Y Zhao, Y Wei - arXiv preprint arXiv:2312.17225, 2023 - arxiv.org
Aided by text-to-image and text-to-video diffusion models, existing 4D content creation
pipelines utilize score distillation sampling to optimize the entire dynamic 3D scene …