Emergent correspondence from image diffusion
Finding correspondences between images is a fundamental problem in computer vision. In
this paper, we show that correspondence emerges in image diffusion models without any …
DrivingGaussian: Composite Gaussian splatting for surrounding dynamic autonomous driving scenes
We present DrivingGaussian, an efficient and effective framework for surrounding dynamic
autonomous driving scenes. For complex scenes with moving objects, we first sequentially …
Dynamic 3D Gaussians: Tracking by persistent dynamic view synthesis
We present a method that simultaneously addresses the tasks of dynamic scene novel-view
synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements. We …
Tracking and mapping in medical computer vision: A review
As computer vision algorithms increase in capability, their applications in clinical systems
will become more pervasive. These applications include: diagnostics, such as colonoscopy …
DynMF: Neural motion factorization for real-time dynamic view synthesis with 3D Gaussian splatting
Accurately and efficiently modeling dynamic scenes and motions is a challenging task due to
temporal dynamics and motion complexity. To address these …
VideoFlow: Exploiting temporal cues for multi-frame optical flow estimation
We introduce VideoFlow, a novel optical flow estimation framework for videos. In contrast to
previous methods that learn to estimate optical flow from two frames, VideoFlow concurrently …
VideoSwap: Customized video subject swapping with interactive semantic point correspondence
Current diffusion-based video editing primarily focuses on structure-preserved editing by
utilizing various dense correspondences to ensure temporal consistency and motion …
Dense optical tracking: connecting the dots
Recent approaches to point tracking are able to recover the trajectory of any scene point
through a large portion of a video despite the presence of occlusions. They are, however, too …
4DGen: Grounded 4D content generation with spatial-temporal consistency
Aided by text-to-image and text-to-video diffusion models, existing 4D content creation
pipelines utilize score distillation sampling to optimize the entire dynamic 3D scene …