Digital twin-driven intelligence disaster prevention and mitigation for infrastructure: Advances, challenges, and opportunities

D Yu, Z He - Natural hazards, 2022 - Springer
Natural hazards, which have the potential to cause catastrophic damage and loss to
infrastructure, have increased significantly in recent decades. Thus, the construction …

Structure and content-guided video synthesis with diffusion models

P Esser, J Chiu, P Atighehchian… - Proceedings of the …, 2023 - openaccess.thecvf.com
Text-guided generative diffusion models unlock powerful image creation and editing tools.
Recent approaches that edit the content of footage while retaining structure require …

Pix2Video: Video editing using image diffusion

D Ceylan, CHP Huang, NJ Mitra - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Image diffusion models, trained on massive image collections, have emerged as the most
versatile image generator model in terms of quality and diversity. They support inverting real …

StableVideo: Text-driven consistency-aware diffusion video editing

W Chai, X Guo, G Wang, Y Lu - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Diffusion-based methods can generate realistic images and videos, but they struggle to edit
existing objects in a video while preserving their geometry over time. This prevents diffusion …

Text2LIVE: Text-driven layered image and video editing

O Bar-Tal, D Ofri-Amar, R Fridman, Y Kasten… - European conference on …, 2022 - Springer
We present a method for zero-shot, text-driven editing of natural images and videos. Given
an image or a video and a text prompt, our goal is to edit the appearance of existing objects …

CoDeF: Content deformation fields for temporally consistent video processing

H Ouyang, Q Wang, Y Xiao, Q Bai… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present the content deformation field (CoDeF) as a new type of video representation
which consists of a canonical content field aggregating the static contents in the entire video …

TokenFlow: Consistent diffusion features for consistent video editing

M Geyer, O Bar-Tal, S Bagon, T Dekel - arXiv preprint arXiv:2307.10373, 2023 - arxiv.org
The generative AI revolution has recently expanded to videos. Nevertheless, current state-of-
the-art video models are still lagging behind image models in terms of visual quality and …

AI-generated content (AIGC): A survey

J Wu, W Gan, Z Chen, S Wan, H Lin - arXiv preprint arXiv:2304.06632, 2023 - arxiv.org
To address the challenges of digital intelligence in the digital economy, artificial intelligence-
generated content (AIGC) has emerged. AIGC uses artificial intelligence to assist or replace …

AvatarCraft: Transforming text into neural human avatars with parameterized shape and pose control

R Jiang, C Wang, J Zhang, M Chai… - Proceedings of the …, 2023 - openaccess.thecvf.com
Neural implicit fields are powerful for representing 3D scenes and generating high-quality
novel views, but it remains challenging to use such implicit representations for creating a 3D …

Neural style transfer: A critical review

A Singh, V Jaiswal, G Joshi, A Sanjeeve, S Gite… - IEEE …, 2021 - ieeexplore.ieee.org
Neural Style Transfer (NST) is a class of software algorithms that allows us to transform
scenes, change/edit the environment of a media with the help of a Neural Network. NST …