Kyle Min
Other names: Byungsu Min
Intel Labs
Verified email at intel.com - Homepage
Title · Cited by · Year
TASED-net: Temporally-aggregating spatial encoder-decoder network for video saliency detection
K Min, JJ Corso
Proceedings of the IEEE International Conference on Computer Vision (ICCV …, 2019
164 · 2019
Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization
K Min, JJ Corso
Proceedings of the European Conference on Computer Vision (ECCV), 2020
104 · 2020
Hierarchical novelty detection for visual object recognition
K Lee, K Lee, K Min, Y Zhang, J Shin, H Lee
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
85 · 2018
Integrating Human Gaze into Attention for Egocentric Activity Recognition
K Min, JJ Corso
Proceedings of the IEEE Winter Conference on Applications of Computer Vision …, 2020
50 · 2020
Learning long-term spatial-temporal graphs for active speaker detection
K Min, S Roy, S Tripathi, T Guha, S Majumdar
European Conference on Computer Vision, 371-387, 2022
24* · 2022
Sourya Roy, Subarna Tripathi, Tanaya Guha, and Somdeb Majumdar. Learning long-term spatial-temporal graphs for active speaker detection
K Min
arXiv preprint arXiv:2207.07783 2 (3), 2022
17 · 2022
WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models
C Kim*, K Min*, M Patel, S Cheng, Y Yang
arXiv preprint arXiv:2306.04744, 2023
14 · 2023
Unbiased scene graph generation in videos
S Nag, K Min, S Tripathi, AK Roy-Chowdhury
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
14 · 2023
Svitt: Temporal learning of sparse video-text transformers
Y Li, K Min, S Tripathi, N Vasconcelos
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
10 · 2023
Intel Labs at Ego4D Challenge 2022: A Better Baseline for Audio-Visual Diarization
K Min
2nd International Ego4D Workshop @ ECCV 2022, 2022
8 · 2022
RACE: Robust Adversarial Concept Erasure for Secure Text-to-Image Diffusion Model
C Kim*, K Min*, Y Yang
arXiv preprint arXiv:2405.16341, 2024
2 · 2024
STHG: Spatial-Temporal Heterogeneous Graph Learning for Advanced Audio-Visual Diarization
K Min
3rd International Ego4D Workshop @ CVPR 2023, 2023
2 · 2023
Contrastive Language Video Time Pre-training
H Liu, K Min, HA Valdez, S Tripathi
arXiv preprint arXiv:2406.02631, 2024
1 · 2024
Action Scene Graphs for Long-Form Understanding of Egocentric Videos
I Rodin*, A Furnari*, K Min*, S Tripathi, GM Farinella
arXiv preprint arXiv:2312.03391, 2023
1 · 2023
Ego-VPA: Egocentric Video Understanding with Parameter-efficient Adaptation
TY Wu, K Min, S Tripathi, N Vasconcelos
arXiv preprint arXiv:2407.19520, 2024
2024
SViTT-Ego: A Sparse Video-Text Transformer for Egocentric Video
HA Valdez, K Min, S Tripathi
arXiv preprint arXiv:2406.09462, 2024
2024
Long duration structured video action segmentation
AD Rhodes, K Min, S Tripathi, G Raffa, S Biswas
US Patent App. 18/459,824, 2024
2024
Intel Labs at ActivityNet Challenge 2022: SPELL for Long-Term Active Speaker Detection
K Min, S Roy, S Tripathi, T Guha, S Majumdar
International Challenge on Activity Recognition (ActivityNet), 2022
2022
Video Understanding with Minimal Human Supervision
K Min
2021
Unbiased Scene Graph Generation in Videos Supplementary Material
S Nag, K Min, S Tripathi, AK Roy-Chowdhury
Articles 1–20