Deep contextual attention for human-object interaction detection
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019. openaccess.thecvf.com
Abstract
Human-object interaction detection is an important and relatively new class of visual relationship detection tasks, essential for deeper scene understanding. Most existing approaches decompose the problem into object localization and interaction recognition. Despite showing progress, these approaches only rely on the appearances of humans and objects and overlook the available context information, crucial for capturing subtle interactions between them. We propose a contextual attention framework for human-object interaction detection. Our approach leverages context by learning contextually-aware appearance features for human and object instances. The proposed attention module then adaptively selects relevant instance-centric context information to highlight image regions likely to contain human-object interactions. Experiments are performed on three benchmarks: V-COCO, HICO-DET and HCVRD. Our approach outperforms the state-of-the-art on all datasets. On the V-COCO dataset, our method achieves a relative gain of 4.4% in terms of role mean average precision (mAP role), compared to the existing best approach.
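To make the described mechanism concrete, below is a minimal sketch of instance-centric contextual attention: an instance's pooled appearance feature forms a query, attention weights are computed over every spatial location of a backbone feature map, and the attended context is fused back into the appearance feature. This is an illustrative PyTorch reading of the abstract, not the authors' exact architecture; the layer sizes, scaled dot-product scoring, and concatenation-based fusion are all assumptions.

```python
# Illustrative sketch of instance-centric contextual attention.
# Dimensions, scoring function, and fusion scheme are assumptions,
# not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualAttention(nn.Module):
    """Attends over spatial context features conditioned on one
    instance's (human or object) appearance feature, then fuses the
    pooled context back into that appearance feature."""

    def __init__(self, feat_dim=1024, attn_dim=256):
        super().__init__()
        self.query = nn.Linear(feat_dim, attn_dim)   # instance -> query
        self.key = nn.Conv2d(feat_dim, attn_dim, 1)  # context map -> keys
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, inst_feat, ctx_map):
        # inst_feat: (B, feat_dim) pooled appearance of one instance
        # ctx_map:   (B, feat_dim, H, W) backbone feature map (context)
        q = self.query(inst_feat)                      # (B, attn_dim)
        k = self.key(ctx_map).flatten(2)               # (B, attn_dim, H*W)
        # Instance-conditioned attention over all spatial locations.
        scores = torch.bmm(q.unsqueeze(1), k).squeeze(1) / k.shape[1] ** 0.5
        attn = F.softmax(scores, dim=-1)               # (B, H*W)
        # Weighted pool of context features at the attended locations.
        ctx = torch.bmm(ctx_map.flatten(2), attn.unsqueeze(2)).squeeze(2)
        # Fuse context evidence with the raw appearance feature.
        return self.fuse(torch.cat([inst_feat, ctx], dim=1))

# Usage: contextualize a human appearance feature before it feeds
# an interaction classifier.
attn = ContextualAttention()
human_feat = torch.randn(2, 1024)           # e.g. RoI-pooled human features
feature_map = torch.randn(2, 1024, 25, 38)  # backbone feature map
ctx_aware = attn(human_feat, feature_map)   # (2, 1024)
```

The design choice to condition the attention on the instance feature is what makes the context "instance-centric": the same image map yields different attended regions for different humans or objects, which matches the abstract's claim of adaptively selecting relevant context per instance.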