Reconstructing Hands in 3D with Transformers
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Abstract
We present an approach that can reconstruct hands in 3D from monocular input. Our approach for Hand Mesh Recovery, HaMeR, follows a fully transformer-based architecture and can analyze hands with significantly increased accuracy and robustness compared to previous work. The key to HaMeR's success lies in scaling up both the data used for training and the capacity of the deep network for hand reconstruction. For training data, we combine multiple datasets that contain 2D or 3D hand annotations. For the deep model, we use a large-scale Vision Transformer architecture. Our final model consistently outperforms the previous baselines on popular 3D hand pose benchmarks. To further evaluate the effect of our design in non-controlled settings, we annotate existing in-the-wild datasets with 2D hand keypoint annotations. On this newly collected dataset of annotations, HInt, we demonstrate significant improvements over existing baselines. We make our code, data, and models available on the project website: https://geopavlakos.github.io/hamer/.
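The abstract names two scaling levers: a high-capacity Vision Transformer backbone and training data mixed from sources with 2D-only or full 3D hand annotations. The PyTorch sketch below illustrates the general shape of such a ViT-based regressor that maps an image crop to MANO hand-model parameters. It is a minimal sketch only: the patch size, depth, 6D rotation parameterization, and all module names are illustrative assumptions, not the released HaMeR architecture (which is available on the project website).

```python
import torch
import torch.nn as nn

class HandMeshRegressor(nn.Module):
    """Illustrative ViT-style regressor for MANO hand parameters.

    Sketch under stated assumptions; not the released HaMeR code.
    """
    def __init__(self, img_size=256, patch=16, dim=768, depth=6, heads=8):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Patchify the image with a strided convolution (standard ViT stem).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Regress MANO parameters from the class token:
        # 16 joint rotations as 6D rotations (96 values), 10 shape
        # coefficients (betas), and 3 weak-perspective camera parameters.
        self.head = nn.Linear(dim, 16 * 6 + 10 + 3)

    def forward(self, img):                                   # (B, 3, 256, 256)
        x = self.patch_embed(img).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        out = self.head(x[:, 0])                 # predict from the class token
        pose6d, betas, cam = out.split([96, 10, 3], dim=-1)
        return pose6d.view(-1, 16, 6), betas, cam

# Quick shape check on random input.
model = HandMeshRegressor()
pose, betas, cam = model(torch.randn(2, 3, 256, 256))
print(pose.shape, betas.shape, cam.shape)  # (2, 16, 6) (2, 10) (2, 3)
```

Predicting a camera alongside the hand parameters is a common way to make the mixed 2D/3D training the abstract describes possible: the regressed 3D joints can be projected into the image and compared against 2D keypoint annotations, so datasets with only 2D labels still provide a supervision signal.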