A review on reinforcement learning for contact-rich robotic manipulation tasks

Í Elguea-Aguinaco, A Serrano-Muñoz… - Robotics and Computer …, 2023 - Elsevier
Research and application of reinforcement learning in robotics for contact-rich manipulation
tasks have exploded in recent years. Its ability to cope with unstructured environments and …

Aligning cyber space with physical world: A comprehensive survey on embodied AI

Y Liu, W Chen, Y Bai, X Liang, G Li, W Gao… - arXiv preprint arXiv …, 2024 - arxiv.org
Embodied Artificial Intelligence (Embodied AI) is crucial for achieving Artificial General
Intelligence (AGI) and serves as a foundation for various applications that bridge cyberspace …

Perceiver: General perception with iterative attention

A Jaegle, F Gimeno, A Brock… - International …, 2021 - proceedings.mlr.press
Biological systems understand the world by simultaneously processing high-dimensional
inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The …

Binding touch to everything: Learning unified multimodal tactile representations

F Yang, C Feng, Z Chen, H Park… - Proceedings of the …, 2024 - openaccess.thecvf.com
The ability to associate touch with other modalities has huge implications for humans and
computational systems. However, multimodal learning with touch remains challenging due to …

MultiBench: Multiscale benchmarks for multimodal representation learning

PP Liang, Y Lyu, X Fan, Z Wu, Y Cheng… - Advances in neural …, 2021 - ncbi.nlm.nih.gov
Learning multimodal representations involves integrating information from multiple
heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world …

DexPoint: Generalizable point cloud reinforcement learning for sim-to-real dexterous manipulation

Y Qin, B Huang, ZH Yin, H Su… - Conference on Robot …, 2023 - proceedings.mlr.press
We propose a sim-to-real framework for dexterous manipulation which can generalize to
new objects of the same category in the real world. The key to our framework is to train the …

RoboTAP: Tracking arbitrary points for few-shot visual imitation

M Vecerik, C Doersch, Y Yang… - … on Robotics and …, 2024 - ieeexplore.ieee.org
For robots to be useful outside labs and specialized factories, we need a way to teach them
new useful behaviors quickly. Current approaches lack either the generality to onboard new …

Object detection recognition and robot grasping based on machine learning: A survey

Q Bai, S Li, J Yang, Q Song, Z Li, X Zhang - IEEE access, 2020 - ieeexplore.ieee.org
With the rapid development of machine learning, its power in the machine vision field has
become increasingly evident. The combination of machine vision and robotics to achieve the …

Learning vision-guided quadrupedal locomotion end-to-end with cross-modal transformers

R Yang, M Zhang, N Hansen, H Xu, X Wang - arXiv preprint arXiv …, 2021 - arxiv.org
We propose to address quadrupedal locomotion tasks using Reinforcement Learning (RL)
with a Transformer-based model that learns to combine proprioceptive information and high …

RH20T: A robotic dataset for learning diverse skills in one-shot

HS Fang, H Fang, Z Tang, J Liu, J Wang… - RSS 2023 Workshop …, 2023 - openreview.net
A key challenge in learning task and motion planning in open domains is how to acquire
diverse and generalizable skills for robots. Recent research in one-shot imitation learning …