arxiv:2602.16705

Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation

Published on Feb 18
· Submitted by
Runpei Dong
on Feb 19
#2 Paper of the day
Abstract

AI-generated summary

HERO enables humanoid robots to perform object manipulation in diverse real-world environments by combining accurate end-effector control with open-vocabulary vision models for generalizable scene understanding.

Visual loco-manipulation of arbitrary objects in the wild with humanoid robots requires accurate end-effector (EE) control and a generalizable understanding of the scene from visual inputs (e.g., RGB-D images). Existing approaches rely on real-world imitation learning and generalize poorly because large-scale training datasets are difficult to collect. This paper presents a new paradigm, HERO, for object loco-manipulation with humanoid robots that combines the strong generalization and open-vocabulary understanding of large vision models with robust control performance learned in simulation. We achieve this by designing an accurate, residual-aware EE tracking policy that combines classical robotics with machine learning: it uses a) inverse kinematics to convert residual end-effector targets into reference trajectories, b) a learned neural forward model for accurate forward kinematics, c) goal adjustment, and d) replanning. Together, these innovations reduce end-effector tracking error by 3.2x. We use this accurate end-effector tracker to build a modular loco-manipulation system in which open-vocabulary large vision models provide strong visual generalization. The system operates in diverse real-world environments, from offices to coffee shops, where the robot reliably manipulates everyday objects (e.g., mugs, apples, toys) on surfaces ranging from 43 cm to 92 cm in height. Systematic modular and end-to-end tests in simulation and the real world demonstrate the effectiveness of the proposed design. We believe these advances open up new ways of training humanoid robots to interact with everyday objects.
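
The four components a) through d) suggest a simple outer control loop. Below is a minimal toy sketch of that loop on a 2-link planar arm; it is not the authors' implementation, and the arm geometry, the constant bias standing in for real-robot tracking error, the "perfectly trained" stand-in for the learned forward model, and all function names are assumptions made only to illustrate how IK, a learned forward model, goal adjustment, and replanning could fit together.

```python
# Hypothetical sketch of residual-aware EE tracking on a toy 2-link arm.
# NOT the paper's code: geometry, bias, and names are illustrative only.
import numpy as np

L1, L2 = 0.4, 0.3  # toy link lengths (m)

def fk_analytic(q):
    """Ideal forward kinematics of the 2-link planar arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def fk_real(q):
    """Pretend 'real robot': ideal FK plus an unmodeled constant bias,
    standing in for whole-body controller tracking error."""
    return fk_analytic(q) + np.array([0.02, -0.01])

# (b) In the paper this is a learned neural forward model; here we
# assume it was trained perfectly and simply reuse the biased FK.
fk_learned = fk_real

def jacobian(q):
    """Analytic end-effector Jacobian of the toy arm."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_solve(target, q, iters=50, damping=1e-4):
    """(a) Classical damped-least-squares IK, seeded at q."""
    q = q.copy()
    for _ in range(iters):
        err = target - fk_analytic(q)
        J = jacobian(q)
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
    return q

def track_ee(goal, q0, replans=4):
    """Run IK (a), query the learned model (b), shift the goal by the
    predicted residual (c), and repeat, i.e. replan (d)."""
    q, adjusted_goal = q0.copy(), goal.copy()
    for _ in range(replans):
        q = ik_solve(adjusted_goal, q)            # (a) reference joints
        residual = goal - fk_learned(q)           # (b) predicted error
        adjusted_goal = adjusted_goal + residual  # (c) goal adjustment
    return q                                      # (d) loop = replanning

goal = np.array([0.5, 0.2])
naive = ik_solve(goal, np.zeros(2))
ours = track_ee(goal, np.zeros(2))
print("naive IK error (m):", np.linalg.norm(fk_real(naive) - goal))
print("residual-aware (m):", np.linalg.norm(fk_real(ours) - goal))
```

Running this prints a naive-IK error equal to the unmodeled bias (about 2.2 cm here) and a near-zero error for the residual-aware loop, mirroring the kind of tracking-error reduction the abstract reports; the 3.2x figure itself comes from the paper's far harder whole-body humanoid setting.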

Community

Paper author Paper submitter

"HERO: Learning Humanoid End-Effector ContROl for Open-Vocabulary Visual Loco-Manipulation"
Project page: https://hero-humanoid.github.io/

arXivLens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/learning-humanoid-end-effector-control-for-open-vocabulary-visual-loco-manipulation-7624-028ee103

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.16705 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.16705 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.16705 in a Space README.md to link it from this page.

Collections including this paper 1

Paper page - Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation