GuidedVLA: Specifying Task-Relevant Factors via Plug-and-Play Action Attention Specialization

Xiaosong Jia*†,1,2, Bowen Yang*,3, Zuhao Ge*,1,2, Xian Nie*,3, Yuchen Zhou*,1,2, Cunxin Fan*†,3, Yufeng Li3, Yilin Chai3, Chao Jing1,2, Zijian Liang3, Qingwen Bu4, Haidong Cao1,2, Chao Wu1,2, Qifeng Li3, Zhenjie Yang3, Chenhe Zhang1,2, Hongyang Li4, Zuxuan Wu✉,1,2, Junchi Yan✉,3, Yu-Gang Jiang✉,1,2

1 Institute of Trustworthy Embodied AI (TEAI), Fudan University   2 Shanghai Key Laboratory of Multimodal Embodied AI   3 Shanghai Jiao Tong University   4 OpenDriveLab, The University of Hong Kong

* Core Contributors   † Project Leads   ✉ Corresponding Authors

Accepted to Robotics: Science and Systems (RSS) 2026, Sydney, Australia

Overview

Overview Teaser

We present GuidedVLA, a VLA paradigm in which the action decoder is explicitly guided to capture task-relevant information such as object grounding, spatial geometry, and temporal skill logic. Across simulation and real-robot experiments, GuidedVLA significantly improves success rates in both in-domain and out-of-domain settings, demonstrating the effectiveness of specifying action-decoder attention heads with explicit guidance.

Pipeline

GuidedVLA Pipeline

Architecture of GuidedVLA. We introduce explicit, structured guidance into the multi-head attention layers of the VLA action decoder. Instead of relying on implicitly entangled representations, we repurpose dedicated attention heads to specialize in distinct task-relevant factors: (i) the Object Head supervises its attention maps to explicitly ground task-relevant objects and suppress distractors via an attention-mask alignment loss; (ii) the Skill Head aligns internal feature representations with temporal skill phases (e.g., Pick → Place) through an auxiliary skill-classification loss; (iii) the Depth Head injects geometric cues via cross-attention restricted to features from a depth encoder. These guidance signals make the policy explicitly aware of spatial, temporal, and geometric structures.
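To make the mechanism concrete, below is a minimal PyTorch sketch of one cross-attention layer whose heads are partitioned by role. It is an illustration under assumed names and shapes (e.g., `kv_ctx`, `depth_tokens`, the specific head indices), not the released implementation: the depth head attends only to depth-encoder tokens, while the object head's attention map and the skill head's features are exposed for the guidance signals sketched in the sections below.

```python
# Minimal PyTorch sketch (assumed names/shapes, not the released code) of a
# cross-attention layer in the action decoder with role-specialized heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedCrossAttention(nn.Module):
    """Cross-attention from action tokens to context, with specialized heads."""

    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        assert dim % n_heads == 0
        self.n_heads, self.d_head = n_heads, dim // n_heads
        self.q_proj = nn.Linear(dim, dim)
        self.kv_ctx = nn.Linear(dim, 2 * dim)    # keys/values from VLM image/language tokens
        self.kv_depth = nn.Linear(dim, 2 * dim)  # keys/values from frozen depth-encoder tokens
        self.out_proj = nn.Linear(dim, dim)
        # Illustrative assignment: head 0 = object, head 1 = skill, head 2 = depth.
        self.object_head, self.skill_head, self.depth_head = 0, 1, 2

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        return x.view(b, n, self.n_heads, self.d_head).transpose(1, 2)  # (B, H, N, d)

    def forward(self, action_tokens, context_tokens, depth_tokens):
        q = self._split(self.q_proj(action_tokens))
        k, v = map(self._split, self.kv_ctx(context_tokens).chunk(2, dim=-1))
        kd, vd = map(self._split, self.kv_depth(depth_tokens).chunk(2, dim=-1))

        scale = self.d_head ** -0.5
        attn_ctx = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)     # (B, H, A, Nc)
        attn_depth = F.softmax(q @ kd.transpose(-2, -1) * scale, dim=-1)  # (B, H, A, Nd)

        # The depth head reads only depth-encoder features; all other heads read context.
        is_depth = torch.zeros(self.n_heads, 1, 1, device=q.device)
        is_depth[self.depth_head] = 1.0
        fused = (1.0 - is_depth) * (attn_ctx @ v) + is_depth * (attn_depth @ vd)

        b, _, a, _ = fused.shape
        out = self.out_proj(fused.transpose(1, 2).reshape(b, a, -1))
        # Expose the object head's attention map and the skill head's features so
        # they can receive the guidance signals sketched in the sections below.
        return out, attn_ctx[:, self.object_head], fused[:, self.skill_head]
```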

Object Grounding Head

Supervises attention maps to explicitly ground task-relevant objects and suppress distractors via an attention-mask alignment loss. Critical for precise localization of transparent/refractive objects and small targets.

Key insight: Forces action tokens to attend to semantically meaningful regions rather than incidental visual contrast.
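A hedged sketch of how such an attention-mask alignment loss could look, assuming the object head's post-softmax attention map and a binary patch-level object mask are available (tensor names are illustrative, not the paper's exact formulation):

```python
# Hedged sketch of an attention-mask alignment loss (illustrative names/shapes).
import torch

def object_alignment_loss(obj_attn: torch.Tensor, object_mask: torch.Tensor) -> torch.Tensor:
    """
    obj_attn:    (B, num_action_tokens, num_patches), post-softmax, rows sum to 1.
    object_mask: (B, num_patches), 1 on patches covering task-relevant objects, else 0.
    Pulls attention mass toward object patches and away from distractors.
    """
    eps = 1e-6
    # Turn the binary mask into a (uniform) target distribution over object patches.
    target = object_mask / (object_mask.sum(dim=-1, keepdim=True) + eps)  # (B, P)
    target = target.unsqueeze(1).expand_as(obj_attn)                      # (B, A, P)
    # Cross-entropy between the target distribution and the predicted attention.
    return -(target * (obj_attn + eps).log()).sum(dim=-1).mean()
```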

Skill Recognition Head

Aligns internal feature representations with temporal skill phases (e.g., Pick → Place) through an auxiliary classification loss. Prevents stage-skipping in multi-step behaviors.

Key insight: Encodes temporal intent progression to maintain stage awareness across extended horizons.
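A minimal sketch of the auxiliary classification term, assuming pooled skill-head features and a per-timestep phase label; the phase vocabulary below is hypothetical:

```python
# Hedged sketch of the auxiliary skill-phase classifier (hypothetical phase set).
import torch
import torch.nn as nn
import torch.nn.functional as F

SKILL_PHASES = ["reach", "grasp", "transport", "place"]  # illustrative, not the paper's exact phases

class SkillPhaseHead(nn.Module):
    def __init__(self, dim: int = 512, n_phases: int = len(SKILL_PHASES)):
        super().__init__()
        self.classifier = nn.Linear(dim, n_phases)

    def forward(self, skill_feat: torch.Tensor, phase_label: torch.Tensor) -> torch.Tensor:
        """
        skill_feat:  (B, dim) pooled skill-head features for the current timestep.
        phase_label: (B,) ground-truth phase index for that timestep.
        Returns the auxiliary classification loss added to the action objective.
        """
        logits = self.classifier(skill_feat)
        return F.cross_entropy(logits, phase_label)
```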

Geometry Perception Head

Injects explicit 3D spatial information by constraining dedicated attention heads to process only features from a frozen depth encoder (Depth Anything 3).

Key insight: Provides metric geometric reasoning for sub-centimeter precision tasks where monocular RGB cues are insufficient.
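The sketch below illustrates only the freeze-and-project pattern for feeding such features to the depth head; `depth_backbone` is a stand-in for the real Depth Anything 3 interface, which is not reproduced here:

```python
# Hedged sketch of the freeze-and-project pattern; `depth_backbone` stands in for
# the actual Depth Anything 3 interface, which is not reproduced here.
import torch
import torch.nn as nn

class DepthFeatureAdapter(nn.Module):
    def __init__(self, depth_backbone: nn.Module, depth_dim: int, decoder_dim: int = 512):
        super().__init__()
        self.backbone = depth_backbone.eval()
        for p in self.backbone.parameters():            # depth encoder stays frozen
            p.requires_grad_(False)
        self.proj = nn.Linear(depth_dim, decoder_dim)   # only this projection is trained

    @torch.no_grad()
    def _encode(self, rgb: torch.Tensor) -> torch.Tensor:
        # Assumed to return patch-level geometric features: (B, num_patches, depth_dim).
        return self.backbone(rgb)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # These tokens are consumed only by the depth head's cross-attention.
        return self.proj(self._encode(rgb))
```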

Experiment

GuidedVLA achieves significant performance gains across simulation benchmarks and real-world platforms, with particularly strong improvements under distribution shifts.

SIMULATION 1: LIBERO-Plus Benchmark Results

The proposed model achieves the highest average success rate, with a significant boost compared to its base model π0. Notably, single-head ablations reveal task-specific alignment: the object head is strongest among single-head variants on the Object and Long suites, the skill head gives the best single-head result on the Goal suite, and the depth head performs best on the Spatial suite.

Columns Camera–Layout are perturbation dimensions; Spatial–Long are LIBERO-Plus task suites.

| Model | Camera | Robot | Language | Light | Backg. | Noise | Layout | Spatial | Object | Goal | Long | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenVLA | 0.8 | 3.5 | 23.0 | 8.1 | 34.8 | 15.2 | 28.5 | 19.4 | 14.0 | 15.1 | 14.3 | 15.6 |
| OpenVLA-OFT | 56.4 | 31.9 | 79.5 | 88.7 | 93.3 | 75.8 | 74.2 | 84.0 | 66.5 | 63.0 | 66.4 | 69.6 |
| NORA | 2.2 | 37.0 | 65.1 | 45.7 | 58.6 | 12.8 | 62.1 | 47.6 | 34.4 | 38.8 | 36.3 | 39.0 |
| WorldVLA | 0.1 | 27.9 | 41.6 | 43.7 | 17.1 | 10.9 | 38.0 | 32.5 | 28.6 | 31.8 | 8.2 | 25.0 |
| UniVLA | 1.8 | 46.2 | 69.6 | 69.0 | 81.0 | 21.2 | 31.9 | 55.5 | 36.7 | 40.7 | 39.9 | 43.9 |
| π0-Fast | 65.1 | 21.6 | 61.0 | 73.2 | 73.2 | 74.4 | 68.8 | 74.4 | 72.7 | 57.5 | 43.4 | 61.6 |
| RIPT-VLA | 55.2 | 31.2 | 77.6 | 88.4 | 91.6 | 73.5 | 74.2 | 85.8 | 64.3 | 58.0 | 67.5 | 68.4 |
| DreamVLA | 65.0 | 40.9 | 63.5 | 85.7 | 82.7 | 85.0 | 74.0 | 79.7 | 79.0 | 61.7 | 59.8 | 69.9 |
| AdaMoE | 53.8 | 17.5 | 20.6 | 73.7 | 73.8 | 58.6 | 65.8 | 51.0 | 57.9 | 53.3 | 38.1 | 50.1 |
| Spatial Forcing | 20.1 | 13.4 | 40.9 | 29.1 | 33.4 | 25.7 | 39.3 | 52.9 | 31.0 | 28.2 | 5.4 | 29.1 |
| VLA-Adapter | 36.2 | 37.9 | 74.6 | 70.6 | 76.1 | 58.0 | 69.7 | 85.0 | 46.3 | 56.0 | 50.4 | 59.1 |
| π0 | 62.3 | 39.8 | 63.1 | 86.0 | 82.8 | 82.4 | 69.6 | 77.7 | 74.1 | 61.4 | 60.1 | 68.2 |
| w/ object head | 71.7 | 45.8 | 63.5 | 92.4 | 86.9 | 85.1 | 77.4 | 80.6 | 82.5 | 67.1 | 64.0 | 73.4 |
| w/ skill head | 70.0 | 45.0 | 61.7 | 90.2 | 83.0 | 88.4 | 76.3 | 79.8 | 78.9 | 68.9 | 62.7 | 72.5 |
| w/ depth head | 68.1 | 43.9 | 65.8 | 90.7 | 83.4 | 85.6 | 72.8 | 81.4 | 79.0 | 65.4 | 61.8 | 71.7 |
| w/ all heads (Ours) | 73.7 | 51.4 | 62.6 | 94.6 | 89.0 | 85.2 | 79.9 | 84.0 | 80.9 | 70.8 | 66.2 | 75.4 |
RoboTwin 2.0 Results

SIMULATION 2: RoboTwin 2.0 Benchmark

RoboTwin 2.0 Benchmark Performance. Success rates across 8 manipulation tasks comparing the π0 baseline, single-head experts, and our full model. While specific heads excel at aligned tasks (e.g., depth head for geometry-heavy Beat Hammer Block), the full model (purple) integrates these capabilities to achieve the best overall average performance (90.63%).

Factor Quality Analysis

ABLATION: Factor Quality Correlation

Higher Factor Quality Leads to Better Task Performance. Top: Quantitative analysis on the LIBERO-Plus layout perturbation track shows that improving the quality of each specialized head consistently boosts success rates. (a) Object Head: as the proportion of attention focused on task-relevant object regions increases, success rises from 61.3% to 74.6%, highlighting the importance of precise object-centric attention. (b) Skill Head: higher skill-recognition accuracy, measured by a linear probe, correlates with improved performance (66.2% to 72.9%), indicating that better temporal understanding enhances control. (c) Depth Head: increasing the ratio of true depth features (versus noise) dramatically improves both qualitative depth estimation and quantitative success (15.6% to 76.7%), confirming that explicit 3D cues are critical for robust manipulation. Bottom: Qualitative visualizations show how changes along the x-axis metrics are reflected in the corresponding feature representations.
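For reference, the object-head metric in (a) can be read as the share of attention mass landing on object patches; a hedged sketch of one way to compute it (tensor names and shapes are assumptions, not the paper's exact protocol):

```python
# Hedged sketch of the metric in (a): fraction of attention mass on object patches.
import torch

def attention_on_object_ratio(obj_attn: torch.Tensor, object_mask: torch.Tensor) -> torch.Tensor:
    """
    obj_attn:    (B, num_action_tokens, num_patches), rows sum to 1.
    object_mask: (B, num_patches), 1 on task-relevant object patches.
    Returns the mean proportion of attention placed on object regions.
    """
    mass_on_objects = (obj_attn * object_mask.unsqueeze(1)).sum(dim=-1)  # (B, A)
    return mass_on_objects.mean()
```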

REAL WORLD: Cross-Platform Generalization

Real-world robot platforms and evaluation tasks

Cross-Platform Real-World Generalization. Success rates (N=20 trials per task) across three generalization settings on the ALOHA (AgileX) and PSI-Bot (RealMan) platforms. Our method consistently outperforms the base policy, with gains in every setting (up to a 52.7% relative improvement) and robustness under challenging out-of-domain conditions. The six tasks are listed below the table. In-domain generalization includes variations in object positions within the training distribution.

Tasks 1–3 are evaluated on the ALOHA (AgileX) platform; Tasks 4–6 on the PSI-Bot (RealMan) platform.

| Generalization Setting | Method | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Average (%) |
|---|---|---|---|---|---|---|---|---|
| In-Domain | Base Policy | 10/20 | 11/20 | 9/20 | 12/20 | 12/20 | 13/20 | 55.8 |
| In-Domain | Ours | 14/20 | 15/20 | 14/20 | 16/20 | 17/20 | 15/20 | 75.8 |
| Scene | Base Policy | 7/20 | 8/20 | 6/20 | 12/20 | 11/20 | 9/20 | 44.2 |
| Scene | Ours | 13/20 | 12/20 | 11/20 | 15/20 | 16/20 | 14/20 | 67.5 |
| Lighting | Base Policy | 11/20 | 9/20 | 10/20 | 14/20 | 12/20 | 13/20 | 57.5 |
| Lighting | Ours | 13/20 | 16/20 | 15/20 | 17/20 | 18/20 | 16/20 | 79.2 |

Tasks: (1) pick up fruits and vegetables, (2) stack the bowls, (3) clean the tabletop, (4) pick up the beaker, (5) stack the beakers, (6) heat the beaker.

Real Robot Tasks

Demonstrations of GuidedVLA executing complex long-horizon tasks across different domains.

Task 1: Pick up fruits and vegetables (4×)

Task 2: Stack the bowls (4×)

Task 3: Clean the tabletop (4×)

Task 4: Pick up the beaker (4×)

Task 5: Stack the beakers (4×)

Task 6: Heat the beaker (4×)

Citation

If you find GuidedVLA useful in your research, please cite:

@misc{jia2026guidedvla,
  title         = {GuidedVLA: Specifying Task-Relevant Factors via Plug-and-Play Action Attention Specialization},
  author        = {Xiaosong Jia and Bowen Yang and Zuhao Ge and Xian Nie and Yuchen Zhou and Cunxin Fan and Yufeng Li and Yilin Chai and Chao Jing and Zijian Liang and Qingwen Bu and Haidong Cao and Chao Wu and Qifeng Li and Zhenjie Yang and Chenhe Zhang and Hongyang Li and Zuxuan Wu and Junchi Yan and Yu-Gang Jiang},
  year          = {2026},
  eprint        = {2605.12369},
  archivePrefix = {arXiv},
  primaryClass  = {cs.RO},
  url           = {https://arxiv.org/abs/2605.12369}
}