
TrajBooster: Boosting Humanoid Whole-Body Manipulation via Trajectory-Centric Learning

๐Ÿ“ Paper | ๐ŸŒ Project Page | ๐Ÿค— Model | ๐Ÿ›ข๏ธ Dataset

TrajBooster Demo

🔥🔥 News

  • (🔥 New) [2026/02/01] TrajBooster has been accepted to ICRA 2026. 🎉🎉🎉

Overview

TrajBooster leverages abundant existing robot manipulation datasets to enhance humanoid whole-body manipulation capabilities. Our approach retargets end-effector trajectories from diverse robots to target humanoids using a specialized retargeting model. We then perform post-pre-training on a pre-trained Vision-Language-Action (VLA) model with this retargeted data, followed by fine-tuning with minimal real-world data. This methodology significantly reduces the burden of human teleoperation while improving action space comprehension and zero-shot skill transfer capabilities.
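To make the retargeting idea above concrete, here is a deliberately simplified sketch: map a source robot's end-effector positions into the target humanoid's workspace by normalizing against each workspace's bounding box. TrajBooster's actual retargeting model is learned, so this linear remap is only an illustration, and all names and workspace bounds below are hypothetical.

```python
import numpy as np

def remap_trajectory(traj, src_lo, src_hi, tgt_lo, tgt_hi):
    """Linearly remap 3-D end-effector positions from a source robot's
    workspace bounding box into a target humanoid's workspace.

    NOT TrajBooster's learned retargeting model -- just a toy baseline
    showing what "retargeting end-effector trajectories" means.
    """
    traj = np.asarray(traj, dtype=float)
    src_lo, src_hi = np.asarray(src_lo, float), np.asarray(src_hi, float)
    tgt_lo, tgt_hi = np.asarray(tgt_lo, float), np.asarray(tgt_hi, float)
    # Normalize into [0, 1]^3 with respect to the source workspace ...
    unit = (traj - src_lo) / (src_hi - src_lo)
    # ... then scale into the target workspace.
    return tgt_lo + unit * (tgt_hi - tgt_lo)

# A short source trajectory (coordinates in meters, values made up).
src_traj = [[0.2, 0.0, 0.5], [0.4, 0.1, 0.6]]
tgt_traj = remap_trajectory(src_traj,
                            src_lo=[0.0, -0.5, 0.0], src_hi=[0.8, 0.5, 1.2],
                            tgt_lo=[0.0, -0.4, 0.0], tgt_hi=[0.6, 0.4, 1.0])
print(tgt_traj)
```

A learned retargeting model additionally accounts for kinematic feasibility and morphology differences, which a per-axis rescale like this cannot capture.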

🚀 What's Included

This repository provides the official implementation of TrajBooster, featuring:

  • 🤗 35-hour retargeted dataset: Unitree G1 whole-body manipulation actions retargeted from Agibot
  • 🤗 Pre-trained model checkpoint: PPT_model ready for post-training with teleoperation data
  • 🤖 Hardware deployment: Complete setup and code for the Unitree G1 robot
  • 🕹️ Teleoperation system: Real-robot teleoperation implementation and data collection pipeline
  • 🧠 VLA model deployment: Real-robot deployment implementation for Vision-Language-Action models
  • 📈 Training scripts: Retargeting model training code
  • 📋 Documentation hub: Comprehensive installation guides, deployment tutorials, and troubleshooting resources

Note: This repository builds upon our previous work at OpenWBC. If you find this work useful for your research or projects, please consider giving both repositories a ⭐ star to support our ongoing open-source contributions to the robotics community!

🎯 Key Features

  • 🎯 Trajectory-Centric Learning: Leverages end-effector trajectory retargeting for precise manipulation control
  • 🔄 Cross-Robot Knowledge Transfer: Adapts and transfers skills across diverse robot platforms and morphologies
  • ⚡ Minimal Real-World Training: Reduces dependency on expensive human teleoperation data collection
  • 🚀 Zero-Shot Capabilities: Improved generalization and skill transfer to previously unseen manipulation tasks
  • 🤖 Whole-Body Control: Full humanoid robot manipulation with integrated Vision-Language-Action model capabilities

📋 Deployment Guide

This comprehensive guide covers three essential deployment phases:

  1. 🕹️ Unitree G1 Teleoperation & Data Collection - Complete setup and implementation
  2. 🎯 Post-Training Pipeline - Utilizing collected data for VLA model fine-tuning
  3. 🤖 Autonomous Deployment - Real-robot manipulation using post-trained VLA models

💡 Quick Start: We provide a PPT (Post-Pre-Trained) model for immediate deployment. Follow the sequential steps below for complete project reproduction.

🔬 Advanced Users: Interested in retargeting model training? Jump directly to Bonus: Retargeting Model Training

🔧 Troubleshooting Resources

For deployment issues, please first consult the upstream projects this repository builds on, such as OpenWBC.


๐Ÿ•น๏ธ Phase 1: Teleoperation & Data Collection

Project Structure

g1_deploy/
โ”‚
โ”œโ”€โ”€ avp_teleoperation/    # Upper-body control & image transmission
โ”‚
โ”œโ”€โ”€ Hardware/            # Wrist camera hardware specs (optional)
โ”‚
โ””โ”€โ”€ HomieDeploy/         # Lower-body locomotion control

Setup Instructions

1. 📷 Wrist Camera Setup (Recommended)

  • Hardware: Camera specifications and 3D-printable mount files available in g1_deploy/Hardware/
  • Benefits: Significantly improves VLA depth perception and manipulation accuracy

2. 🦵 Lower-Body Control Configuration

  • Deploy g1_deploy/HomieDeploy/ to Unitree G1 onboard computer
  • Follow setup instructions in g1_deploy/HomieDeploy/README.md
  • Result: Enable joystick-based teleoperation for locomotion

3. ๐Ÿ–๏ธ Upper-Body Control Setup

  • Configure AVP Teleoperation: Set up avp_teleoperation following the instructions in g1_deploy/avp_teleoperate/README.md. Configure the tv conda environment and set up the required certificates.

  • Dual Deployment: Deploy the system on both your local PC (image client) and the G1 robot (image server).

    On the Unitree robot terminal, run:

    cd avp_teleoperate/teleop/
    python image_server/image_server.py

    On your PC, run:

    cd avp_teleoperate/teleop/
    python image_server/image_client.py

    If you can see the video feed properly, the setup is working correctly. You can then close the image_client program and proceed with the following operations.

  • Collect Teleoperation Data (On Your PC):

    (tv) unitree@Host:~/avp_teleoperate/teleop$ python teleop_data_collecting.py --arm=G1_29 --hand=dex3  --task_dir='./utils/data'  --record 

    Follow the interaction methods described in g1_deploy/avp_teleoperate/README.md to have the operator perform corresponding interactions using the Apple Vision Pro headset.
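The image server/client check above sends camera frames from the robot to your PC. The sketch below mimics that pattern with a single length-prefixed frame over a loopback TCP socket; it is not the protocol actually used by avp_teleoperate's image_server.py, just a self-contained illustration of the server/client handshake you are verifying.

```python
import socket
import struct
import threading

def _recv_exact(sock, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed early")
        buf += chunk
    return buf

def serve_one_frame(payload, host="127.0.0.1"):
    """Toy stand-in for image_server.py: serve one length-prefixed
    frame to the first client, then shut down."""
    srv = socket.socket()
    srv.bind((host, 0))  # port 0: let the OS choose a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        conn.sendall(struct.pack("!I", len(payload)) + payload)
        conn.close()
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return port

def fetch_frame(port, host="127.0.0.1"):
    """Toy stand-in for image_client.py: fetch one length-prefixed frame."""
    with socket.create_connection((host, port)) as cli:
        (n,) = struct.unpack("!I", _recv_exact(cli, 4))
        return _recv_exact(cli, n)

port = serve_one_frame(b"\xff\xd8 fake jpeg bytes \xff\xd9")
frame = fetch_frame(port)
print(len(frame), "bytes received")
```

If a check like this works on loopback but the real client shows no video, the problem is usually the network path between PC and robot (interface, firewall, or IP configuration) rather than the camera.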

✅ Verification Checklist

  • Operator 1: Real-time first-person robot view in Apple Vision Pro
  • Operator 1: Smooth arm and hand control via the AVP interface
  • Operator 2: Responsive locomotion control (walking, squatting)

📊 Data Processing

Follow setup instructions in OpenWBC_to_Lerobot/README.md

Convert collected teleoperation data to LeRobot format:

python convert_3views_to_lerobot.py \
    --input_dir /path/to/input \
    --output_dir ./lerobot_dataset \
    --dataset_name "YOUR_TASK" \
    --robot_type "g1" \
    --fps 30
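After conversion, it is worth sanity-checking the output directory before starting post-training. The exact on-disk layout depends on your lerobot version; the `data/chunk-000/episode_*.parquet` layout below is an assumption used only to demonstrate the check, and the script builds a fake dataset in a temp directory so it runs anywhere.

```python
import pathlib
import tempfile

def count_episodes(root, pattern="episode_*.parquet"):
    """Count episode files under a converted dataset root.

    Assumes a LeRobot-style layout with per-episode parquet files;
    adjust the pattern to match your lerobot version's layout.
    """
    return len(list(pathlib.Path(root).rglob(pattern)))

# Build a fake converted dataset to demonstrate the check.
root = pathlib.Path(tempfile.mkdtemp())
chunk = root / "data" / "chunk-000"
chunk.mkdir(parents=True)
for i in range(3):
    (chunk / f"episode_{i:06d}.parquet").touch()

print(count_episodes(root))  # 3 fake episodes
```

A mismatch between the episode count here and the number of teleoperation runs you recorded usually means some runs failed conversion and should be inspected before training.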

🎯 Phase 2: VLA Model Post-Training

Utilize your collected and processed teleoperation data for model fine-tuning:

📖 Detailed Instructions: VLA_model/gr00t_modified_for_OpenWBC/README.md

Training Pipeline: Post-train our PPT (Post-Pre-Trained) Model with your domain-specific data


🤖 Phase 3: Autonomous VLA Deployment

Step 1: Initialize Image Server

# Terminal 1 (on Unitree G1)
cd avp_teleoperate/teleop/image_server
python image_server.py

๐Ÿ” Verification: Test image stream on local PC with python image_client.py, then close before proceeding

Step 2: Lower-Body Control Activation

A. โš ๏ธ CRITICAL - System Reset

Execute: L1+A โ†’ L2+R2 โ†’ L2+A โ†’ L2+B
Expected: Arms hang (L2+A) โ†’ Arms down (L2+B)

B. Initialize Robot Control

# Terminal 2 (on Unitree G1)
cd unitree_sdk2/build/bin
./g1_control eth0  # or eth1 depending on network configuration
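Interface names like eth0 and eth1 vary by machine, so if you are unsure which name to pass to g1_control, enumerate the interfaces on the onboard computer first. This uses the standard library's `socket.if_nameindex()`; matching the name to the robot's control network (e.g., by checking addresses with `ip addr`) is left to you.

```python
import socket

# g1_control takes the network interface that reaches the robot's control
# network (eth0 vs eth1 in the command above). List the candidates first:
for index, name in socket.if_nameindex():
    print(index, name)
```

Pick the interface whose IP address sits on the same subnet as the robot's control network.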

C. Launch Policy Inference

# Terminal 3 (on Unitree G1) 
python g1_gym_deploy/scripts/deploy_policy_infer.py

D. Legs Activation

  1. Place robot on ground
  2. Press R2 (robot stands)
  3. Press R2 again (activate autonomous mode)

โš ๏ธ SAFETY NOTICE: Ensure complete understanding of all system components before deployment. Improper usage may result in hardware damage or safety hazards.

E. Start VLA Model Server

python scripts/G1_inference.py \
  --arm=G1_29 \
  --hand=dex3 \
  --model-path YOUR_MODEL_PATH \
  --goal YOUR_TASK \
  --frequency 20 \
  --vis \
  --filt
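The --filt flag above presumably enables smoothing of the predicted action stream; the actual filter inside G1_inference.py is not shown in this README. A common choice for this kind of smoothing is an exponential moving average (EMA) over consecutive action vectors, sketched below purely for illustration.

```python
def ema_filter(actions, alpha=0.3):
    """Exponential moving average over a sequence of action vectors.

    The real --filt option in G1_inference.py may use a different filter;
    an EMA is shown here only as a common smoothing choice. Smaller alpha
    means heavier smoothing (more weight on past actions).
    """
    smoothed, prev = [], None
    for a in actions:
        prev = a if prev is None else [
            alpha * x + (1 - alpha) * p for x, p in zip(a, prev)
        ]
        smoothed.append(prev)
    return smoothed

# A step change in a 2-D action is softened over subsequent ticks.
raw = [[0.0, 0.0], [1.0, 1.0], [1.0, 1.0]]
print(ema_filter(raw, alpha=0.5))
```

Smoothing trades a little latency for less jerky motion on the real robot, which matters at a 20 Hz control frequency.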

Bonus: Retargeting Model Training

📖 For detailed instructions, please refer to: retargeting_model/README.md

🔗 Resources

| Resource  | Description                                        | Link           |
| --------- | -------------------------------------------------- | -------------- |
| Dataset   | 35-hour Agibot→Unitree G1 retargeted data (~30 GB) | 🤗 HuggingFace |
| Model     | Pre-trained PPT model checkpoint (~6 GB)           | 🤗 HuggingFace |
| Paper     | Full technical details and evaluation              | 📝 arXiv       |
| Base Code | Underlying deployment framework                    | 🔗 WBC_Deploy  |

📖 Citation

If you find our work helpful, please consider citing:

@article{liu2025trajbooster,
  title={TrajBooster: Boosting Humanoid Whole-Body Manipulation via Trajectory-Centric Learning},
  author={Liu, Jiacheng and Ding, Pengxiang and Zhou, Qihang and Wu, Yuxuan and Huang, Da and Peng, Zimian and Xiao, Wei and Zhang, Weinan and Yang, Lixin and Lu, Cewu and Wang, Donglin},
  journal={arXiv preprint arXiv:2509.11839},
  year={2025}
}

๐Ÿ™ Acknowledgments

We thank the open-source robotics community and all contributors who made this work possible.
