
Proactive Robotic Assistance via Context-Aware Human Intent and Motion Prediction in Human-Centric Digital Twins

🚀 View Code on GitHub

About This Research

This repository contains research on developing proactive robotic assistants capable of anticipating human needs and actions in complex, semi-structured environments. The project centers on Human-Centric Digital Twins (HCDTs) that predict human intent and generate the corresponding 3D human motion, enabling more effective human-robot collaboration.

Key Research Areas:

Technical Approach:

Our modular framework combines state-of-the-art components including:

The system is designed to be particularly beneficial for Small and Medium Enterprises (SMEs) seeking adaptable human-robot collaboration solutions without requiring extensive end-to-end model retraining.
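Conceptually, the modular design lets each stage be swapped out independently. The sketch below is purely illustrative, not the repository's API; the class and method names are our own and only convey how a digital-twin snapshot might flow through intent prediction into motion generation:

# Hypothetical sketch of the modular pipeline (names are illustrative, not the repo's interface).
from dataclasses import dataclass

@dataclass
class SceneContext:
    """Snapshot of the digital twin: tracked objects and recent human pose history."""
    objects: list
    pose_history: list  # e.g., recent 3D joint positions

class IntentPredictor:
    def predict(self, context: SceneContext) -> str:
        """Return the most likely next human action, e.g. 'reach_for_tool'."""
        raise NotImplementedError

class MotionGenerator:
    def generate(self, context: SceneContext, intent: str) -> list:
        """Return a predicted 3D motion trajectory conditioned on the inferred intent."""
        raise NotImplementedError

def step(context: SceneContext, intent_model: IntentPredictor, motion_model: MotionGenerator):
    intent = intent_model.predict(context)               # high-level goal
    trajectory = motion_model.generate(context, intent)  # low-level 3D motion
    return intent, trajectory                            # consumed by the robot's planner

Because each stage sits behind a narrow interface like this, individual components can be replaced without end-to-end retraining, which is the property highlighted above for SME deployments.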

Getting Started

Quick Setup

To set up the HCDT framework on your system:

# Create conda environment
conda create -n hcdt python=3.10 -y
conda activate hcdt

# Install PyTorch with CUDA
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

# Install dependencies
pip install -r requirements.txt
pip install pandas numpy matplotlib opencv-python pillow tqdm
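After installation, a quick sanity check (assuming the CUDA 11.8 build from the command above) confirms that PyTorch can see the GPU:

# verify_env.py -- quick sanity check of the environment
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))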

Running Experiments

1. Comprehensive Model Evaluation

Execute experiments across multiple AI models and configurations:

chmod +x run_all_models.sh
./run_all_models.sh
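The actual sweep logic lives in run_all_models.sh. As a rough Python illustration of what a multi-model, multi-configuration sweep does (the model names, script name, and flags below are placeholders, not the repository's interface):

# Illustrative sketch of a model/configuration sweep (placeholders only;
# the real logic is in run_all_models.sh).
import itertools
import subprocess

models = ["model_a", "model_b"]      # placeholder model identifiers
configs = ["default", "ablation"]    # placeholder configuration names

for model, config in itertools.product(models, configs):
    subprocess.run(
        ["python", "run_experiment.py", "--model", model, "--config", config],
        check=True,  # raise and stop the sweep if any run fails
    )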

Features:

2. Results Analysis and Visualization

Generate comprehensive evaluation tables and visualizations:

# Generate results tables
python generate_results_table.py

# Process Phase 2 hand position predictions
python eval/batch_process_phase2.py
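To give a rough idea of the kind of aggregation these scripts perform, here is a minimal pandas sketch; the results/ directory of per-run CSVs and the "model" and "score" column names are our assumptions, not the repository's actual layout:

# Illustrative results aggregation (assumed file layout; the real tables
# are produced by generate_results_table.py).
from pathlib import Path
import pandas as pd

frames = [pd.read_csv(p) for p in Path("results").glob("*.csv")]
table = pd.concat(frames, ignore_index=True)

# Mean score per model; column names are assumptions for illustration.
summary = table.groupby("model")["score"].mean().reset_index()
print(summary.to_string(index=False))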

Outputs:

3. Data Processing Pipeline

Set up new experiments with automated preprocessing:

python process_task.py --exp_name "YourExperiment"
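For reference, a minimal sketch of what such an entry point typically looks like; only the --exp_name flag comes from the command above, and the commented stages are placeholders for the repository's actual preprocessing steps:

# Minimal sketch of the entry point's shape (only --exp_name is taken from
# the command above; everything else here is a placeholder).
import argparse

def main():
    parser = argparse.ArgumentParser(description="Set up a new HCDT experiment")
    parser.add_argument("--exp_name", required=True, help="Name of the experiment")
    args = parser.parse_args()

    print(f"Preparing experiment: {args.exp_name}")
    # ... load raw recordings, extract poses, align with scene data, write splits ...

if __name__ == "__main__":
    main()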

Pipeline includes:

Requirements

Project Results

Visualization of our prediction results and framework performance:

(Figure: visualization of the prediction results)

Demo Videos

Watch our project demonstration videos: