
Visuo-Tactile Deep Learning Model for Data Efficient Robotic Arm Manipulation

Baselines are forked from the CoRL 2022 paper: Visuo-Tactile Transformers for Manipulation.

In this project, we present a novel model for fusing visuo-tactile data for robotic arm manipulation tasks. We propose:

A convolutional encoder for the visual data stream.
An LSTM for the tactile data stream.
An attention layer for fusing the two streams (a sketch follows this list).
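A minimal PyTorch sketch of this architecture; the module name VisuoTactileFusion, the input shapes, and all layer sizes are illustrative assumptions, not the actual implementation in this repository:

# Hypothetical sketch of the described architecture; names and
# dimensions are illustrative, not the repository's actual code.
import torch
import torch.nn as nn

class VisuoTactileFusion(nn.Module):
    def __init__(self, tactile_dim=6, hidden_dim=128, num_heads=4):
        super().__init__()
        # Convolutional encoder for the visual stream (assumed 64x64 RGB input).
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
            nn.Flatten(),             # -> (B, 64)
            nn.Linear(64, hidden_dim),
        )
        # LSTM encoder for the tactile stream (a sequence of sensor readings).
        self.tactile_encoder = nn.LSTM(tactile_dim, hidden_dim, batch_first=True)
        # Attention layer that fuses the two modality embeddings.
        self.fusion = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, image, tactile_seq):
        v = self.visual_encoder(image)                   # (B, hidden_dim)
        _, (h_n, _) = self.tactile_encoder(tactile_seq)
        t = h_n[-1]                                      # (B, hidden_dim)
        # Stack the two modality embeddings as tokens and let attention mix them.
        tokens = torch.stack([v, t], dim=1)              # (B, 2, hidden_dim)
        fused, _ = self.fusion(tokens, tokens, tokens)
        return fused.mean(dim=1)                         # (B, hidden_dim)

# Example forward pass with dummy data.
model = VisuoTactileFusion()
image = torch.randn(8, 3, 64, 64)   # batch of RGB frames
tactile = torch.randn(8, 10, 6)     # batch of 10-step tactile sequences
print(model(image, tactile).shape)  # torch.Size([8, 128])

Treating each modality embedding as a token and applying self-attention lets the network weight the visual and tactile streams per sample, rather than fixing the mix as plain concatenation would.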

Baselines for fusing visuo-tactile data:

  Visuo-Tactile Transformer (VTT)
  Product-of-Experts (POE)
  Concatenation (Concat)

Requirements:

  torch==1.9.0
  tqdm==4.48.2
  pybullet==3.1.8
  gym==0.17.2
  matplotlib==3.4.3
  numpy==1.21.2
  pandas==1.1.2

Minitouch Installation:

cd Minitouch
pip install -e .

Available Tasks:

Pushing-v0
Opening-v0
Picking-v0
Inserting-v0
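Assuming that installing Minitouch registers these tasks as Gym environments (as the --task_name flag in the example below suggests), a task can be instantiated directly; the import name minitouch is an assumption:

# Hypothetical usage sketch for the Gym 0.17 API pinned in the requirements.
import gym
import minitouch  # noqa: F401  (assumed import that registers the tasks)

env = gym.make("Pushing-v0")
obs = env.reset()
for _ in range(10):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()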

Example of Running the Code:

python train.py --encoder="VTT" --seed=0 --task_name="Pushing-v0"

Read results with read_pickle.py.
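As a minimal sketch, assuming train.py pickles its metrics to a file (the file name results.pkl is illustrative; read_pickle.py in the repository is the authoritative reader):

# Hypothetical sketch: load and inspect pickled training results with pandas.
import pandas as pd

results = pd.read_pickle("results.pkl")  # file name is an assumption
print(results)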

DEMO

For detailed information, see Report.pdf.

Credits

Original code:
