Help with implementation and grabbing logic of the Shadow Hand in MuJoCo + guidance on how to implement reinforcement learning too #2379
brunohawkins asked this question in Asking for Help
Intro
Hi!
I am an undergrad student at UCL, and I use MuJoCo for my research on bionic hands.
My setup
PyCharm with the official MuJoCo Python bindings (latest MuJoCo version), on a Windows PC with 16 GB RAM and an RTX 3070.
My question
Long story short, my goal is to use reinforcement learning and computer vision to augment robotic prosthetics for a more intuitive, user-friendly experience when picking up household objects. I have decided to do this in MuJoCo and use the Shadow Hand to complete the project.
I want to compare the performance of 3 different models:
1. Pre-programming the Shadow Hand without any AI, using hand-written logic to grab, pick up, place back down and release a 3D model of a cup.
2. Using reinforcement learning so the hand can 'learn' how to pick up the cup optimally (e.g. using just enough gripping strength to lift the cup while minimising the Shadow Hand's energy consumption).
3. To see if extracting image data about the cup can further improve gripping, combining object detection with the reinforcement learning from model 2.
I have already trained an object detection model and have started to understand how MuJoCo and its XML file format work. I already have a background in Python, so I am using the official MuJoCo Python bindings for this project.
I am struggling to implement the first two models and was wondering if anyone has any guidance.
I have been focused on model 1 since the start of January; however, getting the logic right has proven to be a challenge in itself. I am essentially giving the Shadow Hand a free joint at its base and getting it to 'float' towards the cup.
This isn't working very well, and I was wondering if you had a suggestion for a different approach.
My background in robotics is limited, but if it would be easier to attach the Shadow Hand to a robotic arm in the simulator, would you recommend doing that instead?
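To make this concrete, my current "floating" attempt looks roughly like the sketch below. The XML path, the free joint name and the mug body name are placeholders, not the exact names in my files; every step I nudge the translational part of the free joint's qpos towards the cup.

```python
import numpy as np
import mujoco
import mujoco.viewer

model = mujoco.MjModel.from_xml_path("scene_with_mug.xml")   # placeholder path
data = mujoco.MjData(model)

free_joint_adr = model.joint("hand_freejoint").qposadr[0]    # placeholder joint name
cup_body_id = model.body("mug").id                           # placeholder body name

with mujoco.viewer.launch_passive(model, data) as viewer:
    while viewer.is_running():
        # Translational part of the hand's free joint.
        hand_pos = data.qpos[free_joint_adr:free_joint_adr + 3]
        cup_pos = data.xpos[cup_body_id]
        direction = cup_pos - hand_pos
        dist = np.linalg.norm(direction)
        if dist > 1e-3:
            # Directly overwriting qpos like this fights the physics,
            # which may be part of why the motion looks wrong.
            hand_pos += 0.002 * direction / dist
        mujoco.mj_step(model, data)
        viewer.sync()
```

I have also considered welding the hand's root body to a mocap body and moving that instead of writing qpos directly, but I'm not sure whether that (or mounting the hand on an arm) is the better route.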
Secondly, I want to get started on model 2 as soon as I have implemented model 1 (I assume there will be some crossover between the two in terms of the Python code).
I would like to know what support exists for reinforcement learning on grasping objects with the Shadow Hand, and how to go about implementing it.
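To make the question concrete, this is roughly how I was planning to structure model 2: a Gymnasium environment wrapping the MuJoCo scene, trained with an off-the-shelf algorithm such as SAC from Stable-Baselines3. The XML path, the "mug" body name, the frame skip and the reward terms (lift height minus a control-effort penalty) are all placeholder choices of mine, not something I have validated:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
import mujoco

class ShadowGraspEnv(gym.Env):
    def __init__(self, xml_path="scene_with_mug.xml"):   # placeholder path
        self.model = mujoco.MjModel.from_xml_path(xml_path)
        self.data = mujoco.MjData(self.model)
        nu = self.model.nu
        obs_dim = self.model.nq + self.model.nv
        self.action_space = spaces.Box(-1.0, 1.0, shape=(nu,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(obs_dim,), dtype=np.float64)
        self.cup_id = self.model.body("mug").id           # placeholder body name

    def _obs(self):
        return np.concatenate([self.data.qpos, self.data.qvel])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        mujoco.mj_resetData(self.model, self.data)
        return self._obs(), {}

    def step(self, action):
        # Scale normalised actions into each actuator's control range.
        lo, hi = self.model.actuator_ctrlrange.T
        self.data.ctrl[:] = lo + (action + 1.0) * 0.5 * (hi - lo)
        for _ in range(5):                                # frame skip
            mujoco.mj_step(self.model, self.data)
        cup_height = self.data.xpos[self.cup_id][2]
        effort = np.sum(np.square(self.data.ctrl))
        reward = float(cup_height - 1e-3 * effort)        # lift bonus, energy penalty
        terminated = bool(cup_height > 0.3)               # arbitrary success threshold
        return self._obs(), reward, terminated, False, {}

# Planned training route (not yet tried):
#   from stable_baselines3 import SAC
#   SAC("MlpPolicy", ShadowGraspEnv(), verbose=1).learn(1_000_000)
```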
The key metrics I am looking to analyse are:
Model training time
Gripping strength in each model based on force sensors
How long each gripping cycle takes to complete
Processing power and energy consumption
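For the gripping-strength metric, my plan is to add touch sensors at the fingertip sites in the XML and read them out of data.sensordata each step. A small sketch, assuming placeholder sensor names that I would still need to define in the scene file:

```python
import numpy as np
import mujoco

model = mujoco.MjModel.from_xml_path("scene_with_mug.xml")   # placeholder path
data = mujoco.MjData(model)

# Placeholder names for <touch> sensors added to the scene XML.
touch_names = ["ff_tip_touch", "mf_tip_touch", "rf_tip_touch",
               "lf_tip_touch", "th_tip_touch"]

def fingertip_forces(model, data, names):
    """Return a dict of scalar force readings, one per touch sensor."""
    readings = {}
    for name in names:
        sid = mujoco.mj_name2id(model, mujoco.mjtObj.mjOBJ_SENSOR, name)
        adr = model.sensor_adr[sid]
        dim = model.sensor_dim[sid]
        readings[name] = float(np.sum(data.sensordata[adr:adr + dim]))
    return readings

# Example use inside the simulation loop:
#   mujoco.mj_step(model, data)
#   print(fingertip_forces(model, data, touch_names))
```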
I appreciate this is a lot of information to take in and that this is quite a specific request, but any help or guidance would be greatly appreciated. I'm focusing on the first 2 models for now because the computer vision part will be its own challenge.
Minimal model and/or code that explain my question
I am using the Shadow Hand model from the MuJoCo Menagerie, with the orientation edited so the arm starts angled to pick up a cup: https://github.com/google-deepmind/mujoco_menagerie/blob/main/shadow_hand/README.md
Implemented in a scene with a mug object here:
Python script that has the Shadow Hand and mug in a scene, where the hand tries to do a gripping motion but doesn't work (no reinforcement learning is implemented yet):
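The core of that script is roughly the following. The path, and the idea of simply ramping every position actuator towards its upper control limit to close the fingers, are simplified placeholders rather than my exact code:

```python
import mujoco
import mujoco.viewer

model = mujoco.MjModel.from_xml_path("scene_with_mug.xml")   # placeholder path
data = mujoco.MjData(model)

with mujoco.viewer.launch_passive(model, data) as viewer:
    t = 0
    while viewer.is_running():
        # Ramp each position actuator from its lower to its upper control
        # limit over ~2 simulated seconds, which should close the fingers
        # around the mug.
        frac = min(1.0, t * model.opt.timestep / 2.0)
        lo = model.actuator_ctrlrange[:, 0]
        hi = model.actuator_ctrlrange[:, 1]
        data.ctrl[:] = lo + frac * (hi - lo)
        mujoco.mj_step(model, data)
        viewer.sync()
        t += 1
```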