GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping
Omid Taheri, Vasileios Choutas, Michael J. Black, and Dimitrios Tzionas
Generating digital humans that move realistically has many applications and is widely studied, but existing methods focus on the major limbs of the body, ignoring the hands and head. Hands have been studied separately, but the focus has been on generating realistic static grasps of objects. To synthesize virtual characters that interact with the world, we need to generate full-body motions and realistic hand grasps simultaneously. Both sub-problems are challenging on their own and, together, the state space of poses is significantly larger, the scales of hand and body motions differ, and the whole-body posture and the hand grasp must agree, satisfy physical constraints, and be plausible. Additionally, the head is involved because the avatar must look at the object to interact with it. For the first time, we address the problem of generating the full-body, hand, and head motion of an avatar grasping an unknown object. As input, our method, called GOAL, takes a 3D object, its position, and a starting 3D body pose and shape. GOAL outputs a sequence of whole-body poses using two novel networks. First, GNet generates a goal whole-body grasp with a realistic body, head, arm, and hand pose, as well as hand-object contact. Second, MNet generates the motion between the starting and goal pose. This is challenging, as it requires the avatar to walk towards the object with foot-ground contact, orient its head towards it, reach out, and grasp it with a realistic hand pose and hand-object contact. To achieve this, the networks exploit a representation that combines SMPL-X body parameters and 3D vertex offsets. We train and evaluate GOAL, both qualitatively and quantitatively, on the GRAB dataset. Results show that GOAL generalizes well to unseen objects, outperforming baselines. A perceptual study shows that GOAL's generated motions approach the realism of GRAB's ground-truth motions. GOAL takes a step towards synthesizing realistic full-body object grasping. Our models and code will be available for research.
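To make the two-stage design concrete, below is a minimal PyTorch sketch of the inference pipeline described above. It is an illustration only: the network names GNet and MNet follow the paper, but the MLP architectures, the feature dimensions (OBJ_FEAT, POSE_DIM, VERT_DIM), and the generate_motion helper are hypothetical placeholders, not GOAL's actual implementation, which builds on the SMPL-X body model and richer object and motion representations.

import torch
import torch.nn as nn

# Hypothetical dimensions, loosely inspired by SMPL-X; the real code differs.
OBJ_FEAT = 1024      # object shape encoding (placeholder)
POSE_DIM = 165       # flattened whole-body pose parameters (placeholder)
VERT_DIM = 400 * 3   # subsampled 3D vertex offsets (placeholder)

class GNet(nn.Module):
    """Sketch of the goal-grasp network: maps an object encoding and its
    position to a whole-body goal grasp pose (body, head, arm, hand)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(OBJ_FEAT + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, POSE_DIM),
        )

    def forward(self, obj_feat, obj_pos):
        return self.mlp(torch.cat([obj_feat, obj_pos], dim=-1))

class MNet(nn.Module):
    """Sketch of the motion network: predicts the next frame (pose
    parameters plus vertex offsets) from the current frame and the goal."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * POSE_DIM, 512), nn.ReLU(),
            nn.Linear(512, POSE_DIM + VERT_DIM),
        )

    def forward(self, cur_pose, goal_pose):
        out = self.mlp(torch.cat([cur_pose, goal_pose], dim=-1))
        return out[..., :POSE_DIM], out[..., POSE_DIM:]

def generate_motion(gnet, mnet, obj_feat, obj_pos, start_pose, n_frames=60):
    """Two-stage inference: predict the goal grasp once with GNet, then
    roll MNet forward autoregressively from the starting pose."""
    goal_pose = gnet(obj_feat, obj_pos)
    poses, cur = [start_pose], start_pose
    for _ in range(n_frames):
        cur, _offsets = mnet(cur, goal_pose)  # offsets would refine vertices
        poses.append(cur)
    return torch.stack(poses, dim=1), goal_pose

# Toy usage with random inputs (batch of 1).
gnet, mnet = GNet(), MNet()
motion, goal = generate_motion(
    gnet, mnet,
    obj_feat=torch.randn(1, OBJ_FEAT),
    obj_pos=torch.randn(1, 3),
    start_pose=torch.randn(1, POSE_DIM),
)
print(motion.shape)  # (1, 61, POSE_DIM)

In this sketch the goal grasp acts as a fixed conditioning signal for every step of the rollout, which mirrors the paper's split between a one-shot grasp prediction and an autoregressive in-between motion generator.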
Video
Publication
Data and Code
Please register and accept the License Agreement on this website to access the GOAL models.
When creating an account, please opt in to email communication, so that we can reach out to you by email to announce significant updates.
- GNet and MNet model files/weights (available only after sign-in)
- Code for GOAL (GitHub)
Referencing GOAL
@inproceedings{taheri2021goal,
  title     = {{GOAL}: {G}enerating {4D} Whole-Body Motion for Hand-Object Grasping},
  author    = {Taheri, Omid and Choutas, Vasileios and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {Conference on Computer Vision and Pattern Recognition ({CVPR})},
  year      = {2022},
  url       = {https://goal.is.tue.mpg.de}
}
@inproceedings{GRAB:2020,
  title     = {{GRAB}: {A} Dataset of Whole-Body Human Grasping of Objects},
  author    = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {European Conference on Computer Vision ({ECCV})},
  year      = {2020},
  url       = {https://grab.is.tue.mpg.de}
}
Disclaimer:
MJB has received research gift funds from Adobe, Intel, Nvidia, Facebook, and Amazon. While MJB is a part-time employee of Amazon, his research was performed solely at, and funded solely by, Max Planck. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH.
Acknowledgments:
This research was partially supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and the Max Planck ETH Center for Learning Systems (CLS).
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B.
We thank:
- Tsvetelina Alexiadis for the Mechanical Turk experiments.
- Taylor McConnell for the voice recordings.
- Joachim Tesch for help with the renderings.
- Benjamin Pellkofer for website design, IT, and web support.
Contact
For questions, please contact goal@tue.mpg.de.
For commercial licensing, please contact ps-licensing@tue.mpg.de.