ObMan Dataset

Synthetic Object Manipulation

The ObMan dataset is a large-scale synthetic image dataset of hands grasping objects. Body poses are sampled from MoCap data, and hand poses are generated by the automatic robotic grasping software GraspIt. The realistic body model SMPL+H is rendered grasping ShapeNet object models with large variation in pose, background, texture, and lighting. The dataset contains 150K images, along with ground-truth 3D hand and object meshes, 2D/3D hand keypoints, object and hand segmentations, and depth maps.
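Since each image comes with several per-frame annotations (keypoints, meshes, masks, depth), a typical workflow is to read a structured metadata record per sample. The sketch below is illustrative only: the pickle format, directory layout, and key names (`coords_2d`, `coords_3d`, `obj_path`) are assumptions for demonstration, not the actual ObMan schema.

```python
import pickle
import tempfile
from pathlib import Path

# Hypothetical per-image annotation record; the key names and units
# here are assumptions, not the actual ObMan release schema.
sample = {
    "coords_2d": [[120.0, 96.5]] * 21,        # 21 hand keypoints in pixels
    "coords_3d": [[0.01, -0.02, 0.55]] * 21,  # camera-space coordinates in metres
    "obj_path": "shapenet/02876657/model.obj",  # grasped ShapeNet model
}

# Write a dummy record to mimic an on-disk annotation file.
tmp = Path(tempfile.mkdtemp())
meta_file = tmp / "00000000.pkl"
with open(meta_file, "wb") as f:
    pickle.dump(sample, f)

# Load it back the way a data loader would.
with open(meta_file, "rb") as f:
    meta = pickle.load(f)

print(len(meta["coords_2d"]))  # 21 keypoints per hand
```

In practice one would iterate over the image directory and pair each RGB frame with its annotation record by filename.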

>> Download request

BibTeX

When using ObMan, please cite:
@INPROCEEDINGS{hasson19_obman,
  title     = {Learning joint reconstruction of hands and manipulated objects},
  author    = {Hasson, Yana and Varol, G{\"u}l and Tzionas, Dimitrios and Kalevatykh, Igor and Black, Michael J. and Laptev, Ivan and Schmid, Cordelia},
  booktitle = {CVPR},
  year      = {2019}
}

More information