TUB Dataset (December 2016)

Dataset: TUB Human Grasping Dataset (December 2016)

Research Group: Technical University of Berlin (TUB)

Hand Type: Human Hand

Data Type: Human Motion Data, Human Postures, Forces Exerted

Data Structure: Joint Angles (raw values)

Data Format: .txt

Sampling Rate: >=100 Hz (100 Hz)

Action Type: Reach and Grasp

Objects Type: Real Objects

Kin. Model #DOFs: >20 (21)

Equipment: a motion capture system with 18 infrared cameras, four video cameras, a Cyberglove II (CyberGlove Systems), a touchscreen, and a force-torque sensor

# of Actions: > 20 (4250)

# of Subjects: > 5 (17)

Year: 2016

Description:

Grasping data were recorded using five sensor systems: a touchscreen mounted on top of a force-torque sensor, four cameras, a Cyberglove II, and a motion capture system with passive markers. The motion capture system, with 18 infrared cameras, was calibrated before each experiment to ensure measurement accuracy; it measured the position and orientation of the right hand during grasping. The force-torque sensor recorded the contact forces and torques applied to the surface. The 22-inch touchscreen display was embedded in the desk to gather touch information and to display visual hints in some grasping scenarios. The participants had to wear three gloves: a rather slim glove for hygienic reasons, the Cyberglove, and a conductive glove required for use with the touchscreen. The four cameras were placed at varying angles to capture the whole grasping area from several perspectives.

Dataset Information:

In each trial, the participant grasps one of 25 different objects placed in front of her on a table and lifts it in accordance with the experimental protocol (detailed below). At the start of a trial, the participant’s hand rests at the starting position on the table. Following an audio signal, the participant initiates the grasp using only the right hand. Data for the trial are recorded from the audio signal until the object is lifted. Seventeen right-handed subjects (seven female, aged 23 to 35 years) participated in the experiment. Subjects had no prior knowledge of the purpose of the experiment and took part in a single experimental session lasting about two hours. Participants gave informed consent prior to the experiments, and the experimental protocol was approved by the Institutional Review Board of the University. Participants received financial compensation of 8 euros per hour.

We used 25 different objects, each assigned to one of six categories. The objects in each category were chosen to elicit different grasping actions. The categories are named after the targeted grasping behavior, including the category ’new’ for strategies that had not been observed in previous experiments (the grouping is also sketched as a Python mapping after the list):

• flip: button, french chalk, key, shell
• edge grasp: credit card, CD, comb, game card
• closing: salt shaker, tape, toy, chestnut, matchbox
• pinch: screw, match, cigarette, rubber band
• rotation: marker, screw driver, shashlik, glasses
• new: coffee mug, plate, book, bowl
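For convenience, the same grouping can be written down as a simple Python mapping. This is a minimal sketch: the object and category names are transcribed from the list above and may differ from the identifiers used in the dataset’s own label files.

# Hypothetical category-to-object mapping, transcribed from the list above.
# The dataset's label files may use different identifiers.
CATEGORIES = {
    "flip": ["button", "french chalk", "key", "shell"],
    "edge grasp": ["credit card", "CD", "comb", "game card"],
    "closing": ["salt shaker", "tape", "toy", "chestnut", "matchbox"],
    "pinch": ["screw", "match", "cigarette", "rubber band"],
    "rotation": ["marker", "screw driver", "shashlik", "glasses"],
    "new": ["coffee mug", "plate", "book", "bowl"],
}

# Sanity check: 25 objects in total across the six categories.
assert sum(len(objects) for objects in CATEGORIES.values()) == 25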

Subjects grasped under different experimental conditions. In addition to the normal vision condition, we included an impaired vision condition in which subjects wore frosted-glass goggles. We also manipulated whether or not the participants would use the table for support.

We told the subjects to imagine that the table was extremely hot and to avoid contact with it. To reinforce the instructions, we presented an image of burning charcoal on the touchscreen and played a loud noise upon contact. The visual conditions (normal vs. impaired) and the surface conditions (normal vs. hot) were factorially combined. In these four experimental conditions, participants were instructed to grasp, lift, and hold the object. We also ran a ’use’ condition, with normal vision and the normal surface, in which participants were instructed to grasp the object with the intention of using it. To make this scenario more realistic, we attached a “use-board” to the setup, offering several ways to interact with the objects. Each object was grasped twice in each of the five experimental conditions, resulting in a total of 250 trials per participant.
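As a quick sanity check on the trial count, the five conditions and the resulting 250 trials per participant can be enumerated as follows. This is a minimal sketch; the condition names are illustrative, not identifiers from the dataset.

from itertools import product

# The 2x2 factorial combination of vision (normal vs. impaired)
# and surface (normal vs. hot) ...
conditions = [f"vision={v}, surface={s}"
              for v, s in product(["normal", "impaired"], ["normal", "hot"])]
# ... plus the extra 'use' condition (normal vision, normal surface).
conditions.append("use (normal vision, normal surface)")

n_objects, n_repetitions = 25, 2
# 5 conditions x 25 objects x 2 repetitions = 250 trials per participant.
print(len(conditions) * n_objects * n_repetitions)  # 250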

Directory Organization:

– “mocap_ft_cyberglove_touchscreen”: Contains the data from the motion capture system, force/torque sensor, Cyberglove II, and the touchscreen.
– “labels1”: Contains the labelling of the camera data by an expert.
– “labels2”: Contains the labelling of the camera data by a novice viewer.
– “log”: Contains information about data that may be missing from some trials for technical reasons.
– “cameras”: Contains the video files.
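The following is a minimal sketch of how one might load a single trial from the sensor-data directory, assuming the plain-text files can be read with numpy. The dataset root and the per-trial file naming are hypothetical; consult the accompanying documentation for the actual layout.

import os
import numpy as np

# Hypothetical dataset root containing the five directories listed above.
DATASET_ROOT = "tub_grasping_2016"
SENSOR_DIR = os.path.join(DATASET_ROOT, "mocap_ft_cyberglove_touchscreen")

def load_trial(filename):
    """Load one plain-text trial file as a 2-D array (samples x channels)."""
    return np.loadtxt(os.path.join(SENSOR_DIR, filename))

# Example usage (hypothetical file name). At the 100 Hz sampling rate,
# row i of the returned array corresponds to time i / 100.0 seconds.
# trial = load_trial("subject01_trial001.txt")
# print(trial.shape)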

How to Cite:

You can use the following BibTeX citation:

@inproceedings{puhlmann2016compact,
  title={A Compact Representation of Human Single-Object Grasping},
  author={Puhlmann, Steffen and Heinemann, Fabian and Brock, Oliver and Maertens, Marianne},
  booktitle={2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={1954--1959},
  year={2016},
  organization={IEEE}
}