This is a release of the motion capture data from a walking experiment conducted at UC Berkeley. The rest of this page provides a brief introduction to the data itself. At the bottom of the page is a form to fill out; upon completion, you will receive a link to the data. If you use this data, we ask that you cite our original paper describing the data collection and processing:
A. Ames, R. Vasudevan, and R. Bajcsy, “Human-Data Based Cost of Bipedal Robotic Walking,” Hybrid Systems: Computation and Control, 2011.
Additional information about this data, in particular the development of a cost function from it, can be found here:
R. Vasudevan, A. Ames, and R. Bajcsy, “Using Persistent Homology to Determine a Human-Data Based Cost for Bipedal Walking,” International Federation of Automatic Control, 2011.
Experimental Setup
The data available for download was collected using the PhaseSpace motion capture system, which computes the 3D position of 10 LED sensors at 480 frames per second using 12 cameras. The cameras were calibrated prior to the experiment and were placed to achieve 1-millimeter accuracy over a 4 by 4 by 4 meter capture volume. Five LED sensors were placed on each leg, as the figure illustrates. Each sensor was fastened to the subject in a manner that ensured it did not move during the experiment.
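To make the capture parameters concrete, the following is a minimal sketch of how one trial's raw data might be laid out, assuming a (frames, sensors, xyz) array with NaN marking dropped-out samples. The names and the array layout are illustrative assumptions, not the released file format.

```python
import numpy as np

FRAME_RATE_HZ = 480   # capture rate stated above
N_SENSORS = 10        # 5 LED sensors per leg

def trial_array(duration_s):
    """Allocate a (frames, sensors, 3) array for one trial.

    Entries start as NaN, the same convention a loader might use to
    mark sensor dropouts before interpolation.
    """
    frames = int(duration_s * FRAME_RATE_HZ)
    return np.full((frames, N_SENSORS, 3), np.nan)

# A hypothetical 4-second trial yields 1920 frames per sensor.
a = trial_array(4.0)
```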
Each trial of the experiment required the subject to walk 3 meters along a line drawn on the floor (shown in blue in the figure). To simplify the data analysis, each subject was required to place their right foot at the starting point of the line at the outset of the trial and was told to walk in a natural manner. Each subject performed 12 trials, which constituted a single experiment. Three female and six male subjects performed the experiment, with ages ranging from 17 to 77 years, heights from 161 to 189 centimeters, and weights from 47.6 to 90.7 kilograms.
Data Processing
To make the data collected from the walking experiment amenable to analysis, it was processed through a three-step procedure: interpolation, rotation, and averaging. Since the motion capture signal drops out periodically due to self-occlusions, we first interpolate the data to compensate for sensors falling out of view of the cameras. The result of this initial processing is relatively clean data over the course of a few steps (with the number of steps depending on the individual). From each trial, at least two steps are isolated (one with the right leg and another with the left leg) by ensuring that the data repeats. The data is then rotated so that the walking occurs in the x-direction. Since we are only interested in the data corresponding to constraint enforcement, only the sensor data for the heel and toe of each leg are considered. For each subject, this data is taken from all 12 walking trials and averaged (after appropriately shifting the data in time), which results in a single trajectory for each constraint, for each subject, over at least two steps (one step per leg).
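The three processing steps above can be sketched as follows. This is a minimal illustration under simplifying assumptions (linear interpolation across NaN gaps, an in-plane rotation computed from net displacement, and integer-frame time shifts for alignment); the function names are hypothetical and do not come from the released code.

```python
import numpy as np

def interpolate_dropouts(traj):
    """Linearly interpolate NaN gaps (sensor dropouts) in an (N, 3) trajectory."""
    traj = traj.copy()
    t = np.arange(traj.shape[0])
    for axis in range(3):
        col = traj[:, axis]
        good = ~np.isnan(col)
        traj[:, axis] = np.interp(t, t[good], col[good])
    return traj

def rotate_to_x(traj):
    """Rotate in the ground plane so the net walking displacement lies along +x."""
    dx = traj[-1, 0] - traj[0, 0]
    dy = traj[-1, 1] - traj[0, 1]
    theta = np.arctan2(dy, dx)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s, 0.0],
                  [-s, c, 0.0],
                  [0.0, 0.0, 1.0]])
    return traj @ R.T

def average_trials(trials, shifts):
    """Average equal-length trials after shifting each by an integer frame offset."""
    aligned = [np.roll(tr, -s, axis=0) for tr, s in zip(trials, shifts)]
    return np.mean(aligned, axis=0)
```

In practice the time shifts would be chosen by aligning a common gait event (e.g. heel strike) across trials before averaging; here they are simply given as inputs.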