BASES Conference 2011

Find below an abstract presented at the British Association of Sport and Exercise Sciences (BASES) annual conference in 2011. The abstract can also be found in the Journal of Sports Sciences [1].

[1] Wheat, J., Hart, J., Domone, S., & Outram, T. (2011). Obtaining body segment inertia parameters using structured light scanning with Microsoft Kinect. Journal of Sports Sciences, 29(sup2), S23–S24. doi:10.1080/02640414.2011.609363

Background

Accurate estimates of body segment inertia parameters (BSIP) are important for many biomechanical analyses. BSIP have been estimated using techniques based on, for example, proportional scaling, geometric modelling and medical imaging. However, these methods are limited in that they are, variously, not specific to the individual, time consuming, difficult to access, or subject to ethical constraints. Laser scanning methods have also been used to obtain BSIP, but the equipment required is often very expensive. Recently, Wicke and Dumas (2010, Journal of Applied Biomechanics, 26, 26–31) suggested that structured light scanning techniques would be suitable for obtaining individual-specific BSIP. Microsoft recently released a peripheral capable of structured light scanning – the Kinect, costing ~£100 – offering the possibility of obtaining individual-specific estimates of BSIP using commodity hardware.

Purpose

The purpose of this study was to investigate the accuracy with which BSIP can be obtained from structured-light scanning using Microsoft Kinect.

Method

The ‘lumbar’ segment of a Choking Charlie mannequin was scanned using a ModelMaker D100 non-contact laser scanner, which provided a gold-standard estimate of the volume and – after assuming uniform density – inertia parameters of the test object. Volume and inertia parameter estimates of the test object were then obtained using the Kinect system. Briefly, the system comprised the Kinect, fixed on a tripod, and a Polhemus Liberty electromagnetic tracking system. When scanning the object, consecutive scans were taken by rotating the object and transforming the 3D coordinates of the points from each ‘snap-shot’ to a moving coordinate system (CS) fixed in the test object. The object-fixed CS was defined relative to a Polhemus sensor CS (attached to the object) in a calibration procedure by digitising relevant landmarks. Once a complete geometry of the object was obtained, it was post-processed using 1 cm uniform sub-sampling and mesh fitting. Subsequently, inertia parameters of the test object – mass, centre of mass location (COM), moment of inertia (I) – were calculated. Twenty-five scans were collected.

Results

Percentage errors for object mass, COM, Ixx, Iyy, Izz were -1.9 ± 1.6 %, 0.5 ± 0.4 %, -3.2 ± 2.7 %, 2.8 ± 2.3 %, -3.0 ± 2.8 %, respectively.
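As a note on how figures of this form are computed, the mean ± SD percentage error across the repeated scans (negative values indicating underestimation relative to the gold standard) can be expressed as below; the function name is ours, for illustration only.

```python
import numpy as np

def percent_error_summary(estimates, gold):
    """Mean and sample SD of percentage error of repeated estimates
    against a gold-standard value; negative means underestimation."""
    err = 100.0 * (np.asarray(estimates, dtype=float) - gold) / gold
    return err.mean(), err.std(ddof=1)   # ddof=1: sample SD across scans
```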

Discussion

Errors in the estimation of important inertia parameters obtained with the current system compare favourably with errors reported in the literature – for example, mass, COM and I compare well to geometric models (e.g. Wicke and Dumas, 2010). Importantly, Wicke and Dumas's gold standard inertia data did not assume uniform density. This possibly explains some of the difference, but the uniform density assumption has been reported to have only a secondary influence (Wicke and Dumas, 2010). A strength of the system presented here is that it leverages cheap, readily available commodity hardware (Kinect) to obtain the scan data. However, a limitation is that – to register multiple point clouds – a Polhemus six degrees of freedom tracking system was used. The use of an electromagnetic tracking system adds expense and complexity. However, combining the Kinect with such a system allows point clouds to be scanned into an anatomical CS and segmentation to be performed ‘on-the-fly’. Furthermore, analyses requiring BSIP are rarely conducted without this type of system – or similar (e.g. optoelectronic system). Being mindful of potential interference with optoelectronic systems based on infrared light, the Polhemus system could easily be replaced with any system capable of tracking 3D position and orientation.
Conclusion

Our results suggest structured light scanning using a system based on Microsoft Kinect can be used to obtain accurate BSIP.