Gait & Posture

Volume 51, January 2017, Pages 162-168

Full length article
Validity and sensitivity of the longitudinal asymmetry index to detect gait asymmetry using Microsoft Kinect data

https://doi.org/10.1016/j.gaitpost.2016.08.022

Highlights

  • The Continuous Relative Phase is not usable with Kinect.

  • The new longitudinal index ILong is sensitive and valid for detecting gait asymmetry.

  • The ILong index works with the Kinect placed either in front of or behind subjects.

  • Only a few strides (at least 5) are needed to compute a valid ILong.

Abstract

Gait asymmetry information is a key point in disease screening and follow-up. The Continuous Relative Phase (CRP) has been used to quantify a within-stride asymmetry index, but it requires noise-free, accurate motion capture, which is difficult to obtain in clinical settings. This study explores a new index, the Longitudinal Asymmetry Index (ILong), which is derived from data delivered by a low-cost depth camera (Kinect). ILong is based on depth images averaged over several gait cycles, rather than on derived joint positions or angles. This study aims to evaluate (1) the validity of CRP computed with Kinect, (2) the validity and sensitivity of ILong for measuring gait asymmetry based solely on data provided by a depth camera, (3) the clinical applicability of a posteriorly mounted camera system to avoid occlusion caused by standard front-fitted treadmill consoles, and (4) the number of strides needed to reliably calculate ILong. The gait of 15 subjects was recorded concurrently with a marker-based system (MBS) and Kinect, and asymmetry was artificially induced by attaching a 5 cm sole to one foot. CRP computed with Kinect was not reliable. ILong detected the disturbed gait reliably and could be computed from a posteriorly placed Kinect without loss of validity. A minimum of five strides was needed to achieve a correlation coefficient of 0.9 between the standard MBS-based and the low-cost depth-camera-based ILong. ILong provides a clinically pragmatic method for measuring gait asymmetry, with application to improved patient care through enhanced disease screening, diagnosis and monitoring.

Introduction

Lower-limb gait asymmetry (GA) is characteristic of a number of chronic neurological and musculoskeletal diseases [1], including cerebral palsy [2], Parkinson’s disease [3] and knee and hip osteoarthritis [4], [5]. Onset of GA can also follow acute events such as stroke [6] and amputation [7]. Thus, quantification of GA has become central to the clinical monitoring and treatment of many chronic and acute neurological and musculoskeletal diseases [8], [9].

Gait asymmetry is typically evaluated using kinematic data and spatio-temporal gait parameters (step/stride length and duration). Based on these global variables, indicators have been introduced to quantify asymmetry, including the Symmetry Ratio [6] and the Symmetry Index [6]. These indicators provide global asymmetry information for a studied gait cycle, but cannot be used to analyse how asymmetry evolves at specific instants within the gait cycle.
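These global indicators are simple to compute from a single bilateral parameter. A minimal sketch is given below; the function names and the example step lengths are illustrative, and the Symmetry Index is written in its common absolute-percentage form (variants with signed or differently normalized denominators also exist):

```python
import numpy as np

def symmetry_index(left, right):
    """Symmetry Index (SI) for a spatio-temporal gait parameter.

    0 % indicates perfect symmetry. `left` and `right` are scalar
    parameters such as step length or step duration. This is the common
    absolute form normalized by the mean of the two sides.
    """
    return 100.0 * abs(left - right) / (0.5 * (left + right))

def symmetry_ratio(left, right):
    """Symmetry Ratio: 1.0 indicates perfect symmetry."""
    return left / right

# Illustrative step lengths: 0.62 m (left) and 0.58 m (right).
si = symmetry_index(0.62, 0.58)   # ≈ 6.67 %
sr = symmetry_ratio(0.62, 0.58)   # ≈ 1.069
```

Note that both indicators collapse the whole gait cycle into one number, which is precisely the limitation discussed above.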

To overcome this limitation, some authors have proposed analysis of the Continuous Relative Phase (CRP) and its derivatives [10]. For a given joint, this indicator constructs a phase diagram from the joint's angular value and angular velocity. Gait asymmetry can then be assessed by comparing the CRP of corresponding left and right joints. It has been used to measure asymmetry caused by weights placed on one foot [11] and by pathologies such as stroke [12].
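A phase-portrait construction of the CRP can be sketched as follows. This is one common formulation (normalized angle and angular velocity combined via `arctan2`), not necessarily the exact variant used in the cited studies, and the sinusoidal test signals are illustrative:

```python
import numpy as np

def phase_angle(theta, dt):
    """Phase angle of a joint over one time-normalized gait cycle.

    Phase-portrait form: normalize the angle and its time derivative to
    [-1, 1], then take arctan2(velocity, angle) at each sample.
    (Hilbert-transform variants of the CRP also exist.)
    """
    omega = np.gradient(theta, dt)
    theta_n = 2 * (theta - theta.min()) / (theta.max() - theta.min()) - 1
    omega_n = omega / np.abs(omega).max()
    return np.degrees(np.arctan2(omega_n, theta_n))

def crp(theta_left, theta_right, dt):
    """Continuous Relative Phase between corresponding left/right joints."""
    return phase_angle(theta_left, dt) - phase_angle(theta_right, dt)

# Illustrative "knee angle" curves over one normalized cycle (degrees):
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
left = 30.0 * np.sin(2 * np.pi * t)
right = 30.0 * np.sin(2 * np.pi * t + 0.2)   # slight phase lead on one side
crp_curve = crp(left, right, dt)             # nonzero where gait is asymmetric
```

A perfectly symmetric gait yields a CRP of zero throughout the cycle; deviations from zero localize where within the cycle asymmetry occurs.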

However, CRP is based on sagittal joint angles and their derivatives, which impose the use of accurate, noise-free motion capture systems, such as optoelectronic systems [11], [12]. Such systems are difficult to use in daily practice in small clinics, as they require specific technical skills, calibration, and manual processing. In everyday practice, especially in small clinics, easy-to-use and low-cost systems are preferable, such as the depth cameras developed for the gaming industry, which have recently been proposed for clinical gait analysis [13], [14]. However, the authors reported problems when computing temporal information due to inaccuracies and noise. These problems could impose significant limitations when computing CRP, especially for the computation of reliable joint angular velocities, but this has not yet been tested.

To tackle this potential problem, recent works proposed the direct use of depth images instead of skeleton data to perform clinical gait analysis. Accurate determination of global spatiotemporal data [15] and a continuous asymmetry index, the Longitudinal Asymmetry Index (ILong) [16], have been investigated using data obtained from low-cost depth cameras such as the Kinect. ILong evaluates gait asymmetry at each instant of the gait cycle according to the instantaneous spatial distance between the right leg motion expressed during a stride starting with a right-foot strike, and the mirrored left leg motion expressed during a stride starting with a left-foot strike. To reduce noise, a representative gait cycle made of a sequence of averaged depth images (obtained from hundreds of gait cycles) was used to compute ILong. However, no benefit of ILong over CRP was investigated. Moreover, the authors validated this approach with a Kinect placed in front of a treadmill in order to capture and use numerous gait cycles. This was achieved using a specialised treadmill with a remotely placed console so as not to obscure the camera view. ILong computed from an anteriorly placed Kinect would not be applicable to standard treadmills available within a clinical setting, all of which have a front-fitted console for safety reasons.
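The idea behind ILong can be illustrated with a toy computation. The distance measure below (mean absolute depth difference between the right-stride sequence and the horizontally mirrored left-stride sequence) is an assumption for illustration only; the published index may differ in its exact distance metric and normalization:

```python
import numpy as np

def longitudinal_asymmetry_index(right_cycle, left_cycle):
    """Sketch of an ILong-style index from averaged depth images.

    `right_cycle` and `left_cycle` have shape (T, H, W): sequences of
    depth images averaged over many strides and time-normalized to T
    samples, one starting at a right foot-strike and one at a left
    foot-strike. The left sequence is mirrored about the vertical axis
    so that, for a perfectly symmetric gait, the two sequences coincide.
    Returns the asymmetry value at each of the T instants of the cycle.
    """
    mirrored_left = left_cycle[:, :, ::-1]           # horizontal flip
    return np.abs(right_cycle - mirrored_left).mean(axis=(1, 2))

# Toy data: a left sequence that is the exact mirror of the right one
# (perfect symmetry) yields zero asymmetry at every instant.
rng = np.random.default_rng(0)
right = rng.random((50, 64, 48))
left = right[:, :, ::-1].copy()
ilong = longitudinal_asymmetry_index(right, left)    # zeros everywhere
```

Because the index operates on averaged depth images directly, it never requires the noisy skeleton joint positions or their derivatives.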

Therefore, this study aimed to evaluate (1) the validity and sensitivity of the CRP computed with Kinect, (2) the validity and sensitivity of ILong for measuring gait asymmetry given the noisy data provided by Kinect, (3) the clinical applicability of ILong by placing the Kinect posteriorly rather than anteriorly to the treadmill, to reduce occlusion caused by front-fitted treadmill consoles, and (4) the number of strides needed to reliably calculate ILong.

The main hypotheses were:

  • H1: CRP computed with Kinect data is not reliable, supporting the development of a noise-resistant continuous asymmetry index based on Kinect data.

  • H2: ILong computed with Kinect and Vicon data is valid and sensitive enough to distinguish asymmetrical gaits from natural ones.

  • H3: placing the Kinect behind the subject does not affect the validity of ILong.

  • H4: the number of gait cycles averaged to compute ILong does not strongly affect the results, supporting the idea that a low number of cycles may suffice for some patients.


Methods

A protocol similar to one described previously [16], [17] was carried out. Healthy subjects were asked to walk on a treadmill with a unilateral imposed perturbation (a 5 cm sole placed below one foot) to induce gait asymmetry. CRP and ILong were then both computed from concurrently recorded Kinect and Vicon data.

Results

Fig. 2 depicts CRP computed with Vicon and with Kinect data. As shown in Table 1A, CRP computed with Vicon data was able to detect asymmetry within the stride up to 95% of the time. However, CRP computed with Kinect data at the shank could only detect asymmetry for 73% (left deformation) and 52% (right deformation) of the stride. The results were similar when the CRP was computed at the thigh. Correlations between CRP computed with Vicon and Kinect data along the cycle were 0.45 and 0.11

Discussion

The poor results obtained with CRP computed from Kinect data support hypothesis H1: CRP was not reliably computed from Kinect gait data, which can be explained by two factors. Firstly, differentiating the noisy joint angles delivered by the Kinect yields even noisier angular velocities. Previous work demonstrated a correlation of 0.8 between joint angles delivered by the Kinect and an optoelectronic system [21]. This could partly explain the low correlation between CRP computed with Kinect and Vicon data.
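The noise amplification caused by differentiation can be demonstrated numerically. The parameters below (a 1 Hz joint-angle oscillation sampled at the Kinect's 30 Hz frame rate, with 0.5° of added measurement noise) are illustrative assumptions:

```python
import numpy as np

fs = 30.0                                   # Kinect frame rate (Hz)
t = np.arange(0.0, 5.0, 1.0 / fs)
clean = 30.0 * np.sin(2 * np.pi * t)        # joint angle (degrees)
noise = np.random.default_rng(1).normal(0.0, 0.5, t.size)
noisy = clean + noise                       # what the sensor delivers

true_vel = 30.0 * 2 * np.pi * np.cos(2 * np.pi * t)   # analytic derivative
est_vel = np.gradient(noisy, 1.0 / fs)                # finite differences

# Signal-to-noise ratio of the angle vs. the estimated velocity:
angle_snr = np.std(clean) / np.std(noisy - clean)
vel_snr = np.std(true_vel) / np.std(est_vel - true_vel)
# vel_snr is markedly lower than angle_snr: differencing multiplies the
# noise by roughly fs while the signal amplitude grows only by 2*pi*f.
```

This is why a phase-portrait CRP, which needs angular velocity at every sample, degrades much faster with sensor noise than indices computed from positions or depth images alone.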

Conclusion

This study shows that CRP computed with Kinect skeleton data failed to assess GA, whereas validity and sensitivity analysis demonstrated reliable assessment when using the new GA index (ILong) based on depth images. This provides the clinician with a longitudinal space-time GA index within the gait cycle for patients walking on a treadmill. We have shown that placing the depth camera in front of or behind the subject had no strong influence on the results, and that even a small number of gait cycles (a minimum of five strides) is sufficient to compute a valid ILong.

Conflict of interest

The authors declare that they have no conflict of interest.

Acknowledgements

This work was funded by the Wellcome Trust and EPSRC (grant number 088844/Z/09/Z) and by the Association nationale de la recherche et de la technologie (ANRT, France) through Conventions Industrielles de Formation par la REcherche (CIFRE) funding 146/2009.

