On learning disentangled representations for gait recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
Gait, the walking pattern of an individual, is an important biometric modality. Most existing gait recognition methods take silhouettes or articulated body models as gait features. These methods suffer from degraded recognition performance when handling confounding variables, such as clothing, carrying condition, and viewing angle. To remedy this issue, we propose a novel AutoEncoder framework, GaitNet, to explicitly disentangle appearance, canonical, and pose features from RGB imagery. An LSTM integrates pose features over time into a dynamic gait feature, while canonical features are averaged into a static gait feature; both are used as classification features. In addition, we collect a Frontal-View Gait (FVG) dataset to focus on gait recognition from frontal-view walking, a challenging problem since the frontal view contains minimal gait cues compared to other views. FVG also includes other important variations, e.g., walking speed, carrying, and clothing. With extensive experiments on the CASIA-B, USF, and FVG datasets, our method quantitatively outperforms the state of the art, qualitatively demonstrates feature disentanglement, and offers promising computational efficiency. We further compare GaitNet with state-of-the-art face recognition to demonstrate the advantages of gait biometrics in certain scenarios, e.g., long distance/low resolution and cross viewing angles. Source code is available at http://cvlab.cse.msu.edu/project-gaitnet.html.
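The data flow described in the abstract can be summarized in a short sketch: a per-frame encoder splits each RGB frame into appearance, canonical, and pose codes; an LSTM integrates the pose codes over time into a dynamic gait feature, the canonical codes are averaged into a static gait feature, and the two are concatenated for classification. The PyTorch sketch below illustrates this feature path only; all module sizes, layer choices, and the names `DisentanglingEncoder` and `GaitNetSketch` are assumptions for illustration, not the authors' implementation, and the decoder and disentanglement losses that train the encoder are omitted.

```python
# Minimal sketch of a GaitNet-style feature path (hypothetical shapes/sizes).
import torch
import torch.nn as nn

class DisentanglingEncoder(nn.Module):
    """Encodes one RGB frame into appearance, canonical, and pose codes."""
    def __init__(self, app_dim=128, can_dim=128, pose_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.app_head = nn.Linear(64, app_dim)    # appearance (clothing, texture)
        self.can_head = nn.Linear(64, can_dim)    # canonical (view-invariant body shape)
        self.pose_head = nn.Linear(64, pose_dim)  # pose (frame-level body configuration)

    def forward(self, frame):
        h = self.backbone(frame)
        return self.app_head(h), self.can_head(h), self.pose_head(h)

class GaitNetSketch(nn.Module):
    """Aggregates per-frame codes into static + dynamic gait features."""
    def __init__(self, can_dim=128, pose_dim=64, lstm_dim=128, num_ids=100):
        super().__init__()
        self.encoder = DisentanglingEncoder(can_dim=can_dim, pose_dim=pose_dim)
        self.lstm = nn.LSTM(pose_dim, lstm_dim, batch_first=True)
        self.classifier = nn.Linear(can_dim + lstm_dim, num_ids)

    def forward(self, video):                 # video: (B, T, 3, H, W)
        B, T = video.shape[:2]
        _, can, pose = self.encoder(video.flatten(0, 1))
        static = can.view(B, T, -1).mean(dim=1)        # averaged canonical codes
        out, _ = self.lstm(pose.view(B, T, -1))
        dynamic = out[:, -1]                           # LSTM-integrated pose codes
        gait = torch.cat([static, dynamic], dim=1)     # combined identity feature
        return self.classifier(gait), gait

# Usage on a dummy clip: batch of 2 videos, 16 frames, 64x64 RGB.
model = GaitNetSketch()
logits, feat = model(torch.randn(2, 16, 3, 64, 64))
print(logits.shape, feat.shape)  # torch.Size([2, 100]) torch.Size([2, 256])
```

In the paper, the disentanglement itself is learned with decoder-based reconstruction objectives on RGB frames; the sketch shows only the inference-time path from a video clip to the concatenated static/dynamic gait feature used for classification.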