Learning 3D action models from a few 2D videos for view invariant action recognition
2010 IEEE Computer Society Conference on Computer Vision and …, 2010•ieeexplore.ieee.org
Most existing approaches for learning action models work by extracting suitable low-level features and then training appropriate classifiers. Such approaches require large amounts of training data and do not generalize well to variations in viewpoint, scale and across datasets. Some work has been done recently to learn multi-view action models from Mocap data, but obtaining such data is time-consuming and requires costly infrastructure. We present a method that addresses both these issues by learning action models from just a few video training samples. We model each action as a sequence of primitive actions, represented as functions which transform the actor's state. We formulate model learning as a curve-fitting problem, and present a novel algorithm for learning human actions by lifting 2D annotations of a few keyposes to 3D and interpolating between them. Actions are inferred by sampling the models and accumulating the feature weights learned discriminatively using a latent state Perceptron algorithm. We show results comparable to the state of the art on the standard Weizmann dataset, with a much smaller train:test ratio, and also on datasets for visual gesture recognition and cluttered grocery store environments.
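The core idea of lifting a few annotated keyposes and interpolating between them can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear interpolation scheme, the joint representation as (x, y, z) tuples, and the toy keypose values are all assumptions for the sake of the example.

```python
def interpolate_keyposes(keyposes, steps_between=4):
    """Return a dense pose sequence by linearly interpolating between
    consecutive keyposes. Each keypose is a list of (x, y, z) joint
    coordinates; all keyposes must share the same joint count.
    (Illustrative sketch; the paper's actual interpolation may differ.)"""
    trajectory = []
    for a, b in zip(keyposes, keyposes[1:]):
        for s in range(steps_between):
            t = s / steps_between  # interpolation parameter in [0, 1)
            pose = [tuple(pa + t * (pb - pa) for pa, pb in zip(ja, jb))
                    for ja, jb in zip(a, b)]
            trajectory.append(pose)
    trajectory.append(keyposes[-1])  # include the final keypose itself
    return trajectory

# Two toy keyposes for a single joint (e.g. a wrist) in 3D.
start = [(0.0, 0.0, 0.0)]
end = [(1.0, 2.0, 0.0)]
frames = interpolate_keyposes([start, end], steps_between=4)
print(len(frames))    # 5 frames: 4 interpolated + final keypose
print(frames[2][0])   # midpoint joint position (0.5, 1.0, 0.0)
```

With only a handful of keyposes annotated per action, such interpolation yields a dense 3D trajectory against which observed sequences can be compared, which is why the approach needs far fewer training videos than feature-plus-classifier pipelines.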