Human object articulation for CCTV video forensics

I Zafar, M Fraz, EA Edirisinghe - Video Surveillance and Transportation Imaging Applications, 2013 - spiedigitallibrary.org
In this paper we present a system focused on developing algorithms for the automatic annotation/articulation of humans passing through a surveillance camera view, in a form useful to a crime scene witness describing a person or criminal. Each human is articulated/annotated based on two appearance features: (1) the primary colors of the clothing in the head, body and legs regions, and (2) the presence of text or a logo on the clothing. Annotation occurs after robust foreground extraction based on a modified Gaussian Mixture Model approach and detection of humans from the segmented foreground images. The proposed pipeline begins with a preprocessing stage in which we improve the color quality of the images using a basic color constancy algorithm and further improve the results with a proposed post-processing method; the results show a significant improvement in the illumination of the video frames. To annotate the color of human clothing, we apply 3D histogram analysis (over Hue, Saturation and Value) to HSV-converted image regions of the body parts, together with extrema detection and thresholding, to decide the dominant color of each region. To detect text/logos on clothing as a further articulation feature, we first extract connected components of enhanced horizontal, vertical and diagonal edges in the frames. These candidate regions are then classified as text or non-text on the basis of their Local Energy based Shape Histogram (LESH) features, with KL divergence as the classification criterion. To detect humans, a novel technique is proposed that combines Histogram of Oriented Gradients (HOG) with Contourlet-transform-based Local Binary Patterns (LBP), using AdaBoost as the classifier. Initial screening of foreground objects is performed using HOG features. To further eliminate false positives caused by background noise and improve the results, we apply Contourlet-LBP feature extraction to the images: we extract LBP descriptors from the Contourlet-transformed high-pass sub-images of the vertical and diagonal directional bands, and in the final stage the extracted Contourlet-LBP descriptors are passed to AdaBoost for classification. The proposed framework showed reasonably good performance when tested on a CCTV test dataset.
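Although the paper uses a modified Gaussian Mixture Model, a minimal foreground-extraction sketch can be built around OpenCV's stock MOG2 background subtractor; the video file name, history length, variance threshold and morphological clean-up below are illustrative assumptions, not the paper's settings.

```python
# Sketch of GMM-based foreground extraction using OpenCV's stock MOG2
# background subtractor as a stand-in for the paper's modified GMM.
# The file name "cctv.avi" and all parameter values are illustrative only.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture("cctv.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)
    # Suppress shadow pixels (marked as 127 by MOG2) and remove speckle noise
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    # fg_mask now holds candidate foreground blobs for the human detector
cap.release()
```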
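A minimal sketch of the dominant-color step, assuming a 3D Hue/Saturation/Value histogram over a cropped body region; the bin counts, the argmax-based extremum detection and the coarse color-naming thresholds are assumptions for illustration, not the values reported in the paper.

```python
# Dominant-colour annotation for one body region (head, torso or legs)
# via a 3D H/S/V histogram; bin counts and colour names are assumptions.
import cv2
import numpy as np

def dominant_hsv(region_bgr, bins=(18, 8, 8)):
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    # Extremum detection: take the most populated (H, S, V) bin
    h_bin, s_bin, v_bin = np.unravel_index(np.argmax(hist), hist.shape)
    # Map bin centres back to HSV values (OpenCV hue range is 0..180)
    h = (h_bin + 0.5) * 180 / bins[0]
    s = (s_bin + 0.5) * 256 / bins[1]
    v = (v_bin + 0.5) * 256 / bins[2]
    return h, s, v

def name_colour(h, s, v):
    # Very coarse naming: threshold value/saturation first, then quantise hue.
    if v < 50:
        return "black"
    if s < 40:
        return "white" if v > 200 else "grey"
    names = ["red", "orange", "yellow", "green", "cyan", "blue", "purple", "red"]
    return names[int(h // 23) % len(names)]
```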
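The text/logo stage is sketched below only up to candidate extraction: connected components of an edge map stand in for the paper's enhanced horizontal, vertical and diagonal edges, and the Sobel operator, Otsu threshold and size filter are assumptions; the LESH-feature and KL-divergence classification itself is not reproduced.

```python
# Text/logo candidate extraction: connected components of an edge map.
# Edge operator, threshold and size limits are illustrative assumptions.
import cv2
import numpy as np

def text_candidates(region_bgr):
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    # Combine horizontal and vertical gradient responses into one edge map
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        if 20 < area < 5000:  # illustrative size filter for text-like blobs
            boxes.append((x, y, w, h))
    return boxes  # candidates to be classified as text / non-text downstream
```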
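A rough sketch of the two-stage detector, assuming scikit-image HOG and uniform LBP with scikit-learn's AdaBoost; since no standard Contourlet implementation is used here, a PyWavelets detail decomposition stands in for the Contourlet high-pass vertical and diagonal bands, and all window sizes and parameters are assumptions rather than the paper's configuration.

```python
# Two-stage human description: HOG features for initial screening, then LBP
# histograms over high-pass detail sub-bands fed to AdaBoost. A wavelet
# decomposition approximates the Contourlet directional bands.
import numpy as np
import pywt
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

def hog_descriptor(window_gray):
    # Assumed 64x128 grayscale window with a standard HOG cell/block layout
    return hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def highpass_lbp_descriptor(window_gray, P=8, R=1):
    # Vertical (cV) and diagonal (cD) detail sub-bands approximate the
    # directional high-pass bands of the Contourlet transform.
    _, (cH, cV, cD) = pywt.dwt2(window_gray.astype(float), "db2")
    feats = []
    for band in (cV, cD):
        lbp = local_binary_pattern(band, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Hypothetical training call, assuming `windows` (grayscale crops) and
# `labels` (1 = human, 0 = non-human) are available:
# clf = AdaBoostClassifier(n_estimators=200)
# clf.fit(np.stack([highpass_lbp_descriptor(w) for w in windows]), labels)
```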