BSCGAN: Deep background subtraction with conditional generative adversarial networks
2018 25th IEEE International Conference on Image Processing (ICIP), 2018 (ieeexplore.ieee.org)
This paper proposes a deep background subtraction method based on a conditional Generative Adversarial Network (cGAN). The proposed model consists of two successive networks: a generator and a discriminator. The generator learns the mapping from the observed input (i.e., image and background) to the output (i.e., foreground mask). The discriminator then learns a loss function to train this mapping by comparing the real foreground (i.e., ground truth) against the fake foreground (i.e., predicted output), while also observing the input image and background. Evaluation on two public datasets, CDnet 2014 and BMC, shows that the proposed model outperforms state-of-the-art methods.
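The abstract describes the standard conditional GAN training setup: the discriminator scores (input, mask) pairs as real or fake, and the generator is trained to fool it. A minimal sketch of these adversarial objectives, assuming the usual binary cross-entropy formulation (the abstract does not give the paper's exact loss weighting, and the function names here are illustrative, not from the paper):

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy, averaged over all elements."""
    eps = 1e-7  # avoid log(0)
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    """D sees (image, background, mask) triples and should score the
    ground-truth mask as real (1) and the generated mask as fake (0)."""
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake):
    """G is rewarded when D scores its predicted mask as real."""
    return bce(d_fake, np.ones_like(d_fake))

# Toy scores D might emit for a batch of real / generated masks:
d_real = np.array([0.9, 0.8])   # confident these are ground truth
d_fake = np.array([0.2, 0.1])   # confident these are generated
print(discriminator_loss(d_real, d_fake))  # low: D is doing well
print(generator_loss(d_fake))              # high: G has not fooled D yet
```

Conditioning on the image and background (rather than scoring the mask alone) is what makes the setup a *conditional* GAN: the discriminator can penalize masks that are plausible in isolation but inconsistent with the observed scene.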