Neural network classification: Maximizing zero-error density

LM Silva, LA Alexandre, JM de Sá - International Conference on Pattern …, 2005 - Springer
We propose a new cost function for neural network classification: the error density at the
origin. This method provides a simple objective function that can be easily plugged into the …
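The snippet does not give the formula, but a common way to realize "error density at the origin" is a Parzen-window (Gaussian kernel) estimate of the training-error density evaluated at zero, which training then maximizes. A minimal sketch under that assumption (function name, kernel choice, and `sigma` are illustrative, not from the paper):

```python
import numpy as np

def zero_error_density(errors, sigma=1.0):
    """Parzen-window estimate of the error density at the origin:
    f(0) = (1 / (N * sigma * sqrt(2*pi))) * sum_i exp(-e_i^2 / (2*sigma^2)).
    Maximizing this pushes all errors e_i toward zero."""
    e = np.asarray(errors, dtype=float)
    n = e.size
    return np.exp(-e**2 / (2 * sigma**2)).sum() / (n * sigma * np.sqrt(2 * np.pi))

# Smaller errors give a higher density at zero, so gradient ascent on
# zero_error_density acts as a classification cost function.
print(zero_error_density([0.1, -0.2, 0.05], sigma=0.5))
```

Because the estimate is a smooth sum of Gaussians, its gradient with respect to the network weights is available in closed form, which is what makes it easy to plug into back-propagation.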

Batch-sequential algorithm for neural networks trained with entropic criteria

JM Santos, JM de Sá, LA Alexandre - International Conference on Artificial …, 2005 - Springer
The use of entropy as a cost function in the neural network learning phase usually implies
that, in the back-propagation algorithm, the training is done in batch mode. Apart from the …
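The batch-mode requirement arises because the usual entropic criterion (e.g. Renyi's quadratic entropy of the errors, estimated via the information potential) involves a pairwise sum over all training errors. A sketch of that estimator, plus a hypothetical subset-wise evaluation in the spirit of a batch-sequential scheme (the splitting strategy here is an assumption, not the paper's algorithm):

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=1.0):
    """Renyi quadratic entropy estimate H = -log V, where the information
    potential V = (1/N^2) * sum_ij G(e_i - e_j; sigma*sqrt(2)) sums a Gaussian
    kernel over ALL error pairs -- the reason training is done in batch mode."""
    e = np.asarray(errors, dtype=float)
    d = e[:, None] - e[None, :]          # all pairwise error differences
    s2 = 2.0 * sigma**2                  # variance of two convolved Gaussians
    v = np.exp(-d**2 / (2 * s2)).sum() / (e.size**2 * np.sqrt(2 * np.pi * s2))
    return -np.log(v)

def batch_sequential_entropy(errors, n_subsets=2, sigma=1.0):
    """Illustrative only: evaluate the criterion on smaller subsets in turn,
    cutting the O(N^2) pairwise cost of a full-batch pass."""
    parts = np.array_split(np.asarray(errors, dtype=float), n_subsets)
    return float(np.mean([renyi_quadratic_entropy(p, sigma) for p in parts]))
```

Concentrated errors yield low entropy, so minimizing `renyi_quadratic_entropy` drives the error distribution toward a spike, analogous to minimizing a conventional loss.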