Backpropagation neural networks can be used to learn a mapping f from R^n to R^m. A set of examples, the training set, is available for this purpose. Learning proceeds by presenting elements of the training set to the network and modifying the weights according to the error at the output. These elements are usually selected according to a fixed probability density function. This paper presents a new method that assumes a uniform probability density function over the examples but takes into account the varying difficulty of learning each output component. The new method yields smaller errors in almost all cases.
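The abstract does not specify how the per-component difficulty is measured or applied, so the following is only a minimal sketch of one plausible interpretation: examples are drawn uniformly, while a running estimate of each output component's squared error (a hypothetical difficulty measure, not the paper's definition) is used to reweight the backpropagated error. All names and the difficulty formula are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mapping f: R^2 -> R^2; the second output is deliberately harder to learn.
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.stack([X[:, 0] + X[:, 1],                     # easy component
              np.sin(3 * X[:, 0]) * X[:, 1]], axis=1)  # hard component

# Single-hidden-layer network trained by plain backpropagation (SGD).
n_in, n_hid, n_out = 2, 16, 2
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse0 = np.mean((pred0 - Y) ** 2, axis=0)  # error before training

# Hypothetical difficulty estimate: exponential moving average of the
# squared error of each output component.
difficulty = np.ones(n_out)
lr, beta = 0.05, 0.99

for epoch in range(200):
    for i in rng.permutation(len(X)):   # uniform pdf over the training set
        h, y_hat = forward(X[i])
        err = y_hat - Y[i]
        difficulty = beta * difficulty + (1 - beta) * err ** 2
        # Scale each component's error by its relative difficulty,
        # so harder components receive proportionally larger updates.
        w = difficulty / difficulty.sum() * n_out
        delta2 = w * err
        delta1 = (delta2 @ W2.T) * (1 - h ** 2)
        W2 -= lr * np.outer(h, delta2); b2 -= lr * delta2
        W1 -= lr * np.outer(X[i], delta1); b1 -= lr * delta1

_, pred = forward(X)
mse = np.mean((pred - Y) ** 2, axis=0)  # per-component error after training
print(mse0, mse)
```

The key design choice this sketch illustrates is that example selection stays uniform; only the error signal is reweighted, which is one way to "take into account the varying difficulty" of each component without changing the sampling distribution.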