Abstract
In a multilevel neural network, each neuron produces a multi-bit output. The total network size can therefore be significantly smaller than that of a conventional network, a reduction that is highly desirable in large-scale applications. The procedure for applying hardware annealing, in which the neuron gain is continuously increased from a low value to a sufficiently high value so that the network reaches the globally optimal solution, is described. Several simulation results are also presented. Because hardware annealing is applied to all neurons in parallel, it is much faster than the simulated annealing method on digital computers.
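To make the gain-ramping idea concrete, here is a minimal mean-field sketch in Python, not the analog circuit described in the paper: Hopfield-style neurons with output v_i = tanh(g·u_i) relax under gradient dynamics while the gain g is swept from a low to a high value, so the network settles into the deepest energy minimum rather than a nearby local one. The weights, biases, and gain schedule below are illustrative assumptions, not values from the paper.

```python
import math
import random

# Hypothetical 3-neuron problem: mutual inhibition plus unequal biases.
# Energy: E = -1/2 * sum_ij W[i][j] v_i v_j - sum_i b[i] v_i
# This landscape has several local minima; the global one is s = (+1, +1, -1).
W = [[ 0.0, -1.0, -1.0],
     [-1.0,  0.0, -1.0],
     [-1.0, -1.0,  0.0]]
b = [0.5, 0.3, 0.1]

def anneal(W, b, g_low=0.1, g_high=10.0, steps=300, dt=0.05, seed=0):
    """Ramp the neuron gain from g_low to g_high while the internal
    states u relax under du/dt = W v + b - u, with v = tanh(g u)."""
    rng = random.Random(seed)
    n = len(b)
    u = [rng.uniform(-0.1, 0.1) for _ in range(n)]  # small random start
    for k in range(steps):
        g = g_low + (g_high - g_low) * k / (steps - 1)  # linear gain ramp
        v = [math.tanh(g * ui) for ui in u]
        for i in range(n):
            du = sum(W[i][j] * v[j] for j in range(n)) + b[i] - u[i]
            u[i] += dt * du
    # At high gain the outputs saturate toward +/-1 (a binary decision).
    return [math.tanh(g_high * ui) for ui in u]

v = anneal(W, b)
decision = [1 if x > 0 else -1 for x in v]
```

At low gain the transfer function is nearly linear and the energy surface is smooth with a single minimum; as the gain rises, the surface sharpens and the state is carried into the global basin, which is the essence of annealing the gain in hardware rather than annealing a temperature in software.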
Original language | English |
---|---|
Pages (from-to) | 46-49 |
Number of pages | 4 |
Journal | IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing |
Volume | 42 |
Issue number | 1 |
DOIs | |
State | Published - Jan 1995 |
Externally published | Yes |