Neural Networks Theory

Language: English

Pages: 396

ISBN: 3642080065

Format: PDF / Kindle (mobi) / ePub


This book, written by a leader of neural network theory in Russia, applies mathematical methods in combination with complexity theory, nonlinear dynamics, and optimization. It details more than 40 years of Soviet and Russian neural network research and presents a systematized methodology for neural network synthesis. The theory is expansive, covering not only traditional topics such as network architecture but also neural continua in function spaces.

…capacity increases. Increasing the capacity of a computer with the MIMD architecture, whether by raising the capacity of each node or by increasing the number of nodes, is a dead end: the capacity grows more slowly (or significantly more slowly) than the system cost. The reason is that node cost rises faster than node capacity, and also that, as the number of nodes increases, …
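To make the scaling argument concrete, here is a minimal numerical sketch. It assumes Amdahl's law with a fixed serial fraction and a superlinear node-cost model; the serial fraction and cost exponent are illustrative assumptions, not values from the book.

```python
# Sketch: why adding MIMD nodes yields diminishing capacity per unit cost.
# Assumptions (illustrative, not from the book): Amdahl's law with a fixed
# serial fraction, and system cost that grows superlinearly with node count
# because of interconnect and packaging overhead.

def speedup(n_nodes: int, serial_fraction: float = 0.05) -> float:
    """Amdahl's law: achievable speedup on n_nodes parallel nodes."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_nodes)

def system_cost(n_nodes: int, cost_exponent: float = 1.3) -> float:
    """Assumed superlinear cost model: n nodes cost more than n times one node."""
    return n_nodes ** cost_exponent

for n in (1, 4, 16, 64, 256):
    s = speedup(n)
    c = system_cost(n)
    print(f"nodes={n:4d}  speedup={s:6.2f}  cost={c:8.1f}  speedup/cost={s / c:.4f}")
```

With these assumed parameters, speedup per unit cost falls monotonically as nodes are added, which is the dead-end behavior the excerpt describes.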

…have had varying degrees of success (and failure); however, as remarkable advances in brain science continue to be made, brain-style computer technology is becoming increasingly promising. In the earliest days, neural network theories developed independently in America, Europe, Russia, and Japan. While the scientific traditions of these regions differ significantly, Russian science is especially distinctive, due in part to its years of isolation from the Western world. Its researchers were able…

…fault tolerance, in the sense of a monotonic rather than catastrophic decline in problem-solving quality as the number of failing elements grows; the sketch below illustrates this property. The main goal of this section is to explain why a system aimed at solving some specific problem must be designed precisely in the form of a neural network, and how to choose the neural network topology (the number of layers, the number of elements per layer, and the characteristics of the connections). I.10.2 Investigation of…
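As a minimal illustration of monotonic rather than catastrophic degradation, the following sketch builds a random single-hidden-layer network (a hypothetical example, not a topology from the book) with many redundant hidden units, fails an increasing number of them, and measures how far the output drifts from the healthy network's output.

```python
# Sketch of the fault-tolerance property: hidden units fail one group at a
# time, and output quality degrades gradually rather than catastrophically
# because each output averages over many redundant units.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 200
W1 = rng.normal(size=(n_hidden, n_in))
w2 = rng.normal(size=n_hidden) / n_hidden   # averaging adds redundancy

x = rng.normal(size=(1000, n_in))           # probe inputs
h = np.tanh(x @ W1.T)                       # hidden activations
y_ref = h @ w2                              # healthy-network output

for failed in (0, 20, 50, 100, 150):
    mask = np.ones(n_hidden)
    mask[:failed] = 0.0                     # knock out `failed` hidden units
    y = (h * mask) @ w2
    err = np.sqrt(np.mean((y - y_ref) ** 2))
    print(f"failed units: {failed:3d}/{n_hidden}  RMS output change: {err:.4f}")
```

The RMS change grows gradually with the number of failed units instead of jumping, which is the behavior the excerpt calls fault tolerance.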

…for x′g: … In the above expressions, in order to provide the equality …, the minimum criterion for R under the condition p₁r₁ = p₂r₂ is determined in the following way. The estimate of the gradient of R* in (7.17), using adjustable coefficients, is given by (9.6) with A, B and C taken from (7.18). The estimate of the gradient of R* along λ is obtained as an estimate of the first moment of the distribution of the transformed discrete error, according to (7.19) and (7.20), which yields (9.8a). Expressions (9.6), (7.18), (7.20), …
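The passage describes estimating the gradient of a quality functional from data: the derivative with respect to an adjustable parameter λ is approximated by the sample mean (first moment) of a transformed per-sample error. The book's expressions (9.6) and (9.8a) are not reproduced in the excerpt, so the following is only a generic sketch of that idea for a quadratic risk R(λ) = E[(y − λx)²], with all names and values chosen for illustration.

```python
# Generic sketch: the gradient of R(λ) = E[(y - λx)²] equals E[-2x(y - λx)],
# so it can be estimated as the first sample moment of the transformed
# per-sample error g_i = -2 x_i (y_i - λ x_i). Illustrative only; this is
# not the book's expression (9.6) or (9.8a).
import numpy as np

rng = np.random.default_rng(1)
lam_true, lam = 2.0, 0.5                           # optimal and current λ
x = rng.normal(size=5000)
y = lam_true * x + 0.1 * rng.normal(size=5000)     # noisy observations

g = -2.0 * x * (y - lam * x)                       # transformed per-sample error
grad_estimate = g.mean()                           # first moment = gradient estimate

# For comparison: the analytic gradient, ignoring the zero-mean noise term.
grad_analytic = 2.0 * (lam - lam_true) * np.mean(x ** 2)
print(f"sample-moment estimate: {grad_estimate:.3f}")
print(f"analytic gradient:      {grad_analytic:.3f}")
```

A descent step λ ← λ − η·grad_estimate would then move λ toward its optimum; adjustment algorithms of this kind iterate exactly such sample-moment gradient estimates.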

Contents (excerpt):
Literature
Adjustment of Continuum Neural Networks
Adjustment of a Neuron with a Feature Continuum
Adjustment of the Continuum…
