UDC 004.822

Mametsaliyev R.R.

Lecturer,

Engineering and Technology University of Turkmenistan named after Oguzhan

(Turkmenistan, Ashgabat)

 

BUILDING A MATHEMATICAL MODEL OF MULTILAYER PERCEPTRON IN NEURAL NETWORK

 

Abstract: a method of adapting the structure of a neural network through selection is proposed; its positive and negative features are considered; a block diagram of the structure-adaptation algorithm is presented, together with the results of a computational experiment confirming the efficiency of the algorithm.

 

Key words: approximation, neural networks, neural network structure.

 

Traditional algorithms for training a neural network (for example, the backpropagation algorithm) search for the best coefficients of the network while its structure is held fixed. With this approach, the problem of choosing an appropriate network structure falls to the modeler and largely determines the success of model building. Moreover, the problem of selecting a network structure is extremely complex in itself.
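For reference, a minimal sketch of such fixed-structure training (in Python with NumPy; the toy task, layer sizes and hyperparameters are illustrative assumptions, not taken from the article):

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task (an assumption made for illustration).
X = rng.uniform(-1.0, 1.0, (256, 1))
Y = np.sin(np.pi * X)

def init_weights(layers):
    # One (inputs + bias) x outputs matrix per layer, e.g. layers = [1, 6, 6, 1].
    return [rng.normal(0.0, 0.5, (m + 1, n)) for m, n in zip(layers, layers[1:])]

def forward(W, x):
    acts = [x]
    for i, w in enumerate(W):
        z = np.hstack([acts[-1], np.ones((len(x), 1))]) @ w
        acts.append(np.tanh(z) if i < len(W) - 1 else z)  # linear output layer
    return acts

def residual(W, x, y):
    # Mean absolute error of the output neurons -- the "residual" of the network.
    return float(np.mean(np.abs(forward(W, x)[-1] - y)))

def train(W, x, y, epochs=500, lr=0.05):
    for _ in range(epochs):
        acts = forward(W, x)
        delta = (acts[-1] - y) / len(x)            # gradient at the linear output
        for i in reversed(range(len(W))):
            a = np.hstack([acts[i], np.ones((len(x), 1))])
            grad = a.T @ delta                     # weight gradient for layer i
            if i > 0:                              # propagate through tanh
                delta = (delta @ W[i][:-1].T) * (1.0 - acts[i] ** 2)
            W[i] -= lr * grad
    return W

W = train(init_weights([1, 6, 6, 1]), X, Y)        # structure is fixed up front
print("residual:", residual(W, X, Y))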

Many methods and algorithms currently exist to simplify the construction of neural networks. All of them fall into two groups: methods and algorithms that grow the network structure, and methods and algorithms that simplify (prune) it. Pruning methods are more formalized, but they require obviously large expenditures of resources to train the initial, oversized network structures. Growth algorithms, as a rule, rely on the empirical observation that a neural network learns better as its structure grows. One such algorithm grows the network by sequentially filling its hidden layers with neurons up to saturation (the state in which adding a new neuron to a given layer practically stops reducing the residual of the trained network), while empirically limiting the number of hidden layers.
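The sequential-filling scheme can be sketched as follows; here train_and_residual is a hypothetical stand-in for actually training a network with the given hidden-layer layout (for example, with the trainer above), and the saturation threshold and layer cap are assumptions:

import numpy as np

rng = np.random.default_rng(1)

def train_and_residual(hidden_sizes):
    # Stand-in for training an MLP with the given hidden layout and returning
    # its residual; a synthetic curve is used so the sketch runs on its own.
    return 1.0 / sum(hidden_sizes) + rng.normal(0.0, 1e-4)

EPS = 1e-3        # saturation threshold (an assumption): a layer is saturated
                  # when one more neuron improves the residual by less than this
MAX_LAYERS = 2    # empirical cap on the number of hidden layers

hidden = []
for _ in range(MAX_LAYERS):
    hidden.append(1)                     # open a new hidden layer
    best = train_and_residual(hidden)
    while True:
        hidden[-1] += 1                  # add one neuron to the current layer
        cand = train_and_residual(hidden)
        if best - cand < EPS:            # the layer has saturated
            hidden[-1] -= 1
            break
        best = cand

print("grown layout:", hidden)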

The algorithm just described does not rigidly fix the number of hidden layers; however, their number can be limited to two inner layers, since the sufficiency of a single inner layer has been proven theoretically, and the optimal number of hidden layers has been found empirically to be two. Note that in some specific examples, networks with a larger number of internal layers can reach satisfactory training results at smaller network sizes, but in the absence of strict restrictions on network size this cannot significantly affect the residual of the trained network. Based on this statement, it is possible to build a genetic algorithm for adapting the structure of a neural network that has a number of advantages:

1) unlike algorithms that sequentially fill the hidden layers of the network, it has no steps at which the network's computing power is lost (this can happen when a new hidden layer containing a single neuron is added: such a neuron becomes the "bottleneck" of the entire network and can leave the trained network with a large residual);

2) the algorithm takes into account that the saturation level of a hidden layer may change as the other hidden layers grow;

3) it can be argued that at each step of growing the structure, the current network has an optimal distribution of neurons over the hidden layers, which allows a wider choice of criteria for stopping the adaptation of the network structure.

The algorithm for adapting the structure of a neural network is built around competing genes, each of which adds a new neuron to a particular hidden layer, and proceeds as follows (a sketch in code is given after the list):

1) set the initial structure of the neural network N;

2) create a copy M of the network N;

3) add a new neuron to the first hidden layer of network N and to the second hidden layer of network M;

4) train networks N and M;

5) if the residual of network M is less than the residual of network N, assign N = M;

6) check the stop criterion; if it is satisfied, finish adapting the network structure, otherwise return to step 2.
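A direct transcription of these steps for a network with two hidden layers might look as follows; train_and_residual again stands in for real training, and the stopping threshold is an assumption:

import numpy as np

rng = np.random.default_rng(2)

def train_and_residual(h1, h2):
    # Stand-in for training a network with hidden layers of sizes h1 and h2
    # and returning its residual; synthetic, so the sketch runs on its own.
    return 1.0 / (h1 * h2) + rng.normal(0.0, 1e-5)

h1, h2 = 1, 1                              # step 1: initial structure N
res = train_and_residual(h1, h2)
EPS = 1e-4                                 # stopping threshold (an assumption)

while True:
    # steps 2-4: copy N into M, add a neuron to layer 1 of N and to
    # layer 2 of M, then train both variants
    res_n = train_and_residual(h1 + 1, h2)
    res_m = train_and_residual(h1, h2 + 1)
    # step 6: stop once growing no longer reduces the residual noticeably
    if res - min(res_n, res_m) < EPS:
        break
    # step 5: the better of the two competing offspring becomes the new N
    if res_m < res_n:
        h2 += 1
        res = res_m
    else:
        h1 += 1
        res = res_n

print("adapted layout:", (h1, h2))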

The described algorithm has some disadvantages compared with the sequential growth algorithm. The first is that the number of hidden layers must be fixed in advance. Note, however, that this number need not be two: the algorithm can grow a network with n hidden layers, at the cost of maintaining n competing genes and training n neural networks in parallel. This leads to the second drawback: the described algorithm adapts the network structure roughly n times slower than sequential growth algorithms.

Thus, having fixed the number of hidden layers in advance, one can build a genetic algorithm for adapting the structure of a neural network that at each iteration yields a network with an optimal distribution of neurons over the hidden layers.

This algorithm can be considered genetic in the broad sense of the term: copying the neural network is the inheritance mechanism, the different modification of each copy is the mutation mechanism, and the subsequent choice of the best network is the selection mechanism. Unlike traditional genetic algorithms, however, there is no need to impose additional restrictions on the size of the neural network beyond those built into the training stop criterion.

If there are no free connections in the network, the "bottleneck" must be found. It is defined as follows: the neuron in the layer being modified that has retained the largest error is copied. The new neuron's connections are identical to those of the copied neuron, but their coefficients are replaced by random values.
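For a fully connected layer, such a duplication amounts to appending a column to the incoming weight matrix and inserting a row into the outgoing one; in the sketch below the matrix shapes and per-neuron errors are invented for illustration:

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weights around the layer being modified: W_in maps the previous
# layer (plus a bias row) into 4 hidden neurons, W_out maps those 4 neurons
# (plus a bias row) into the next layer.
W_in = rng.normal(0.0, 0.5, (5, 4))
W_out = rng.normal(0.0, 0.5, (5, 3))
neuron_err = np.array([0.10, 0.70, 0.20, 0.05])   # error retained by each neuron

worst = int(np.argmax(neuron_err))                # the "bottleneck" neuron

# The new neuron gets the same connection pattern as the worst one, but its
# coefficients are re-drawn at random.
W_in = np.hstack([W_in, rng.normal(0.0, 0.5, (5, 1))])
W_out = np.insert(W_out, worst + 1, rng.normal(0.0, 0.5, 3), axis=0)

print(W_in.shape, W_out.shape)                    # (5, 5) (6, 3)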

As the training criterion for the neural network, the residual of the network is used: the arithmetic mean of the errors of all output neurons over all examples of the test set. The network is considered trained if, at the end of the next epoch, the residual has not decreased. A similar criterion is used to stop growing the network structure.
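In code, the residual and the epoch-level stopping rule could look like this; the predict function and the artificial demo "model" are assumptions made so the sketch runs on its own:

import numpy as np

def network_residual(predict, x_test, y_test):
    # Arithmetic mean of the output-neuron errors over all test examples.
    return float(np.mean(np.abs(predict(x_test) - y_test)))

def train_until_stalled(train_one_epoch, predict, x_test, y_test):
    best = network_residual(predict, x_test, y_test)
    while True:
        train_one_epoch()
        res = network_residual(predict, x_test, y_test)
        if res >= best:      # the epoch did not reduce the residual -- stop
            return best
        best = res

# Artificial demo: an "error" that halves each epoch until it hits a floor.
state = {"err": 1.0}
final = train_until_stalled(
    lambda: state.update(err=max(0.2, state["err"] * 0.5)),
    lambda x: x + state["err"],
    np.zeros(4), np.zeros(4))
print("residual at stop:", final)   # 0.2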

 

