
Neural gas

Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten. The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was coined 'neural gas' because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example in speech recognition, image processing or pattern recognition. As a robustly converging alternative to k-means clustering, it is also used for cluster analysis.

Suppose we are given a probability distribution P(x) of data vectors x and a finite number of feature vectors w_i, i = 1, …, N. At each time step t, a data vector x randomly drawn from P(x) is presented. Subsequently, the distance order of the feature vectors with respect to the given data vector x is determined. Let i_0 denote the index of the closest feature vector, i_1 the index of the second-closest feature vector, and i_{N−1} the index of the feature vector most distant from x. Then each feature vector is adapted according to

    w_{i_k} ← w_{i_k} + ε · e^{−k/λ} · (x − w_{i_k}),    k = 0, …, N − 1,

with ε as the adaptation step size and λ as the so-called neighborhood range. ε and λ are reduced with increasing t. After sufficiently many adaptation steps the feature vectors cover the data space with minimum representation error.

The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. By adapting not only the closest feature vector but all of them, with a step size that decreases with increasing distance order, a much more robust convergence of the algorithm can be achieved compared to (online) k-means clustering. The neural gas model does not delete nodes and also does not create new nodes.

A number of variants of the neural gas algorithm exist in the literature, aimed at mitigating some of its shortcomings. Perhaps the most notable is Bernd Fritzke's growing neural gas, but further elaborations such as the Growing When Required network and the incremental growing neural gas should also be mentioned. Fritzke describes the growing neural gas (GNG) as an incremental network model that learns topological relations by using a 'Hebb-like learning rule'; unlike the neural gas, it has no parameters that change over time, and it is capable of continuous learning.
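The following is a minimal Python/NumPy sketch of the adaptation loop described above. The exponential decay schedules for ε and λ, and all parameter values, are illustrative assumptions; the formulation above only requires that both quantities decrease with t.

```python
import numpy as np

def neural_gas(data, n_units=20, n_steps=10000,
               eps_start=0.5, eps_end=0.01,
               lam_start=10.0, lam_end=0.1, seed=0):
    """Fit neural gas feature vectors to `data` (shape: n_samples x dim)."""
    rng = np.random.default_rng(seed)
    # Initialize feature vectors on randomly chosen data points.
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)

    for t in range(n_steps):
        # Anneal step size and neighborhood range (exponential decay is one
        # common choice; the schedule itself is an assumption of this sketch).
        frac = t / n_steps
        eps = eps_start * (eps_end / eps_start) ** frac
        lam = lam_start * (lam_end / lam_start) ** frac

        # Present a randomly drawn data vector.
        x = data[rng.integers(len(data))]

        # Rank all feature vectors by distance to x: k = 0 is the closest.
        ranks = np.argsort(np.linalg.norm(w - x, axis=1))

        # Adapt every feature vector, with influence decaying in rank order.
        for k, i in enumerate(ranks):
            w[i] += eps * np.exp(-k / lam) * (x - w[i])

    return w

# Example: quantize points drawn from two Gaussian blobs.
rng = np.random.default_rng(1)
blobs = np.vstack([rng.normal(0, 1, (500, 2)),
                   rng.normal(5, 1, (500, 2))])
codebook = neural_gas(blobs)
print(codebook.shape)  # (20, 2)
```

Note that every unit is updated on every step, which is what distinguishes neural gas from online k-means, where only the single winner would move.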
A network with a growing set of nodes, like the one implemented by the GNG algorithm, was seen as a great advantage. However, the parameter λ was seen as a limitation on learning, since the network can only grow when the iteration count is a multiple of this parameter. The proposal to mitigate this problem was a new algorithm, the Growing When Required network (GWR), which lets the network grow more quickly, adding nodes as soon as it identifies that the existing nodes do not describe the input well enough. Another neural gas variant inspired by the GNG algorithm is the incremental growing neural gas (IGNG). The authors propose that the main advantage of this algorithm is 'learning new data (plasticity) without degrading the previously trained network and forgetting the old input data (stability).'
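To make the growth mechanism concrete, here is a compact sketch of the GNG loop in the same Python/NumPy style. The parameter values are typical choices from the literature, not prescriptions from Fritzke's paper, and the sketch simplifies the full algorithm: isolated units are kept rather than deleted, and no stopping criterion is used.

```python
import numpy as np

def growing_neural_gas(data, max_units=50, n_steps=20000, lam=100,
                       eps_b=0.05, eps_n=0.006, a_max=50,
                       alpha=0.5, d=0.995, seed=0):
    """Simplified sketch of Fritzke's growing neural gas (GNG)."""
    rng = np.random.default_rng(seed)
    # Start with two units placed on random data points; no edges yet.
    w = [data[rng.integers(len(data))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}  # (i, j) with i < j -> edge age

    def neighbors(i):
        out = []
        for (a, b) in edges:
            if a == i:
                out.append(b)
            elif b == i:
                out.append(a)
        return out

    for t in range(1, n_steps + 1):
        x = data[rng.integers(len(data))]

        # Find the two units nearest to x.
        dists = [np.linalg.norm(x - wi) for wi in w]
        s1, s2 = (int(i) for i in np.argsort(dists)[:2])

        # Age edges incident to the winner; accumulate its squared error.
        for e in edges:
            if s1 in e:
                edges[e] += 1
        error[s1] += dists[s1] ** 2

        # Move the winner and its topological neighbors toward x.
        w[s1] += eps_b * (x - w[s1])
        for j in neighbors(s1):
            w[j] += eps_n * (x - w[j])

        # Connect (or refresh) the edge between the two winners ("Hebb-like").
        edges[tuple(sorted((s1, s2)))] = 0

        # Drop edges that are too old.
        edges = {e: age for e, age in edges.items() if age <= a_max}

        # Every lam steps, insert a unit between the highest-error unit
        # and its highest-error neighbor.
        if t % lam == 0 and len(w) < max_units:
            q = int(np.argmax(error))
            nbrs = neighbors(q)
            if nbrs:
                f = max(nbrs, key=lambda j: error[j])
                w.append(0.5 * (w[q] + w[f]))
                r = len(w) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])

        # Globally decay all accumulated errors.
        error = [e_ * d for e_ in error]

    return np.array(w), edges
```

The λ limitation discussed above is visible in the `t % lam == 0` test: no matter how poorly the current units represent an input, a new unit can only appear on those scheduled steps, which is precisely the behavior GWR relaxes by inserting nodes on demand.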

[ "Deep learning", "Recurrent neural network", "Time delay neural network", "neural gas network" ]
Parent Topic
Child Topic
    No Parent Topic
Baidu
map