ELM
Extreme learning machines (ELM) are basically FFNNs with random connections. They look very similar to LSMs and ESNs, but they are neither recurrent nor spiking. They also do not use backpropagation. Instead, they start with random, fixed input-to-hidden weights and train only the output weights in a single step using a least-squares fit (minimising the squared error over the training data). This results in a much less expressive network, but training is also much faster than backpropagation.
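As a rough illustration, here is a minimal sketch of that one-step training procedure in Python/NumPy. The helper names (train_elm, predict_elm) and all parameter values are invented for this example; it simply fixes random input weights and solves for the output weights with a single least-squares fit.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Sketch of an ELM: random fixed hidden layer, one-step least-squares readout."""
    rng = np.random.default_rng(seed)
    # Random input-to-hidden weights and biases; these are never trained.
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)  # hidden-layer activations
    # The single training step: least-squares fit of the output weights,
    # minimising the squared error on the training data.
    W_out, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W_in, b, W_out

def predict_elm(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out

# Toy regression: learn y = sin(x) from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
W_in, b, W_out = train_elm(X, y)
print(predict_elm(np.array([[1.0]]), W_in, b, W_out))  # should be close to sin(1)
```

Because the only trained parameters are solved in closed form, there is no iterative optimisation loop at all, which is where the speed advantage over backpropagation comes from.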
Huang, Guang-Bin, et al. “Extreme learning machine: Theory and applications.” Neurocomputing 70.1-3 (2006): 489-501.
Original Paper PDF
ESN
Echo state networks (ESN) are yet another type of (recurrent) network. They set themselves apart from others by having random connections between the neurons (i.e. not organised into neat sets of layers), and they are trained differently. Instead of feeding input and back-propagating the error, we feed in the input, forward-propagate and update the neurons for a while, and observe the output over time. The input and output layers play slightly unconventional roles: the input layer is used to prime the network, and the output layer acts as an observer of the activation patterns that unfold over time. During training, only the connections between the observer and the (soup of) hidden units are changed.
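For concreteness, below is a minimal sketch of this training scheme in Python/NumPy. The function name run_esn and all hyperparameters (reservoir size, spectral radius, washout length, ridge term) are assumptions for illustration: it drives a fixed random reservoir with the input, discards an initial priming phase, and then fits only the readout ("observer") weights, here with a ridge-regularised least-squares fit, a common choice in practice.

```python
import numpy as np

def run_esn(inputs, targets, n_reservoir=200, spectral_radius=0.9,
            washout=50, ridge=1e-6, seed=0):
    """Sketch of an ESN: fixed random reservoir, trained linear readout."""
    rng = np.random.default_rng(seed)
    # Random, fixed connections: input-to-reservoir and reservoir-to-reservoir.
    w_in = rng.uniform(-0.5, 0.5, size=n_reservoir)
    W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
    # Rescale the reservoir so its largest eigenvalue magnitude is below 1,
    # a standard way to encourage the "echo state" property.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

    # Prime the network: feed the input and record the unfolding states.
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(w_in * u + W @ x)
        states.append(x.copy())
    S = np.array(states)[washout:]  # drop the initial transient

    # Train only the observer/readout weights (ridge-regularised least squares).
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_reservoir),
                            S.T @ targets[washout:])
    return W_out, S @ W_out  # readout weights and fitted outputs

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
W_out, y_hat = run_esn(u, y)
print(np.mean((y_hat - y[50:]) ** 2))  # training MSE; should be tiny
```

Note that the recurrent weights W are never touched after initialisation; all the learning happens in the single linear solve for W_out, which is what makes ESN training so cheap compared with backpropagation through time.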
Jaeger, Herbert, and Harald Haas. “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication.” Science 304.5667 (2004): 78-80.
Original Paper PDF