
Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of the hidden nodes (not just the weights connecting inputs to hidden nodes) need not be tuned. These hidden nodes can be randomly assigned and never updated (i.e. they are random projections with nonlinear transforms), or can be inherited from their ancestors without being changed. In most cases, the output weights of the hidden nodes are learned in a single step, which essentially amounts to learning a linear model. The name "extreme learning machine" (ELM) was given to such models by their main inventor, Guang-Bin Huang.

According to their creators, these models are able to produce good generalization performance and learn thousands of times faster than networks trained using backpropagation. The ELM literature also reports that these models can outperform support vector machines (SVMs) in both classification and regression applications, and that SVMs provide suboptimal solutions by comparison.





History

From 2001 to 2010, ELM research mainly focused on the unified learning framework for "generalized" single-hidden-layer feedforward neural networks (SLFNs), including but not limited to sigmoid networks, RBF networks, threshold networks, trigonometric networks, fuzzy inference systems, Fourier series, Laplacian transform, wavelet networks, etc. One typical achievement of those years was the theoretical proof of the universal approximation and classification capabilities of ELM.

From 2010 to 2015, ELM research extended to the unified learning framework for kernel learning, SVM and a few typical feature learning methods such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF). It is argued that SVM actually provides suboptimal solutions compared to ELM, and that ELM can provide a white-box kernel mapping, implemented by ELM random feature mapping, instead of the black-box kernel used in SVM. PCA and NMF can be considered special cases in which linear hidden nodes are used in ELM.

From 2015 onwards, research on ELM has increasingly focused on its hierarchical implementation. In addition, since 2011 some studies have been cited as concrete, direct biological evidence supporting ELM theories.

In Google Scholar's announcement "Classic Papers: Articles That Have Stood The Test of Time", two ELM papers were listed in its "Top 10 in Artificial Intelligence", ranked second and seventh, respectively.





Algorithms

For an ELM with a single hidden layer, suppose that the output function of the $i$-th hidden node is $h_i(\mathbf{x}) = G(\mathbf{a}_i, b_i, \mathbf{x})$, where $\mathbf{a}_i$ and $b_i$ are the parameters of the $i$-th hidden node. The output function of the ELM for SLFNs with $L$ hidden nodes is:

$f_L(\mathbf{x}) = \sum_{i=1}^{L} \boldsymbol{\beta}_i h_i(\mathbf{x})$, where $\boldsymbol{\beta}_i$ is the output weight of the $i$-th hidden node.

$\mathbf{h}(\mathbf{x}) = [h_1(\mathbf{x}), \ldots, h_L(\mathbf{x})]$ is the hidden-layer output mapping of ELM. Given $N$ training samples, the hidden-layer output matrix $\mathbf{H}$ of ELM is given as:

$\mathbf{H} = \begin{bmatrix} \mathbf{h}(\mathbf{x}_1) \\ \vdots \\ \mathbf{h}(\mathbf{x}_N) \end{bmatrix} = \begin{bmatrix} G(\mathbf{a}_1, b_1, \mathbf{x}_1) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ G(\mathbf{a}_1, b_1, \mathbf{x}_N) & \cdots & G(\mathbf{a}_L, b_L, \mathbf{x}_N) \end{bmatrix}$

and $\mathbf{T}$ is the training data target matrix:

$\mathbf{T} = \begin{bmatrix} \mathbf{t}_1 \\ \vdots \\ \mathbf{t}_N \end{bmatrix}$
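To make these definitions concrete, here is a minimal NumPy sketch (not from the original article) of how $\mathbf{H}$ and $\mathbf{T}$ can be assembled, assuming sigmoid hidden nodes with randomly drawn parameters $(\mathbf{a}_i, b_i)$; the names build_hidden_matrix, A and b are illustrative only:

  import numpy as np

  def build_hidden_matrix(X, A, b):
      # Hidden-layer output matrix H (N x L) for sigmoid nodes
      # G(a_i, b_i, x) = 1 / (1 + exp(-(a_i . x + b_i))).
      # X: (N, d) inputs; A: (d, L) random weights a_i; b: (L,) random biases b_i.
      return 1.0 / (1.0 + np.exp(-(X @ A + b)))

  # Illustrative shapes: N = 100 samples, d = 5 features, L = 20 hidden nodes.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(100, 5))      # training inputs x_1 ... x_N, stacked row-wise
  T = rng.normal(size=(100, 1))      # training targets t_1 ... t_N, stacked row-wise
  A = rng.normal(size=(5, 20))       # randomly assigned hidden-node weights a_i
  b = rng.normal(size=20)            # randomly assigned hidden-node biases b_i
  H = build_hidden_matrix(X, A, b)   # H has shape (N, L) = (100, 20)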

Generally speaking, ELM is a kind of regularized neural network with non-tuned hidden-layer mappings (formed by random hidden nodes, kernels or other implementations); its objective function is:

$\text{Minimize: } \|\boldsymbol{\beta}\|_p^{\sigma_1} + C \|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\|_q^{\sigma_2}$

where $\sigma_1 > 0$, $\sigma_2 > 0$, $p, q = 0, \frac{1}{2}, 1, 2, \cdots, +\infty$.

Different combinations of $\sigma_1$, $\sigma_2$, $p$ and $q$ can be used, resulting in different learning algorithms for regression, classification, sparse coding, compression, feature learning and clustering.
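For example, the common choice $p = q = 2$, $\sigma_1 = \sigma_2 = 2$ yields a ridge-regression-style closed form for the output weights, $\boldsymbol{\beta} = \left(\mathbf{H}^{\mathsf{T}}\mathbf{H} + \frac{\mathbf{I}}{C}\right)^{-1}\mathbf{H}^{\mathsf{T}}\mathbf{T}$. A minimal NumPy sketch of this solve, assuming the matrices H and T built as above (the function name solve_output_weights is illustrative):

  import numpy as np

  def solve_output_weights(H, T, C=1.0):
      # Regularized least-squares output weights for the p = q = 2,
      # sigma_1 = sigma_2 = 2 case: beta = (H^T H + I/C)^{-1} H^T T.
      L = H.shape[1]
      return np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)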

As a special case, the simplest ELM training algorithm learns a model of the form (for single-hidden-layer sigmoid neural networks):

$\hat{\mathbf{Y}} = \mathbf{W}_2 \sigma(\mathbf{W}_1 x)$

where $\mathbf{W}_1$ is the matrix of input-to-hidden-layer weights, $\sigma$ is an activation function, and $\mathbf{W}_2$ is the matrix of hidden-to-output-layer weights. The algorithm proceeds as follows:

  1. Fill $\mathbf{W}_1$ with random values (e.g., Gaussian random noise);
  2. estimate $\mathbf{W}_2$ by least-squares fit to a matrix of response variables $\mathbf{Y}$, computed using the Moore-Penrose pseudoinverse, given a design matrix $\mathbf{X}$:
    $\mathbf{W}_2 = \sigma(\mathbf{W}_1 \mathbf{X})^{+} \mathbf{Y}$
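Putting the two steps together, the following is a minimal, self-contained NumPy sketch of this special case (not the authors' reference implementation; the class name SimpleELM and its methods are illustrative, and the code uses the samples-as-rows convention, so the hidden activations appear as $\sigma(\mathbf{X}\mathbf{W}_1)$ rather than $\sigma(\mathbf{W}_1\mathbf{X})$):

  import numpy as np

  class SimpleELM:
      """Minimal single-hidden-layer ELM with a sigmoid activation."""

      def __init__(self, n_hidden=50, seed=0):
          self.n_hidden = n_hidden
          self.rng = np.random.default_rng(seed)

      def fit(self, X, Y):
          n_features = X.shape[1]
          # Step 1: fill W1 (and the biases) with random values; they are never updated.
          self.W1 = self.rng.normal(size=(n_features, self.n_hidden))
          self.b = self.rng.normal(size=self.n_hidden)
          H = 1.0 / (1.0 + np.exp(-(X @ self.W1 + self.b)))  # hidden-layer outputs
          # Step 2: least-squares fit of the output weights via the pseudoinverse.
          self.W2 = np.linalg.pinv(H) @ Y
          return self

      def predict(self, X):
          H = 1.0 / (1.0 + np.exp(-(X @ self.W1 + self.b)))
          return H @ self.W2

Training therefore reduces to a single linear solve, which is the source of the speed claims quoted above.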



Architectures

In most cases, ELM is used as a single-hidden-layer feedforward network (SLFN), including but not limited to sigmoid networks, RBF networks, threshold networks, fuzzy inference networks, complex neural networks, wavelet networks, Fourier transform, Laplacian transform, etc. Owing to its different learning algorithm implementations for regression, classification, sparse coding, compression, feature learning and clustering, multiple ELMs have been used to form multi-hidden-layer networks, deep learning or hierarchical networks.

A hidden node in ELM is a computational element, which need not be a classical neuron. A hidden node in ELM can be a classical artificial neuron, a basis function, or a subnetwork formed by some hidden nodes.




Theories

Both the universal approximation and the classification capabilities have been proved for ELM in the literature. In particular, Guang-Bin Huang and his team spent almost seven years (2001-2008) on rigorous proofs of ELM's universal approximation capability.

Universal approximation capability

In theory, any nonconstant piecewise continuous function can be used as the activation function in ELM hidden nodes; such an activation function need not be differentiable. If tuning the parameters of hidden nodes could make SLFNs approximate any target function $f(\mathbf{x})$, then the hidden node parameters can be randomly generated according to any continuous probability distribution, and $\lim_{L \to \infty} \left\| \sum_{i=1}^{L} \boldsymbol{\beta}_i h_i(\mathbf{x}) - f(\mathbf{x}) \right\| = 0$ holds with probability one with appropriate output weights $\boldsymbol{\beta}$.

Classification capability

Given any nonconstant piecewise continuous function as the activation function in SLFNs, if tuning the parameters of hidden nodes can make SLFNs approximate any target function $f(\mathbf{x})$, then SLFNs with a random hidden-layer mapping $\mathbf{h}(\mathbf{x})$ can separate arbitrary disjoint regions of any shape.




Neurons

A wide range of nonlinear piecewise continuous functions $G(\mathbf{a}, b, \mathbf{x})$ can be used in the hidden neurons of ELM, for example:

Real domain

Sigmoid function: $G(\mathbf{a}, b, \mathbf{x}) = \frac{1}{1 + \exp(-(\mathbf{a} \cdot \mathbf{x} + b))}$

Fourier function: $G(\mathbf{a}, b, \mathbf{x}) = \sin(\mathbf{a} \cdot \mathbf{x} + b)$

Hard-limit function: $G(\mathbf{a}, b, \mathbf{x}) = \begin{cases} 1, & \text{if } \mathbf{a} \cdot \mathbf{x} - b \geq 0 \\ 0, & \text{otherwise} \end{cases}$

Gaussian function: $G(\mathbf{a}, b, \mathbf{x}) = \exp(-b \|\mathbf{x} - \mathbf{a}\|^2)$

Multiquadric function: $G(\mathbf{a}, b, \mathbf{x}) = (\|\mathbf{x} - \mathbf{a}\|^2 + b^2)^{1/2}$

Wavelet: $G(\mathbf{a}, b, \mathbf{x}) = \|a\|^{-1/2} \Psi\left(\frac{\mathbf{x} - \mathbf{a}}{b}\right)$, where $\Psi$ is a single mother wavelet function.
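As an illustration (not part of the original article), the real-domain node functions above map directly onto a few NumPy one-liners; each takes a weight vector a, a scalar b and an input vector x:

  import numpy as np

  def sigmoid_node(a, b, x):
      return 1.0 / (1.0 + np.exp(-(np.dot(a, x) + b)))

  def fourier_node(a, b, x):
      return np.sin(np.dot(a, x) + b)

  def hardlimit_node(a, b, x):
      return 1.0 if np.dot(a, x) - b >= 0 else 0.0

  def gaussian_node(a, b, x):
      return np.exp(-b * np.sum((x - a) ** 2))

  def multiquadric_node(a, b, x):
      return np.sqrt(np.sum((x - a) ** 2) + b ** 2)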

Complex domain

Circular functions:

$\tan(z) = \frac{e^{iz} - e^{-iz}}{i(e^{iz} + e^{-iz})}$

$\sin(z) = \frac{e^{iz} - e^{-iz}}{2i}$

Inverse circular functions:

$\arctan(z) = \int_0^z \frac{dt}{1 + t^2}$

$\arccos(z) = \int_0^z \frac{dt}{(1 - t^2)^{1/2}}$

Hyperbolic functions:

$\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$

$\sinh(z) = \frac{e^z - e^{-z}}{2}$

Inverse hyperbolic functions:

$\text{arctanh}(z) = \int_0^z \frac{dt}{1 - t^2}$

$\text{arcsinh}(z) = \int_0^z \frac{dt}{(1 + t^2)^{1/2}}$




Reliability

The black-box character of neural networks in general, and of extreme learning machines (ELM) in particular, is one of the major concerns that deters engineers from applying them in safety-critical automation tasks. This issue has been approached with several different techniques. One approach is to reduce the dependence on the random input. Another approach focuses on the incorporation of continuous constraints into the learning process of ELMs, derived from prior knowledge about the specific task. This is reasonable, because machine learning solutions have to guarantee safe operation in many application domains. The cited studies revealed that the special form of ELM, with its functional separation and its linear read-out weights, is particularly well suited for the efficient incorporation of continuous constraints in predefined regions of the input space.




Controversy

The purported invention of the ELM provoked debate in 2008 and again in 2015. In particular, it was pointed out in a letter to the editor of IEEE Transactions on Neural Networks that the idea of using a hidden layer connected to the inputs by random untrained weights had already been suggested in the original papers on RBF networks in the late 1980s; Guang-Bin Huang replied by pointing out subtle differences. In a 2015 paper, Huang responded to complaints about his invention of the name ELM for already-existing methods, complaining of "very negative and unhelpful comments on ELM in neither academic nor professional manner due to various reasons and intentions" and an "irresponsible anonymous attack which intends to destroy harmony research environment", arguing that his work "provides a unifying learning platform" for various types of neural nets, including hierarchically structured ELM. In 2015, Huang also gave a formal rebuttal to what he considered "malign" attacks. Since 2016, the debate has largely quieted down in public. Recent research replaces the random weights with constrained random weights.




Open-source libraries

  • Matlab Library
  • Python Library



See also

  • Reservoir computing
  • Random projection
  • Random matrix


