Deep belief network

In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer.

When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors. After this learning step, a DBN can be further trained with supervision to perform classification.

DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs) or autoencoders, where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer, a hidden layer, and connections between but not within the layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set).
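As a concrete illustration of this structure, here is a minimal NumPy sketch of the standard energy function of a binary RBM (the E(v, h) that appears in the Training section below). The names W, a, and b for the weights and biases are assumptions made for this sketch, not notation from the article.

import numpy as np

def rbm_energy(v, h, W, a, b):
    """E(v, h) = -a^T v - b^T h - v^T W h for binary vectors v and h."""
    # There are no visible-visible or hidden-hidden terms, reflecting the
    # bipartite connectivity: connections between, but not within, layers.
    return -(a @ v) - (b @ h) - (v @ W @ h)

A lower value of rbm_energy corresponds to a more "desirable" joint configuration of the two layers.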

Teh's observation that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms. DBNs have been used in a range of real-world applications, including electroencephalography and drug discovery.



Training

The training method for RBMs proposed by Hinton for use with "Product of Experts" models is called contrastive divergence (CD). CD provides an approximation to the maximum-likelihood method that would ideally be applied for learning the weights. In training a single RBM, weight updates are performed by gradient ascent on the log-likelihood via the following equation: $w_{ij}(t+1) = w_{ij}(t) + \eta \frac{\partial \log(p(v))}{\partial w_{ij}}$

where $p(v)$ is the probability of a visible vector, given by $p(v) = \frac{1}{Z} \sum_h e^{-E(v,h)}$. Here $Z$ is the partition function (used for normalization) and $E(v,h)$ is the energy function assigned to the state of the network. A lower energy indicates that the network is in a more "desirable" configuration. The gradient $\frac{\partial \log(p(v))}{\partial w_{ij}}$ has the simple form $\langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}}$, where $\langle \cdots \rangle_p$ denotes an average with respect to the distribution $p$. The issue arises in sampling $\langle v_i h_j \rangle_{\text{model}}$, because this requires extended alternating Gibbs sampling. CD replaces this step by running alternating Gibbs sampling for $n$ steps (values of $n = 1$ perform well). After $n$ steps, the data are sampled and that sample is used in place of $\langle v_i h_j \rangle_{\text{model}}$. The CD procedure works as follows (a code sketch of these steps is given after the list):

  1. Initialize the visible units to a training vector.
  2. Update the hidden units in parallel given the visible units: $p(h_j = 1 \mid \mathbf{V}) = \sigma(b_j + \sum_i v_i w_{ij})$, where $\sigma$ is the sigmoid function and $b_j$ is the bias of $h_j$.
  3. Update the visible units in parallel given the hidden units: $p(v_i = 1 \mid \mathbf{H}) = \sigma(a_i + \sum_j h_j w_{ij})$, where $a_i$ is the bias of $v_i$. This is called the "reconstruction" step.
  4. Re-update the hidden units in parallel given the reconstructed visible units using the same equation as in step 2.
  5. Perform the weight update: $\Delta w_{ij} \propto \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{reconstruction}}$.
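The following NumPy sketch implements steps 1-5 as a single CD-1 update for a binary RBM. The names cd1_update, W, a, b, and lr are hypothetical, and the bias updates are a standard addition that the steps above do not spell out.

import numpy as np

rng = np.random.default_rng(0)   # fixed seed, for reproducibility of the sketch

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, lr=0.1):
    # Step 1: the visible units are initialized to the training vector v0.
    # Step 2: hidden probabilities given the data, and a binary sample.
    p_h0 = sigmoid(b + v0 @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Step 3: "reconstruction" of the visible units from the hidden sample.
    p_v1 = sigmoid(a + W @ h0)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)

    # Step 4: re-update the hidden units given the reconstructed visible units
    # (probabilities are used here instead of a sample, a common choice that
    # reduces sampling noise in the final statistics).
    p_h1 = sigmoid(b + v1 @ W)

    # Step 5: weight update proportional to <v_i h_j>_data - <v_i h_j>_reconstruction.
    W += lr * (np.outer(v0, h0) - np.outer(v1, p_h1))
    # Bias updates follow the same data-minus-reconstruction pattern.
    a += lr * (v0 - v1)
    b += lr * (h0 - p_h1)
    return W, a, b

Here v0 is one training vector, W has shape (number of visible units, number of hidden units), and a and b are the visible and hidden bias vectors.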

Once an RBM is trained, another RBM is "stacked" atop it, taking its input from the final trained layer. The new visible layer is initialized to a training vector, and values for the units in the already-trained layers are assigned using the current weights and biases. The new RBM is then trained with the procedure above. This whole process is repeated until the desired stopping criterion is met.
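A greedy stacking loop along these lines might look as follows. It reuses the sigmoid and cd1_update helpers from the sketch above; the layer widths, the fixed epoch count standing in for the stopping criterion, and the random placeholder data are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

layer_sizes = [784, 500, 250]                                     # assumed layer widths
data = (rng.random((1000, layer_sizes[0])) < 0.5).astype(float)   # placeholder binary "training set"

inputs = data
rbms = []
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = 0.01 * rng.standard_normal((n_vis, n_hid))  # small random initial weights
    a = np.zeros(n_vis)                             # visible biases
    b = np.zeros(n_hid)                             # hidden biases
    for epoch in range(5):                          # stand-in for the stopping criterion
        for v0 in inputs:
            W, a, b = cd1_update(v0, W, a, b)
    rbms.append((W, a, b))
    # The hidden activations of this RBM become the "visible" data for the next one.
    inputs = sigmoid(b + inputs @ W)

After the stack has been trained in this unsupervised way, the resulting weights can serve as the initialization of a feed-forward network that is then fine-tuned with supervision for classification, as described above.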

Although CD's approximation to maximum likelihood is crude (it does not follow the gradient of any function), it is empirically effective.



See also

  • Bayesian network
  • Deep learning



External links

  • "Deep Belief Networks". Deep Learning Tutorials. 
  • "Deep Belief Network Example". Deeplearning4j Tutorials. 
