Artificial neural networks

Miquel Noguer i Alonso, Daniel Bloch and David Pacheco Aznar

7.1 PRESENTING RECURRENT NEURAL NETWORKS

7.1.1 An overview

Recurrent neural networks (RNNs) are an extension of feedforward neural networks (FNNs) in which hidden units are allowed to connect to one another across a time delay (Elman 1990). RNNs can therefore retain information about the past and be used to discover temporal correlations between events that lie deep in the past. Whereas the connections between units in an FNN form no cycles, an RNN has feedback connections, meaning its nodes are connected cyclically. These cycles allow an RNN to develop self-sustained temporal activation dynamics along its recurrent connection pathways, even in the absence of input, making it a dynamical system.
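
To make the feedback connection concrete, here is a minimal sketch in Python/NumPy of an Elman-style recurrence, h_t = tanh(W_x x_t + W_h h_{t-1} + b); the dimensions and variable names are illustrative choices of ours, not taken from the text.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5  # illustrative sizes, not from the text

W_x = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input-to-hidden weights
W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent (feedback) weights
b = np.zeros(n_hidden)

def step(x_t, h_prev):
    # One recurrent update: the new state depends on the current input x_t
    # and, through W_h, on the previous state h_prev
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Feed a single input, then silence: the state keeps evolving through the
# feedback weights W_h, illustrating the self-sustained dynamics noted above.
h = step(rng.normal(size=n_in), np.zeros(n_hidden))
for _ in range(4):
    h = step(np.zeros(n_in), h)
print(h)  # a non-zero state persists despite zero input

Note that without the W_h term the state would reset at every step, reducing the model to a memoryless FNN.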

RNNs can be visualised by unfolding them along the whole input sequence; in the case of time series, the network is unfolded through time. The unfolded graph contains no cycles (as in an FNN), so the forward and backward passes of a multilayer perceptron (MLP) can be applied, the backward pass over the unfolded graph being known as backpropagation through time. An MLP can approximate non-linear functions, and since an RNN unfolds into a feedforward network whose depth grows with the length of the sequence while its weights are shared across time steps, the RNN inherits this approximation power and extends it to temporal structure, giving it an advantage over fixed-depth MLP models.
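
As a sketch of this unfolding (again with toy sizes of our own choosing, not the authors' setup), the loop below applies the same recurrence once per time step; the resulting acyclic graph is a T-layer feedforward network whose layers all share the same weights, and differentiating through it is backpropagation through time.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, T = 3, 5, 6  # illustrative input size, state size, sequence length

W_x = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

xs = rng.normal(size=(T, n_in))  # a toy input sequence
hs = [np.zeros(n_hidden)]        # initial state h_0
for x_t in xs:                   # one unfolded "layer" per time step
    hs.append(np.tanh(W_x @ x_t + W_h @ hs[-1] + b))

# The unrolled graph has T layers but only one set of parameters
# (W_x, W_h, b), shared across every time step.
print(len(hs) - 1, "unfolded layers sharing the same weights")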

There are two main classes
