fynance.models.recurrent_neural_network.GatedRecurrentUnit

class fynance.models.recurrent_neural_network.GatedRecurrentUnit(X, y, drop=None, x_type=None, y_type=None, bias=True, forward_activation=<class 'torch.nn.modules.activation.Softmax'>, hidden_activation=<class 'torch.nn.modules.activation.Tanh'>, hidden_state_size=None, reset_activation=<class 'torch.nn.modules.activation.Sigmoid'>, update_activation=<class 'torch.nn.modules.activation.Sigmoid'>)

Gated Recurrent Unit neural network.
Parameters: - X, y : array-like or int
- If array-like: the input and output data, respectively.
- If int: the dimensions of the inputs and outputs, respectively.
- drop : float, optional
Probability of an element to be zeroed.
- forward_activation, hidden_activation : torch.nn.Module, optional
Activation functions; defaults are Softmax and Tanh, respectively.
- hidden_state_size : int, optional
Size of the hidden states; default is the same size as the input.
- reset_activation, update_activation : torch.nn.Module, optional
Activation functions for the reset and update gates; default is Sigmoid for both.
Attributes: - criterion : torch.nn.modules.loss
A loss function.
- optimizer : torch.optim
An optimizer algorithm.
- W_c, W_r, W_u, W_y : torch.nn.Linear
Respectively the recurrent (hidden), reset, update and forward weights.
- f_h, f_r, f_u, f_y : torch.nn.Module
Respectively hidden (recurrent), reset, update and forward activation functions.
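The weights and activations listed above combine in the standard GRU recurrence. A minimal NumPy sketch of one step, not the library's implementation (the weight shapes, the concatenation of input and hidden state, and the update-gate interpolation convention are assumptions for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gru_step(x, h, W_r, W_u, W_c, W_y):
    """One GRU step, mirroring the class's W_r (reset), W_u (update),
    W_c (hidden/recurrent) and W_y (forward) weights and their default
    Sigmoid / Sigmoid / Tanh / Softmax activations."""
    xh = np.concatenate([x, h])
    r = sigmoid(W_r @ xh)                          # reset gate
    u = sigmoid(W_u @ xh)                          # update gate
    c = np.tanh(W_c @ np.concatenate([x, r * h]))  # candidate hidden state
    h_new = u * h + (1.0 - u) * c                  # blend old and candidate state
    y = softmax(W_y @ h_new)                       # forward output
    return h_new, y
```

The reset gate `r` controls how much of the previous hidden state enters the candidate, while the update gate `u` interpolates between the old state and the candidate, which is what lets gradients flow across long sequences.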
Methods
- __call__(*input, **kwargs)
- set_optimizer(criterion, optimizer[, params]) : Set the optimizer object.
- train_on(X, y, H) : Trains the neural network model.
- predict(X, H) : Predicts outputs of neural network model.
- set_data(X, y[, x_type, y_type]) : Set data inputs and outputs.