fynance.models.neural_network.MultiLayerPerceptron.register_backward_hook
MultiLayerPerceptron.register_backward_hook(hook)

Registers a backward hook on the module.
The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> Tensor or None
The grad_input and grad_output may be tuples if the module has multiple inputs or outputs. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations.

Returns:
torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
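A minimal sketch of the hook lifecycle described above. It uses a plain torch.nn.Linear for brevity; the assumption is that MultiLayerPerceptron follows the standard torch.nn.Module hook API, so the same calls apply to it once constructed.

```python
import torch
import torch.nn as nn

calls = []

def hook(module, grad_input, grad_output):
    # grad_input / grad_output are tuples of Tensors (or None entries).
    calls.append(len(grad_output))
    return None  # returning None leaves grad_input unchanged

# Stand-in module; substitute a MultiLayerPerceptron instance in practice.
model = nn.Linear(4, 2)
handle = model.register_backward_hook(hook)

out = model(torch.randn(3, 4))
out.sum().backward()  # the hook fires during this backward pass

handle.remove()  # use the returned handle to detach the hook
print(len(calls))
```

Note that the hook only runs while the handle is registered; after `handle.remove()`, subsequent backward passes no longer invoke it.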
Warning

The current implementation will not have the presented behavior for complex Modules that perform many operations. In some failure cases, grad_input and grad_output will only contain the gradients for a subset of the inputs and outputs. For such Modules, you should use torch.Tensor.register_hook directly on a specific input or output to get the required gradients.
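A sketch of the tensor-level alternative the warning recommends: torch.Tensor.register_hook attaches a hook to one specific tensor, so it receives exactly that tensor's gradient rather than a possibly incomplete module-level tuple. The example values here are illustrative.

```python
import torch

seen = []

def grad_hook(grad):
    # Called with the gradient of the loss with respect to x.
    seen.append(grad.clone())
    return None  # return a Tensor here to replace the gradient instead

x = torch.randn(3, requires_grad=True)
x.register_hook(grad_hook)

(x * 2).sum().backward()
print(seen[0])  # d(sum(2x))/dx, i.e. a tensor of 2.0s
```

Because the hook is bound to `x` itself, it works regardless of how many operations the surrounding module performs.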