mxfusion.inference.minibatch_loop
Members
class mxfusion.inference.minibatch_loop.MinibatchInferenceLoop(batch_size=100, rv_scaling=None)

Bases: mxfusion.inference.grad_loop.GradLoop
The class for the main loop of minibatch gradient-based optimization. batch_size specifies the size of the mini-batches used in inference. rv_scaling re-scales the log-likelihood over data to compensate for sub-sampling of the training set; the scaling should be applied to all observed and hidden random variables affected by data sub-sampling. The scaling factor is the size of the training set divided by the batch size.
Parameters: - batch_size (int) – the size of minibatch for optimization
- rv_scaling ({Variable: scaling factor}) – the scaling factor of random variables
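As described above, the scaling factor is the training-set size divided by the batch size. A minimal sketch of that computation (the helper name here is illustrative, not part of the MXFusion API):

```python
def rv_scaling_factor(n_train, batch_size):
    """Ratio of the full training-set size to the mini-batch size.

    With this scaling, the scaled mini-batch log-likelihood is an
    unbiased estimate of the full-data log-likelihood.
    """
    return n_train / batch_size

# Example: 10,000 training points with mini-batches of 100.
scaling = rv_scaling_factor(10000, 100)  # -> 100.0
```

This factor would be supplied, per affected random variable, in the rv_scaling dictionary passed to the constructor.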
run(infr_executor, data, param_dict, ctx, optimizer='adam', learning_rate=0.001, max_iter=1000, verbose=False, update_shape_constants=None)

Parameters: - infr_executor (MXNet Gluon Block) – the MXNet function that computes the training objective
- data ([mxnet.ndarray]) – a list of observed variables
- param_dict (mxnet.gluon.ParameterDict) – The MXNet ParameterDict for Gradient-based optimization
- ctx (mxnet.cpu or mxnet.gpu) – MXNet context
- optimizer (str) – the choice of optimizer (default: ‘adam’)
- learning_rate (float) – the learning rate of the gradient optimizer (default: 0.001)
- max_iter (int) – the maximum number of iterations of gradient optimization
- verbose (boolean) – whether to print per-iteration messages.
- update_shape_constants (Python function) – a callback function that updates the shape constants according to the size of the current minibatch
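The loop structure that run implements — iterating over mini-batches, computing a re-scaled objective, and taking a gradient step — can be illustrated with a toy pure-Python example. This is not MXFusion code; it assumes a simple squared-error objective in place of the inference objective, and all names are illustrative:

```python
import random

def minibatch_sgd(data, batch_size, learning_rate=0.001, max_iter=1000):
    """Toy mini-batch SGD: fit the mean of `data` by minimizing a
    re-scaled squared-error objective, mirroring the loop structure
    of gradient-based minibatch inference."""
    n = len(data)
    scaling = n / batch_size  # re-scale the mini-batch objective to the full data
    theta = 0.0
    for _ in range(max_iter):
        batch = random.sample(data, batch_size)
        # Gradient of 0.5 * scaling * sum((x - theta)^2) w.r.t. theta,
        # normalized by n to keep the step size comparable to full-batch.
        grad = -scaling * sum(x - theta for x in batch) / n
        theta -= learning_rate * grad
    return theta

random.seed(0)
data = [2.0] * 1000
theta = minibatch_sgd(data, batch_size=100, learning_rate=0.1, max_iter=200)
# theta converges to the data mean, 2.0
```

In the actual run method, the scaled objective is computed by infr_executor and the parameter updates are delegated to an MXNet optimizer (e.g. 'adam') over param_dict.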