FAQ for MXFusion APIs

Zhenwen Dai (2019-05-30)

1. How to access a variable or a factor in my model?

There are a few ways to access a variable or a factor in a model (a short sketch follows the list):

  1. If a variable is named, e.g., m.x = Variable(), the attribute name x becomes the name of the variable, and the variable can be accessed later by calling m.x.
  2. A factor can be named in the same way as a variable, e.g., m.f = MXFusionGluonFunction(func, 1), which names the wrapper of the MXNet function func as f. This function can be accessed by calling m.f.
  3. If a variable is a random variable following a distribution or the output of a function, e.g., m.x = Normal.define_variable(mx.nd.array([0]), mx.nd.array([1]), shape=(1,)), the distribution or the function can be accessed by calling m.x.factor.
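
For example, a minimal sketch (the import paths follow the MXFusion documentation; the names m and x are illustrative):

import mxnet as mx
from mxfusion import Model
from mxfusion.components.distributions import Normal

m = Model()
# Assigning to an attribute of the model names the variable 'x'.
m.x = Normal.define_variable(mx.nd.array([0]), mx.nd.array([1]), shape=(1,))

print(m.x)         # the random variable, accessed by its name
print(m.x.factor)  # the Normal distribution that generated it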

2. How to access the parameters after inference?

Inference in MXFusion is done by creating an Inference object, which takes an inference algorithm as its input argument. After the inference algorithm has been executed, all estimated parameters are stored in an InferenceParameters object. Given an Inference instance infr, the InferenceParameters object can be accessed as infr.params. The individual parameters of the model and posterior can be obtained by passing in the references of the corresponding variables, e.g., infr.params[m.x] returns the estimated value of the parameter x in the model m.
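
For example, a minimal sketch (assuming a model m with a latent variable m.x and an observed variable m.y; GradBasedInference and MAP are the gradient-based inference classes from mxfusion.inference, and y_data stands for your observed data):

from mxfusion.inference import GradBasedInference, MAP

# Run a maximum a posteriori (MAP) estimation.
infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.y]))
infr.run(y=y_data, learning_rate=0.1)

# All estimated parameters are now stored in infr.params;
# indexing it with a variable reference returns that variable's value.
x_value = infr.params[m.x]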

3. How to serialize the inference results?

Serialization is done conveniently in MXFusion by calling the save method of an Inference instance, which takes a filename as its input argument. An example is shown below:

m = Model()
...
infr = ...
infr.save('inference_file.zip')

To load back the inference results of a model, one needs to recreate the model and posterior instances and the corresponding Inference instance with exactly the same configuration. Then, the estimated parameters can be loaded by calling the load method of the Inference instance. See the example below:

m = Model()
...
infr = ...
infr.load('inference_file.zip')
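
A fuller sketch of the round trip (the toy model and the MAP inference are illustrative; the point is that the second session rebuilds exactly the same configuration before loading):

import mxnet as mx
from mxfusion import Model
from mxfusion.components.distributions import Normal
from mxfusion.inference import GradBasedInference, MAP

# First session: build the model, run inference and save the results.
m = Model()
m.mu = Normal.define_variable(mx.nd.array([0]), mx.nd.array([100]), shape=(1,))
m.y = Normal.define_variable(m.mu, mx.nd.array([1]), shape=(10,))

infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.y]))
infr.run(y=mx.nd.random.randn(10), learning_rate=0.1)
infr.save('inference_file.zip')

# Later session: recreate the model and the inference with exactly the
# same configuration, then load the estimated parameters back.
m2 = Model()
m2.mu = Normal.define_variable(mx.nd.array([0]), mx.nd.array([100]), shape=(1,))
m2.y = Normal.define_variable(m2.mu, mx.nd.array([1]), shape=(10,))

infr2 = GradBasedInference(inference_algorithm=MAP(model=m2, observed=[m2.y]))
infr2.load('inference_file.zip')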

4. How to run the computation in single/double (float32/float64) precision?

When creating random variables from probabilistic distributions or when creating the Inference instance, the argument dtype specifies the precision of the corresponding objects. At the moment, only single and double precision are supported, taking the value 'float32' or 'float64'.
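
For example (a sketch; m is an existing model and alg an inference algorithm you have constructed):

import mxnet as mx
from mxfusion.components.distributions import Normal
from mxfusion.inference import GradBasedInference

# Create a random variable in double precision ...
m.x = Normal.define_variable(mx.nd.array([0], dtype='float64'),
                             mx.nd.array([1], dtype='float64'),
                             shape=(1,), dtype='float64')

# ... and run the inference in double precision as well.
infr = GradBasedInference(inference_algorithm=alg, dtype='float64')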

Alternatively, the computation precision can be set globally by changing the default precision type:

from mxfusion.common import config
config.DEFAULT_DTYPE = 'float64'

5. How to run the computation on GPU?

When creating random variables from probabilistic distributions or when creating the Inference instance, the argument ctx or context specifies the device on which the variables are stored. One can pass in an MXNet device reference such as mxnet.gpu() to run the computation on a GPU.
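
For example (a sketch; alg stands for an inference algorithm you have constructed):

import mxnet as mx
from mxfusion.inference import GradBasedInference

# Run the inference computation on the first GPU.
infr = GradBasedInference(inference_algorithm=alg, context=mx.gpu())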

Alternatively, the computational device can be set globally by changing the default device of MXNet:

import mxnet as mx
mx.context.Context.device_ctx = mx.gpu()

6. How to view TensorBoard logs?

To use TensorBoard to inspect inference logs, you must have TensorBoard and MXBoard installed. Instructions for installing these packages can be found in their respective documentation.

To produce the logs required by TensorBoard, pass a Logger with a log_dir (and an optional log_name) when instantiating your Inference object:

infr = Inference(logger=Logger(log_dir='logs'))
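
To tell several runs in the same log directory apart, the optional log_name can be passed as well (a sketch; the import path of Logger is an assumption):

from mxfusion.inference.logger import Logger

infr = Inference(logger=Logger(log_dir='logs', log_name='experiment_1'))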

To run the TensorBoard server and view the results, run the following command (see the TensorBoard documentation for more details):

$ tensorboard --logdir=path/to/log-directory

Now you can open the server's address in a browser and view the logs.
