FAQ for MXFusion APIs
Zhenwen Dai (2019-05-30)
1. How to access a variable or a factor in my model?
There are a few ways to access a variable or a factor in a model:
- If a variable is named, such as m.x = Variable(), then x is the name of the variable, and the variable can be accessed later by calling m.x.
- A factor can also be named in the same way as a variable, e.g., m.f = MXFusionGluonFunction(func, 1), which names the wrapper of the MXNet function func as f. This function can be accessed by calling m.f.
- If a variable is a random variable following a distribution or the output of a function, e.g., m.x = Normal.define_variable(mx.nd.array([0]), mx.nd.array([1]), shape=(1,)), the distribution or the function can be accessed by calling m.x.factor.
2. How does a Posterior instance link to my model?
When performing stochastic variational inference, we often need to specify the variational posterior by hand. This can be done by creating a Posterior instance from our model definition m, e.g., q = Posterior(m). After the creation of the Posterior instance, all the variables defined in the model also exist in the posterior under the same names. For example, if a variable m.x is defined in the model, the same variable can be accessed via q.x in the posterior. A variational posterior is often constructed by defining the posterior distributions of all the latent variables. For example, we can specify a variational posterior for the variable x by calling q.x.assign_factor(Normal(mx.nd.array([0]), mx.nd.array([1]))).
3. How to access the parameters after inference?
Inference in MXFusion is done by creating an Inference object, which takes an inference algorithm as its input argument. After the execution of the inference algorithm, all estimated parameters are stored in an InferenceParameters object. If we have an Inference instance infr, the InferenceParameters object can be accessed as infr.params. The individual parameters of the model and posterior can be obtained by passing in the references of the corresponding variables, e.g., infr.params[m.x] returns the estimated value of the parameter x in the model m.
4. How to serialize the inference results?
Serialization can be done conveniently in MXFusion by calling the save method of an Inference instance, which takes a filename as its input argument. An example is shown below:
m = Model()
...
infr = ...
infr.save('inference_file.zip')
To load back the inference results of a model, one needs to recreate the model, the posterior and the corresponding inference instance with exactly the same configurations. Then, the estimated parameters can be loaded by calling the load method of the Inference instance. See the example below:
m = Model()
...
infr = ...
infr.load('inference_file.zip')
5. How to run the computation in single/double (float32/float64) precision?
When creating random variables from probability distributions and when creating the Inference instance, the argument dtype specifies the precision of the corresponding objects. At the moment, only single and double precision are supported, taking the value 'float32' or 'float64'.
Alternatively, the computation precision can be set globally by changing the default precision type:
from mxfusion.common import config
config.DEFAULT_DTYPE = 'float64'
6. How to run the computation on GPU?
When creating random variables from probability distributions and when creating the Inference instance, the argument ctx or context specifies the device on which the variables are stored. One can pass in an MXNet device reference such as mxnet.gpu() to run the computation on a GPU.
Alternatively, the computational device can also be set globally by changing the default device of MXNet:
import mxnet as mx
mx.context.Context.device_ctx = mx.gpu()
7. How to view TensorBoard logs?
To use TensorBoard to inspect inference logs, you must have TensorBoard and MXBoard installed; see their respective installation instructions.
To produce the logs required for TensorBoard, pass a Logger with a log_dir (and an optional log_name) to your inference object instantiation:
infr = Inference(logger=Logger(log_dir='logs'))
To run the TensorBoard server and view the results, run the following command (see the TensorBoard documentation for more details):
$ tensorboard --logdir=path/to/log-directory
Now you can open the server in a browser and view the logs.