
Multi variable nodes

Introduction

Typically, a node in a Bayesian network represents a single variable. However, Bayes Server also allows a node to contain more than one variable while retaining the same functionality. We call such a node a Multi Variable Node (MVN). The approach differs from simply using vector-valued variables, and has the following advantages.

  • Graphical simplicity
  • Alternative parameterization/semantics
  • Often leads to increased performance

Importantly, an MVN node can have missing data on some of its variables, and queries can be performed at the granularity of a variable. For example, if a node were made up of the variables {A,B,C}, we could set evidence on A, leaving B and C missing, and then calculate P(B,C). In addition, queries can contain variables spanning multiple nodes, and, if required, evidence can be soft/virtual on one or more variables within an MVN node.

Mixture model

A simple example of a Bayesian network with a multi variable node is the mixture model shown in Figure 1. The multi variable node allows the parameters to be specified in terms of means and covariances, without representing each variable as an individual node and adding the many extra links that complicate the graphical structure, as in the equivalent network shown in Figure 2 which does not use a multi variable node.

We do, however, want to retain the ability to set evidence on some, but not all, of the variables, and, in the case of discrete variables, to set soft/virtual evidence on them.

Note that the results of queries on the two Bayesian networks are identical.


Figure 1 - mixture model with a multi variable node



Figure 2 - mixture model without a multi variable node

Representation

We refer to a node with only a single variable as a Single Variable Node (SVN). A network with no MVN nodes is called an SVN network, whilst a network with at least one MVN node will be called an MVN network.

Multiple discrete variables

Figure 3 shows a simple Bayesian network with the nodes A and B. In this example, node A has a single variable A1 and node B contains two variables B1 and B2 (node A could also have contained more than one variable). As with an SVN network, P(U) = P(A,B) = P(A)P(B|A), where U represents the universe of nodes or variables. This can be written in terms of variables as P(U) = P(A1,B1,B2) = P(A1)P(B1,B2|A1).

An example distribution for node B is shown in Table 1, assuming all discrete variables have the states True and False. Note that, as with SVN networks, the values sum to 1 for each parent combination, however each parent combination now varies over both B1 and B2. In this example sum(P(B1,B2|A1=True)) = P(B1=True,B2=True|A1=True) + P(B1=False,B2=True|A1=True) + P(B1=True,B2=False|A1=True) + P(B1=False,B2=False|A1=True) = 0.2 + 0.3 + 0.4 + 0.1 = 1.


Figure 3 - discrete multi variable node


A1    | B2    | B1=True | B1=False
True  | True  | 0.2     | 0.3
True  | False | 0.4     | 0.1
False | True  | 0.15    | 0.25
False | False | 0.1     | 0.5

Table 1 - P(B1,B2|A1)
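To make the normalization and variable-level queries concrete, here is a minimal Python sketch (using NumPy and only the values in Table 1, not the Bayes Server API) that checks P(B1,B2|A1) sums to 1 for each state of A1, and marginalizes B2 out to obtain P(B1|A1=True).

    import numpy as np

    # P(B1, B2 | A1) from Table 1, indexed as [A1, B2, B1] with state order (True, False)
    p_b_given_a = np.array([
        [[0.20, 0.30],   # A1=True,  B2=True
         [0.40, 0.10]],  # A1=True,  B2=False
        [[0.15, 0.25],   # A1=False, B2=True
         [0.10, 0.50]],  # A1=False, B2=False
    ])

    # Each parent combination (here just A1) sums to 1 over the joint states of (B1, B2)
    print(p_b_given_a.sum(axis=(1, 2)))   # [1. 1.]

    # Marginalizing B2 out of the conditional gives P(B1 | A1=True) = (0.6, 0.4)
    print(p_b_given_a[0].sum(axis=0))     # [0.6 0.4]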

Multiple continuous variables

Figure 4 shows an example of a network containing a node with multiple continuous variables.

info

Note that in Bayes Server continuous variables are identified using brackets ( ).

The probability specification of the network in terms of probabilities and densities is shown in equation 1.

Equation 1 - multi variable node probability density

Table 2 shows an example distribution for node C, assuming that node A has the states True and False.

info

Note that all continuous distributions are Conditional Linear Gaussian distributions, and are specified in terms of Mean, Covariance and Weight (Regression coefficient).


Figure 4 - continuous multi variable node


A1 = True       | C1    | C2
Mean            | 3.5   | 6.1
Covariance (C1) | 2.4   | 0.54
Covariance (C2) | 0.54  | 1.6
Weight (B1)     | 0.25  | 0.34

A1 = False      | C1    | C2
Mean            | -1.5  | 0.2
Covariance (C1) | 3.06  | 0.88
Covariance (C2) | 0.88  | 2.45
Weight (B1)     | -0.54 | 0.23

Table 2 - P(C1,C2|A1, B1)
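As an illustration of how the Mean, Covariance and Weight entries combine, the following sketch evaluates the conditional distribution of (C1, C2) for A1 = True using the values in Table 2. It assumes the common CLG convention in which the conditional mean is the Mean column plus the Weight column multiplied by the continuous parent value; the parent value B1 = 1.2 and the query point are hypothetical, and the snippet uses NumPy/SciPy rather than the Bayes Server API.

    import numpy as np
    from scipy.stats import multivariate_normal

    # Conditional Linear Gaussian for node C given A1=True (values from Table 2)
    mean = np.array([3.5, 6.1])            # Mean of (C1, C2)
    cov = np.array([[2.40, 0.54],
                    [0.54, 1.60]])         # Covariance of (C1, C2)
    weight_b1 = np.array([0.25, 0.34])     # Regression coefficients on the continuous parent B1

    b1 = 1.2                               # hypothetical value for the parent B1
    cond_mean = mean + weight_b1 * b1      # conditional mean of (C1, C2) given A1=True, B1=b1

    # Density of a hypothetical observation (c1, c2) under this conditional Gaussian
    print(cond_mean, multivariate_normal(mean=cond_mean, cov=cov).pdf([3.0, 6.0]))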

Factorization

The use of MVN nodes does not dictate that we must, for example, group all continuous variables together in one node.
That would defeat the purpose of Bayesian networks in providing a compact representation of a joint probability distribution. As shown in figure 5, we can still factorize a network as we see fit.


Figure 5 - Multiple MVN and SVN nodes

Constraints

MVN networks that contain continuous variables extend the rules that apply to SVN networks with continuous variables, namely:

info

Nodes which contain one or more continuous variables cannot have child nodes containing any discrete variables.

Vector-valued variables

Whilst there is some overlap between this approach and the use of vector-valued variables (i.e. variables that accept more than one value), the key differences are listed below.

  • With MVN networks the fundamental unit in a query is a variable, not a node.
  • MVN networks allow an MVN node to have missing values.
  • MVN networks allow an MVN node to have evidence set on some variables, whilst others remain missing.
  • MVN networks allow soft/virtual evidence to be set on one or more variables within an MVN node.
  • A probabilistic query can contain some, but need not include all, variables from an MVN node.
  • A probabilistic query can span variables in multiple MVN nodes.

The following examples show a few potential queries from an MVN network. Note that temporal nodes are shown using the symbols << and >>, and t=0 (for example) denotes a temporal variable at time 0.

  • P(A1,B1|A2=True,B2=True) (from figure 6)
  • P(Obs1(t=0), Obs2(t=1)| Obs2(t=0)=True) (from figure 7)


Figure 6 - Vector-valued variable comparison



Figure 7 - Vector-valued variable comparison (temporal)

This is in contrast to systems supporting vector-valued variables, which typically force evidence to be set on either all or none of the values of each vector-valued variable.

Advantages

In this section we outline the advantages of the MVN framework.

Performance

Increased performance can be achieved during inference (and therefore during parameter learning) using MVN networks.

info

An MVN Bayesian network can often be 10 times faster than its SVN equivalent in Bayes Server.

The performance improvements are mainly due to the following:

  • Reduced elimination costs
  • Fewer distributions to combine

Elimination is the process of marginalizing out variables that are relevant to a query but are neither query variables nor evidence variables.
Algorithms that determine the order in which variables should be eliminated are sensitive to the number of nodes and links in a network. An MVN network has significantly fewer nodes and links than its SVN equivalent, and hence elimination can be far more efficient.

During the elimination process, distributions are combined before each elimination takes place. In an MVN network, many of the distributions have already been combined in the native network format.

The performance benefits of Bayesian networks (or dynamic Bayesian networks) that are tree structures are well known. MVN networks extend the class of networks that can be represented in their initial/native format as trees.

Using MVN networks, we often find that our native representation is already a tree, where the SVN equivalent would not be.

Clearly we could always just use the joint distribution over all variables, but that would defeat the purpose of Bayesian networks. In practice there is a middle ground, whereby a mixture of SVN and MVN nodes is preferred.

Graphical simplicity

To illustrate that MVN networks provide a much simpler and more appealing graphical representation, consider the MVN network in figure 10 and its equivalent SVN counterpart in figure 11.

The SVN network loses the visually appealing structure for which Bayesian networks are popular, whereas the MVN network retains a simple structure that is easy to interpret.


Figure 10 - Hidden Markov model (unrolled)


Figure 11 - Hidden Markov model (unrolled & decomposed)

Alternative parameterization

Consider the Bayesian network shown in figure 1, which is an example of a Gaussian mixture model. It was created from the ubiquitous Iris data set used in machine learning and data mining. The Cluster node contains a single discrete variable, with each state representing an individual cluster or mixture and its associated probability. The Observations node contains 4 continuous variables: Sepal Length, Sepal Width, Petal Length and Petal Width. The distribution associated with the Observations node is a mixture of multivariate Gaussians, where each Gaussian is defined by the mean (position) and covariance matrix of a specific cluster.


Figure 12 - MVN iris Bayesian network

Figure 12 shows two dimensions of the mixture model. Note that the ellipses are rotated, indicating that the model includes non-zero covariance entries. An equivalent SVN Bayesian network is shown in figure 2.

info

Had the model contained zero covariances, the ellipses would not be rotated, and we could have used the Bayesian network shown in figure 13 instead.

Not only is the MVN variant easier to visualize graphically, it also provides an alternative, but equivalent, parameterization.
This can be useful when manually defining parameter values or interpreting learned values.

Tables 3, 4, 5 and 6 detail the MVN distributions, while Tables 7, 8, 9 and 10 detail the SVN equivalents (Cov = Covariance, Sl = Sepal Length, Sw = Sepal Width, Pl = Petal Length, and Pw = Petal Width). As you would expect, the two equivalent parameterizations have the same parameter count, and the distribution for node Cluster is the same for the MVN network and the SVN network.

Notice that the MVN distributions are parameterized in terms of covariances, whereas the SVN network is parameterized in terms of weight (regression) coefficients. Depending on the context, a Bayesian network modeler might prefer to enter weights, or to enter covariances. Covariance parameterization is often preferable when the semantics of a directed link are not clear, but the correlation/covariance is. Put another way, the modeler wants to model an association/link between the variables, due to the correlations shown in figure 12, but the direction of the link is not clear. In the MVN case the modeler is specifying joint probabilities conditional on parent nodes.

In the case of MVN nodes with discrete variables, the modeler would, as in the continuous case, specify the parameters in terms of a joint distribution (conditioned on the node's parents). This is useful when the semantics of a directed link are not clear, or when data has been collected in that form.

Cluster 1 | Cluster 2 | Cluster 3
0.367     | 0.333     | 0.299

Table 3 - Iris MVN and SVN P(Cluster)


        | Sl    | Sw     | Pl    | Pw
Mean    | 6.545 | 2.949  | 5.480 | 1.985
Cov(Sl) | 0.387 | 0.0922 | 0.303 | 0.062
Cov(Sw) |       | 0.110  | 0.084 | 0.056
Cov(Pl) |       |        | 0.328 | 0.0745
Cov(Pw) |       |        |       | 0.086

Table 4 - Iris MVN P(Observations|Cluster 1)


        | Sl    | Sw    | Pl    | Pw
Mean    | 5.006 | 3.418 | 1.464 | 0.244
Cov(Sl) | 0.122 | 0.098 | 0.016 | 0.010
Cov(Sw) |       | 0.142 | 0.011 | 0.011
Cov(Pl) |       |       | 0.030 | 0.006
Cov(Pw) |       |       |       | 0.011

Table 5 - Iris MVN | P(Observations|Cluster 2)


        | Sl    | Sw    | Pl    | Pw
Mean    | 5.915 | 2.778 | 4.202 | 1.297
Cov(Sl) | 0.275 | 0.097 | 0.185 | 0.054
Cov(Sw) |       | 0.093 | 0.091 | 0.043
Cov(Pl) |       |       | 0.201 | 0.061
Cov(Pw) |       |       |       | 0.032

Table 6 - Iris MVN | P(Observations|Cluster 3)
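To show how the mixture parameterization above is used, the sketch below evaluates the marginal density of an observation under the Gaussian mixture, restricted to Sepal Length and Sepal Width for brevity: it sums, over clusters, the cluster probability from Table 3 multiplied by a Gaussian density built from the corresponding mean and covariance entries of Tables 4 to 6. The query point is hypothetical, and the code is a NumPy/SciPy illustration rather than Bayes Server API code.

    import numpy as np
    from scipy.stats import multivariate_normal

    # Mixture weights from Table 3 and the (Sl, Sw) blocks of Tables 4, 5 and 6
    weights = [0.367, 0.333, 0.299]
    means = [np.array([6.545, 2.949]),
             np.array([5.006, 3.418]),
             np.array([5.915, 2.778])]
    covs = [np.array([[0.387, 0.0922], [0.0922, 0.110]]),
            np.array([[0.122, 0.098], [0.098, 0.142]]),
            np.array([[0.275, 0.097], [0.097, 0.093]])]

    x = np.array([5.8, 3.0])   # hypothetical observation (Sepal Length, Sepal Width)

    # p(x) = sum over clusters of P(Cluster) * N(x; mean, covariance)
    density = sum(w * multivariate_normal(mean=m, cov=c).pdf(x)
                  for w, m, c in zip(weights, means, covs))
    print(density)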


     | Cluster 1 | Cluster 2 | Cluster 3
Mean | 6.545     | 5.006     | 5.915
Var  | 0.387     | 0.122     | 0.275

Table 7 - Iris SVN | P(Sl|Cluster)


            | Cluster 1 | Cluster 2 | Cluster 3
Mean        | 1.390     | -0.623    | 0.695
Var         | 0.088     | 0.063     | 0.059
Weight (Sl) | 0.238     | 0.807     | 0.352

Table 8 - Iris SVN | P(Sw|Sl,Cluster)


            | Cluster 1 | Cluster 2 | Cluster 3
Mean        | 0.169     | 0.801     | -0.076
Var         | 0.089     | 0.027     | 0.065
Weight (Sl) | 0.750     | 0.147     | 0.514
Weight (Sw) | 0.137     | -0.021    | 0.45

Table 9 - Iris SVN | P(Pl|Sl,Sw,Cluster)


            | Cluster 1 | Cluster 2 | Cluster 3
Mean        | 0.254     | -0.278    | -0.139
Var         | 0.052     | 0.010     | 0.009
Weight (Sl) | -0.125    | 0.025     | -0.055
Weight (Sw) | 0.436     | 0.0489    | 0.312
Weight (Pl) | 0.231     | 0.157     | 0.213

Table 10 - Iris SVN | P(Pw|Sl,Sw,Pl,Cluster)
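The equivalence between the two parameterizations can also be checked numerically. The sketch below converts a mean vector and full covariance matrix (the MVN form) into the chain of conditional linear Gaussians P(Sl), P(Sw|Sl), P(Pl|Sl,Sw), P(Pw|Sl,Sw,Pl) using standard Gaussian regression formulas. Applied to the Cluster 2 values from Table 5, it approximately reproduces the Cluster 2 columns of Tables 7 to 10; small differences arise because the published values are rounded. Again, this is a NumPy illustration of the mathematics, not Bayes Server API code.

    import numpy as np

    # Cluster 2 mean and covariance from Table 5, variable order (Sl, Sw, Pl, Pw)
    mu = np.array([5.006, 3.418, 1.464, 0.244])
    sigma = np.array([
        [0.122, 0.098, 0.016, 0.010],
        [0.098, 0.142, 0.011, 0.011],
        [0.016, 0.011, 0.030, 0.006],
        [0.010, 0.011, 0.006, 0.011],
    ])

    # Regress each variable on those before it to obtain the SVN (weight) parameterization
    for i in range(len(mu)):
        if i == 0:
            w = np.array([])                                   # Sl has no continuous parents
            intercept, var = mu[0], sigma[0, 0]
        else:
            w = np.linalg.solve(sigma[:i, :i], sigma[:i, i])   # regression weights
            intercept = mu[i] - w @ mu[:i]                     # conditional mean (intercept)
            var = sigma[i, i] - w @ sigma[:i, i]               # residual variance
        print(i, np.round(intercept, 3), np.round(var, 3), np.round(w, 3))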

While the graphical representation may be simplified, it does not impose constraints on how an inference engine represents the model under the hood. For example, an inference engine may wish to decompose a network into its SVN equivalent, although more likely it will further amalgamate variables into a tree structure, if it is not already a tree. MVN networks are much more likely to be tree structures, which leads to performance gains, and in fact a designer may wish to design a network as a tree for just that reason.


Figure 13 - Diagonal iris Mixture model

Evidence

A further advantage of this representation is that we do not lose the identity of individual variables, and can therefore apply evidence to each variable individually within a node that contains multiple variables. In fact, some variables can have missing data, and we can still incorporate soft/virtual evidence on individual variables.
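As a small illustration of partial and soft evidence within a single node, the sketch below applies a virtual-evidence likelihood to B1 in node B (using the A1=True block of Table 1), leaves B2 missing, and computes the resulting posterior over B2. The likelihood values are hypothetical, and the snippet only illustrates the arithmetic rather than the Bayes Server API.

    import numpy as np

    # P(B1, B2 | A1=True) from Table 1, indexed as [B2, B1] with state order (True, False)
    p_b_given_a_true = np.array([[0.2, 0.3],
                                 [0.4, 0.1]])

    # Hypothetical soft/virtual evidence on B1 only: a likelihood over its states
    likelihood_b1 = np.array([0.8, 0.2])

    # Weight the joint by the likelihood on B1, renormalize, then marginalize B1 out
    weighted = p_b_given_a_true * likelihood_b1            # broadcasts over the B1 axis
    posterior_b2 = weighted.sum(axis=1) / weighted.sum()   # P(B2 | A1=True, soft evidence on B1)
    print(posterior_b2)                                    # approximately [0.393, 0.607]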

Decomposition

Bayesian networks with MVN nodes can be fully decomposed into their single variable node equivalents. We do not need to decompose an MVN network to perform inference, however it is useful to examine the equivalent SVN network.

MVN Inference

Inference in MVN Bayesian networks is largely the same as for SVN networks, however there are a number of important differences.

  • We can query a node, a group of nodes, a variable, or multiple variables which may span multiple nodes.
  • Evidence can be applied at the granularity of a variable, so a node may be partially instantiated if some variables have missing data.

Conclusion

This article has outlined the major advantages of MVN networks, which are performance gains, graphical simplicity and an alternative parameterization.
In practice they become an indispensable tool, particularly when building models of multivariate continuous data, using techniques (or extensions of them) such as Hidden Markov Models, Kalman filters and Vector Auto Regressive models.