Anomaly detection


Anomaly detection, also known as outlier detection, is the process of identifying data that is unusual.

Anomaly detection can be used to give advance warning of a mechanical component failing (system health monitoring) in order to perform predictive maintenance or improve safety. It can also be used to detect fraudulent transactions or unusual patterns in medical research.

Bayesian networks can be used to build sophisticated anomaly detection models from data. A model can then be used to make predictions on real-time or batch data, by fusing the information probabilistically and outputting a single value (the log-likelihood) which indicates how anomalous a particular record is.
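-
To make the log-likelihood idea concrete, here is a minimal sketch using scikit-learn's GaussianMixture as a stand-in for a learned probabilistic model (not the Bayes Server API); the data, component count and threshold are placeholder assumptions.

    # Illustrative sketch only: GaussianMixture stands in for a learned
    # probabilistic model; the data and threshold are placeholders.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    normal_data = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))    # training records

    model = GaussianMixture(n_components=2, random_state=0).fit(normal_data)

    new_records = np.vstack([rng.normal(size=(5, 3)),               # typical records
                             [[8.0, -7.5, 9.0]]])                   # an obvious outlier

    log_likelihood = model.score_samples(new_records)               # one value per record
    threshold = np.percentile(model.score_samples(normal_data), 1)  # e.g. 1st percentile

    for record, ll in zip(new_records, log_likelihood):
        print(ll, "ANOMALY" if ll < threshold else "ok")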

We can also perform anomaly detection on time series.



Anomaly detection - a system degrading (animation).


Classification


Classification is the process of using a model to predict unknown values (output variables) from a number of known values (input variables).

In order to perform classification, we can build a Bayesian network to model the relationship between the input variables and the output variables we are predicting. This process involves learning a model using data in which both the input variables and the output variables are present. Expert opinion can also be used to build/enhance a model. This model can subsequently be used to make real-time or batch predictions of the outputs on unseen data in which only the input data is present.
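-
As a minimal sketch of this workflow, the example below uses a Naive Bayes classifier (scikit-learn's GaussianNB, itself a very simple Bayesian network) as a stand-in rather than the Bayes Server API; the training data and class labels are made up.

    # Illustrative sketch: GaussianNB (a simple Bayesian network classifier)
    # stands in for a richer Bayesian network; data and labels are placeholders.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    # Training data: inputs X and known outputs y are both present.
    X_train = np.vstack([rng.normal(0, 1, size=(100, 2)),
                         rng.normal(3, 1, size=(100, 2))])
    y_train = np.array([0] * 100 + [1] * 100)

    model = GaussianNB().fit(X_train, y_train)

    # Unseen data: only the inputs are present; the model predicts the outputs.
    X_new = np.array([[0.1, -0.2], [3.2, 2.9]])
    print(model.predict(X_new))          # predicted class per record
    print(model.predict_proba(X_new))    # class probabilities per record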

We can also perform classification with time series models.






Regression


Regression is a term typically used when predicting continuous output variables from continuous input variables. It differs from classification mainly in that the variables involved are continuous, and the mathematics behind the predictions differs accordingly.

Regression can be performed with Bayesian networks, and in fact the interactions between variables can be more complex than those captured by simple regression techniques.
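-
The sketch below illustrates regression as probabilistic inference: it learns a joint Gaussian over an input and an output, then reads off the conditional distribution of the output given an observed input, which is what a simple linear-Gaussian network does. It is an illustration only; the data is synthetic and no Bayes Server API is used.

    # Illustrative sketch: regression as probabilistic inference in a joint
    # Gaussian over (X, Y), equivalent to a simple linear-Gaussian network.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(0, 1, size=5000)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=5000)   # synthetic relationship

    # Learn the joint distribution's parameters from data.
    mean = np.array([x.mean(), y.mean()])
    cov = np.cov(x, y)

    # Condition on an observed input value and read off the output distribution.
    x_observed = 1.5
    cond_mean = mean[1] + cov[1, 0] / cov[0, 0] * (x_observed - mean[0])
    cond_var = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]

    print(f"P(Y | X={x_observed}) ~ Normal(mean={cond_mean:.2f}, var={cond_var:.2f})")
    # Expected: mean close to 2 * 1.5 + 1 = 4, variance close to 0.25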

We can also perform regression with time series models.


Clustering / Mixture models / Segmentation / Density Estimation


Mixture models (clustering) are a type of Bayesian network model capable of detecting groups of similar data. Each such group is known as a cluster. The process of grouping similar data is known as clustering, segmentation or density estimation.
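-
As an illustration (again using scikit-learn's GaussianMixture as a stand-in rather than the Bayes Server API), the sketch below groups unlabelled two-dimensional records into two clusters; the data and the number of components are placeholder assumptions.

    # Illustrative sketch: a Gaussian mixture model clusters unlabelled data.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(-2, 0.5, size=(200, 2)),   # group A
                      rng.normal(+2, 0.5, size=(200, 2))])  # group B (labels never shown to the model)

    mixture = GaussianMixture(n_components=2, random_state=0).fit(data)

    clusters = mixture.predict(data)        # hard cluster assignment per record
    print(np.bincount(clusters))            # roughly 200 records per cluster
    print(mixture.means_)                   # the centre of each discovered group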





Time series


Bayesian networks that model time series or sequences are known as dynamic Bayesian networks or DBNs.

They allow us to model continuous time series variables, discrete sequence variables, or both in the same model.

We can also mix time series and non-time-series variables in the same model, and use latent time series variables.

Once we have trained a time series model from data, we can use it to predict a single variable or the joint distribution of multiple time series variables, and we can also fill in past or current values.

We can also perform anomaly detection using a time series model, evaluating either real-time or batch data.
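-
As a hedged illustration of these ideas rather than a DBN implementation, the sketch below uses an AR(1) process, which can be viewed as the simplest linear-Gaussian time series model: it estimates the transition parameters from a series, predicts the next value, and scores a new observation by its log-likelihood. All numbers are placeholders.

    # Illustrative sketch: an AR(1) process as the simplest linear-Gaussian
    # dynamic model. Parameters are estimated from data, then used both to
    # predict the next value and to score new observations (anomaly detection).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Simulate a series x[t] = a * x[t-1] + noise.
    a_true, noise_sd = 0.8, 0.3
    series = [0.0]
    for _ in range(500):
        series.append(a_true * series[-1] + rng.normal(0, noise_sd))
    series = np.array(series)

    # Estimate the transition coefficient and noise level by least squares.
    prev, curr = series[:-1], series[1:]
    a_hat = (prev @ curr) / (prev @ prev)
    resid_sd = np.std(curr - a_hat * prev)

    # Predict the next time step from the last observed value.
    prediction = a_hat * series[-1]
    print(f"next value ~ Normal({prediction:.2f}, sd={resid_sd:.2f})")

    # Anomaly score for a new observation: its log-likelihood under the model.
    new_observation = 5.0
    log_likelihood = norm.logpdf(new_observation, loc=prediction, scale=resid_sd)
    print(f"log-likelihood of {new_observation}: {log_likelihood:.1f}")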



Time series model (DBN)

Latent variables


Latent variables allow patterns that are not directly available as inputs in your data to be modeled.

A cluster/mixture model is an example of a model with a discrete latent variable. Instead of only having a single ellipse (Gaussian), the latent variable allows the model to fit multiple ellipses (Gaussians), even though we do not have information about which records belong to which ellipse. The expectation maximization (EM) algorithm automatically finds the patterns for us.
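-
As a small illustration of this (scikit-learn's GaussianMixture runs EM internally and stands in for a Bayes Server mixture model), the sketch below fits two Gaussians to unlabelled data and then queries the posterior distribution of the latent cluster variable for a new record; the data is synthetic.

    # Illustrative sketch: EM (inside GaussianMixture) recovers a discrete
    # latent cluster variable even though no cluster labels are given.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(-3, 1.0, size=(300, 2)),
                      rng.normal(+3, 1.0, size=(300, 2))])   # membership is never revealed

    mixture = GaussianMixture(n_components=2, random_state=0).fit(data)

    # Posterior distribution over the latent variable for a new record,
    # i.e. the probability that it belongs to each discovered Gaussian.
    record = np.array([[2.5, 3.1]])
    print(mixture.predict_proba(record))   # e.g. [[0.0, 1.0]] (component order is arbitrary)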

Bayes Server supports both discrete and continuous latent variables, multiple latent variables in a model as well as time series latent variables.

Latent variables are useful when a simple model does not fit the data well. For example, we can construct a mixture of Gaussians, a mixture of Naive Bayes models, or a mixture of time series models.

Latent variable model



Virtual/soft accuracy


Virtual/soft accuracy enables us to make use of Bayesian network models that perform better under some conditions and worse under others.

This is useful when the overall accuracy of the model is not sufficient, but under certain (soft) conditions it performs very well.
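-
As a rough, hedged interpretation only (the precise definition is Bayes Server's), the sketch below contrasts overall accuracy with accuracy computed by weighting each record by the probability that a soft condition holds for it; every value here is a made-up placeholder.

    # Rough illustrative interpretation only: accuracy weighted by the probability
    # that a soft condition holds for each record. All values are placeholders.
    import numpy as np

    correct = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])        # 1 = prediction was right
    p_condition = np.array([0.9, 0.1, 0.8, 0.95, 0.05,        # P(condition | evidence)
                            0.7, 0.85, 0.2, 0.9, 0.6])        # for each record

    overall_accuracy = correct.mean()
    soft_accuracy = (correct * p_condition).sum() / p_condition.sum()

    print(f"overall accuracy: {overall_accuracy:.2f}")
    print(f"accuracy under the soft condition: {soft_accuracy:.2f}")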




Automated insight


Automated insight enables us to use a Bayesian network to automatically identify information which is either significant or unusual.

