Introducing Neural Boltzmann Machines

By Charles K. Fisher

May 16, 2023

Conditional generative models have taken the world by storm. These models take in contextual information via a prompt and then probabilistically generate outputs based on the provided context. Well-known examples can accept a text description of an image and generate a picture of nearly anything you describe, or respond fluently in a conversation.

The problems that we encounter in medical research are often different from problems in other applications of deep learning because they are noisier. For example, most of our work at Unlearn focuses on modeling clinical trajectories as multivariate stochastic processes, and these data -- which describe the state of a patient's health over time -- include measurement noise, natural variation in health outcomes, and plenty of missing observations.

We've found the performance of many popular generative model architectures to be underwhelming on the problems we work on in medicine, so we've taken it upon ourselves to invent something better.

In 2019, we invented a method to train Restricted Boltzmann Machines (RBMs) adversarially (US Patent US11636309B2), creating an architecture we called Boltzmann Encoded Adversarial Machines at the time, though I now prefer the name BoltGAN Machines. An autoregressive formulation of Conditional BoltGAN Machines has been the backbone of our work modeling clinical trajectories for the past four years.

Although these models have performed well on our problems, they have some drawbacks. For example, they only work as discrete-time models. The biggest drawback, however, is simply that they are difficult to work with. They are slow. Training can be finicky. The architecture is difficult to modify. All of these issues have slowed down our iteration time, and we got tired of it.

So we've invented something new again.

Today, we're introducing Neural Boltzmann Machines (NBMs) -- a new class of conditional generative model that combines feedforward deep neural networks with RBMs to get the best of both worlds. The basic idea is to train a deep neural network to accept contextual information and then output the parameters of an RBM based on the provided context.
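To make the idea concrete, here is a minimal sketch in PyTorch of what such a model might look like. The class name, layer sizes, and the choice of Gaussian visible units with Bernoulli hidden units are illustrative assumptions on my part, not the implementation from our paper or repository.

import torch
import torch.nn as nn

# Hypothetical sketch: a feedforward network maps a context vector to the
# parameters (visible bias b, hidden bias c, weights W) of an RBM with
# Gaussian visible units and Bernoulli hidden units.
class NeuralBoltzmannMachine(nn.Module):
    def __init__(self, context_dim, visible_dim, hidden_dim):
        super().__init__()
        self.visible_dim = visible_dim
        self.hidden_dim = hidden_dim
        # The context network outputs all RBM parameters as one flat vector.
        n_params = visible_dim + hidden_dim + visible_dim * hidden_dim
        self.context_net = nn.Sequential(
            nn.Linear(context_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def rbm_params(self, context):
        # Split the network output into the three RBM parameter blocks.
        out = self.context_net(context)
        b = out[..., : self.visible_dim]
        c = out[..., self.visible_dim : self.visible_dim + self.hidden_dim]
        W = out[..., self.visible_dim + self.hidden_dim :].reshape(
            -1, self.visible_dim, self.hidden_dim
        )
        return b, c, W

    @torch.no_grad()
    def sample(self, context, n_steps=50):
        # Block Gibbs sampling from the context-conditioned RBM.
        b, c, W = self.rbm_params(context)
        v = torch.randn(context.shape[0], self.visible_dim)
        for _ in range(n_steps):
            p_h = torch.sigmoid(c + torch.einsum("bv,bvh->bh", v, W))
            h = torch.bernoulli(p_h)
            # Gaussian visible units with unit variance, for simplicity.
            mean_v = b + torch.einsum("bh,bvh->bv", h, W)
            v = mean_v + torch.randn_like(mean_v)
        return v

# Usage: draw 32 samples, each conditioned on its own 8-dimensional context.
model = NeuralBoltzmannMachine(context_dim=8, visible_dim=4, hidden_dim=16)
samples = model.sample(torch.randn(32, 8))

Because the RBM parameters are produced by an ordinary differentiable network, the conditional model can be trained with standard deep learning tooling -- that's the "best of both worlds" the architecture is after.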

More details are provided in our paper on arXiv and in an open-source example on GitHub.

We've only provided a couple of standard toy examples so far to illustrate how NBMs work on problems readers will likely be familiar with. But we're hard at work developing new NBMs to model clinical trajectories. This line of research has allowed us to create continuous-time models and to leverage all of the convenience and power of modern deep neural networks -- increasing our iteration speed by at least 10x.

We're excited to share more about our clinical NBMs soon.
