
FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models

Will Grathwohl¹², Ricky T. Q. Chen¹², Jesse Bettencourt², Ilya Sutskever³, David Duvenaud²
¹Equal contribution. Order determined by coin toss. {wgrathwohl, rtqichen}@cs.toronto.edu
²University of Toronto and Vector Institute   ³OpenAI

Abstract

A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson's trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.


1 Introduction

Figure 1: FFJORD transforms a simple base distribution at $t_0$ into the target distribution at $t_1$ by integrating over learned continuous dynamics.

Reversible generative models use cheaply invertible neural networks to transform samples from a fixed base distribution. Examples include NICE (Dinh et al., 2014), Real NVP (Dinh et al., 2017), and Glow (Kingma & Dhariwal, 2018). These models are easy to sample from, and can be trained by maximum likelihood using the change of variables formula. However, this requires placing awkward restrictions on their architectures, such as partitioning dimensions or using rank-one weight matrices, in order to avoid an $\mathcal{O}(D^3)$ determinant computation.

Recently, Chen et al. (2018) introduced a continuous-time analog of normalizing flows, defining the mapping from latent variables to data using ordinary differential equations (ODEs). In their model, the likelihood can be computed using relatively cheap trace operations. A more flexible, but still restricted, family of network architectures can be used to avoid this $\mathcal{O}(D^3)$ cost.

Extending this work, we introduce an unbiased stochastic estimator of the likelihood that has an $\mathcal{O}(D)$ time cost, allowing completely unrestricted architectures. Furthermore, we have implemented GPU-based adaptive ODE solvers to train and evaluate these models on modern hardware. We call our approach Free-form Jacobian of Reversible Dynamics (FFJORD).

2 Background: Generative Models and Change of Variables

In contrast to directly parameterizing a normalized distribution (Oord et al., 2016; Germain et al., 2015), the change of variables formula allows one to specify a complex normalized distribution implicitly by warping a normalized base distribution $p_z(z)$ through an invertible function $f : \mathbb{R}^D \rightarrow \mathbb{R}^D$. Given a random variable $z \sim p_z(z)$, the log density of $x = f(z)$ follows

$$\log p_x(x) = \log p_z(z) - \log \left| \det \frac{\partial f(z)}{\partial z} \right| \qquad (1)$$

where $\frac{\partial f(z)}{\partial z}$ is the Jacobian of $f$. In general, computing the log determinant has a time cost of $\mathcal{O}(D^3)$. Much work has gone into using restricted neural network architectures to make computing the Jacobian determinant more tractable. These approaches broadly fall into three categories:

  1. Normalizing flows. By restricting the functional form of $f$, various determinant identities can be exploited (Rezende & Mohamed, 2015; Berg et al., 2018). Because they do not have a tractable analytic inverse, these models cannot both be trained directly on data and sampled from efficiently, but they have been shown to be useful in representing the approximate posterior for variational inference (Kingma & Welling, 2014).

  2. Autoregressive transformations. By using an autoregressive model and specifying an ordering of the dimensions, the Jacobian of $f$ is enforced to be lower triangular (Kingma et al., 2016; Oliva et al., 2018). These models excel at density estimation for tabular datasets (Papamakarios et al., 2017), but require $D$ sequential evaluations of $f$ to invert, which is prohibitive when $D$ is large.

  3. Partitioned transformations. Partitioning the dimensions and using affine transformations makes the determinant of the Jacobian cheap to compute, and the inverse computable with the same cost as $f$ (Dinh et al., 2014, 2017). This method allows the use of convolutional architectures, excelling at density estimation for image data (Dinh et al., 2017; Kingma & Dhariwal, 2018); a minimal sketch of such a coupling transformation follows this list.
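To make the partitioned approach concrete, below is a minimal NumPy sketch of an affine coupling transformation in the style of Real NVP. The scale and shift networks here are arbitrary placeholder functions, not the architectures used in the cited papers; the point is that the log-determinant and the inverse both cost no more than a forward pass.

```python
import numpy as np

def coupling_forward(x, scale_net, shift_net):
    """Affine coupling transformation (illustrative sketch).

    The first half of the dimensions passes through unchanged and parameterizes
    an element-wise scale and shift of the second half, so the Jacobian is
    triangular and its log-determinant is just the sum of the log-scales.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = scale_net(x1), shift_net(x1)       # any unrestricted functions of x1
    y = np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)
    log_det = s.sum(axis=-1)                  # log|det J|, computed in O(D)
    return y, log_det

def coupling_inverse(y, scale_net, shift_net):
    """The inverse has the same cost as the forward pass."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = scale_net(y1), shift_net(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

# Toy usage with random linear "networks".
rng = np.random.default_rng(0)
Ws, Wt = 0.1 * rng.standard_normal((2, 3, 3))
scale_net, shift_net = (lambda h: h @ Ws), (lambda h: h @ Wt)
x = rng.standard_normal((5, 6))
y, log_det = coupling_forward(x, scale_net, shift_net)
assert np.allclose(coupling_inverse(y, scale_net, shift_net), x)
```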

Throughout this work, we refer to reversible generative models as those that use the change of variables formula to transform a base distribution to the model distribution while maintaining both efficient density estimation and efficient sampling using a single pass of the model.

2.1 Other Generative Models

There exist several approaches to generative modeling which do not use the change of variables equation for training. Generative adversarial networks (GANs) (Goodfellow et al., 2014) use large, unrestricted neural networks to transform samples from a fixed base distribution. Lacking a closed-form likelihood, an auxiliary discriminator model must be trained to estimate various divergences in order to provide a training signal. Autoregressive models (Germain et al., 2015; Oord et al., 2016) directly specify the joint distribution as a sequence of explicit conditional distributions using the product rule; these models require at least $\mathcal{O}(D)$ evaluations to sample from. Variational autoencoders (VAEs) (Kingma & Welling, 2014) use an unrestricted architecture to explicitly specify the conditional likelihood $p(x|z)$, but can only efficiently provide a lower bound on the marginal likelihood $p(x)$.

Method | Train on data | One-pass Sampling | Exact log-likelihood | Free-form Jacobian
Variational Autoencoders | ✓ | ✓ | ✗ | ✓
Generative Adversarial Nets | ✓ | ✓ | ✗ | ✓
Likelihood-based Autoregressive | ✓ | ✗ | ✓ | ✗
Change of Variables:
  Normalizing Flows | ✗ | ✓ | ✓ | ✗
  Reverse-NF, MAF, TAN | ✓ | ✗ | ✓ | ✗
  NICE, Real NVP, Glow, Planar CNF | ✓ | ✓ | ✓ | ✗
  FFJORD | ✓ | ✓ | ✓ | ✓
Table 1: A comparison of the abilities of generative modeling approaches.

2.2 Continuous Normalizing Flows

Chen et al. (2018) define a generative model for data $x \in \mathbb{R}^D$, similar to those based on (1), which replaces the warping function with an integral of continuous-time dynamics. The generative process works by first sampling from a base distribution $z_0 \sim p_{z_0}(z_0)$. Then, given an ODE defined by the parametric function $f(z(t), t; \theta)$, we solve the initial value problem $z(t_0) = z_0$, $\frac{\partial z(t)}{\partial t} = f(z(t), t; \theta)$ to obtain $z(t_1)$, which constitutes our observable data. These models are called Continuous Normalizing Flows (CNF). The change in log-density under this model follows a second differential equation, called the instantaneous change of variables formula (Chen et al., 2018):

$$\frac{\partial \log p(z(t))}{\partial t} = -\mathrm{Tr}\left( \frac{\partial f}{\partial z(t)} \right). \qquad (2)$$

We can compute the total change in log-density by integrating across time:

$$\log p(z(t_1)) = \log p(z(t_0)) - \int_{t_0}^{t_1} \mathrm{Tr}\left( \frac{\partial f}{\partial z(t)} \right) dt. \qquad (3)$$

Given a datapoint $x$, we can compute both the point $z_0$ which generates $x$, as well as $\log p(x)$ under the model, by solving the initial value problem:

$$\begin{bmatrix} z_0 \\ \log p(x) - \log p_{z_0}(z_0) \end{bmatrix} = \begin{bmatrix} x \\ 0 \end{bmatrix} + \int_{t_1}^{t_0} \begin{bmatrix} f(z(t), t; \theta) \\ \mathrm{Tr}\left( \frac{\partial f}{\partial z(t)} \right) \end{bmatrix} dt, \qquad (4)$$

which integrates the combined dynamics of $z(t)$ and the log-density of the sample backwards in time from $t_1$ to $t_0$. We can then compute $\log p(x)$ using the solution of (4) and adding $\log p_{z_0}(z_0)$. The existence and uniqueness of (4) require that $f$ and its first derivatives be Lipschitz continuous (Khalil, 2002), which can be satisfied in practice using neural networks with smooth Lipschitz activations.
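As a small sanity check of (2)-(4), the following NumPy sketch uses linear dynamics, for which the trace term and the resulting density are both available in closed form. The linear dynamics are purely illustrative and are not the learned dynamics used in this paper.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
D, t0, t1 = 3, 0.0, 1.0
A = 0.5 * rng.standard_normal((D, D))        # dz/dt = A z, so Tr(df/dz) = Tr(A)

z0 = rng.standard_normal(D)                  # sample from the base distribution
M = expm(A * (t1 - t0))                      # flow map: z(t1) = M z0
x = M @ z0

# Change in log-density from (3), with the trace term constant in time:
logp_x_cnf = multivariate_normal(np.zeros(D), np.eye(D)).logpdf(z0) \
             - np.trace(A) * (t1 - t0)

# Ground truth: x = M z0 is Gaussian with covariance M M^T.
logp_x_exact = multivariate_normal(np.zeros(D), M @ M.T).logpdf(x)
print(logp_x_cnf, logp_x_exact)              # the two values agree
```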

2.2.1 Backpropagating through ODE Solutions with the Adjoint Method

CNFs are trained to maximize (3). This objective involves the solution to an initial value problem with dynamics parameterized by $\theta$. For any scalar loss function $L$ which operates on the solution to an initial value problem

$$L(z(t_1)) = L\left( z(t_0) + \int_{t_0}^{t_1} f(z(t), t; \theta)\, dt \right), \qquad (5)$$

then Pontryagin (1962) shows that its derivative takes the form of another initial value problem:

$$\frac{dL}{d\theta} = -\int_{t_1}^{t_0} \left( \frac{\partial L}{\partial z(t)} \right)^{T} \frac{\partial f(z(t), t; \theta)}{\partial \theta}\, dt. \qquad (6)$$

The quantity $\frac{\partial L}{\partial z(t)}$ is known as the adjoint state of the ODE. Chen et al. (2018) use a black-box ODE solver to compute $z(t_1)$, and then another call to a solver to compute (6) with the initial value $\frac{\partial L}{\partial z(t_1)}$. This approach is a continuous-time analog of the backpropagation algorithm (Rumelhart et al., 1986; Andersson, 2013) and can be combined with gradient-based optimization methods to fit the parameters $\theta$.
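The following sketch checks (6) on a scalar ODE where the state and adjoint have closed forms, evaluating the adjoint integral by simple quadrature; it is an illustration of the identity, not the memory-efficient solver-based implementation of Chen et al. (2018).

```python
import numpy as np

# Scalar ODE dz/dt = theta * z with loss L = z(t1); here both the state z(t)
# and the adjoint a(t) = dL/dz(t) have simple closed forms.
theta, z0, t0, t1 = 0.7, 1.3, 0.0, 2.0
ts = np.linspace(t0, t1, 4001)

z = z0 * np.exp(theta * (ts - t0))           # forward solution z(t)
a = np.exp(theta * (t1 - ts))                # adjoint state dL/dz(t)
integrand = a * z                            # a(t) * df/dtheta, with df/dtheta = z(t)

# Gradient from (6); the minus sign and reversed limits cancel, leaving an
# ordinary integral from t0 to t1, evaluated here with the trapezoid rule.
dt = ts[1] - ts[0]
dL_dtheta_adjoint = np.sum((integrand[1:] + integrand[:-1]) / 2) * dt

# Direct differentiation of L(theta) = z0 * exp(theta * (t1 - t0)):
dL_dtheta_exact = z0 * (t1 - t0) * np.exp(theta * (t1 - t0))
print(dL_dtheta_adjoint, dL_dtheta_exact)    # agree to quadrature accuracy
```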

3 Scalable density evaluation with unrestricted architectures

Switching from discrete-time dynamics to continuous-time dynamics reduces the primary computational bottleneck of normalizing flows from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$, at the cost of introducing a numerical ODE solver. This allows the use of more expressive architectures. For example, each layer of the original normalizing flows model of Rezende & Mohamed (2015) is a one-layer neural network with only a single hidden unit. In contrast, the instantaneous transformation used in planar continuous normalizing flows (Chen et al., 2018) is a one-layer neural network with many hidden units. In this section, we construct an unbiased estimate of the log-density with $\mathcal{O}(D)$ cost, allowing completely unrestricted neural network architectures to be used.

3.1 Unbiased Linear-time Log-Density Estimation

In general, computing $\mathrm{Tr}\left(\frac{\partial f}{\partial z(t)}\right)$ exactly costs $\mathcal{O}(D^2)$, or approximately the same cost as $D$ evaluations of $f$, since each entry of the diagonal of the Jacobian requires computing a separate derivative of $f$. However, there are two tricks that can help. First, vector-Jacobian products $v^{T}\frac{\partial f}{\partial z}$ can be computed for approximately the same cost as evaluating $f$, using reverse-mode automatic differentiation. Second, we can get an unbiased estimate of the trace of a matrix by taking a double product of that matrix with a noise vector:

$$\mathrm{Tr}(A) = E_{p(\epsilon)}\left[ \epsilon^{T} A\, \epsilon \right]. \qquad (7)$$

The above equation holds for any $D \times D$ matrix $A$ and distribution $p(\epsilon)$ over $D$-dimensional vectors such that $E[\epsilon] = 0$ and $\mathrm{Cov}(\epsilon) = I$. The Monte Carlo estimator derived from (7) is known as Hutchinson's trace estimator (Hutchinson, 1989; Adams et al., 2018).

To keep the dynamics deterministic within each call to the ODE solver, we can use a fixed noise vector $\epsilon$ for the duration of each solve without introducing bias:

$$\log p(z(t_1)) = \log p(z(t_0)) - E_{p(\epsilon)}\left[ \int_{t_0}^{t_1} \epsilon^{T} \frac{\partial f}{\partial z(t)}\, \epsilon \; dt \right]. \qquad (8)$$

Typical choices of $p(\epsilon)$ are a standard Gaussian or a Rademacher distribution (Hutchinson, 1989).
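A minimal NumPy sketch of the estimator in (7), assuming access to the full matrix for clarity (in FFJORD only matrix-vector products with the Jacobian are ever formed, via automatic differentiation):

```python
import numpy as np

def hutchinson_trace(A, num_samples=10000, dist="rademacher", seed=0):
    """Unbiased Monte Carlo estimate of Tr(A) via (7).

    Only products of A with vectors are required, which is why the estimator
    combines naturally with vector-Jacobian products from autodiff.
    """
    rng = np.random.default_rng(seed)
    D = A.shape[0]
    if dist == "rademacher":
        eps = rng.integers(0, 2, size=(num_samples, D)) * 2.0 - 1.0
    else:                                    # standard Gaussian
        eps = rng.standard_normal((num_samples, D))
    # eps^T A eps for every sample, then average.
    return np.einsum("nd,de,ne->n", eps, A, eps).mean()

A = np.random.default_rng(1).standard_normal((50, 50))
print(np.trace(A), hutchinson_trace(A))      # estimate is close to the true trace
```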

3.1.1 Reducing Variance with Bottleneck Capacity

Often, there exist bottlenecks in the architecture of the dynamics network, i.e. hidden layers whose width $H$ is smaller than the dimension $D$ of the input $z$. In such cases, we can reduce the variance of Hutchinson's estimator by using the cyclic property of the trace. Since the variance of the estimator for $\mathrm{Tr}(A)$ grows with $\|A\|_F^2$ (Hutchinson, 1989), we suspect that having fewer dimensions should help reduce variance. If we view the dynamics as a composition of two functions $f = g \circ h$, then we observe

$$\mathrm{Tr}\left( \frac{\partial f}{\partial z} \right) = \mathrm{Tr}\left( \underbrace{\frac{\partial g}{\partial h} \frac{\partial h}{\partial z}}_{D \times D} \right) = \mathrm{Tr}\left( \underbrace{\frac{\partial h}{\partial z} \frac{\partial g}{\partial h}}_{H \times H} \right) = E_{p(\epsilon)}\left[ \epsilon^{T} \frac{\partial h}{\partial z} \frac{\partial g}{\partial h}\, \epsilon \right]. \qquad (9)$$

When $f$ has multiple hidden layers, we choose $H$ to be the smallest dimension. This bottleneck trick can reduce the norm of the matrix inside the trace, which may also help reduce the variance of the trace estimator.
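The following NumPy sketch illustrates the identity in (9) with random stand-in Jacobians: both orderings have the same trace, but the Hutchinson estimator applied to the smaller $H \times H$ matrix typically has noticeably lower variance.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, n = 100, 10, 5000                      # data dim, bottleneck dim, noise samples

dh = rng.standard_normal((H, D))             # stand-in for dh/dz of h: R^D -> R^H
dg = rng.standard_normal((D, H))             # stand-in for dg/dh of g: R^H -> R^D

big, small = dg @ dh, dh @ dg                # D x D and H x H, identical trace
assert np.allclose(np.trace(big), np.trace(small))

def hutchinson_samples(M):
    eps = rng.standard_normal((n, M.shape[0]))
    return np.einsum("nd,de,ne->n", eps, M, eps)

print("true trace:          ", np.trace(big))
print("std with D-dim noise:", hutchinson_samples(big).std())
print("std with H-dim noise:", hutchinson_samples(small).std())   # typically much smaller
```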

3.2 FFJORD: A Continuous-time Reversible Generative Model

Our complete method uses the dynamics defined in (2) and the efficient log-likelihood estimator of Section 3.1 to produce the first scalable and reversible generative model with an unconstrained Jacobian, leading to the name Free-Form Jacobian of Reversible Dynamics (FFJORD). Pseudo-code of our method is given in Algorithm 1, and Table 1 summarizes the capabilities of our model compared to previous work.

Require: dynamics $f_\theta$, start time $t_0$, stop time $t_1$, minibatch of samples $x$.
$\epsilon \leftarrow$ sample_unit_variance($x$.shape)   ▷ Sample $\epsilon$ outside of the integral
function $f_{\text{aug}}$($[z_t, \log p_t]$, $t$):   ▷ Augment $f$ with log-density dynamics
    $f_t \leftarrow f_\theta(z(t), t)$   ▷ Evaluate dynamics
    $g \leftarrow \epsilon^{T} \frac{\partial f_t}{\partial z(t)}$   ▷ Compute vector-Jacobian product with automatic differentiation
    $\tilde{T} \leftarrow g\, \epsilon$   ▷ Unbiased estimate of $\mathrm{Tr}\left(\frac{\partial f}{\partial z(t)}\right)$
    return $[f_t, -\tilde{T}]$   ▷ Concatenate dynamics of state and log-density
end function
$[\hat{z}_0, \Delta_{\log p}] \leftarrow$ odeint($f_{\text{aug}}$, $[x, \vec{0}]$, $t_1$, $t_0$)   ▷ Solve the augmented ODE backwards in time from $t_1$ to $t_0$
$\log \hat{p}(x) \leftarrow \log p_{z_0}(\hat{z}_0) - \Delta_{\log p}$   ▷ Add change in log-density
return $\log \hat{p}(x)$

Algorithm 1 Unbiased stochastic log-density estimation using the FFJORD model
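Below is a minimal PyTorch sketch of Algorithm 1. It uses a hand-rolled fixed-step RK4 integrator and a standard Gaussian base distribution for clarity, whereas the experiments in this paper use adaptive solvers and the adjoint method; the small dynamics network at the end is an illustrative stand-in.

```python
import math
import torch

def ffjord_log_density(f, x, t0=0.0, t1=1.0, steps=20):
    """Sketch of Algorithm 1: unbiased stochastic estimate of log p(x).

    f(z, t) is an unrestricted network giving dz/dt. A fixed-step RK4 integrator
    is used here for clarity, and gradients are detached, so this sketch only
    evaluates densities rather than training the model.
    """
    eps = torch.randn_like(x)                          # fixed noise for the whole solve

    def aug_dynamics(z, t):
        """Returns (dz/dt, d(log p)/dt estimate) at state z and time t."""
        with torch.enable_grad():
            z = z.detach().requires_grad_(True)
            dz = f(z, t)
            # Vector-Jacobian product eps^T (df/dz) via reverse-mode autodiff,
            # then a dot with eps gives the Hutchinson estimate of Tr(df/dz).
            g = torch.autograd.grad(dz, z, grad_outputs=eps)[0]
            trace_est = (g * eps).sum(dim=1)
        return dz.detach(), -trace_est.detach()

    # Integrate [z, delta_logp] backwards in time from t1 to t0 with RK4.
    z, dlogp = x.clone(), torch.zeros(x.shape[0])
    h, t = (t0 - t1) / steps, t1
    for _ in range(steps):
        k1z, k1l = aug_dynamics(z, t)
        k2z, k2l = aug_dynamics(z + 0.5 * h * k1z, t + 0.5 * h)
        k3z, k3l = aug_dynamics(z + 0.5 * h * k2z, t + 0.5 * h)
        k4z, k4l = aug_dynamics(z + h * k3z, t + h)
        z = z + h / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)
        dlogp = dlogp + h / 6 * (k1l + 2 * k2l + 2 * k3l + k4l)
        t = t + h

    # Standard Gaussian base distribution, then subtract the change in log-density.
    logp_z0 = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return logp_z0 - dlogp

# Illustrative dynamics: a small MLP applied to [z, t] concatenated.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
f = lambda z, t: net(torch.cat([z, torch.full_like(z[:, :1], t)], dim=1))
print(ffjord_log_density(f, torch.randn(8, 2)))
```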

Assuming the cost of evaluating $f$ is on the order of $\mathcal{O}(DH)$, where $D$ is the dimensionality of the data and $H$ is the size of the largest hidden dimension in $f$, the cost of computing the likelihood in models which stack transformations that exploit (1) is $\mathcal{O}((DH + D^3)L)$, where $L$ is the number of transformations used. For CNFs, this reduces to $\mathcal{O}((DH + D^2)\hat{L})$, where $\hat{L}$ is the number of evaluations of $f$ used by the ODE solver. With FFJORD, this reduces further to $\mathcal{O}((DH + D)\hat{L})$.

4 Experiments

Figure 2: Comparison of trained FFJORD and Glow models on 2-dimensional distributions including multi-modal and discontinuous densities (panels show data samples, Glow, and FFJORD).

We demonstrate the power of FFJORD on a variety of density estimation tasks, as well as approximate inference within variational autoencoders (Kingma & Welling, 2014). Experiments were conducted using a suite of GPU-based ODE solvers and an implementation of the adjoint method for backpropagation.¹ In all experiments the Runge-Kutta 4(5) algorithm with the tableau from Shampine (1986) was used to solve the ODEs. We ensure tolerance is set low enough so numerical error is negligible; see Appendix C.

¹ We plan on releasing the full code, including our GPU-based implementation of ODE solvers and the adjoint method, upon publication.

We used Hutchinson's trace estimator (7) during training and the exact trace when reporting test results. This was done in all experiments except for our density estimation models trained on MNIST and CIFAR10, where computing the exact Jacobian trace was not computationally feasible. There, we observed that the variance of the log-likelihood over the validation set induced by the trace estimator was negligibly small.

The dynamics of FFJORD are defined by a neural network $f$ which takes as input the current state $z(t) \in \mathbb{R}^D$ and the current time $t \in \mathbb{R}$. We experimented with several ways to incorporate $t$ as an input to $f$, such as hyper-networks, but found that simply concatenating $t$ onto $z(t)$ at the input to every layer worked well, and this was used in all of our experiments.
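A sketch of one such layer is shown below; the class name and structure are illustrative, not the exact implementation used in our experiments.

```python
import torch

class ConcatLinear(torch.nn.Module):
    """Linear layer that concatenates the scalar time t onto its input,
    mirroring the time-conditioning described above (illustrative sketch)."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = torch.nn.Linear(dim_in + 1, dim_out)

    def forward(self, z, t):
        tt = torch.full_like(z[:, :1], t)     # broadcast t to a column of the batch
        return self.linear(torch.cat([z, tt], dim=1))

# A dynamics network f(z, t) would simply chain such layers with nonlinearities.
```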

4.1 Density Estimation On Toy 2D Data

We first train on 2-dimensional data to visualize the model and the learned dynamics.² In Figure 2, we show that by warping a simple isotropic Gaussian, FFJORD can fit both multi-modal and even discontinuous distributions. The number of evaluations of the ODE solver is roughly 70-100 on all datasets, so we compare against a Glow model with 100 discrete layers.

² Videos of the learned dynamics can be found at https://imgur.com/a/Rtr3Mbq.

The learned distributions of both FFJORD and Glow can be seen in Figure 2. Interestingly, we find that Glow learns to stretch the single mode base distribution into multiple modes but has trouble modeling the areas of low probability between disconnected regions. In contrast, FFJORD is capable of modeling disconnected modes and can also learn convincing approximations of discontinuous density functions (middle row in Figure 2).

4.2 Density Estimation on Real Data

We perform density estimation on five tabular datasets, preprocessed as in Papamakarios et al. (2017), and on two image datasets: MNIST and CIFAR10. On the tabular datasets, FFJORD performs best out of the reversible models by a wide margin but is outperformed by recent autoregressive models. Of those, FFJORD outperforms MAF (Papamakarios et al., 2017) on all but one dataset and manages to outperform TAN (Oliva et al., 2018) on the MINIBOONE dataset. These autoregressive models require $\mathcal{O}(D)$ sequential computations to sample from, while the best performing method, MAF-DDSF (Huang et al., 2018), cannot be sampled from analytically.

On MNIST we find that FFJORD can model the data as well as Glow and Real NVP by integrating a single flow defined by one neural network. This is in contrast to Glow and Real NVP which must compose many flows together to achieve similar performance. When we use multiple flows in a multiscale architecture (like those used by Glow and Real NVP) we obtain better performance on MNIST and comparable performance to Glow on CIFAR10. Notably, FFJORD is able to achieve this performance while using less than 2% as many parameters as Glow. We also note that Glow uses a learned base distribution whereas FFJORD and Real NVP use a fixed Gaussian. A summary of our results on density estimation can be found in Table 2 and samples can be seen in Figure 3. Full details on architectures used, our experimental procedure, and additional samples can be found in Appendix B.1.

Figure 3: Samples and data from our image models. MNIST on left, CIFAR10 on right.

In general, our approach is slower than competing methods, but we find that the memory efficiency of the adjoint method allows us to use much larger batch sizes than those methods. On the tabular datasets we used batch sizes up to 10,000, and on the image datasets we used a batch size of 900.

Model | POWER | GAS | HEPMASS | MINIBOONE | BSDS300 | MNIST | CIFAR10
Real NVP | -0.17 | -8.33 | 18.71 | 13.55 | -153.28 | 1.06* | 3.49*
Glow | -0.17 | -8.15 | 18.92 | 11.35 | -155.07 | 1.05* | 3.35*
FFJORD | -0.46 | -8.59 | 15.26 | 10.43 | -157.67 | 0.99* (1.05†) | 3.40*
MADE | 3.08 | -3.56 | 20.98 | 15.59 | -148.85 | 2.04 | 5.67
MAF | -0.24 | -10.08 | 17.70 | 11.75 | -155.69 | 1.89 | 4.31
TAN | -0.48 | -11.19 | 15.12 | 11.01 | -157.03 | - | -
MAF-DDSF | -0.62 | -11.96 | 15.09 | 8.86 | -157.73 | - | -
Table 2: Negative log-likelihood on test data for density estimation models; lower is better. In nats for tabular data and bits/dim for MNIST and CIFAR10. *Results use multi-scale convolutional architectures. †Results use a single flow with a convolutional encoder-decoder architecture.
Table 3: Negative ELBO on test data for VAE models; lower is better. In nats for all datasets except Frey Faces which is presented in bits per dimension. Mean/stdev are estimated over 3 runs.

4.3 Variational Autoencoder

We compare FFJORD to other normalizing flows for use in variational inference. We train a VAE(Kingma & Welling, 2014) on four datasets using a FFJORD flow and compare to VAEs with no flow, Planar Flows (Rezende & Mohamed, 2015), Inverse Autoregressive Flow (IAF)(Kingma et al., 2016), and Sylvester normalizing flows (Berg et al., 2018). To provide a fair comparison, our encoder/decoder architectures and learning setup exactly mirror those of Berg et al. (2018).

In VAEs, it is common for the encoder network to also output the parameters of the flow as a function of the input $x$. With FFJORD, we found this led to differential equations which were too difficult to integrate numerically. Instead, the encoder network outputs a low-rank update to a global weight matrix and an input-dependent bias vector. Neural network layers inside of FFJORD take the form

$$h = \sigma\left( \left( W + \hat{U} \hat{V}^{T} \right) x + \hat{b} \right)$$

where $x \in \mathbb{R}^{D_{in}}$ is the input to the layer, $\sigma$ is an element-wise activation function, $D_{in}$ and $D_{out}$ are the input and output dimensionality of this layer, $W \in \mathbb{R}^{D_{out} \times D_{in}}$ is a global weight matrix, and $\hat{U} \in \mathbb{R}^{D_{out} \times k}$, $\hat{V} \in \mathbb{R}^{D_{in} \times k}$, $\hat{b} \in \mathbb{R}^{D_{out}}$ are data-dependent parameters returned from the encoder networks. A full description of the model architectures used and our experimental setup can be found in Appendix B.2.
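A sketch of one such data-conditioned layer is given below, assuming the encoder has already produced the low-rank factors and bias; names and shapes are illustrative.

```python
import torch

class LowRankConditionedLayer(torch.nn.Module):
    """Computes sigma((W + U V^T) x + b), where W is a shared global weight and
    U, V, b are produced per datapoint by the encoder (illustrative sketch)."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.W = torch.nn.Parameter(0.01 * torch.randn(dim_out, dim_in))

    def forward(self, x, U, V, b, act=torch.tanh):
        # x: (N, dim_in), U: (N, dim_out, k), V: (N, dim_in, k), b: (N, dim_out)
        base = x @ self.W.t()                               # shared global weight
        low_rank = torch.einsum("nok,nik,ni->no", U, V, x)  # per-sample (U V^T) x
        return act(base + low_rank + b)
```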

On every dataset tested, FFJORD outperforms all other competing normalizing flows. A summary of our variational inference results can be found in Table 3.

5 Analysis and Discussion

Figure 4: The variance of our model's log-density estimator can be reduced using neural network architectures with a bottleneck layer, speeding up training.

We perform a series of ablation experiments to gain a better understanding of the proposed model.

5.1 Faster Training with Bottleneck Trick

We plot the training losses on MNIST using an encoder-decoder architecture (see Appendix B.1 for details). Loss during training is plotted in Figure 4, where we either use the trace estimator directly on the $D \times D$ Jacobian or use the bottleneck trick to reduce the estimated matrix to $H \times H$. Interestingly, we find that while the bottleneck trick (9) can lead to faster convergence when the trace is estimated using a Gaussian-distributed $\epsilon$, we did not observe faster convergence when using a Rademacher-distributed $\epsilon$.

5.2 Number of Function Evaluations vs. Data Dimension

Figure 5: Number of function evaluations used by the adaptive ODE solver (NFE) is approximately independent of data dimension.

The full computational cost of integrating the instantaneous change of variables (2) is $\mathcal{O}(DH\hat{L})$, where $D$ is the dimensionality of the data, $H$ is the size of the hidden state, and $\hat{L}$ is the number of function evaluations (NFE) that the adaptive solver uses to integrate the ODE. In general, each evaluation of the model costs $\mathcal{O}(DH)$ and, in practice, $H$ is typically chosen to be close to $D$. Since the general form of the discrete change of variables equation (1) requires $\mathcal{O}(D^3)$ cost, one may wonder whether the number of evaluations $\hat{L}$ depends on $D$.

We train VAEs using FFJORD flows with increasing latent dimension $D$. The NFE throughout training is shown in Figure 5. In all models, we find that the NFE increases throughout training but converges to the same value, independent of $D$. This phenomenon can be explained with a simple thought experiment. Take an isotropic Gaussian distribution as the data distribution and set the base distribution of our model to be an isotropic Gaussian. Then the optimal differential equation is zero for any $D$, and the number of evaluations is zero. We can conclude that the number of evaluations is not dependent on the dimensionality of the data but on the complexity of its distribution, or, more specifically, how difficult it is to transform its density into the base distribution.
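The thought experiment can be mimicked numerically: with identically zero dynamics, an adaptive solver spends only a small, dimension-independent number of function evaluations (the snippet below uses SciPy's solve_ivp purely for illustration).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Zero dynamics (identity flow): the adaptive solver's number of function
# evaluations stays small and does not grow with the data dimension D.
for D in (2, 64, 1024):
    sol = solve_ivp(lambda t, z: np.zeros_like(z), (0.0, 1.0), np.zeros(D),
                    method="RK45", rtol=1e-5, atol=1e-5)
    print(D, sol.nfev)
```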

5.3 Single-scale vs. Multi-scale FFJORD

Figure 6: For image data, multiscale architectures require the ODE solver to use a greater number of function evaluations (NFE), but these models achieve better performance.

Crucial to the scalability of Real NVP and Glow is the multiscale architecture originally proposed in Dinh et al. (2017). We compare a single-scale encoder-decoder style FFJORD with a multiscale FFJORD on the MNIST dataset, where both models have a comparable number of parameters, and plot the total NFE (in both forward and backward passes) against the loss achieved in Figure 6. We find that while the single-scale model uses approximately half as many function evaluations as the multiscale model, it is not able to achieve the same performance as the multiscale model.

6 Scope and Limitations

Number of function evaluations can be prohibitive.

The number of function evaluations required to integrate the dynamics is not fixed ahead of time and is a function of the data, model architecture, and model parameters. We find that this tends to grow as the model trains and can become prohibitively large, even though memory stays constant due to the adjoint method. Various forms of regularization, such as weight decay and spectral normalization (Miyato et al., 2018), can be used to reduce this quantity, but their use tends to hurt performance slightly.

Limitations of general-purpose ODE solvers.

In theory, we can model any differential equation (given mild assumptions based on existence and uniqueness of the solution), but in practice our reliance on general-purpose ODE solvers restricts us to non-stiff differential equations that can be efficiently solved. ODE solvers for stiff dynamics exist, but they evaluate $f$ many more times to achieve the same error. We find that using a small amount of weight decay sufficiently constrains the ODE to be non-stiff.

7 Conclusion

We have presented FFJORD, a reversible generative model for high-dimensional data which can compute exact log-likelihoods and can be sampled from efficiently. Our model uses continuous-time dynamics to produce a generative model which is parameterized by an unrestricted neural network. All required quantities for training and sampling can be computed using automatic differentiation, Hutchinson's trace estimator, and black-box ODE solvers. Our model stands in contrast to other methods with similar properties which rely on restricted, hand-engineered neural network architectures. We have demonstrated that this additional flexibility allows our approach to achieve improved performance on density estimation and variational inference. We also demonstrate FFJORD's ability to model distributions which comparable methods such as Glow and Real NVP cannot model.

We believe there is much room for further work exploring and improving this method. We are interested specifically in ways to reduce the number of function evaluations used by the ODE-solver without hurting predictive performance. Advancements like these will be crucial in scaling this method to even higher-dimensional datasets.

8 Acknowledgements

We thank Roger Grosse and Yulia Rubanova for helpful discussions.

References

  • Adams et al. (2018) R. P. Adams, J. Pennington, M. J. Johnson, J. Smith, Y. Ovadia, B. Patton, and J. Saunderson. Estimating the Spectral Density of Large Implicit Matrices. ArXiv e-prints, February 2018.
  • Andersson (2013) Joel Andersson. A general-purpose software framework for dynamic optimization. PhD thesis, 2013.
  • Berg et al. (2018) Rianne van den Berg, Leonard Hasenclever, Jakub M Tomczak, and Max Welling. Sylvester normalizing flows for variational inference. arXiv preprint arXiv:1803.05649, 2018.
  • Chen et al. (2018) Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 2018.
  • Dinh et al. (2014) Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. International Conference on Learning Representations Workshop, 2014.
  • Dinh et al. (2017) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. International Conference on Learning Representations, 2017.
  • Germain et al. (2015) Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pp. 881–889, 2015.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
  • Huang et al. (2018) Chin-Wei Huang, David Krueger, Alexandre Lacoste, and Aaron Courville. Neural autoregressive flows. International Conference on Machine Learning, 2018.
  • Hutchinson (1989) M. F. Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 18(3):1059–1076, 1989.
  • Khalil (2002) H.K. Khalil. Nonlinear Systems. Pearson Education. Prentice Hall, 2002.
  • Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kingma & Dhariwal (2018) Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018.
  • Kingma & Welling (2014) Diederik P Kingma and Max Welling. Auto-encoding variational bayes. International Conference on Learning Representations, 2014.
  • Kingma et al. (2016) Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pp. 4743–4751, 2016.
  • Miyato et al. (2018) Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. International Conference on Learning Representations, 2018.
  • Oliva et al. (2018) Junier B Oliva, Avinava Dubey, Barnabás Póczos, Jeff Schneider, and Eric P Xing. Transformation autoregressive networks. International Conference on Machine Learning, 2018.
  • Oord et al. (2016) Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. International Conference on Machine Learning, 2016.
  • Papamakarios et al. (2017) George Papamakarios, Iain Murray, and Theo Pavlakou. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338–2347, 2017.
  • Pontryagin (1962) Lev Semenovich Pontryagin. Mathematical theory of optimal processes. Routledge, 1962.
  • Rezende & Mohamed (2015) Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. International Conference on Machine Learning, 2015.
  • Rumelhart et al. (1986) David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. nature, 323(6088):533, 1986.
  • Shampine (1986) Lawrence F Shampine. Some practical Runge-Kutta formulas. Mathematics of Computation, 46(173):135–150, 1986.
