The canonical approach in generative modeling is to split model fitting into two blocks: define first how to sample noise (e.g., Gaussian) and choose next what to do with it (e.g., using a single map or flows). We explore in this work an alternative route that ties together sampling and mapping. We find inspiration in moment measures, a result stating that for any measure ρ, there exists a unique convex potential u such that ρ = ∇u♯e^{−u}. While this does seem to effectively tie sampling (from the log-concave distribution e^{−u}) and action (pushing particles through ∇u), we observe on simple examples (e.g., Gaussians or 1D distributions) that this choice is ill-suited for practical tasks. We study an alternative factorization, where ρ is factorized as ∇w*♯e^{−w}, where w* is the convex conjugate of a convex potential w. We call this approach conjugate moment measures, and show far more intuitive results on these examples. Because ∇w* is the Monge map between the log-concave distribution e^{−w} and ρ, we rely on optimal transport solvers to propose an algorithm to recover w from samples of ρ, and parameterize w as an input-convex neural network. We also address the common sampling setting in which the density of ρ is known only up to a normalizing constant, and propose an algorithm to learn w in this setting.
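To illustrate the parameterization of the convex potential w mentioned above, the following is a minimal NumPy sketch of an input-convex neural network (ICNN) forward pass, in the standard construction where convexity in the input is guaranteed by keeping the hidden-to-hidden weights nonnegative and the activations convex and nondecreasing. All layer sizes, names, and the architecture details are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def softplus(x):
    # Convex, nondecreasing activation; numerically stable log(1 + e^x).
    return np.logaddexp(0.0, x)

class ToyICNN:
    """Toy input-convex network w(x).

    Convexity in x holds because each layer is a convex, nondecreasing
    activation of (nonnegative combination of previous layer) + (affine in x).
    """

    def __init__(self, dim, hidden, rng):
        self.Wx0 = rng.standard_normal((hidden, dim))
        self.Wz1 = np.abs(rng.standard_normal((hidden, hidden)))  # kept >= 0
        self.Wx1 = rng.standard_normal((hidden, dim))
        self.Wz2 = np.abs(rng.standard_normal((1, hidden)))       # kept >= 0
        self.Wx2 = rng.standard_normal((1, dim))

    def __call__(self, x):
        z = softplus(self.Wx0 @ x)
        z = softplus(self.Wz1 @ z + self.Wx1 @ x)
        return (self.Wz2 @ z + self.Wx2 @ x).item()

rng = np.random.default_rng(0)
w = ToyICNN(dim=2, hidden=8, rng=rng)

# Numerical sanity check of convexity along a random segment:
x, y = rng.standard_normal(2), rng.standard_normal(2)
midpoint_gap = 0.5 * (w(x) + w(y)) - w(0.5 * (x + y))  # >= 0 if w is convex
```

The gradient map ∇w* needed for the pushforward would, in practice, be obtained from w via convex conjugation or automatic differentiation; that step is omitted here.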
- † CREST–ENSAE, Institut Polytechnique de Paris
