We’ve seen quite a few examples of unsupervised learning (or self-supervised learning, to choose the more correct but less popular term) on this blog.
Often, these involved Variational Autoencoders (VAEs), whose appeal lies in their allowing us to model a latent space of underlying, independent (ideally) factors that determine the visible features. A possible downside can be the inferior quality of generated samples. Generative Adversarial Networks (GANs) are another popular approach. Conceptually, these are highly attractive due to their game-theoretic framing. However, they can be difficult to train. PixelCNN variants, on the other hand – we’ll subsume them all here under PixelCNN – are generally known for their good results. They seem to involve some more alchemy though. Under those circumstances, what could be more welcome than an easy way of experimenting with them? Through TensorFlow Probability (TFP) and its R wrapper, tfprobability, we now have such a way.
This post first gives an introduction to PixelCNN, concentrating on high-level concepts (leaving the details for the curious to look them up in the respective papers). We’ll then show an example of using tfprobability to experiment with the TFP implementation.
PixelCNN principles
Autoregressivity, or: We need (some) order
The basic idea in PixelCNN is autoregressivity. Each pixel is modeled as depending on all prior pixels. Formally:
\[p(\mathbf{x}) = \prod_{i} p(x_i|x_0, x_1, \ldots, x_{i-1})\]
Now wait a second – what even are prior pixels? Last I looked, images were two-dimensional. So this means we have to impose an order on the pixels. Commonly this will be raster scan order: row after row, from left to right. But when dealing with color images, there’s something else: At each position, we actually have three intensity values, one for each of red, green, and blue. The original PixelCNN paper (Oord, Kalchbrenner, and Kavukcuoglu 2016) carried through autoregressivity here as well, with a pixel’s intensity for red depending on just prior pixels, that for green depending on those same prior pixels but additionally, the current value for red, and that for blue depending on the prior pixels as well as the current values for red and green.
\[p(x_i|\mathbf{x}_{<i}) = p(x_{i,R}|\mathbf{x}_{<i}) \cdot p(x_{i,G}|\mathbf{x}_{<i}, x_{i,R}) \cdot p(x_{i,B}|\mathbf{x}_{<i}, x_{i,R}, x_{i,G})\]
Here, the variant implemented in TFP, PixelCNN++ (Salimans et al. 2017), introduces a simplification; it factorizes the joint distribution in a less compute-intensive way.
Technically, then, we know how autoregressivity is realized; intuitively, it may still seem surprising that imposing a raster scan order “just works” (to me, at least, it does). Maybe this is one of those points where compute power successfully compensates for the lack of an equivalent of a cognitive prior.
Masking, or: Where not to look
Now, PixelCNN ends in “CNN” for a reason – as usual in image processing, convolutional layers (or blocks thereof) are involved. But – is it not the very nature of a convolution that it computes an average of some sort, looking, for each output pixel, not just at the corresponding input but also, at its spatial (or temporal) surroundings? How does that rhyme with the look-at-just-prior-pixels strategy?
Surprisingly, this problem is easier to solve than it sounds. When applying the convolutional kernel, just multiply with a
mask that zeroes out any “forbidden pixels” – like in this example for a 5×5 kernel, where we’re about to compute the
convolved value for row 3, column 3:
\[\left[\begin{array}
{rrrrr}
1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{array}\right]
\]
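To make this concrete, here is a minimal sketch in plain R (not code from TFP, whose masking happens inside its layers) that builds such a mask for an odd kernel size. Whether the center position itself is visible is what distinguishes the two mask types used in the original paper; the matrix above includes it. The helper name make_mask is made up for illustration:
make_mask <- function(k = 5, include_center = TRUE) {
  # start from an all-zero k x k kernel mask
  mask <- matrix(0, nrow = k, ncol = k)
  center <- ceiling(k / 2)
  # all rows strictly above the center row are visible
  mask[seq_len(center - 1), ] <- 1
  # in the center row, only pixels to the left of the center are visible
  mask[center, seq_len(center - 1)] <- 1
  # optionally, the center pixel itself
  if (include_center) mask[center, center] <- 1
  mask
}

make_mask(5)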
This keeps the algorithm honest, but introduces a different problem: With each successive convolutional layer consuming its predecessor’s output, there is a continually growing blind spot (so called in analogy to the blind spot on the retina, but located in the top right) of pixels that are never seen by the algorithm. Van den Oord et al. (2016) (Oord et al. 2016) fix this by using two different convolutional stacks, one proceeding from top to bottom, the other from left to right.
Conditioning, or: Show me a kitten
So far, we’ve always talked about “generating images” in a purely generic way. But the real appeal lies in creating samples of some specified type – one of the classes we’ve been training on, or orthogonal information fed into the network. This is where PixelCNN becomes Conditional PixelCNN (Oord et al. 2016), and it is also where that feeling of magic resurfaces.
Again, as “basic math” it’s not hard to conceive. Here, \(\mathbf{h}\) is the additional input we’re conditioning on:
\[p(\mathbf{x}|\mathbf{h}) = \prod_{i} p(x_i|x_0, x_1, \ldots, x_{i-1}, \mathbf{h})\]
But how does this translate into neural network operations? It’s just an additional matrix multiplication (\(V^T \mathbf{h}\)) added to the convolutional outputs (\(W \mathbf{x}\)).
\[\mathbf{y} = \tanh(W_{k,f}\,\mathbf{x} + V^T_{k,f}\,\mathbf{h}) \odot \sigma(W_{k,g}\,\mathbf{x} + V^T_{k,g}\,\mathbf{h})\]
(If you’re wondering about the second part on the right, after the Hadamard product sign – we won’t go into details here, but in a nutshell, it’s another modification introduced by (Oord et al. 2016), a transfer of the “gating” principle from recurrent neural networks, such as GRUs and LSTMs, to the convolutional setting.)
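Just to illustrate the mechanics (this is not TFP’s internal implementation; shapes and parameters are made up for the example), here is how that gated, conditioned activation could be computed on plain tensors, with x_conv_f and x_conv_g standing in for the masked-convolution outputs and h for a one-hot class vector:
library(tensorflow)

batch_size <- 4L
n_filters  <- 8L
n_classes  <- 20L

# stand-ins for the masked convolution outputs W_f * x and W_g * x
x_conv_f <- tf$random$normal(shape(batch_size, 28, 28, n_filters))
x_conv_g <- tf$random$normal(shape(batch_size, 28, 28, n_filters))

# conditioning information: one one-hot class label per sample
h <- tf$one_hot(tf$constant(c(3L, 0L, 7L, 12L)), depth = n_classes)

# V^T h: project the conditioning vector to one bias per filter,
# then broadcast over the spatial dimensions
v_f <- tf$random$normal(shape(n_classes, n_filters))
v_g <- tf$random$normal(shape(n_classes, n_filters))
cond_f <- tf$reshape(tf$matmul(h, v_f), shape(batch_size, 1L, 1L, n_filters))
cond_g <- tf$reshape(tf$matmul(h, v_g), shape(batch_size, 1L, 1L, n_filters))

# y = tanh(W_f x + V_f^T h) * sigmoid(W_g x + V_g^T h), elementwise
y <- tf$tanh(x_conv_f + cond_f) * tf$sigmoid(x_conv_g + cond_g)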
So we see what goes into the decision on a pixel value to sample. But how is that decision actually made?
Logistic mixture likelihood, or: No pixel is an island
Again, this is where the TFP implementation does not follow the original paper, but the later PixelCNN++ one. Originally, pixels were modeled as discrete values, chosen via a softmax over 256 (0-255) possible values. (That this actually worked seems like another instance of deep learning magic. Imagine: In this model, 254 is as far from 255 as it is from 0.)
In contrast, PixelCNN++ assumes an underlying continuous distribution of color intensity, and rounds to the nearest integer. That underlying distribution is a mixture of logistic distributions, thus allowing for multimodality:
\[\nu \sim \sum_{i} \pi_i \,\mathrm{logistic}(\mu_i, \sigma_i)\]
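This is not tied to PixelCNN specifically; as a standalone illustration (with made-up mixture weights, locations and scales), such a mixture can be put together from tfprobability’s own building blocks:
library(tfprobability)

# a two-component mixture of logistics over (continuous) intensity values
mix <- tfd_mixture_same_family(
  mixture_distribution = tfd_categorical(probs = c(0.3, 0.7)),
  components_distribution = tfd_logistic(loc = c(60, 190), scale = c(10, 25))
)

# draw a few samples and evaluate the log density at an intensity of 128
mix %>% tfd_sample(5)
mix %>% tfd_log_prob(128)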
Overall architecture and the PixelCNN distribution
Overall, PixelCNN++, as described in (Salimans et al. 2017), consists of six blocks. The blocks together make up a UNet-like structure, successively downsizing the input and then, upsampling again:

In TFP’s PixelCNN distribution, the number of blocks is configurable as num_hierarchies, the default being 3.
Each block consists of a customizable number of layers, called ResNet layers due to the residual connection (visible on the right) complementing the convolutional operations in the horizontal stack:

In TFP, the number of these layers per block is configurable as num_resnet.
num_resnet and num_hierarchies are the parameters you’re most likely to experiment with, but there are a few more you can check out in the documentation. The number of logistic distributions in the mixture is also configurable, but from my experiments it’s best to keep that number rather low to avoid producing NaNs during training.
Let’s now see a complete example.
End-to-end example
Our playground will be QuickDraw, a dataset – still growing – obtained by asking people to draw some object in at most twenty seconds, using the mouse. (To see for yourself, just check out the website). As of today, there are more than fifty million instances, from 345 different classes.
First and foremost, these data were chosen to take a break from MNIST and its variants. But just like those (and many more!), QuickDraw can be obtained, in tfdatasets-ready form, via tfds, the R wrapper to TensorFlow Datasets. In contrast to the MNIST “family” though, the “real samples” are themselves highly irregular, and often even missing essential parts. So to anchor judgment, when displaying generated samples we always show eight actual drawings with them.
Preparing the data
The dataset being gigantic, we instruct tfds to load the first 500,000 drawings “only.”
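A sketch of the setup and loading step could look as follows (quickdraw_bitmap is the bitmap variant of the dataset in the TensorFlow Datasets catalogue, and the split syntax restricts how many examples get loaded):
library(tensorflow)
library(tfdatasets)
library(tfds)
library(tfprobability)
library(keras)

# load the first 500,000 drawings from the bitmap version of QuickDraw
train_ds <- tfds_load("quickdraw_bitmap", split = "train[:500000]")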
To speed up training further, we then zoom in on twenty classes. This effectively leaves us with ~ 1,100 – 1,500 drawings per class.
# bee, bicycle, broccoli, butterfly, cactus,
# frog, guitar, lightning, penguin, pizza,
# rollerskates, sea turtle, sheep, snowflake, sun,
# swan, The Eiffel Tower, tractor, train, tree
classes <- c(26, 29, 43, 49, 50,
             125, 134, 172, 218, 225,
             246, 255, 258, 271, 295,
             296, 308, 320, 322, 323
)

classes_tensor <- tf$cast(classes, tf$int64)

train_ds <- train_ds %>%
  dataset_filter(
    function(record) tf$reduce_any(tf$equal(classes_tensor, record$label), -1L)
  )
The PixelCNN distribution expects values in the range from 0 to 255 – no normalization required. Preprocessing then consists of just casting pixels and labels each to float.
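A minimal sketch of that step, plus batching (the batch size and shuffle buffer are assumptions, not requirements of the model), could look like this:
preprocess <- function(record) {
  # the PixelCNN distribution works on raw 0-255 values, so we only cast
  record$image <- tf$cast(record$image, tf$float32)
  record$label <- tf$cast(record$label, tf$float32)
  # yield image and label together as a single tuple of inputs;
  # the loss is added to the model separately
  list(reticulate::tuple(record$image, record$label))
}

batch_size <- 32

train <- train_ds %>%
  dataset_map(preprocess) %>%
  dataset_shuffle(10000) %>%
  dataset_batch(batch_size)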
Creating the model
We now use tfd_pixel_cnn to define what will be the loglikelihood used by the model.
dist <- tfd_pixel_cnn(
  image_shape = c(28, 28, 1),
  conditional_shape = list(),
  num_resnet = 5,
  num_hierarchies = 3,
  num_filters = 128,
  num_logistic_mix = 5,
  dropout_p = .5
)

image_input <- layer_input(shape = c(28, 28, 1))
label_input <- layer_input(shape = list())
log_prob <- dist %>% tfd_log_prob(image_input, conditional_input = label_input)
This custom loglikelihood is added as a loss to the model, and then the model is compiled with just an optimizer specification. During training, loss decreased quickly at first, but improvements from later epochs were smaller.
model <- keras_model(inputs = list(image_input, label_input), outputs = log_prob)
model$add_loss(-tf$reduce_mean(log_prob))
model$compile(optimizer = optimizer_adam(lr = .001))

model %>% fit(train, epochs = 10)
To jointly display real and fake images:
for (i in classes) {

  real_images <- train_ds %>%
    dataset_filter(
      function(record) record$label == tf$cast(i, tf$int64)
    ) %>%
    dataset_take(8) %>%
    dataset_batch(8)
  it <- as_iterator(real_images)
  real_images <- iter_next(it)
  real_images <- real_images$image %>% as.array()
  real_images <- real_images[ , , , 1] / 255

  generated_images <- dist %>% tfd_sample(8, conditional_input = i)
  generated_images <- generated_images %>% as.array()
  generated_images <- generated_images[ , , , 1] / 255

  images <- abind::abind(real_images, generated_images, along = 1)
  png(paste0("draw_", i, ".png"), width = 8 * 28 * 10, height = 2 * 28 * 10)
  par(mfrow = c(2, 8), mar = c(0, 0, 0, 0))
  images %>%
    purrr::array_tree(1) %>%
    purrr::map(as.raster) %>%
    purrr::iwalk(~ plot(.x))
  dev.off()
}
From our twenty classes, here’s a choice of six, each showing actual drawings in the top row, and fake ones below.






We probably wouldn’t confuse the first and second rows, but then, the actual human drawings exhibit enormous variation, too. And no one ever said PixelCNN was an architecture for concept learning. Feel free to play around with other datasets of your choice – TFP’s PixelCNN distribution makes it easy.
Wrapping up
In this post, we had tfprobability / TFP do all the heavy lifting for us, and so, could focus on the underlying concepts. Depending on your inclinations, this can be an ideal situation – you don’t lose sight of the forest for the trees. On the other hand: Should you find that changing the provided parameters doesn’t achieve what you want, you have a reference implementation to start from. So whatever the outcome, the addition of such higher-level functionality to TFP is a win for the users. (If you’re a TFP developer reading this: Yes, we’d like more :-)).
To everyone though, thanks for reading!
Oord, Aaron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. “Pixel Recurrent Neural Networks.” CoRR abs/1601.06759.
Oord, Aaron van den, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. 2016. “Conditional Image Generation with PixelCNN Decoders.” CoRR abs/1606.05328.
Salimans, Tim, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. 2017. “PixelCNN++: A PixelCNN Implementation with Discretized Logistic Mixture Likelihood and Other Modifications.” In ICLR.
