Monday, May 11, 2026

CSPNet Paper Walkthrough: Simply Better, No Tradeoffs


Want to make a CNN-based model more lightweight? Just take the smaller version of that model, right? Like with ResNet, for instance: if ResNet-152 feels too heavy, why not simply use ResNet-101? Or in the case of DenseNet, why not go with DenseNet-121 rather than DenseNet-169? Yes, that works, but you would have to sacrifice some accuracy for it. Basically, if you want a lighter model, you should expect your accuracy to drop as well.

Now, what if I told you about a model that is more lightweight than its base but can still compete on accuracy? Meet CSPNet (Cross Stage Partial Network). You'll be surprised that it can effectively reduce computational complexity while maintaining high accuracy, with no tradeoff! In this article we are going to talk about the CSPNet architecture, including how it works and how to implement it from scratch.


A Brief History of CSPNet

CSPNet was first introduced in a paper titled "CSPNet: A New Backbone That Can Enhance Learning Capability of CNN" written by Wang et al. back in November 2019 [1]. CSPNet was originally proposed to address the limitations of DenseNet. Despite already being computationally cheaper than ResNet, the computation of DenseNet is, in the authors' view, still considered expensive. Take a look at the main building block of a DenseNet in Figure 1 below to understand why.

Figure 1. The main building block of a DenseNet model [2].

In a DenseNet building block, called a dense block, every convolution layer takes input from all previous layers, which causes a lot of redundant gradient information and makes training inefficient. We can think of it like a student taught by five different teachers on the same material. It is actually good in the sense that the student gets multiple perspectives on that specific topic. However, at some point it becomes redundant and thus inefficient. In the case of DenseNet, we can see the deeper layers as students and all the tensors from shallower layers as teachers. In the example above, if we treat H₄ as our student, then the x₀, x₁, x₂, and x₃ tensors act as the teachers. Here you can easily imagine how that student would get overwhelmed by all that information!
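If you prefer code to analogies, here is a minimal, self-contained sketch of dense connectivity. The layer sizes here are hypothetical and this is not the DenseBlock implementation used later in this article; it only illustrates that every layer consumes the concatenation of the input and all previous outputs.

import torch
import torch.nn as nn

# Hypothetical sketch of dense connectivity: each layer receives the
# concatenation of the original input and every earlier output.
growth = 12
layers = nn.ModuleList([
    nn.Conv2d(in_channels=32 + i * growth, out_channels=growth,
              kernel_size=3, padding=1)
    for i in range(4)
])

x0 = torch.randn(1, 32, 56, 56)
features = [x0]
for layer in layers:
    out = layer(torch.cat(features, dim=1))   # all earlier tensors act as input
    features.append(out)

print(torch.cat(features, dim=1).shape)       # torch.Size([1, 80, 56, 56])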

Before we get into CSPNet, note that I also have an entire separate article specifically about DenseNet (reference [3]), which I highly recommend you read if you want the full picture of how that architecture works.

Objectives

The objective of CSPNet is to give a network cheaper computational complexity and a richer gradient combination. The reason for the latter is that most gradient information in DenseNet consists of duplicates of one another. It is important to note that CSPNet is not a standalone network. Instead, it is a new paradigm we apply to DenseNet.

Now let's take a look at Figure 2 below to see how CSPNet achieves its objectives. You can see in the illustration on the left that the number of feature maps gradually increases as we get deeper into the network. If you have read my previous article about DenseNet, this is essentially something we control through the growth rate parameter, i.e., the number of feature maps produced by each convolution layer inside a dense block. In fact, this increase in the number of feature maps is what the authors see as the computational bottleneck.
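To put a number on that growth: if a dense block receives c₀ channels and uses growth rate k, then after ℓ convolution layers it holds c₀ + ℓ·k feature maps. With the configuration we will use later (32 input channels, k = 12, and 6 layers in the first stage), that is 32 + 6·12 = 104 channels, and every new layer has to convolve over all of them.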

Figure 2. Left: the original DenseNet building block (same as Figure 1). Right: the CSPNet version of the DenseNet building block (called CSPDenseNet) [1].

By applying the Cross Stage Partial mechanism, we can basically make the computation of a DenseNet cheaper. If we look at the illustration on the right, we can see that there is an additional branch coming out of x₀ that goes directly to the so-called partial transition layer. There are at least two advantages to this mechanism, in line with the objectives I mentioned earlier. First, we save a lot of computation since the number of feature maps processed by the dense block is only half of the original. Second, the gradient information becomes more diverse since we have an additional path with unprocessed feature maps that avoids the redundant gradient information. In short, the idea of CSPNet eliminates the computational redundancy of DenseNet (through the skip path) while at the same time still preserving its feature-reuse property (through the dense block).


The Detailed CSPNet Architecture

Getting into the details, the original feature map is first divided into two parts in a channel-wise manner, where each of them is processed along a different path. Suppose we have 64 input channels: the first 32 feature maps (part 1) will skip all computations, while the remaining 32 (part 2) will be processed by a dense block. Although this splitting step is pretty straightforward, the merging step is actually not quite trivial. You can see in Figure 3 below that there are several different mechanisms for doing so.
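If you want to see what this channel-wise split looks like in isolation, here is a tiny sketch using a dummy tensor. It is just an illustration (using torch.chunk), not the split_channels() method we will write later in this article.

import torch

# Split a dummy 64-channel feature map into part 1 and part 2 (32 channels each).
x = torch.randn(1, 64, 56, 56)
part1, part2 = torch.chunk(x, chunks=2, dim=1)
print(part1.shape, part2.shape)   # torch.Size([1, 32, 56, 56]) torch.Size([1, 32, 56, 56])

# Part 1 will skip the dense block, part 2 will go through it; the two branches
# are later merged back along the channel dimension.
merged = torch.cat((part1, part2), dim=1)
print(merged.shape)               # torch.Size([1, 64, 56, 56])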

Figure 3. Several different ways to perform feature fusion in CSPNet [1].

In the structure called fusion first (c), we concatenate the part 1 tensor with the part 2 tensor that has been processed by the dense block prior to passing them through the transition layer. Option (c) is actually quite easy to implement because the spatial dimensions of the two tensors are exactly the same, allowing us to concatenate them directly.

In my previous article [3], I mentioned that the transition layer of a DenseNet is used to reduce both the spatial dimension and the number of channels. This property forces us to rethink how to implement the fusion last (d) structure, essentially because the transition layer would cause the part 2 tensor to have a smaller spatial dimension than the part 1 tensor. So technically speaking, we need to either apply something like a pooling with a stride of 2 to the part 1 branch or simply omit the downsampling operation in the transition layer. By doing this, the spatial dimensions of the two tensors will match, and thus they become concatenable.

Instead of just using a single transition layer placed either before or after feature fusion, the authors also proposed another strategy which they refer to as CSPDenseNet (b). We can think of this as a combination of (c) and (d), where two transition layers are placed before and after the tensor concatenation. In this particular case, the first transition layer (the one placed in the part 2 branch) performs channel reduction through cross-channel pooling, i.e., a pooling operation that works across the channel dimension. Meanwhile, the second transition layer performs both spatial downsampling and channel count reduction. So basically, in this approach we reduce the number of channels twice. Well, at least that's what I understand from the paper about these two transition layers, since the detailed processes inside them are not explicitly discussed.
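To summarize how the variants differ, below is a rough sketch of the order of operations in each. The dense block and transition modules are passed in as arguments since they are only implemented later in this article; this reflects my reading of Figure 3, not code from the paper, and spatial sizes are assumed to match wherever tensors are concatenated.

import torch

def fusion_first(part1, part2, dense_block, transition):        # variant (c)
    fused = torch.cat((part1, dense_block(part2)), dim=1)
    return transition(fused)                                     # one transition, after the concat

def fusion_last(part1, part2, dense_block, transition):          # variant (d)
    out = transition(dense_block(part2))                         # one transition, before the concat
    return torch.cat((part1, out), dim=1)

def csp_dense(part1, part2, dense_block, transition1, transition2):  # variant (b), used in this article
    out = transition1(dense_block(part2))                        # cross-channel pooling only
    fused = torch.cat((part1, out), dim=1)
    return transition2(fused)                                    # channel compression + spatial downsampling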

Experimental Results

Regarding the experimental results for these feature fusion mechanisms, the paper explains that fusion last (d) is better than fusion first (c): the former significantly reduces computational complexity while suffering only a very slight drop in accuracy. Variant (c) also reduces computational complexity, but its accuracy degradation is significant. The authors found that variant (b) achieves an even better result than the other two. Figure 4 below displays several experimental results showing how the three feature fusion mechanisms performed compared to the base model. However, instead of using DenseNet, they somehow decided to use PeleeNet to compare these structures.

Figure 4. Performance comparison of the base PeleeNet (corresponding to (a) in Figure 3), CSPPeleeNet (b), PeleeNet with the fusion first strategy (c), and PeleeNet with the fusion last strategy (d) [1].

Based on the figure above, we can see that CSP fusion last (green) indeed performs better than CSP fusion first (red): its accuracy degrades by only 0.1% from the base model while having 21% lower computational complexity. Meanwhile, although CSP fusion first successfully reduces computational complexity by 26%, the accuracy drop is quite significant since it performs 1.5% worse than the base PeleeNet. The most impressive structure is the CSPPeleeNet variant (blue), i.e., the one that uses two transition layers. Here we can clearly see that although the computational complexity is reduced by 13%, the accuracy of the model actually improves by 0.2%. Again, no tradeoff!

Not only that, the authors also tried applying CSPNet to other backbone models. The results in Figure 5 below show that the CSPNet structure successfully reduces the computational complexity of DenseNet-201-Elastic and ResNeXt-50 by 19% and 22%, respectively. It is interesting to see that the accuracy of the ResNeXt model improves despite the reduction in model complexity, which is in line with the result obtained by CSPPeleeNet in Figure 4.

Figure 5. Performance improvement of DenseNet-201-Elastic and ResNeXt-50 after applying the CSPNet mechanism [1].

The Mathematical Expression of CSPDenseNet

For those of you who love math, here are some notations you might find interesting. Figures 6 and 7 below display the mathematical expressions of the DenseNet and CSPDenseNet blocks during the forward propagation phase.

In the DenseNet block, x₁ corresponds to the tensor produced by the first conv layer w₁ from the input tensor x₀. Next, we concatenate the original tensor x₀ with x₁ and use them as the input for the w₂ layer (or to be more precise, w is actually the weights of the conv layer, not the conv layer itself). We keep producing more feature maps and concatenating them with the existing ones as we go deeper into the network. In this way, we can basically say that the outputs of all previous layers become the input of the current layer.

Figure 6. The mathematical representation of the forward propagation inside a DenseNet block [1].
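For readers who cannot see the image, the equations in Figure 6 read roughly as follows, where ∗ denotes the convolution operation and [·] denotes channel-wise concatenation:

x₁ = w₁ ∗ x₀
x₂ = w₂ ∗ [x₀, x₁]
⋮
xₖ = wₖ ∗ [x₀, x₁, …, xₖ₋₁]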

The case is different for CSPDenseNet. You can see in the notation below that we have x₀' and x₀'', which we previously referred to as part 1 and part 2. The x₀'' tensor undergoes processing similar to that in the DenseNet block until we obtain xₖ. Next, the output of this dense block is forwarded to the first transition layer, denoted as wᴛ. The resulting tensor xᴛ is then concatenated with the part 1 tensor x₀' before eventually being passed through the second transition layer wᴜ to obtain the final output tensor xᴜ.

Figure 7. The mathematical expression of the forward propagation in a CSPDenseNet block [1].
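Again as a plain-text version of the figure, the CSPDenseNet forward pass reads roughly as:

xₖ = wₖ ∗ [x₀'', x₁, …, xₖ₋₁]
xᴛ = wᴛ ∗ [x₀'', x₁, …, xₖ]
xᴜ = wᴜ ∗ [x₀', xᴛ]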

CSPDenseNet Implementation

Now let's get even deeper into the CSPNet architecture by implementing it from scratch. Although we can basically apply the CSPNet structure to any backbone, here I am going to do so on the DenseNet model so that it matches the illustrations and equations I showed you earlier. Figure 8 below displays what the complete DenseNet architecture looks like. Just remember that every single dense block in this architecture originally follows the DenseNet structure in Figure 3 (a), and our goal here is to replace all those dense blocks with the CSPDenseNet block illustrated in Figure 3 (b).

Figure 8. The complete DenseNet architecture [2].

The first thing we do is import the required modules and initialize the configurable parameters as shown in Codeblock 1. The GROWTH variable is the growth rate parameter, which denotes the number of feature maps produced by each bottleneck inside the dense block. Next, CHANNEL_POOLING is the parameter we use to control the behavior of the channel-pooling mechanism in our first transition layer. Here I set this parameter to 0.8, meaning that we will shrink the number of channels to 80% of the original channel count. The COMPRESSION parameter works similarly to the CHANNEL_POOLING variable, except that this one operates in the second transition layer. Lastly, we define the REPEATS list, which sets the number of bottleneck blocks we will initialize inside the dense block of each stage.

# Codeblock 1
import torch
import torch.nn as nn

GROWTH          = 12
CHANNEL_POOLING = 0.8
COMPRESSION     = 0.5
REPEATS         = [6, 12, 24, 16]

Bottleneck Block Implementation

Below is the implementation of the bottleneck block to be placed inside the dense block. This Bottleneck class is exactly the same as the one I used in my DenseNet article [3]. I directly copy-pasted the code from there since we don't need to modify this part at all. Just keep in mind that a bottleneck block consists of a 1×1 convolution followed by a 3×3 convolution.

# Codeblock 2
class Bottleneck(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.2)
        
        self.bn0   = nn.BatchNorm2d(num_features=in_channels)
        self.conv0 = nn.Conv2d(in_channels=in_channels, 
                               out_channels=GROWTH*4,          
                               kernel_size=1, 
                               padding=0, 
                               bias=False)
        
        self.bn1   = nn.BatchNorm2d(num_features=GROWTH*4)
        self.conv1 = nn.Conv2d(in_channels=GROWTH*4, 
                               out_channels=GROWTH,            
                               kernel_size=3, 
                               padding=1, 
                               bias=False)
    
    def forward(self, x):
        print(f'original\t: {x.size()}')
        
        out = self.dropout(self.conv0(self.relu(self.bn0(x))))
        print(f'after conv0\t: {out.size()}')
        
        out = self.dropout(self.conv1(self.relu(self.bn1(out))))
        print(f'after conv1\t: {out.size()}')
        
        concatenated = torch.cat((out, x), dim=1)
        print(f'after concat\t: {concatenated.size()}')
        
        return concatenated

The following testing code simulates the first bottleneck block inside the dense block. Remember that the very first conv layer in the architecture (the one with the 7×7 kernel) produces 64 feature maps, but since in the case of CSPNet we only want to process half of them (the part 2 tensor), here we test it with a tensor of 32 feature maps.

# Codeblock 3
bottleneck = Bottleneck(in_channels=32)

x = torch.randn(1, 32, 56, 56)
x = bottleneck(x)
# Codeblock 3 Output
original     : torch.Size([1, 32, 56, 56])
after conv0  : torch.Size([1, 48, 56, 56])
after conv1  : torch.Size([1, 12, 56, 56])
after concat : torch.Size([1, 44, 56, 56])

You can see in the resulting output above that the number of feature maps becomes 44 at the end of the process, a number obtained by adding the input channel count and the growth rate, i.e., 32 + 12 = 44. Again, you can check out my DenseNet article [3] if you want a better understanding of this calculation.

Dense Block Implementation

Now, to create a sequence of bottleneck blocks easily, we can simply wrap them inside the DenseBlock class in Codeblock 4 below. Later on, we can just specify the number of bottleneck blocks to be stacked through the repeats parameter. Again, this class is also copy-pasted from my DenseNet article, so I'm not going to explain it any further.

# Codeblock 4
class DenseBlock(nn.Module):
    def __init__(self, in_channels, repeats):
        super().__init__()
        self.bottlenecks = nn.ModuleList()
        
        for i in range(repeats):
            current_in_channels = in_channels + i * GROWTH
            self.bottlenecks.append(Bottleneck(in_channels=current_in_channels))
        
    def forward(self, x):
        print(f'original\t\t\t: {x.size()}')
        
        for i, bottleneck in enumerate(self.bottlenecks):
            x = bottleneck(x)
            print(f'after bottleneck #{i}\t\t: {x.size()}')
            
        return x

In order to check whether our DenseBlock class works properly, we will test it using Codeblock 5 below. Here I am simulating the part 2 tensor being processed by the first dense block, which contains a sequence of 6 bottleneck blocks.

# Codeblock 5
dense_block = DenseBlock(in_channels=32, repeats=6)
x = torch.randn(1, 32, 56, 56)

x = dense_block(x)

And below is what the output looks like. Here we can clearly see that each bottleneck block successfully increases the number of feature maps by 12.

# Codeblock 5 Output
original             : torch.Size([1, 32, 56, 56])
after bottleneck #0  : torch.Size([1, 44, 56, 56])
after bottleneck #1  : torch.Size([1, 56, 56, 56])
after bottleneck #2  : torch.Size([1, 68, 56, 56])
after bottleneck #3  : torch.Size([1, 80, 56, 56])
after bottleneck #4  : torch.Size([1, 92, 56, 56])
after bottleneck #5  : torch.Size([1, 104, 56, 56])

First Transition

Remember that the CSPDenseNet variant in Figure 3 (b) uses two transition layers. In this section we focus on the first transition layer, i.e., the one used to process the tensor in the part 2 branch. Here we will not perform spatial downsampling, which is the reason why you don't see any pooling layer inside the __init__() method in Codeblock 6 below. Instead, we will only perform cross-channel pooling, which can be thought of as a typical pooling operation but done across the channel dimension. To implement it, we can simply use a 1×1 convolution (#(2)) and specify the number of output channels we want (#(1)). We can think of it like this: for spatial downsampling, we can use either pooling or a strided convolution layer, where in the latter case the pixel values from the local neighborhood are aggregated with specific weightings. In the case of cross-channel pooling, since there is no dedicated PyTorch layer for that, we can simply replace it with a pointwise convolution layer, which lets us aggregate pixel values across the channel dimension.

# Codeblock 6
class FirstTransition(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        
        self.bn   = nn.BatchNorm2d(num_features=in_channels)
        self.relu = nn.ReLU()
        self.conv = nn.Conv2d(in_channels=in_channels, 
                              out_channels=out_channels,   #(1)
                              kernel_size=1,               #(2)
                              padding=0,
                              bias=False)
        self.dropout = nn.Dropout(p=0.2)
     
    def forward(self, x):
        print(f'original\t\t: {x.size()}')
        
        out = self.dropout(self.conv(self.relu(self.bn(x))))
        print(f'after first_transition\t: {out.size()}')
        
        return out

The result given in the Codeblock 5 Output shows that the part 2 tensor will have a shape of 104×56×56 after being processed by the dense block. Thus, in the testing code below I use this tensor shape to simulate the first transition layer within that stage. To control the number of output channels, we simply multiply the input channel count by the CHANNEL_POOLING variable we initialized earlier, as shown at line #(1) in Codeblock 7 below.

# Codeblock 7
first_transition = FirstTransition(in_channels=104, 
                                   out_channels=int(104*CHANNEL_POOLING)) #(1)

x = torch.randn(1, 104, 56, 56)
x = first_transition(x)

Once the code above is run, we can see that the number of feature maps shrinks from 104 to 83, i.e., 80% of the original (int(104 × 0.8) = 83).

# Codeblock 7 Output
original                : torch.Size([1, 104, 56, 56])
after first_transition  : torch.Size([1, 83, 56, 56])

Second Transition

The structure of the second transition layer is pretty much the same as the first one, except that here we also have an average pooling layer with a stride of 2 to reduce the spatial dimension by half (#(1)).

# Codeblock 8
class SecondTransition(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        
        self.bn   = nn.BatchNorm2d(num_features=in_channels)
        self.relu = nn.ReLU()
        self.conv = nn.Conv2d(in_channels=in_channels, 
                              out_channels=out_channels, 
                              kernel_size=1, 
                              padding=0,
                              bias=False)
        self.dropout = nn.Dropout(p=0.2)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)    #(1)
     
    def forward(self, x):
        print(f'original\t\t: {x.size()}')

        out = self.pool(self.dropout(self.conv(self.relu(self.bn(x)))))
        print(f'after second_transition\t: {out.size()}')
        
        return out

Remember that the tensor entering the second transition layer is a concatenation of the part 1 and part 2 tensors. This is essentially the reason why in the testing code below I set this layer to accept 32 + 83 = 115 feature maps. Similar to the first transition layer, here we multiply this number of feature maps by the COMPRESSION variable (#(1)) to reduce the number of channels even further.

# Codeblock 9
second_transition = SecondTransition(in_channels=115, 
                                     out_channels=int(115*COMPRESSION))  #(1)

x = torch.randn(1, 115, 56, 56)
x = second_transition(x)

In the resulting output below we can see that the spatial dimension halves thanks to the average pooling layer. At the same time, the number of feature maps also decreases from 115 to 57 since we set the COMPRESSION parameter to 0.5.

# Codeblock 9 Output
original                : torch.Size([1, 115, 56, 56])
after second_transition : torch.Size([1, 57, 28, 28])

The CSPDenseNet Model

With all the components ready, we can now construct the entire CSPDenseNet architecture, which I break down into Codeblocks 10a, 10b, and 10c below. Let's focus on Codeblock 10a first, where I initialize all the layers according to the structure given in Figure 8. Here you can see at line #(1) that we initialize a 7×7 convolution layer, which acts as the input layer of the network. This layer is then followed by a max pooling layer (#(2)). These two layers each use a stride of 2, meaning that the spatial dimensions of the input tensor will be reduced to one-fourth of the original size.

# Codeblock 10a
class CSPDenseNet(nn.Module):
    def __init__(self):
        super().__init__()
        
        self.first_conv = nn.Conv2d(in_channels=3,         #(1)
                                    out_channels=64, 
                                    kernel_size=7,    
                                    stride=2,         
                                    padding=3,        
                                    bias=False)
        self.first_pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)  #(2)
        channel_count = 64
        
        
        
        ##### Stage 0
        self.dense_block_0 = DenseBlock(in_channels=channel_count//2, 
                                        repeats=REPEATS[0])
        
        self.first_transition_0 = FirstTransition(in_channels=(channel_count//2)+(REPEATS[0]*GROWTH), 
                                                  out_channels=int(((channel_count//2)+(REPEATS[0]*GROWTH))*CHANNEL_POOLING))
        
        channel_count = (channel_count - (channel_count//2)) + int(((channel_count//2)+(REPEATS[0]*GROWTH))*CHANNEL_POOLING)
        
        self.second_transition_0 = SecondTransition(in_channels=channel_count, 
                                                  out_channels=int(channel_count*COMPRESSION))
        
        channel_count = int(channel_count*COMPRESSION)
        #####
        
        
        ##### Stage 1
        self.dense_block_1 = DenseBlock(in_channels=channel_count//2, 
                                        repeats=REPEATS[1])
        
        self.first_transition_1 = FirstTransition(in_channels=(channel_count//2)+(REPEATS[1]*GROWTH), 
                                                  out_channels=int(((channel_count//2)+(REPEATS[1]*GROWTH))*CHANNEL_POOLING))
        
        channel_count = (channel_count - (channel_count//2)) + int(((channel_count//2)+(REPEATS[1]*GROWTH))*CHANNEL_POOLING)
        
        self.second_transition_1 = SecondTransition(in_channels=channel_count, 
                                                  out_channels=int(channel_count*COMPRESSION))
        
        channel_count = int(channel_count*COMPRESSION)
        #####
        
        
        ##### Stage 2
        self.dense_block_2 = DenseBlock(in_channels=channel_count//2, 
                                        repeats=REPEATS[2])
        
        self.first_transition_2 = FirstTransition(in_channels=(channel_count//2)+(REPEATS[2]*GROWTH), 
                                                  out_channels=int(((channel_count//2)+(REPEATS[2]*GROWTH))*CHANNEL_POOLING))
        
        channel_count = (channel_count - (channel_count//2)) + int(((channel_count//2)+(REPEATS[2]*GROWTH))*CHANNEL_POOLING)
        
        self.second_transition_2 = SecondTransition(in_channels=channel_count, 
                                                  out_channels=int(channel_count*COMPRESSION))
        
        channel_count = int(channel_count*COMPRESSION)
        #####
        
        
        ##### Stage 3
        self.dense_block_3 = DenseBlock(in_channels=channel_count//2, 
                                        repeats=REPEATS[3])
        
        self.first_transition_3 = FirstTransition(in_channels=(channel_count//2)+(REPEATS[3]*GROWTH), 
                                                  out_channels=int(((channel_count//2)+(REPEATS[3]*GROWTH))*CHANNEL_POOLING))
        
        channel_count = (channel_count - (channel_count//2)) + int(((channel_count//2)+(REPEATS[3]*GROWTH))*CHANNEL_POOLING)
        #####
        
        
        self.avgpool = nn.AdaptiveAvgPool2d(output_size=(1,1))             #(3)
        self.fc = nn.Linear(in_features=channel_count, out_features=1000)  #(4)

Still in the codeblock above, I group the layers based on the stage they belong to. Let's focus on the part I refer to as Stage 0. Here you can see that we have a dense block (dense_block_0) and the first transition layer (first_transition_0). These two components are responsible for processing the part 2 tensor. Next, we initialize the second transition layer (second_transition_0), which is used to process the concatenation of the part 1 and part 2 tensors. Since the channel count is dynamic depending on the GROWTH, CHANNEL_POOLING, COMPRESSION, and REPEATS variables, we need to keep track of it after each step so that the model can adaptively adjust itself according to these variables. We do the same thing for all the remaining stages, except that in Stage 3 we don't initialize the second transition layer since at that point we won't reduce the channels or the spatial dimension any further. Instead, we directly pass the concatenated part 1 and part 2 tensors to the average pooling (#(3)) and the classification (#(4)) layers. And that concludes our discussion of Codeblock 10a.
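Since the channel bookkeeping above is easy to lose track of, here is a small standalone sketch (not part of the model code) that reproduces the same arithmetic for a single stage, using the GROWTH, CHANNEL_POOLING, and COMPRESSION values from Codeblock 1:

# Reproduces the per-stage channel bookkeeping done in __init__() above.
def stage_channel_counts(channel_count, repeats):
    part2 = channel_count // 2                               # channels entering the dense block
    part1 = channel_count - part2                            # channels that skip it
    after_dense = part2 + repeats * GROWTH                   # each bottleneck adds GROWTH channels
    after_first_trans = int(after_dense * CHANNEL_POOLING)   # cross-channel pooling
    after_concat = part1 + after_first_trans                 # skip branch rejoins here
    after_second_trans = int(after_concat * COMPRESSION)     # channel compression
    return after_concat, after_second_trans

print(stage_channel_counts(64, REPEATS[0]))   # Stage 0: expected (115, 57)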

Before we get into the forward() method, there is another function we need to create: split_channels(). As the name suggests, this function, written in Codeblock 10b below, is used to split a tensor into part 1 and part 2. The if-else statement here checks whether the number of channels is odd or even. It is very easy when the channel count is an even number, since we can just divide it into two (#(4)). But if the channel count is odd, we need to manually determine the size of each part, as seen at lines #(1) and #(2), before eventually splitting them (#(3)).

# Codeblock 10b
    def split_channels(self, x):

        channel_count = x.size(1)

        if channel_count % 2 != 0:
            split_size_2 = channel_count // 2            #(1)
            split_size_1 = channel_count - split_size_2  #(2)
            return torch.split(x, [split_size_1, split_size_2], dim=1)  #(3)

        else:
            return torch.split(x, channel_count // 2, dim=1)            #(4)

Having finished defining the __init__() and split_channels() methods, we can now implement the forward() method in Codeblock 10c below. Generally speaking, what we do here is simply pass the tensor through the layers sequentially. But let's pay attention to the part I refer to as Stage 0. Here you can see that after the tensor is passed through the first_pool layer (#(1)), we split it into two using the split_channels() function we declared earlier (#(2)). From there, we obtain the part1 and part2 tensors. We leave the part1 tensor as is all the way to the end of the stage. Meanwhile, the part2 tensor is processed by the dense block (#(3)) and the first transition layer (#(4)). Next, we concatenate the resulting tensor with the part1 tensor to create the skip connection (#(5)). After that, we finally pass it through the second transition layer (#(6)). The same steps are repeated for all stages until we eventually reach the output layer to make the classification. Just remember that Stage 3 is slightly different because there we don't have the second transition layer.

# Codeblock 10c
    def forward(self, x):
        print(f'original\t\t\t: {x.size()}')
        
        x = self.first_conv(x)
        print(f'after first_conv\t\t: {x.size()}')
        
        x = self.first_pool(x)      #(1)
        print(f'after first_pool\t\t: {x.size()}\n')
        
        
        
        ##### Stage 0
        part1, part2 = self.split_channels(x)    #(2)
        print(f'part1\t\t\t\t: {part1.size()}')
        print(f'part2\t\t\t\t: {part2.size()}')
        
        part2 = self.dense_block_0(part2)        #(3)
        print(f'part2 after dense block 0\t: {part2.size()}')
        
        part2 = self.first_transition_0(part2)   #(4)
        print(f'part2 after first trans 0\t: {part2.size()}')
        
        x = torch.cat((part1, part2), dim=1)     #(5)
        print(f'after concatenate\t\t: {x.size()}')
        
        x = self.second_transition_0(x)          #(6)
        print(f'after second transition 0\t: {x.size()}\n')
        
        
        
        ##### Stage 1
        part1, part2 = self.split_channels(x)
        print(f'part1\t\t\t\t: {part1.size()}')
        print(f'part2\t\t\t\t: {part2.size()}')
        
        part2 = self.dense_block_1(part2)
        print(f'part2 after dense block 1\t: {part2.size()}')
        
        part2 = self.first_transition_1(part2)
        print(f'part2 after first trans 1\t: {part2.size()}')
        
        x = torch.cat((part1, part2), dim=1)
        print(f'after concatenate\t\t: {x.size()}')
        
        x = self.second_transition_1(x)
        print(f'after second transition 1\t: {x.size()}\n')
        
        
        
        ##### Stage 2
        part1, part2 = self.split_channels(x)
        print(f'part1\t\t\t\t: {part1.size()}')
        print(f'part2\t\t\t\t: {part2.size()}')
        
        part2 = self.dense_block_2(part2)
        print(f'part2 after dense block 2\t: {part2.size()}')
        
        part2 = self.first_transition_2(part2)
        print(f'part2 after first trans 2\t: {part2.size()}')
        
        x = torch.cat((part1, part2), dim=1)
        print(f'after concatenate\t\t: {x.size()}')
        
        x = self.second_transition_2(x)
        print(f'after second transition 2\t: {x.size()}\n')
        
        
        
        ##### Stage 3
        part1, part2 = self.split_channels(x)
        print(f'part1\t\t\t\t: {part1.size()}')
        print(f'part2\t\t\t\t: {part2.size()}')
        
        part2 = self.dense_block_3(part2)
        print(f'part2 after dense block 3\t: {part2.size()}')
        
        part2 = self.first_transition_3(part2)
        print(f'part2 after first trans 3\t: {part2.size()}')
        
        x = torch.cat((part1, part2), dim=1)
        print(f'after concatenate\t\t: {x.size()}\n')
        
        
        
        x = self.avgpool(x)
        print(f'after avgpool\t\t\t: {x.size()}')
        
        x = torch.flatten(x, start_dim=1)
        print(f'after flatten\t\t\t: {x.size()}')
        
        x = self.fc(x)
        print(f'after fc\t\t\t: {x.size()}')
        
        return x

Now let's test the CSPDenseNet class we just created by running Codeblock 11 below. Here I use a dummy tensor of shape 3×224×224 to simulate a 224×224 RGB image being passed through the network.

# Codeblock 11
cspdensenet = CSPDenseNet()

x = torch.randn(1, 3, 224, 224)
x = cspdensenet(x)

And below is what the output looks like. Here you can see that every time a tensor enters a stage, our split_channels() method correctly divides it into two (#(1–2)). Then, each bottleneck block inside every stage correctly increases the number of channels of the part 2 tensor by 12 before it is eventually passed through the first transition layer. The first transition layer itself successfully reduces the number of channels by 20%, as seen at line #(3), simulating the cross-channel pooling mechanism. Afterwards, the resulting tensor is concatenated with the tensor from part 1 (#(4)) and passed through the second transition layer (#(5)) to further reduce the number of channels and halve the spatial dimension. We do the same thing for all stages until we eventually obtain the 1000-class prediction.

# Codeblock 11 Output
original                  : torch.Size([1, 3, 224, 224])
after first_conv          : torch.Size([1, 64, 112, 112])
after first_pool          : torch.Size([1, 64, 56, 56])

part1                     : torch.Size([1, 32, 56, 56])    #(1)
part2                     : torch.Size([1, 32, 56, 56])    #(2)
after bottleneck #0       : torch.Size([1, 44, 56, 56])
after bottleneck #1       : torch.Size([1, 56, 56, 56])
after bottleneck #2       : torch.Size([1, 68, 56, 56])
after bottleneck #3       : torch.Size([1, 80, 56, 56])
after bottleneck #4       : torch.Size([1, 92, 56, 56])
after bottleneck #5       : torch.Size([1, 104, 56, 56])
part2 after dense block 0 : torch.Size([1, 104, 56, 56])
part2 after first trans 0 : torch.Size([1, 83, 56, 56])    #(3)
after concatenate         : torch.Size([1, 115, 56, 56])   #(4)
after second transition 0 : torch.Size([1, 57, 28, 28])    #(5)

part1                     : torch.Size([1, 29, 28, 28])
part2                     : torch.Size([1, 28, 28, 28])
after bottleneck #0       : torch.Size([1, 40, 28, 28])
after bottleneck #1       : torch.Size([1, 52, 28, 28])
after bottleneck #2       : torch.Size([1, 64, 28, 28])
after bottleneck #3       : torch.Size([1, 76, 28, 28])
after bottleneck #4       : torch.Size([1, 88, 28, 28])
after bottleneck #5       : torch.Size([1, 100, 28, 28])
after bottleneck #6       : torch.Size([1, 112, 28, 28])
after bottleneck #7       : torch.Size([1, 124, 28, 28])
after bottleneck #8       : torch.Size([1, 136, 28, 28])
after bottleneck #9       : torch.Size([1, 148, 28, 28])
after bottleneck #10      : torch.Size([1, 160, 28, 28])
after bottleneck #11      : torch.Size([1, 172, 28, 28])
part2 after dense block 1 : torch.Size([1, 172, 28, 28])
part2 after first trans 1 : torch.Size([1, 137, 28, 28])
after concatenate         : torch.Size([1, 166, 28, 28])
after second transition 1 : torch.Size([1, 83, 14, 14])

part1                     : torch.Size([1, 42, 14, 14])
part2                     : torch.Size([1, 41, 14, 14])
after bottleneck #0       : torch.Size([1, 53, 14, 14])
after bottleneck #1       : torch.Size([1, 65, 14, 14])
after bottleneck #2       : torch.Size([1, 77, 14, 14])
after bottleneck #3       : torch.Size([1, 89, 14, 14])
after bottleneck #4       : torch.Size([1, 101, 14, 14])
after bottleneck #5       : torch.Size([1, 113, 14, 14])
after bottleneck #6       : torch.Size([1, 125, 14, 14])
after bottleneck #7       : torch.Size([1, 137, 14, 14])
after bottleneck #8       : torch.Size([1, 149, 14, 14])
after bottleneck #9       : torch.Size([1, 161, 14, 14])
after bottleneck #10      : torch.Size([1, 173, 14, 14])
after bottleneck #11      : torch.Size([1, 185, 14, 14])
after bottleneck #12      : torch.Size([1, 197, 14, 14])
after bottleneck #13      : torch.Size([1, 209, 14, 14])
after bottleneck #14      : torch.Size([1, 221, 14, 14])
after bottleneck #15      : torch.Size([1, 233, 14, 14])
after bottleneck #16      : torch.Size([1, 245, 14, 14])
after bottleneck #17      : torch.Size([1, 257, 14, 14])
after bottleneck #18      : torch.Size([1, 269, 14, 14])
after bottleneck #19      : torch.Size([1, 281, 14, 14])
after bottleneck #20      : torch.Size([1, 293, 14, 14])
after bottleneck #21      : torch.Size([1, 305, 14, 14])
after bottleneck #22      : torch.Size([1, 317, 14, 14])
after bottleneck #23      : torch.Size([1, 329, 14, 14])
part2 after dense block 2 : torch.Size([1, 329, 14, 14])
part2 after first trans 2 : torch.Size([1, 263, 14, 14])
after concatenate         : torch.Size([1, 305, 14, 14])
after second transition 2 : torch.Size([1, 152, 7, 7])

part1                     : torch.Size([1, 76, 7, 7])
part2                     : torch.Size([1, 76, 7, 7])
after bottleneck #0       : torch.Size([1, 88, 7, 7])
after bottleneck #1       : torch.Size([1, 100, 7, 7])
after bottleneck #2       : torch.Size([1, 112, 7, 7])
after bottleneck #3       : torch.Size([1, 124, 7, 7])
after bottleneck #4       : torch.Size([1, 136, 7, 7])
after bottleneck #5       : torch.Size([1, 148, 7, 7])
after bottleneck #6       : torch.Size([1, 160, 7, 7])
after bottleneck #7       : torch.Size([1, 172, 7, 7])
after bottleneck #8       : torch.Size([1, 184, 7, 7])
after bottleneck #9       : torch.Size([1, 196, 7, 7])
after bottleneck #10      : torch.Size([1, 208, 7, 7])
after bottleneck #11      : torch.Size([1, 220, 7, 7])
after bottleneck #12      : torch.Size([1, 232, 7, 7])
after bottleneck #13      : torch.Size([1, 244, 7, 7])
after bottleneck #14      : torch.Size([1, 256, 7, 7])
after bottleneck #15      : torch.Size([1, 268, 7, 7])
part2 after dense block 3 : torch.Size([1, 268, 7, 7])
part2 after first trans 3 : torch.Size([1, 214, 7, 7])
after concatenate         : torch.Size([1, 290, 7, 7])

after avgpool             : torch.Size([1, 290, 1, 1])
after flatten             : torch.Size([1, 290])
after fc                  : torch.Size([1, 1000])
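As an extra sanity check that is not part of the original walkthrough, you can also count the trainable parameters of the cspdensenet instance we just tested:

# Optional: count the trainable parameters of the model built in Codeblock 11.
num_params = sum(p.numel() for p in cspdensenet.parameters() if p.requires_grad)
print(f'trainable parameters: {num_params:,}')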

Ending

And that's it! We have successfully learned about CSPNet and implemented it on a DenseNet backbone. As I mentioned earlier, we can actually use the idea of CSPNet to improve the performance of other backbone models such as ResNet or ResNeXt. So here I challenge you to implement CSPNet on those models from scratch.

To be honest, I cannot confirm that my implementation is 100% correct since the official GitHub repo [4] of the paper does not provide a PyTorch implementation, but this is at least everything I understand from the manuscript. Please let me know if you find any mistake in the code or in my explanations. Thanks for reading, and see you again in my next article. Bye!

By the way, you can also find the code used in this article in my GitHub repo [5].


References

[1] Chien-Yao Wang et al. CSPNet: A New Backbone That Can Enhance Learning Capability of CNN. arXiv. https://arxiv.org/abs/1911.11929 [Accessed October 1, 2025].

[2] Gao Huang et al. Densely Connected Convolutional Networks. arXiv. https://arxiv.org/abs/1608.06993 [Accessed September 18, 2025].

[3] Muhammad Ardi. DenseNet Paper Walkthrough: All Connected. Towards Data Science. https://towardsdatascience.com/densenet-paper-walkthrough-all-connected/ [Accessed April 26, 2026].

[4] WongKinYiu. CrossStagePartialNetworks. GitHub. https://github.com/WongKinYiu/CrossStagePartialNetworks [Accessed October 1, 2025].

[5] MuhammadArdiPutra. CSPNet. GitHub. https://github.com/MuhammadArdiPutra/medium_articles/blob/main/DenseNet.ipynb [Accessed October 1, 2025].
