The output of a convolutional layer is generally passed through the ReLU activation function to introduce non-linearity into the model. ReLU takes the feature map and replaces all negative values with zero. A VGG block stacks several 3x3 convolutions, each padded by one pixel to preserve the spatial size of the output.
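As a concrete illustration, here is a minimal sketch of a VGG-style block in PyTorch. The class name, channel counts, and the trailing 2x2 max-pooling layer (standard in the original VGG design, though not mentioned above) are illustrative assumptions, not taken from the text:

```python
import torch
import torch.nn as nn

class VGGBlock(nn.Module):
    """Sketch of a VGG-style block: stacked 3x3 convolutions
    (padding=1 keeps height and width unchanged), each followed
    by ReLU, then an assumed 2x2 max-pooling layer."""

    def __init__(self, in_channels: int, out_channels: int, num_convs: int = 2):
        super().__init__()
        layers = []
        for _ in range(num_convs):
            # A 3x3 kernel with padding=1 preserves the spatial size.
            layers.append(nn.Conv2d(in_channels, out_channels,
                                    kernel_size=3, padding=1))
            # ReLU replaces every negative activation with zero.
            layers.append(nn.ReLU(inplace=True))
            in_channels = out_channels
        # Assumption: VGG blocks conventionally end with 2x2 max pooling,
        # which halves the spatial dimensions.
        layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.block = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# Usage: a 3x64x64 input keeps its 64x64 size through the padded
# convolutions; only the pooling step reduces it to 32x32.
x = torch.randn(1, 3, 64, 64)
out = VGGBlock(in_channels=3, out_channels=64)(x)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

The padding of one on each side exactly offsets the one-pixel shrinkage a 3x3 kernel would otherwise cause, which is why these blocks can be stacked deeply without the feature maps collapsing.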