Convolutional Neural Networks


9. Convolutional Neural Networks

Learning outcomes

  • Describe the main elements of a Convolutional Neural Network (CNN)
  • Compute the convolution between a filter and an image
  • Assemble an architecture for a CNN

Why Convnets?

  • Convolutional neural networks were introduced to exploit the spatial relationships in the data
  • In a plain multi-layer network there is no relationship between the input variables – but in computer vision the position of a pixel is important: pixels in one area form a pattern together, and you need that to recognize, say, the pattern of a wheel in that area
    • So if I swap the pixel positions, it won’t look like a wheel anymore
  • This is why CNNs are used for the classification of images
  • The order matters – the 2D position of pixels is exactly what a convolutional neural network takes into account

What is the convolution?

  • Convolution operates on two signals, i.e. two functions: some function f(x) and a filter g(x). We slide g(x) over f(x), and point by point the two are multiplied and the product is integrated.
  • The important characteristics of convolution to remember:
    1. One function is a filter and is slid across the other function
    2. The two functions are then multiplied and integrated
  • Outcome = the integral of filter × function as the filter is slid over the whole domain of the function
  • The main operations are sliding, multiplication, and integration – and since we work with a finite number of samples, the integral becomes a summation
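
The sliding-multiply-sum idea can be written out directly in one dimension. Below is a minimal sketch in NumPy (the signal and filter values are illustrative, not from the notes):

```python
import numpy as np

# Minimal 1D discrete convolution: slide the (flipped) filter g over f,
# multiply point by point, and sum. With finite samples the integral is a sum.
f = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # the function / signal
g = np.array([1.0, 0.0, -1.0])                       # the filter

print(np.convolve(f, g, mode="valid"))               # NumPy's built-in convolution

# The same thing computed explicitly (note the filter flip in true convolution):
manual = [np.sum(f[i:i + len(g)] * g[::-1]) for i in range(len(f) - len(g) + 1)]
print(np.array(manual))
```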

Filter application

  • We start with an image, which is a 7x7 matrix of pixels – black-and-white pixels, so each value is 0 or 1.
  • We define a filter - a 3x3 matrix
  • The filter is applied to every 3x3 sub-matrix of the image, so in this case the output is 5x5

  • Steps
  1. We take the filter, multiply it element by element with that sub-matrix of the image, sum all the products, and fill this number into the output matrix. This number is the activation of the filter on that part of the image.

  2. And you keep sliding the filter and doing it again. In the places where a 1 in the filter overlaps a 1 in the image, the product 1*1 contributes 1 to the sum.

    • The number will be higher where the sub-matrix is the same as the filter – the filter is very activated in this part of the image. Multiplication pixel by pixel followed by summation is a dot product of the two! This can be implemented by a neuron; the difference is that you have to slide over the image, so you have to give the neuron a different chunk of the image each time.
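
As a concrete sketch of the sliding step above, here is the same operation in plain NumPy (the image and filter values are illustrative, not the ones from the slide):

```python
import numpy as np

# Slide a 3x3 filter over a 7x7 binary image and record the dot product at
# every position -> a 5x5 activation map.
image = np.random.randint(0, 2, size=(7, 7))   # black-and-white pixels, 0 or 1
filt = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]])

out = image.shape[0] - filt.shape[0] + 1       # 7 - 3 + 1 = 5
activation = np.zeros((out, out))
for i in range(out):
    for j in range(out):
        # element-wise multiplication of the 3x3 sub-matrix with the filter, then sum
        activation[i, j] = np.sum(image[i:i + 3, j:j + 3] * filt)

print(activation.shape)                        # (5, 5)
```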

Filter application II

  • The filter has to fit inside the image, so the size of the image constrains where the filter can be placed – some of the image at the edges gets less coverage.
  • You end up with an activation map that is smaller than the image – you lose information about the edges and the corners. To avoid that, we add padding.

Padding

  • So we add numbers around the border – most common is to pad with zeros, but it is also common to repeat whatever is at the edge of the image
  • This way the image doesn’t downsize just by applying filters
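
A minimal sketch of both padding styles, using NumPy's np.pad (the 7x7 image is just an example):

```python
import numpy as np

# With a 3x3 filter, one ring of padding keeps the activation map at 7x7.
image = np.random.randint(0, 2, size=(7, 7))

zero_padded = np.pad(image, pad_width=1, mode="constant", constant_values=0)  # pad with zeros
edge_padded = np.pad(image, pad_width=1, mode="edge")                         # repeat the edge pixels

print(zero_padded.shape)   # (9, 9) -> a 3x3 filter now gives a 7x7 activation map
```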

Stride

  • There is no single correct size of filter – it is one of many parameters you can change to make the network more effective.
    • A smaller filter will recognize smaller features.
    • But it’s a hard choice, because different architectures use different filter sizes to be able to catch both smaller and larger features
  • Another thing you can change is the stride – you can move the filter by more than 1 pixel at a time.
    • This downsamples the image
    • A larger stride gives an activation map that is smaller than the original image (see the size rule sketched below)
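
Filter size, padding, and stride together determine the size of the activation map. A small sketch of the usual size rule (this formula is standard practice, though not spelled out in the notes):

```python
# Output size for input size n, filter size f, padding p, and stride s.
def output_size(n, f, p=0, s=1):
    return (n - f + 2 * p) // s + 1

print(output_size(7, 3))            # 5 -> 7x7 image, 3x3 filter, no padding
print(output_size(7, 3, p=1))       # 7 -> padding keeps the original size
print(output_size(7, 3, p=1, s=2))  # 4 -> stride 2 downsamples
```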

Question

  • What if we apply this filter to this image?
  • We overlay the filter on the original image and multiply element by element – wherever the 1s overlap is where the count goes up. In total we get an output matrix whose size equals the number of positions where the filter fits in the image, and we fill in the number for each position.

Images and filters

  • The effect of this convolution is that for each filter we create an activation layer
  • Take a 32x32x3 image with 3 channels – RGB.
  • The filter is a smaller matrix with the same depth as the image, e.g. 5x5x3.
  • With a 3D filter the operation stays the same, but we do it on each channel and sum the results
  • Each filter produces an activation layer of depth 1. We do this with a bunch of filters and get a stack of activation layers – 1 layer per filter per convolution
  • In this example we use 10 filters, which produce a volume of activation layers of size 32x32x10.
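
A minimal sketch of this in PyTorch (the library choice and the padding value are my assumptions; padding of 2 is what keeps the 32x32 spatial size with a 5x5 filter):

```python
import torch
import torch.nn as nn

# 10 filters of size 5x5x3 applied to a 32x32 RGB image.
conv = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=5, padding=2)

image = torch.randn(1, 3, 32, 32)   # a batch containing one RGB image
activations = conv(image)
print(activations.shape)            # torch.Size([1, 10, 32, 32]) -> 10 activation layers of 32x32
```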

Architectures

  • We keep doing this, stacking layer upon layer – this is the hierarchical part of deep learning
  • The building blocks are the CONV, RELU, and POOL layers
  • Later filters act on the activations of earlier filters, which makes the detected patterns more complex
  • We can create units that contain some CONV, some RELU and some POOL, and put them together in any way that works for your data set (see the sketch after this list)
    • RELU – sets whatever is below zero to 0.
    • POOL – alongside the convolutions, you need pooling layers to do abstraction
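
A minimal sketch of one such unit in PyTorch (layer sizes are illustrative):

```python
import torch.nn as nn

# One CONV + RELU + POOL unit; such units can be repeated and combined
# in whatever way works for the data set.
unit = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=10, kernel_size=5, padding=2),  # CONV
    nn.ReLU(),                                                            # RELU: negatives -> 0
    nn.MaxPool2d(kernel_size=2),                                          # POOL: 2x2 downsampling
)
```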

Rectified Linear Units (RELU)

  • We don’t start with filters of a specific shape – we let the algorithm figure them out. Some activations will end up negative, and RELU fixes this: it makes sure that the output of the convolutional layer is non-negative.
    • RELU(x) = 0 if x < 0 – negative values are forced to 0
    • RELU(x) = x if x >= 0
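
A one-line sketch in NumPy:

```python
import numpy as np

# RELU: negative activations are forced to 0, non-negative ones pass through.
def relu(x):
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))   # [0.  0.  0.  1.5 3. ]
```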

Pooling

  • Every once in a while, we want to summarize what the previous layers did and downsample – to do abstraction
  • You need to generalize – and generalization is done through pooling
  • Take some subset of the image and summarize it with one number – the most common way is MAX POOL: you take the largest activation as the summary.
    • With a 2x2 pool, we take each 2x2 chunk and summarize it with its largest activation.
  • It tells us that whatever we are looking for is likely to be at the bottom of the image – since that region has the highest activation.
    • So we lose the exact position information of the image, but we know that there is something there
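
A minimal sketch of 2x2 max pooling in NumPy (the activation values are made up for illustration):

```python
import numpy as np

# Each 2x2 chunk of the activation map is summarized by its largest activation.
activation = np.array([[1, 0, 2, 3],
                       [4, 6, 6, 8],
                       [3, 1, 1, 0],
                       [1, 2, 2, 4]])

pooled = activation.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6 8]
#  [3 4]]
```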

From convolutional to MLP

  • If we do this multiple times, we go from a large and flat image to something that is narrow and deep
  • Each convolution gives new activation layers and each pooling makes them smaller – more convolutions make it deeper, pooling makes it smaller – until we get one single long vector = a summary of the image
    • Resolution is reduced while depth increases!
    • By the end we get many very small activation layers, which summarise all the spatial information.
  • This vector is the input to a multi-layer, dense neural network that is basically a normal multilayer perceptron (see the sketch after this list)
  • What we’ve done with the initial chunks of layers is converge the filters to something we consider important.
    • E.g. important = something that looks like eyes; image regions that contain eyes are marked as important by the convolutions
  • Deep learning – because we are creating a bunch of layers.
  • Dense multilayer neural network – every neuron is connected to every neuron in the next layer.
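
A minimal end-to-end sketch in PyTorch (all layer sizes are illustrative): repeated CONV + RELU + POOL units reduce resolution while increasing depth, then everything is flattened into one long vector and fed to a dense MLP:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8x8 -> 4x4
    nn.Flatten(),                        # 64 x 4 x 4 -> one long vector of 1024 values
    nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
    nn.Linear(128, 10),                  # e.g. 10 output classes
)

x = torch.randn(1, 3, 32, 32)
print(model(x).shape)                    # torch.Size([1, 10])
```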

Features

  • Convolutions at different layers create a hierarchy of features
  • First layers – they normally learn low-level features like edges and corners
  • Middle layers – these low-level features are composed into various shapes
  • Last layers – then you can recognize pieces of objects, mimicking the object you want to represent

Architectures

  • AlexNet (2012) – instead of features engineered by ML engineers, the network learns them itself; about 60 million parameters, with the volume reduced step by step through strides and max pooling layers
  • VGG16 (2014) – a popular architecture with 138 million parameters that takes the input down to 1x1x1000; a combination of convolution + ReLU, max pooling, fully connected + ReLU, and softmax layers.
  • Inception by Google (2015) – ridiculously many layers; they created a unit of convolutional layers with filters of various sizes, and all of their outputs are put together as one volume. They repeat the unit several times to create a deep network

  • The carbon footprint of training ML models on parallel clusters of computers is ridiculous. To avoid it, use transfer learning.

    • Transfer learning – basically, cut the network and keep the low-level features. Low-level features (edges and corners) won’t change much if you change the objects, so you only re-train the last layers with your own dataset of objects.
      • Borrow the first few layers and only re-train the last layers (the higher-level, finer stuff) – see the sketch below.
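
A minimal sketch of this with torchvision (the library, the pretrained weights, and the number of classes are assumptions for illustration):

```python
import torch.nn as nn
from torchvision import models

# Borrow a pretrained VGG16: freeze the low-level layers, replace and
# re-train only the final layer for your own classes.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)   # pretrained on ImageNet

for param in model.parameters():                    # freeze the borrowed layers
    param.requires_grad = False

num_classes = 5                                     # hypothetical: classes in your dataset
model.classifier[6] = nn.Linear(4096, num_classes)  # new last layer; only this gets trained
```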
