What is zero mean and unit variance in terms of image data?

I am new to deep learning and trying to understand some concepts. I know the “mean” is an average value and the “variance” measures deviation from the mean. I have read some research papers, and they all say that we pre-process our data first. But how are these concepts related to image pre-processing? Why are they used to pre-process image data?
I am unable to understand how these techniques contribute towards classification. I searched on Google, but maybe my keywords were not descriptive enough.
Please bear with my very basic question.

Answer

This is a very good question, and you need to understand it to gain more insight into deep learning.

Basically, you have raw images; let's take one image. This image has 3 channels, and in each channel the pixel values range from 0 to 255.

Our goal here is to squash the range of values for all the pixels in the three channels into a much smaller range. This is where preprocessing comes in.
But don't think preprocessing only involves the mean and standard-deviation techniques; there are many others, like PCA and whitening.

1) Using mean:
By computing the mean of, say, the first red pixel value across all the training images, you get the average red value present at that position across the training set. Do this for every position in the red, green, and blue channels, and you end up with an average image over all the training images.
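A minimal sketch of this step in NumPy, using a small synthetic training set (the array shapes and sizes here are illustrative assumptions, not from the original post):

```python
import numpy as np

# Hypothetical training set: 100 RGB images of size 32x32,
# with raw pixel values in [0, 255].
rng = np.random.default_rng(0)
train_images = rng.integers(0, 256, size=(100, 32, 32, 3)).astype(np.float32)

# Average over the batch axis: mean_image[h, w, c] is the average value
# at that pixel position and channel across all 100 training images.
mean_image = train_images.mean(axis=0)

print(mean_image.shape)  # (32, 32, 3) -- one "average image"
```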

Now, if you subtract this mean image from every training image, you obviously transform the pixel values. The images are no longer interpretable to the human eye; the pixel values now range from negative to positive, with the mean at zero.

2) If you then also divide by the standard deviation, you essentially squash the earlier pixel value range into a small one.
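Putting both steps together, a sketch of the full zero-mean, unit-variance transform (again on a synthetic training set; the shapes and the small epsilon are assumptions for illustration):

```python
import numpy as np

# Hypothetical training set: 100 RGB images of size 32x32.
rng = np.random.default_rng(0)
train_images = rng.integers(0, 256, size=(100, 32, 32, 3)).astype(np.float32)

# Per-pixel statistics over the training set.
mean_image = train_images.mean(axis=0)
std_image = train_images.std(axis=0) + 1e-8  # epsilon guards against division by zero

# Step 1: subtract the mean -> values centered around zero.
# Step 2: divide by the std  -> values squashed to unit variance.
normalized = (train_images - mean_image) / std_image

print(normalized.mean())  # approximately 0
print(normalized.std())   # approximately 1
```

The same `mean_image` and `std_image` computed on the training set are then reused to normalize validation and test images, so all splits see the same transform.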

BUT WHY ALL THIS?
I will say from my experience that doing this preprocessing on the images before giving them to the classifier makes the model train faster and perform better. That's why.

Since you are into deep learning, look into batch normalization once you understand this normalization concept.
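For a taste of the connection: batch normalization applies the same idea (subtract the mean, divide by the standard deviation) inside the network, per mini-batch, with learnable scale and shift. A minimal NumPy sketch of the core computation (function name, shapes, and constants are illustrative assumptions):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the current mini-batch,
    then apply a learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                 # per-feature batch mean
    var = x.var(axis=0)                 # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A mini-batch of 64 examples with 10 features, deliberately
# off-center (mean 5, std 3) to show the effect.
rng = np.random.default_rng(1)
batch = rng.normal(loc=5.0, scale=3.0, size=(64, 10))
out = batch_norm(batch)

print(out.mean(), out.std())  # approximately 0 and 1
```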

Attribution
Source: Link, Question Author: Rafay Zia Mir, Answer Author: Sreekar Nimbalkar