
How OpenCV’s blobFromImage works?

Last Updated : 02 Jun, 2022

In this article, we'll try to understand how the blobFromImage function from the dnn module in the OpenCV library works and when you should use it.

blobFromImage() function

It returns a 4-dimensional array (a "blob") for the input image. You can also use it to preprocess your image to match your model's input requirements. Its parameters let you transform the image, so let's discuss each of them:

Syntax: blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(width, height), mean=(0, 0, 0), swapRB=False)

Parameters: 

  • image – the image that we want to preprocess (for our model).
  • scalefactor – multiplies (scales) each pixel value of the image. For example, passing 1/255 rescales intensities from the 0–255 range down to 0–1.
  • size – the target size we want the image resized to. Many common CNNs take 224×224, 227×227 or 299×299 pixel inputs, but you can set it to whatever your model requires.
  • mean – the mean value(s) to subtract. You can pass a single value or a 3-tuple (one per channel); it is subtracted from each channel to normalize the pixel values, and the subtraction happens before scaling. Note: the mean argument expects the values in RGB order.
  • swapRB – OpenCV reads images in BGR order by default, but the mean argument (and many models) expect RGB, so this flag swaps the red and blue channels to prevent a mismatch. Its default value is False, so set it to True when your model expects RGB input.

Example 1: Scaling and Matching the Target size

Most of the time your model expects pixel values scaled to the 0–1 range instead of 0–255; this is where the scalefactor argument comes in. We can also change the target size of the image:

Python3
import cv2
import numpy as np
 
# change the image path to your image
image = cv2.imread('ad.png')
 
# let's say this is the required size
size = (640, 720)
 
# let's print the image pixel values
print(image)
print(f'Image Shape : {np.array(image).shape}')
 
blob = cv2.dnn.blobFromImage(image,
                             scalefactor=1/255,
                             size=size,
                             swapRB=True)
 
# let's see our transformed image- blob
print(blob)
print(f'Blob Shape : {np.array(blob).shape}')


Output: 

[[[255 255 255] 
  [255 255 255] 
  [255 255 255] 
  ...
  ...
  [255 255 255]
  [255 255 255]
  [255 255 255]]]
Image Shape : (817, 861, 3)
[[[[1. 1. 1. ... 1. 1. 1.]
   [1. 1. 1. ... 1. 1. 1.]
   [1. 1. 1. ... 1. 1. 1.]
   ...

   ...
   [1. 1. 1. ... 1. 1. 1.]
   [1. 1. 1. ... 1. 1. 1.]
   [1. 1. 1. ... 1. 1. 1.]]]]
Blob Shape : (1, 3, 720, 640)

As you can see, every value that was 255 at the start got converted to 1, because our scale factor was 1/255. Our final blob also has a height of 720 and a width of 640, which is what we wanted (note the blob's NCHW layout: batch, channels, height, width).

Example 2: Using it with a model to create the input image

The scalefactor, target size, and mean values you should use all depend on the model. If you're using a pre-trained model from a framework, read its documentation for the input image requirements.

For example, if I were creating a gender classifier program and decided to use a Caffe model for it, then:

# ----- Model files -----
genderProto = "Models/gender_deploy.prototxt"
genderModel = "Models/gender_net.caffemodel"

# ----- Blob variables -----
scalefactor = 1.0
MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)  # mean values
size = (227, 227)

# Classes in our model --------> required for predictions
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']

These are all the values required to use that model, so we create our blob:

blob = cv2.dnn.blobFromImage(face, scalefactor=scalefactor, size=size, mean=MODEL_MEAN_VALUES, swapRB=True)

This blob can now be passed directly as input to our model.
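With the real model files you would load the network with cv2.dnn.readNet(genderModel, genderProto), call net.setInput(blob) and then net.forward(), and pick the class with the highest score. Since the .caffemodel files aren't bundled here, the sketch below stands in a hypothetical score array for the forward-pass output, just to show that last step:

```python
import numpy as np

genderList = ['Male', 'Female']

# Hypothetical stand-in for net.forward(): a Caffe classifier
# returns one confidence score per class for each input image
genderPreds = np.array([[0.15, 0.85]])

# The predicted class is simply the index of the highest score
gender = genderList[genderPreds[0].argmax()]
print(gender)  # Female
```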

And this is the basic process if you’re using a pre-trained model and want to transform your image to fit the model requirements.

What next?

Try to use pre-trained models from frameworks like TensorFlow and PyTorch to create computer vision programs.


