Vehicle Number Plate Detection using a CNN Model, Python, and a Flask API…

Manasvi Agarwal
Sep 7, 2021 · 10 min read


In this task, we will:

👉Create a model that will detect a car in a live stream or video and recognize characters on the number plate of the car.

👉Secondly, it will use the characters and fetch the owner’s information using RTO APIs.

👉Create a Web portal where all this information will be displayed (using HTML, CSS, and JS).

Step 1: (Vehicle’s Number Plate Detection)

import numpy as np
import cv2
import matplotlib.pyplot as plt

# detecting the license plate on the vehicle
plateCascade = cv2.CascadeClassifier('indian_license_plate.xml')

def plate_detect(img):
    plateImg = img.copy()
    roi = img.copy()
    plateRect = plateCascade.detectMultiScale(plateImg, scaleFactor=1.2, minNeighbors=7)
    for (x, y, w, h) in plateRect:
        roi_ = roi[y:y+h, x:x+w, :]
        plate_part = roi[y:y+h, x:x+w, :]
        cv2.rectangle(plateImg, (x+2, y), (x+w-3, y+h-5), (0, 255, 0), 3)
    return plateImg, plate_part

The following tasks are performed in the above code:

  1. First, the required libraries are imported: NumPy, cv2 (OpenCV), and matplotlib.
  2. A CascadeClassifier is loaded to detect the vehicle's number-plate region. Cascade classifiers are used to detect a particular feature or region inside an image; the feature here is the number plate of a vehicle.
  3. Cascade classifiers are trained with several hundred "positive" sample views of a particular object and arbitrary "negative" images of the same size. After the classifier is trained, it can be applied to a region of an image to detect the object.
  4. The plate_detect() function detects the vehicle's number plate, marks it with a green rectangle, crops the plate region, and returns both the annotated image and the crop.
  5. Its output is then displayed with display_img(), defined in the next step. Note that plate_part is only assigned inside the loop, so the function fails when no plate is detected; a safer variant is sketched right after this list.
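One caveat worth noting (my addition, not in the original code): if detectMultiScale finds no plate in the frame, plate_part is never assigned and the return statement raises an UnboundLocalError. A minimal defensive variant, matching how the prediction.py version later in this post handles it:

def plate_detect(img):
    plateImg = img.copy()
    roi = img.copy()
    plate_part = np.array([])   # empty fallback so the return never fails
    plateRect = plateCascade.detectMultiScale(plateImg, scaleFactor=1.2, minNeighbors=7)
    for (x, y, w, h) in plateRect:
        plate_part = roi[y:y+h, x:x+w, :]   # crop the plate region
        cv2.rectangle(plateImg, (x+2, y), (x+w-3, y+h-5), (0, 255, 0), 3)
    return plateImg, plate_part

Callers can then check len(plate_part) > 0 before segmenting, exactly as runVideo() does further below.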

Step 2: (Displaying the Image)

def display_img(img):
    img_ = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.imshow(img_)
    plt.show()

inputImg = cv2.imread('car.jpg')
inpImg, plate = plate_detect(inputImg)
display_img(inpImg)
display_img(plate)

The following tasks will be performed in the above code:

  1. The above code displays an image on screen.
  2. The display_img() function takes an image, converts it from BGR to RGB (OpenCV reads images in BGR order, while matplotlib expects RGB), and shows it using matplotlib.
  3. We then read an image called car.jpg, call plate_detect() on it, and display both the annotated car image and the cropped plate image.

Step 3: (Finding Character Contours)

def find_contours(dimensions, img):
    # find all contours in the image
    # retrieval mode: RETR_TREE
    # contour approximation method: CHAIN_APPROX_SIMPLE
    cntrs, _ = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    # approximate size bounds of the character contours
    lower_width = dimensions[0]
    upper_width = dimensions[1]
    lower_height = dimensions[2]
    upper_height = dimensions[3]

    # check the 15 largest contours for license plate characters
    cntrs = sorted(cntrs, key=cv2.contourArea, reverse=True)[:15]

    ci = cv2.imread('contour.jpg')

    x_cntr_list = []
    target_contours = []
    img_res = []
    for cntr in cntrs:
        # bounding rectangle of the contour in the binary image
        intX, intY, intWidth, intHeight = cv2.boundingRect(cntr)

        # filter out the characters by the contour's size
        if intWidth > lower_width and intWidth < upper_width and intHeight > lower_height and intHeight < upper_height:
            x_cntr_list.append(intX)
            char_copy = np.zeros((44,24))
            # extract each character using the enclosing rectangle's coordinates
            char = img[intY:intY+intHeight, intX:intX+intWidth]
            char = cv2.resize(char, (20, 40))
            cv2.rectangle(ci, (intX,intY), (intWidth+intX, intY+intHeight), (50,21,200), 2)
            plt.imshow(ci, cmap='gray')
            char = cv2.subtract(255, char)   # invert: white characters on black
            char_copy[2:42, 2:22] = char     # center the 20x40 character in a 24x44 frame
            char_copy[0:2, :] = 0
            char_copy[:, 0:2] = 0
            char_copy[42:44, :] = 0
            char_copy[:, 22:24] = 0
            img_res.append(char_copy)        # stores the character's binary image (unsorted)

    plt.show()
    # return the characters in ascending order of their x-coordinate
    indices = sorted(range(len(x_cntr_list)), key=lambda k: x_cntr_list[k])
    img_res_copy = []
    for idx in indices:
        img_res_copy.append(img_res[idx])    # store the character images in left-to-right order
    img_res = np.array(img_res_copy)
    return img_res

Step 4: (Segmentation of the Image's Characters)

def segment_characters(image):
    # pre-process the cropped plate image
    # threshold: convert to pure black & white with sharp edges
    # erode: grow the black background
    # dilate: grow the white characters
    img_lp = cv2.resize(image, (333, 75))
    img_gray_lp = cv2.cvtColor(img_lp, cv2.COLOR_BGR2GRAY)
    _, img_binary_lp = cv2.threshold(img_gray_lp, 200, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img_binary_lp = cv2.erode(img_binary_lp, (3,3))
    img_binary_lp = cv2.dilate(img_binary_lp, (3,3))
    LP_WIDTH = img_binary_lp.shape[0]
    LP_HEIGHT = img_binary_lp.shape[1]
    # whiten the 3-pixel border so edge artifacts are not picked up as contours
    img_binary_lp[0:3,:] = 255
    img_binary_lp[:,0:3] = 255
    img_binary_lp[72:75,:] = 255
    img_binary_lp[:,330:333] = 255
    # estimated size bounds of character contours in the cropped plate
    dimensions = [LP_WIDTH/6,
                  LP_WIDTH/2,
                  LP_HEIGHT/10,
                  2*LP_HEIGHT/3]
    plt.imshow(img_binary_lp, cmap='gray')
    plt.show()
    cv2.imwrite('contour.jpg', img_binary_lp)
    # get the character contours
    char_list = find_contours(dimensions, img_binary_lp)
    return char_list


char = segment_characters(plate)

# show the first 10 segmented characters
for i in range(10):
    plt.subplot(1, 10, i+1)
    plt.imshow(char[i], cmap='gray')
    plt.axis('off')

This part performs the preprocessing of the image, i.e. the vehicle's number plate.
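To make the size filter concrete: after resizing the plate to 333x75, img_binary_lp.shape is (75, 333) (rows, columns), so LP_WIDTH and LP_HEIGHT in the code above actually pick up the height and width values respectively. The resulting bounds still work out sensibly, which a quick check shows:

# quick sanity check of the contour size bounds (not part of the pipeline)
LP_WIDTH, LP_HEIGHT = 75, 333    # shape[0], shape[1] of the resized plate
dimensions = [LP_WIDTH/6, LP_WIDTH/2, LP_HEIGHT/10, 2*LP_HEIGHT/3]
print(dimensions)   # [12.5, 37.5, 33.3, 222.0]
# a contour is kept as a character only if its bounding box is
# 12.5-37.5 px wide and 33.3-222 px tall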

Step 5: (Image Augmentation and the Accuracy Metric)

import keras.backend as K
import tensorflow as tf
from sklearn.metrics import f1_score
from keras import optimizers
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Flatten, MaxPooling2D, Dropout, Conv2D

train_datagen = ImageDataGenerator(rescale=1./255, width_shift_range=0.1, height_shift_range=0.1)
path = 'data/data/'
train_generator = train_datagen.flow_from_directory(
    path+'/train',
    target_size=(28, 28),
    batch_size=1,
    class_mode='sparse')
validation_generator = train_datagen.flow_from_directory(
    path+'/val',
    target_size=(28, 28),
    class_mode='sparse')

# F1 score: the harmonic mean of precision and recall
# output range is [0, 1]
# works for both multi-class and multi-label classification
def f1score(y, y_pred):
    return f1_score(y, tf.math.argmax(y_pred, axis=1), average='micro')

def custom_f1score(y, y_pred):
    return tf.py_function(f1score, (y, y_pred), tf.double)
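For flow_from_directory above to yield 36 sparse class labels, the dataset is assumed to be laid out as one sub-folder per character class (the exact folder names below are an assumption on my part; adjust them to your dataset):

data/data/
  train/
    0/  1/  ...  9/  A/  B/  ...  Z/    (36 class folders)
  val/
    0/  1/  ...  9/  A/  B/  ...  Z/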

Now it's time to load the training and validation datasets and create the model. The data is the character-image dataset, preprocessed in the same way as in the previous steps.

Step 6: (Creating and Training the Model)

K.clear_session()
model = Sequential()
model.add(Conv2D(16, (22,22), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (16,16), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (8,8), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (4,4), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(36, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.0001), metrics=[custom_f1score])

# stop training early once the validation F1 score crosses 0.99
class stop_training_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('val_custom_f1score') > 0.99:
            self.model.stop_training = True

batch_size = 1
callbacks = [stop_training_callback()]
model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // batch_size,
    validation_data=validation_generator,
    epochs=80, verbose=1, callbacks=callbacks)
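To see how the shapes flow through this network, note that padding='same' keeps the spatial size at 28x28 through all four convolutions; a quick way to verify is model.summary():

model.summary()
# Conv2D x4 ('same' padding): spatial size stays 28x28; channels go 16 -> 32 -> 64 -> 64
# MaxPooling2D(4, 4):         (28, 28, 64) -> (7, 7, 64)
# Flatten:                    7 * 7 * 64 = 3136 units
# Dense(128) -> Dense(36):    one softmax output per character class (0-9, A-Z)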

Step 7: (Predicting the Plate Number)

def fix_dimension(img):
    # stack the grayscale character into 3 identical channels for the CNN input
    new_img = np.zeros((28,28,3))
    for i in range(3):
        new_img[:,:,i] = img
    return new_img

def show_results():
    dic = {}
    characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    for i,c in enumerate(characters):
        dic[i] = c
    output = []
    # iterate over the segmented characters from Step 4 (the global char array)
    for i,ch in enumerate(char):
        img_ = cv2.resize(ch, (28,28), interpolation=cv2.INTER_AREA)
        img = fix_dimension(img_)
        img = img.reshape(1,28,28,3)
        y_ = model.predict_classes(img)[0]   # predicted class index
        character = dic[y_]                  # map the index back to its character
        output.append(character)

    plate_number = ''.join(output)
    return plate_number

final_plate = show_results()
print(final_plate)

Step 8: (Getting the Vehicle Owner's Information)

import requests
import xmltodict
import json

def get_vehicle_info(plate_number):
    r = requests.get("http://www.regcheck.org.uk/api/reg.asmx/CheckIndia?RegistrationNumber={0}&username=mama".format(str(plate_number)))
    data = xmltodict.parse(r.content)   # XML response -> dict
    jdata = json.dumps(data)
    df = json.loads(jdata)
    # the vehicle details arrive as a JSON string inside the response
    df1 = json.loads(df['Vehicle']['vehicleJson'])
    return df1

get_vehicle_info(final_plate)
model.save('license_plate_character.pkl')
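Despite the .pkl extension, model.save() writes a standard Keras model file, and because the model was compiled with a custom metric it must be reloaded with custom_objects, as the Flask code below does:

# minimal reload sketch; custom_f1score must be importable where this runs
from tensorflow.keras import models
model = models.load_model('license_plate_character.pkl',
                          custom_objects={'custom_f1score': custom_f1score})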

Now the last part is to test the model, using the API to fetch the vehicle's information.

Flask Web App

from preprocess import plate_detect, find_contours, segment_characters, f1score, custom_f1score, fix_dimension, get_vehicle_info, plate_info
import matplotlib.pyplot as plt
from tensorflow.keras import models
import tensorflow.keras.backend as K
import cv2
import numpy as np
import warnings
warnings.filterwarnings("ignore")

# import all preprocess functions, then load the trained model
model = models.load_model('license_plate_character.pkl', custom_objects={
    'custom_f1score': custom_f1score})

def fix_dimension(img):
    new_img = np.zeros((28, 28, 3))
    for i in range(3):
        new_img[:, :, i] = img
    return new_img

def show_results(pl_char):
    dic = {}
    characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    for i, c in enumerate(characters):
        dic[i] = c
    output = []
    for i, ch in enumerate(pl_char):
        img_ = cv2.resize(ch, (28, 28), interpolation=cv2.INTER_AREA)
        img = fix_dimension(img_)
        img = img.reshape(1, 28, 28, 3)
        y_ = model.predict_classes(img)[0]
        character = dic[y_]
        output.append(character)
    plate_number = ''.join(output)
    return plate_number

def runVideo(vpath):
    # license_video.mp4 has to be yours; I haven't uploaded mine for privacy reasons
    cam = cv2.VideoCapture(vpath)
    if not cam.isOpened():
        print("Video not imported")
    plate_list = []
    info_list = []
    while cam.isOpened():
        ret, frame = cam.read()
        if ret == True:
            car_plate, plate_img = plate_detect(frame)
            # cv2.imshow("License Video", car_plate)
            if len(plate_img) > 0:
                plate_char = segment_characters(plate_img)
                # print(plate_char)
                number_plate = show_results(plate_char)
                if number_plate not in plate_list:
                    final_result = plate_info(number_plate)
                    if final_result != None:
                        plate_list.append(number_plate)
                        info_list.append(final_result)
                        break   # stop once one valid plate has been resolved
            # print(final_result)
            if cv2.waitKey(1) == 27:   # Esc key
                break
        else:
            break
    cam.release()
    cv2.destroyAllWindows()
    return info_list[0]
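A quick standalone usage sketch (the video path here is a placeholder; supply your own recording):

# hypothetical local test; 'uploads/license_video.mp4' is a placeholder path
info = runVideo('uploads/license_video.mp4')
print(info['CarMake']['CurrentTextValue'], info['CarModel']['CurrentTextValue'])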

Now we will convert the above Deep Learning code into a Flask WebApp.

Code for Getting Vehicle’s Owner Info (prediction.py)

import cv2
import tensorflow as tf
from sklearn.metrics import f1_score
import requests
import xmltodict
import json
import numpy as np
import re
import warnings
warnings.filterwarnings("ignore")

plateCascade = cv2.CascadeClassifier('indian_license_plate.xml')

def plate_detect(img):
    plateImg = img.copy()
    roi = img.copy()
    plate_part = np.array([])   # empty fallback if no plate is detected
    plateRect = plateCascade.detectMultiScale(
        plateImg, scaleFactor=1.2, minNeighbors=7)
    for (x, y, w, h) in plateRect:
        roi_ = roi[y:y+h, x:x+w, :]
        plate_part = roi[y:y+h, x:x+w, :]
        cv2.rectangle(plateImg, (x+2, y), (x+w-3, y+h-5), (0, 255, 0), 3)
    return plateImg, plate_part

def find_contours(dimensions, img):
    # find all contours in the image
    # retrieval mode: RETR_TREE; approximation method: CHAIN_APPROX_SIMPLE
    cntrs, _ = cv2.findContours(
        img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # approximate size bounds of the character contours
    lower_width = dimensions[0]
    upper_width = dimensions[1]
    lower_height = dimensions[2]
    upper_height = dimensions[3]
    # check the 15 largest contours for license plate characters
    cntrs = sorted(cntrs, key=cv2.contourArea, reverse=True)[:15]
    ci = cv2.imread('contour.jpg')
    x_cntr_list = []
    target_contours = []
    img_res = []
    for cntr in cntrs:
        # bounding rectangle of the contour in the binary image
        intX, intY, intWidth, intHeight = cv2.boundingRect(cntr)
        # filter out the characters by the contour's size
        if intWidth > lower_width and intWidth < upper_width and intHeight > lower_height and intHeight < upper_height:
            x_cntr_list.append(intX)
            char_copy = np.zeros((44, 24))
            # extract each character using the enclosing rectangle's coordinates
            char = img[intY:intY+intHeight, intX:intX+intWidth]
            char = cv2.resize(char, (20, 40))
            cv2.rectangle(ci, (intX, intY), (intWidth+intX,
                                             intY+intHeight), (50, 21, 200), 2)
            # plt.imshow(ci, cmap='gray')
            char = cv2.subtract(255, char)   # invert: white characters on black
            char_copy[2:42, 2:22] = char
            char_copy[0:2, :] = 0
            char_copy[:, 0:2] = 0
            char_copy[42:44, :] = 0
            char_copy[:, 22:24] = 0
            # list that stores the character's binary images (unsorted)
            img_res.append(char_copy)
    # return the characters in ascending order of their x-coordinate
    # plt.show()
    indices = sorted(range(len(x_cntr_list)), key=lambda k: x_cntr_list[k])
    img_res_copy = []
    for idx in indices:
        # store the character images in left-to-right order
        img_res_copy.append(img_res[idx])
    img_res = np.array(img_res_copy)
    return img_res

def segment_characters(image):
    # pre-process the cropped plate image
    # threshold: convert to pure black & white with sharp edges
    # erode: grow the black background; dilate: grow the white characters
    img_lp = cv2.resize(image, (333, 75))
    img_gray_lp = cv2.cvtColor(img_lp, cv2.COLOR_BGR2GRAY)
    _, img_binary_lp = cv2.threshold(
        img_gray_lp, 200, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img_binary_lp = cv2.erode(img_binary_lp, (3, 3))
    img_binary_lp = cv2.dilate(img_binary_lp, (3, 3))
    LP_WIDTH = img_binary_lp.shape[0]
    LP_HEIGHT = img_binary_lp.shape[1]
    img_binary_lp[0:3, :] = 255
    img_binary_lp[:, 0:3] = 255
    img_binary_lp[72:75, :] = 255
    img_binary_lp[:, 330:333] = 255
    # estimated size bounds of character contours in the cropped plate
    dimensions = [LP_WIDTH/6,
                  LP_WIDTH/2,
                  LP_HEIGHT/10,
                  2*LP_HEIGHT/3]
    # plt.imshow(img_binary_lp, cmap='gray')
    # plt.show()
    cv2.imwrite('contour.jpg', img_binary_lp)
    # get the character contours
    char_list = find_contours(dimensions, img_binary_lp)
    return char_list

def f1score(y, y_pred):
    return f1_score(y, tf.math.argmax(y_pred, axis=1), average='micro')

def custom_f1score(y, y_pred):
    return tf.py_function(f1score, (y, y_pred), tf.double)

def fix_dimension(img):
    new_img = np.zeros((28, 28, 3))
    for i in range(3):
        new_img[:, :, i] = img
    return new_img

def get_vehicle_info(plate_number):
    r = requests.get(
        "http://www.regcheck.org.uk/api/reg.asmx/CheckIndia?RegistrationNumber={0}&username=tom123".format(str(plate_number)))
    data = xmltodict.parse(r.content)
    jdata = json.dumps(data)
    df = json.loads(jdata)
    df1 = json.loads(df['Vehicle']['vehicleJson'])
    return df1

def plate_info(numberPlate):
    # Indian plate format: 2 letters, 1-2 digits, an optional letter series, 4 digits
    pattern = '^[A-Z]{2}[0-9]{1,2}([A-Z])?([A-Z]*)?[0-9]{4}$'
    if len(numberPlate) > 10:
        # keep only the last 10 characters if extra ones were detected
        numberPlate = numberPlate[-10:]
        return get_vehicle_info(numberPlate)
    elif re.match(pattern, numberPlate) != None:
        return get_vehicle_info(numberPlate)
    else:
        return None
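As a quick sanity check of the pattern (the plate strings here are made-up examples):

import re
pattern = '^[A-Z]{2}[0-9]{1,2}([A-Z])?([A-Z]*)?[0-9]{4}$'
print(bool(re.match(pattern, 'DL8CAF5030')))   # True: state code, district number, series, 4 digits
print(bool(re.match(pattern, '8CAF5030')))     # False: missing the two state-code letters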

Now, this is the main app.py file for Flask:

from prediction import runVideo
import glob
import os
from flask import Flask, render_template, request, redirect, url_for

app = Flask(__name__)
app.secret_key = "detectform"

ALLOWED_EXTENSIONS = {'mp4'}

def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

path = os.getcwd()
UPLOAD_FOLDER = os.path.join(path, 'uploads')
if not os.path.isdir(UPLOAD_FOLDER):
    os.mkdir(UPLOAD_FOLDER)

# empty the uploads folder at startup
BASE_DIR = os.getcwd()
dir = os.path.join(BASE_DIR, "uploads")
for root, dirs, files in os.walk(dir):
    for file in files:
        path = os.path.join(dir, file)
        os.remove(path)

app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

# MAIN APP
# Home Page
@app.route('/')
def index():
    return render_template('index.html')

# Prediction - Vehicle Details Page
@app.route('/cardetails', methods=['POST'])
def upload_file():
    global videoPath
    uploaded_file = request.files['file']
    if uploaded_file.filename != '':
        # os.path.join keeps the path portable (the original hard-coded "\\")
        videoPath = os.path.join(UPLOAD_FOLDER, uploaded_file.filename)
        uploaded_file.save(videoPath)
    vehInfo = runVideo(videoPath)
    print(vehInfo)
    # EMPTY UPLOAD FOLDER
    BASE_DIR = os.getcwd()
    dir = os.path.join(BASE_DIR, "uploads")
    for root, dirs, files in os.walk(dir):
        for file in files:
            path = os.path.join(dir, file)
            os.remove(path)
    carDesc = vehInfo["CarMake"]["CurrentTextValue"]
    carModel = vehInfo["CarModel"]["CurrentTextValue"]
    return render_template("carDetails.html", carDesc=carDesc, carModel=carModel, vehInfo=vehInfo)

The functionality of this code is to:

  1. Get the HTML file index.html and render it.
  2. Check the extension of the uploaded file via allowed_file() (only mp4 files are whitelisted). Note that the route as written never actually calls this helper; a sketch of wiring it in follows this list.
  3. Finally, render the carDetails.html file, which uses the Jinja2 template engine, and pass the vehicle information to it.
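A minimal sketch of actually enforcing that check inside upload_file() (an assumption on my part; the route as written saves whatever arrives):

# hypothetical guard at the top of upload_file(); redirects home on a bad file
uploaded_file = request.files['file']
if uploaded_file.filename == '' or not allowed_file(uploaded_file.filename):
    return redirect(url_for('index'))
videoPath = os.path.join(UPLOAD_FOLDER, uploaded_file.filename)
uploaded_file.save(videoPath)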

Index.html code

<!DOCTYPE html>
<html style="font-size: 20px;">
<head>
<title>Number-Plate-Detection-Using-Deep_learning</title>
</head>
<body>
<section>
<div style="background-image: url('/static/images/carimg.jpg');">
<div>
<br>
<br>
<br>
<center>
<h1>Flask Web App for Number Plate Detection System Using Deep Learning</h1>
</center>
<br>
<br>
<center>
<h2>Upload any photo/video of the vehicle.</h2>
</center>
<div>
<form action="/cardetails" enctype="multipart/form-data" method="POST" name="form">
<div>
<br>
<br>
<center>
<input type="file" name="file" required>
</center>
</div>
<div>
<br>
<br>
<center>
<input type="submit" value="Submit">
</center>
<br>
<br>
</div>
</form>
</div>
</div>
</div>
</section>
</body>
</html>

This is a simple HTML page with a background image, a couple of headings, and a submit button where you can upload your image or video to test the model.

CarDetails.html code

<!DOCTYPE html>
<html lang="en">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css"
integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous">
<link href="main.css" rel="stylesheet">
<meta charset="UTF-8">
<script src="https://kit.fontawesome.com/1edca5e833.js" crossorigin="anonymous"></script>
<script src="http://code.jquery.com/jquery-1.10.1.min.js"></script>
</head>
<body>
<div class="cover-container d-flex w-100 h-100 p-3 mx-auto flex-column">
<header class="masthead mb-auto">
<nav class="nav nav-masthead justify-content-center">
<h3 style="font-size:2.5rem;color:blue">Information about vehicle in video provided is below </h3><br>
<br>
</nav>
</header>
</div>
<br>
<h4 style="color:blue;display:inline;">Vehicle : </h4>
<h4 style="display:inline">{{ carDesc }}</h4><br>
<h4 style="color:blue;display:inline;">Model : </h4>
<h4 style="display:inline">{{ carModel }}</h4>
<hr> {% for key, value in vehInfo.items() %}
<h4 style="color:blue;display:inline;">{{ key }} : </h4>
<h4 style="display:inline;">{{ value }}</h4><br>
{% endfor %}
<br>
<hr>
</body></html>

This HTML file takes the output returned by the API, places it into the template placeholders, and displays the details as shown below.

CarDetails.html output

Thank you!!
