Mask R-CNN - Custom Model Training using the TensorFlow Object Detection API and Conversion to TFLite

Amal
3 min read · Jan 18, 2022

Clone the TensorFlow models repository:

git clone https://github.com/tensorflow/models.git

Install the dependencies and compile protos:

cd models/research

# Compile protos.
protoc object_detection/protos/*.proto --python_out=.

# Install the TensorFlow Object Detection API.
cp object_detection/packages/tf2/setup.py .
python -m pip install .

Test the installation:

python object_detection/builders/model_builder_tf2_test.py

Install the COCO API:

pip3 install "git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"

models/research is the parent directory for all of the following steps.

  • Create a folder named “images” with the subfolders “train_images” (dump all your training images) and “test_images” (dump all your test images), and create another folder named “coco_annotations” and dump your COCO annotation JSON files into it (both train.json and test.json). If you have annotated your images with the labelme tool, you can use labelme2coco to convert your labelme annotations to COCO format.
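If the folders do not exist yet, they can be created in one step from models/research (a small convenience sketch using the folder names from this post):

mkdir -p images/train_images images/test_images coco_annotations

With the images and the annotation JSON files in place, generate the TFRecord files: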
python create_coco_tf_record.py --logtostderr --train_image_dir=images/train_images --test_image_dir=images/test_images --train_annotations_file=coco_annotations/train.json --test_annotations_file=coco_annotations/test.json --include_masks=True --output_dir=./

This will create train.record and test.record.
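A quick optional sanity check (a minimal sketch, assuming the record files were written to models/research as above) is to count the serialized examples in each file:

import tensorflow as tf

# Count the examples written to each TFRecord file.
for record_file in ["train.record", "test.record"]:
    count = sum(1 for _ in tf.data.TFRecordDataset(record_file))
    print(record_file, ":", count, "examples")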

  • Copy the nets and deployment folders and export_inference_graph.py from the slim folder and paste them in the research directory.
  • Copy exporter_main_v2.py from the object_detection folder to the research folder.

Training

— — — — — — — — — — — — — — — — — — — — — — — — — —

  • Create a folder called “training”. Inside the training folder, download your custom model from the TF2 Model Zoo and extract it, then create a labelmap.pbtxt file (a sample file is given in the training folder) that contains the class labels and looks like:
item {
  id: 1
  name: 'dog'
}
item {
  id: 2
  name: 'cat'
}

The id of each item should match the category ids inside the train.json and test.json files in the coco_annotations folder.
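To cross-check them, a small sketch like the following prints the category ids and names stored in the COCO annotation file so they can be compared with labelmap.pbtxt:

import json

# Print the category ids and names from the COCO training annotations.
with open("coco_annotations/train.json") as f:
    coco = json.load(f)

for category in coco["categories"]:
    print(category["id"], category["name"])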

  • Alterations in the config file: copy the config file from object_detection/samples/config and paste it into the training folder, or use the pipeline.config that comes with the downloaded pretrained model (a sketch of the relevant fields is shown after this list).
  • Edit line no 12 - number of classes according to your dataset
  • Edit line no 125 - path to the model.ckpt file (the downloaded model's fine_tune_checkpoint)
  • Edit line no 126 - fine_tune_checkpoint_type: "detection"
  • Edit line no 108 - iterations/epochs you want to train the model for
  • Edit line no 136 - path to train.record
  • Edit line no 134 and 152 - path to labelmap.pbtxt
  • Edit line no 156 - path to test.record
  • Line numbers can vary across different model config files
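For orientation, the fields referred to above sit roughly like this in the pipeline.config of a Mask R-CNN model (a trimmed sketch; the exact line numbers, paths and values depend on the model you downloaded):

model {
  faster_rcnn {
    num_classes: 2                      # number of classes in your dataset
    ...
  }
}
train_config {
  num_steps: 200000                     # how long to train
  fine_tune_checkpoint: "training/<extracted_model_folder>/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
  ...
}
train_input_reader {
  label_map_path: "training/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "train.record"
  }
}
eval_input_reader {
  label_map_path: "training/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "test.record"
  }
}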

To train the model (a checkpoint is saved every 500 steps; you can change this according to your needs):

python model_main_tf2.py --pipeline_config_path=training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config --model_dir=training/model_checkpoint/ --checkpoint_every_n 500 --alsologtostderr

To export the inference graph:

python exporter_main_v2.py \
  --trained_checkpoint_dir training/model_checkpoint \
  --output_directory final_model \
  --pipeline_config_path training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config
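Once exported, the saved model can be loaded and run on a single image to verify it works (a minimal sketch; the image path is only an example):

import tensorflow as tf

# Load the exported detection function.
detect_fn = tf.saved_model.load("final_model/saved_model")

# Read one image and add a batch dimension (the exported model expects uint8 [1, H, W, 3]).
image = tf.io.decode_image(tf.io.read_file("images/test_images/sample.jpg"), channels=3)
detections = detect_fn(tf.expand_dims(image, 0))

print(int(detections["num_detections"][0]), "detections")
print(list(detections.keys()))  # boxes, classes, scores, and masks for Mask R-CNN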

View TensorBoard:

tensorboard --logdir=training/model_checkpoint

Result:

Note: I have trained for only a small number of epochs, so the model may not be very accurate. You can go for more epochs; train the model until it reaches a satisfying loss.

TFLite Conversion

import tensorflow as tf

# Your saved model directory that contains the graph
saved_model_dir = 'final_model/saved_model/'

Converting and saving the TFLite model:

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops.
]
tflite_model = converter.convert()

# Save the tflite model in your research directory
open("model.tflite", "wb").write(tflite_model)
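To confirm the converted file loads, here is a quick sketch with the TFLite interpreter (models converted with SELECT_TF_OPS need the Flex ops that ship with the full TensorFlow pip package):

import tensorflow as tf

# Load the converted model and inspect its input and output tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())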

You can check out this page for different types of quantization: https://www.tensorflow.org/lite/performance/post_training_quantization
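For example, post-training dynamic-range quantization (one of the options described on that page) only needs one extra line on the converter; a sketch:

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable dynamic-range quantization
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
quantized_tflite_model = converter.convert()
open("model_quant.tflite", "wb").write(quantized_tflite_model)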

#tfod #custommodeltraining #objectdetectionapi #tflite

Thanks!

— — — — — — — — — — — — — —
