Walk-through: Training an Object Detection Model
Object detection is a common and important computer vision task: identifying and labeling each instance of a specific object within images, video frames, or live feeds. An object detection model draws bounding boxes around objects and, as it is trained on more labeled visual data, learns to predict new instances it has not seen before.
First, create a new dataset. Click "Create New" in the side navigation and select "Dataset". Enter a name for your dataset, then click "Save & Continue to Sources".
In this example, we are using a dataset stored in a Google Cloud Storage bucket and synced to Plainsight.
Once you are happy with the images you have added to your dataset, you can define the bounding box label that will be used for object detection.
In the "Label Definitions" tab, enter a name for the label; in this example, we named our label "truck". Next, select the "Bounding Box" label type. Optionally, you can pick a color for the label using the color chooser, or keep the default.
If this is a new dataset, scroll down and click "Save and Start Labeling" to begin labeling your data.
You are now ready to label the images using the bounding box label type you defined in the previous step. In the "Labeler", select the bounding box label from the Labels panel, then draw a box around each instance of the object in the image.
Repeat this process for all the images in your dataset. You also have the option to "Skip" over any images you wish to exclude from labeling.
Use the "Review" tab to review your dataset and approve the images you wish to train your model with. (You can also go directly to the Versions tab and approve all Submitted images when you lock a new version.)
Before you can train or export a dataset, you must lock your dataset version. Click on the Versions tab to create a new version. You must have at least 3 annotated images in your dataset to complete this step.
After locking a dataset version, you can click "Train" on the dataset version to configure your training options.
Train a model from a dataset version.
Configure model training options with SmartML.
Once you are satisfied with your training settings, scroll down and click "Save & Start Training" to begin training your model. Your model will pass through several states as training progresses.
An email and in-app notification will notify you that your model is ready.
Below are the model version details of a successfully trained model.
Successfully trained object detection model.
You can scroll down and view the dataset images and preview your model's performance.
Preview images in the dataset splits and compare model detections and labeled annotations.
Start the Image API to deploy the model.
Deploying the model can take up to 20 minutes. The "Image API" status will show as "Active" when it is ready to use.
Once your API is in the "Active" state, the endpoint is ready to use. You can then copy the "Image API URL" endpoint and use it to send an image and receive predicted detections.
Copy the Image API URL to make Prediction requests and return detections.
You will need to generate a valid API key to make a request; this key is used as a bearer access token. The curl example below shows how the endpoint, API key, and input image are used in a Predictions API request.
curl -X POST 'https://<HOSTNAME>/v1/models/01ERDH3S3TZ34S863N47RWECA7/predict' \
--header 'Authorization: Bearer <APIKEY>' \
--form 'file=@"/Users/myuser/images/Datasets/aerial_trucks/00130.jpg"'
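The same request can be assembled in Python using only the standard library. This is a minimal sketch: the hostname, API key, and the "file" form-field name are placeholders and assumptions to verify against your own deployment.

```python
"""Sketch: call the Predictions API using only the Python standard library.

The hostname, API key, and "file" form-field name below are
placeholders/assumptions -- substitute the values from your deployment.
"""
import io
import mimetypes
import urllib.request
import uuid

API_URL = "https://<HOSTNAME>/v1/models/01ERDH3S3TZ34S863N47RWECA7/predict"
API_KEY = "<APIKEY>"  # generated in the Plainsight UI

def build_multipart(field: str, filename: str, data: bytes):
    """Encode one file as a multipart/form-data body; return (boundary, body bytes)."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(f"Content-Type: {ctype}\r\n\r\n".encode())
    body.write(data)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return boundary, body.getvalue()

# In practice, read your image from disk, e.g.:
# image_bytes = open("/Users/myuser/images/Datasets/aerial_trucks/00130.jpg", "rb").read()
image_bytes = b"\xff\xd8\xff"  # stand-in JPEG bytes for illustration

boundary, payload = build_multipart("file", "00130.jpg", image_bytes)
req = urllib.request.Request(
    API_URL,
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    },
)
# Sending the request requires a live endpoint and a valid key:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Using `urllib` keeps the sketch dependency-free; if you already use the third-party `requests` library, its `files=` parameter builds the same multipart body for you.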
In this example, the object detection model detected 3 trucks in the image with confidence scores ranging from 66% to 95%.
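The exact JSON schema of the prediction response is not shown in this walkthrough. As an illustrative sketch, assuming the API returns a list of detections with a label, a confidence score, and a bounding box, you could filter and count the results like this:

```python
import json

# Illustrative response only -- the real Image API schema may differ.
# Values mirror the example above: three trucks, confidences 66%-95%.
sample_response = json.dumps({
    "detections": [
        {"label": "truck", "confidence": 0.95, "box": [110, 42, 310, 198]},
        {"label": "truck", "confidence": 0.81, "box": [402, 65, 590, 220]},
        {"label": "truck", "confidence": 0.66, "box": [33, 250, 180, 400]},
    ]
})

def count_detections(raw: str, label: str, threshold: float = 0.5) -> int:
    """Count detections of `label` at or above `threshold` confidence."""
    return sum(
        1
        for d in json.loads(raw).get("detections", [])
        if d["label"] == label and d["confidence"] >= threshold
    )

print(count_detections(sample_response, "truck"))       # 3 at the default threshold
print(count_detections(sample_response, "truck", 0.9))  # only the 0.95 detection
```

Raising the confidence threshold is a common way to trade recall for precision when post-processing detections.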
You can also use Test Model to visualize your model detections. With the Image API active, click the Test Model button located in the dataset images section.
Select your image with the file chooser or drag and drop it into the modal.
Your image will be used as input to the model and the detections will be overlaid.
In this example, 9 trucks were detected in the image.
That's it! We just labeled, trained, deployed, and tested an object detection model all within the Plainsight platform.