
Filter Demo

This guide explains how to set up and run the EVP License Plate Filter Demo using Docker, Google Cloud, and optional GPU acceleration.


Prerequisites


Google Cloud Setup

All Docker images and assets are hosted on Google Artifact Registry (GAR).

  1. Obtain your GOOGLE_SERVICE_ACCOUNT_CREDENTIALS.json credentials file from Plainsight.

  2. Authenticate:

    gcloud auth activate-service-account --key-file=GOOGLE_SERVICE_ACCOUNT_CREDENTIALS.json

  3. Configure your environment:

    gcloud config set project plainsightai-prod
    gcloud auth configure-docker us-west1-docker.pkg.dev
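
  4. (Optional) Verify the setup before pulling any images; both values should match what was configured above:

    gcloud auth list                   # the service account should show as active
    gcloud config get-value project    # should print plainsightai-prod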

Downloading Demo Assets

Retrieve the full demo bundle:

gcloud artifacts generic download \
--project=plainsightai-prod \
--location=us-west1 \
--repository=files \
--package=evp-demo-getting-started \
--version=v1.1.2 \
--destination=.

Unzip the assets:

unzip evp-demo-getting-started.zip

You should see:

video.mkv
model.zip
docker-compose.yaml
docker-compose-local.yaml
docker-compose-gpu.yaml
docker-compose-local-gpu.yaml
.env
README.md
crop_filter/
filter_license_plate_detection/
filter_optical_character_recognition/
filter_license_annotation_demo/
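
If anything from this list is missing, the extraction likely failed partway; the archive contents can be double-checked without re-extracting (assuming the zip is still in the working directory):

unzip -l evp-demo-getting-started.zip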

Running the Demo

Standard CPU Run

docker compose -f docker-compose.yaml up

This launches:

  • vidin → streams video
  • filter_license_plate_detection → detects license plates
  • crop_filter → crops plates
  • filter_optical_character_recognition → runs OCR (optical character recognition) on cropped plates
  • filter_license_annotation_demo → overlays results
  • webvis → web visualization (localhost:8002)
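
To keep the terminal free and confirm everything came up, the stack can also be started detached and checked service by service (a sketch; assumes the compose service names match the list above):

docker compose -f docker-compose.yaml up -d     # start in the background
docker compose ps                               # every service should report "running"
docker compose logs -f webvis                   # follow the visualizer while the video plays

Once webvis is up, open http://localhost:8002 in a browser to watch the annotated stream.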

Running with GPU Acceleration

If you have an NVIDIA GPU and the NVIDIA container runtime configured for Docker:

docker compose -f docker-compose-gpu.yaml up

This enables GPU usage for inference filters (filter_license_plate_detection and filter_optical_character_recognition).
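
Before using the GPU compose file, it is worth confirming that Docker can actually reach the GPU through the NVIDIA runtime. A rough pre-flight check (assumes the NVIDIA driver and NVIDIA Container Toolkit are installed; the CUDA image tag is only an example):

nvidia-smi                        # the host driver should list the GPU
docker info | grep -i nvidia      # "nvidia" should appear among the runtimes
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi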


Development Mode (Editable Filters)

For live-editing Python code (filter.py) without rebuilding images:

docker compose -f docker-compose-local.yaml up

Each service mounts the local filter source directly.
Restart an individual filter to apply changes:

docker compose restart filter_license_annotation_demo
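
A typical edit-and-reload loop then looks like this (a sketch; the -f flag keeps the commands pointed at the local compose file that was used to start the stack):

# after editing filter_license_annotation_demo/filter.py locally:
docker compose -f docker-compose-local.yaml restart filter_license_annotation_demo
docker compose -f docker-compose-local.yaml logs -f filter_license_annotation_demo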

Development + GPU Mode

For live-editing and GPU acceleration:

docker compose -f docker-compose-local-gpu.yaml up
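
The same edit-and-restart workflow applies here, just pointed at the GPU-enabled local compose file, for example:

docker compose -f docker-compose-local-gpu.yaml restart filter_license_plate_detection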

Folder Structure

video.mkv                                        # Input video
model.zip                                        # Detection model
docker-compose.yaml                              # Standard demo run
docker-compose-local.yaml                        # Editable mode
docker-compose-gpu.yaml                          # GPU run
docker-compose-local-gpu.yaml                    # Editable + GPU run
crop_filter/filter.py                            # Crops detected plates
filter_license_plate_detection/filter.py         # Detects license plates
filter_optical_character_recognition/filter.py   # Runs OCR on cropped plates
filter_license_annotation_demo/filter.py         # Overlays OCR results on the video
README.md                                        # Bundled demo documentation
.env                                             # Environment variables for the compose files

Tips

  • Access the web UI at http://localhost:8002
  • View live logs:
    docker compose logs -f filter_license_annotation_demo
  • Restart a single filter:
    docker compose restart filter_license_annotation_demo
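
  • Other standard Docker Compose commands that come in handy:
    docker compose ps        # list demo services and their status
    docker compose pull      # refresh images from the Artifact Registry
    docker compose down      # stop and remove the demo containers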

Troubleshooting

  • Blank or frozen video → Ensure all filters are running
  • Missing OCR text → Check the topic connection (cropped_main)
  • Docker pull failures → Confirm GCP credentials and artifact access
  • GPU not detected → Verify Docker + NVIDIA runtime setup

License Plate Annotation Behavior

  • Plates are detected and cropped
  • OCR is performed on cropped plates
  • Cleaned OCR results (e.g., ABC1234) are overlaid
  • Last seen license plate is used if OCR temporarily fails
  • Cropped plate image is inserted into the main video

Summary

This demo showcases a complete event-driven vision pipeline — real-time license plate detection, OCR, annotation, and live web visualization — all powered by Plainsight's modular filter architecture.

You can extend it with your own videos, models, or downstream analytics!