# Filter Demo
This guide explains how to set up and run the EVP License Plate Filter Demo using Docker, Google Cloud, and optional GPU acceleration.
## Prerequisites

### Google Cloud Setup
All Docker images and assets are hosted on Google Artifact Registry (GAR).
- Obtain your `GOOGLE_SERVICE_ACCOUNT_CREDENTIALS.json` credentials file from Plainsight.
- Authenticate:

  ```shell
  gcloud auth activate-service-account --key-file=GOOGLE_SERVICE_ACCOUNT_CREDENTIALS.json
  ```

- Configure your environment:

  ```shell
  gcloud config set project plainsightai-prod
  gcloud auth configure-docker us-west1-docker.pkg.dev
  ```
## Downloading Demo Assets

Retrieve the full demo bundle:

```shell
gcloud artifacts generic download \
  --project=plainsightai-prod \
  --location=us-west1 \
  --repository=files \
  --package=evp-demo-getting-started \
  --version=v1.1.2 \
  --destination=.
```

Unzip the assets:

```shell
unzip evp-demo-getting-started.zip
```
You should see:

```text
video.mkv
model.zip
docker-compose.yaml
docker-compose-local.yaml
docker-compose-gpu.yaml
docker-compose-local-gpu.yaml
.env
README.md
crop_filter/
filter_license_plate_detection/
filter_optical_character_recognition/
filter_license_annotation_demo/
```
## Running the Demo

### Standard CPU Run

```shell
docker compose -f docker-compose.yaml up
```
This launches:

- `vidin` → streams the input video
- `filter_license_plate_detection` → detects license plates
- `crop_filter` → crops the detected plates
- `filter_optical_character_recognition` → runs OCR on the cropped plates
- `filter_license_annotation_demo` → overlays results on the video
- `webvis` → web visualization (http://localhost:8002)
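The filter chain above can be sketched as a series of stages, each consuming the previous stage's output. This toy Python sketch uses placeholder data and hypothetical function names; it is not Plainsight's actual filter API:

```python
# Toy sketch of the demo's filter chain. All names and data are
# placeholders, not Plainsight's actual filter API.

def stream_video():
    # vidin: yield frames (strings stand in for real frames)
    yield "frame-0"
    yield "frame-1"

def detect_plates(frame):
    # filter_license_plate_detection: pretend each frame has one plate
    return [{"frame": frame, "box": (10, 20, 110, 60)}]

def crop_plate(detection):
    # crop_filter: crop the plate region from the frame
    return f"crop-of-{detection['frame']}"

def run_ocr(crop):
    # filter_optical_character_recognition: read text from the crop
    return "ABC1234"

def annotate(frame, text):
    # filter_license_annotation_demo: overlay the OCR result
    return f"{frame} [{text}]"

annotated = [
    annotate(det["frame"], run_ocr(crop_plate(det)))
    for frame in stream_video()
    for det in detect_plates(frame)
]
print(annotated)
```

In the real demo the stages run as separate containers that exchange frames over topics, rather than as direct function calls.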
### Running with GPU Acceleration

If you have a GPU and a working NVIDIA Docker runtime:

```shell
docker compose -f docker-compose-gpu.yaml up
```

This enables GPU inference for `filter_license_plate_detection` and `filter_optical_character_recognition`.
### Development Mode (Editable Filters)

To live-edit Python code (`filter.py`) without rebuilding images:

```shell
docker compose -f docker-compose-local.yaml up
```

Each service mounts its local filter source directly. Restart an individual filter to apply changes:

```shell
docker compose restart filter_license_annotation_demo
```
### Development + GPU Mode

For live editing with GPU acceleration:

```shell
docker compose -f docker-compose-local-gpu.yaml up
```
## Folder Structure

```text
video.mkv                      # Input video
model.zip                      # Detection model
docker-compose.yaml            # Standard demo run
docker-compose-local.yaml      # Editable mode
docker-compose-gpu.yaml        # GPU run
docker-compose-local-gpu.yaml  # Editable + GPU run
crop_filter/filter.py
filter_license_plate_detection/filter.py
filter_optical_character_recognition/filter.py
filter_license_annotation_demo/filter.py
README.md
.env
```
## Tips

- Access the web UI at http://localhost:8002
- View live logs:

  ```shell
  docker compose logs -f filter_license_annotation_demo
  ```

- Restart a single filter:

  ```shell
  docker compose restart filter_license_annotation_demo
  ```
## Troubleshooting

| Problem | Solution |
|---|---|
| Blank or frozen video | Ensure all filters are running |
| Missing OCR text | Check the topic connection (`cropped_main`) |
| Docker pull failures | Confirm GCP credentials and artifact access |
| GPU not detected | Verify the Docker + NVIDIA runtime setup |
## License Plate Annotation Behavior

- Plates are detected and cropped
- OCR is performed on the cropped plates
- Cleaned OCR results (e.g., `ABC1234`) are overlaid
- The last seen license plate is used if OCR temporarily fails
- The cropped plate image is inserted into the main video
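The cleanup and last-seen fallback can be illustrated with a short Python sketch. The regex and names here are our own illustrative assumptions, not the demo's actual code:

```python
import re

# Illustrative sketch of the annotation behavior described above:
# clean raw OCR output, and fall back to the last good read when
# OCR temporarily fails. Names are hypothetical.

NON_PLATE_CHARS = re.compile(r"[^A-Z0-9]")

def clean_plate(raw):
    # Uppercase and drop anything that is not a letter or digit
    return NON_PLATE_CHARS.sub("", raw.upper())

class PlateAnnotator:
    def __init__(self):
        self.last_seen = ""

    def text_for_frame(self, raw_ocr):
        # Use the last seen plate when OCR returns nothing
        if raw_ocr:
            self.last_seen = clean_plate(raw_ocr)
        return self.last_seen

annotator = PlateAnnotator()
print(annotator.text_for_frame("abc-12 34"))  # cleaned to ABC1234
print(annotator.text_for_frame(None))         # falls back to ABC1234
```

Keeping the last good read avoids flicker in the overlay when individual frames produce unreadable crops.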
## Summary
This demo showcases a complete event-driven vision pipeline — real-time license plate detection, OCR, annotation, and live web visualization — all powered by Plainsight's modular filter architecture.
You can extend it with your own videos, models, or downstream analytics!