# Quality Gates - Validation Filter

Quality Gates validates CV model outputs (detection, classification, OCR, segmentation) using rule-based assertions and generates real-time quality reports during pipeline execution.

## Features

  • Real-time Quality Validation - Validates predictions frame-by-frame using configurable rules
  • Rule-based Assertions - Define quality thresholds for metrics like confidence, IoU, precision, recall
  • Live HTTP Reports - View quality metrics in real-time via web interface
  • Golden File Comparison - Compare current outputs against baseline golden predictions
  • Multiple Output Types - Supports detection, classification, OCR, and segmentation outputs

## Basic Usage

```python
from openfilter.filter_runtime.filter import Filter
from filter_quality_gates.filter import FilterQualityGate

filters = [
    # ... your CV model filter ...

    (FilterQualityGate, {
        'id': 'filter_quality_gates',
        'sources': 'tcp://localhost:5555',
        'outputs': 'tcp://*:5556',
        'rules': [
            {
                'name': 'detection_confidence',
                'type': 'detection',
                'metric': 'confidence',
                'operator': '>=',
                'threshold': 0.7,
                'scope': 'per_frame'
            }
        ],
        'serve_http': True,
        'http_port': 9000
    })
]

Filter.run_multi(filters)

# View report at: http://localhost:9000/report
```
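The rule above reads as an assertion over each frame's detections: every `confidence` value must satisfy `>= 0.7`. A minimal sketch of how such a per-frame rule could be evaluated (illustrative only — `evaluate_rule` and the prediction layout are assumptions, not the filter's internal implementation):

```python
import operator

# Map the rule's operator string to a comparison function.
OPS = {'>=': operator.ge, '>': operator.gt, '<=': operator.le,
       '<': operator.lt, '==': operator.eq}

def evaluate_rule(rule, detections):
    """Check every detection in a frame against the rule's threshold."""
    op = OPS[rule['operator']]
    values = [d[rule['metric']] for d in detections]
    return all(op(v, rule['threshold']) for v in values)

rule = {'name': 'detection_confidence', 'type': 'detection',
        'metric': 'confidence', 'operator': '>=', 'threshold': 0.7,
        'scope': 'per_frame'}

# One detection falls below the 0.7 threshold, so the frame fails the rule.
frame = [{'label': 'car', 'confidence': 0.91},
         {'label': 'person', 'confidence': 0.64}]

print(evaluate_rule(rule, frame))  # False
```

An `aggregated` scope would instead pool the metric across frames (e.g. a mean) before applying the comparison, as shown in the rules-file example below.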

## Offline Process: Golden File Workflow

The offline process for establishing quality baselines involves three steps:

### Step 1: Record Golden Predictions

Add the Recorder filter to your pipeline to save baseline predictions as a golden file:

```python
from openfilter.filter_runtime.filter import Filter
from openfilter.filter_runtime.filters.video_in import VideoIn
from openfilter.filter_runtime.filters.recorder import Recorder

# Your production pipeline
filters = [
    (VideoIn, {'outputs': 'tcp://*:5554'}),
    (YourCVModelFilter, {  # replace with your CV model filter
        'sources': 'tcp://localhost:5554',
        'outputs': 'tcp://*:5556'
    }),

    # Add Recorder to save golden predictions
    (Recorder, {
        'sources': 'tcp://localhost:5556',
        'outputs': 'file://./golden_predictions.jsonl',
        'rules': ['+', '-/meta/ts'],  # Include all except timestamps
        'flush': True
    }),
]

Filter.run_multi(filters)
# Golden file saved to: ./golden_predictions.jsonl
```
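A golden file recorded this way is plain JSON Lines: one JSON object per line, one line per frame. It can be inspected with a few lines of Python (the record fields shown here are hypothetical examples, not the Recorder's exact schema):

```python
import json
import os
import tempfile

# A tiny golden file with one hypothetical record per frame.
sample_records = [
    {'frame': 0, 'detections': [{'label': 'car', 'confidence': 0.92}]},
    {'frame': 1, 'detections': []},
]

path = os.path.join(tempfile.mkdtemp(), 'golden_predictions.jsonl')
with open(path, 'w') as f:
    for record in sample_records:
        f.write(json.dumps(record) + '\n')

# Read it back: each non-empty line is one JSON object.
with open(path) as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records))  # 2
```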

### Step 2: Record the Current Predictions

Make the necessary changes to your pipeline (for example, swap in the new model version), then reuse the same recording pipeline from Step 1 to save the current predictions you want to validate.

### Step 3: Compare Against Golden

In a separate pipeline, add FilterStubApplication to load predictions and Quality Gates to compare against the golden file:

```python
from openfilter.filter_runtime.filter import Filter
from openfilter.filter_runtime.filters.video_in import VideoIn
from filter_quality_gates.filter import FilterQualityGate

# Import filter-stub-application
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent.parent / 'filter-stub-application'))
from filter_stub_application.filter import FilterStubApplication

filters = [
    (VideoIn, {'outputs': 'tcp://*:5554'}),

    # Stub filter loads predictions from file
    (FilterStubApplication, {
        'sources': 'tcp://localhost:5554',
        'outputs': 'tcp://*:5556',
        'output_mode': 'echo',
        'input_json_events_file_path': './current_predictions.json',  # Your new predictions
        'event_topic_key': 'main'  # If using aggregator-style JSON
    }),

    # Quality Gates compares against golden file
    (FilterQualityGate, {
        'sources': 'tcp://localhost:5556',
        'rules_file': './rules.json',
        'calibration_mode': 'compare',
        'golden_file_path': './golden_predictions.jsonl',  # From Step 1
        'serve_http': True,
        'http_port': 9000
    })
]

Filter.run_multi(filters)
# View comparison report at: http://localhost:9000/report
```
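Conceptually, compare mode lines up each current frame with its golden counterpart and flags metric drift. A toy version of that comparison (illustrative only — the filter's actual matching logic, tolerances, and report format will differ):

```python
def compare_frames(golden, current, tolerance=0.05):
    """Return indices of frames whose confidence drifts beyond `tolerance`."""
    mismatches = []
    for i, (g, c) in enumerate(zip(golden, current)):
        if abs(g['confidence'] - c['confidence']) > tolerance:
            mismatches.append(i)
    return mismatches

golden = [{'confidence': 0.90}, {'confidence': 0.85}]
current = [{'confidence': 0.91}, {'confidence': 0.70}]  # frame 1 regressed

print(compare_frames(golden, current))  # [1]
```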

## Configuration

### Rules File

Define quality rules in a JSON file:

```json
{
  "rules": [
    {
      "name": "detection_min_confidence",
      "type": "detection",
      "scope": "per_frame",
      "metric": "confidence",
      "operator": ">=",
      "threshold": 0.7,
      "enabled": true
    },
    {
      "name": "detection_avg_confidence",
      "type": "detection",
      "scope": "aggregated",
      "metric": "confidence",
      "aggregation": "mean",
      "operator": ">=",
      "threshold": 0.85,
      "enabled": true
    }
  ]
}
```
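Since the rules file is ordinary JSON, a quick local sanity check before wiring it into the pipeline is straightforward (the validation below is a standalone sketch, not part of Quality Gates):

```python
import json

# The rules file content from above, as a string for this self-contained example.
rules_json = '''
{
  "rules": [
    {"name": "detection_min_confidence", "type": "detection",
     "scope": "per_frame", "metric": "confidence",
     "operator": ">=", "threshold": 0.7, "enabled": true},
    {"name": "detection_avg_confidence", "type": "detection",
     "scope": "aggregated", "metric": "confidence", "aggregation": "mean",
     "operator": ">=", "threshold": 0.85, "enabled": true}
  ]
}
'''

config = json.loads(rules_json)

# Every rule should carry the core fields shown in the example above.
required = {'name', 'type', 'scope', 'metric', 'operator', 'threshold'}
for rule in config['rules']:
    missing = required - rule.keys()
    assert not missing, f"rule {rule['name']} missing {missing}"

print(len(config['rules']))  # 2
```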

### Configuration Options

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `rules` | `list[dict]` | `[]` | List of quality rules |
| `rules_file` | `str` | `None` | Path to JSON file with rules |
| `serve_http` | `bool` | `True` | Enable HTTP report server |
| `http_port` | `int` | `9000` | HTTP server port |
| `output_dir` | `str` | `"./reports"` | Directory for report files |
| `calibration_mode` | `str` | `None` | `"record"` or `"compare"` |
| `golden_file_path` | `str` | `None` | Path to golden predictions file |
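Putting the options together, a fully spelled-out configuration dict for compare mode might look like this (values are illustrative; the port and paths are assumptions matching the examples above):

```python
# Configuration dict for FilterQualityGate, keys as listed in the table above.
quality_gate_config = {
    'id': 'filter_quality_gates',
    'sources': 'tcp://localhost:5556',
    'rules_file': './rules.json',      # or pass 'rules': [...] inline
    'serve_http': True,                # default: True
    'http_port': 9000,                 # default: 9000
    'output_dir': './reports',         # default: "./reports"
    'calibration_mode': 'compare',     # None, "record", or "compare"
    'golden_file_path': './golden_predictions.jsonl',
}

print(sorted(quality_gate_config))
```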

## Reports

When `serve_http` is `True`, Quality Gates starts an HTTP server providing:

  • Live Report: http://localhost:9000/report - Real-time quality metrics
  • JSON API: http://localhost:9000/json - Machine-readable report data

On pipeline shutdown, Quality Gates generates:

  • `final_report.json` - Complete quality report with metrics and rule results
  • `final_report.html` - Visual HTML report with charts and tables

## Architecture

```
┌──────────┐      ┌──────────────┐      ┌──────────────────┐
│ VideoIn  │─────▶│  CV Model    │─────▶│  Quality Gates   │
│          │      │  Filter      │      │                  │
└──────────┘      └──────────────┘      └──────────────────┘
                                                 │
                                                 ▼
                                        ┌─────────────────┐
                                        │  HTTP Report    │
                                        │ - Live metrics  │
                                        │ - Rule status   │
                                        └─────────────────┘
```

## Examples

See the `examples/` directory for complete working examples:

  • `pipeline_detection/` - Detection validation with mock filters
  • `calibration_detection/` - Golden file workflow example
  • `calibration_sweet_green/` - Classification validation example