Training Metrics Description

Classification

  • Accuracy - Proportion of correct predictions among the total number of cases examined.

  • Top-K Accuracy - Proportion of cases where the true label is among the top K predicted labels. Useful for multi-class classification where multiple predictions are considered valid.

  • Macro Precision - Average of precision scores (true positives divided by all predicted positives) for each class.

  • Macro Recall - Average of recall scores (true positives divided by actual positives) for each class.

  • Macro F1 Score - Harmonic mean of macro precision and macro recall, balancing both.
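
The classification metrics above can be sketched in pure Python; `macro_metrics` is an illustrative helper name, and macro F1 is computed here as the harmonic mean of macro precision and macro recall, matching the definition above:

```python
def macro_metrics(y_true, y_pred):
    """Accuracy, macro precision, macro recall, and macro F1 from label lists.

    Macro averaging computes the metric per class, then takes the unweighted
    mean, so every class counts equally regardless of its size.
    """
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        predicted = sum(1 for p in y_pred if p == c)  # tp + false positives
        actual = sum(1 for t in y_true if t == c)     # tp + false negatives
        precisions.append(tp / predicted if predicted else 0.0)
        recalls.append(tp / actual if actual else 0.0)
    macro_p = sum(precisions) / len(classes)
    macro_r = sum(recalls) / len(classes)
    # Harmonic mean of macro precision and macro recall.
    macro_f1 = (2 * macro_p * macro_r / (macro_p + macro_r)
                if (macro_p + macro_r) else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, macro_p, macro_r, macro_f1
```

Note that some libraries instead report macro F1 as the mean of per-class F1 scores; the two conventions differ slightly, so check which one your framework uses.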

Semantic Segmentation

  • Accuracy (Pixel Accuracy) - Proportion of correctly classified pixels over total pixels.

  • Macro IoU (Intersection over Union) - Average IoU scores for each class, measuring the overlap between predicted and actual segments.

  • Macro Dice Coefficient (Dice Similarity) - Average Dice Coefficients for each class, assessing spatial overlap accuracy.

  • Macro Precision - Average precision for each class, indicating the accuracy of positive pixel predictions.

  • Macro Recall - Average recall for each class, focusing on capturing actual positive pixels.
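
The per-class averaging above can be sketched for flat pixel-label sequences; `macro_iou_dice` is an illustrative name, and classes absent from both prediction and ground truth are skipped so they do not distort the macro average:

```python
def macro_iou_dice(y_true, y_pred, num_classes):
    """Macro IoU and macro Dice from flat pixel-label sequences."""
    ious, dices = [], []
    for c in range(num_classes):
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        true_c = sum(1 for t in y_true if t == c)
        pred_c = sum(1 for p in y_pred if p == c)
        union = true_c + pred_c - inter
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(inter / union)                 # |A ∩ B| / |A ∪ B|
        dices.append(2 * inter / (true_c + pred_c))  # 2|A ∩ B| / (|A| + |B|)
    return sum(ious) / len(ious), sum(dices) / len(dices)
```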

Object Detection

  • classes

    • Definition: Number of classes detected by the model.

  • mAP (Mean Average Precision)

    • Definition: A comprehensive metric that averages precision across all classes and across a range of IoU thresholds (COCO-style evaluation averages over thresholds from 0.50 to 0.95).
    • Value: 0.2000 indicates overall precision across all classes and IoU thresholds.
  • mAP at IoU=50% (map_50)

    • Definition: Precision at an IoU threshold of 50%. Ideal for less challenging scenarios.
    • Value: 1.0 indicates perfect precision at this lenient IoU threshold.
  • mAP at IoU=75% (map_75)

    • Definition: Precision at a stricter IoU threshold of 75%, for more challenging conditions.
    • Value: 0.0 indicates that no detections met this stricter IoU threshold.
  • mAP for Large Objects (map_large)

    • Definition: Precision for large-sized objects.
    • Value: -1 suggests this metric was not calculated or is not applicable.
  • mAP for Medium Objects (map_medium)

    • Definition: Precision for medium-sized objects.
    • Value: -1, indicating not calculated or not applicable.
  • mAP per Class (map_per_class)

    • Definition: Average precision calculated separately for each class.
    • Value: -1, indicating this metric was not evaluated or is irrelevant.
  • mAP for Small Objects (map_small)

    • Definition: Precision for small-sized objects.
    • Value: 0.2000 reflects precision for small objects.
  • mAR at 1 Detection (mar_1)

    • Definition: Mean average recall when only the single highest-scoring detection per image is considered.
    • Value: 0.2000 shows recall with one detection.
  • mAR at 10 Detections (mar_10)

    • Definition: Average recall with up to 10 detections per image.
    • Value: 0.2000 indicates recall with up to 10 detections.
  • mAR at 100 Detections (mar_100)

    • Definition: Average recall with up to 100 detections per image.
    • Value: 0.2000 shows recall with a high detection threshold.
  • mAR per Class at 100 Detections (mar_100_per_class)

    • Definition: Recall calculated separately for each class, with up to 100 detections per class.
    • Value: -1, suggesting not evaluated or not applicable.
  • mAR for Large Objects (mar_large)

    • Definition: Recall for large-sized objects.
    • Value: -1, indicating not calculated or not applicable.
  • mAR for Medium Objects (mar_medium)

    • Definition: Recall for medium-sized objects.
    • Value: -1, suggesting not evaluated or not applicable.
  • mAR for Small Objects (mar_small)

    • Definition: Recall for small-sized objects.
    • Value: 0.2000 reflects recall for small objects.
  • IoU (Intersection over Union)

    • Definition: A measurement of the overlap between the predicted bounding box and the ground truth bounding box. It's a fundamental metric for evaluating the accuracy of object localization.
    • Value: 0.4307 suggests a moderate level of overlap between predicted and actual bounding boxes.
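
IoU underlies all of the mAP and mAR figures above: a detection only counts as a true positive when its box overlaps the ground truth by at least the threshold. As a minimal sketch for axis-aligned boxes in `(x1, y1, x2, y2)` form (the function name is illustrative):

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to 0 when boxes don't overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

For example, two unit-overlap 2x2 boxes give an IoU of 1/7, which at a 50% threshold would not count as a match.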

Instance Segmentation

  • classes

    • Definition: Number of segment classes identified in instance segmentation.
    • Value: 0 indicates that no segment classes were recorded for this run.
  • mAP (Mean Average Precision)

    • Definition: Overall precision averaged across all segment classes and IoU thresholds.
    • Value: 0.2000 indicates average precision performance for segments.
  • mAP at IoU=50% (map_50)

    • Definition: Precision at an IoU threshold of 50% for segments, suitable for less strict conditions.
    • Value: 1.0 indicates perfect precision at this threshold for segment detection.
  • mAP at IoU=75% (map_75)

    • Definition: Precision at a stricter IoU threshold of 75% for segments.
    • Value: 0.0 indicates that no segment predictions met this higher IoU threshold.
  • mAP for Large Segments (map_large)

    • Definition: Precision specifically for large segments.
    • Value: -1 suggests this metric was not calculated or not applicable for large segments.
  • mAP for Medium Segments (map_medium)

    • Definition: Precision for medium-sized segments.
    • Value: -1, indicating it wasn't calculated or is irrelevant for medium segments.
  • mAP per Segment Class (map_per_class)

    • Definition: Precision calculated separately for each identified segment class.
    • Value: -1, suggesting this was not assessed for individual segment classes.
  • mAP for Small Segments (map_small)

    • Definition: Precision for small segments.
    • Value: 0.2000 indicates precision level for small-sized segments.
  • mAR at 1 Detection (mar_1)

    • Definition: Mean average recall with a limit of one detected segment per image.
    • Value: 0.2000 indicating recall performance with one segment detected.
  • mAR at 10 Detections (mar_10)

    • Definition: Average recall calculated with up to 10 detected segments per image.
    • Value: 0.2000 reflects recall with up to 10 segment detections.
  • mAR at 100 Detections (mar_100)

    • Definition: Average recall with a high threshold of up to 100 detected segments per image.
    • Value: 0.2000 indicates recall at this higher detection threshold for segments.
  • mAR per Segment Class at 100 Detections (mar_100_per_class)

    • Definition: Recall for each segment class, with up to 100 detections per class.
    • Value: -1, suggesting it wasn't evaluated for each segment class.
  • mAR for Large Segments (mar_large)

    • Definition: Recall measurement for large-sized segments.
    • Value: -1, indicating not applicable or not calculated for large segments.
  • mAR for Medium Segments (mar_medium)

    • Definition: Recall for medium-sized segments.
    • Value: -1, suggesting not evaluated for medium segments.
  • mAR for Small Segments (mar_small)

    • Definition: Recall for small segments.
    • Value: 0.2000 reflects recall for smaller segments.
  • IoU (Intersection over Union)

    • Definition: Measures the overlap between the predicted instance segmentation and the ground truth for each segment.
    • Value: 0.4307 indicates a moderate degree of overlap between predicted and actual segments.
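
For instance masks, the same overlap measures can be sketched over sets of pixel coordinates (an illustrative representation; real pipelines typically operate on binary arrays, but the set form keeps the formulas visible):

```python
def mask_iou_dice(mask_a, mask_b):
    """IoU and Dice between two binary instance masks, each given as a
    set of (row, col) pixel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    iou = inter / union if union else 0.0                  # |A ∩ B| / |A ∪ B|
    total = len(mask_a) + len(mask_b)
    dice = 2 * inter / total if total else 0.0             # 2|A ∩ B| / (|A| + |B|)
    return iou, dice
```

Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is why the two metrics rank predictions identically but report different absolute values.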