Platform News and Updates
Plainsight Pipelines is a guided, step-by-step interface that streamlines the setup, processing, and management of visual data workflows, making it easier than ever to operationalize computer vision models and create custom data transformations and prediction pipelines at scale.
- Types of Pipelines
- Streaming: for always-available predictions when response time is important
- Batch: for scheduled or one-time batch processing
- Inputs
- Cloud buckets (Google Cloud and Amazon S3), Pub/Sub, API endpoints
- Images or video (videos can be split at a custom frame rate)
- Set up filename and file-extension filters when syncing from cloud sources
- Pipeline Blocks
- Transform your data with resize and crop blocks
- Generate predictions from your data using pre-built models or your own custom models
- Perform tiled inferencing to identify small objects in high-resolution images
- The block library will continue to grow in future releases
- Outputs
- Cloud buckets (Google Cloud and Amazon S3), Pub/Sub, API
- Option to include output image files along with predictions
We’ve updated our reporting for computer vision models trained in the Plainsight platform. You’ll find plenty of new metrics that vary by model type. Improvements include:
- New metrics and tools for each supported model type to help you identify low model performance and diagnose its potential causes
- View your training settings on the evaluation page to see how your model was trained
- Improved training image gallery metadata, sorting and filtering
- Classification
- Added a confusion matrix for comparing class performance and identifying confusion between classes, which can be reordered and filtered to find problematic classes
- Added summary metrics for each class
- Object detection
- Added summary statistics per class
- Added metrics support for this new model type
- Semantic Segmentation
- Added metrics support for this new model type
- Improved communication when credits are running low or are exhausted
You can now label, train and deploy semantic segmentation models!
Semantic segmentation assigns every pixel in an image to a defined label. These models do not track unique instances of objects, so they are not recommended for counting or tracking individual items. Rather, they are useful for calculating the coverage area of defined objects within a frame.
Why Use Semantic Segmentation?
Semantic segmentation models are typically more performant than instance segmentation models because they do not assign unique labels to each detection; instead they paint all instances of a defined object. Common uses for semantic segmentation include:
- Autonomous vehicles: Self-driving cars need low latency and pixel accuracy to ensure the vehicles stay on the road, obey traffic laws, and avoid collisions.
- Precision assembly: Combining robotics with semantic segmentation models allows users to pinpoint the location of materials in a frame and pass those coordinates along for a robot to take action.
- Calculating area: When properly calibrated, semantic segmentation models can help users understand how much of a frame is covered by a defined object with pixel-perfect accuracy, enabling a large range of use cases, from measuring the movement and growth of wildfires all the way down to measuring the growth of bacterial colonies in lab environments.
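The coverage-area calculation described above can be sketched with NumPy, assuming the model’s output is a 2-D array of per-pixel class ids (the `coverage_percent` helper and the sample mask below are illustrative, not part of the platform API):

```python
import numpy as np

def coverage_percent(mask, class_id):
    """Percent of the frame whose pixels are labeled class_id.

    mask: 2-D integer array of per-pixel class labels, as produced
    by a semantic segmentation model.
    """
    return 100.0 * np.count_nonzero(mask == class_id) / mask.size

# Illustrative 4x4 mask: 0 = background, 1 = object of interest.
mask = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
])
```

For this sample mask, `coverage_percent(mask, 1)` reports 37.5, since 6 of the 16 pixels carry label 1; converting that percentage to physical area then only requires knowing what real-world area one pixel represents.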
Model training: We've made improvements, optimizations, and fixed some issues related to model training.
Storage usage: We've fixed an issue with storage usage reporting and have also made it easier to see your current usage compared to your billable usage in the Billing details.
We are excited to share the notes for our April release of the Plainsight vision AI platform. This update includes support for training new model types, UI improvements and updates to our billing and pricing.
Along with updates to our Enterprise vision AI platform, we’ll be launching new On-Demand features over the coming weeks to streamline deployment and adoption of Plainsight’s intuitive computer vision solutions.
We’d love to hear what you think about these new features and about your experience with the Plainsight vision AI platform. Join the official Plainsight Slack channel to connect directly with the Plainsight team.
- Free Labels: Accelerate the creation of robust datasets with free unlimited labels.
- Large File Format Support: Faster loading speeds for large and high-resolution datasets and/or projects with a large number of labels.
- Zoom and Pan: Improved pan and scroll mouse interactions for more accurate labeling.
- Mobile Model Training and Download*: Train and download TFLite models specifically designed and optimized for mobile devices including iOS and Android phones and tablets. Read about how to train TensorFlow Lite mobile models.
- Training and Inferencing with Tiled Images: Slice high-resolution images into tiles in order to detect small objects. Tiled models take image tiles as input, run inference on each tile, stitch the original image back together, and return detections for the image as a whole. Read more about how to use tiling.
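The tiling flow described above can be sketched as follows. This is a minimal illustration, not the platform’s implementation: `run_detector` is a hypothetical stand-in for a real detection model, and the fixed-size, non-overlapping tiling is a simplification (production tilers typically overlap tiles so objects on tile borders are not cut in half):

```python
import numpy as np

def run_detector(tile):
    # Hypothetical stand-in for a real detection model; returns a
    # list of (x, y, w, h) boxes in tile-local pixel coordinates.
    return []

def tiled_inference(image, tile_size=512, detect_fn=run_detector):
    """Slice an image into tiles, detect on each tile, and shift
    the resulting boxes back into full-image coordinates."""
    height, width = image.shape[:2]
    detections = []
    for y0 in range(0, height, tile_size):
        for x0 in range(0, width, tile_size):
            tile = image[y0:y0 + tile_size, x0:x0 + tile_size]
            for x, y, w, h in detect_fn(tile):
                # Boxes come back relative to the tile, so offset
                # them by the tile's top-left corner.
                detections.append((x + x0, y + y0, w, h))
    return detections
```

With a 2048x2048 image and the default `tile_size=512`, the detector sees sixteen 512x512 crops, each large enough relative to small objects for them to register, while the caller still gets one combined list of full-image boxes.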
- Labels Are Now Free and Unlimited: Stop worrying about cost-per-label pricing and create richer datasets for high-performance models.
- $100 in Credits: Jump-start your next computer vision project with $100 in credits that let you explore each section of our vision AI platform.
- Plainsight On-Demand: With pay-as-you-go pricing, explore our full vision AI platform and pay only for usage and functionality that fits your needs.