Vision Detector 4+

Run your Vision CoreML model

Kazufumi Suzuki

    • 4.0 • 1 Rating
    • Free

Description

Unlock the power of CoreML on video streams with Vision Detector, which simplifies running your model without the need for Xcode previews or application builds.

Vision Detector performs real-time image processing using a CoreML model on Mac.
To use the app, first prepare a machine learning model in CoreML format using Create ML or coremltools.
When you launch Vision Detector, it searches for input devices in this order: external video inputs connected to your Mac, the MacBook's built-in FaceTime camera, and nearby iPhones, and then displays the video.
You can switch input devices via the camera menu.
You can select your CoreML model from the app's Open menu or the control panel buttons, or drag and drop it onto the Vision Detector icon in the Finder or Dock.
Once the model is loaded, you can start or stop processing by pressing the Play button or the space bar.
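
If you need to produce the CoreML file yourself, a conversion with coremltools typically looks like the minimal sketch below. This is illustrative only and not part of Vision Detector; the MobileNetV2 source model, the 'labels.txt' file, and the output file name are placeholders you would replace with your own.

# Minimal sketch: convert a traced PyTorch image classifier to CoreML with coremltools.
# The source model, label file, and output name below are placeholders.
import coremltools as ct
import torch
import torchvision

source = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(source, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=(1, 3, 224, 224))],
    classifier_config=ct.ClassifierConfig("labels.txt"),  # one class label per line
)
mlmodel.save("MyClassifier.mlpackage")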

The supported types of machine learning models include:
- Image classification
- Object detection
- Style transfer
Models lacking a non-maximum suppression layer, or those that use MultiArray for input/output data, are not supported.
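
If you are unsure whether a model falls into the unsupported MultiArray case, you can inspect its input and output feature types with coremltools before loading it into the app. The sketch below is an assumption-laden example; 'MyModel.mlmodel' is a placeholder path.

# Minimal sketch: print each input/output feature type of a CoreML model.
# 'MyModel.mlmodel' is a placeholder path.
import coremltools as ct

spec = ct.utils.load_spec("MyModel.mlmodel")
for kind, features in (("input", spec.description.input), ("output", spec.description.output)):
    for f in features:
        # Prints e.g. 'imageType' or 'multiArrayType' for each feature.
        print(kind, f.name, f.type.WhichOneof("Type"))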

In the iCloud documents folder (/Libraries/Containers/VisionDetector/Data/Documents/), you'll find an empty tab-separated values (TSV) file named 'customMessage.tsv'. This file is for defining custom messages to be displayed while running an object detection model. The data should be organized into a table with two columns, as follows:
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
This is experimental.
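For example, a hypothetical customMessage.tsv for a model that outputs COCO labels (the labels and messages here are made up) might contain:
person (tab) A person is in the frame
dog (tab) Dog detected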

Note: This application does not include a machine learning model.

What’s New

Version 1.6.2

- Automatic window resizing feature
- Use iPhone as an external camera (macOS 14 or later)
- Adjustment of the load meter (10 increments represent 100ms)

Ratings and Reviews

4.0 out of 5
1 Rating

Bytuf

Sadly it only supports webcam or live input.

I'm sure this is useful for many people and it is free (thank you dev), but for me it wasn't.

I wish we could use mp4 videos or even images to test our CoreML models quickly instead of being limited to webcam and live video inputs.

Anyway, it didn't seem right to rate it below 4 stars as it is a free app.

App Privacy

The developer, Kazufumi Suzuki, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary, for example, based on the features you use or your age.

You Might Also Like

ML Trainer: Make Training Data
Developer Tools
ルーレット.
Developer Tools
ML Annotator
Developer Tools
True Scanner
Developer Tools
Notate ML
Developer Tools
SSNG
Developer Tools