Vision Detector 4+

Kazufumi Suzuki

    • Free

Description

Vision Detector performs image processing using a CoreML model on Mac.
To use the app, first prepare a machine learning model in CoreML format using CreateML or coremltools. Then open your machine learning model by dragging and dropping it onto the app in the Finder or the Dock.
When an external video input device is connected to your Mac, it is used by priority. If no external device is available, the FaceTime camera on your MacBook is used. You can also use an iPhone near your Mac.
Press the 'Play' button or hit the space bar to start/stop processing the video input.

The supported types of machine learning models include:
- Image classification
- Object detection
- Style transfer
Models lacking a non-maximum suppression layer, or those that use MultiArray for input/output data, are not supported.
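Running an image model over camera frames, as described above, typically goes through Apple's Vision framework. The following is a minimal sketch (not the app's actual code) of classifying a single image with a compiled CoreML model; the function name and model URL are hypothetical:

```swift
import CoreML
import Vision

// Hypothetical helper: load a compiled CoreML model (.mlmodelc) and
// classify one image, returning the top labels with their confidences.
// An app like this would run something similar on each video frame.
func classify(image: CGImage, modelURL: URL) throws -> [String] {
    let mlModel = try MLModel(contentsOf: modelURL)
    let visionModel = try VNCoreMLModel(for: mlModel)
    var labels: [String] = []
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Classification models yield VNClassificationObservation results.
        guard let results = request.results as? [VNClassificationObservation] else { return }
        labels = results.prefix(3).map { "\($0.identifier) (\($0.confidence))" }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])  // completion handler runs synchronously
    return labels
}
```

Object detection models are handled the same way, except the results arrive as `VNRecognizedObjectObservation` values, which is why a built-in non-maximum suppression layer is required.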

In the iCloud Documents folder (/Libraries/Containers/VisionDetector/Data/Documents/), you'll find an empty tab-separated values (TSV) file named 'customMessage.tsv'. This file is for defining custom messages to be displayed while running an object detection model. The data should be organized into a table with two columns as follows:
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
This is experimental.
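The two-column layout above can be illustrated with a short Python sketch that parses such a file; the labels shown are hypothetical examples of what a YOLO-style detector might emit:

```python
import csv
import io

def load_custom_messages(tsv_text):
    """Parse two-column TSV rows of the form: label <tab> message."""
    messages = {}
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        # Ignore malformed rows that do not have exactly two columns.
        if len(row) == 2:
            label, message = row
            messages[label.strip()] = message.strip()
    return messages

# Hypothetical detector labels and messages.
sample = "person\tHello there!\ndog\tWoof detected\n"
print(load_custom_messages(sample))
# → {'person': 'Hello there!', 'dog': 'Woof detected'}
```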

Note: This application does not include a machine learning model.

What's New

Version 1.4

When your Mac has multiple video input devices, it is now possible to switch between them using the Camera menu.
Added support for model files in the .mlpackage and .mlmodelc formats.

App Privacy

The developer, Kazufumi Suzuki, indicated that the app's privacy practices may include the handling of data as described below. For more information, see the developer's privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary based on, for example, the features you use or your age. Learn More

More By This Developer

You Might Also Like

ML Trainer: Make Training Data
Developer Tools
ML Annotator
Developer Tools
True Scanner
Developer Tools
Haptic Wave
Developer Tools
Source Files - Git Storage
Developer Tools
Notate ML
Developer Tools