Neural Network
Each camera is capable of running a neural network and feeding it data from a selected stream. Our API currently handles only models in the MyriadX blob format. To read more about this format and how to convert your models to it, please refer to the Luxonis conversion guide.
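The conversion guide is the authoritative reference; purely as an illustration, the sketch below uses the blobconverter Python package (one common route to a MyriadX blob) to compile an ONNX model. The file name, shave count, and data type are assumptions, and the package's parameters may differ from what the guide currently recommends.

```python
# Sketch: compiling an ONNX model into a MyriadX blob with the
# blobconverter package (pip install blobconverter). All values below
# are illustrative; consult the Luxonis conversion guide for your model.
import blobconverter

blob_path = blobconverter.from_onnx(
    model="my_model.onnx",  # hypothetical source model
    data_type="FP16",       # MyriadX inference typically runs in FP16
    shaves=6,               # number of SHAVE cores to compile for
)
print("MyriadX blob written to", blob_path)
```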
Deploying an AI model on the camera takes some time (usually ~20 s, but it may take up to a minute). During this time the camera is inaccessible via the API, since it needs to restart in order to deploy the model. All running streams from that camera are paused and resume once the camera is back up.
The deploy neural network endpoint accepts a form-data body. The body must contain two fields: model, a binary file containing the neural network in the MyriadX blob format, and nn_config, a stringified JSON object containing the neural network configuration. The nn_config format is described below.
type (required)
Type of the neural network being deployed. Possible values are:
Generic
YOLO
MobileNet
Type: enum
num_inference_threads (optional)
Number of CPU threads to run inference on.
Type: number Default: 2
nn_config (optional)
Type-specific configuration for the deployed model. Type: NNYoloConfig | NNMobileNetConfig
anchor_masks (optional)
Anchor masks assigning anchors to each YOLO output layer.
anchors (optional)
Anchor box dimensions used by the YOLO model.
coordinate_size (optional)
Number of coordinates per detected bounding box.
iou_threshold (optional)
IoU threshold used for non-maximum suppression.
num_classes (optional)
Number of classes the model detects.
confidence_threshold (optional)
Minimum confidence required for a detection to be reported.
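As a sketch of how these pieces fit together, an nn_config for a YOLO-type model might look like the following. Every value is illustrative and must match the deployed model; in particular, the anchor and anchor-mask values shown follow common DepthAI conventions and are not prescribed by this API.

```python
import json

# Illustrative nn_config for a YOLO-type model. All values are examples
# and must be adapted to the model actually being deployed.
nn_config = json.dumps({
    "type": "YOLO",
    "num_inference_threads": 2,
    "nn_config": {
        "num_classes": 80,
        "coordinate_size": 4,
        "anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
        "anchor_masks": {"side26": [1, 2, 3], "side13": [3, 4, 5]},
        "iou_threshold": 0.5,
        "confidence_threshold": 0.5,
    },
})
```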
/cameras/{mxid}/streams/{stream_name}/nn
stream_name: CAM_[A-H]|DEPTH_[A-H]_[A-H]
mxid: [A-Z0-9]+
Response: no body
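Putting it together, a deploy request might look like the sketch below. The HTTP method (POST), base URL, mxid, and stream name are assumptions for illustration; the generous timeout accounts for the camera restart described above.

```python
import json
import requests

BASE_URL = "http://device.local"   # hypothetical base URL of the API
MXID = "14442C10D13EABCE00"        # hypothetical device id, matches [A-Z0-9]+
STREAM = "CAM_A"                   # must match CAM_[A-H]|DEPTH_[A-H]_[A-H]

# A minimal configuration: only the required "type" field is set here.
nn_config = json.dumps({"type": "MobileNet"})

with open("my_model.blob", "rb") as model_file:
    response = requests.post(
        f"{BASE_URL}/cameras/{MXID}/streams/{STREAM}/nn",
        files={"model": model_file},    # binary MyriadX blob
        data={"nn_config": nn_config},  # stringified JSON configuration
        timeout=120,  # deployment restarts the camera (~20 s, up to a minute)
    )
response.raise_for_status()
```

The files= / data= split simply ensures requests sends both fields in a single multipart/form-data body, matching the two-field form described above.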