Manage and Deploy Trained Models on Datature Nexus
From Training Artifact to Production Model
After a training run completes, Datature Nexus stores the resulting model as an artifact. Managing these artifacts well matters when you have multiple training runs, different model versions, and several projects running in parallel. This tutorial covers the model management and deployment workflow.
What This Tutorial Covers
The tutorial walks through the post-training model lifecycle:
- Locating trained model artifacts in your project dashboard
- Comparing performance across model versions
- Exporting models for local use or edge deployment
- Deploying a model as an inference API endpoint directly from Nexus
The walkthrough covers these steps in under two minutes.
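To make the export step above concrete: once a model is exported (for example, to ONNX), it can be run locally with a generic ONNX runtime. The sketch below is illustrative only, assuming a hypothetical exported file named `model.onnx` and a model that accepts a normalized NCHW float32 image; check the actual input signature and resolution of your own export before reusing it.

```python
import numpy as np
from pathlib import Path

def preprocess(image, ):
    """Normalize an HxWx3 uint8 image into a 1x3xHxW float32 batch.

    Assumes the image is already resized to the model's expected
    resolution; the layout (NCHW, values in [0, 1]) is a common
    convention, not something guaranteed by every export.
    """
    x = image.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = np.transpose(x, (2, 0, 1))         # HWC -> CHW
    return x[np.newaxis, ...]              # add batch dimension -> NCHW

# Running the exported model requires the onnxruntime package and a real
# model file on disk; both are assumptions outside this tutorial.
if Path("model.onnx").exists():
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx")
    input_name = session.get_inputs()[0].name
    dummy = np.zeros((640, 640, 3), dtype=np.uint8)  # stand-in image
    outputs = session.run(None, {input_name: preprocess(dummy)})
    print([o.shape for o in outputs])
```

The preprocessing is kept separate from the runtime call so it can be reused unchanged whether the weights run through ONNX Runtime, TFLite, or a custom pipeline.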
Deployment Options
Datature supports multiple paths from trained model to production. You can deploy as a hosted API for server-side inference, export to TFLite or ONNX for edge devices, or download the weights for integration into your own pipeline. Teams that need a quick testing endpoint can spin one up from the Nexus dashboard without writing deployment code.
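For the hosted-API path, inference is just an HTTP call from any client. The endpoint URL, credential, auth scheme, and payload field names in the sketch below are hypothetical placeholders, not Datature's documented API; consult the API deployment guide linked below for the real request format.

```python
import base64
import json
import urllib.request

# Both values are placeholders, not real Datature endpoints or keys.
API_URL = "https://example-inference.endpoint/predict"
API_KEY = "YOUR_PROJECT_SECRET"

def build_request(image_bytes):
    """Package raw image bytes as a JSON POST with a base64 payload.

    The "image" field name and Bearer auth header are illustrative
    assumptions; match them to your deployment's actual schema.
    """
    body = json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

# Sending the request requires a live endpoint:
# with urllib.request.urlopen(build_request(open("test.jpg", "rb").read())) as resp:
#     predictions = json.loads(resp.read())
```

Separating request construction from the network call makes the payload easy to inspect and test before pointing it at a real endpoint.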
For API deployment details, see "How to Use API Deployment for Trained Model Inference". For edge deployment, see "How to Load Vision Models on Raspberry Pi for Edge Deployment".

