What is Edge Deployment?
Ever wondered how you can use Google Lens to translate a foreign restaurant menu even while offline? Instead of sending the image to Google’s servers for translation, the Google Translate app ships with a tiny built-in prediction model that runs entirely on your phone’s processors. This is an example of edge deployment, where deep learning models are brought directly onto mobile and embedded devices. In other words, prediction tasks run entirely on the local device, without the data ever leaving it.
Why is Edge Deployment Important?
Deep learning is becoming increasingly prevalent in our society. From shopping recommendations to identifying famous landmarks in pictures, people increasingly rely on such features on their mobile phones. However, many state-of-the-art deep learning models require powerful hardware, typically servers with multiple GPUs, which not everyone has access to, and mobile devices offer only limited computing power of their own. Hence, liberating deep learning from such static servers has plenty of benefits.
Reduced Latency: Server communication incurs latency, as data needs to be sent to the servers for processing and the results sent back to the device to be displayed. This latency can become quite significant depending on the type of data (e.g. a 4K image versus a text file) or the amount of data (e.g. a video feed running at 60 FPS). Running such prediction tasks on-device minimises the waiting time and delivers a smoother, more responsive, near real-time experience.
Connectivity: Since prediction models are hosted on-device, edge deployment gives devices the freedom to operate regardless of whether they are connected to the Internet, since there is no need for any external data transmission. This is crucial for applications such as legged robots and drones mapping terrain in remote areas, or even in outer space.
Privacy: Data transmission signals are prone to interception. By removing the need to transmit data, edge deployment creates a privacy shield where all data collected by the device can only be accessed from the device itself. This is important to protect sensitive user data, especially when running on personal devices.
What Are the Applications of Vision Model Inference on Raspberry Pi?
Many devices are designed to be small and portable. Take Amazon’s Echo or the Google Home Mini, for example: it would be impractical to install multiple GPUs in these devices simply for voice recognition. Other devices, like drones, have a maximum weight capacity. Being able to run lightweight vision models on single-board computers like the Raspberry Pi allows drones to perform tasks like terrain mapping and surveillance.
Raspberry Pi offers integrations with a wide range of peripherals, including controllers, displays, and speakers. With the right set of accessories, you can implement a deep learning solution for just about any use case. If you would like to integrate your Raspberry Pi with your drone, do check out this cool tutorial!
Why Datature Edge?
Our edge deployment of trained models furthers Datature’s mission to democratise the power of computer vision through low code requirements and ease of use.
Edge deployment, coupled with the Datature Nexus platform, gives users uninterrupted access to their trained models for inference without having to reload or manage the models themselves. This takes the responsibility of deployment off your shoulders so that you can focus on putting the predictions to the most effective use possible. We make this simple by streamlining the entire process, from model loading to inference and, finally, visualisation.
How to Set Up Datature Edge on Your Raspberry Pi
For this example, we will be using a Raspberry Pi 4 Model B running the 32-bit Raspbian Buster OS. Please note that the steps involved may differ if you have a different architecture or operating system.
If you have not set up your camera, please refer to this tutorial. Ensure that your camera is enabled and you are able to capture a still image with `raspistill`. The camera will be initialised for 5 seconds before the image is captured.
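For example, the following command opens the camera for a 5-second preview and then saves a still image to test.jpg:
raspistill -t 5000 -o test.jpg
If a valid JPEG is written, your camera is ready for the steps below.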
The first step is to download some handy scripts from our GitHub repository that should minimise any chances of throwing your brand-new Raspberry Pi out of the window (yes, we know it can be quite frustrating at times).
git clone https://github.com/datature/edge.git
cd edge/raspberry-pi
Run the setup script to set up your environment. This updates your firmware, configures your camera using `raspi-config`, and installs the necessary packages, such as Datature Hub, TensorFlow, PiCamera, and OpenCV, for model loading and inference.
chmod u+x setup.sh
./setup.sh
Once the script has been executed to completion, reboot your Raspberry Pi for the camera configuration settings to take effect. Then, check that you have the following four files in their respective directories.
/usr/bin/datature-edge: this is the binary executable compiled from `datature-edge.sh` that allows you to start and stop the camera streaming and inference, and switch between models by specifying the model key and project secret from Datature Nexus.
/usr/src/datature-edge/datature_edge.py: this is the Python script that grabs frames from the Raspberry Pi camera stream, performs inference, and displays the prediction results in real-time. The parent directory should contain other supporting files as well.
/etc/datature_edge.conf: this is a configuration file that stores user parameters such as the confidence threshold and model input size. These values are passed on to the Python script upon invocation (a rough illustration of the file’s contents is given after this list).
/etc/systemd/system/datature_edge.service: this is a system-level service file that is invoked upon startup. It allows the Python inference script to be executed automatically even after the Raspberry Pi has been rebooted. The inference script is also automatically restarted upon failures (such as OOM errors or an accidental KeyboardInterrupt) so that minimal user intervention is required.
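As a rough illustration only, the configuration file might hold simple key-value pairs along the lines shown below. The parameter names here are placeholders we made up for this example, not the exact keys Datature Edge uses, so check the file installed on your device for the real options:
# hypothetical contents of /etc/datature_edge.conf, for illustration only
CONFIDENCE_THRESHOLD=0.7
INPUT_SIZE=640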
Check the status of the Datature Edge service by running:
sudo systemctl status datature_edge.service
To disable this service and run the script manually instead, run:
sudo systemctl disable datature_edge.service
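With the service disabled, you can launch the inference script yourself from the path listed above. Exactly which arguments it expects, beyond those read from `/etc/datature_edge.conf`, depends on your setup, so the invocation below is only a minimal sketch:
# minimal sketch of a manual run; arguments, if any, are setup-dependent
sudo python3 /usr/src/datature-edge/datature_edge.py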
How to Run Datature Edge on a Live Camera Stream
To initialise the camera stream, load your model, and begin the inference process, run `datature-edge` with your model key and project secret from Datature Nexus. This process can take some time depending on the size of your model. The model format and input size are also required fields. Currently, the only model formats we support are TensorFlow (tf) and TFLite (tflite), but we plan to expand to more formats in the future.
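As a sketch of what this looks like, the command below passes the four required fields. The flag names are assumptions made for illustration; consult the Developer Portal or the scripts in the repository for the exact option names:
# hypothetical flag names, shown for illustration only
datature-edge --secret <PROJECT_SECRET> --key <MODEL_KEY> --format tf --input-size 640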
The executable will download your model using our open-source model loader, Datature Hub, and load it in memory. If you would like to use a custom model, you can change the execution mode by adding the option `--local`. Then, you would need to specify a path to your custom model and a path to the labels map as shown below.
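Again, apart from `--local` itself, treat the flags below as illustrative placeholders; the key point is that local mode takes a path to your saved model and a path to its label map instead of a Nexus model key and project secret:
# --local is described above; the remaining flags are placeholders
datature-edge --local --format tf --model /home/pi/models/saved_model --labels /home/pi/models/label_map.pbtxt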
The camera will be initialised and start capturing frames. The model will then analyse each frame and return the predictions, if any. You should be able to see a window displaying the output from the camera stream. To test if your model works, grab a relevant image on your phone or laptop and point the camera at it. If your model has been trained well, you should be able to see the predictions overlaid on the camera feed.
To stop Datature Edge, run:
datature-edge --stop
Voila! You now have a working edge-deployed inference service!
Additional Deployment Capabilities
Once inference on your Raspberry Pi is up and running, you can make full use of your deep learning model and take your pipeline to the next level. If latency is not a priority, you can also consider platform deployment on Datature Nexus, which sends data to a hosted model for predictions instead. With our Inference API, you can always adjust the deployment’s capabilities as needed.
Our Developer’s Roadmap
We have a roadmap in place to make Datature Edge more versatile by adding compatibility with other edge deployment formats such as ONNX. This will allow Datature Edge to serve a wider suite of devices and applications. We are also looking at integrating a simple frontend inference dashboard with Streamlit to stream the camera feed and prediction results for convenient visualisation.
Want to Get Started?
If you have questions, feel free to join our Community Slack to post them, or contact us about how edge deployment fits into your use case.
For more detailed information about Datature Edge’s functionality, customisation options, or answers to any common questions you might have, read more about Datature Edge on our Developer Portal.