Tutorials

How to Assess Your Labelling Metrics with Performance Tracking

What is Performance Tracking?

Performance Tracking for labelling pipelines is the active process of recording and evaluating the accuracy and consistency of annotations made by human annotators. It involves tagging annotation tasks with quantitative metrics that project leads can utilize to monitor progress and performance.

Why is Performance Tracking Important?

Data quality is crucial for any machine learning task: it shapes how the model learns patterns and features over time. Discrepancies in data annotations can adversely affect model performance, leading to inaccurate or inconsistent results, so ensuring high-quality annotations is essential for the model to perform at its best.

Data quality can be improved through a variety of techniques, including:

  1. Developing clear and concise annotation guidelines: Well-defined guidelines help annotators understand the task and produce consistent, accurate annotations.
  2. Conducting regular quality checks: Routine checks surface errors and inconsistencies in the annotations early, allowing for timely corrections and improvements.
  3. Using multiple annotators: Assigning several annotators to the same task makes discrepancies visible and lets you measure inter-annotator agreement (see the sketch after this list).
  4. Providing training and feedback to annotators: Ongoing training and feedback improve annotators’ skills and knowledge, leading to more accurate and consistent annotations over time.
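
To make point 3 concrete, a standard way to quantify agreement between two annotators is Cohen’s kappa. The snippet below is a minimal, self-contained sketch in plain Python (independent of Nexus; the example labels are made up) that compares the class labels two annotators assigned to the same set of assets:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b), "both annotators must label every item"
    n = len(labels_a)

    # Observed agreement: fraction of items where both annotators agree.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Expected chance agreement, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )

    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: class labels from two annotators on six assets.
annotator_1 = ["cat", "dog", "dog", "cat", "bird", "dog"]
annotator_2 = ["cat", "dog", "cat", "cat", "bird", "dog"]
print(f"Cohen's kappa: {cohens_kappa(annotator_1, annotator_2):.2f}")
```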

Performance Tracking supports these techniques by providing quantitative metrics to grade annotators’ performance across dimensions such as efficiency and quality.

How Can You Effectively Utilize Performance Tracking in Your Nexus Project?

The Performance Tracker page can be found in the Automation section of your project. Once your project contains annotations, the Performance tab reveals a dashboard of graphs and quantitative metrics. Metrics are collected and saved daily, and the dashboard can be filtered by time range (the past 7 days, 30 days, 3 months, 6 months, or 12 months) as well as by specific collaborators in the Nexus project.

The Performance Tracker Page can be found in the Automation Section in your Project
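
To make the filtering behaviour concrete, here is a small hypothetical sketch using pandas (not the Nexus API; the record layout and column names are invented for illustration) of how daily metric records could be restricted to a time range and a single collaborator, mirroring what the dashboard filters do:

```python
import pandas as pd

# Hypothetical daily metric records, mirroring what the dashboard aggregates.
records = pd.DataFrame(
    {
        "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02", "2024-02-10"]),
        "collaborator": ["alice", "alice", "bob", "bob"],
        "labels_created": [120, 95, 80, 60],
        "labelling_minutes": [55, 40, 38, 30],
    }
)

def filter_metrics(df, days=30, collaborator=None):
    """Keep only the last `days` days, optionally for a single collaborator."""
    cutoff = df["date"].max() - pd.Timedelta(days=days)
    filtered = df[df["date"] >= cutoff]
    if collaborator is not None:
        filtered = filtered[filtered["collaborator"] == collaborator]
    return filtered

# e.g. metrics for "bob" over the past 30 days
print(filter_metrics(records, days=30, collaborator="bob"))
```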

The most prominent graph is a Sankey chart that tracks the annotation progress of the whole project. Assets are aggregated based on their annotation status and their current stage in the annotation workflow. The chart consists of five main categories (a small bucketing sketch follows the list):

  • None - shows the number of assets that are yet to be annotated.
  • Annotated - shows the number of assets that have been annotated and submitted to the next stage.
  • Review - shows the number of annotated assets that are being reviewed. This includes assets that are in the consensus stage.
  • To Fix - shows the number of annotated assets that have been rejected during review and sent back for re-annotation.
  • Completed - shows the number of annotated assets that are ready to be used for training.
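
As a rough illustration of how assets map onto these categories, the hypothetical snippet below (plain Python, not the Nexus API; asset IDs and statuses are made up) counts assets by workflow status to produce the totals that the Sankey chart visualises:

```python
from collections import Counter

# Hypothetical asset records with their current workflow status.
assets = [
    {"id": "img_001", "status": "None"},
    {"id": "img_002", "status": "Annotated"},
    {"id": "img_003", "status": "Review"},
    {"id": "img_004", "status": "To Fix"},
    {"id": "img_005", "status": "Completed"},
    {"id": "img_006", "status": "Completed"},
]

# Count assets per category, in the order the Sankey chart displays them.
categories = ["None", "Annotated", "Review", "To Fix", "Completed"]
counts = Counter(asset["status"] for asset in assets)
for category in categories:
    print(f"{category:>10}: {counts.get(category, 0)} assets")
```
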
Track various metrics using the bar graphs

There are also five bar graphs that track various metrics over time. The main metrics that are covered daily are as follows:

  • The number of annotations or labels created
  • The number of reviews completed for annotated assets
  • The total amount of time used for labelling
  • The total amount of time used to review and rework labels

These charts are updated automatically every day so that you can track the progress of annotations at their various stages and see how they vary day by day. By tracking these metrics, you can tell whether you are labelling at the pace you expect, which stages take the most time, and how efficiently individual labellers are annotating. This gives teams quantitative evidence of the main blockers in the annotation pipeline and helps guide and substantiate specific, actionable improvements.
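
If you keep similar per-task logs of your own, daily totals like those shown in the bar graphs could be computed roughly as follows (an illustrative pandas sketch with invented column names, not the Nexus API):

```python
import pandas as pd

# Hypothetical per-task log: one row per labelling or review action.
tasks = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2024-03-01 09:15", "2024-03-01 11:02", "2024-03-02 10:30", "2024-03-02 14:45"]
        ),
        "action": ["label", "review", "label", "review"],
        "labels": [40, 0, 55, 0],
        "minutes_spent": [35, 12, 48, 15],
    }
)

# Split time spent by action type so it can be summed per day.
tasks["date"] = tasks["timestamp"].dt.date
tasks["reviews_completed"] = (tasks["action"] == "review").astype(int)
tasks["labelling_minutes"] = tasks["minutes_spent"].where(tasks["action"] == "label", 0)
tasks["review_minutes"] = tasks["minutes_spent"].where(tasks["action"] == "review", 0)

# Daily totals mirroring the four metrics listed above.
daily = tasks.groupby("date")[
    ["labels", "reviews_completed", "labelling_minutes", "review_minutes"]
].sum()
print(daily)
```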

Our Developer’s Roadmap

Performance Tracking is one of the tools we introduced to empower teams to collaborate seamlessly and effectively using our new Annotation Workflow. Our roadmap includes additional metrics, such as Efficiency and Quality, that will further improve the collaborative annotation experience and assess the annotation precision of each labeller.

Want to Get Started?

If you have questions, feel free to join our Community Slack to post them, or contact us about how Performance Tracking fits into your workflow.

For more detailed information about the Performance Tracking functionality, customization options, or answers to any common questions you might have, read more about the process on our Developer Portal.
