
How to obtain high-quality annotations for AI training data?

AI development requires high-quality training data. To achieve that, reliable data annotation methods are a must. Read our latest post to learn how EyeVi does it.

It is well known that AI development requires a lot of training data. In supervised learning, the training data are annotated, i.e., the features important for the task at hand are manually labeled and described. The quality of the training data determines the quality of the AI's performance, yet the quality of the annotations themselves has rarely been at the forefront. This is what we discuss in this post, offering examples and practical methods for ensuring that the annotations uphold the quality of the training data.

Various data types and annotation techniques

Different types of data require different annotation techniques. Thus, the data itself and the AI's task define which annotation techniques to use. For example, in natural language processing, these techniques can include i) sentiment annotation, i.e., labeling the emotions and opinions expressed in the text; ii) text classification, i.e., sorting texts into predetermined categories; and iii) entity annotation, i.e., assigning words to categories such as verbs, nouns, and adjectives (1).
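To make these techniques a bit more concrete, here is a minimal sketch of how such text annotations could be stored as simple records. The label names and fields are invented for illustration; they are not the schema of any particular tool or of the source cited above.

```python
# Illustrative records for the three text-annotation techniques above.
# All labels, categories, and field names are hypothetical examples.

# i) Sentiment annotation: label the emotion/opinion expressed in a text
sentiment_annotation = {
    "text": "The new road surface is fantastic.",
    "label": "positive",
}

# ii) Text classification: assign the whole text to a predetermined category
classification_annotation = {
    "text": "Potholes reported on Main Street after heavy rain.",
    "category": "road_maintenance",
}

# iii) Entity annotation: tag individual words with word-class labels
entity_annotation = {
    "tokens": ["Potholes", "damage", "cars"],
    "tags": ["NOUN", "VERB", "NOUN"],
}

print(sentiment_annotation, classification_annotation, entity_annotation, sep="\n")
```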

Visual imagery requires other approaches. In computer vision, annotation tasks can include image classification, object detection, and segmentation (2). Image classification is used when the purpose of the annotation is simply to identify an object's presence in the images of the dataset – in other words, whether an image includes an entity (e.g., a car) or not, nothing more.

Object detection adds further information about the visual scene to the mere presence of the entity, such as its location and count. The most complex type of image annotation is segmentation, which in turn can be divided into three types: i) semantic segmentation (used to annotate all objects of the same category as one region); ii) instance segmentation (used to annotate each instance of an entity within a category separately); and iii) panoptic segmentation (combining semantic and instance segmentation) (3). Segmentation is used to identify the exact location and type of the annotated object.
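As a rough illustration (not a specific dataset format), the records below show how the annotated information grows from classification to detection to segmentation for a single image. All field names, labels, and coordinates are invented.

```python
# Illustrative annotation records for one image, showing how the information
# grows from classification to detection to segmentation.
# Field names and values are hypothetical, not a specific dataset format.

image_id = "frame_000123.jpg"

# Image classification: only the presence of an entity is recorded
classification = {"image": image_id, "label": "car"}

# Object detection: location (bounding box) and count are added
detection = {
    "image": image_id,
    "objects": [
        {"label": "car", "bbox_xywh": [412, 230, 96, 54]},
        {"label": "car", "bbox_xywh": [150, 244, 88, 50]},
    ],
}

# Segmentation: every pixel belonging to an object is annotated, here as a
# polygon outline. Instance segmentation keeps each car separate, while
# semantic segmentation would merge them into one "car" region.
segmentation = {
    "image": image_id,
    "instances": [
        {"label": "car", "polygon": [(412, 230), (508, 230), (508, 284), (412, 284)]},
        {"label": "car", "polygon": [(150, 244), (238, 244), (238, 294), (150, 294)]},
    ],
}
```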

Whichever data type or annotation technique is used, the annotations must be of top quality to achieve the best results.

Annotation errors and their impact on data processing

Since manual annotations are done by humans, they are also prone to human error – none of us are perfect, and annotating data can be a repetitive and highly attention-demanding task. Some of the most common errors when annotating imagery data are the following (4); a simple automated check for some of them is sketched after the list:

  1. an object is classified incorrectly,
  2. the state of the object is not described correctly,
  3. an object is not annotated,
  4. the annotation is not precise enough (it does not match the object's actual dimensions or position).
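Whenever a trusted reference annotation exists for a sample, some of these errors can be flagged automatically. The sketch below is one possible check, assuming a hypothetical box-annotation format; it reports missing objects, incorrect classes, and imprecise boxes via intersection-over-union, with an arbitrary threshold.

```python
# A minimal sketch of automated error flagging for box annotations, assuming a
# hypothetical record format {"label": str, "bbox": (x1, y1, x2, y2)} and a
# trusted reference annotation to compare against.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def check_annotations(annotated, reference, iou_threshold=0.7):
    """Flag missing objects, incorrect classes, and imprecise boxes (errors 1, 3, 4)."""
    issues = []
    for ref in reference:
        # Best-overlapping annotated box for this reference object
        best = max(annotated, key=lambda ann: iou(ann["bbox"], ref["bbox"]), default=None)
        overlap = iou(best["bbox"], ref["bbox"]) if best else 0.0
        if overlap == 0.0:
            issues.append(("missing object", ref))
        elif best["label"] != ref["label"]:
            issues.append(("incorrect class", ref))
        elif overlap < iou_threshold:
            issues.append(("imprecise box", ref))
    return issues

# Invented example: one reference car, annotated with a slightly shifted box
reference = [{"label": "car", "bbox": (412, 230, 508, 284)}]
annotated = [{"label": "car", "bbox": (420, 238, 512, 290)}]
print(check_annotations(annotated, reference))  # flags an imprecise box
```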

However, not all errors are caused by mislabeling. It can easily happen that requirements change in the middle of a project, so annotations made earlier may appear to be mislabels after the change. Poorly defined classes in the assignment instructions may also cause confusion and divergent interpretations when annotating the objects. Thus, procedural mishaps can also be at fault for annotation errors.

Nevertheless, whatever the reason, errors in the training data lead to errors in the AI-processed data, which can have serious real-life consequences. For example, if the base data for road defect detection is faulty, the AI-processed data can lead to miscalculations in road network management. This, in turn, raises road management costs due to untimely repair actions and might, in the worst-case scenario, even lead to road accidents.

How can such annotation errors be avoided and the quality of the training data kept impeccable? This depends both on the workforce you use and on the methods you implement to run the annotation process.

Which workforce performs the best?

In general, annotation projects can be outsourced or handled by in-house annotators. When outsourcing data labelling, one can use either crowdsourcing or managed teams. Crowdsourcing relies on many anonymous workers who are not monitored and are paid per task, whereas managed teams are monitored and paid by the hour. When the two are compared on annotation quality, managed teams win: the accuracy of manual annotations is lower for crowdsourced labor than for managed teams, by 7% in simple transcription tasks and by up to 80% in more complex tasks that demand specialist knowledge (5). Thus, when outsourcing, higher quality is ensured with managed teams. Crowdsourcing, however, can be useful for projects that need simple annotations in large volumes within a short time span.

The merit of having an in-house team of annotators, your own team of trained and skilled specialists, is the flexibility you gain: continuous learning, the ability to reorient quickly within the data, and the capacity to train new staff. In-house teams may lack the easy scalability characteristic of managed teams, but when needed, experienced annotators can act as quality specialists and managers for outsourced annotation projects.

In addition, encouraging collaboration between developers and in-house annotators can raise annotation quality: the annotators gain a better understanding of what is needed, and the AI specialists gain a clearer picture of the problems annotators face and the impact these can have on the AI model.

When working with in-house teams of annotators, there is a lot that the company can do to ensure and improve the quality of labelling.

How can solid annotation methods improve annotation quality within an in-house team?

These are the best practices we at EyeVi Technologies implement in our projects to make sure that we provide the best outcome for our partners and clients. This is also what we recommend doing when using in-house teams of annotators.

First and foremost, ensure quality training for your in-house annotators. This will give you an edge that you would not have with managed teams. Why? Because there is always a learning curve – the data and the nature of the annotations always differ between and within domains, depending on the final goal of the task at hand. So even if you choose managed teams, they will also have to adapt to your data. Moreover, there is no guarantee that you will get the same team you used on your last project, so an outsourced team may face a learning curve in every project. Having your own well-trained team guarantees stability in the team's knowledge and skill set and ensures that projects are completed quickly and efficiently.

Second, discuss your data decisions and document them thoroughly. There are always edge cases in the data; theory is one thing, but processing real-life data is another. Discuss with the annotators how to proceed with edge cases and agree on the grounds for your decisions. Then write it all up and keep the documentation available to all annotators. This way, relevant information is quickly accessible and serves as a basis for similar edge cases. It also reduces subjectivity in decision-making and helps unify the annotations.
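What such documentation looks like is up to each team. As one hypothetical example, an edge-case decision log could be kept as structured records like the one below, so that earlier decisions are searchable before a new judgment call is made. All field names and values are invented.

```python
# A hypothetical structure for an edge-case decision log; every field name and
# value here is an invented example, not an actual project decision.
edge_case_log = [
    {
        "id": "EC-014",
        "case": "Car partially hidden behind a tree: annotate or skip?",
        "decision": "Annotate if roughly a third or more of the object is visible.",
        "rationale": "Agreed with the development team to keep occluded "
                     "examples, since the model must handle partial views.",
        "example_images": ["frame_000482.jpg"],
    },
]
```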

Third, implement quality control. Have some of the data annotated twice to keep track of quality as well as of subjective bias in the annotations. This is especially useful at the beginning of a new project or while the annotators are still learning.
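Double-annotated samples also make it possible to measure how consistently the team labels the same data. One common way to do this (not something prescribed in this post) is an inter-annotator agreement score such as Cohen's kappa; the sketch below assumes simple per-image class labels and uses invented example data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Class labels assigned to the same ten images by two annotators (invented data)
annotator_1 = ["car", "car", "sign", "car", "sign", "car", "car", "sign", "car", "car"]
annotator_2 = ["car", "car", "sign", "car", "car", "car", "car", "sign", "car", "sign"]
print(f"Cohen's kappa: {cohens_kappa(annotator_1, annotator_2):.2f}")
```

A low score on the double-annotated subset is a signal to revisit the instructions or the edge-case documentation rather than to blame individual annotators.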

Fourth, encourage communication within the annotator teams and between the annotators and development teams. This can be done through team chats and meetings across teams. Efficient and open communication means faster problem solving and less faltering in the projects.

Sources

1. Tarjama. Data Annotation: Types and Use Cases for Machine Learning. [Internet, cited 2022 February 25]. Available from: https://www.tarjama.com/data-annotation-types-and-use-cases-for-machine-learning/

2. EyeVi Technologies. Artificial intelligence (AI): how it works and why to use it? [Internet, cited 2022 February 25]. Available from: https://www.eyevi.tech/blog/artificial-intelligence-ai-how-it-works-and-why-to-use-it

3. CloudFactory. Image Annotation for Computer Vision. A Guide to Labeling Visual Data for Your Machine Learning Project. [Internet, cited 2022 February 25]. Available from: https://www.cloudfactory.com/image-annotation-guide

4. Steffen Enderes. The Impact of Annotation Errors on Neural Networks. [Internet, cited 2022 February 27]. Available from: https://understand.ai/blog/annotation/machine-learning/autonomous-driving/2021/06/01/impact-of-annotation-errors-on-neural-networks.html

5. Hivemind and CloudFactory. Crowd vs. Managed Team: A Study on Quality Data Processing at Scale. [Internet, cited 2022 February 27]. Available from: https://go.cloudfactory.com/hubfs/02-Contents/3-Reports/Crowd-vs-Managed-Team-Hivemind-Study.pdf
