AI Image Annotation – SentiSight.ai
https://www.sentisight.ai

Top 5 Image Annotation Tools and Their Use Cases
https://www.sentisight.ai/5-best-image-annotation-tools-and-applications/ (Sun, 23 Jan 2022)

Building and training image recognition models fit for industry use across various sectors is a science in itself, and it requires enormous amounts of data, which in this case means image data. These datasets need image annotation, also commonly referred to as image labeling, to make them suitable for training machine learning models.

SentiSight.ai’s online image recognition platform lets AI enthusiasts, and anyone who needs image recognition models for their industry, annotate image datasets at scale regardless of their experience or knowledge of AI and machine learning.

The graphic below summarizes the image annotation service provided by SentiSight.ai, where users can draw on various features to perform image annotation tasks more efficiently.

Available tools and features on the SentiSight.ai platform for annotating images.

This article focuses on the different labeling tools SentiSight.ai has to offer and their most popular use cases. The tools covered include:

  • Bounding boxes
  • Keypoints
  • Polygons
  • Bitmap and smart labeling
  • Polylines

Image labeling – why do we need it?

Image annotation is the process of classifying images and creating labels that describe the objects within them; with sufficient training, this is what produces well-performing object detection models. It is a crucial stepping stone in any supervised machine learning project, because the quality of the initial data determines the quality of the final model. Mislabeled images can cause the model to be trained incorrectly and consequently produce undesirable results.

To develop a neural network model well, data scientists collect vast datasets containing hundreds of images, and labeling all of them correctly is a tedious, resource-heavy and lengthy process. The more people annotate the same project, the more confusing it can get: images can be duplicated, mislabeled or not labeled at all. A good management system is therefore a must.

To make the image annotation process more efficient, developers have built numerous data labeling tools that allow for quicker and more precise annotation. One such tool is our own platform, SentiSight.ai.

What are the most popular types of image annotation?

1. Bounding boxes

In computer vision there are several labeling options, ranging from the simplest to the most complex. The most commonly used type of annotation is the bounding box: a rectangular box that marks the location of an object and is represented by four coordinates defining its corners. However, bounding boxes are not a very precise way to label images, since most objects are not rectangular.
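
For illustration, here is a minimal sketch (in Python, not tied to SentiSight.ai's export format) of the corner-coordinate representation and its conversion to the equally common x, y, width, height form:

```python
# A minimal sketch (not SentiSight.ai's export format): representing a
# bounding box by its corner coordinates and converting between the two
# most common conventions.

def corners_to_xywh(x_min, y_min, x_max, y_max):
    """Convert corner coordinates to (x, y, width, height)."""
    return x_min, y_min, x_max - x_min, y_max - y_min

def xywh_to_corners(x, y, w, h):
    """Convert (x, y, width, height) back to corner coordinates."""
    return x, y, x + w, y + h

# Example: a 200x100 px box whose top-left corner is at (50, 40).
print(corners_to_xywh(50, 40, 250, 140))   # (50, 40, 200, 100)
print(xywh_to_corners(50, 40, 200, 100))   # (50, 40, 250, 140)
```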

Applications of bounding boxes include autonomous vehicle development, where they help detect other vehicles on the road, and face detection, which can be used in workplaces for attendance monitoring, security and much more.

Vehicle detection application of bounding boxes.

2. Keypoints

The keypoints feature is a sub-class of the bounding box tool. Once the user has drawn a bounding box around an object in the image, keypoints can be selected to mark specific points inside it. Each keypoint can be assigned its own label, and, once saved, the set of keypoints can be reused in the same order on another selection.
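
As a rough illustration, a keypoint annotation attached to a bounding box might be laid out like the hypothetical structure below; the field names are ours for the example, not SentiSight.ai's schema:

```python
# A hypothetical data layout (not SentiSight.ai's schema) illustrating how
# keypoints extend a bounding box: each keypoint carries its own label and
# the label order can be reused across objects.

person_keypoint_labels = ["left_eye", "right_eye", "nose", "left_wrist", "right_wrist"]

annotation = {
    "bbox": [50, 40, 250, 300],          # x_min, y_min, x_max, y_max
    "keypoints": {
        "left_eye":  (110, 80),
        "right_eye": (150, 82),
        "nose":      (130, 100),
    },
}

# Reusing the same label order on another selection keeps the dataset consistent.
```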

This labeling feature is useful for person detection and recognition: with keypoints, more precise face recognition models can be trained than with bounding boxes alone. Such detections can be useful for voter registration, video surveillance or even employee time and attendance systems.

Bounding boxes (Image annotation / image labeling)

3. Polygons

Alternatively, our platform lets you define an object accurately by drawing a complex polygon around it, which can be edited by adding, moving or removing individual anchor points. The polygon annotation tool is useful when working with occluded objects, since it can combine separate parts into one joint structure or cut holes out of the initial selection.
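
As a small aside, the "cut out holes" behaviour can be pictured as an outer ring minus inner rings; the sketch below assumes that simple list-of-rings representation (it is not SentiSight.ai's internal format) and uses the shoelace formula to compute the net labeled area:

```python
# A minimal sketch, assuming a list-of-rings polygon representation:
# an outer ring plus optional hole rings, with the shoelace formula
# giving the enclosed area.

def shoelace_area(ring):
    """Area of a simple polygon given as [(x, y), ...] vertices."""
    n = len(ring)
    s = 0.0
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

outer = [(0, 0), (100, 0), (100, 100), (0, 100)]
hole = [(40, 40), (60, 40), (60, 60), (40, 60)]

# Net labeled area = outer area minus the cut-out hole.
print(shoelace_area(outer) - shoelace_area(hole))  # 9600.0
```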

Polygon use cases fall under the same bracket as bitmaps, which are detailed below.

AI property development using the polygon annotation tool.

4. Bitmap and smart labeling

If users need to select an object with a complex shape, adding numerous separate points to form a polygon can become too difficult. For this task, free-form hand-drawn bitmaps are an easier option. The tool works like a paintbrush, masking the selected area with a specific color; the mask can then be converted to a polygon and vice versa to speed up the labeling process. It can be used as a drawing brush or as an eraser, which helps maintain accuracy.
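
One generic way to picture the bitmap-to-polygon conversion (using OpenCV 4.x as an example toolkit, not SentiSight.ai's own converter) is to trace the mask's contour and rasterize it back:

```python
# Generic bitmap <-> polygon round trip using OpenCV 4.x (an assumption,
# not SentiSight.ai's implementation).
import numpy as np
import cv2

# A binary mask, as a free-form brush might produce it.
mask = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(mask, (50, 50), 30, 255, thickness=-1)

# Mask -> polygon: trace the outer contour and simplify it.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygon = cv2.approxPolyDP(contours[0], epsilon=1.5, closed=True)

# Polygon -> mask: rasterize the polygon back into a filled bitmap.
restored = np.zeros_like(mask)
cv2.fillPoly(restored, [polygon], 255)
```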

To speed up the bitmap labeling process further, SentiSight.ai offers an AI-assisted smart labeling feature, similar to the ‘Magic Wand Tool’ in Adobe Photoshop, that automates the process. The user only needs to roughly mark the object area, defining both the foreground and the background with the given selection tools, and the AI algorithm does the rest by extracting the object. The tool works best with high-contrast images, but even when picture quality is only subpar, the process can be repeated as many times as needed to reach a satisfactory result.
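
For intuition, an analogous interactive-segmentation technique is OpenCV's GrabCut, which also extracts an object from rough foreground/background hints; the sketch below illustrates the idea only and is not the algorithm behind SentiSight.ai's smart labeling tool (the image path and rectangle are placeholders):

```python
# An analogous interactive-segmentation technique (OpenCV's GrabCut), shown
# only to illustrate the rough-hints idea; not SentiSight.ai's algorithm.
import numpy as np
import cv2

image = cv2.imread("example.jpg")                       # placeholder path to any BGR image
mask = np.zeros(image.shape[:2], dtype=np.uint8)
rect = (20, 20, 200, 150)                               # rough object region (x, y, w, h)

bgd_model = np.zeros((1, 65), dtype=np.float64)
fgd_model = np.zeros((1, 65), dtype=np.float64)
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground become the extracted object.
foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```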

Use cases for the polygon and bitmap tools range from training autonomous vehicles to better understand their surroundings, to identifying ripe crops that are ready for harvest, to anatomy labeling in medical imaging, such as brain imaging with CT scans.

Bitmap and polygon use case of crop identification.

5. Polylines

As the name suggests, the polyline tool is used when users wish to draw lines along boundaries or edges within the image. Training a model on such data will enable it to recognize those edges in unlabeled images.

Polylines have a number of use cases, including outlining road lanes to make autonomous driving simpler for self-driving cars, and optimizing crop yields in agriculture by first distinguishing each row and then spotting weeds between and within the crops.

Polyline annotation tool use case within agriculture.

An easy way to label pictures quickly and efficiently


SentiSight.ai makes image annotation tasks much easier by offering a user-friendly interface. Label visibility can be turned on or off, and label opacity can be adjusted to better see the objects behind them. Each tool has a keyboard shortcut, which significantly speeds up the labeling process. After finishing a task, the selected labels can be downloaded in JSON or CSV format. The colored bitmaps needed for semantic segmentation, used in self-driving cars and robotics, can be downloaded along with the original images, and the same goes for the black-and-white bitmaps used in instance segmentation. The platform suits both beginners, who get a straightforward user guide and video tutorials, and experts, who can make use of advanced features. Depending on the size of your project, SentiSight.ai can be free to start, making it accessible to anyone willing to give it a try.
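
As an aside, here is a generic sketch (not tied to SentiSight.ai's download format) of how a colored semantic-segmentation bitmap can be split into per-class binary masks, assuming you know which color encodes which class:

```python
# Generic sketch: split a colored semantic-segmentation bitmap into one
# binary mask per class. The color-to-class mapping below is assumed for
# the example, not taken from SentiSight.ai's export.
import numpy as np

color_to_class = {
    (255, 0, 0): "car",       # assumed: red pixels mark cars
    (0, 255, 0): "road",      # assumed: green pixels mark road
}

def split_masks(colored_bitmap: np.ndarray) -> dict:
    """colored_bitmap: H x W x 3 uint8 array; returns {class: H x W bool mask}."""
    masks = {}
    for color, name in color_to_class.items():
        masks[name] = np.all(colored_bitmap == np.array(color, dtype=np.uint8), axis=-1)
    return masks

# Tiny 2x2 example "image": one car pixel, two road pixels.
bitmap = np.array([[[255, 0, 0], [0, 255, 0]],
                   [[0, 0, 0],   [0, 255, 0]]], dtype=np.uint8)
print(split_masks(bitmap)["road"].sum())   # 2
```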

AI-assisted tagging

To train a deep neural network model well, large datasets containing thousands, sometimes even millions, of annotated pictures are needed. Labeling them one by one, even with the help of useful tools, is still time-consuming. Therefore, SentiSight.ai offers AI-assisted labeling, which enables an iterative labeling process.

Tags assigned through labeling are what allow machine learning models of this kind to work well across various industries. For instance, image recognition for retail relies on product tagging so that users can shop online with ease, with categories built around item tags.

To get started, the user annotates a small number of images and trains a neural network model on them, which can then be used to make predictions on the rest of the dataset. The platform then lets the annotator review the results and correct specific labels where needed.

Because there are already suggested annotations to choose from, the user does not have to spend much time creating new ones. Afterward, the reviewed images can be added to the training set so that a more accurate version of the model can be trained. Repeating the process with more pictures yields better results with every iteration.

AI-assisted image annotation

Convenient management system

As we enter a new era of AI technology, with deep learning algorithms powering autonomous vehicles, facial recognition and robotics, image labeling tasks are becoming more important than ever. Some projects require a huge number of images to be labeled, which cannot be done by one or two labelers alone, so the lack of a strong management system can prove a real challenge for large teams working on a single project.

On our platform, users can share projects within their team, and supervisors can add new roles and manage permissions. They can also track the time spent labeling images, either through day-to-day statistics or the overall time spent on a project, and these summaries can be downloaded in CSV format. The platform lets users filter images by type (‘seen by you’, ‘labeled by you’, ‘marked as validation set’, and so on), by name and by image status, which can be set to ‘seen’ or ‘labeled’ by particular team members to avoid duplication, track your work and serve as a reminder of what has already been done. If project guidelines change and annotation revisions are needed, images can be marked as ‘seen’ or ‘unseen’ again. Since SentiSight.ai is an online tool, a supervisor can check everyone’s work in real time.

The power of sight

The ability to see and understand what we see comes naturally to humans, so we tend to take it for granted. However, teaching someone else, especially a machine, to perceive things as we do is a long and laborious process. In computer vision, correctly describing the ground truth is a critical task that requires careful consideration. Therefore, whether you are training models for single-label classification, multi-label classification or other requirements, a straightforward and quick process is vital. Image labeling platforms such as SentiSight.ai are here to help, offering powerful tools that improve the technology we currently have at our disposal.

Image annotation project management using SentiSight.ai
https://www.sentisight.ai/image-annotation-project-management-using-sentisight-ai/ (Wed, 01 Sep 2021)

Image labeling, also commonly known as image annotation, is one of the most important processes in the field of Artificial Intelligence. It is the stepping stone of supervised machine learning, providing the learning material that models are trained on.

According to the AI-oriented analyst firm Cognilytica, approximately 80% of machine learning project time is spent on collecting, cleaning, labeling and augmenting model data, with labeling taking up 25% of that time. Most companies cannot afford to spend so much time on tedious manual work, so they are always on the lookout for ways to optimize their workflow.
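
To put those figures in perspective (reading the 25% as a share of the data-preparation time), a 1,000-hour project would spend roughly 800 hours on data preparation, with about 200 of those hours going to labeling alone.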

To address this, SentiSight.ai offers a feature-rich dashboard that not only makes the labeling process more efficient but also helps you manage your labeling staff. Today we are going to walk you through what to keep in mind when working on a large-scale project and how to make the most of our labeling tool.

Uploading your images

You begin the labeling process by uploading data to your account from the menu on the left. Whether you upload all images at once or one at a time, you can choose the initial classification labels, add images to the validation set and preprocess them for classification and similarity search, so that image recognition models work faster should you need them later.


By right-clicking on an image, you can access quick-action features such as image rotation, similarity search or the labeling tool. While some of these ensure your data has the right size and label, the image status features, such as marking data as seen or unseen, prove their worth when working on the same project with the rest of your team.

Image labeling tool


After opening the image labeling tool, you will be presented with a wide variety of annotation instruments, with their keyboard shortcuts given in brackets:

  1. Bounding box (B). The most popular form of labeling, commonly used to annotate images before utilizing them in training an object detection model. After applying the rectangular bounding box to an object, you are able to mark its most important key points, such as human joints or facial features.
  2. Polygon (P). A labeling form allowing you to curve around corners by adding and manipulating the anchor points. Very useful for marking objects of a complicated shape, occluded objects as well as carving holes inside objects. 
  3. Bitmap (J). A free-form labeling tool that allows you to cover an object by drawing on it, then converting it to a polygon if needed. 
  4. Polyline (L). Similar to the polygon option, this tool is manipulated via anchor points. The difference from a polygon is that a polyline’s first and last points are not connected, so it does not enclose an area.
  5. Point (T). This tool is used to label small objects. You can change the representation of a point to either a circle or a cross.
  6. Smart labeling tool (M). The smart labeling tool is designed to speed up bitmap labeling. The algorithm behind it automatically extracts the object from the selected foreground and background areas.

Further information about these tools can be found in our article Image annotation with SentiSight.ai, which will help you choose the most suitable one for your project.
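
To make the distinctions between these tools concrete, here is a hedged sketch of how the different label geometries might be represented in code; the structures below are illustrative only and are not the format SentiSight.ai uses or exports.

```python
# Illustrative structures only, not SentiSight.ai's data format.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class BoundingBox:            # (B) rectangle: four corner coordinates
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class Polygon:                # (P) closed ring of anchor points, may carry holes
    outer: List[Point]
    holes: List[List[Point]]

@dataclass
class Polyline:               # (L) open chain of points; does not enclose an area
    points: List[Point]

@dataclass
class KeyPoint:               # (T) a single labeled point, e.g. a facial landmark
    label: str
    position: Point

# (J) Bitmap labels are pixel masks, typically stored as 2-D arrays.
```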

Recently, we have introduced a new feature for setting the direction on both polygon and bitmap labeled items, allowing you to mark the object’s angle.

Project management with SentiSight.ai

To train a model properly, you need to prepare large amounts of data, which usually requires a lot of time and manual work. This can lead to mislabeled or unlabeled images and other errors in your dataset.

To overcome this, SentiSight.ai offers a project sharing feature that allows multiple users to work on the same project while tracking the time they spend on specific tasks and sorting their work accordingly.

To get started, navigate to the main project dashboard, select the Share project icon, add the email addresses of one or more registered users to the project and start working on it together.


The limit on how many users you can add to a project can be found under the Wallet information.

In the Settings section, a user can choose whether images uploaded and currently being labeled by another member are skipped, a feature that proves its worth for those who need to re-check team members’ work and make sure no errors slip through.

As a project manager, you can edit your team members’ roles under the User permissions section by granting or removing access to certain features. To date, there are around 20 different access levels that you can use for newly created roles, letting you create a unique selection for each member of your team.

User permissions update

While managing a large team might feel like a challenge, with our SentiSight.ai tools it can be done with ease. You will be able to track your team’s progress with a convenient time management system, filter the dataset by image status, labels and type. 

When filtering by user, you are presented with numerous options. For example, you can:

  • Track each member’s contribution to the team,
  • See how many images they have seen and/or labeled,
  • Check how much time they have spent on the task,
  • Let labelers filter the images they have not yet seen or labeled,
  • Double-check colleagues’ work and track their progress.

User data

Lastly, we have recently introduced a Project Manager tool that greatly simplifies adding or removing users to and from projects, creating or deleting projects, and changing users’ permissions. On the left-hand side you can see all of your projects, while on the right you will find all of the users assigned to them. By selecting one or more projects on the left and one or more users on the right, you can quickly add or remove the selected users to or from the selected projects.

Project manager

Conclusion

Image annotation is a crucial task in supervised machine learning, leading up to the training of image recognition models. To train them properly, large amounts of data need to be collected, cleaned and correctly labeled. While this process requires a lot of time and manpower, SentiSight.ai offers a labeling tool that assists you in preparing your dataset as well as tracking the progress of each of your team members.

SentiSight.ai was created having two main goals in mind: 

  1. To make image labeling tasks as efficient as possible, even when working on large-scale projects.
  2. To provide its users with a clean and user-friendly interface for training deep learning models, saving companies precious time and money. 

If you have any queries about how our platform can assist you with developing and training your very own image recognition models for various uses, such as retail, please do not hesitate to contact us and a member of our team will get back to you.

AI-assisted Image Labeling with SentiSight.ai
https://www.sentisight.ai/ai-assisted-image-labeling-with-sentisight-ai/ (Tue, 03 Nov 2020)

Today we are releasing a new version of our platform, and we have decided to start our blog to keep you updated about our development progress and other related news. The most significant update in this new version is AI-assisted labeling. Some AI-assisted labeling functionalities, such as the smart labeling tool, have already been part of the SentiSight.ai platform, but now we are bringing them to a whole new level.

Traditionally, image annotation is done by humans. The number of images that need to be labeled for training deep neural network models is often huge, so labeling them can be a laborious and time-consuming task. With AI-assisted labeling, this process can be sped up significantly. But wait, how can AI help to label images for training AI? Doesn’t it sound like the chicken and egg problem?

The idea is actually simple. The user labels only a handful of images and trains a neural network model. This model can then be used to predict classes in a set of unlabeled images. Afterward, human annotators review those predictions and correct them as necessary. The reviewing process is usually faster than labeling images from scratch, because the annotator can already see the label suggested by the AI and only needs to either approve or correct it. After the images are reviewed, they can be added to the training set and a new, more precise neural network can be trained. From then on, the process continues in an iterative manner, as more and more images are included in the training set.
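
In pseudocode, the loop looks roughly like the sketch below; the callables passed in (manual labeling, training, review) and the model's predict method are hypothetical placeholders standing in for the corresponding steps, not SentiSight.ai API calls.

```python
# A schematic sketch of the iterative AI-assisted labeling loop described
# above. The callables (label_by_hand, train, review) and model.predict are
# hypothetical placeholders, not SentiSight.ai API calls.

def ai_assisted_labeling(images, label_by_hand, train, review,
                         seed_size=100, batch_size=500):
    labeled = label_by_hand(images[:seed_size])        # small hand-labeled seed set
    unlabeled = list(images[seed_size:])

    while unlabeled:
        model = train(labeled)                         # train on everything labeled so far
        batch, unlabeled = unlabeled[:batch_size], unlabeled[batch_size:]
        predictions = model.predict(batch)             # AI suggests labels
        labeled += review(batch, predictions)          # annotators approve or correct
    return train(labeled)                              # final, most accurate model
```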

Let’s have a look at how to do that on the SentiSight.ai platform. First, we assume that you already know how to label images manually, train a classification model and make predictions on new images. If not, please consult our user guide or video tutorials.

Now after you make predictions on new images, you will see a screen similar to this:

SentiSight.ai AI-assisted labeling example 1

You can see that there is a checkbox next to each label along with the predicted score. For single-label classification, the first checkbox, corresponding to the predicted label with the highest score, is checked by default. For multi-label classification, all checkboxes that correspond to labels with predicted scores above the score threshold are checked. If some of the predictions are incorrect, you can adjust them by checking or unchecking the checkboxes. Once you are done reviewing the predictions, you can select some or all of the images and add them to your dataset. If those images are already in your dataset, only their new labels will be added.
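
The default pre-checking behaviour can be summarized with a small sketch, assuming predictions arrive as a label-to-score mapping (illustrative only, not the platform's internal logic):

```python
# A small sketch of the default selection behaviour described above,
# assuming predictions come as a {label: score} mapping (illustrative only).

def default_selection(scores, multi_label=False, threshold=0.5):
    """Return the labels that would be pre-checked for review."""
    if multi_label:
        # Multi-label: everything above the score threshold is pre-checked.
        return [label for label, score in scores.items() if score >= threshold]
    # Single-label: only the top-scoring label is pre-checked.
    return [max(scores, key=scores.get)]

scores = {"cat": 0.91, "dog": 0.64, "bird": 0.12}
print(default_selection(scores))                          # ['cat']
print(default_selection(scores, multi_label=True))        # ['cat', 'dog']
```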

Now let’s have a look at how predictions look for object detection models:

SentiSight AI-assisted labeling example 2

Here, you will see bounding boxes with a checkbox in the upper right corner. Just like in image classification, you can either select or unselect the bounding boxes by clicking on the checkboxes or on the bounding boxes themselves. Afterward, select images and add them to your dataset.

Whether you are using AI-assisted labeling for classification or object detection, once the images are added to the dataset you will see them marked as “auto-labeled” in the main dashboard. You can filter those images by selecting the appropriate “Filter by type” option. This is useful if you want to review the classification labels again or adjust the auto-labeled bounding boxes using the labeling tool.

AI-assisted labeling can also be used with pre-trained models. These are models that are already trained on a large dataset and can be accessed via the “pre-trained” models menu for making predictions. The predictions can then be used to label some images, and after you review these AI-assisted labels, they can serve as a starting point for training your own model.

It has already been a long journey for our SentiSight.ai image labeling and recognition platform since we released it on November 18, 2018. The initial version had just a handful of features, but we have constantly updated it and added new functionality. Just a couple of months ago (August 17, 2020) we released a big update that for the first time included powerful paid features such as object detection model training, downloading offline models, and improved functionality for managing labeling projects. Today we are adding new AI-assisted labeling functionality, but we shall not stop here, as we still have plenty of features in mind that we would like to add.

We hope that these new functionalities will help you label your data even faster and more efficiently. This is still a new feature, so we plan to keep improving it and adding more to it. Thank you for using SentiSight.ai, and until next time!
