Background Removal Tool using Image Recognition AI
https://www.sentisight.ai/background-removal-tool-using-image-recognition-ai/ (Tue, 25 Apr 2023)

In today’s digital era, images have become an integral part of our lives. From product photography to graphic design and machine learning applications, seamlessly eliminating backgrounds has become a game-changer in visual communication. Beyond its practical applications, background removal serves as a transformative tool, emphasizing the subject, enhancing aesthetics, and elevating brand appeal. In this blog post, we delve into the methods for removing backgrounds from images, as well as their applications, empowering photographers, designers, and marketers alike to unlock new levels of creativity, storytelling, and visual impact.

What is background removal?

Background removal is a technique that extracts the main subject from an image while eliminating the unwanted elements in the background. Whether you’re an e-commerce retailer looking to display products on a clean backdrop or a photographer seeking to isolate subjects for creative compositions, background removal has a wide range of applications.

How does background removal work?

Background removal in images is achieved through various techniques, ranging from manual methods to sophisticated algorithms used in software applications. Some of the best-known methods are described below.

Manual selection

The simplest method involves manually selecting the foreground objects using tools like lasso, pen, or magnetic selection. This process requires time and precision, as the user carefully outlines the object. Once selected, the background can be deleted or replaced with a transparent layer.

Color-based segmentation

In cases where the foreground and background have distinct color differences, segmentation techniques can be used. By identifying color clusters, algorithms can separate the foreground from the background.
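
As a rough sketch, color-based segmentation can be as simple as thresholding the per-channel distance to a known background color. The example below uses NumPy on a toy image; real tools use more robust clustering:

```python
import numpy as np

def color_segment_foreground(image, bg_color, tolerance=30):
    """Return a boolean mask that is True for foreground pixels.

    image:     H x W x 3 uint8 array (RGB)
    bg_color:  the (R, G, B) background color to remove
    tolerance: per-channel distance within which a pixel counts as background
    """
    diff = np.abs(image.astype(int) - np.array(bg_color, dtype=int))
    background = np.all(diff <= tolerance, axis=-1)
    return ~background

# Toy 2x2 image: white background with one red pixel.
img = np.array([[[255, 255, 255], [255, 255, 255]],
                [[255, 0, 0],     [255, 255, 255]]], dtype=np.uint8)
mask = color_segment_foreground(img, bg_color=(255, 255, 255))
print(mask)   # only the red pixel is marked as foreground
```

In practice the background color would be estimated (for example, from the image border) or found by clustering, rather than assumed.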

Chroma key composition

This technique is widely used in video and film production. A subject is filmed or photographed in front of a colored screen (usually, green or blue), which is then replaced with a different background using specialized software. The solid color is easy to distinguish and remove, leaving the subject intact.
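
Chroma keying follows the same idea with a known key color: pixels close to the key are swapped for the new background. A minimal NumPy sketch:

```python
import numpy as np

def chroma_key(frame, new_bg, key=(0, 255, 0), tol=60):
    """Replace pixels close to the key color with pixels from new_bg."""
    diff = np.abs(frame.astype(int) - np.array(key, dtype=int))
    is_key = np.all(diff <= tol, axis=-1)   # green-screen pixels
    out = frame.copy()
    out[is_key] = new_bg[is_key]            # composite the new background
    return out

# Toy 1x2 frame: a green-screen pixel next to a blue subject pixel.
frame = np.array([[[0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
new_bg = np.full((1, 2, 3), 128, dtype=np.uint8)   # gray backdrop
out = chroma_key(frame, new_bg)
print(out[0, 1])   # the blue subject pixel is preserved
```

Production compositing additionally handles spill (green tint on subject edges) and soft, partially transparent boundaries.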

Deep learning techniques

With the advent of deep learning, convolutional neural networks (CNNs) have been employed for background removal tasks. The most common CNN-based background removal technique is called semantic segmentation. Semantic segmentation models can identify and label different objects within an image, including the background. These networks are trained on large datasets of images with known foreground-background pairs, enabling them to learn to distinguish between the two. By removing the background class, the desired foreground object can be isolated.
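
The pipeline can be sketched as follows. The model here is a toy stub standing in for a real trained segmentation network (such as DeepLabV3); the output is an RGBA image whose alpha channel hides the background:

```python
import numpy as np

# In practice `model` would be a trained semantic segmentation CNN;
# this stub uses a toy brightness rule so the pipeline is runnable.
def stub_model(image):
    """Return a per-pixel class map: 0 = background, 1 = foreground."""
    return (image.sum(axis=-1) > 300).astype(np.uint8)

def remove_background(image, model):
    classes = model(image)                            # per-pixel predictions
    alpha = (classes != 0).astype(np.uint8) * 255     # background -> transparent
    rgba = np.dstack([image, alpha])                  # RGB + alpha channel
    return rgba

img = np.array([[[10, 10, 10], [200, 200, 200]]], dtype=np.uint8)
rgba = remove_background(img, stub_model)
print(rgba[..., 3])   # alpha: 0 for background, 255 for foreground
```

Swapping the stub for a real network changes only the `model` argument; the alpha-compositing step stays the same.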

The quality of the remaining subject cut out of the background highly depends on the complexity of the image as well as the removal technique chosen. The best quality results can be achieved by utilizing the newest advancements in technology, mainly by using deep learning and semantic segmentation methods. With large datasets and extensive training, these models can accurately distinguish between foreground and background, even in complex images with intricate details.

The least accurate results typically come from manual selection, which is error-prone unless done slowly and carefully, and from color-based segmentation, which works well when the foreground and background have distinct colors but struggles with similar color palettes.

No matter which background removal method is chosen, it is important to remember that the accuracy can be subjective and depend on individual use cases. For professional applications or critical tasks, combining multiple methods may yield the best results.

Image Segmentation Explored

There are two types of image segmentation – semantic segmentation and instance segmentation.

Semantic segmentation

This approach to image segmentation detects the area of the image that contains an object of a certain label / type. Semantic segmentation is useful where the user needs to detect patterns and abstract objects.

Instance segmentation

Instance segmentation is more advanced and has a different set of use cases. This approach detects individual objects of a certain label / type.


Training an Instance Segmentation Model

Training an instance segmentation model requires a dataset of labeled images. These images must be labeled with polygon and / or bitmap annotations, both of which can be created using the SentiSight.ai annotation tools. For detailed instructions on creating bitmap and polygon labels, please refer to the user guides.

The SentiSight.ai web interface enables you to train an instance segmentation model without the need for coding or computer vision expertise. 

Once you have your labeled images, simply head over to the model training interface. From there, you can set the model name, training time and the stop time. 

The stop time determines how long the model will continue training if there is no improvement, measured in terms of mean Average Precision (mAP).
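
The stop-time logic can be illustrated with a small sketch. For determinism the sketch counts evaluations rather than wall-clock seconds, but the idea is the same: stop once mAP has not improved for a set window:

```python
def train_until_no_improvement(evaluate_map, patience=5, max_steps=1000):
    """Halt training when mAP has not improved for `patience` consecutive
    evaluations. (A sketch: SentiSight's actual stop time is measured in
    wall-clock time, but the logic is the same.)

    evaluate_map: callable that runs one training step and returns current mAP.
    """
    best_map, since_improved = -1.0, 0
    for _ in range(max_steps):
        current = evaluate_map()
        if current > best_map:
            best_map, since_improved = current, 0
        else:
            since_improved += 1
            if since_improved >= patience:
                break          # no mAP improvement within the stop window
    return best_map

scores = iter([0.10, 0.42, 0.42, 0.42, 0.42])   # simulated mAP readings
best = train_until_no_improvement(lambda: next(scores), patience=3)
print(best)   # 0.42
```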

For more advanced users, there are additional parameters that can be selected and customized. 

Once you have trained an instance segmentation model, you could then use this as a basis for a background removal tool powered by AI.

Applications

Applications of background removal are diverse and essential in various industries. This tool enhances the visual appeal of marketing materials, product photography, and social media content by isolating subjects. It also fuels creativity for artists and photographers, enabling unique compositions and collages. Key applications for the background removal tool are described below.

Enhancing Visual Appeal

One of the key advantages of background removal is its ability to improve the visual appeal of an image. By isolating the subject, you can create stunning visuals that draw the viewer’s attention to the intended focal point. Whether you’re creating marketing material, product catalogs, or social media content, background removal ensures that your images are visually captivating and professional.

E-commerce and Product Photography

For e-commerce businesses, displaying products against a clutter-free background is crucial for boosting sales. Background removal allows you to replace the original background with a solid color, a transparent layer, or a contextually appropriate backdrop. This not only provides a consistent and visually appealing presentation but also facilitates comparison between different products and creates a sense of professionalism.

Creative Compositions

Photographers and graphic designers leverage background removal to unlock their creativity. By removing the background, they can effortlessly blend subjects into new environments, experiment with unique collages, or seamlessly integrate them into other visual elements. Background removal offers immense flexibility and opens up a world of possibilities for artistic expression and storytelling.

Training Data Preparation

In the realm of machine learning and computer vision, accurate and reliable training data is essential. Background removal plays a vital role in creating high-quality datasets for various tasks, such as object detection, segmentation, and recognition. By removing backgrounds, researchers and developers can isolate subjects and ensure that their models focus solely on the relevant features, improving their performance and efficiency.

SentiSight has recently launched a background removal pre-trained model powered by image recognition AI, helping you efficiently and accurately remove the background from images at scale.

Introducing the SentiSight background removal tool

The SentiSight platform provides an intuitive background removal tool as a pre-trained model, offering users the convenience of immediate use without the need to train their own model. This feature allows users to remove backgrounds from images effortlessly.

Why use our tool?

There are two main benefits of using the SentiSight background removal tool: scalability and accuracy.

Whilst some other tools may only allow you to remove the background from one image at a time, our platform enables you to process images in batches, helping to speed up the process. Available to use via API (as well as on the web interface and mobile app), the SentiSight pre-trained model can be deployed to automatically remove the background from a large volume of images at once. Our advanced image recognition AI ensures that the background removal results are accurate and precise as well.

How to use the background removal tool

There are three ways you can use the background removal tool for your project:

  1. Web Interface – Utilizing SentiSight.ai’s pre-trained models via the web interface is the fastest and most straightforward approach, ideal when scalability is not a concern.
  2. REST API – Employing the REST API for SentiSight.ai’s pre-trained models gives users immense flexibility and scalability without the expense of dedicated hardware such as GPUs.
  3. Mobile App – The SentiSight.ai mobile app allows users to effortlessly perform predictions on their phones and upload images to their projects.

Web Interface

Using the background removal tool on the SentiSight online platform web interface is the quickest and most straightforward way to remove the background from images using our tools. This is a popular approach if you are trying out the models, or your project does not require scalability.

Using the background removal tool via the web interface takes just a few simple steps:

  1. Log in to your SentiSight.ai account
  2. Select Pre-trained models > Background removal
  3. Click Upload images and import the images from which you want to remove the background
  4. Review the results and download the resulting images

REST API

Using the background removal tool via the REST API offers you a significant degree of flexibility and scalability, removing backgrounds from images without the need for expensive hardware such as GPUs or your own custom solutions.
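
As an illustration, a prediction request might be built as below. The endpoint path, header name and token value are assumptions for illustration only; please check the SentiSight.ai REST API user guide for the exact values:

```python
import urllib.request

# Illustrative only: the exact endpoint path and auth header are documented
# in the SentiSight.ai REST API user guide; the values below are assumptions.
API_URL = "https://platform.sentisight.ai/api/pm-predict/background-removal"
API_TOKEN = "YOUR_API_TOKEN"          # from your SentiSight.ai account

def build_request(image_bytes):
    """Build (but do not send) a background-removal prediction request."""
    return urllib.request.Request(
        API_URL,
        data=image_bytes,
        headers={
            "X-Auth-token": API_TOKEN,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )

req = build_request(b"\x89PNG")   # raw bytes of the image file
# urllib.request.urlopen(req) would send it; the response contains the
# processed image with the background removed.
```

Because the same request can be issued in a loop over a directory of files, this is how the batch processing described above is typically scripted.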

Mobile App

Using the SentiSight mobile app gives you the ability to use the background removal pre-trained model from your phone, as well as being able to add images to your project datasets.

Get Started

As a pre-trained model, you can get started with the SentiSight background removal tool right away without the need for model training or coding expertise. The easiest and quickest way to get started is to use the web interface platform, where you can make a free account here. Once you have made your account, to use the background removal tool:

  1. Click on the ‘pre-trained models’ dropdown menu
  2. Choose the background removal tool
  3. Perform the background removal on your images!

Conclusion

Background removal is a transformative technique that empowers users to enhance visual appeal, boost e-commerce sales, enable creative compositions, and prepare high-quality training data for machine learning applications. 

SentiSight.ai stands at the forefront of this technological advancement, offering powerful background removal capabilities within its user-friendly platform.

New SentiSight.ai features and changes
https://www.sentisight.ai/new-sentisight-ai-features-and-changes/ (Wed, 19 May 2021)

The SentiSight.ai team is very excited to announce a range of new features and changes to our online dashboard and website, designed to deliver the most user-friendly experience to date.

The most prominent change is to our subscription model, with the introduction of a pay-as-you-go wallet system that allows users to pay for only what they use on the platform. Other exciting operational changes and features include the ability to retrain a whole image classification network, which allows more accurate models for large datasets; the introduction of object detection model building as a tool available to all users; and changes to the capabilities of REST API operations.

Explore the full expanse of the changes below.

Changes to the Subscription Model

From the 17th of May, 2021, the SentiSight.ai subscription model has been replaced by a pay-as-you-go wallet system that allows users to pay for only what they use on the platform.

For most users, the use of the available training, labeling and prediction capabilities was often highly dependent upon the image recognition projects being undertaken at the time, rather than being a steady monthly level of operation.

This new pay-as-you-go wallet system ensures that users are neither (a) restricted in their usage of the platform by their subscription level nor (b) unable to make full use of the capabilities included in their premium subscription levels.

Instead, users will be able to pay for exactly what they’ve used, when they’ve used it.

Users will be able to top up their SentiSight wallet by PayPal or bank transfer, and then spend their balance on any operation on the SentiSight.ai platform. The new pricing model offers per-operation discounts for our most active users, with the cost dependent upon the volume of operations that the user has already completed during the month.

For full details of the new pay-as-you-go wallet system, including full details of the overdraft facility, please visit https://www.sentisight.ai/pricing/

Introduction of Complimentary Credits

To help our users explore and use all of the available image recognition features and tools on our platform, every user will receive 10 euros of complimentary credits each month, which can be used for any operation on SentiSight.ai excluding model download.


These complimentary credits reset to 10 eur on the first day of each month, and leftover credit does not roll over to the subsequent month. After the Complimentary Credits have been used up, the cost of each operation on the SentiSight.ai platform is deducted from the User’s Balance.

The Overdraft Feature

Once a user has made a top-up of at least 5 eur, they can set a Maximum Overdraft Limit of up to 100 eur should they choose to. This allows the User Balance to become negative when using the SentiSight.ai platform, with the negative balance to be paid off at a later date. By default, this overdraft limit is set to zero. The overdraft can be repaid at any time during the same or the next month; if the amount remains unpaid by the end of the next month, the account’s operations on SentiSight.ai will be blocked. The subsequent month’s complimentary credits do not contribute towards paying off the overdraft.

Our most active and loyal customers may request an increase to this maximum overdraft limit; however, the decision remains at the complete discretion of the SentiSight.ai team. To request a higher maximum overdraft limit, please contact us.

Features that remain charged monthly

Certain SentiSight.ai features, such as used Disk space and the number of users in shared projects (so-called User shares), will continue to be charged monthly. Every user receives a complimentary 5 GB of Disk space and 2 User shares for free. Users are only charged if they increase the maximum limit of Disk space or User shares above the free limit. When the user increases the maximum limit, Complimentary Credits are used first. If there is an insufficient amount of Complimentary Credits to cover the desired increase, the User Balance is charged. The amount is withdrawn immediately after the maximum limit increase and on the 1st day of every subsequent month.

The price for Dedicated similarity search service is charged monthly. To set up a Dedicated Similarity Search Service, please contact the SentiSight.ai team.

Operational Changes

Object Detection Model Building now available for all users


The object detection model building tool, which was originally only available to premium subscribers, is now available to all registered users. Complimentary credits can be put towards operation of the object detection model. If you are using an object detection model for the first time, we recommend reading the quickstart guide to training object detection models.

If you are unsure whether an object detection model is suitable for your project, please read this guide to choosing the right AI model for your project. 

Classification and Similarity Search Network

There have been changes to the classification and similarity search network that will improve the operation of these models.

REST API Legacy requests

REST API requests using the “text/plain” content type are now considered legacy. Users are recommended to change their REST API implementations to use the “application/json” content type instead. We have also updated our code samples to take this change into account.

New Operational Features

New Classification Features

To benefit users with a large dataset of images, we have added new functionality to retrain full classification model networks. Additionally, users can now choose to automatically stop classification model training if there is no improvement for a certain amount of time.

REST API Capabilities

When using REST API, users are now able to upload and delete images from a project. 

New code samples and specifications

We have added C# REST API code samples and Swagger specifications to help users incorporate the SentiSight.ai tools into their own projects.

Online User Guides

To help every user make the most of the image recognition tools, we have created an extensive online hub of learning resources and user guides for every tool and resource. Whether you’re looking to learn a few basic skills, or perfect a difficult customisation, the user guides will help you to make your own image recognition models! To read the guides, please visit https://www.sentisight.ai/user-guide/

Read our latest blogs

Deep Dive: Role of Image Recognition in Defect Detection

Since the dawn of the industrial era, innovations in machinery and technology have helped manufacturers to increase efficiency, reduce production costs and standardize quality at a vast scale. However, the diminishing human involvement in the production process has reduced the manufacturers’ ability to spot defective goods or products before they reach the final consumer. Read More

Image labeling using online vs offline tools

Image labeling (sometimes known as image annotation) is the process of creating a textual and visual description of the content of images. These labels / annotations are then used to train deep learning computer vision models for tasks such as object detection. Read More

Follow us on Social Media!

We’ll be sharing the latest top tips and new features on our LinkedIn and Facebook pages, as well as sharing interesting use cases of our tools. To stay in the loop and to even promote your own projects built using SentiSight.ai, follow us on our Facebook and LinkedIn platform!

We look forward to sharing the new changes and features to our platform with you.

The SentiSight Team

Deep Dive: Role of Image Recognition in Defect Detection
https://www.sentisight.ai/deep-dive-role-of-image-recognition-in-defect-detection/ (Thu, 13 May 2021)

Introduction

Since the dawn of the industrial era, innovations in machinery and technology have helped manufacturers to increase efficiency, reduce production costs and standardize quality at a vast scale. However, the diminishing human involvement in the production process has reduced the manufacturers’ ability to spot defective goods or products before they reach the final consumer.

This is detrimental not only to the quality of the product, but can also put the end user at various levels of risk if defective products reach the distribution or consumption phase. Fortunately, there is an efficient innovative solution to overcome these problems – AI image recognition models.

Artificial intelligence powered image recognition models can be trained to assist and improve the manufacturing processes by visually scanning and detecting potentially defective products. 

By uploading and labeling images of the products at various manufacturing stages, AI image recognition models can be trained to learn what optimal and defective products should look like. This is achieved by uploading images of both optimal and different types of defective or faulty products. The image recognition model can then be trained to detect and scan for known defects on products. Through a custom integration involving image annotation, these image recognition models can even be used within the manufacturing process to speed up the removal of faulty items from the production line.

SentiSight.ai Models 

Automobile dent defect detection

The SentiSight.ai platform has two AI image recognition model types suitable for defect detection: image classification and object detection. Whilst both are suitable for defect detection, their different characteristics and capabilities lead to differences in when to use which model:

  • Image classification – used to predict the content of images, with the model classifying the content into specific categories, whilst also providing a confidence estimate for each classification prediction, expressed as a percentage.
  • Object detection – these models are used to identify and locate objects within images, with located objects highlighted by a bounding box.

SentiSight.ai’s image classification models can be trained to identify flawed products at different stages of the manufacturing process. The image classification training process for defect detection involves uploading and labeling images of both defective and acceptable products, and then using the SentiSight.ai platform to train the model to be able to differentiate between the two.

Image classification models are suitable for projects that do not require identifying the exact location or nature of the defect, as they do not flag the location of the fault. However, the confidence estimate provided by the model is a useful feature for judging the probability that a defect is present, that is, how confident the model is in its prediction. This in turn allows employees to explore these defects, identify their causes, and adjust the manufacturing chain to avoid future flaws.

The SentiSight.ai object detection models can also be trained to assist in detecting defects or anomalies in products during the manufacturing process. However, unlike classification models, object detection models can also identify and mark the exact location of the defect, enabling an employee to be guided directly to the defective goods, or, through a custom solution, the defective products to be automatically removed from the production line. For larger items, the object detection localization helps to pinpoint the defect within the product, helping employees to quickly localize and solve the issue.

Such automated detection processes are indispensable for mass production lines: not only do they detect and identify defects, they also provide stakeholders with a real-time understanding of the nature and extent of the defects. Defective goods or objects are often indicative of larger issues within the manufacturing process or machinery, so they can prove costly if they go undetected. The multi-user capabilities of the SentiSight.ai platform mean that not only can boots-on-the-ground employees detect defects in real time, but management is also able to review and analyse the extent of the defects at a macro level.

Defect Detection Model Use Cases

Damaged box detected

These image recognition models, both object detection and image classification, have various use cases across a wide variety of industries, such as in the production of electronics and food.

Electronics

In the electronics industry, safety is of paramount importance to both the producer and the consumer; faulty goods can be deadly. Even when not deadly, defects within the manufacture of electronics are costly, time consuming and have an adverse impact on the reputation of the manufacturer. Given the complexity of electronic hardware such as motherboards, processors and memory cards, the functionality of object detection models in being able to identify and precisely locate defects is of keen interest to manufacturers seeking to maintain high standards of quality and efficiency. 

Food Distributors

Rotten apple detected

AI image recognition can also be very handy for food distributors. Object detection models can be trained to assist in the automated process of separating good, wonky and damaged fruit and vegetables, by scanning the produce as it passes along the feed and identifying and locating it according to the relevant category. Combined with a custom sorting procedure, these object detection models improve the efficiency and speed of early-stage fresh produce processing, a vital factor in the race to deliver fresh produce to the shop floor.

Conclusion

SentiSight.ai’s image recognition models prove useful in spotting defective products within various industries, allowing companies to maintain a consistent production quality and avoid the financial repercussions that can arise from the mass production of flawed items. The AI-powered tools ease the workflow for employees in large factories, and ensure a consistent and accurate reporting of defects to the management thanks to the real-time project management functionalities.

Large-scale defect detection can be undertaken by both image classification and object detection models; each is capable of spotting abnormalities within the production process. Since each model has unique capabilities and features, it is important to understand the requirements and desired outcomes of the defect detection process before choosing the right model for your project. As most defect detection projects require localisation of the abnormal product, object detection is the most widely used model.

Be sure to check out other applications such as image recognition in retail.

Optical Character Recognition Using SentiSight.ai
https://www.sentisight.ai/optical-character-recognition-using-sentisight-ai/ (Mon, 01 Mar 2021)

On February 8th, 2021 we released a new version of our platform that introduced an optical character recognition pre-trained model, otherwise known as text recognition. To accompany the update, we created this short guide on what text recognition is, its history and usage scenarios, how it works, and how to make the best of it on the SentiSight.ai platform.

Optical character recognition in simple words

Optical character recognition (OCR) is a computer vision task that enables the conversion of handwritten, typed or printed text into machine-encoded text.

Usually, words and characters are extracted from scanned documents, pictures or subtitle text. However, following a recent increase in demand for more convenient remote teaching and learning tools, technology innovations like real-time handwriting recognition, implemented using a pen or stylus and a tablet rather than traditional input devices such as a keyboard or mouse, have become increasingly popular.

How long has it been around?

The first reading device was developed in 1914 by Emanuel Goldberg. Purposefully created for blind and visually impaired people, this machine was able to read characters and convert them into standard telegraph code.

At around the same time, Edmund Fournier d’Albe developed a handheld scanner, called the Optophone, that made a specific sound for each character or letter as it was moved across the page. Technology that started out merely as assistance for visually impaired people gradually evolved into search systems that allowed users to find answers to their queries in archived databases, and was later implemented in various sectors.

Wide variety of use cases

As the general public, we encounter optical character recognition on a daily basis without necessarily noticing it. Nowadays OCR systems assist many industries in everyday tasks:

  • Banking: used to verify a customer’s identity by matching the handwriting on a cheque to a signature stored in a database. This speeds up the clearance process and completes the task without any human involvement.
  • Healthcare industry: scanned reports, treatments and hospital records are stored digitally in a database accessible to every healthcare worker. This allows for quicker diagnostics and improved logistics, since the required quantities of equipment and medicine can be derived from analysis of the digital records.
  • Legal profession: by converting text into a digital form that can be processed by a computer, OCR eliminates the need for excessive paperwork, making the profession more sustainable. Once all the affidavits, statements and wills are digitized and stored in a database, finding the right paperwork is just one click away. Quick access to past documents significantly reduces the time spent on a case and lowers its cost.
  • Airport customs: extracts information from visitors’ passports and accelerates border checks.
  • Autonomous vehicles: recognize traffic signs and the number plates of cars in front of them. Additional information about automated license plate recognition can be found on the website of a separate Neurotechnology product that specializes in this task.
  • Literature research: OCR is partly responsible for many articles and essays written across both digital and print mediums. Scanned books and research reports, converted to searchable PDFs, make it possible to find relevant information in the blink of an eye. With OCR implemented in translation tools, it is easier than ever to interpret that foreign-language booklet you always wondered about!

How does OCR work?

The process of text recognition is complicated by the numerous different fonts in use, so scanned images must be pre-processed before an OCR algorithm is applied. Pre-processing usually involves de-skewing (properly aligning), despeckling and converting an image to black-and-white in order to separate the text from its background.
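
For instance, the black-and-white conversion step can be sketched as a simple global threshold. Production systems typically use Otsu’s method or adaptive thresholding instead:

```python
import numpy as np

def binarize(gray, threshold=None):
    """Convert a grayscale page to black-and-white to separate text
    from background (one common pre-processing step before OCR)."""
    if threshold is None:
        threshold = gray.mean()   # naive global threshold; real systems
                                  # use Otsu's method or adaptive variants
    return (gray > threshold).astype(np.uint8) * 255

page = np.array([[30, 40, 220],   # dark ink on a light background
                 [35, 210, 230]], dtype=np.uint8)
bw = binarize(page)
print(bw)   # ink pixels -> 0, background pixels -> 255
```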

After normalization of the image, text can be extracted either by matrix matching, which matches patterns pixel-by-pixel and works best with already known fonts, or by feature extraction, which decomposes characters into lines and loops and compares them with the dataset. To further increase OCR accuracy, post-processing measures should be taken. These involve restricting the output to a specific lexicon, retaining the initial textual representation and applying knowledge of the language’s grammar.
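
The lexicon-restriction step of post-processing can be sketched with the standard library: snap each recognized word to its closest dictionary entry when the match is close enough. The tiny lexicon below is illustrative only:

```python
import difflib

LEXICON = ["optical", "character", "recognition", "platform"]

def lexicon_correct(word, lexicon=LEXICON, cutoff=0.7):
    """Snap a raw OCR output word to the closest lexicon entry, if any;
    otherwise return the word unchanged."""
    matches = difflib.get_close_matches(word.lower(), lexicon,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(lexicon_correct("recogn1tion"))   # -> recognition
```

Real OCR engines use language models and character confusion statistics rather than plain edit distance, but the principle is the same.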

Optical character recognition on the SentiSight.ai platform

SentiSight.ai offers a helpful optical character recognition tool that converts your files to searchable text with ease. 

To start, go to Pre-trained models and select Text recognition from the drop-down list. Here you can choose a lexicon from a wide variety of 75 languages to improve recognition accuracy in post-processing. If you wish to recognize a language that uses a non-Latin alphabet, you can select the option to include Latin characters too. Upload your images and voilà – the results are presented on your screen.

Optical character recognition tool

SentiSight.ai’s optical character recognition tool displays segmented words and lines with bounding boxes. The blue label in the top left corner shows the predicted word, while the black one in the top right corner shows the prediction accuracy score as a percentage. Either can be hidden by selecting the appropriate checkbox above the document. The results are listed at the bottom of the page and can also be downloaded in JSON format.


Our text recognition tool is accessible either via the SentiSight.ai web interface or via the REST API. More information on the latter can be found in the explanatory user guides.

Key benefits of OCR-based system

Digitizing images and extracting their text is advantageous in numerous ways. Since the file is converted into a machine-searchable document, editing a specific part of the text is a piece of cake. Moreover, storing files in a digital database instead of physical archives provides quick access and allows them to be backed up at any time at low cost. Additionally, once extracted, digital text can be easily translated into any other language, reducing both the time and cost of translation.

You can start enjoying a more sustainable lifestyle with convenient access to your files with our new version of SentiSight.ai now!
