Computer Vision Applications in Agriculture
SentiSight.ai blog – Mon, 06 Dec 2021

Computer vision is a subfield of artificial intelligence rapidly changing the world around us. Its main goal is to process, analyze and interpret visual data just like the human brain can. 

While a decade ago computer vision offered only limited functionality, its capabilities have grown exponentially thanks to constantly evolving AI techniques and the sheer amount of visual data we generate daily. 

Accuracy has climbed from around 50 percent to an outstanding 99 percent today, making computer vision more precise than human vision at reacting to rapid visual inputs. 

The power of computer vision applications 

Computer vision technologies have been implemented across various fields, ranging from visual systems developed to inspect faulty products in the manufacturing industry to research on creating intelligent machines that can comprehend the world around them. Some of the best-known computer vision applications already making a significant impact on humanity include:

  • Autonomous vehicles: Combining numerous radars, sonars, GPS, and other sensors, self-driving cars are able to comprehend their surroundings by identifying navigation paths, traffic signs, and obstacles on the road. 
  • Military applications: Ranging from fully autonomous drones used to capture visual analysis to automated missile defense technology systems, computer vision can be utilized to enhance the security of the nation.
  • Medical image processing: Covering x-ray radiography, ultrasound, magnetic resonance imaging, and other disciplines, medical image analysis is bound to increase the efficiency of clinical examination and illness detection.

Computer vision application in agriculture 

Needless to say, computer vision innovations are revolutionizing the agricultural industry too. An ever-growing human population increases the demand for produce, requiring greater efficiency in crop fields. 

From lowering the cost of production to improving productivity, artificial intelligence-powered technologies can perform highly sophisticated tasks that would otherwise require human labor. The agricultural industry benefits from a wide variety of applications, including sowing, harvesting, weather and soil analysis, weeding, and crop health detection and monitoring. 

Even though our livelihood depends on it, agriculture remains one of the most underestimated low-tech sectors. By retiring outdated machinery and implementing innovative artificial intelligence technologies, we are on track to change that. With integrated smart farming solutions, agriculture could boost the global economy, allowing countries to generate more produce at a lower cost. Some of the computer vision applications in agriculture that have already been successfully implemented around the world include:

  • Drone-based crop monitoring: Improved autonomous flying capabilities allow drones to obtain crucial data by using computer vision-enabled cameras. During the process, a drone collects crop health information, aerial view of the farm and identifies soil conditions. All the information acquired can then be fed to data processing models to generate the following year’s yield analysis.
  • Crop scouting: A drone can scout 100 times more area than a human, taking samples from every corner of the farm. Moreover, drones do not choose where to scout, resulting in more objective samples that better explain crop performance. A single 25-minute drone flight can analyze 10–20 km of crop rows.
  • Systems for crop grading and sorting: Sorting out good crops from bad ones is another time-consuming task that can be easily automated. Smart systems with computer-vision enabled cameras could be trained to detect bad crops and sort out the produce in no time.
  • Weeding: There is a wide variety of automated weeding solutions available today, ranging from targeted to precision weeding. Computer vision-enabled robots can easily detect weeds between the crops and remove them with the help of localized herbicides, mechanical disruption, lasers, or electric current.
  • Automated pesticide spraying: Computer vision-based drones can monitor crop health conditions, detect infected crops and spray them with a localized stream of pesticides. Since a specifically calculated amount of pesticides would be sprayed in a restricted area covering only the infected crops, the solution would save a portion of healthy produce.
  • Rock picking: Before starting to plant crops, farmers need to clear the sowing fields of rocks. This is a tedious and time-consuming task that can be replaced by autonomous robots with integrated computer vision-enabled cameras. Automating such basic tasks would allow farmers to concentrate on more productive matters.

The success story in agriculture technology

Several months ago we started working with one of our clients in Argentina and Brazil on an artificial intelligence solution for the agricultural industry. The solution performs yield analysis from aerial images of the fields captured by computer vision-enabled drones. 

To make the image annotation process easier, images acquired from the drones are preprocessed to remove the surrounding soil. In addition, because the images are captured from directly above, the position of a crop's stem does not match the position of its top near the edges of the image; we therefore apply perspective correction to account for these discrepancies. 
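As an illustration, a perspective correction of this kind can be modeled as a planar homography applied to detected point coordinates. The sketch below is generic, with an identity matrix as a stand-in; it is not our production calibration:

```python
import numpy as np

def apply_homography(H: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Map Nx2 image points through a 3x3 homography matrix H."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # back to Cartesian coords

# Example: an identity homography leaves stem positions unchanged.
stems = np.array([[10.0, 20.0], [30.0, 40.0]])
corrected = apply_homography(np.eye(3), stems)
```

In practice, H would be estimated from the camera geometry or from known reference points on the ground.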

The final solution is composed of three parts:

  1. The first step is to recognize the rows of crops as straight lines for further calculations.
  2. The second step is to distinguish individual crops within each row. For corn or sunflower, each plant is detected separately. Other crops such as wheat, soybeans, peanuts, sugar cane, and barley are detected as a group together with the spacings between them.
  3. The last step is to spot weeds between and within the rows of crops.
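As a simplified illustration of the first step, once plant centroids are available, a crop row can be recovered as a straight line with an ordinary least-squares fit. The centroid detection itself (plant segmentation in the aerial image) is omitted here:

```python
import numpy as np

def fit_row_line(centroids: np.ndarray) -> tuple:
    """Least-squares fit of a straight line y = m*x + b through plant centroids.

    A simplified stand-in for the row-detection step; real aerial imagery
    would first need plant segmentation to obtain the centroids.
    """
    x, y = centroids[:, 0], centroids[:, 1]
    m, b = np.polyfit(x, y, deg=1)
    return m, b

# Plants lying exactly on the line y = 0.5*x + 2
row = np.array([[0, 2.0], [2, 3.0], [4, 4.0], [6, 5.0]])
m, b = fit_row_line(row)  # m ≈ 0.5, b ≈ 2.0
```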

Once the detection steps are complete, we calculate the number of plants in each row of crops, the plant population per hectare, the total length of the spacings between crops, the coefficient of variation of spacing lengths, and other relevant statistics for further analysis. These findings provide opportunities for the following applications:

  • Yield estimation: Knowing the number of plants in each row of crops allows farmers to improve productivity estimation. It is a crucial part of the season management process, especially in medium and large farmlands.
  • Weed and insect population control: Weed and pest infestations are two of the main reasons farmers lose a portion of their produce. Both problems can be tackled by deploying computer vision-enabled agricultural systems in the fields. For instance, once the solution detects a weed-infested region, it can localize the pesticide spraying to that specific area. This tackles the problem effectively without spraying the whole field, saving farmers a lot of time and money.
  • Problematic yield area analysis: Some areas in the fields produce a low yield, caused by pests, weeds, or any defects made in the sowing stage. Statistical information gathered from the analysis can provide farmers with the relevant information in determining the root cause, allowing them to make better judgment calls in the years to come. 
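As an illustration, the spacing statistics mentioned above reduce to simple aggregates over the gaps between consecutive plants in a row; a minimal sketch with made-up positions:

```python
import numpy as np

def spacing_stats(positions_cm: np.ndarray) -> dict:
    """Per-row spacing statistics from plant positions along a crop row (in cm)."""
    gaps = np.diff(np.sort(positions_cm))  # spacing between neighbouring plants
    return {
        "plant_count": len(positions_cm),
        "total_spacing_cm": float(gaps.sum()),
        # coefficient of variation of spacing lengths, in percent
        "cv_percent": float(100 * gaps.std() / gaps.mean()),
    }

stats = spacing_stats(np.array([0.0, 20.0, 45.0, 60.0]))
# 4 plants, 60 cm total spacing, CV ≈ 20.4%
```

A high coefficient of variation flags uneven sowing, which feeds directly into the problematic-yield-area analysis described above.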

The solution is already proving its use within the agricultural industry, helping bring about a positive change in farmers' lives. Knowing the amount of produce is crucial for cash-flow budgeting, delivery estimates, harvest equipment planning, storage requirements, and crop insurance purposes. 

The future of computer vision applications in agriculture

According to a report on the future of farming technology, the relentlessly growing population means the agricultural industry will need to produce 70 percent more food by 2050. If the agricultural technology sector stays as it is now, we will face serious food shortages affecting millions of people around the world. 

However, with the use of artificial intelligence solutions, autonomous drones, and robots, we can improve efficiency, reduce the workload for farmers in the fields, and allow them to concentrate on more pressing matters. By investing in the development of computer vision applications for the agricultural sector, we can expect a return on this investment through the efficient production of goods on a mass scale. It is also reasonable to believe that as these technologies become more commonplace, they will become more readily available, filter down into horticulture, and be distributed by garden machinery and maintenance suppliers like Horace Fuller.

How can SentiSight.ai help in developing a computer vision-based agricultural solution?

SentiSight.ai is an online-based image recognition platform offering extensive image labeling and model training tools. It was designed with two main goals in mind: to make image recognition tasks as easy as possible and to offer a powerful and user-friendly image annotation tool for its users. 

Just like our client in South America, everyone is able to upload their images to the platform, label them, and use the data to train a deep convolutional network model. Although the process sounds complicated, we have tried our best to make it as simple as possible. 

To get started, check our blog post library about how to choose the right image labeling tool for the job and how to train an object detection model with your data. The SentiSight.ai platform is here to help you develop the solution of your dreams, whether that be within the agricultural industry or others like retail.

3 ways to deploy SentiSight.ai models
SentiSight.ai blog – Thu, 01 Jul 2021

Users of SentiSight.ai can train a variety of image recognition models, but deciding how to deploy these models correctly can be tricky. At SentiSight.ai we offer three options for using the model you have trained: 

  • Via our web interface
  • Via the online cloud-based REST API server
  • Using the downloaded model via an offline REST API server, which you can set up on your local device. 

In this post we will guide you through the differences between these options, the pros and cons of each, and give a short introduction to their setup.

Creating a great customer experience helps us improve

Let's improve the AI platform together

We strive for ultimate customer satisfaction, so it is our mission to learn from our customers' needs and implement features that aid workflow flexibility and adaptability. We want to offer our users as much freedom of choice as possible in how they deploy their models, both to keep up with an ever-evolving AI landscape and to stay ahead of the competition with an up-to-date, easy-to-use AI platform.

We pride ourselves on offering a convenient, easy-to-use platform that provides in-depth performance statistics and options for various levels of complexity. Experienced users may opt to set up their own servers with our AI platform integration, while beginners find our web interface very comfortable to learn on.

Advantages and disadvantages of different SentiSight.ai model deployment options


The options for deploying your model differ in their simplicity, potential for automation, speed, and pricing. 

Below we go through the advantages and disadvantages of each of the three options to deploy SentiSight.ai models: 

  1. Web interface 
  2. Cloud-based REST API server 
  3. Offline REST API server

1. Web interface

While you will need to perform detection requests manually, the web interface is perfect for quick testing. It is possible to view the predictions on the web interface itself and also to download the prediction results as a .csv or .json file. 

You can also download images grouped into folders by the predicted label, which might be useful if, for example, you are a photographer and you want to sort your photos. When deciding to opt for the web platform, you will be presented with a simplistic user-friendly interface that still offers thorough statistics for more advanced users. 
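As an aside, that per-label folder grouping is easy to reproduce locally once you have the prediction results; a minimal sketch, assuming a list of (filename, predicted label) pairs, e.g. parsed from the exported .csv:

```python
import shutil
from pathlib import Path

def sort_by_label(pairs, src_dir, dst_dir):
    """Copy each image into a subfolder named after its predicted label."""
    for filename, label in pairs:
        target = Path(dst_dir) / label
        target.mkdir(parents=True, exist_ok=True)  # one folder per label
        shutil.copy2(Path(src_dir) / filename, target / filename)
```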

Using the model via our web interface allows for a quick and easy test implementation of a new model; however, rendering the results may take some time, so this option is slightly slower than the REST API options. In addition, the process of making new predictions cannot be automated.

2. Cloud-based REST API server

The second option is to deploy SentiSight.ai models via the cloud-based REST API. The major advantage of this deployment is that it is very easy to integrate into any app or software you are developing. 

The idea is that you still use the web interface to train the model, but once it is trained you can send REST API requests to our server and receive the model's predictions as responses. Each REST API request must include the image to be predicted, the project ID, and the name of the model to be used. We provide code samples for forming these REST API requests.

Using the model via a cloud-based REST API is easy to set up, requests can be sent from any operating system, even mobile phones, and it does not require any additional hardware. Moreover, the trained model’s predictions can be automated and the only requirement is that the client device is connected to the Internet. 

3. Offline REST API server

The third option is to deploy the SentiSight.ai model as an offline REST API server. For this, you need to download the offline version of the SentiSight.ai model and launch it on your own server. 

The server needs to run a Linux operating system, and in some cases it requires a GPU to reach maximum speed. Once the server is launched, clients can send REST API requests to this local server. 

The client devices don't have to be connected to the Internet; they only need to be on the same local network as the server. In principle, it is even possible to run the client on the same device as the server, so the model can run completely offline. If the client runs on a different device, it has very low computational requirements and can run on any operating system, because all it needs to do is send a REST API request to the server.

Offline SentiSight.ai models come in three speed modes: “slow”, “medium” and “fast”. The price of an offline model license depends on the required speed. GPU hardware is required to reach the maximum “fast” speed for classification models and to reach “medium” and “fast” speeds for object detection models; the slower speed modes do not require a GPU. The image similarity search model does not require a GPU in any of the speed modes. One of the major advantages of offline SentiSight.ai models is that, if set up correctly and run in “fast” mode, they are faster than the cloud-based REST API, because the image does not have to travel over the Internet. 

It is worth noting that the offline server can also run on a Linux virtual machine that runs on Windows. However, in our experience it might be complicated or perhaps not even possible to configure GPU drivers to run correctly on Linux virtual machines. On the other hand, if you are running the model in “slow” mode or using a similarity search model that does not require a GPU to reach maximum speed, the virtual Linux environment might be a reasonable option.

Finally, one more consideration when choosing between the online and offline options for deployment is pricing. When you are making predictions either via web interface or via the cloud-based REST API, you have to pay for each prediction (provided you’ve used up all of your free monthly credits).

On the other hand, when you buy a license for an offline model, you pay a one-time fee and can use it as much as you want. Offline SentiSight.ai models also have a 30-day free trial version, which you can download to test the integration into your system. Please note that the free trial always runs in “slow” speed mode and requires an internet connection for the server. To learn more about the speed options and license types for offline models, please visit our pricing page.

How to deploy SentiSight.ai models

Web interface

The first option is to use the model via our web interface. Ideal for beginners without much technical knowledge, and for anyone who wants the quickest way to make new predictions, this option does not require any additional preparation. Simply select a model from the Trained models drop-down list and click Make a new prediction.


Cloud-based REST API server

The second option is to use the model via our cloud-based REST API server. It requires a minimal amount of preparation and technical knowledge, however, it is still easy to use and can be set up quickly. 

To begin the process you will need your Model name that is visible under the Trained models tab, your Project ID centered below the top menu and the API token located under your Wallet icon:


After collecting all the relevant data, you can send requests to the following endpoint to use the model with the lowest validation error:

https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/

Or to this one, if you wish to use the last model you saved:

https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/last

Now you can send requests with your images to the cloud-based REST API server and receive predictions for them. You can use any programming language of your choice; to make getting started easier, we have provided code samples for popular tools and languages, such as cURL, Java, JavaScript, Python and C#, in our user guide. Moreover, we have introduced a SentiSight.ai REST API Swagger specification where you can test the service interactively and export code samples for many different programming languages. 
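As an illustration, a prediction request can be formed with nothing but the Python standard library. The endpoint URLs come from above; the header name ("X-Auth-token") and the raw-bytes request body are assumptions on our part, so check the code samples in the user guide for the authoritative format:

```python
import json
import urllib.request

API_TOKEN = "your-api-token"  # found under the Wallet icon

def predict_url(project_id: str, model_name: str, last: bool = False) -> str:
    """Endpoint for the lowest-validation-error model; add /last for the last saved one."""
    url = f"https://platform.sentisight.ai/api/predict/{project_id}/{model_name}"
    return url + ("/last" if last else "")

def predict(project_id: str, model_name: str, image_path: str):
    """Send one image to the cloud REST API and return the parsed JSON predictions.

    Note: the header name and octet-stream body below are assumptions based on
    typical REST setups, not a verified copy of the official samples.
    """
    with open(image_path, "rb") as f:
        req = urllib.request.Request(
            predict_url(project_id, model_name),
            data=f.read(),
            headers={"X-Auth-token": API_TOKEN,
                     "Content-Type": "application/octet-stream"},
        )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```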

Offline REST API server

The third option is to set up the REST API server on your local device, download the model you have trained, and use it offline. This requires technical knowledge, a license, and suitable hardware to reach maximum speed.

To set up the REST API server on your local device you have to download the model from its statistics tab:


After the model is downloaded, we suggest following the quickstart instructions in the QUICKSTART_GUIDE.html file located in the doc folder. The basic server setup is very simple: all you need is two commands, one to launch our license daemon and another to launch the server. 

After this, you can immediately start sending REST API requests to the local server. For more information, consult README.html. You can also find code samples for sending requests in several different languages in the SAMPLES.html file.

Conclusion

Choosing the most suitable option mainly depends on your resources, required speed and automation level, and technical knowledge. If you wish to quickly test a new model's performance with an easy setup, the web interface is the best choice. For a relatively easy setup with support for automation and no GPU hardware requirement, at the cost of needing a constant internet connection, our cloud-based REST API server will suit you well.

Lastly, a correctly set up local REST API server provides the best model performance and automation and can be used completely offline; however, the setup requires a Linux operating system. The local REST API server also requires a GPU to achieve maximum speed for all models except similarity search, which can reach maximum speed on CPU-only devices.

Please take the opportunity to contact us with any further questions so you can deploy your trained model effectively. Plenty of industries already benefit from image recognition models, such as agriculture, retail, and defect detection. We'd like to help you create new models for any application you deem necessary!

Image labeling using online vs offline tools
SentiSight.ai blog – Thu, 29 Apr 2021

Image labeling (sometimes known as image annotation) is the process of creating a textual and visual description of the content of images. These labels / annotations are then used to train deep learning computer vision models for tasks such as object detection.

Image labels can be created using either online or offline tools. Deciding whether online or offline image labeling tools are the best fit for your requirements depends on a variety of factors. Here, we will dissect the key features, advantages, and disadvantages of online and offline image labeling tools to help you choose the most suitable tool for your project. 

Offline image labeling tools

The popularity of offline image labeling tools stems primarily from the data remaining on the client's PC, rather than being uploaded to an online server. This helps ensure full protection of sensitive content by preventing fraudulent third parties from accessing it. 

Moreover, in remote areas without a stable, fast internet connection, not having to upload images to an online platform can reduce poor-connection bottlenecks in the labeling process. However, the labelers still need to obtain the images in the first place and deliver the labels once they have been created. This is a time-intensive process that often involves downloading the photos via the web or from a USB stick, which negates the offline benefits for areas with poor connectivity.

Additionally, when using offline AI-assisted image labeling tools, the platform is entirely reliant on the capabilities of the employees' computing hardware. AI-assisted labeling tools use a trained AI model to give the user labeling suggestions, which greatly speeds up the labeling process. Very few offline labeling tools have this feature, and even when they do, it is complicated to set up and requires GPU hardware, leading to a more intricate and expensive setup than online tools, which can run on a standard computer or laptop.

There are other limitations to offline image labeling tools, especially for collaborative labeling processes involving multiple team members. 

Offline labeling tools do not allow labelers and/or managers to keep track of work progress in real time. Hence, when these tools are used by a group of people in a company, it is more challenging to assess the efficiency of the workflow or promptly troubleshoot hurdles that employees encounter during a task. It is a rather common scenario that a labeling team member misunderstands the supervisor's labeling instructions, and the misunderstanding results in many hours spent labeling images the wrong way before they are returned to the supervisor. 

Online image labeling tools

Online labeling tools help avoid such cases, because the supervisor can immediately see the images being labeled by their team and give feedback, which explains the growth in popularity of online labeling tools, especially for large collaborative projects.

Not only do online image labeling platforms provide synchronous accessibility and availability, they also allow for more efficient management of image labeling tasks by providing labeling time tracking. Manually sharing the data created with an offline image annotation tool with your labeling personnel can be a laborious task, especially if they work off-site, whereas online image labeling tools allow for multi-access in real time, which helps to avoid this inefficient process.  

Online image labeling tools are also less reliant on expensive hardware such as graphics cards (GPUs), especially when using AI-assisted labeling tools. Online labeling platforms with AI-assisted labeling tools do not require any setup or model training, as the model has been trained on the server side. This in turn allows companies to avoid setting up and maintaining expensive computing units, since a basic computer or laptop is sufficient for online labeling tasks.

In spite of the advantages explained above, online image annotation tools may still be inconvenient for certain industries. Since online annotation relies on uploading visual data to an online platform, this method might not be suitable for industries that handle sensitive or classified data with strict data protection policies.

SentiSight.ai Online Image Annotation Tool

SentiSight.ai’s online image annotation tool (or labelling tool) has been designed to speed up the image labeling process by offering a range of customisable AI-powered capabilities, whilst also avoiding many of the privacy pitfalls commonly associated with other online image labeling tools. 

Features of the SentiSight.ai Image Annotation Tool

The SentiSight.ai platform allows users to add several types of labels, including:

  • Classification labels
  • Bounding boxes
  • Polygons
  • Polylines 
  • Points and bitmaps

Each labeled object can have several child objects, such as key-points or attributes. Additionally, users can convert polygons to bitmaps, and vice versa.
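Conceptually, converting a polygon label to a bitmap is a rasterization step. A minimal, dependency-free sketch using the even-odd rule on pixel centres (the platform's actual conversion may differ):

```python
def polygon_to_bitmap(polygon, width, height):
    """Rasterize a polygon (list of (x, y) vertices) into a binary mask."""
    mask = [[0] * width for _ in range(height)]
    n = len(polygon)
    for row in range(height):
        for col in range(width):
            px, py = col + 0.5, row + 0.5  # sample at the pixel centre
            inside = False
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                # Does this edge cross the horizontal ray to the right of the pixel?
                if (y1 > py) != (y2 > py):
                    x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if x_cross > px:
                        inside = not inside  # even-odd rule
            mask[row][col] = int(inside)
    return mask

# A 4x4 square from (1, 1) to (5, 5) inside a 6x6 mask
mask = polygon_to_bitmap([(1, 1), (5, 1), (5, 5), (1, 5)], 6, 6)
```

The reverse direction (bitmap to polygon) is contour tracing, which is considerably more involved and omitted here.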

To improve the efficiency of your labeling, SentiSight.ai offers a range of intuitive AI-assisted capabilities. For greater insight, read our guide to AI-assisted Image Labeling.

Once the labels have been created, they can be used to train deep learning models on the SentiSight.ai site, or downloaded in a .json format for use in your in-house model training. 

Project Management Functionalities


The SentiSight.ai platform offers useful project management functionalities to improve the efficiency and accuracy of collaborative labeling exercises. The labeling time tracking feature allows employers to accurately calculate the time employees spend on specific tasks and projects. 

Project supervisors are also able to track annotation progress for every user, providing real-time feedback and analysis. The image filtering tools allow users and supervisors to narrow down their image searches by label type, label name, or by people responsible for the label creation and review. 

Privacy

One of the key considerations users make when opting for offline image labeling models is the increased security and privacy associated with offline servers, especially for classified content. Given the limitations of offline models, this has often meant users must sacrifice efficiency in the pursuit of security. SentiSight.ai has rigorous data protection policies that ensure confidentiality is consistently guaranteed, meaning that those searching for the right image labeling tool no longer need to choose between security and efficiency. 

Conclusion 

The decision of whether to use an online or offline image labeling tool relies on an array of factors, such as the nature of the tasks to be completed, the amount and sensitivity of the data to be processed, the overall requirements of the business, and the reliability of the hardware in use. For collaborative projects in particular, online labeling tools generally outweigh offline ones, because multi-access and real-time features contribute to a more efficient image labeling process. If you are interested in utilising the power of image labeling for your project, the SentiSight.ai online image annotation tool is free to try for yourself!

Human pose estimation using SentiSight.ai
SentiSight.ai blog – Tue, 13 Apr 2021

On March 15th we released a new version of our platform that includes an exciting new feature, another type of computer vision task that focuses on object localization: a pre-trained model for pose estimation. 

This article will introduce you to its purpose, history, and key benefits, and will help you navigate its usage on the SentiSight.ai platform. 

What is human pose estimation?

Human pose estimation is defined as the localization of major human joints such as elbows, knees, and wrists. It continues to be one of the most popular research areas in computer vision. 

It is a feature set that determines the estimated pose of a person from an image or video by approximating the spatial location of body and limb joints. It is important to note that pose estimation is only used for estimating body joints and not for the recognition of specific individuals. 

The major breakthrough in human pose estimation

The field is relatively new: the first major paper on deep learning-based human pose estimation – DeepPose: Human Pose Estimation via Deep Neural Networks – was presented at the IEEE Conference on Computer Vision and Pattern Recognition in 2014. Motivated by deep neural networks' exceptional results on classification and localization problems, the authors presented a cascade of deep neural network-based (DNN-based) regressors towards body joints that produced high-precision pose estimates. 

Since humans are relatively flexible, the key difficulties in human joint localization usually lie in occlusions, small or barely visible limbs, and the need to capture the context of an image. The proposal combined location regression of each body joint with a cascade of DNN-based pose predictors, which significantly increased the precision of joint localization. It outperformed all previous approaches, showing strong pose estimation results on the four most challenging limbs – lower and upper arms and legs – as well as in the mean confidence score across these limbs. 

After the initial proposal of DNN-based regression towards body joints, other approaches arose that have implemented a sliding window detector to produce a rough heatmap output and estimation of the pose and its iterative correction based on feedback, instead of predicting the outputs in one go. 

Another state-of-the-art method uses deep convolutional neural networks that pass the input through a high-resolution subnetwork and form further stages by adding high-to-low-resolution subnetworks connected in parallel.


How does human pose estimation work?

When it comes to the pose estimation process, there are two main approaches:

  • Bottom-up: This approach works by detecting every key point in an image and then assembling them into an object.
  • Top-down: This approach first draws a bounding box around each object and only then estimates the key points within each region. 

The process of human pose estimation depends on the complexity of the input images, their quality, occlusion, clothing, and lighting. Usually, input images are processed to produce an output of indexed keypoints, along with a confidence score between 0.0 and 1.0 for each keypoint. The confidence score is the estimated likelihood that a body joint exists at that spot. 
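In practice, downstream code typically thresholds these per-keypoint confidences before using the pose. A minimal sketch; the keypoint names, the dictionary format, and the 0.5 cut-off are illustrative, not the platform's actual output schema:

```python
def filter_keypoints(keypoints: dict, min_confidence: float = 0.5) -> dict:
    """Keep only the keypoints whose confidence meets the threshold."""
    return {name: (x, y, c) for name, (x, y, c) in keypoints.items()
            if c >= min_confidence}

pose = {
    "left_elbow": (120, 80, 0.93),   # (x, y, confidence)
    "right_elbow": (60, 82, 0.88),
    "left_knee": (110, 200, 0.21),   # occluded, so low confidence
}
visible = filter_keypoints(pose)  # drops the occluded left knee
```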

Human pose estimation use cases

With pose estimation, we can track a person’s movements every step of the way. There is a myriad of exciting applications for human pose estimation technology in a wide variety of industries: 

  1. Well-known, everyday examples include motion capture and motion tracking, as used in Apple’s Animoji and Microsoft’s Kinect technologies, with a ton of consumer Augmented Reality (AR) applications on the horizon.
  2. Enhanced surveillance, for example recognizing a person by their gait, or emergency systems that detect if someone has fallen or is unwell.
  3. Pose estimation in devices that can recognize sign language for use in service institutions, such as airports, schools and banks. This would remove the need for specialized sign language interpreters. 
  4. Health and fitness: applications developed to analyze and optimize performance in sports, dance technique, posture learning and personal fitness overall. We already see some of these capabilities in the dance filter Sway and the virtual fitness trainer Onyx, both of which demonstrate the use of pose estimation. Another great example is HomeCourt, an interactive basketball app that uses pose estimation to analyse basketball players’ movements and help them improve.
  5. Home and nursing use, specifically training robots to walk and move more like a human would, and to recognize if a person is in distress or in need of immediate assistance.
  6. Self-driving cars can learn to better gauge a situation and make better decisions to avoid unexpected collisions with pedestrians, by reacting to where they are on the road and where they are trying to go.

How does human pose estimation work

Human pose estimation using SentiSight.ai

SentiSight.ai offers a pre-trained pose estimation model that localizes human joints in an image and displays the kinematic pose in 2D. The model supports both single-pose and multi-pose estimation, meaning it can detect more than one person in an image.

Starting the pose estimation process is very simple – you will need to navigate to Pre-trained models and select Pose estimation from the drop-down list. Upload your images and that’s it! You can see the results in front of you.

Pre-trained model for pose estimation

By ticking a check-box above your images you can manage the visibility of bounding boxes around the predictions. Finally, the results can then be downloaded as images with poses or in JSON format. 

Download pose predictions in JSON format
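The downloaded JSON can then be processed programmatically. The exact schema is documented in the SentiSight.ai user guides; the structure assumed below – a list of detected people, each with a list of labelled keypoints – is an illustrative guess, not the platform’s confirmed format:

```python
import json

# Illustrative multi-person prediction in an assumed JSON layout.
sample = """
[
  {"keypoints": [
    {"label": "nose", "x": 120, "y": 80, "score": 0.98},
    {"label": "left_wrist", "x": 95, "y": 210, "score": 0.87}
  ]},
  {"keypoints": [
    {"label": "nose", "x": 310, "y": 75, "score": 0.95}
  ]}
]
"""

people = json.loads(sample)
print(len(people))  # number of people detected in the image: 2
for i, person in enumerate(people):
    labels = [kp["label"] for kp in person["keypoints"]]
    print(i, labels)
```

Once parsed, the per-person keypoint lists can feed directly into visualisation or analytics code.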

As with all of our pre-trained models, the pose estimation model is accessible via the SentiSight.ai web interface or via the REST API, which is explained in more detail in our user guides. 

Estimating human pose is advantageous in a wide variety of use cases, as mentioned above: improving technique of movement and minimizing injury in health and fitness, recognizing sign language in service institutions, training robots to recognize distress or pain in the medical field, and many more. The technology will also be at the core of humanoid robotics once it matures to the point where it can be widely adopted in practice.

You can start contributing to the future of technology by bringing the AI-fueled innovations introduced by SentiSight.ai into your own projects.

]]>
AI Image Recognition Use Cases: 6 Different Industry Examples https://www.sentisight.ai/ai-based-image-recognition-6-different-industry-use-cases/ Mon, 15 Mar 2021 07:06:53 +0000 https://www.sentisight.ai/?p=6703 Since the dawn of artificial intelligence, image recognition has been recognised as one of the most promising and beneficial applications of the technology. Closely linked to computer vision, image recognition is the interdisciplinary computer science field concerned with a computer’s ability to identify and understand the content of images. Nowadays, most image recognition tasks are performed using deep learning algorithms.

The development of the image recognition industry is twofold. As the technological capabilities develop, so too does the accessibility and usability of the neural networks to the wider population.

Gone are the days when only skilled AI and ML professionals could use image recognition models. Thanks to intuitive and user-friendly platforms such as SentiSight.ai’s AI image recognition tools, these models can now be trained for a wide variety of use cases. 

The image recognition use cases available via SentiSight.ai are virtually endless, as the platform allows you, the user, to train your own models to suit your own requirements. Once you have a collection of relevant images (known as the dataset), simply follow the step-by-step instructions, including image annotation, to train the image recognition model yourself. Once trained, the model can be applied to whatever task it was trained for.

So, with that all being said, here are six of the best image recognition applications across a variety of industries:

  • Defect Detection (Manufacturing)
  • Medicine
  • Agriculture
  • Content Moderation
  • Advertising
  • Retail

1. Defect Detection (Manufacturing)

Industrial manufacturing is an ever-growing field, with demand increasing as the technological age advances. Large-scale production is required to keep up with consumers’ needs, and this is not without its challenges.

The safety of consumers and workers is paramount to manufacturers, but this safety can be compromised when products with undetected defects leave the line. These defects, however small, may grow over time and ultimately lead to the failure of the component, with significant implications for all involved.

For instance, factories producing cookies for retail consumers need to ensure each cookie is the same size and, most importantly, of consistent quality. Image recognition models can be trained and deployed to detect cookies that deviate from the norm so that they can be pulled from the production line. Installing these models at different sections along the production line increases the effectiveness and efficiency of the process. Check out our model in action at our partner’s website Foldsolutions.

Conventional inspection processes rely on visual observation by humans – a time-inefficient approach that is also subjective, influenced by prior knowledge and expectations. Inevitably, some defective products not fit for the consumer market will go unnoticed due to human error, so it is crucial to implement technologies that reduce the chance of this happening.

This is where image recognition comes into its own. Models can be trained on an image database of correct products so that any item not fit for use can be identified through defect detection. Over time, these images can be grouped by identifying and classifying defects, not only allowing manufacturers to fix issues efficiently but also freeing up human expertise to be deployed elsewhere along the production line.
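The "database of correct products" idea can be sketched as a nearest-reference check: a product is flagged when its feature vector sits far from every known-good example. In practice the features would come from a trained image recognition model; the tiny vectors and the distance threshold below are purely illustrative:

```python
import numpy as np

def is_defective(features, reference_features, max_distance=1.0):
    """Flag a product as defective if its feature vector is farther than
    max_distance from every known-good reference vector."""
    distances = np.linalg.norm(reference_features - features, axis=1)
    return bool(distances.min() > max_distance)

# Reference "good cookie" embeddings and two items off the line.
good = np.array([[1.0, 0.0], [0.9, 0.1]])
print(is_defective(np.array([0.95, 0.05]), good))  # False: close to the norm
print(is_defective(np.array([-2.0, 3.0]), good))   # True: far from every reference
```

Real deployments would use high-dimensional embeddings and a threshold tuned on labelled defect data, but the decision rule is the same.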

AI image recognition use cases include defect detection in manufacturing

2. Medicine

The key benefit AI image recognition software has in the medical field is detecting anomalies in tissues, including tumours, bone cracks or various types of cancer. This software can thereby enable hospitals to speed up the diagnosis process as well as avoid human error in spotting abnormalities at an early stage. The same technology can also be implemented in medical training by allowing trainees to better spot and diagnose diseases.

SentiSight’s image recognition capabilities can also be used to flag potential irregularities that may indicate certain diseases and inform the person in case they need to seek medical advice. For instance, an image of a fragment of human skin with a birthmark can be analysed for signs of melanoma (skin cancer). If the scan identifies a potential risk of cancer, the person can be referred to a doctor for further testing.

Along similar lines, the tool can be employed by military medical units, especially in remote areas. Various clinical studies contend that army doctors in overseas military units often misdiagnose patients from MRI scans. A well-trained SentiSight model is a great image recognition application within medicine because it would improve the accuracy of MRI interpretation, leading to more accurate diagnoses and a more efficient course of treatment.

Detecting anomalies in tissues using AI image recognition

3. Agriculture

Agricultural farms are another beneficial image recognition use case. The technology can identify which plants need watering and even spot plant diseases, insects, and worms. This effectively reduces the need for human intervention, which usually requires the time-consuming process of checking plants individually. By identifying plant diseases and parasites at an early stage, SentiSight’s detection and recognition technologies can help protect crops from the very early stages until the harvest. 

Thanks to the taxonomies that result from training models via SentiSight, the timing of the harvest can be determined by comparing raw and ripe crops. This helps identify the areas of a farm that are ready to be harvested and compare the ripening process across different areas or farms. Finally, SentiSight’s image recognition software can be part of an agile ecosystem involving the seamless coordination of an array of machines powered by artificial intelligence. For example, AI image recognition models can identify weeds among the crops after harvesting; other machines can then eliminate those weeds from the harvested crop faster than current methods allow.
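Comparing the ripening process across areas reduces to simple aggregation once a model has classified each image. A minimal sketch, assuming hypothetical per-image "ripe"/"raw" labels from such a model:

```python
from collections import Counter

def ripeness_by_area(predictions):
    """predictions: list of (area, label) pairs, where label is the model's
    'ripe' or 'raw' classification for one image.
    Returns the share of ripe images per area."""
    totals, ripe = Counter(), Counter()
    for area, label in predictions:
        totals[area] += 1
        if label == "ripe":
            ripe[area] += 1
    return {area: ripe[area] / totals[area] for area in totals}

# Hypothetical per-image classifications from two fields.
preds = [("north", "ripe"), ("north", "ripe"), ("north", "raw"),
         ("south", "raw"), ("south", "ripe")]
print(ripeness_by_area(preds))  # north is about two-thirds ripe, south half
```

An area crossing a chosen ripeness threshold could then be scheduled for harvest ahead of the others.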

The agriculture industry benefits from being an image recognition application

4. Content moderation

The AI image recognition examples via SentiSight.ai extend into the field of content moderation, scanning large amounts of online content and filtering it through automated moderation at a rapid pace. Pre-trained models via SentiSight are an efficient, effective and ready-to-implement solution. The software can automatically detect graphic content (such as guns and nudity), allowing quicker content alteration or removal. For example, AI image recognition can facilitate the management of social media sites by ensuring that all content complies with the websites’ guidelines.

SentiSight’s image recognition model can also identify specific products in various images. This function can be used to speed up the editing process by finding specific images in large databases. For instance, if the company desires to alter several adverts by changing a specific product displayed on the image, this software can assist the process by identifying and separating all images with a particular item that needs changing.

Content moderation is a popular AI image recognition example

5. Advertising

SentiSight’s image recognition models can be trained to provide an effective analytical tool for measuring brand awareness and exposure. By training a model to recognise logos, SentiSight can measure the prominence and frequency of branded content in photographs – an innovative way to analyse the impact of public branded content such as billboards or sports team sponsorship.

Another way to employ this image recognition application for advertising purposes is by scanning online images to find similar-looking items for sale via an image similarity model. Object detection models can identify the objects in images and find similar items on e-commerce sites, which in turn increases online traffic and sales. For instance, social media users can search for items they see on Instagram, Facebook, or Pinterest. Such a tool can also be employed to identify adverts on specific sites, like social media: it scans images and flags visible adverts in them, even if they are not marked as adverts by the publisher. Server moderators can therefore gather more in-depth data about the content on their sites and better analyse the behaviour of site users.
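Measuring brand exposure from a logo detector boils down to tallying confident detections across a photo set. A minimal sketch, with made-up brand names and a hypothetical confidence cut-off:

```python
from collections import Counter

def brand_exposure(detections_per_image, min_score=0.6):
    """Count confident logo detections per brand across a set of images.

    detections_per_image: list of lists of (brand, score) pairs, one inner
    list per photograph, as an object detection model might return them.
    """
    counts = Counter()
    for detections in detections_per_image:
        for brand, score in detections:
            if score >= min_score:
                counts[brand] += 1
    return counts

# Hypothetical detections from three stadium photographs.
images = [[("acme", 0.9), ("globex", 0.7)],
          [("acme", 0.4)],              # below the confidence cut-off
          [("acme", 0.8), ("acme", 0.95)]]
print(brand_exposure(images))  # Counter({'acme': 3, 'globex': 1})
```

The same tally, combined with timestamps or bounding-box areas, could estimate how long and how prominently each sponsor appeared.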

An effective analytical tool to measure brand awareness and exposure is a useful AI image recognition use case

6. Retail

Image recognition use cases extend to the retail sector as well. The technology can be implemented within self-checkout stations to scan and identify products. In doing so, it can identify items and count the exact number of certain products in the basket (this usually applies to fresh fruit and vegetables).

Utilisation of AI image recognition software can also contribute to the creation of a hassle-free and automated checkout process. A range of models could be integrated within smartphone apps allowing each shopper to scan grocery products in the aisles, right after taking them off the shelf. Image recognition then identifies the item and automatically adds it to the basket on the app. The transaction could be completed in app instead of the checkout till or the cashier which, on top of being efficient, also contributes to a safer and more socially distanced shopping experience.
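The in-app checkout flow described above amounts to mapping each recognised product label to a catalogue entry and keeping a running basket. A minimal sketch; the product names and prices are invented for illustration:

```python
from collections import Counter

# Hypothetical price list; in a real app this would come from the store's
# product catalogue, keyed by the model's predicted label.
PRICES = {"apple": 0.40, "banana": 0.25, "milk": 1.10}

def build_basket(recognised_items):
    """Turn a stream of recognised product labels into a basket and total."""
    basket = Counter(recognised_items)
    total = sum(PRICES[item] * qty for item, qty in basket.items())
    return basket, round(total, 2)

basket, total = build_basket(["apple", "apple", "banana", "milk"])
print(dict(basket), total)  # {'apple': 2, 'banana': 1, 'milk': 1} 2.15
```

A production version would also need to handle unrecognised items and low-confidence scans, falling back to a manual lookup at the till.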

Additionally, grocery stores could use cameras and image recognition machine learning technologies to improve and automate store management. The software could scan an image of a shelf and deduce various data, such as shortages of certain products, misplaced items or even the use-by dates of certain products. This would help ensure that all out-of-date products are removed from the shelves on time. Defect detection is also a prominent use case of image recognition technology within the retail and consumer goods industry: models can scan for and identify defects along the production line, automating the inspection process to improve efficiency and accuracy.

Image recognition use case for implementation in self checkout

What have we learnt from the AI image recognition examples?

Image recognition models can perform various tasks to increase work efficiency, ensure the safety and well-being of individuals, conduct in-depth analysis, and reduce the potential for human error. They can also work with other machines to create agile AI ecosystems capable of performing a wide array of tasks.

SentiSight.ai’s extensive range of image recognition applications allows you and other users to train your own models, without the need for AI expertise. Our platform has been carefully designed to ensure all available tools are equipped with information to help you distinguish which tools are ideal for training your very own image recognition models.

If you found this article interesting, then you may want to check out our automated AI Alt text generator and how image recognition can benefit the world of online digital marketing as well!

]]>