AI Image Search - Ximilar: Visual AI for Business
https://www3.ximilar.com/blog/tag/ai-image-search/

New Solutions & Innovations in Fashion and Home Decor AI
https://www.ximilar.com/blog/fashion-and-home-updates-2024/ (18 Sep 2024)
Our latest AI innovations for fashion & home include automated product descriptions, enhanced fashion tagging, and home decor search.

Automate Writing of SEO-Friendly Product Titles and Descriptions With Our AI

Our AI-powered Product Description revolutionizes the way you manage your fashion apparel catalogs by fully automating the creation of product titles and descriptions. Instead of spending hours manually tagging and writing descriptions, our AI-driven generator swiftly produces optimized texts, saving you valuable time and effort.

Ximilar automates keyword extraction from your fashion images, enabling you to instantly create SEO-friendly product titles and descriptions, streamlining the inventory listing process.

With the ability to customize style, tonality, format, length, and preferred product tags, you can ensure that each description aligns perfectly with your brand’s voice and SEO needs. This service is designed to streamline your workflow, providing accurate, engaging, and search-friendly descriptions for your entire fashion inventory.

Enhanced Taxonomy for Accessories Product Tagging

We’ve upgraded our taxonomy for accessories tagging. For sunglasses and glasses, you can now get tags for frame types (Frameless, Fully Framed, Half-Framed), materials (Combined, Metal, Plastic & Acetate), and shapes (Aviator, Cat-eye, Geometric, Oval, Rectangle, Vizor/Sport, Wayfarer, Round, Square). Try how it works on your images in our public demo.

Our tags for accessories cover all visual features from materials to patterns or shapes.

Automate Detection & Tagging of Home Decor Images With AI

Our new Home Decor Tagging service streamlines the process of categorizing and managing your home decor product images. It uses advanced recognition technology to automatically assign categories, sub-categories, and tags to each image, making your product catalog more organized. You can customize the tags and choose translations to fit your needs.

Try our interactive home decor detection & tagging demo.

The service also offers flexibility with custom profiles, allowing you to rename tags or add new ones based on your requirements. For pricing details and to see the service in action, check our API documentation or contact our support team for help with custom tagging and translations.

Visual Search for Home Decor: Find Products With Real-Life Photos

With our new Home Decor Search service, customers can use real-life photos to find visually similar items from your furniture and home decor catalogue.

Our tool integrates four key functionalities: home decor detection, product tagging, colour extraction, and visual search. It allows users to upload a photo, which the system analyzes to detect home decor items and match them with similar products from your inventory.

Our Home Decor Search tool suggests similar alternatives from your inventory for each detected product.

To use Home Decor Search, you first sync your database with Ximilar’s cloud collection. This involves processing product images to detect and tag items, and discarding the images immediately after. Once your data is synced, you can perform visual searches by submitting photos and retrieving similar products based on visual and tag similarity.

The API allows for customized searches, such as specifying exact objects of interest or integrating custom profiles to modify tag outputs. For a streamlined experience, Ximilar offers options for automatic synchronization and data mapping, ensuring your product catalog remains up-to-date and accurate.
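For developers, the workflow above roughly maps to two API calls: one to insert (sync) products into the cloud collection and one to query it with a photo. The Python sketch below is only an illustration; the endpoint paths, payload fields, and the example image URL are placeholders (assumptions), so check the API documentation for the real routes.

```python
import requests

# Hypothetical endpoint paths and field names -- check Ximilar's API docs
# for the actual Home Decor Search routes and payload format.
API_BASE = "https://api.ximilar.com"            # assumption
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}

def insert_product(product_record: dict) -> dict:
    """Sync one product (its images and metadata) into the cloud collection."""
    resp = requests.post(
        f"{API_BASE}/home-decor/search/insert",   # placeholder path
        headers=HEADERS,
        json={"records": [product_record]},
    )
    resp.raise_for_status()
    return resp.json()

def search_by_photo(image_url: str, top_k: int = 10) -> dict:
    """Find visually similar home decor items for a real-life photo."""
    resp = requests.post(
        f"{API_BASE}/home-decor/search/search",   # placeholder path
        headers=HEADERS,
        json={"query_record": {"_url": image_url}, "k": top_k},
    )
    resp.raise_for_status()
    return resp.json()

results = search_by_photo("https://example.com/living-room.jpg")
```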

How Fashion Tagging Works and Changes E-Commerce?
https://www.ximilar.com/blog/how-fashion-tagging-works/ (22 May 2024)
An in-depth overview of the key AI tools reshaping the fashion industry, with a focus on automated fashion tagging.

Keeping up with the constantly emerging trends is essential in the fashion industry. Beyond shifts in cuts, materials, and colours, staying updated on technological trends has become equally, if not more, crucial in recent years. Given our expertise in Fashion AI, let’s take a look at the key technologies reshaping the world of fashion e-commerce, with a particular focus on a key Fashion AI tool: automated fashion tagging.

AI’s Impact on Fashion: Turning the Industry on Its Head

The latest buzz in the fashion e-commerce realm revolves around visual AI: from AI-powered fashion design to AI-generated fashion models, as well as the new AI tools that quietly fuel product discovery engines in the background and change our shopping experience, often unnoticed.

Key AI-Powered Technologies in Fashion E-Commerce

So what are the main AI technologies shaking up fashion e-commerce lately? And why is it important to keep up with them?

Recognition, Detection & Data Enrichment in Fashion

In the world of fashion e-commerce, time is money. Machine learning techniques now allow fashion e-shops to upload large unstructured collections of images and extract all the necessary information from them within milliseconds. The results of fashion image recognition (tags/keywords) serve various purposes like product sorting, filtering, searching, and also text generation.

AI can automatically assign relevant tags and save you a significant amount of money and time, compared to the manual process.

These tools are indispensable for today’s fashion shops and marketplaces, particularly those with extensive stock inventories and large volumes of data. In the past few years, automated fashion tagging has made time-consuming manual product tagging practically obsolete.

Generative AI Systems for Fashion

The fashion world has embraced generative artificial intelligence almost immediately. Utilizing advanced AI algorithms and deep learning, AI can analyze images to extract visual attributes such as styles, colours, and textures, which are then used to generate visually stunning designs and written content. This offers endless possibilities for creating personalized shopping experiences for consumers.

Different attributes extracted by automated product tagging can directly serve as keywords for product titles and descriptions. You can set the tonality and length, or choose important attributes to be mentioned in the texts.

Our AI also enables you to automate the writing of all product titles and product descriptions via API, directly utilizing the product attributes extracted with deep tagging and letting you select the tone, length, and other rules to get SEO-friendly texts quickly. We’ll delve deeper into this later on.

Fashion Discovery Engines and Recommendation Systems

Fashion search engines and personalized recommendations are game-changers in online shopping. They are powered by our speciality: visual search. This technology analyzes images in depth to capture their essence and search vast product catalogs for identical or similar products. Three of its endless uses are indispensable for fashion e-commerce: similar items recommendations, reverse image search and image matching.

Personalized experiences and product recommendations are essential for high customer engagement.

Visual search enables shoppers to effortlessly explore new styles, find matching pieces, and stay updated on trends. It allows you to have your own visual search engine that rapidly scans image databases with millions of images to provide relevant and accurate search results within milliseconds. This not only saves you time but also ensures that every purchase feels personalized.

Shopping Assistants in Fashion E-Commerce and Retail

The AI-driven assistants guide shoppers towards personalized outfit choices suited for any occasion. Augmented Reality (AR) technology allows shoppers to virtually try on garments before making a purchase, ensuring their satisfaction with every selection. Personalized styling advice and virtual try-ons powered by artificial intelligence are among the hottest trends developed for fashion retailers and fashion apps right now.

Both fashion tags for occasions extracted with our automated product tagging, as well as similar item recommendations, are valuable in systems that assist customers in dressing appropriately for specific events.

My Fashion Website Needs AI Automation, What Should I Do?

Consider the Needs of Your Shoppers

To provide the best customer experience possible, always take into account your shoppers’ demographics, geographical location, language preferences, and individual styles.

Predicting style, however, is not an easy task. By utilizing AI, you can analyze various factors such as user preferences, personal style, favoured fashion brands, liked items, items in shopping baskets, and past purchases. Think about how to help shoppers discover items aligned with their preferences and receive only relevant suggestions that inspire rather than overwhelm them.

There are endless ways to improve a fashion e-shop. Always keep in mind not to overwhelm the visitors, and streamline your offer to the most relevant items.

While certain customer preferences can be manually set up by users when logging into an app or visiting an e-commerce site, such as preferred sizes, materials, or price range, others can be predicted. For example, design preferences can be inferred based on similarities with items visitors have browsed, liked, saved, or purchased.

Three Simple Steps to Elevate Your Fashion Website With AI

Whether you run a fashion or accessories e-shop, or a vintage fashion marketplace, using these essential AI-driven features could boost your traffic, improve customer engagement, and get you ahead of the competition.

Automate Product Tagging & Text Generation

The image tagging process is fueled by specialised object detection and image recognition models, ensuring consistent and accurate tagging, without the need for any additional information. Our AI can analyze product images, identify all fashion items, and then categorize and assign relevant tags to each item individually.

In essence, you input an unstructured collection of fashion images and receive structured metadata, which you can immediately use for searching, sorting, filtering, and product discovery on your fashion website.

Automated fashion tagging relies on neural networks and deep learning techniques. The product attributes are only assigned with a certain level of confidence, highlighted in green in our demo.

The keywords extracted by AI can serve right away to generate captivating product titles and descriptions using a language model. With Ximilar, you can pre-set the tone and length, and even set basic rules for AI-generated texts tailored for your website. This automates the entire product listing process on your website through a single API integration.

Streamline and Automate Collection Management With AI

Visual AI is great for inventory management and product gallery assembling. It can recognize and match products irrespective of lighting, format, or resolution. This enables consistent image selection for product listings and galleries.

You can synchronise your entire fashion apparel inventory via API to ensure continual processing by up-to-date visual AI. You can either set the frequency of synchronization (e.g., the first day of each month) or schedule the synchronization to run every time you add new items to the collection.

A large fashion e-commerce store can list tens of thousands of items, with millions of fashion images. AI can sort images in product galleries and references based purely on visual attributes.

For example, you can showcase all clothing items on models in product listings or display all accessories as standalone photos in the shopping cart. Additionally, you can automate tasks like removing duplicates and sorting user-generated visual content, saving a lot of valuable time. Moreover, AI can be used to quickly spot inappropriate and harmful content.

Provide Relevant Suggestions & Reverse Image Search

During your collection synchronisation, visual search processes each image and each product in it individually. It precisely analyzes various visual features, such as colours, patterns, edges and other structures. Apart from the inventory curation, this will enable you to:

  1. Have your custom fashion recommendation system. You can provide relevant suggestions from your inventory anywhere across the customer journey, from the start page to the cart.
  2. Improve your website or app with a reverse image search tool. Your visitors can search with smartphone photos, product images, pictures from Pinterest, Instagram, screenshots, or even video content.
Looking for a specific dress? Reverse image search can provide relevant results to a search query, independent of the quality or source of the images.

Since fashion detection, image tagging and visual search are the holy trinity of fashion discovery systems, we’ve integrated them into a single service called Fashion Search. Check out my article Everything You Need to Know About Fashion Search to learn more.

Visual search can match images, independent of their origin (e.g., professional images vs. user-generated content), quality and format. We can customize it to fit your collection, even for vintage pieces, or niche fashion brands. For a firsthand experience of how basic fashion visual search operates, check out our free demo.

How Does the Automated Fashion Tagging Work?

Let’s take a closer look at the basic AI-driven tool for the fashion industry: automated fashion tagging. Our product tagging is powered by a complex hierarchy of computer vision models that work together to detect and recognize all fashion products in an image. Then, each product gets one category (e.g., Clothing), one or more subcategories (e.g., Evening dresses or Cocktail dresses), and a varied set of product tags.

To name a few, fashion tags describe the garment’s type, cut, fit, colours, material, or patterns. For shoes, there are features such as heels, toes, materials, and soles. Other categories are for instance jewellery, watches, and accessories.

In the past, assigning relevant tags and texts to each product was a labor-intensive process, slowing down the listing of new inventory on fashion sites. Image tagging solved this issue and lowered the risk of human error.

The fashion taxonomy encompasses hundreds of product tags for all typical categories of fashion apparel and accessories. Nevertheless, we continually update the system to keep up with emerging trends in the fashion industry. Custom product tags, personal additions, taxonomy mapping, and languages other than the default English are also welcomed and supported. The service is available online – via API.

How Do I Use the Automated Fashion Tagging API?

You can seamlessly integrate automated fashion tagging into basically any website, store, system, or application via REST API. I’d suggest taking these steps first:

First, log into Ximilar App – After you register in the Ximilar App, you will get a unique API authentication token for your private connection. The App has many useful functions, which are summarised here. In the past, I wrote this short overview that could be helpful when navigating the App for the first time.

If you’d like to try creating and training your own additional machine learning models without coding, you can also use Ximilar App to approach our computer vision platform.

Secondly, select your plan – Use the API credit consumption calculator to estimate your credit consumption and optimise your monthly supply. This ensures your credit consumption aligns with the actual traffic on your website or app, maximizing efficiency.

Use Ximilar’s credit consumption calculator to optimise your monthly supply.

And finally, connect to API – The connection process is described step by step in our API documentation. For a quick start, I suggest checking out First Steps, Authentication & Image Data. Automated Fashion Tagging has dedicated documentation as well. However, don’t hesitate to reach out anytime for guidance.
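To make the last step more concrete, here is a minimal Python sketch of a tagging request using the requests library. The endpoint path, payload fields, and response keys are assumptions for illustration only; the authoritative names are in the Automated Fashion Tagging documentation.

```python
import requests

# Assumed endpoint path and field names -- verify them against the
# Automated Fashion Tagging API documentation before using in production.
ENDPOINT = "https://api.ximilar.com/tagging/fashion/v2/tags"   # assumption
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}

payload = {"records": [{"_url": "https://example.com/dress.jpg"}]}
response = requests.post(ENDPOINT, headers=HEADERS, json=payload)
response.raise_for_status()

for record in response.json().get("records", []):
    # Each returned record is expected to carry the detected categories and tags.
    print(record.get("_tags"))
```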

Do You Need Help With the Setup?

Our computer vision specialists are ready to assist you with even the most challenging tasks. We also welcome all suggestions and custom inquiries to ensure our solutions meet your unique needs. And if you require a custom solution, our team of developers is happy to help.

We also offer personalized demos on your data before the deployment, and can even provide dedicated server options or set up offline solutions. Reach out to us via live chat for immediate assistance and our team will guide you through the entire process. Alternatively, you can contact us via our contact page, and we will get back to you promptly.

How to Build a Good Visual Search Engine?
https://www.ximilar.com/blog/how-to-build-a-good-visual-search-engine/ (9 Jan 2023)
Let's take a closer look at the technology behind visual search and the key components of visual search engines.

Visual search is one of the most-demanded computer vision solutions. Our team at Ximilar has been actively developing the best general multimedia visual search engine for retailers, startups, as well as bigger companies that need to process a lot of images, video content, or 3D models.

However, a universal visual search solution is not the only thing that customers around the world will require in the future. Especially smaller companies and startups now more often look for custom or customizable visual search solutions for their sites & apps, built in a short time and for a reasonable price. What does creating a visual search engine actually look like? And can a visual search engine be built by anyone?

This article should provide a bit deeper insight into the technology behind visual search engines. I will describe the basic components of a visual search engine, analyze approaches to machine learning models and their training datasets, and share some ideas, training tips, and techniques that we use when creating visual search solutions. Those who do not wish to build a visual search from scratch can skip right to Building a Visual Search Engine on a Machine Learning Platform.

What Exactly Does a Visual Search Engine Mean?

The technology of visual search in general analyses the overall visual appearance of the image or a selected object in an image (typically a product), observing numerous features such as colours and their transitions, edges, patterns, or details. It is powered by AI trained specifically to understand the concept of similarity the way you perceive it.

In a narrower sense, visual search usually refers to a process in which a user uploads a photo that is then used as an image search query by a visual search engine. This engine in turn provides the user with either identical or similar items. You can find this technology under terms such as reverse image search, search by image, or simply photo & image search.

However, reverse image search is not the only use of visual search. The technology has numerous applications. It can search for near-duplicates, match duplicates, or recommend more or less similar images. All of these visual search tools can be used together in an all-in-one visual search engine, which helps internet users find, compare, match, and discover visual content.

And if you combine these visual search tools with other computer vision solutions, such as object detection, image recognition, or tagging services, you get a quite complex automated image-processing system. It will be able to identify images and objects in them and apply both keywords & image search queries to provide as relevant search results as possible.

Different computer vision systems can be combined on Ximilar platform via Flows. If you would like to know more, here’s an article about how Flows work.

Typical Visual Search Engines: Google Lens & Pinterest Lens

Big visual search industry players such as Shutterstock, eBay, Pinterest (Pinterest Lens), or Google (Google Lens & Google Images) have already implemented visual search engines, as well as other advanced, yet hidden algorithms to satisfy the increasing needs of online shoppers and searchers. It is predicted that a majority of big companies will implement some form of AI in their everyday processes in the next few years.

The Algorithm for Training Visual Similarity

The Components of a Visual Search Tool

Multimedia search engines are very powerful systems consisting of multiple parts. The first key component is storage (database). It wouldn’t be exactly economical to store the full sample (e.g., .jpg image or .mp4 video) in a database. That is why we do not store any visual data for visual search. Instead, we store just a representation of the image, called a visual hash.

The visual hash (also visual descriptor or embedding) is basically a vector, representing the data extracted from your image by the visual search. Each visual hash should be a unique combination of numbers to represent a single sample (image). These vectors also have some mathematical properties, meaning you can compare them, e.g., with cosine, hamming, or Euclidean distance.

So the basic principle of visual search is: the more similar the images are, the more similar will their vector representations be. Visual search engines such as Google Lens are able to compare incredible volumes of images (i.e., their visual hashes) to find the best match in a hundred milliseconds via smart indexing.
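As a minimal illustration of this principle, the toy snippet below ranks a few hand-made 4-dimensional "visual hashes" by cosine distance to a query vector. Real embeddings have hundreds or thousands of dimensions and are compared through an index rather than a loop.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two embedding vectors (0 = identical direction)."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.12, 0.85, 0.33, 0.41])        # toy 4-D visual hash
catalog = {
    "red-sneaker": np.array([0.10, 0.80, 0.35, 0.45]),
    "blue-jacket": np.array([0.90, 0.05, 0.60, 0.10]),
}

# Rank catalog items by how close their vectors are to the query vector.
ranked = sorted(catalog.items(), key=lambda kv: cosine_distance(query, kv[1]))
print(ranked[0][0])   # the most visually similar item
```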

How to Create a Visual Hash?

The visual hashes can be extracted from images by standard algorithms such as PHASH. However, the era of big data gives us a much stronger model for vector representation – a neural network. A simple overview of the image search system built with a neural network can look like this:

Extracting visual vectors with the neural network and searching with them in a similarity collection.

This neural network was trained on images from a website selling cosmetics. Here, it extracted the embeddings (vectors), and they were stored in a database. Then, when a customer uploads an image to the visual search engine on the website, the neural network will extract the embedding vector from this image as well, and use it to find the most similar samples.

Of course, you could also store other metadata in the database, and do advanced filtering or add keyword search to the visual search.
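A minimal sketch of the embedding-extraction step might look like the following, using an off-the-shelf pretrained ResNet-50 from torchvision instead of a model fine-tuned on the shop's data; the idea of stripping the classification head and keeping the vector is the same.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ResNet-50 with the classification layer removed, so the forward
# pass returns a 2048-D embedding instead of class scores.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    vector = backbone(image)[0]
    return vector / vector.norm()          # L2-normalize for cosine search

# e.g. store embed("lipstick.jpg") in your vector database
```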

Types of Neural Networks

There are several basic architectures of neural networks that are widely used for vector representations. You can encode almost anything with a neural network. The most common for images is a convolutional neural network (CNN).

There are also special architectures to encode words and text. Lately, so-called transformer neural networks are starting to be more popular for computer vision as well as for natural language processing (NLP). Transformers use a lot of new techniques developed in the last few years, such as an attention mechanism. The attention mechanism, as the name suggests, is able to focus only on the “interesting” parts of the image & ignore the unnecessary details.

Training the Similarity Model

There are multiple methods to train models (neural networks) for image search. First, we should know that training of machine learning models is based on your data and loss function (also called objective or optimization function).

Optimization Functions

The loss function usually computes the error between the output of the model and the ground truth (labels) of the data. This error value is used for adjusting the weights of the model. The model can be interpreted as a function and its weights as parameters of this function. Therefore, if the value of the loss function is big, you should adjust the weights of the model.

How it Works

The model is trained iteratively, taking subsamples of the dataset (batches of images) and going over the entire dataset multiple times. We call one such pass of the dataset an epoch. During one batch analysis, the model needs to compute the loss function value and adjust weights according to it. The algorithm for adjusting the weights of the model is called backpropagation. Training is usually finished when the loss function is not improving (minimizing) anymore.
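In code, the described loop (epochs, batches, loss computation, backpropagation) boils down to a few lines. This is a generic PyTorch sketch, not our production training code.

```python
import torch

def train(model, loader, loss_fn, epochs=10, lr=1e-4):
    """Generic training loop: iterate over batches, compute the loss,
    backpropagate, and adjust the weights."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):                 # one pass over the data = epoch
        running = 0.0
        for images, labels in loader:           # one subsample = batch
            optimizer.zero_grad()
            outputs = model(images)
            loss = loss_fn(outputs, labels)     # error vs. ground truth
            loss.backward()                     # backpropagation
            optimizer.step()                    # adjust the weights
            running += loss.item()
        print(f"epoch {epoch}: loss {running / len(loader):.4f}")
        # in practice, stop when the loss is no longer improving
```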

We can divide the methods (based on loss function) depending on the data we have. Imagine that we have a dataset of images, and we know the class (category) of each image. Our optimization function (loss function) can use these classes to compute the error and modify the model.

The advantage of this approach is its simple implementation. It’s practically only a few lines in any modern framework like TensorFlow or PyTorch. However, it has also a big disadvantage: the class-level optimization functions don’t scale well with the number of classes. We could potentially have thousands of classes (e.g., there are thousands of fashion products and each product represents a class). The computation of such a function with thousands of classes/arguments can be slow. There could also be a problem with fitting everything on the GPU card.

Loss Function: A Few Tips

If you work with a lot of labels, I would recommend using a pair-based loss function instead of a class-based one. The pair-based function usually takes two or more samples from the same class (i.e., the same group or category). A model based on a pair-based loss function doesn’t need to output prediction for so many unique classes. Instead, it can process just a subsample of classes (groups) in each step. It doesn’t know exactly whether the image belongs to class 1 or 9999. But it knows that the two images are from the same class.

Images can be labelled manually or by a custom image recognition model. Read more about image recognition systems.

The Distance Between Vectors

The picture below shows the data in the so-called vector space before and after model optimization (training). In the vector space, each image (sample) is represented by its embedding (vector). Our vectors have two dimensions, x and y, so we can visualize them. The objective of model optimization is to learn the vector representation of images. The loss function is forcing the model to predict similar vectors for samples within the same class (group).

By similar vectors, I mean that the Euclidean distance between the two vectors is small. The larger the distance, the more different these images are. After the optimization, the model assigns a new vector to each sample. Ideally, the model should maximize the distance between images with different classes and minimize the distance between images of the same class.

Optimization for visual search should maximize the distance of items between different categories and minimize the distance within the category.

Sometimes we don’t know anything about our data in advance, meaning we do not have any metadata. In such cases, we need to use unsupervised or self-supervised learning, about which I will talk later in this article. Big tech companies do a lot of work with unsupervised learning. Special models are being developed for searching in databases. In research papers, this field is often called deep metric learning.

Supervised & Unsupervised Machine Learning Methods

1) Supervised Learning

As I mentioned, if we know the classes of images, the easiest way to train a neural network for vectors is to optimize it for the classification problem. This is a classic image recognition problem. The loss function is usually cross-entropy loss. In this way, the model is learning to predict predefined classes from input images. For example, to say whether the image contains a dog, a cat or a bird. We can get the vectors by removing the last classification layer of the model and getting the vectors from some intermediate layer of the network.

When it comes to the pair-based loss function, one of the oldest techniques for metric learning is the Siamese network (contrastive learning). The name contains “Siamese” because there are two identical models sharing the same weights. In the Siamese network, we need to have pairs of images, which we label based on whether they are or aren’t equal (i.e., from the same class or not). Pairs in the batch that are equal are labelled with 1 and unequal pairs with 0.

In the following image, we can see different batch construction methods that depend on our model: Siamese (contrastive) network, Triplet, or N-pair, which I will explain below.

Each deep learning architecture requires different batch construction methods. For example, Siamese and N-pair require tuples. However, in N-pair, the tuples must be unique.

Triplet Neural Network and Online/Offline Mining

In the Triplet method, we construct triplets of items, two of which (anchor and positive) belong to the same category and the third one (negative) to a different category. This can be harder than you might think because picking the “right” samples in the batch is critical. If you pick items that are too easy or too difficult, the network will converge (adjust weights) very slowly or not at all. The triplet loss function contains an important constant called margin. Margin defines what should be the minimum distance between positive and negative samples.

Picking the right samples in deep metric learning is called mining. We can find optimal triplets via either offline or online mining. The difference is that in offline mining, you find the triplets at the beginning of each epoch.

Online & Offline Mining

The disadvantage of offline mining is that computing embeddings for every sample up front is not very efficient. As the model changes rapidly during the epoch, the precomputed embeddings quickly become obsolete. That’s why online mining of triplets is more popular: the triplets are constructed from each batch right before the model is fitted on it. For more information about mining and batch strategies for triplet training, I would recommend this post.

We can visualize the Triplet model training in the following way. The model is copied three times, but with the same shared weights. Each copy takes one image from the triplet (anchor, positive, negative) and outputs the embedding vector. Then, the triplet loss is computed and the weights are adjusted with backpropagation. After the training is done, the model weights are frozen and the output embeddings are used in the similarity engine. Because the three copies share the same weights, we keep only one model for predicting embedding vectors on images.

Triplet network that takes a batch of anchor, positive and negative images.
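A minimal PyTorch sketch of one such training step could look like this, using the built-in TripletMarginLoss and a single shared-weight model that embeds the anchor, positive, and negative images:

```python
import torch
import torch.nn.functional as F

triplet_loss = torch.nn.TripletMarginLoss(margin=0.2)   # margin = minimum gap

def training_step(model, anchors, positives, negatives, optimizer):
    """One step of triplet training: a single shared-weight model embeds all
    three images, then the loss pushes the negative away from the anchor."""
    optimizer.zero_grad()
    a = F.normalize(model(anchors))
    p = F.normalize(model(positives))
    n = F.normalize(model(negatives))
    loss = triplet_loss(a, p, n)
    loss.backward()
    optimizer.step()
    return loss.item()
```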

N-pair Models

A more modern approach is the N-pair model. The advantage of this model is that you don’t mine negative samples, as with a triplet network; the batch consists of just positive pairs. The negative samples are obtained implicitly through the matrix construction, where all non-diagonal items act as negatives.

You still need to do online mining. For example, you can select a batch with a maximum value of the loss function, or pick pairs that are distant in metric space.

The N-pair model requires a unique pair of items. In the triplet and Siamese model, your batch can contain multiple triplets/pairs from the same class (group).

In our experience, the N-pair model is much easier to fit, and the results are also better than with the triplet or Siamese model. You still need to do a lot of experiments and know how to tune other hyperparameters such as learning rate, batch size, or model architecture. However, you don’t need to work with the margin value in the loss function, as in triplet or Siamese training. The small drawback is that during batch creation, we always need exactly two items per class/product.
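For illustration, here is a compact sketch of the idea behind the N-pair batch construction: a similarity matrix between anchors and positives, where the diagonal holds the positive pairs and every off-diagonal entry acts as a negative. It is a simplified softmax-style variant, not a faithful copy of the loss from the original N-pair paper.

```python
import torch
import torch.nn.functional as F

def n_pair_style_loss(anchors: torch.Tensor, positives: torch.Tensor) -> torch.Tensor:
    """Batch of N (anchor, positive) pairs, one pair per class.
    The N x N similarity matrix is built by a matrix product; diagonal entries
    are the positive pairs, every off-diagonal entry acts as a negative."""
    anchors = F.normalize(anchors, dim=1)
    positives = F.normalize(positives, dim=1)
    logits = anchors @ positives.t()                          # (N, N) similarity matrix
    targets = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, targets)                   # match i-th anchor to i-th positive
```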

Proxy-Based Methods

In the proxy-based methods (Proxy-Anchor, Proxy-NCA, Soft Triple) the model is trying to learn class representatives (proxies) from samples. Imagine that instead of having 10,000 classes of fashion products, we will have just 20 class representatives. The first representative will be used for shoes, the second for dresses, the third for shirts, the fourth for pants and so on.

A big advantage is that we don’t need to work with so many classes and the problems that come with them. The idea is to learn class representatives, and instead of slowly mining “the right samples”, we can use the learned representatives when computing the loss function. This leads to much faster training and convergence of the model. This approach, as always, has some open questions, such as how many representatives to use.
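A simplified sketch of the proxy idea is shown below: one learnable proxy vector per class, and a softmax over cosine similarities that pulls each embedding towards its own proxy. This is in the spirit of Proxy-NCA rather than an exact implementation of any of the published losses.

```python
import torch
import torch.nn.functional as F

class ProxyHead(torch.nn.Module):
    """Simplified proxy-based objective: one learnable proxy vector per class;
    each embedding is pulled towards its class proxy via a softmax over
    cosine similarities (in the spirit of Proxy-NCA, not a faithful copy)."""
    def __init__(self, num_classes: int, dim: int, scale: float = 16.0):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_classes, dim))
        self.scale = scale

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        emb = F.normalize(embeddings, dim=1)
        prox = F.normalize(self.proxies, dim=1)
        logits = self.scale * emb @ prox.t()      # similarity to every proxy
        return F.cross_entropy(logits, labels)
```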

MultiSimilarity Loss

Finally, it is worth mentioning MultiSimilarity Loss, introduced in this paper. MultiSimilarity Loss is suitable in cases when you have more than two items per class (images per product). The authors of the paper are using 5 samples per class in a batch. MultiSimilarity can bring closer items within the same class and push the negative samples far away by effectively weighting informative pairs. It works with three types of similarities:

  • Self-Similarity (the distance between the negative sample and anchor)
  • Positive-Similarity (the relationship between positive pairs)
  • Negative-Similarity (the relationship between negative pairs)

Finally, it is also worth noting that you don’t need to use only one loss function; you can combine multiple loss functions. For example, you can use the Triplet loss together with cross-entropy and MultiSimilarity, or N-pair together with Angular loss. This often leads to better results than a standalone loss function.
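If you don't want to implement these losses yourself, libraries such as pytorch-metric-learning ship ready-made versions. The sketch below assumes that library's API and shows how two objectives can be combined into one weighted loss; the weights are arbitrary starting points you would tune on validation data.

```python
import torch
from pytorch_metric_learning import losses   # third-party library, API assumed

multi_sim = losses.MultiSimilarityLoss(alpha=2, beta=50, base=0.5)
triplet = losses.TripletMarginLoss(margin=0.1)

def combined_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Weighted sum of two metric-learning objectives; the 0.5 weight is an
    # illustrative choice, not a recommended setting.
    return multi_sim(embeddings, labels) + 0.5 * triplet(embeddings, labels)
```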

2) Unsupervised Learning

AutoEncoder

Unsupervised learning is helpful when we have a completely unlabelled dataset, meaning we don’t know the classes of our images. These methods are very interesting because the annotation of data can be very expensive and time-consuming. The simplest unsupervised approach is to use some form of AutoEncoder.

AutoEncoder is a neural network consisting of two parts: an encoder, which encodes the image to the smaller representation (embedding vector), and a decoder, which is trying to reconstruct the original image from the embedding vector.

After the whole model is trained, and the decoder is able to reconstruct the images from smaller vectors, the decoder part is discarded and only the encoder part is used in similarity search engines.

Simple AutoEncoder neural network for learning embeddings via reconstruction of the image.
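A toy PyTorch AutoEncoder for 64×64 images might look like this; after training on reconstruction loss, only the encoder output (the embedding) would be stored in the search collection:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Tiny convolutional AutoEncoder: the encoder compresses the image into an
    embedding, the decoder reconstructs the image from it. After training,
    only the encoder is kept for the similarity engine."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(), nn.Linear(32 * 16 * 16, dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                    # the embedding vector
        return self.decoder(z), z

model = AutoEncoder()
images = torch.rand(8, 3, 64, 64)              # dummy batch of 64x64 RGB images
reconstruction, embeddings = model(images)
loss = nn.functional.mse_loss(reconstruction, images)
```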

There are many other solutions for unsupervised learning. For example, we can train an AutoEncoder architecture to colourize images. In this technique, the input image has no colour and the decoding part of the network tries to output a colourful image.

Image Inpainting

Another technique is image inpainting, where we remove parts of the image and the model learns to fill them back in. Other interesting self-supervised tasks train a model to solve jigsaw puzzles or to predict the correct ordering of video frames.

Then there are more advanced unsupervised models like SimCLR, MoCo, PIRL, SimSiam or GAN architectures. All these models try to internally represent images so their outputs (vectors) can be used in visual search systems. The explanation of these models is beyond the scope of this article.

Tips for Training Deep Metric Models

Here are some useful tips for training deep metric learning models:

  • Batch size plays an important role in deep metric learning. Some methods such as N-pair should have bigger batch sizes. Bigger batch sizes generally lead to better results, however, they also require more memory on the GPU card.
  • If your dataset has a bigger variation and a lot of classes, use a bigger batch size for Multi-similarity loss.
  • The most important part of metric learning is your data. It’s a pity that most research, as well as articles, focus only on models and methods. If you have a large collection with a lot of products, it is important to have a lot of samples per product. If you have fewer classes, try to use some unsupervised method or cross-entropy loss and do heavy augmentations. In the next section, we will look at data in more depth.
  • Try to start with a pre-trained model and tune the learning rate.
  • When using Siamese or Triplet training, try to play with the margin term; all modern frameworks will allow you to change it (make it harder) during the training.
  • Don’t forget to normalize the output of the embedding if the loss function requires it. Because we are comparing vectors, they should be normalized so that the norm of each vector is always 1. This way, we are able to compute Euclidean or cosine distances (see the short sketch after this list).
  • Use advanced methods such as MultiSimilarity with big batch size. If you use Siamese, Triplet, or N-pair, mining of negatives or positives is essential. Start with easier samples at the beginning and increase the challenging samples every epoch.
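The normalization tip from the list above is a one-liner in PyTorch:

```python
import torch
import torch.nn.functional as F

embeddings = torch.randn(4, 128)                  # a dummy batch of 128-D vectors
normalized = F.normalize(embeddings, p=2, dim=1)  # every row now has norm 1
print(normalized.norm(dim=1))                     # tensor of ones
```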

Neural Text Search on Images with CLIP

Up to now, we have been talking purely about images and searching images with image queries. However, a common use case is to search a collection of images with a text input, like we do with Google or Bing. This is also called the text-to-image problem, because we need to transform the text representation into the same vector space as the images. Luckily, researchers from OpenAI developed a simple yet powerful architecture called CLIP (Contrastive Language-Image Pre-training). The concept is simple: instead of training on pairs of images (as in Siamese or N-pair models), we train two models (one for images and one for text) on pairs of images and texts.

The architecture of the CLIP model by OpenAI. Image source: GitHub.

You can train a CLIP model on a dataset and then use it on your images (or videos) collection. You are able to find similar images/products or try to search your database with a text query. If you would like to use a CLIP-like model on your data, we can help you with the development and integration of the search system. Just contact us at care@ximilar.com, and we can create a search system for your data.
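For illustration, here is a minimal sketch of text-to-image scoring with OpenAI's published clip package; the catalog file name and the text query are made up, and in practice you would encode the whole catalog once and index the vectors.

```python
import torch
import clip                      # OpenAI's CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode a catalog image once and keep the normalized vector in an index.
image = preprocess(Image.open("catalog/red-evening-dress.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    image_vec = model.encode_image(image)
    image_vec /= image_vec.norm(dim=-1, keepdim=True)

    # A text query is projected into the same vector space.
    tokens = clip.tokenize(["a red evening dress"]).to(device)
    text_vec = model.encode_text(tokens)
    text_vec /= text_vec.norm(dim=-1, keepdim=True)

similarity = (image_vec @ text_vec.t()).item()   # higher = better match
```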

The Training Data for Visual Search Engines

99 % of deep learning models have a very expensive requirement: data. The data should not contain any errors, such as wrong labels, and we should have a lot of it. However, obtaining enough samples can be a problematic and time-consuming process. That is why techniques such as transfer learning and image augmentation are widely used to enrich the datasets.

How Does Image Augmentation Help With Training Datasets?

Image augmentation is a technique allowing you to multiply training images and therefore expand your dataset. When preparing your dataset, proper image augmentation is crucial. Each specific category of data requires unique augmentation settings for the visual search engine to work properly. Let’s say you want to build a fashion visual search engine based strictly on patterns and not the colours of items. Then you should probably employ heavy colour distortion and channel-swapping augmentation (randomly swapping red, green, or blue channels of an image).

On the other hand, when building an image search engine for a shop with coins, you can rotate the images and flip them left-right and upside-down. But what to do if the classic augmentations are not enough? We have a few more options.
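As a sketch, the two augmentation recipes mentioned above could be expressed with torchvision transforms like this; the exact parameter values are illustrative, not tuned.

```python
import torch
import torchvision.transforms as T

def swap_channels(img: torch.Tensor) -> torch.Tensor:
    """Randomly permute the R, G, B channels so the model learns to ignore colour."""
    return img[torch.randperm(3)]

# Pattern-focused fashion search: heavy colour distortion + channel swapping.
fashion_aug = T.Compose([
    T.RandomResizedCrop(224),
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.6, hue=0.3),
    T.ToTensor(),
    T.Lambda(swap_channels),
])

# Coin search: geometry can change freely, so rotations and flips are safe.
coin_aug = T.Compose([
    T.RandomRotation(degrees=180),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.ToTensor(),
])
```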

Removing or Replacing Background

Most of the models that are used for image search require pairs of different images of the same object. Typically, when training product image search, we use an official product photo from a retail site and another picture from a smartphone, such as a real-life photo or a screenshot. This way, we get a pair-based model that understands the similarity of a product in pictures with different backgrounds, lights, or colours.

The difference between a product photo and a real-life image made with a smartphone, both of which are important to use when training computer vision models.

All such photos of the same product belong to an entity which we call a Similarity Group. This way, we can build an interactive tool for your website or app, which enables users to upload a real-life picture (sample) and find the product they are interested in.

Background Removal Solution

Sometimes, obtaining multiple images of the same group can be impossible. We found a way to tackle this issue by developing a background removal model that can distinguish the dominant foreground object from its background and detect its pixel-accurate position.

Once we know the exact location of the object, we can generate new photos of products with different backgrounds, making the training of the model more effective with just a few images.

The background removal can also be used to narrow the area of augmentation only to the dominant item, ignoring the background of the image. There are a lot of ways to get the original product in different styles, including changing saturation, exposure, highlights and shadows, or changing the colours entirely.

Generating more variants can make your model very robust.
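Assuming you already have an RGBA cutout produced by a background removal step, compositing it over new backgrounds is straightforward with Pillow; the file names below are placeholders.

```python
from PIL import Image

def composite_on_backgrounds(cutout_path: str, background_paths: list[str]) -> list[Image.Image]:
    """Paste a background-removed product photo (RGBA cutout with transparent
    background) onto several new backgrounds to multiply the training data."""
    cutout = Image.open(cutout_path).convert("RGBA")
    variants = []
    for bg_path in background_paths:
        background = Image.open(bg_path).convert("RGBA").resize(cutout.size)
        combined = Image.alpha_composite(background, cutout)
        variants.append(combined.convert("RGB"))
    return variants

# e.g. composite_on_backgrounds("sofa_cutout.png", ["wall1.jpg", "studio.jpg"])
```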

Building such an augmentation pipeline with background/foreground augmentation can take hundreds of hours and a lot of GPU resources. That is why we deployed our Background Removal solution as a ready-to-use image tool.

You can use the Background Removal as a stand-alone service for your image collections, or as a tool for training data augmentation. It is available in public demo, App, and via API.

GAN-Based Methods for Generating New Training Data

One of the modern approaches is to use a Generative Adversarial Network (GAN). GANs are incredibly powerful in generating whole new images from some specific domain. You can simply create a model for generating new kinds of insects or making birds with different textures.

Creating new insect images automatically to train an image recognition system? How cool is that? There are endless possibilities with GAN models for basically any image type. [Source]

The greatest advantage of GANs is that you can easily get a lot of new variants, which will make your model very robust. GANs are starting to be widely used in more tasks such as simulations, and I think gathering data will cost much less in the near future because of them. At Ximilar, we used a GAN to create the GAN Image Upscaler, which adds new relevant pixels to images to increase their resolution and quality.

When creating a visual search system on our platform, our team picks the most suitable neural network architecture, loss functions, and image augmentation settings through the analysis of your visual data and goals. All of these are critical for the optimization of a model and the final accuracy of the system. Some architectures are more suitable for specific problems like OCR systems, fashion recommenders, or quality control. The same goes for image augmentation; choosing the wrong settings can ruin the optimization. We have experience with selecting the best tools to solve specific problems.

Annotation System for Building Image Search Datasets

As we can see, a good dataset is definitely one of the key elements for training deep learning models. Obtaining such a collection can be quite expensive and time-consuming. With some of our customers, we built a system that continually gathers the images needed in the training datasets (for instance, through a smartphone app). This feature continually and automatically improves the precision of the deployed search engines.

How does it work? When the new images are uploaded to Ximilar Platform (through Custom Similarity service) either via App or API, our annotators can check them and use them to enhance the training dataset in Annotate, our interface dedicated to image annotation & management of datasets for computer vision systems.

Annotate effectively works with the similarity groups by grouping all images of the same item. The annotator can add the image to a group with the relevant Stock Keeping Unit (SKU), label it as either a product picture or a real-life photo, add some tags, or mark objects in the picture. They can also mark images that should be used for the evaluation and not used in the training process. In this way, you can have two separate datasets, one for training and one for evaluation.

We are quite proud of all the capabilities of Annotate, such as quality control, team cooperation, or API connection. There are not many web-based data annotation apps where you can effectively build datasets for visual search, object detection, as well as image recognition, and which are connected to a whole visual AI platform based on computer vision.

A sneak peek into Annotate – image annotation tool for building visual search and image similarity models.

How to Improve Visual Search Engine Results?

We have already established that the optimization algorithm and the training dataset are the key elements in training your similarity model, and that having multiple images per product significantly increases its quality. The model (a CNN or another modern architecture) is used for embedding (vector) extraction, which determines the quality of the image search.

Over the years that we’ve been training visual search engines for various customers around the world, we have also identified several potential weak spots. Fixing them really helped with both the performance of searches and the relevance of the search results. Let’s take a look at what can improve your visual search engine:

Include Tags

Adding relevant keywords to every image can improve the search results dramatically. We recommend using some basic words that are not synonymous with each other. Poorly chosen keywords for one item would be, for instance, “sky, skyline, cloud, cloudy, building, skyscraper, tall building, a city”, while a good alternative would be “sky, cloud, skyscraper, city”.

Our engine can internally use these tags and improve the search results. You can let an image recognition system label the images instead of adding the keywords manually.

Include Filtering Categories

You can store the main categories of images in their metadata. For instance, in real estate, you can distinguish photos that were taken inside or outside. Based on this, the searchers can filter the search results and improve the quality of the searches. This can also be easily done by an image recognition task.

Include Dominant Colours

Colour analysis is very important, especially when working for a fashion or home decor shop. We built a tool conveniently called Dominant Colors, with several extraction options. The system can extract the main colours of a product while ignoring its background. Searchers can use the colours for advanced filtering.

Use Object Detection & Segmentation

Object detection can help you focus the view of both the search engine and its user on the product, by merely cutting the detected object from the image. You can also apply background removal to search & showcase the products the way you want. For training object detection and other custom image recognition models, you can use our App and Annotate.

Use Optical Character Recognition (OCR)

In some domains, you can have products with text. For instance, wine bottles or skincare products with the name of the item and other text labels that can be read by artificial intelligence, stored as metadata and used for keyword search on your site.

Our visual search engine allows us to combine several features for multimedia search with advanced filtering.

Improve Image Resolution

If the uploaded images from the mobile phones have low resolution, you can use the image upscaler to increase the resolution of the image, screenshot, or video. This way, you will get as much as possible even from user-generated content with potentially lower quality.

Combine Multiple Approaches

Combining multiple features like model embeddings, tags, dominant colours, and text increases your chances of building a solid visual search engine. Our system is able to fuse these different modalities and return the best items accordingly. For example, extracting dominant colours is really helpful in Fashion Search, our service combining object detection, fashion tagging, and visual search.

Search Engine and Vector Databases

Once you have trained your model (neural network), you can extract and store the embeddings of your multimedia items. There are many image search engine implementations able to work with vectors (embedding representations) that you can use, for example, Annoy from Spotify or FAISS from Facebook.

These solutions are open-source (i.e. you don’t have to deal with usage rights) and you can use them for simple solutions. However, they also have a few disadvantages:

  • After the initial build of the search engine database, you cannot perform any update, insert or delete operations. Once you store the data, you can only perform search queries.
  • You are unable to use a combination of multiple features, such as tags, colours, or metadata.
  • There’s no support for advanced filtering for more precise results.
  • You need to have an IT background and coding skills to implement and use them. And in the end, the system must be deployed on some server, which brings additional challenges.
  • It is difficult to extend them for advanced use cases, you will need to learn a complex codebase of the project and adjust it accordingly.
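For comparison, this is roughly what a minimal FAISS setup looks like; the random vectors stand in for real embeddings, and everything beyond plain nearest-neighbour search (tags, filters, colours, updates) is exactly what you would have to build yourself.

```python
import numpy as np
import faiss   # open-source vector search library from Facebook/Meta

dim = 512                                                  # embedding dimensionality
vectors = np.random.rand(10_000, dim).astype("float32")    # stand-in embeddings
faiss.normalize_L2(vectors)                                # cosine similarity via inner product

index = faiss.IndexFlatIP(dim)                             # exact inner-product search
index.add(vectors)                                         # build the collection

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)                       # top-5 most similar items
print(ids[0], scores[0])
```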

Building a Visual Search Engine on a Machine Learning Platform

The creation of a great visual search engine is not an easy task. The mentioned challenges and disadvantages of building complex visual search engines with high performance are the reasons why a lot of companies hesitate to dedicate their time and funds to building them from scratch. That is where AI platforms like Ximilar come into play.

Custom Similarity Service

Ximilar provides a computer vision platform, where a fast similarity engine is available as a service. Anyone can connect via API and fill their custom collection with data and query at the same time. This streamlines the tedious workflow a lot, enabling people to have custom visual search engines fast and, more importantly, without coding. Our image search engines can handle other data types like videos, music, or 3D models. If you want more privacy for your data, the system can also be deployed on your hardware infrastructure.

In all industries, it is important to know what we need from our model and optimize it towards the defined goal. We developed our visual search services with this in mind. You can simply define your data and problem and what should be the primary goal for this similarity. This is done via similarity groups, where you put the items that should be matched together.

Examples of Visual Search Solutions for Business

One of the typical industries that use visual search extensively is fashion. Here, you can look at similarities in multiple ways. For instance, one can simply want to find footwear with a colour, pattern, texture, or shape similar to the product in a screenshot. We built several visual search engines for fashion e-shops and especially price comparators, which combined search by photo and recommendations of alternative similar products.

Based on a long experience with visual search solutions, we deployed several ready-to-use services for visual search: Visual Product Search, a complex visual search service for e-commerce including technologies such as search by photo, similar product recommendations, or image matching, and Fashion Search created specifically for the fashion segment.

Another nice use case is also the story of how we built a Pokémon Trading Card search engine. It is no surprise that computer vision has been recently widely applied in the world of collectibles. Trading card games, sports cards or stamps and visual AI are a perfect match. Based on our customers’ demand, we also created several AI solutions specifically for collectibles.

The Workflow of Building a Visual Search Engine

If you are looking to build a custom search engine for your users, we can develop a solution for you, using our service Custom Image Similarity. This is the typical workflow of our team when working on a customized search service:

  1. Setup, Research & Plan – initial calls, definition of the project, NDA, and agreement on the expected delivery time.

  2. Data – If you don’t provide any data, we will gather it for you. Gathering and curating datasets is the most important part of developing machine learning models. Having a well-balanced dataset without any bias to any class leads to great performance in production.

  3. First prototype – Our machine learning team will start working on the model and collection. You will be able to see the first results within a month. You can test it and evaluate it by yourself via our clickable front end.

  4. Development – Once you are satisfied with the results, we will gather more data and do more experiments with the models. This is an iterative way of improving the model.

  5. Evaluation & Deployment – If the system performs well and meets the criteria set up in the first calls (typically evaluation on a test dataset and speed benchmarks), we work on the deployment. We will show you how to connect to and work with the API for visual similarity (insert, delete, and search endpoints); a rough sketch of such calls follows below.
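
The snippet below is only a sketch of what working with such insert and search endpoints could look like from Python. The base URL, endpoint paths, header format, and payload fields are assumptions made for illustration; the actual contract is defined in the API documentation you receive during deployment.

```python
import requests

# Hypothetical values for illustration only; use the endpoints and token
# provided with your deployment (see the API documentation).
BASE_URL = "https://api.example-visual-search.com/similarity/v2"
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}

def insert_item(item_id: str, image_url: str) -> dict:
    """Add one product image to the search collection."""
    payload = {"records": [{"_id": item_id, "_url": image_url}]}
    response = requests.post(f"{BASE_URL}/insert", json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()

def search_similar(image_url: str, k: int = 10) -> dict:
    """Find the k most visually similar items to the query image."""
    payload = {"query_record": {"_url": image_url}, "k": k}
    response = requests.post(f"{BASE_URL}/search", json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    insert_item("sku-123", "https://example.com/images/sku-123.jpg")
    print(search_similar("https://example.com/uploads/user-photo.jpg", k=5))
```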

If you are interested in knowing more about how cooperation with Ximilar works in general, read our How it works page and contact us anytime.

We are also able to do a lot of additional steps, such as:

  • Managing and gathering more training data continually after the deployment to gradually increase the performance of visual similarity (the usage rights for user-generated content are up to you; keep in mind that we don’t store any physical images).
  • Building a customized model or multiple models that can be integrated into the search engine.
  • Creating & maintaining your visual search collection, with automatic synchronization to always keep up to date with your current stock.
  • Scaling the service to hundreds of requests per second.

Visual Search is Not Only for the Big Companies

I have presented the basic techniques and architectures for training visual similarity models, but of course, there are many more advanced models, and research in this field continues at a rapid pace.

Search engines are practically everywhere. It all started with AltaVista in 1995 and Google in 1998. Now it’s more common to get information directly from Siri or Alexa. Searching for things with visual information is just another step, and we are glad that we can give our clients tools to maximise their potential. Ximilar has a lot of technical experience with advanced search technology for multimedia data, and we work hard to make it accessible to everyone, including small and medium companies.

If you are considering implementing visual search into your system:

  1. Schedule a call with us and we will discuss your goals. We will set up a process for getting the training data that are necessary to train your machine learning model for search engines.

  2. In the following weeks, our machine learning team will train a custom model and build a testable search collection for you.

  3. After meeting all the requirements from the POC, we will deploy the system to production, and you can connect to it via REST API.

The post How to Build a Good Visual Search Engine? appeared first on Ximilar: Visual AI for Business.

]]>
Pokémon TCG Search Engine: Use AI to Catch Them All https://www.ximilar.com/blog/pokemon-card-image-search-engine/ Tue, 11 Oct 2022 12:20:00 +0000 https://www.ximilar.com/?p=4551 With a new custom image similarity service, we are able to build an image search engine for collectible card trading.

The post Pokémon TCG Search Engine: Use AI to Catch Them All appeared first on Ximilar: Visual AI for Business.

]]>
Have you played any trading card games? As an elementary school student, I remember spending hundreds of hours playing Lord of the Rings TCG with my friend. Back then, LOTR was in the cinemas, and the game was simply fantastic, with beautiful pictures from movies. I still remember my deck, played with a combination of Ents/Gondor and Nazguls.

Other people in our office spent their youth playing Magic The Gathering (with its beautiful artworks) or collecting sports cards with their favorite athletes. In my country, basketball and ice hockey cards were really popular. Cards are still loved, played, collected, and traded by geeks, collectors, and sports fans across the world! Their market is growing, and so is the need for automated image processing on websites and apps for collectors. Nowadays, cards can even be seen as a great investment.

Where can you use visual AI for cards?

Trading card games (トレーディングカード) can consist of tens of thousands of cards. In principle, a basic image classifier built solely on image recognition yields low precision and is simply not enough for more complicated problems.

However, we are able to build a complex similarity system that can recognize, categorize, and find similar cards from a picture. Once trained properly, it can deal with enormous databases of images it has never encountered before. With this system, you can find all the information, such as the year of release, card title, exact value, card set, or whether the card is already in someone’s collection, with just a smartphone photo of the card.
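
To illustrate the core idea behind such a system (not Ximilar’s actual implementation), the sketch below maps card photos to embedding vectors and matches them by nearest-neighbour search; the embedding function and the random placeholder "images" are stand-ins for a trained similarity model and real photos.

```python
import numpy as np

# Stand-in for a trained similarity model that turns a card photo into a vector.
# In a real system this would be a deep neural network producing e.g. 512-D embeddings.
def embed(image) -> np.ndarray:
    vector = np.asarray(image, dtype=np.float32).ravel()
    return vector / (np.linalg.norm(vector) + 1e-9)

# A tiny "collection": embeddings of known cards, indexed by card identifier.
collection = {
    "base-set-4-charizard": embed(np.random.rand(16)),  # placeholder "image"
    "jungle-11-snorlax": embed(np.random.rand(16)),
}

def identify(query_image) -> str:
    """Return the card in the collection closest to the query photo."""
    q = embed(query_image)
    # Cosine similarity reduces to a dot product on normalized vectors.
    scores = {card_id: float(q @ vec) for card_id, vec in collection.items()}
    return max(scores, key=scores.get)

print(identify(np.random.rand(16)))
```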

Tip: Check out our Computer Vision Platform to learn about how basic image recognition systems work. If you are not sure how to develop your card search system, just contact us and we will help you.

Collectibles are big business, and some cards are really expensive nowadays. Who knows, maybe you have a Charizard or Kobe Bryant card hidden in an old box in the attic. We can develop a system for you that automatically analyzes bulks of trading cards sent in by your customers, or integrate it into your mobile app.

Automatic Recognition of Collectibles

Ximilar built an AI system for the detection, recognition and grading of collectibles. Check it out!

What can visual search do for the trading cards market?

Over the last year, we have been building a universal system able to train visual models with numerous applications in image search engines. We already offer visual search services for photo search, but they are optimized mostly for general and fashion images. This system can be tuned to trading cards, coins, furniture & home decor, art, real estate, and more – there are countless use cases.

In recent decades, we have all witnessed the growth of the TCG community. However, technologies based on artificial intelligence have not yet been widely used in this market. Even though the first system for scanning trading cards was released by ebay.com, it was not made available to small shops as an API. And since trading card games and visual AI are a perfect match, we are going to change that with a card image search.

Tip: Check out Visual Product Search to learn more about visual search applications.

Which TCG cards could visual AI help with?

An image search engine is a great approach when the number of classes for image classification is high (over 1,000). With TCGs, each card represents a unique class, and a convolutional neural network (CNN) trained as a classifier tends to perform poorly with such a large number of classes.

Pokémon TCG contains more than 10,000 cards (classes), Magic the Gathering (MTG) over 50,000, and the same goes for basketball and other sports cards. So basically, we can build a visual search system for both:

  • Trading card games (Magic the Gathering, Lord of the Rings, Pokémon, Yu-Gi-Oh!, One Piece, Warhammer, and so on)
  • Collectible sports cards (like Ice Hockey, Football, Soccer, Baseball, Basketball, UFC, and more)
Pokémon, Magic The Gathering, LOTR, Ice Hockey, and Basketball cards.
Yes, we are big fans of all these things 🙂

Visual search and recognition technology is starting to be used on eBay when listing trading and sports cards for sale. However, it is only available in the eBay app on smartphones. The app has a built-in scanning tool for cards and can find the average price along with additional info.

Our service for card image search can be integrated into your website or application. And you can simply connect via API through a smartphone, computer, or sorting machine to find exact cards by photo, saving a lot of time and improving the user experience!

We have recently been training an AI (neural network) model for Pokémon, Yu-Gi-Oh!, and Magic The Gathering trading cards. Why these? Pokémon is the most played TCG in the world; the game has simple rules and an enormous fan base. MTG and Yu-Gi-Oh! are also very popular. Some cards are really expensive, but more importantly, they are traded heavily!

With this model, we built a reverse search for finding exact Pokémon, MTG, and Yu-Gi-Oh! cards, which achieved 94%+ accuracy (i.e., exact image match). And this is still a prototype in beta version that can be improved to almost 100%. The search system can return the card’s edition, language, name, year of release, and much more.

If you would like to try the system on these three trading card games, the endpoint for card identification (/v2/tcg_id) from the Collectibles Recognition service is the right choice for you. If you need to tune it on your own image collections, or you work with other games or cards (e.g. sports), just contact us and we can build a similar service for you.
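
A minimal sketch of calling the card identification endpoint from Python follows. The endpoint path /v2/tcg_id comes from the text above, but the base URL, auth header format, and payload shape are assumptions based on typical REST conventions, so please verify them against docs.ximilar.com before use.

```python
import requests

# The path /v2/tcg_id is mentioned in the post; the base URL, auth header
# and payload structure below are illustrative assumptions -- check docs.ximilar.com.
ENDPOINT = "https://api.ximilar.com/collectibles/v2/tcg_id"  # assumed full path
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}

def identify_card(image_url: str) -> dict:
    """Send a card photo and get back the identified card (set, name, year, ...)."""
    payload = {"records": [{"_url": image_url}]}
    response = requests.post(ENDPOINT, json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()

print(identify_card("https://example.com/my-card-photo.jpg"))
```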

Automatic grading and inspection of cards with AI

A lot of companies grade sports & trading cards manually. Our visual AI can be trained to detect corner types, scratches, surface wear, light staining, creases, focus, and borders. The image recognition models are able to identify marks, wrong cuts, lopsided centering, print defects, and other special attributes.

For example, PSA is a company that has developed its own grading standards for cards (such as MINT). With our platform and team, you can automate the entire grading workflow with just one photo. We provide several solutions for computing card grades and card condition.

PSA-graded baseball card. Automatic grading is possible with machine learning.

With the new custom similarity service, we are able to create a custom solution for trading card image search in a matter of weeks. The process for developing it is quite simple:

  1. We will schedule a call and talk about your goals. We will agree on how we will obtain the training data that are necessary to train your custom machine-learning model for the search engine.
  2. Our machine-learning specialists will assemble a testable image search collection and train a custom machine-learning model for you in a matter of weeks.
  3. After meeting all the requirements of the PoC, we will deploy the system to production, and you can connect to it via REST API.

Image Recognition of Collectibles

Machine learning models bring endless possibilities not only to pop culture geeks and collectors, but to all fields and industries. From personalized recommendations in custom fashion search engines to automatic detection of slight differences in surface materials, visual AI gets better and smarter every day, making routine tasks a matter of milliseconds. That is one of the reasons why it is an unlimited source of curiosity, challenges, and joy for us – professional geeks included :).

Ximilar is currently releasing a ready-to-use computer vision service able to recognize collectibles such as TCG cards, coins, banknotes, or postage stamps, detect their features, and categorize them. Let us know if you’d like to implement it on your website!

If you are interested in a customized AI solution for collector items, write us an email and we will get back to you as soon as possible. If you would like to identify cards with our Collectibles Recognition service, just sign up via app.ximilar.com.

The post Pokémon TCG Search Engine: Use AI to Catch Them All appeared first on Ximilar: Visual AI for Business.

]]>
Ximilar Introduces API Credit Packs https://www.ximilar.com/blog/we-introduce-api-credit-packs/ Tue, 27 Apr 2021 15:34:49 +0000 https://www.ximilar.com/?p=3879 API credit packs are a cost-effective solution for initial system setup, unexpected user traffic, and one-time system loads.

The post Ximilar Introduces API Credit Packs appeared first on Ximilar: Visual AI for Business.

]]>
In 2021, we are going to implement some major updates and add new features to our App. They should make the user experience more convenient and the work environment more customizable. The first new feature is API Credit Packs, created specifically based on your requests and suggestions. In this article, I briefly describe the main benefits of API credit packs and how to use them.

How API Credits Work

Imagine you upload a training image, create a recognition label, or send an image for recognition in our App. Every time you perform an operation like this, you send a request to our server using the API. This request is called an API call.

To keep track of API calls and their requirements, each type of call corresponds to a certain number of API credits. Generally, all calls sending image data to our servers cost some API credits. The full list of operations with their API credit values is available in our documentation.

Your Monthly API Credits

Every user of the Ximilar App is provided with a monthly supply of API credits, depending on their pricing plan. This supply is renewed every month on the day they purchased their plan. For example, if you purchase a Business plan on April 15th, your monthly supply will be restored on the 15th day of every subsequent month.

The users with the Free pricing plan are provided with a monthly supply of API credits as well. Whether you use a paid or free plan, the unused API credits from your monthly supply are not transferred to the following month and expire.

Introducing API Credit Packs

Ximilar App users can now buy an unlimited number of API credits aside from their monthly supply, in the form of API credit packs. This option is available for all pricing plans, including the Free plan.

There are two major benefits to API credit packs. First, credits from the packs are used only when your monthly supply of credits runs out. For example, when a user with the Business plan has already used all API credits from their monthly supply, the system automatically switches to the API credit pack. On April 15th, their monthly credit balance is renewed, and the system switches back to the monthly supply.

Second, API credit packs have no expiration date, so their balance carries over to the next month. You can buy as many credit packs as you need, and the credits will add up in the lower API credit bar.
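
As a worked illustration of the consumption order described above (monthly supply first, then non-expiring packs), here is a small sketch. The numbers are made up and do not correspond to any real pricing plan or credit costs.

```python
# Illustrative numbers only -- they do not correspond to any real plan or credit cost.
monthly_supply = 10_000   # expires at the end of the billing month
pack_balance = 25_000     # purchased credit packs, never expire

def spend(credits_needed: int, monthly: int, packs: int) -> tuple[int, int]:
    """Deduct credits from the monthly supply first, then from credit packs."""
    from_monthly = min(credits_needed, monthly)
    from_packs = credits_needed - from_monthly
    if from_packs > packs:
        raise RuntimeError("Not enough credits; buy another credit pack.")
    return monthly - from_monthly, packs - from_packs

monthly_supply, pack_balance = spend(12_000, monthly_supply, pack_balance)
print(monthly_supply, pack_balance)  # 0 23000: monthly supply exhausted, packs cover the rest
```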

Typical Uses for API Credit Packs

The credit packs cover both expected and unexpected system loads. There are several situations in which they can help or serve as a safety net.

Get Your System Ready

Our users generally pick their pricing plan based on regular traffic on their websites. However, the initial service setup is more demanding, and it costs a lot of extra credits. In this case, you wouldn’t want to upgrade your pricing plan for the short period of higher workloads and then downgrade back to the plan suiting your long-term needs.

One-Time System Loads

As you could see in the example with the Business plan user, the number of API credits in the credit pack bar was twice as high as their monthly credit supply. It is common for our users to spend an above-average number of credits from time to time – typically when they expect higher system loads than usual. For example, uploading more products and images, or adding a brand new collection, could exhaust your monthly credit supply too soon. In such cases, API credit packs provide a cost-effective solution.

Safety Net in a Case of Higher Traffic

The credit packs also cover situations of unpredicted system loads caused by third parties – for example, when an unexpectedly high number of customers visit your website and use the system within a short period.

This way, the credit packs provide a sort of safety net to make sure no service outages will occur on your side due to the sudden exhaustion of credits.

What if I Upgrade or Downgrade My Plan?

You can always upgrade or downgrade your pricing plan. When this happens, the credits from your previous plan’s monthly supply are added to the monthly supply of your new plan. They will remain in the bar until the end of your old monthly subscription and will be used first. In addition, you can purchase as many credit packs as you need, and the credits from the packs will be used after both of your monthly supplies are exhausted.

Do you have any questions? We’re more than happy to talk.

The post Ximilar Introduces API Credit Packs appeared first on Ximilar: Visual AI for Business.

]]>
Introducing Tags, Categories & Image Management https://www.ximilar.com/blog/introducing-tags-categories-image-management/ Tue, 26 Mar 2019 13:02:14 +0000 https://www.ximilar.com/?p=909 With the new tagging tasks, you are able to create even more powerful custom deep learning models and deploy them as API.

The post Introducing Tags, Categories & Image Management appeared first on Ximilar: Visual AI for Business.

]]>
Ximilar not only grows its customer base, but also constantly learns and adds new features. We aim to make your work as comfortable as possible by delivering a great user experience, including features that may not have been invented elsewhere yet. We learn from the AI universe, and we contribute to it in return. Let’s look at the feature set added in the early spring of 2019.

New Label Types: Categories & Tags

This one is a major, long-awaited upgrade to our custom recognition system.
 
Until this point, we offered only image categorization (formally: multi-class classification), where every image belongs to exactly one category. That was great for many use cases, but some more elaborate ones needed more. So now we introduce Tagging tasks (formally: multi-label classification), where each image can be tagged with multiple labels. Labels correspond to various features or objects contained in a single picture. Therefore, from this point on, we strictly use the terms categorization or tagging, not classification.
 
With this change, the Ximilar App starts to differentiate two kinds of labels: Categories and Tags. Each image can be assigned to one Category and/or multiple Tags.
 
 
 
For every Tagging Task you create, the Ximilar App automatically creates a special tag “<name of the task> – no tags”, where you can put images that contain none of the tags connected to the task. You need to choose the type of task carefully when creating it, as the type cannot be changed later. Other than that, you can work with both types of tasks in the same way.
 
When you want to categorize your images in production, you simply take the category with the highest probability – this is clear. In the case of tagging, you must set a threshold and take tags with probability over this threshold. A general rule of thumb is to take all tags with a probability over 50 %, but you can tune this number to fit your use case and data.
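
A small sketch of how this post-processing might look in practice follows; the response structure used here (a list of labels with probabilities) is a simplified stand-in for the actual API response format.

```python
# Simplified stand-in for a recognition response: labels with probabilities.
categorization_result = [
    {"name": "living room", "prob": 0.72},
    {"name": "bedroom", "prob": 0.21},
    {"name": "kitchen", "prob": 0.07},
]
tagging_result = [
    {"name": "wooden floor", "prob": 0.91},
    {"name": "lamp", "prob": 0.64},
    {"name": "bed", "prob": 0.12},
]

# Categorization: take the single category with the highest probability.
best_category = max(categorization_result, key=lambda label: label["prob"])

# Tagging: keep every tag whose probability exceeds a chosen threshold (50% by default).
THRESHOLD = 0.5
selected_tags = [label["name"] for label in tagging_result if label["prob"] > THRESHOLD]

print(best_category["name"])  # living room
print(selected_tags)          # ['wooden floor', 'lamp']
```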
 
Along with these new features, there are also a few minor API improvements. To keep everything backwards compatible, when you create a Task or Label and do not specify the type, you create a Categorization task with Categories. If you want to learn more about our REST API, which allows you to manage almost everything, even the training of models, please check out docs.ximilar.com.

Benefit: Linking Tags with Categories

So hey, we have two types of labels in place. Let’s see what that brings in real use. The typical use case of our customers is that they have two or more tasks defined in the same field or area. For instance, they want to enhance real-estate property listings, so they need to:
  1. Automatically categorize photos by room type – living room, bedroom, kitchen, outdoor house. At the same time, also:
  2. Recognize different features/objects in the images — bed, cabinet, wooden floor, lamp, etc.

So far, customers had to upload training images (often the same ones) separately into each label.

This upgrade makes it much easier. The new Images section of the Ximilar App allows you to upload images once and assign them to several Categories and Tags. You can easily modify the categories and tags of each image there, either one by one or in bulk. There can be thousands of images in your workspace, so you can also filter images by their tags and categories and run batch processing on selected images. We believe this will speed up the workflow of building reliable data for your tasks.

Improved Search

Some of our customers have hundreds of Labels. With a growing number of projects, it became hard to keep track of all Labels, Tags, and Tasks. That is why there is now a search bar at the top of the screen, which helps you find the desired items faster.

Updated Insights

As we mentioned in our last update notes, we offer a set of insights that help you increase the quality of results over time by looking into what works and what does not in your case. To improve the accuracy of your models, you can inspect their details. Please see the article on the Confusion Matrix and Failed Images insights, and another one about the Precision/Recall table. We have recently updated the list of Failed Images so that you can modify the categories and tags of these failed images – or delete them – directly.

Upcoming Features

  • Workspaces — to clearly split work in different areas
  • Rich statistics – number of API calls and credits spent, per task, with long-term, monthly, weekly, and hourly breakdowns, and more.

We at Ximilar are constantly working on new features, refactoring older ones, and listening to your requests and ideas, as we aim to deliver a great service not just out of the box and not only with pre-defined packages, but one that actually meets your needs in real-world applications. You can always write to us and request new API features that will benefit everyone who uses this platform. We will also be glad if you share how you use Ximilar Recognition in your use cases. Not only will this help us grow as a company, it will also inspire others.
 
We created the Ximilar App as a solid entry point for learning a lot about AI, but our skills mostly benefit custom use cases, where we deliver solutions for narrow-field AI challenges; these are needed far more than the somewhat over-hyped generic tools that just tell you this is a banana and that is an apple.

The post Introducing Tags, Categories & Image Management appeared first on Ximilar: Visual AI for Business.

]]>