Photo Tagging - Ximilar: Visual AI for Business
https://www.ximilar.com/blog/tag/photo-tagging/

New AI Solutions for Card & Comic Book Collectors
https://www.ximilar.com/blog/new-ai-solutions-for-card-and-comic-book-collectors/ | 18 Sep 2024
Discover the latest AI tools for comic book and trading card identification, including slab label reading and automated metadata extraction.

Recognize and Identify Comic Books in Detail With AI

The newest addition to our portfolio of solutions is the Comics Identification (/v2/comics_id). This service is designed to identify comics from images. While it’s still in the early stages, we are actively refining and enhancing its capabilities.

The API detects the largest comic book in an image and provides key information such as the title, issue number, release date, publisher, origin date, and creator’s name, making it ideal for identifying comic books, magazines, and manga.

Comics Identification by Ximilar provides the title, issue number, release date, publisher, origin date, and creator’s name.

This tool is perfect for organizing and cataloging large comic collections, offering accurate identification and automation of metadata extraction. Whether you’re managing a digital archive or cataloging physical collections, the Comics Identification API streamlines the process by quickly delivering essential details. We’re committed to continuously improving this service to meet the evolving needs of comic identification.
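
For a first experiment, a call to the endpoint can be as small as the sketch below. Note that this is only a minimal illustration: the full base URL, the example image URL, and the exact response fields are assumptions on my side, so always check the API documentation for the authoritative format.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"  # from your Ximilar App account
# "/v2/comics_id" is the endpoint named above; the full base path is an assumption.
ENDPOINT = "https://api.ximilar.com/collectibles/v2/comics_id"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"records": [{"_url": "https://example.com/comic-cover.jpg"}]},
)
response.raise_for_status()

# The largest detected comic book should come back with fields such as
# title, issue number, release date, and publisher (names may differ).
print(response.json()["records"][0])
```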

Star Wars Unlimited, Digimon, Dragon Ball, and More Can Now Be Recognized by Our System

Our trading card identification system has already been widely used to accurately recognize and provide detailed information on cards from games like Pokémon, Yu-Gi-Oh!, Magic: The Gathering, One Piece, Flesh and Blood, MetaZoo, and Lorcana.

Recently, we’ve expanded the system to include cards from Garbage Pail Kids, Star Wars Unlimited, Digimon, Dragon Ball Super, Weiss Schwarz, and Union Arena. And we’re continually adding new games based on demand. For the full and up-to-date list of recognized games, check out our API documentation.

Ximilar keeps adding new games to the trading card game recognition system. It can easily be deployed via API and controlled in our App.

Detect and Identify Both Trading Cards and Their Slab Labels

The new endpoint slab_grade processes your list of image records to detect and identify cards and slab labels. It utilizes advanced image recognition to return detailed results, including the location of detected items and analyzed features.

Graded slab reading by Ximilar AI.

The Slab Label object provides essential information, such as the company or category (e.g., BECKETT, CGC, PSA, SGC, MANA, ACE, TAG, Other), the card’s grade, and the side of the slab. This endpoint enhances our capability to categorize and assess trading cards with greater precision. In our App, you will find it under Collectibles Recognition: Slab Reading & Identification.
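
To illustrate the shape of such a request, here is a minimal sketch in Python. The endpoint name slab_grade comes from the description above, but the full URL path, the example image URLs, and the response field names are my assumptions, so treat the snippet as a starting point rather than a reference.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"
# "slab_grade" is the endpoint named above; the full path is an assumption.
ENDPOINT = "https://api.ximilar.com/collectibles/v2/slab_grade"

payload = {
    "records": [
        {"_url": "https://example.com/graded-card-front.jpg"},
        {"_url": "https://example.com/graded-card-back.jpg"},
    ]
}

resp = requests.post(ENDPOINT, headers={"Authorization": f"Token {API_TOKEN}"}, json=payload)
resp.raise_for_status()

for record in resp.json()["records"]:
    # Each record should list the detected card and slab label with their
    # locations, plus analyzed features such as company, grade, and side.
    print(record)
```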

Automatic Recognition of Collectibles

Ximilar built an AI system for the detection, recognition and grading of collectibles. Check it out!

New Endpoint for Card Centering Analysis With Interactive Demo

Given a single image record, the centering endpoint returns the position of a card and performs centering analysis. You can also get a visualization of grading through the _clean_url_card and _exact_url_card fields.

The _tags field indicates whether the card is autographed, which side of the card is shown, and the card type. Centering information is included in the card field of the record.

The card centering API by Ximilar returns the position of a card and performs centering analysis.
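
Reading those fields from a response could look roughly like this (a minimal sketch; the field names _clean_url_card, _exact_url_card, _tags, and card are taken from the description above, while the full endpoint URL and the example image are assumptions):

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"
# "centering" is the endpoint named above; the full path is an assumption.
ENDPOINT = "https://api.ximilar.com/card-grader/v2/centering"

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"records": [{"_url": "https://example.com/raw-card.jpg"}]},
)
resp.raise_for_status()

record = resp.json()["records"][0]
print(record.get("_clean_url_card"))   # visualization of the grading
print(record.get("_exact_url_card"))   # exact crop visualization
print(record.get("_tags"))             # autograph, side, and type information
print(record.get("card"))              # card position and centering analysis
```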

Learn How to Scan and Identify Trading Card Games in Bulk With Ximilar

Our new guide How To Scan And Identify Your Trading Cards With Ximilar AI explains how to use AI to streamline card processing with card scanners. It covers everything from setting up your scanner and running a Python script to analyzing results and integrating them into your website.
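
The guide goes into detail, but the core loop is simple: walk through the scanner output folder, send the images to the card identification endpoint in small batches, and collect the results. The sketch below assumes a hypothetical endpoint path and sends local files as base64; the real endpoint and request format are covered in the guide and the API documentation.

```python
import base64
import pathlib
import requests

API_TOKEN = "YOUR_API_TOKEN"
# Hypothetical endpoint path for trading card identification; see the guide/API docs.
ENDPOINT = "https://api.ximilar.com/collectibles/v2/tcg_id"

def to_record(path: pathlib.Path) -> dict:
    # Send scanner output as base64 so local files need no public URL.
    return {"_base64": base64.b64encode(path.read_bytes()).decode()}

scans = sorted(pathlib.Path("scans").glob("*.jpg"))
for start in range(0, len(scans), 10):               # small batches keep requests light
    batch = scans[start:start + 10]
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Token {API_TOKEN}"},
        json={"records": [to_record(p) for p in batch]},
    )
    resp.raise_for_status()
    for path, record in zip(batch, resp.json()["records"]):
        print(path.name, record)                     # identified card details per scan
```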

Let Us Know What You Think!

And that’s a wrap on our latest updates to the platform! We hope these new features might help your shop, website, or app grow traffic and gain an edge over the competition.

If you have any questions, feedback, or ideas on how you’d like to see the services evolve, we’d love to hear from you. We’re always open to suggestions because your input shapes the future of our platform. Your voice matters!

New Solutions & Innovations in Fashion and Home Decor AI
https://www.ximilar.com/blog/fashion-and-home-updates-2024/ | 18 Sep 2024
Our latest AI innovations for fashion & home include automated product descriptions, enhanced fashion tagging, and home decor search.

Automate Writing of SEO-Friendly Product Titles and Descriptions With Our AI

Our AI-powered Product Description revolutionizes the way you manage your fashion apparel catalogs by fully automating the creation of product titles and descriptions. Instead of spending hours manually tagging and writing descriptions, our AI-driven generator swiftly produces optimized texts, saving you valuable time and effort.

Ximilar automates keyword extraction from your fashion images, enabling you to instantly create SEO-friendly product titles and descriptions, streamlining the inventory listing process.

With the ability to customize style, tonality, format, length, and preferred product tags, you can ensure that each description aligns perfectly with your brand’s voice and SEO needs. This service is designed to streamline your workflow, providing accurate, engaging, and search-friendly descriptions for your entire fashion inventory.

Enhanced Taxonomy for Accessories Product Tagging

We’ve upgraded our taxonomy for accessories tagging. For sunglasses and glasses, you can now get tags for frame types (Frameless, Fully Framed, Half-Framed), materials (Combined, Metal, Plastic & Acetate), and shapes (Aviator, Cat-eye, Geometric, Oval, Rectangle, Vizor/Sport, Wayfarer, Round, Square). Try how it works on your images in our public demo.

Our tags for accessories cover all visual features from materials to patterns or shapes.

Automate Detection & Tagging of Home Decor Images With AI

Our new Home Decor Tagging service streamlines the process of categorizing and managing your home decor product images. It uses advanced recognition technology to automatically assign categories, sub-categories, and tags to each image, making your product catalog more organized. You can customize the tags and choose translations to fit your needs.

Try our interactive home decor detection & tagging demo.

The service also offers flexibility with custom profiles, allowing you to rename tags or add new ones based on your requirements. For pricing details and to see the service in action, check our API documentation or contact our support team for help with custom tagging and translations.
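
A minimal request could look like the sketch below. The exact endpoint path, the parameters for custom profiles and translations, and the response field names are assumptions here, so please verify them against the API documentation.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"
# Assumed path for the Home Decor Tagging service; confirm in the API documentation.
ENDPOINT = "https://api.ximilar.com/tagging/home_decor/v2/detect_tags"

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"records": [{"_url": "https://example.com/living-room.jpg"}]},
)
resp.raise_for_status()

for record in resp.json()["records"]:
    # Each detected home decor item should carry a category, sub-categories,
    # and tags (exact field names may differ from this sketch).
    print(record)
```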

Visual Search for Home Decor: Find Products With Real-Life Photos

With our new Home Decor Search service, customers can use real-life photos to find visually similar items from your furniture and home decor catalogue.

Our tool integrates four key functionalities: home decor detection, product tagging, colour extraction, and visual search. It allows users to upload a photo, which the system analyzes to detect home decor items and match them with similar products from your inventory.

Our Home Decor Search tool suggests similar alternatives from your inventory for each detected product.

To use Home Decor Search, you first sync your database with Ximilar’s cloud collection. This involves processing product images to detect and tag items, and discarding the images immediately after. Once your data is synced, you can perform visual searches by submitting photos and retrieving similar products based on visual and tag similarity.

The API allows for customized searches, such as specifying exact objects of interest or integrating custom profiles to modify tag outputs. For a streamlined experience, Ximilar offers options for automatic synchronization and data mapping, ensuring your product catalog remains up-to-date and accurate.
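
Put together, the integration has two phases: an initial sync of your catalogue into the cloud collection, and then search requests with customers’ photos. The sketch below only illustrates the shape of those calls; both endpoint paths and the field names are placeholders, so follow the API documentation for the real interface.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"
HEADERS = {"Authorization": f"Token {API_TOKEN}"}

# Phase 1 (placeholder endpoint): sync a catalogue product into the cloud collection.
requests.post(
    "https://api.ximilar.com/similarity/home_decor/v2/insert",
    headers=HEADERS,
    json={"records": [{"_id": "sofa-001", "_url": "https://example.com/sofa-001.jpg"}]},
).raise_for_status()

# Phase 2 (placeholder endpoint): search with a customer's real-life photo.
resp = requests.post(
    "https://api.ximilar.com/similarity/home_decor/v2/search",
    headers=HEADERS,
    json={"records": [{"_url": "https://example.com/customer-living-room.jpg"}]},
)
resp.raise_for_status()
print(resp.json())  # visually similar products for each detected home decor item
```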

How Fashion Tagging Works and Changes E-Commerce
https://www.ximilar.com/blog/how-fashion-tagging-works/ | 22 May 2024
An in-depth overview of the key AI tools reshaping the fashion industry, with a focus on automated fashion tagging.

Keeping up with the constantly emerging trends is essential in the fashion industry. Beyond shifts in cuts, materials, and colours, staying updated on technological trends has become equally, if not more, crucial in recent years. Given our expertise in Fashion AI, let’s take a look at the key technologies reshaping the world of fashion e-commerce, with a particular focus on a key Fashion AI tool: automated fashion tagging.

AI’s Impact on Fashion: Turning the Industry on Its Head

The latest buzz in the fashion e-commerce realm revolves around visual AI: from AI-powered fashion design to AI-generated fashion models, and all the new AI tools that rapidly change our shopping experience by quietly fueling the product discovery engines in the background, often unnoticed.

Key AI-Powered Technologies in Fashion E-Commerce

So what are the main AI technologies shaking up fashion e-commerce lately? And why is it important to keep up with them?

Recognition, Detection & Data Enrichment in Fashion

In the world of fashion e-commerce, time is money. Machine learning techniques now allow fashion e-shops to upload large unstructured collections of images and extract all the necessary information from them within milliseconds. The results of fashion image recognition (tags/keywords) serve various purposes like product sorting, filtering, searching, and also text generation.

Breaking down automated fashion tagging: AI can automatically assign relevant tags and save you a significant amount of money and time, compared to the manual process.

These tools are indispensable for today’s fashion shops and marketplaces, particularly those with extensive stock inventories and large volumes of data. In the past few years, automated fashion tagging has made time-consuming manual product tagging practically obsolete.

Generative AI Systems for Fashion

The fashion world has embraced generative artificial intelligence almost immediately. Utilizing advanced AI algorithms and deep learning, AI can analyze images to extract visual attributes such as styles, colours, and textures, which are then used to generate visually stunning designs and written content. This offers endless possibilities for creating personalized shopping experiences for consumers.

Different attributes extracted by automated product tagging can directly serve as keywords for product titles and descriptions. You can set the tonality and length, or choose important attributes to be mentioned in the texts.

Our AI also enables you to automate the writing of all product titles and product descriptions via API, directly utilizing the product attributes extracted with deep tagging and letting you select the tone, length, and other rules to get SEO-friendly texts quickly. We’ll delve deeper into this later on.

Fashion Discovery Engines and Recommendation Systems

Fashion search engines and personalized recommendations are game-changers in online shopping. They are powered by our speciality: visual search. This technology analyzes images in depth to capture their essence and search vast product catalogs for identical or similar products. Three of its countless uses are indispensable for fashion e-commerce: similar item recommendations, reverse image search, and image matching.

Personalized experiences and product recommendations are key to high customer engagement.

Visual search enables shoppers to effortlessly explore new styles, find matching pieces, and stay updated on trends. It allows you to have your own visual search engine that rapidly scans image databases with millions of images to provide relevant and accurate search results within milliseconds. This not only saves you time but also ensures that every purchase feels personalized.

Shopping Assistants in Fashion E-Commerce and Retail

The AI-driven assistants guide shoppers towards personalized outfit choices suited for any occasion. Augmented Reality (AR) technology allows shoppers to virtually try on garments before making a purchase, ensuring their satisfaction with every selection. Personalized styling advice and virtual try-ons powered by artificial intelligence are among the hottest trends developed for fashion retailers and fashion apps right now.

Both fashion tags for occasions extracted with our automated product tagging, as well as similar item recommendations, are valuable in systems that assist customers in dressing appropriately for specific events.

My Fashion Website Needs AI Automation, What Should I Do?

Consider the Needs of Your Shoppers

To provide the best customer experience possible, always take into account your shoppers’ demographics, geographical location, language preferences, and individual styles.

Predicting style, however, is not an easy task. By utilizing AI, you can analyze various factors such as user preferences, personal style, favoured fashion brands, liked items, items in shopping baskets, and past purchases. Think about how to help shoppers discover items aligned with their preferences and receive only relevant suggestions that inspire rather than overwhelm them.

There are endless ways to improve a fashion e-shop. Always keep in mind not to overwhelm the visitors, and streamline your offer to the most relevant items.

While certain customer preferences can be manually set up by users when logging into an app or visiting an e-commerce site, such as preferred sizes, materials, or price range, others can be predicted. For example, design preferences can be inferred based on similarities with items visitors have browsed, liked, saved, or purchased.

Three Simple Steps to Elevate Your Fashion Website With AI

Whether you run a fashion or accessories e-shop, or a vintage fashion marketplace, using these essential AI-driven features could boost your traffic, improve customer engagement, and get you ahead of the competition.

Automate Product Tagging & Text Generation

The image tagging process is fueled by specialised object detection and image recognition models, ensuring consistent and accurate tagging, without the need for any additional information. Our AI can analyze product images, identify all fashion items, and then categorize and assign relevant tags to each item individually.

In essence, you input an unstructured collection of fashion images and receive structured metadata, which you can immediately use for searching, sorting, filtering, and product discovery on your fashion website.

Automated fashion tagging relies on neural networks and deep learning techniques. Product attributes are only assigned with a certain level of confidence, highlighted in green in our demo.

The keywords extracted by AI can serve right away to generate captivating product titles and descriptions using a language model. With Ximilar, you can pre-set the tone and length, and even set basic rules for AI-generated texts tailored for your website. This automates the entire product listing process on your website through a single API integration.
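
As a toy illustration of the idea (this is plain string composition over hypothetical extracted attributes, not Ximilar’s actual generation service, which uses a language model and configurable rules):

```python
# Hypothetical attributes, as automated fashion tagging might return them.
tags = {"Colour": "Navy Blue", "Material": "Satin", "Style": "Evening Dress", "Length": "Maxi"}

# A naive rule-based composition; in practice a language model turns the same
# attributes into richer, SEO-friendly text with your chosen tone and length.
title = f"{tags['Colour']} {tags['Material']} {tags['Style']}"
description = (
    f"A {tags['Length'].lower()} {tags['Style'].lower()} in {tags['Colour'].lower()}, "
    f"made of {tags['Material'].lower()}."
)

print(title)        # Navy Blue Satin Evening Dress
print(description)  # A maxi evening dress in navy blue, made of satin.
```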

Streamline and Automate Collection Management With AI

Visual AI is great for inventory management and product gallery assembling. It can recognize and match products irrespective of lighting, format, or resolution. This enables consistent image selection for product listings and galleries.

You can synchronise your entire fashion apparel inventory via API to ensure continual processing by up-to-date visual AI. You can either set the frequency of synchronization (e.g., the first day of each month) or schedule it to run every time you add a new item to the collection.

A large fashion e-commerce store can list tens of thousands of items with millions of fashion images. AI can sort images in product galleries and references based purely on visual attributes.

For example, you can showcase all clothing items on models in product listings or display all accessories as standalone photos in the shopping cart. Additionally, you can automate tasks like removing duplicates and sorting user-generated visual content, saving a lot of valuable time. Moreover, AI can be used to quickly spot inappropriate and harmful content.

Provide Relevant Suggestions & Reverse Image Search

During your collection synchronisation, visual search processes each image and each product in it individually. It precisely analyzes various visual features, such as colours, patterns, edges and other structures. Apart from the inventory curation, this will enable you to:

  1. Have your custom fashion recommendation system. You can provide relevant suggestions from your inventory anywhere across the customer journey, from the start page to the cart.
  2. Improve your website or app with a reverse image search tool. Your visitors can search with smartphone photos, product images, pictures from Pinterest, Instagram, screenshots, or even video content.
Looking for a specific dress? Reverse image search can provide relevant results to a search query, independent of the quality or source of the images.

Since fashion detection, image tagging and visual search are the holy trinity of fashion discovery systems, we’ve integrated them into a single service called Fashion Search. Check out my article Everything You Need to Know About Fashion Search to learn more.

Visual search can match images, independent of their origin (e.g., professional images vs. user-generated content), quality and format. We can customize it to fit your collection, even for vintage pieces, or niche fashion brands. For a firsthand experience of how basic fashion visual search operates, check out our free demo.

How Does the Automated Fashion Tagging Work?

Let’s take a closer look at the basic AI-driven tool for the fashion industry: automated fashion tagging. Our product tagging is powered by a complex hierarchy of computer vision models that work together to detect and recognize all fashion products in an image. Then, each product gets one category (e.g., Clothing), one or more subcategories (e.g., Evening dresses or Cocktail dresses), and a varied set of product tags.

To name a few, fashion tags describe the garment’s type, cut, fit, colours, material, or patterns. For shoes, there are features such as heels, toes, materials, and soles. Other categories are for instance jewellery, watches, and accessories.

In the past, assigning relevant tags and texts to each product was a labor-intensive process, slowing down the listing of new inventory on fashion sites. Image tagging solved this issue and lowered the risk of human error.

The fashion taxonomy encompasses hundreds of product tags for all typical categories of fashion apparel and accessories. Nevertheless, we continually update the system to keep up with emerging trends in the fashion industry. Custom product tags, personal additions, taxonomy mapping, and languages other than the default English are also welcomed and supported. The service is available online – via API.

How Do I Use the Automated Fashion Tagging API?

You can seamlessly integrate automated fashion tagging into basically any website, store, system, or application via REST API. I’d suggest taking these steps first:

First, log into the Ximilar App – After you register in the Ximilar App, you will get a unique API authentication token that will serve for your private connection. The App has many useful functions, which are summarised here. In the past, I wrote this short overview that could be helpful when navigating the App for the first time.

If you’d like to try creating and training your own additional machine learning models without coding, you can also use the Ximilar App to access our computer vision platform.

Secondly, select your plan – Use the API credit consumption calculator to estimate your credit consumption and optimise your monthly supply. This ensures your credit consumption aligns with the actual traffic on your website or app, maximizing efficiency.

Use Ximilar’s credit consumption calculator to optimise your monthly supply.

And finally, connect to API – The connection process is described step by step in our API documentation. For a quick start, I suggest checking out First Steps, Authentication & Image Data. Automated Fashion Tagging has dedicated documentation as well. However, don’t hesitate to reach out anytime for guidance.
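
Once you have your token, a first call can be as small as the snippet below. The authorization header format follows the documentation, while the exact endpoint path, the example image, and the response field names are assumptions to be checked against the dedicated Automated Fashion Tagging docs.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"  # issued after registering in the Ximilar App
# Assumed path for fashion detection & tagging; see the dedicated documentation.
ENDPOINT = "https://api.ximilar.com/tagging/fashion/v2/detect_tags"

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"records": [{"_url": "https://example.com/summer-dress.jpg"}]},
)
resp.raise_for_status()

for record in resp.json()["records"]:
    # Each detected fashion item should come back with a category (e.g. Clothing),
    # one or more subcategories, and a set of product tags; field names may vary.
    print(record)
```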

Do You Need Help With the Setup?

Our computer vision specialists are ready to assist you with even the most challenging tasks. We also welcome all suggestions and custom inquiries to ensure our solutions meet your unique needs. And if you require a custom solution, our team of developers is happy to help.

We also offer personalized demos on your data before the deployment, and can even provide dedicated server options or set up offline solutions. Reach out to us via live chat for immediate assistance and our team will guide you through the entire process. Alternatively, you can contact us via our contact page, and we will get back to you promptly.

The post How Fashion Tagging Works and Changes E-Commerce? appeared first on Ximilar: Visual AI for Business.

]]>
Evaluation on an Independent Dataset for Image Recognition
https://www.ximilar.com/blog/evaluation-on-an-independent-dataset-for-image-recognition/ | 14 Aug 2020
Monitor the performance of your custom models on a test dataset with the Ximilar platform.

Today, I would like to present the latest feature, which many of you have been missing in our custom image recognition service. We haven’t seen this feature in any other public platform that allows you to train your own classification models.

Many machine-learning specialists and data scientists are used to evaluating models on a test dataset, which they select manually and which is not seen by the training process. Why? So they see the results on a dataset that is not biased towards the training data.

We think that this is a critical step for the reliability and transparency of our solution.

“The more we come to rely on technology, the more important it becomes that it’s robust and trustworthy, doing what we want it to do”.

Max Tegmark, scientist and author of international bestseller Life 3.0

Training, Validation and Testing Datasets on the Ximilar Platform

Your Categorization or Tagging Task in the image recognition service contains labels. Every label must contain at least 20 images before Ximilar allows you to train the task.

When you click on the Train button, the new model/training is added to a queue. Our training process takes the next non-trained model (neural network) from the queue and starts the optimization.

The process randomly divides your data/images (those not labelled with a test flag) into a training dataset (80 %) and a validation dataset (20 %).

In the first phase, the model is optimized on the training dataset and evaluated on the validation dataset. The result of this evaluation is stored, and its Accuracy number is what you see when you open your task or the detail of the model.

In the last training phase, the process takes all your data (training and validation datasets) and optimizes the model one more time. Then we compute the failed images and store them. You can see them at the bottom of the model page.

Newly, you can mark certain images with a test flag. Before your optimized model is stored in the database (after the training phases), the model is evaluated on this test dataset (images with the test flag). This is a very useful feature if you are looking for better insights into your model. In short:

  • Creating a test dataset for your Task is optional.
  • The test dataset is stable: it contains only the images that you mark with the “test” flag. In contrast, the validation dataset mentioned above is picked randomly. That’s why the test-set results (accuracy, precision, recall) are better for monitoring between different model versions of your Task.
  • Your task is never optimized on the test dataset.
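
To illustrate the difference (a toy sketch, not Ximilar’s training code): the test set stays fixed between trainings, while the validation split is re-drawn randomly every time, which is why test-set metrics are the ones to compare across model versions.

```python
import random

images = [f"img_{i:03d}.jpg" for i in range(200)]

# Fixed test set, analogous to images carrying the "test" flag:
# it is identical for every training, so its metrics are comparable.
test_set = set(images[:30])
trainable = [img for img in images if img not in test_set]

# The validation split is re-drawn randomly for each training,
# so validation accuracy naturally fluctuates between model versions.
random.shuffle(trainable)
split = int(0.8 * len(trainable))
train_set, val_set = trainable[:split], trainable[split:]

print(len(train_set), len(val_set), len(test_set))  # 136 34 30
```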

How to set up your test dataset?

If you want to add this test flag to some of your images, then select them and use the MODIFY button. Select the checkbox “Set as test images” and that’s it.

Naturally, these test images must have the correct labels (categories or tags). If the labels of your task also contain these test images, every newly trained model will include the independent test evaluation. You can display these results by selecting the “Test” check button in the upper-right corner of the model detail.

To sum it up

So, if you have a set of test images, you will be able to see two results in the detail of your model:

1. Validation Set selected randomly from your training images (this is shown as default)

2. Test Set (your selected images)

We recommend adding at least 30 images to the test set per Label/Tag and increasing the numbers (hundreds or thousands) over time.

We hope you will love this feature whether you are working on your healthcare, retail, or industry project. If you want to know more about insights from the model page, then read our older blog post. We are looking forward to bringing more explainability features (Explainable AI) in the future, so stay tuned.

Introducing Tags, Categories & Image Management
https://www.ximilar.com/blog/introducing-tags-categories-image-management/ | 26 Mar 2019
With the new tagging tasks, you are able to create even more powerful custom deep learning models and deploy them via API.

Ximilar not only grows its customer base; we also constantly learn and add new features. We aim to give you as much comfort as possible — by delivering a great user experience and even features that might not have been invented anywhere else yet. We learn from the AI universe, and we contribute to it in return. Let’s see the feature set added in the early spring of 2019.

New Label Types: Categories & Tags

This one is a major, long-awaited upgrade to our custom recognition system.
 
Until this point, we offered only image categorization (formally: multi-class classification), where every image belongs to exactly one category. That was great for many use cases, but some elaborate ones needed more. So now we introduce Tagging tasks (formally: multi-label classification), where each image can be tagged with multiple labels. Labels correspond to various features or objects contained in a single picture. Therefore, from this point on, we strictly use categorization or tagging, and not classification anymore.
 
With this change, the Ximilar App starts to differentiate two kinds of labels — Categories and Tags, where each image can be assigned to one Category and/or multiple Tags.
 
 
 
For every Tagging Task that you create, the Ximilar App automatically creates a special tag “<name of the task> – no tags” where you can put images that contain none of the tags connected to the task. You need to carefully choose the type of task when creating it, as the type cannot be changed later. Other than that, you can work in the same way with both types of tasks.
 
When you want to categorize your images in production, you simply take the category with the highest probability – this is clear. In the case of tagging, you must set a threshold and take tags with probability over this threshold. A general rule of thumb is to take all tags with a probability over 50 %, but you can tune this number to fit your use case and data.
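
For example, filtering a tagging result by probability could look like this (a minimal sketch with a simplified response structure; the real field names are described in the API docs):

```python
# Simplified tagging output: each tag comes with a probability.
predicted_tags = [
    {"name": "Floral pattern", "prob": 0.91},
    {"name": "Short sleeves", "prob": 0.67},
    {"name": "V-neck", "prob": 0.34},
]

THRESHOLD = 0.5  # rule of thumb; tune it to fit your use case and data
selected = [t["name"] for t in predicted_tags if t["prob"] >= THRESHOLD]
print(selected)  # ['Floral pattern', 'Short sleeves']
```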
 
With these new features, there are also a few minor API improvements. To keep everything backwards compatible, when you create a Task or Label and do not specify the type, you create a Categorization task with Categories. If you want to learn more about our REST API, which allows you to manage almost everything, even the training of models, please check out docs.ximilar.com.

Benefit: Linking Tags with Categories

So hey, we have two types of labels in place. Let’s see what that brings in real use. The typical use case of our customers is that they have two or more tasks defined in the same field/area. For instance, they want to enrich real-estate listings, so they need to:
  1. Automatically categorize photos by room type: living room, bedroom, kitchen, outdoor house. At the same time, also:
  2. Recognize different features/objects in the images — bed, cabinet, wooden floor, lamp, etc.

So far, customers had to upload training images — often the same ones — separately into each label.

This upgrade makes it much easier. The new Ximilar App section Images allows you to upload images once and assign them to several Categories and Tags. You can easily modify the categories and tags of each image there, either one by one or in bulk. There can be thousands of images in your workspace, so you can also filter images by their tags/categories and do batch processing on selected images. We believe that this will speed up the workflow of building reliable data for your tasks.

Improved Search

Some of our customers have hundreds of Labels. With a growing number of projects, it started to be hard to navigate all the Labels, Tags, and Tasks. That is why there is now a search bar at the top of the screen, which helps you find the desired items faster.

Updated Insights

As we mentioned in our last update notes, we offer a set of insights that help you increase the quality of results over time by looking into what works and what does not in your case. In order to improve the accuracy of your models, you may inspect the details of your model. Please see the article on the Confusion Matrix and Failed Images insights, and also another one about the Precision/Recall table. We have recently updated the list of Failed Images so that you can modify the categories/tags of these failed images — or delete them — directly.

Upcoming Features

  • Workspaces — to clearly split work in different areas
  • Rich statistics — number of API calls and amount of credits consumed, per task, long-term/per-month/per-week/hourly, and more.
We at Ximilar are constantly working on new features, refactoring the older ones, and listening to your requests and ideas, as we aim to deliver a great service not just out of the box and not only with pre-defined packages, but one that actually meets your needs in real-world applications. You can always write to us and request new API features which will benefit everyone who uses this platform. We will be glad if you share with us how you use Ximilar Recognition in your use cases. Not only will this help us grow as a company, but it will also inspire others.
 
We created the Ximilar App as a solid entry point to learn a great deal about AI, but our skills mostly benefit custom use cases, where we deliver solutions for narrow-field AI challenges; these are needed more than the somewhat over-hyped generic tools that just tell you this is a banana and that is an apple.
