How to deploy object detection on Nvidia Jetson Nano
18 Oct 2021 – https://www.ximilar.com/blog/how-to-deploy-object-detection-on-nvidia-jetson-nano/
We developed a computer vision system for object detection, counting, and tracking on Nvidia Jetson Nano.

At the beginning of summer, we received a request for a custom project: a camera system for a factory located in Africa. The goal was to detect, count, and visually inspect items on the factory's conveyor belts with the help of visual AI. So we developed a system of neural networks running on a small computer called the Nvidia Jetson Nano. If you are curious about how we did it, this article is for you. And if you need help with building a similar solution for your factory, our team and tools are here for you.

What is NVIDIA Jetson Nano?

There were two reasons why using our API was not an option. First, the factory has unstable internet connectivity. Second, the entire solution needs to run in real time. So we chose to experiment with embedded hardware that can be deployed in such an environment, and we are very glad that we found the Nvidia Jetson Nano.

[Source]

Jetson Nano is an amazing small computer (embedded or edge device) built for AI. It allows you to do machine learning in a very efficient way with low power consumption (about 5 watts). It can be part of IoT (Internet of Things) systems, runs an Ubuntu-based Linux, and is suitable for simple robotics or computer vision projects in factories. However, if you know that you will need to detect, recognize, and track tens of different labels, choose a higher-end Jetson device, such as the Xavier. It is a much faster device than the Nano and can solve more complex problems.

What is Jetson Nano good for?

Jetson is great if:

  • You need real-time analysis
  • Your problem can be solved with one or two simple models
  • You need a budget-friendly solution that is cost-effective to run
  • You want to connect it to a static camera – for example, to monitor an assembly line
  • The system cannot be connected to the internet – for example, because your factory is in a remote place or for security reasons

The biggest challenges in Africa & South Africa remain connectivity and accessibility. AI systems that can run in house and offline can have great potential in such environments.

Deloitte: Industry 4.0 – Is Africa ready for digital transformation?

Object Detection with Jetson Nano

If you need real-time object detection, use the YOLOv4-tiny model proposed in the AlexeyAB/darknet repository. Other, more powerful architectures are available as well. Here is a table of the FPS you can expect when using YOLOv4-tiny on the Jetson:

Architecture       mAP @ 0.5    FPS
yolov4-tiny-288    0.344        36.6
yolov4-tiny-416    0.387        25.5
yolov4-288         0.591        7.93
Source: GitHub

After the model's training is completed, the next step is converting the weights to the TensorRT runtime. TensorRT makes a substantial difference in speed on Jetson Nano. So train the model with AlexeyAB/darknet and then convert it with the tensorrt_demos repository. The conversion has multiple steps: you first convert the darknet YOLO weights to ONNX and then convert the ONNX model to a TensorRT engine.
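For illustration, here is a minimal Python sketch of how such a converted engine can be loaded and run on the Nano with the TensorRT Python bindings and PyCUDA. The engine file name and the input preparation are only placeholders, and the binding API differs slightly between TensorRT versions:

import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov4-tiny-416.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (the image input and the YOLO outputs).
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    host_bufs.append(host_mem)
    dev_bufs.append(dev_mem)
    bindings.append(int(dev_mem))

# Fill the input binding with a preprocessed 416x416 frame (random data here).
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)

stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for host_mem, dev_mem in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(host_mem, dev_mem, stream)
stream.synchronize()
# host_bufs[1:] now hold the raw network outputs to be decoded into boxes.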

There is always a trade-off between accuracy and speed. If you do not require a fast model, we also have good experience with CenterNet, which can achieve a really nice mAP with precise boxes. In our experience, models run with TensorFlow or PyTorch backends are slower than YOLO models. Luckily, we can train both architectures and export them in a format suitable for the Nvidia Jetson Nano.

Image Recognition on Jetson Nano

For any image categorization problem, I would recommend using a simple architecture such as MobileNetV2. You can, for example, select a depth multiplier of 0.35 and an image resolution of 128×128 pixels. This way, you can achieve great performance in both speed and precision.
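As a minimal sketch (assuming TensorFlow/Keras and a made-up three-class problem), such a model can be defined like this:

import tensorflow as tf

# MobileNetV2 backbone with depth multiplier (alpha) 0.35 and 128x128 input.
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3),
    alpha=0.35,
    include_top=False,
    weights="imagenet",
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. OK / defect A / defect B
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])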

We recommend using the TFLITE backend when deploying the recognition model on Jetson Nano. So train the model with the TensorFlow framework and then convert it to TFLITE. You can train recognition models on our platform without any coding, for free. Just visit the Ximilar App, where you can develop powerful image recognition models and download them for offline usage on Jetson Nano.
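A hedged sketch of the conversion and of running the converted model with the TFLite interpreter might look as follows (the model and file names are illustrative):

import numpy as np
import tensorflow as tf

# Convert a trained Keras model (e.g. the MobileNetV2 sketch above) to TFLite.
model = tf.keras.models.load_model("recognition_model.h5")
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("recognition_model.tflite", "wb") as f:
    f.write(tflite_model)

# Run the converted model with the TFLite interpreter, e.g. on the Jetson Nano.
interpreter = tf.lite.Interpreter(model_path="recognition_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.random.rand(1, 128, 128, 3).astype(np.float32)  # a preprocessed camera frame
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # class probabilities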

Detecting, tracking, and counting objects on Nvidia Jetson: a simple object detection camera system with product counting can be deployed offline in your factory with Jetson Nano.

Jetson Nano is simple but powerful hardware. However, it is not as powerful as your laptop or desktop computer, so analyzing 4K images on the Jetson will be very slow. I would recommend using at most 1080p camera resolution. We used a Raspberry Pi camera, which works very well with the Jetson, and the installation is easy!
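For reference, here is a small sketch of opening the Raspberry Pi CSI camera on the Jetson through a GStreamer pipeline in OpenCV at 720p. The exact pipeline elements can differ between JetPack releases, and a USB webcam would simply use cv2.VideoCapture(0):

import cv2

# GStreamer pipeline for the CSI camera on Jetson (nvarguscamerasrc).
gst = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # frame is a 720p BGR image ready for the detector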

I should mention that with the Jetson Nano, you can run into temperature issues. The Jetson normally ships with a passive cooling system. However, if this small piece of hardware is to sit in a factory and run stably 24 hours a day, we recommend using an active cooling system like this one. Don't forget to run the following command so the fan on your Jetson starts working:

sudo jetson_clocks --fan

Installation steps & tips for development

When working with the Jetson Nano, I recommend following the guidelines by Nvidia – for example, here is how to install the latest TensorFlow version. There is a great tool called jtop, which visualizes hardware stats such as GPU frequency, temperature, memory usage, and much more:

The jtop tool can help you monitor hardware statistics on Nvidia Jetson Nano.

Remember that the Jetson shares its memory between the CPU and the GPU. You can easily run out of the 4 GB when running a model and a few other programs alongside it. If you want to save more than 0.5 GB of memory on the Jetson, run Ubuntu with the LXDE desktop environment, which is more lightweight than the default Ubuntu environment. To get more memory, you can also create a swap file, but be aware that if your project requires a lot of memory, heavy swapping can eventually wear out your microSD card. More great tips and hacks can be found on the JetsonHacks page.

To improve the speed of the Jetson, you can also try these two commands, which set the maximum power mode and clock frequencies:

sudo nvpmodel -m0
sudo jetson_clocks

When using the latest image for the Jetson, make sure you are working with the right version of the OpenCV library. For example, some older tracking algorithms like MOSSE or KCF require a specific OpenCV version. For tracking solutions, I recommend looking at the PyImageSearch website.
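As a rough sketch of what such tracking looks like in code (the bounding box and camera source are placeholders, and the constructor names moved between OpenCV versions – MOSSE, for example, lives under cv2.legacy in OpenCV 4.5+):

import cv2

def create_tracker(name="KCF"):
    # Requires the opencv-contrib build; the legacy module exists in OpenCV >= 4.5.
    if name == "KCF":
        return cv2.TrackerKCF_create()
    if hasattr(cv2, "legacy"):
        return cv2.legacy.TrackerMOSSE_create()
    return cv2.TrackerMOSSE_create()

cap = cv2.VideoCapture(0)        # e.g. a USB camera
ok, frame = cap.read()
bbox = (200, 150, 80, 80)        # (x, y, w, h) coming from the object detector
tracker = create_tracker("KCF")
tracker.init(frame, bbox)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    success, bbox = tracker.update(frame)
    # success == False means the item left the view or the tracker lost it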

Developing on Jetson Nano

The experience of programming challenging projects, exploring new gadgets, and helping our customers is something that deeply satisfies us. We are looking forward to trying other machine learning hardware, such as Google Coral, Raspberry Pi, or Intel Movidius, for Industry 4.0 projects.

Most of the time, we are developing a machine learning API for large e-commerce sites. We are really glad that our platform can also help us build machine learning models on devices running in distant parts of the world with no internet connectivity. I think that there are many more opportunities for similar projects in the future.

Visual AI Takes Quality Control to a New Level
24 Feb 2021 – https://www.ximilar.com/blog/visual-ai-takes-quality-control-to-a-new-level/
Comprehensive guide for automated visual industrial quality control with AI and machine learning, from image recognition to anomaly detection.

Have you heard about The Big Hack? The Big Hack story was about a tiny probe (a small chip) inserted onto computer motherboards by Chinese manufacturing companies. Attackers could then infiltrate any server workstation containing these motherboards, many of which were installed in large US-based companies and government agencies. The thing is, the probes were so small, and the motherboards so complex, that they were almost impossible to spot with the human eye. Take this post as a guide to the latest trends of AI in industry, with a primary focus on AI-based visual inspection systems.

AI Adoption by Companies Worldwide

Let's start with some interesting stats and news. The expansion of AI and machine learning is becoming common across numerous industries. According to this report by Stanford University, AI adoption is increasing globally. More than 50% of respondents said their companies were using AI, and adoption growth was greatest in the Asia-Pacific region. Some people refer to the automation of factory processes, including digitalization and the use of AI, as the Fourth Industrial Revolution (the so-called Industry 4.0).

Photo by AI Index 2019 Report
AI adoption by industry and function [Source]

The data show that the automotive industry is the largest adopter of AI in manufacturing, making heavy use of machine learning, computer vision, and robotics. Other industries, such as pharma or infrastructure, are using computer vision in their production lines as well. Financial services, on the other hand, are using AI mostly in operations, marketing & sales (with a focus on Natural Language Processing – NLP).

AI technologies per industry [Source]

The MIT Technology Review cited leading artificial intelligence expert Andrew Ng, who has been helping tech giants like Google implement AI solutions, saying that factories are AI's next frontier. For example, while it would be difficult to inspect parts of electronic devices with our eyes, a cheap camera from the latest Android phone or iPhone can provide high-resolution images that can be connected to any industrial system.

Adopting AI brings major advantages, but also potential risks that need to be mitigated. It is no surprise that companies are mainly concerned about the cybersecurity of such systems. Imagine you could lose a billion dollars if your factory stopped working (like Honda in this case). Other obstacles are potential errors in machine learning models. There are techniques for discovering such errors, such as the explainability of AI systems. As of now, the explainability of AI is a concern for only 19% of companies, so there is room for improvement. Getting insight from the algorithms can improve the processes and the quality of the products. Besides security, there are also political & ethical questions (e.g., job replacement or privacy) that companies are worried about.

This survey by McKinsey & Company brings interesting insights into Germany’s industrial sector. It demonstrates the potential of AI for German companies in eight use cases, one of which is automated quality testing. The expected benefit is a 50% productivity increase due to AI-based automation. Needless to say, Germany is a bit ahead with the AI implementation strategy – there are already several plans made by German institutions to create standardised AI systems that will have better interoperability, certain security standards, quality criteria, and test procedures.

Highly developed economies like Germany, with a high GDP per capita and challenges such as a quickly ageing population, will increasingly need to rely on automation based on AI to achieve GDP targets.

McKinsey & Company

Another study by PwC predicts that the total expected economic impact of AI in the period until 2030 will be about $15.7 trillion. The greatest economic gains from AI are expected in China (26% higher GDP in 2030) and North America.

What is Visual Quality Control?

The human visual system is naturally very selective in what it perceives, focusing on one thing at a time and not actually seeing the whole image (direct vs. peripheral view). Cameras, on the other hand, see all the details, and at the highest resolution possible. Therefore, stories like The Big Hack show us the importance of visual control, not only to ensure quality but also safety. That is why several companies and universities decided to develop optical inspection systems employing machine learning methods able to detect the tiniest differences from a reference board.

Motherboards by Super Micro [Source: Scott Gelber]

In general, visual quality control is a method or process of inspecting equipment or structures to discover defects, damage, missing parts, or other irregularities in production or manufacturing. It is an important way of confirming the quality and safety of manufactured products. Optical inspection systems are mostly used for visual quality control in factories and on assembly lines, where manual control would be hard or ineffective.

What Are the Main Benefits of Automatic Visual Inspection?

Here are some of the essential aspects and reasons why automatic visual inspection brings a major advantage to businesses:

  • The human eye is imprecise – Even though our visual system is a magnificent thing, it needs a lot of “optimization” to be effective, making it prone to optical illusions. The focused view can miss many details, and our visible spectrum is limited (380–750 nm), therefore unable to capture NIR wavelengths (source). Cameras and computer systems, on the other hand, can be calibrated for different conditions and are more suitable for highly precise analyses.
  • Manual checking – Checking items one by one is a time-consuming process. Smart automation allows more items to be processed and checked, faster. It also reduces the number of defective items that are released to customers.
  • The complexity – Some assembly lines can produce thousands of various products of different shapes, colours, and materials. For humans, it can be very difficult to keep track of all possible variations.
  • Quality – Providing better and higher quality products by reducing defective items and getting insights into the critical parts of the assembly line.
  • Risk of damage – Machine vision can reduce the risk of item damage and contamination by a person.
  • Workplace safety – Making the work environment safer by inspecting it for potentially dangerous situations (e.g. detection of protective wearables such as safety helmets on construction sites), inspection in radioactive or biohazard environments, detection of fire, covid face masks, and many more.
  • Saving costs – Labour can be pretty expensive in the Western world.
    For example, the average quality control inspector salary in the US is about $40k. Companies consider numerous options when cutting costs, such as moving factories to other countries, streamlining operations, or replacing workers with robots. And as I said before, this goes hand in hand with some political & ethical questions. I think the most reasonable solution in the long term is the cooperation of workers with robotic systems. This will make the process more robust, reliable, and effective.
  • Costs of AI systems – Sooner or later, modern technology and automation will be common in all companies (startups as well as enterprises). The adoption of automatic solutions based on AI will make the transition more affordable.

Where is Visual Quality Control Used?

Let's take a look at some of the fields where AI visual control helps:

  • Cosmetics – Inspection of beauty products for defects and contamination, colour & shape checks, checking glass or plastic tubes for cleanliness and rejecting scratched pieces.
  • Pharma & Medical – Visual inspection for pharmaceuticals: rejecting defective and unfilled capsules or tablets, checking the filling level of bottles and the integrity of items, or spotting surface imperfections of medical devices. High-resolution recognition of materials.
  • Food Industry and Agriculture – Food and beverage inspection for freshness. Label print/barcode/QR code control of presence or position.

A great example of industrial IoT is this story about a Japanese cucumber farmer who developed a monitoring system for quality checks using deep learning and TensorFlow.

  • Automotive – Examination of forged metallic parts, plastic parts, cracks, stains or scratches in the paint coating, and other surface and material imperfections. Monitoring quality of automotive parts (tires, car seats, panels, gears) over time. Engine monitoring and predictive autonomous maintenance.
  • Aerospace – Checking for the presence and quality of critical components and material, spotting the defective parts, discarding them, and therefore making the products more reliable.
  • Transportation – Rail surface defects control (example), aircraft maintenance check, or baggage screening in airports – all of them require some kind of visual inspection.
  • Retail/Consumer Goods & Fashion – Checking assembly line items made of plastics, polymers, wood, and textiles, as well as packaging. Visual quality control can be deployed in the manufacturing process of the goods, sorting out imprecise products.
  • Energy, Mining & Heavy Industries – Detecting cracks and damage in wind blades or solar panels, visual control in nuclear power plants, and many more.

It's interesting to see that more and more companies choose collaborative platforms such as Kaggle to solve specific problems. In 2019, a contest on Kaggle organized by the Russian company Severstal led to dozens of solutions for the steel defect detection problem.

Image of flat steel defects from the Severstal competition. [Source: Kaggle]
  • Other, e.g. safety checks – checking whether people are present in specific zones of the factory, whether they wear helmets, or stopping a robotic arm if a worker is located nearby.

The Technology Behind AI Quality Control

There are several different approaches and technologies that can be used for visual inspection on production lines. The most common ones nowadays use some kind of neural network model.

Neural Networks – Deep Learning

Neural Networks (NN) are computational models that accept input data and output relevant information. To make a neural network useful (i.e., to find the weights for the connections between the neurons and layers), we need to feed the network some initial training data.

The advantage of using neural networks is their ability to internally represent the training data, which leads to the best performance in computer vision compared to other machine learning models. However, it also brings challenges, such as computational demands, overfitting, and others.

[Un|Semi|Self] Supervised Learning

If a machine-learning algorithm (NN) requires ground-truth labels, i.e. annotations, then we are talking about supervised learning. If not, then it is an unsupervised method, or something in between – a semi- or self-supervised method. However, building an annotated dataset is much more expensive than simply obtaining data with no labels. The good news is that the latest research in neural networks tackles such problems with unsupervised learning.

On the left is the original item without any defects; on the right, a slightly damaged one. If we know the labels (OK/DEFECT), we can train a supervised machine-learning algorithm. [Source: Kaggle]

Here is the list of common services and techniques for visual inspection:

  • Image Recognition – Simple neural network that can be trained for categorization or error detection on products from images. The most common architectures are based on convolution (CNN).
  • Object Detection – Model able to predict the exact position (bounding box) of specific parts. Suitable for defect localization and counting.
  • Segmentation – More complex than object detection, image segmentation gives you pixel-level predictions.
  • Image Regression – Regress/get a single decimal value from the image, for example the level of wear of an item.
  • Anomaly Detection – Shows which image contains an anomaly and why. Mostly done with GANs or Grad-CAM.
  • OCR – Optical Character Recognition is used for getting and reading text from images.
  • Image matching – Matching the picture of the product to a reference image and displaying the difference (see the sketch after this list).
  • Other – There are also other solutions that do not require data at all, most of the time using some simple, yet powerful computer vision technique.
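
To make the image matching idea concrete, here is a minimal OpenCV sketch that compares a product photo to a reference (“golden”) image and highlights the regions that differ. The file names and threshold are placeholders, and a real system would also need image alignment and lighting normalization:

import cv2

reference = cv2.imread("reference_item.png", cv2.IMREAD_GRAYSCALE)
inspected = cv2.imread("inspected_item.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(reference, inspected)          # per-pixel difference
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
mask = cv2.dilate(mask, None, iterations=2)       # merge nearby defect pixels

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(inspected, cv2.COLOR_GRAY2BGR)
for c in contours:
    if cv2.contourArea(c) > 50:                   # ignore tiny noise
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(result, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("defect_overlay.png", result)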

If you would like to dive a bit deeper into the process of building a model, you can check my posts on Medium, such as How to detect defects on images.

Typical Types and Sources of Data for Visual Inspection

Common Data Sources

Thermal imaging example [Source: Quality Magazine]

RGB images – The most common data type and the easiest to get. A simple 1080p camera that you can connect to a Raspberry Pi costs about $25.

Thermography – Thermal quality control via infrared cameras, mostly used to detect flaws under the surface that are not visible to simple RGB cameras, as well as for gas imaging, fire prevention, and monitoring the behaviour of electronics under different conditions. If you want to know more, I recommend reading the articles in Quality Magazine.

3D scanning, lasers, X-ray, and CT scans – Creating 3D models with special depth scanners gives you better insight into material composition, surface, shape, and depth.

Microscopy – Due to the rapid development and miniaturization of technologies, sometimes we need a more detailed and precise view. Microscopes can be used in an industrial setting to ensure the best quality and safety of products. Microscopy is used for visual inspection in many fields, including material sciences and industry (stress fractures), nanotechnology (nanomaterial structure), or biology & medicine. There are many microscopy methods to choose from, such as stereomicroscopy, electron microscopy, opto-digital or purely digital microscopes, and others.

Common Inspection Errors

  • scratches
  • patches
  • knots, shakes, checks, and splits in the wood
  • crazing
  • pitted surface
  • missing parts
  • label/print damage
  • corrosion
  • coating nonuniformity
Surface crazing and cracking on brake discs [source], crazing in polymer-grafted nanoparticle film [source], and wood shakes [source].

Examples of Datasets for Visual Inspection

  • Severstal Kaggle Dataset – A competition for the detection of defects on flat sheet steel.
  • MVTec AD – 5000 high-resolution annotated images of 15 items (divided into defective and defect-free categories).
  • Casting Dataset – Casting is a manufacturing process in which a liquid material is usually poured into a form/mould. About 7,000 images of submersible pump impellers with and without casting defects.
  • Kolektor Surface-Defect Dataset – Dataset of microscopic fractures or cracks on electrical commutators.
  • PCB Dataset – Annotated images of printed circuit boards.

AI Quality Control Use Cases

We talked about a wide range of applications for visual control with AI and machine learning. Here are three of the use cases for industrial image recognition we worked on in 2020. All these cases required automatic optical inspection (AOI) and partial customization when building the model, working with different types of data and deployment (cloud / on-premise instance / smartphone). We are glad that during the COVID-19 pandemic, our technologies helped customers keep their factories open.

Our typical workflow for a customized solution is the following:

  1. Setup, Research & Plan: If we don’t know how to solve the problem from the initial call, our Machine Learning team does the research and finds the optimal solution for you.
  2. Gathering Data: We sit with your team and discuss what kind of data samples we need. If you can’t acquire and annotate data yourself, our team of annotators will work on obtaining a training dataset.
  3. First prototype: Within 2–4 weeks we prepare the first prototype or proof of concept. The proof of concept is a lightweight solution for your problem. You can test it and evaluate it by yourself.
  4. Development: Once you are satisfied with the prototype results, our team can focus on the development of the full solution. We work mostly in an iterative way, improving the model and obtaining more data if needed.
  5. Evaluation & Deployment: If the system performs well and meets the criteria set up in the first calls (mostly some evaluation on the test dataset and speed performance), we work on the deployment. It can be used in our cloud, on-premise, or embedded hardware in the factory. It’s up to you. We can even provide a source code so your team can edit it in the future.

Use case: Image recognition & OCR for wood products

One of our customers contacted us with a request to build a system for the categorization and quality control of wooden products. With the Ximilar Platform, we were able to easily develop and deploy a camera system over the assembly line that sorts the products into bins. The system identifies defective print on the products with optical character recognition (OCR) technology, and surface control of the wood texture is handled by a separate model.

Printed text on wood [Source: Ximilar]

The technology is connected to a simple smartphone/tablet camera in the factory and can handle tens of products per second. This way, our customer was able to reduce rework and manual inspections, which led to savings of thousands of USD per year. This system was built with the Ximilar Flows service.

Use case: Spectrogram analysis from car engines

Another project we successfully deployed was the detection of malfunctioning engines. We did it by transforming the sound input from the car into an image spectrogram. After that, we trained a deep neural network that recognises problematic car engines and can tell you the specific problem of the engine.

The good news is that this system can also detect anomalies in an unsupervised way (with no need for data labelling) using GAN technology.
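
For illustration, turning a sound recording into a spectrogram image that an image classifier can consume might look roughly like this (a sketch using librosa; the file name and parameters are placeholders, not the exact pipeline of the project):

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

audio, sr = librosa.load("engine_recording.wav", sr=22050)
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)      # log-scaled mel spectrogram

plt.figure(figsize=(4, 4))
librosa.display.specshow(mel_db, sr=sr)
plt.axis("off")
plt.savefig("engine_spectrogram.png", bbox_inches="tight", pad_inches=0)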

Spectrogram from Engine [Source: Ximilar]

Use case: Wind turbine blade damage from drone footage

[Source: Pexels]

According to Bloomberg, there is no simple way to recycle a wind turbine, so it is crucial to prolong the lifespan of wind power plants. Turbines can be hit by lightning and affected by extreme weather and other natural forces.

That's why we developed a system for our customers that checks rotor blade integrity and damage from drone video footage. The videos are uploaded to the system, and the inspection is done by an object detection model that identifies potential problems. Thousands of videos are analyzed in one batch, so we built a workstation (with Nvidia RTX GPU cards) able to handle such a load.

Ximilar Advantages in Visual AI Quality Control

  • An end-to-end and easy-to-use platform for Computer Vision and Machine Learning, with enterprise-ready features.
  • Processing hundreds of images per second on an average computer.
  • Train your model in the cloud and use it offline in your factory without an internet connection. Thanks to TensorFlow, you can use the model on any computer, edge device, GPU card, or embedded hardware (Raspberry Pi or NVIDIA Jetson connected to a camera). We also provide optimized CPU models on Intel devices through OpenVINO technology.
  • Easily gather more data and teach models on new defects within a day.
  • Evaluation of the independent dataset, and model versioning.
  • A customized yet affordable solution providing the best outcome with pixel-accurate recognition.
  • Advanced image management and annotation platform suitable for creating intelligent vision systems.
  • Image augmentation settings that can be tuned for your problem.
  • Fast machine learning models that can be connected to your industrial camera or smartphone for industrial image processing robust to lighting conditions, object motion, or vibrations.
  • Great team of experts, available to communicate and help.

To sum up, it is clear that artificial intelligence and machine learning are becoming common in the majority of industries working with automation, digital data, and quality or safety control. Machine learning definitely has a lot to offer to factories with both manual and robotic assembly lines, or even fully automated production, but also to various specialized fields, such as material sciences and the pharmaceutical and medical industries.

Are you interested in creating your own visual control system?

Is Ximilar Better Than AI Giants?
18 Jun 2019 – https://www.ximilar.com/blog/is-ximilar-better-than-ai-giants/
Comparison of pricing and features of the main cloud players in computer vision, machine learning, and artificial intelligence.

We get this question occasionally from users of other visual AI analysis tools, and the simple answer could be: yes, it's better. But nothing is black and white, so let us compare services from Goliaths like Google, IBM, Amazon, and Microsoft with our David-like solution from Ximilar.

To put it simply, artificial intelligence vision has reached a point where it is easy not only to recognize objects in a photo but also to detect the features of each object. That creates a new universe of opportunities for real-world applications in e-commerce and traditional industries alike. Ximilar is a computer vision platform that digs deep into some pretty narrow use cases. So while the big solutions might be great in many ways, Ximilar might very well be the agile alternative.

Ximilar offers you a great cloud AI platform for training your custom image recognition models and advanced visual search services.


Ximilar is Not a Big Corporation

And that is a good thing, because we keep things simple and streamlined, and we have time to listen to each customer's needs. We are also able to implement new custom features in a timely manner, and we do it as fast as we can, benefiting both our customers and us by freeing our manpower from manual work.

We at Ximilar create, and continuously improve, advanced visual search, image recognition services, and image tools for businesses around the world. That happens in a few areas:

We are also not an enterprise that requires millions of users just to stay afloat. See, for example, how many services have been killed by Google. No. Rather than growth in quantity, our center of the universe is how precise we can get and how reliable and sustainable the results we deliver are – and how we can grow strong together with our customers, or, we should rather say, our partners.

Here is why Ximilar could be a solid alternative for you if you need to iterate quickly and reach reliable results in narrow fields. Or if you simply need someone who takes your idea further and finds an AI solution to deliver value to your business.

1 – We are a focused AI team

We craft our features to perfection, and we test & use them ourselves. We continuously improve our application so that everybody can benefit from new findings in the AI vision industry. We also do things that customers ask for – we don't just sell access to a platform.

2 – We are an independent company

These days, many companies are created to be acquired. They are created to grow no matter the sustainability of such growth. We are different. Our customers like that we would not disappear tomorrow — getting acquired by a giant and then dissolved into some unreachable feature of some huge app suite is not our target.

3 – We innovate faster

We don’t have a large team and therefore decisions are quick. We are a team of remote professionals working in a field that we truly love and would like to explore to the edge of possibilities. It’s a lot of fun to work on our customers’ challenging tasks. And we are happy to customize any feature. The customer’s budget is the only limit.

4 – Save expenses on AI

Our AI solutions are significantly cheaper than the solutions of the big AI players. We are able to save you a lot of money on training and deploying your custom models. For example, training and deploying a model on Google Vertex AI can cost you thousands of dollars, without even calling the API. For Vertex AI AutoML models, you are paying for training, deploying, and calling a model. Similar pricing applies to Amazon Rekognition and Azure Custom Vision services. With Amazon Rekognition, you are also paying for each hour your model is deployed! On the other hand, AI models built via our platform are trained and deployed for free – you pay just for calling the API. No more hidden costs.

Head-to-Head Comparison

  • Ximilar – Focus: Custom Image Recognition, Visual & Similarity Search, Tagging. Models: Fashion, Home-Decor, Collectibles, Custom (classification, tagging, detection). On-premise deployment: Optional. Price per 1,000 images: $1.0. Free plan per month: 3,000 requests, free model training and deployment. Visual Search: Yes. Expert assistance: Yes.
  • Microsoft – Focus: Image Recognition. Models: Generic, Custom (classification, tagging, detection). On-premise deployment: No. Price per 1,000 images: $2. Free plan per month: 10,000 requests, 1 hour of training. Visual Search: No. Expert assistance: No.
  • Amazon – Focus: Image & Video Recognition. Models: Generic, Face, Sensitive Content, Text, Celebrity, … On-premise deployment: No. Price per 1,000 images: $1. Free plan per month: 5,000 requests. Visual Search: Face only. Expert assistance: No.
  • Google – Focus: Image Recognition. Models: Generic, Faces, Text, Logos, Landmarks. On-premise deployment: No. Price per 1,000 images: $1.5. Free plan per month: 1,000 requests. Visual Search: No. Expert assistance: No.
  • IBM Watson – Focus: Image Recognition. Models: Generic, Faces, Food, Explicit, Custom (classification, tagging). On-premise deployment: No. Price per 1,000 images: $2. Free plan per month: 1,000 predictions, 2 model trainings. Visual Search: No. Expert assistance: No.
  • Clarifai – Focus: Image & Video Recognition, Similarity Search. Models: Generic, Faces, Nudity, (Fashion), Custom (classification, tagging), … On-premise deployment: Optional. Price per 1,000 images: $1.2–3.2. Free plan per month: 1,000 operations. Visual Search: Yes. Expert assistance: Yes.

Narrow Field vs. Generic AI

This one is personal. You will see a lot of simple AI applications, like detecting a cat and a dog in a given – well-lit & well-shot – picture. But in reality, the bread and butter of applied visual AI is narrow-field recognition and the analysis of large volumes of images, where the customer needs pretty high accuracy on a specific subject – for example, detecting a type of screw in a blurry cellphone photo shot in bad lighting conditions.

Unlike the giants, who mostly sell you ready-made solutions that you can hardly bend to meet your needs, Ximilar is at the other end of the spectrum, brainstorming with customers about how to solve the use case they have and being their partner on the path to success.

Examples of such narrow use cases are:

  • Detecting coffee grounds in a cup – for a customer whose mobile app, used to foretell the future for its users, receives millions of images. You wouldn't believe how many people in coffee-drinking countries use such an app.
Fal Cafe mobile app
  • Recognition of trading cards from a photo – A cool use case that used to be every geek's dream. Not anymore: simply snap a photo of a sports card or a game card such as Pokémon, and the app will identify the card and return the price listed on eBay. You can build your own portfolio tracker and much more with Ximilar.
  • Give me a quality rating of a photo – this one was brought up by a hotel reservation site and a real estate company. They need to pick the best photos of a property, while the photos are often delivered by a reseller or a hotel owner and might not be well shot. And we all know that good photos sell better. Ximilar can help even there by upscaling images and improving their quality.

Lower Price for Higher Accuracy

While the examples above might be fun to read, let's get to real facts, hard numbers, and actual user feedback, because that is what any business needs to base its decisions on. Here are some real-life examples of our customers' experiences.

  • Ximilar Recognition is cheaper than, and comparable in accuracy to, Microsoft Custom Vision, Amazon Rekognition, Google Vertex AI, and IBM Watson. Several of our customers and users of the Ximilar App even achieve better accuracy than with the big cloud solutions. Ximilar allows users to control various parameters of training from a simple GUI.
Models & Insights into AI
Model versioning in Ximilar App.
  • The UX of the Ximilar App is extremely easy to pick up – our customers report that “Ximilar has a shallow learning curve in comparison to others”. Connecting to the API and integrating it into your systems and apps is easy.
  • Ximilar has advanced features for tuning your recognition tasks that no other service provides – flips, rotations, etc.
Ximilar Features
Advanced settings of image augmentations in Ximilar App.
  • Ximilar Product Similarity and Custom Similarity are unique services for finding visually similar alternatives in fashion, home decor, and other image collections.
  • Ximilar is much more flexible, as we are willing to improve our service for your needs – e.g. add more tags to our models according to your requirements – and keep it attached to your data exclusively.
  • We are cheaper – Google AutoML Vision/Vertex AI is significantly more expensive than our solution.
  • Ximilar Fashion Tagging is at the top of its class in fashion object recognition.
  • Elaborate management of tags & categories for larger projects of higher complexity – we are the only system we know of that enables users to share training data between categorisation and tagging tasks, chain recognition models into one API, and more.
  • Ximilar, unlike the big competition, is able to install the system on-premise, giving you better control over it and allowing a lot of flexible customization.

This is just a brief summary of what we see as the benefits for you if you use Ximilar as your partner for pioneering the AI world. We see it as really just the beginning of all the possibilities that might come in the future of automation and machine learning. We have been around for many years now, and Ximilar will surely be around for the years to come – backing you on the way and enjoying the exploration.

Vision AI is Breaking Into New Industries
20 Jun 2017 – https://www.ximilar.com/blog/vision-ai-is-breaking-into-new-industries/
Several interesting use cases for machine learning and visual AI around the world.

New opportunities in food industry automation and science

Automotive, electronics manufacturing, and mechanical engineering are always searching for ways to automate repetitive tasks. In this post, I want to mention three segments where recent progress in AI has enabled new forms of automation.

Why is machine learning good for these industries? Because not every part of the process is precise. Products are not always aligned in a grid and do not always have the same colour and shape. This is hard to cover with standard vision systems.

Here comes the power of machine learning. We do not need to add more rules to our production line; we only need to gather enough real-world images for deep learning. Let's take a look at some interesting industrial use cases for visual AI.


Machine learning in agriculture

We already see many new applications in this field. There is a buzz around drones that use a camera to monitor fields on a regular basis. There are systems that reduce chemical usage by looking at each flower and spraying only the important ones. By looking at flowers one by one, such a system can also provide customized fertilizers and pesticides based on the requirements of each plant. We could not have imagined this being possible over large areas five years ago.

Because agricultural areas are huge, we often see moving units with cameras. These cameras can be used for different types of monitoring.

Farmers can adjust irrigation not only according to the weather forecast but also to a live camera feed. We can place cameras around the field and detect people or animals entering different zones. Machine learning can be used to detect the presence of different types of insects and automatically trigger actions. What if we point a camera at the plants and collect images of their growth every season? We can then predict future profits at an early stage.


Ximilar (Vize.ai) supports these ideas with an important feature: with only a few training images, people can start monitoring what is important to them. It is all simple and easy to use. People do not have to wait for companies to understand their pain points.

You can read more about applications of AI in agriculture here.

Machine learning in the food industry

“The Food and Agriculture Organisation of the United Nations estimates that by 2050, feeding a global population of nine billion will require a 70 per cent increase in food production.”

In this case, we need to allocate food responsibly and minimize waste. In this article, we can read about how AI helps to sort food resources before we start processing them. Sorting raw materials before processing leads to less waste during food processing.

Fast-food restaurants can use machine learning and a set of cameras to determine how much food to prepare at any time of the day. This again leads to less waste.

Food such as pasta and vegetables

I can also see a significant opportunity in understanding human-generated waste. Applications like smart trash bins can improve your shopping habits. By avoiding buying food that always ends up in the trash, you can save nature and money as well. This fits the trend we call IoT (the Internet of Things). Your refrigerator can help you choose the right food for family members by recognizing faces, as well as keep track of expiration dates and old vegetables.

Machine learning in Animal Science

This is something we really like to talk about with our customers at Ximilar (Vize.ai). There are almost never-ending possibilities in the animal world. Cameras can help count animals for scientific research. Some scientists are interested in animal tracking. There are possibilities to analyze satellite images to find water resources or track animal migration. When it comes to security, one can avoid the risks of human-animal conflicts by leveraging a little camera and AI. I would like to write more about this in another post.

Summary

Nowadays, it is hard to find a segment where machine learning and AI are not going to bring some value in the future. I like to think of AI as an enabler for people who are working in the background on something valuable. We do not realize the exhausting work done every day by farmers and scientists around the world to bring us something as essential as food and safety. What we can do for them is bring tools and smart computers that make their lives easier. All the mentioned use cases can be developed via the Ximilar App platform. Some problems require deep customization, which we are able to do for you, so don't hesitate to contact us for a consultation.
