Vision Intelligence: TYQ-i®

You are Industry 4.0 ready. So are we.

TYQ-i – Vision Intelligence for Smart-er Inspection.

Every industry is looking to improve efficiencies – be it early discovery of wear and tear in civil infrastructure or improving finished product quality on an assembly line. TYQ-i® is Ignitarium's Visual Deep Learning-based defect detection platform that can vastly improve your inspection and Quality Assurance process.

See – and find – more with TYQ-i®.


Deep Learning-based Vision Analytics for Wind Turbine Inspection

Key Features of TYQ-i

Accurate defect detection using Vision sensors

Multi-mode vision sensor support – RGB, 3D, Laser, Microscopic, Multi-spectral, etc.

Advanced Hybrid algorithms merging classical CV and DL techniques

In-field, customer-retrainable architecture

Custom Neural Networks that minimize the requirement for vast training data sets

Multiple deployment modes – edge, on-premise server, and cloud-based

Optimized for the Intel® Distribution of OpenVINO™ toolkit

Solution scalable from low-cost microcontrollers to GPU farms


TYQ-i Attributes


TYQ-i is a distillation of Ignitarium's deep experience in AI-for-vision. At its core, it blends the best of classical CV techniques with advanced custom neural nets to create an efficient solution that can generalise across vastly different use cases. The same platform can be tuned to detect anomalies in an infrastructural asset spread over miles as easily as it can be re-purposed to detect a defect in a tiny part moving on a high-speed conveyor line.
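The classical-CV-plus-neural-net blend described above can be pictured as a two-stage pipeline: a fast classical pass proposes candidate defect regions, and a learned model scores only those candidates. The sketch below is purely illustrative (it is not TYQ-i's pipeline); the gradient threshold and the contrast-based "classifier" stub are assumptions standing in for a tuned filter and a trained network.

```python
import numpy as np

def candidate_regions(image, grad_threshold=0.4):
    """Classical CV stage: flag pixels whose gradient magnitude
    exceeds a threshold as candidate defect locations."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return np.argwhere(magnitude > grad_threshold)

def classify_region(image, yx, patch=3):
    """Learned stage (stubbed): a real system would run a small
    neural net on the patch; here we score mean local contrast."""
    y, x = yx
    h, w = image.shape
    y0, y1 = max(0, y - patch), min(h, y + patch + 1)
    x0, x1 = max(0, x - patch), min(w, x + patch + 1)
    region = image[y0:y1, x0:x1].astype(float)
    return float(region.max() - region.min())

# A flat "surface" with one bright scratch-like defect.
img = np.zeros((16, 16))
img[8, 4:12] = 1.0

candidates = candidate_regions(img)
scores = [classify_region(img, yx) for yx in candidates]
print(len(candidates), max(scores))
```

Running the expensive model only on the classical stage's proposals is what keeps such a pipeline fast enough for both miles-long assets and high-speed conveyor lines.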

Minimal training images

Handcrafted custom neural networks (not just open-source CNNs) that can be trained for high-accuracy inference with minimal data sets (often fewer than a few tens of images)

Scalable across defect classes

Common neural network platform architecture that can detect edge, surface, color, and dimensional defects, as well as missing or extra artifacts

Integrated SDK-based solution

Efficient SDK to allow quick deployment of the solution in live environments, irrespective of defect class being targeted

Works with low-cost cameras

Does not require high-end purpose-built industrial grade Machine-Vision cameras. Can even work with regular webcams in several use cases.

In-built benchmarking frameworks

TYQ-i includes a native benchmarking framework that allows quick identification of bottlenecks in the pipeline – be it registration, classification, inference, or sub-stages in between.
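The benchmarking framework itself is part of the product, but the underlying idea – accumulate wall-clock time per pipeline stage, then report the slowest one – can be sketched in a few lines. The `StageTimer` class and stage names below are hypothetical, chosen only to illustrate the approach.

```python
import time
from contextlib import contextmanager

class StageTimer:
    """Accumulates wall-clock time per pipeline stage so the
    slowest stage (the bottleneck) can be identified."""
    def __init__(self):
        self.totals = {}

    @contextmanager
    def stage(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            self.totals[name] = self.totals.get(name, 0.0) + elapsed

    def bottleneck(self):
        return max(self.totals, key=self.totals.get)

timer = StageTimer()
with timer.stage("registration"):
    time.sleep(0.01)   # stand-in for real registration work
with timer.stage("inference"):
    time.sleep(0.03)   # stand-in for real inference work
print(timer.bottleneck())
```

In this toy run, "inference" dominates, so that is the stage an engineer would optimize (or offload) first.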

FPGA-acceleratable architecture

Efficient architecture that can seamlessly transfer critical functions from the vision processing pipeline into low-cost FPGAs to achieve higher frame rates or reduce latencies.

Use cases

TYQ-i in Action

TYQ-i Demo Kits

Lattice FPGA-based TYQ-i Demo Kit

Renesas MCU-based TYQ-i Demo Kit


Deep Learning-based Vision Analytics For Smart Infrastructure

Webinar Recording

Deep Learning-based Vision Analytics for Wind Turbine Inspection

Webinar Recording

Aerial Analytics

Use cases:

  • Rail track defect detection
  • Tower defect detection: structural analysis of power transmission towers
  • Infrastructure mapping

Highlights:

  • Defect detection from a distance
  • Non-intrusive
  • Automatic video capture with perfectly centered ROI
  • No manual intervention required by pilot for camera positioning

Ground-Based Infrastructure Analytics

Use cases:

  • Rail tracks (public transport, mining, etc.)
  • Highways
  • Tunnels

Highlights:

  • Analysis of video and images from 2D & 3D RGB camera sensors
  • Multi-sensor support (X-ray, thermal, radar, etc.)
  • Detection of anomalies in peripheral areas of core infrastructure (e.g., vegetation or stones near rail tracks)

Real-Time Manufacturing Line Inspection

Use cases:

  • Detection of defects on the surface of manufactured goods (metal, plastic, glass, food, etc.)
  • Can be integrated into the overall automated QA infrastructure on an assembly line

Highlights:

  • Custom neural network and algorithms to achieve high accuracy and inference speed
  • Use of consumer- or industrial-grade cameras
  • Requires only a few hundred images during the training phase
  • Supports incremental training of the neural network with data augmentation
  • Allows implementation on low-cost GPU- or CPU-based platforms
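Data augmentation is what lets a detector learn from only a few hundred images: each captured image is expanded into several label-preserving variants before training. A minimal sketch of the idea, using simple flips and 90-degree rotations (TYQ-i's actual augmentation policy is not described here, so these transforms are illustrative assumptions):

```python
import numpy as np

def augment(image):
    """Generate label-preserving variants of one training image:
    horizontal/vertical flips plus 90/180/270-degree rotations."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

# A tiny training set of 4 images expands to 24 training samples.
train = [np.random.rand(32, 32) for _ in range(4)]
augmented = [v for img in train for v in augment(img)]
print(len(augmented))
```

Real pipelines typically add photometric variants too (brightness, noise, blur), since manufacturing-line lighting varies far more than part geometry.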

Missing Artifact Detection

Use cases:

  • Detection of missing components during various stages of manufacturing of industrial parts
  • Examples include: missing nuts and bolts, missing ridges, missing grooves on plastic and metal blocks

Highlights:

  • Custom neural network and algorithms to achieve high accuracy and inference speed
  • Single-pass detection of many categories of missing artifacts
  • In-field trainable neural networks with dynamic addition of new artifact categories
  • Implementation using low-cost cameras rather than expensive machine-vision cameras
  • Learning via the use of minimal training sets
  • Options to implement the neural network on GPU- or CPU-based systems
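Once a single detection pass has produced per-category counts, flagging missing artifacts reduces to comparing those counts against the part's expected bill of components. The post-processing step below is a hypothetical sketch of that comparison (the category names and counts are made up for illustration):

```python
from collections import Counter

def missing_artifacts(expected, detected):
    """Compare the expected component list against detector output;
    return each artifact category that is short, with its shortfall."""
    shortfall = Counter(expected) - Counter(detected)
    return dict(shortfall)

# Hypothetical single-pass detector output for one inspected part.
expected = ["bolt"] * 4 + ["nut"] * 4 + ["ridge"] * 2
detected = ["bolt"] * 4 + ["nut"] * 3 + ["ridge"] * 2
print(missing_artifacts(expected, detected))
```

Because the check is count-based, adding a new artifact category in the field only requires extending the expected list, not re-engineering the comparison logic.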

Real-Time Color Detection

Use cases:

  • Machine vision applications such as color sorter or food defect detection

Highlights:

  • Color detection algorithm with real-time performance
  • Color discrimination that approaches human vision, including fine shade differentiation
  • GPGPU-based algorithm on NVIDIA CUDA and Snapdragon Adreno GPUs
  • Extremely low detection latency (a few tens of milliseconds)
  • Portable onto different hardware platforms
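Shade discrimination of this kind is usually done in HSV space, where hue separates cleanly from brightness and is therefore far more robust to lighting changes than raw RGB distances. A minimal single-pixel sketch using Python's standard-library `colorsys` is shown below; the hue bin boundaries and the saturation/value cutoffs are illustrative assumptions, and a production sorter would tune them per use case and run the same logic per-pixel on the GPU.

```python
import colorsys

# Hypothetical hue bins in degrees; real systems tune these per use case.
HUE_BINS = [("red", 0, 30), ("yellow", 30, 90), ("green", 90, 150),
            ("cyan", 150, 210), ("blue", 210, 270), ("magenta", 270, 330),
            ("red", 330, 360)]

def classify_color(r, g, b):
    """Map an RGB pixel (0-255 channels) to a named color via its
    HSV hue, handling near-black and near-gray pixels separately."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:
        return "black"
    if s < 0.15:
        return "white" if v > 0.8 else "gray"
    degrees = h * 360
    for name, lo, hi in HUE_BINS:
        if lo <= degrees < hi:
            return name
    return "red"  # hue wraps around at 360 degrees

print(classify_color(200, 40, 30))
print(classify_color(30, 180, 60))
```

The black/gray/white guards matter in practice: a dark red and a black pixel can have the same hue, so hue alone would misclassify low-value pixels.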