
VSLAM Series – Feature Extraction & Description Pipeline


This series of blogs explores the exciting field of feature-based Visual Simultaneous Localization and Mapping (VSLAM) and discusses two state-of-the-art algorithms widely used in this area: RTAB-Map and ORB-SLAM3. Building on our earlier blog on the challenges of SLAM and the advantages of using cameras for it, this article provides a comprehensive study of the components of feature-based visual SLAM algorithms and their underlying principles. It also compares the two algorithms, discusses their strengths and weaknesses, and identifies the best use cases for each.

SLAM utilizes different types of sensors, including LiDAR, cameras, and radar, to estimate the robot’s location and map the surrounding environment. Each sensor has its strengths and limitations, and multiple sensors are often combined to improve accuracy and reduce errors. Cameras are a popular choice for visual SLAM (VSLAM), which addresses the challenge of accurately localizing a robot and mapping its surroundings using visual information. VSLAM is increasingly popular due to low-cost sensors, easy sensor fusion, and GPS-denied operation, making it practical for autonomous navigation. This first part of the blog explores the key concepts and techniques of the feature extraction and description pipeline of VSLAM.

Common Modules of a Feature-based Visual SLAM

A feature-based visual SLAM system typically consists of several modules that work together to estimate the camera pose and build a map of the environment. Figure 1 shows the common modules of a feature-based visual SLAM system.



Figure 1: Common modules of a feature-based visual SLAM system


      1. Image Preprocessing: This module performs image preprocessing operations such as undistortion, color correction, and noise reduction to prepare the images for feature extraction.
      2. Feature Detection and Description: This module extracts distinctive features from the images and describes them using a set of descriptors. Common feature detection algorithms include FAST, ORB, and SURF, while common descriptors include SIFT, ORB, and FREAK.
      3. Feature Matching: This module matches the features between consecutive images or between distant images using a matching algorithm such as brute-force matching, FLANN, or ANN. The matches are used to estimate the camera motion and to build a map of the environment.
      4. Camera Motion Estimation: This module estimates the camera motion from the matched features using techniques such as epipolar geometry (the essential or fundamental matrix), Perspective-n-Point (PnP), or visual-inertial odometry (VIO).
      5. Map Representation: This module represents the map of the environment using a data structure such as a point cloud, a keyframe graph, or a pose graph. The map is updated and optimized continuously as new images and features are added.
      6. Loop Closure Detection: This module detects loop closures, which occur when the camera revisits a previously visited location; detecting them helps to reduce drift in the camera motion estimate. Loop closure detection is typically performed using techniques such as bag-of-words (BoW) or a covisibility graph.
      7. Map Optimization: This module optimizes the map and the camera poses using techniques such as bundle adjustment, pose graph optimization, or SLAM back-end. Map optimization helps to refine the accuracy and consistency of the map and to reduce the error in the camera motion estimation.
      8. Localization and Re-localization: This module estimates the camera pose within a known map (localization) and recovers the pose after tracking is lost (re-localization). Localization is typically performed using techniques such as particle filtering, Monte Carlo localization, or pose-graph optimization.

    The choice of modules and algorithms depends on the specific requirements of the application, such as the type of sensor, the computational resources, the accuracy, and the robustness.
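    As a rough illustration, the modules above form a per-frame processing loop. The sketch below is illustrative only: the function names are hypothetical placeholders, not an actual library API.

```python
def preprocess(image):
    # undistortion, color correction, noise reduction would go here
    return image

def detect_and_describe(image):
    # e.g. FAST keypoints + BRIEF descriptors, as described below
    return [], []

def process_frame(image, map_state):
    """One front-end iteration of a feature-based visual SLAM system."""
    image = preprocess(image)
    keypoints, descriptors = detect_and_describe(image)
    # feature matching, motion estimation, loop closure detection and
    # map optimization would follow here in a full system
    map_state.append((keypoints, descriptors))
    return map_state

state = []
for frame in ["frame0", "frame1"]:  # stand-ins for camera images
    state = process_frame(frame, state)
print(len(state))  # 2: one map entry per processed frame
```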

    Feature Detection and Description

    The first step of feature-based visual SLAM is to extract features from the images captured by the camera. To make this step more computationally efficient, each image is first converted to grayscale. A feature is any specific, identifiable part of an image that can be used to distinguish it from other images. It could be as simple as edges and corners, or as complex as shapes, textures, or patterns that remain identifiable under changes in viewpoint, illumination, and scale.
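    As a minimal sketch of the grayscale conversion step, the common ITU-R BT.601 luma weights can be applied to an RGB image (in practice a library such as OpenCV would do this):

```python
import numpy as np

def to_grayscale(rgb):
    """Weighted sum of R, G, B channels using BT.601 luma coefficients."""
    rgb = rgb.astype(np.float64)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

img = np.zeros((4, 4, 3))
img[..., 0] = 255  # a pure red image
gray = to_grayscale(img)
print(gray.shape)  # (4, 4): one intensity value per pixel
```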

     Feature extraction is performed using a feature detection algorithm, such as the Harris corner detector, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), or Oriented FAST and Rotated BRIEF (ORB). ORB is a popular choice for feature detection and extraction due to its efficiency, robustness, and rotation invariance. It combines FAST keypoint detection with the BRIEF binary descriptor to detect and describe features in images. ORB is computationally efficient, operates in real time on low-power devices, and is robust to noise, blur, and low-contrast lighting conditions, which makes it useful in applications such as outdoor navigation and surveillance.

    The ORB algorithm uses an image pyramid to detect features at different scales in an image. The pyramid is created by downsampling the original image into a series of smaller images, with each level having a lower resolution than the previous one (common implementations default to a scale factor of about 1.2 across 8 levels). By detecting features at multiple scales, the algorithm becomes more robust to changes in image scale, and the pyramid also helps to reduce computational complexity.
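    A simplified pyramid can be sketched as repeated 2x downsampling by block averaging; note this is only an approximation of real implementations, which typically blur before resampling and use a non-integer scale factor such as 1.2.

```python
import numpy as np

def build_pyramid(image, levels):
    """Return a list of images, each half the resolution of the previous."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape
        img = img[: h - h % 2, : w - w % 2]       # crop to even dimensions
        h2, w2 = img.shape[0] // 2, img.shape[1] // 2
        # average each 2x2 block into a single pixel
        smaller = img.reshape(h2, 2, w2, 2).mean(axis=(1, 3))
        pyramid.append(smaller)
    return pyramid

pyr = build_pyramid(np.ones((64, 64)), levels=4)
print([p.shape for p in pyr])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```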

    Feature Detection Algorithms
    FAST (Features from Accelerated Segment Test)

    After the creation of the image pyramid, the ORB algorithm performs Feature Detection by using the FAST (Features from Accelerated Segment Test) algorithm to detect key points in each level of the image pyramid. The FAST algorithm can be defined as follows:

    1. Choose a pixel p in the image and a threshold value t.


    Figure 2: Representation of candidate pixel p with circular ring

    2. Select a circular ring around the pixel p with radius r and choose a set of n pixels on the ring (the original FAST uses a Bresenham circle of radius r = 3, giving n = 16 ring pixels).

    3. Compute the difference between the intensity of pixel p and the intensity of each of the n pixels on the ring.

    4. If at least k pixels out of the n have intensities greater than p + t or less than p – t, then pixel p is considered a corner or a keypoint.

    The above process is repeated for all pixels in the image.
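    The four steps above can be sketched directly in Python. Note that the original FAST detector additionally requires the qualifying pixels to be contiguous on the ring; this simplified check only counts them, as the steps above describe.

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3 around p.
RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
        (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_keypoint(image, r, c, t=20, k=12):
    """Return True if at least k ring pixels are all-brighter or all-darker
    than the candidate pixel p by more than the threshold t."""
    p = float(image[r, c])
    ring = [float(image[r + dr, c + dc]) for dr, dc in RING]
    brighter = sum(v > p + t for v in ring)
    darker = sum(v < p - t for v in ring)
    return brighter >= k or darker >= k

img = np.zeros((9, 9))
img[4, 4] = 200  # a bright dot: every ring pixel is darker than p - t
print(is_keypoint(img, 4, 4))  # True
```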

    The FAST algorithm is efficient because it requires only a small number of intensity comparisons to determine whether a pixel is a corner or keypoint, which makes it well suited for real-time applications. Non-maximum suppression is then applied to eliminate redundant feature points in close proximity. It can be broken down into two steps:

    Step 1: Compute a score function S for every detected feature point. The score is typically computed as the sum of the absolute differences between the central pixel p and its n surrounding ring pixels.

    Step 2: If two adjacent feature points have overlapping regions, the one with the lower S value is discarded or suppressed. This ensures that only the highest-scoring feature points are retained (see Figure 3).


    Figure 3: (3.1) Original image, (3.2) FAST keypoints, (3.3) Non-maximum suppression
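    The two suppression steps can be sketched as follows; the score values and the suppression radius used here are illustrative.

```python
def non_max_suppress(keypoints, scores, radius=3):
    """Keep only the highest-scoring keypoint within each neighborhood.

    keypoints: list of (row, col) tuples; scores: their S values.
    """
    # Step 1's scores are assumed precomputed; visit strongest first.
    order = sorted(range(len(keypoints)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        r, c = keypoints[i]
        # Step 2: discard if a stronger keypoint is already kept nearby.
        if all(max(abs(r - kr), abs(c - kc)) > radius for kr, kc in kept):
            kept.append((r, c))
    return kept

pts = [(10, 10), (11, 11), (30, 30)]
strengths = [5.0, 9.0, 7.0]
print(non_max_suppress(pts, strengths))  # [(11, 11), (30, 30)]
```

Here (10, 10) is suppressed because the stronger (11, 11) lies within the 3-pixel radius, while (30, 30) is far enough away to survive.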

    After feature detection, ORB assigns each keypoint a dominant orientation so that its descriptor becomes rotation invariant. It does this using the intensity centroid method: the centroid of the pixel intensities in a patch around the keypoint is generally offset from the patch centre, and the angle of that offset defines the orientation. To compute the intensity centroid of a patch of pixels around a keypoint, ORB follows the steps outlined below:

    1. Take a patch P of size n x n centred on the keypoint, and let P(x, y) denote the intensity of the pixel at coordinates (x, y) measured relative to the patch centre.

    2. Compute the image moments of the patch:

    m_pq = sum over all pixels (x, y) of x^p * y^q * P(x, y)

    so that m00 is the total intensity of the patch, m10 is the intensity-weighted sum of the x coordinates, and m01 is the intensity-weighted sum of the y coordinates.

    3. Compute the intensity centroid (cx, cy) of the patch as:

    (cx, cy) = (m10 / m00, m01 / m00)

    4. Compute the orientation: the orientation of the keypoint is the angle of the vector from the keypoint (the patch centre) to the intensity centroid. This is given by:

    θ = arctan2(m01, m10)

    where arctan2 is the arctangent function that takes the signs of the numerator and denominator into account to return an angle between −π and π. The orientation computed above is assigned to the keypoint as its dominant orientation.
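    A minimal sketch of the intensity-centroid orientation, following the image-moment formulation used in the ORB paper (assuming a square patch centred on the keypoint):

```python
import numpy as np

def orientation(patch):
    """Dominant orientation of a square patch via its intensity moments."""
    n = patch.shape[0]
    # pixel coordinates measured relative to the patch centre
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    m10 = np.sum(xs * patch)  # intensity-weighted sum of x coordinates
    m01 = np.sum(ys * patch)  # intensity-weighted sum of y coordinates
    return np.arctan2(m01, m10)

patch = np.zeros((7, 7))
patch[3, 6] = 1.0  # all intensity mass to the right of the centre
print(orientation(patch))  # 0.0: centroid lies along the +x axis
```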

    BRIEF (Binary Robust Independent Elementary Features)

    BRIEF is a binary descriptor computed at FAST keypoints. Keypoints are described by comparing surrounding pixel values in a predefined pattern: BRIEF takes a randomized set of pixel pairs, compares their intensities, and assigns a binary value to each pair based on the comparison result. The binary values from all pairs are concatenated into a single binary string, which forms the feature descriptor for the keypoint. The length of the binary string is fixed, and the number of pairs is limited, so the resulting binary feature vector is relatively small, making it computationally efficient for large-scale image processing applications.

    BRIEF starts by smoothing the image with a Gaussian kernel of size (7, 7) to prevent the descriptor from being sensitive to high-frequency noise. A patch is then defined as a square region of the image centred on the keypoint location, typically a fixed number of pixels in width and height. Within this patch, BRIEF selects n random pairs of pixel locations (x_i, y_i) and stacks them into a 2 x n matrix M.

    Each pair contributes one bit through the binary test τ(p; x, y), defined as:

    τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise,

    where p(x) is the smoothed intensity at location x. The feature is defined as a vector of n binary tests:

    f_n(p) = sum over i = 1..n of 2^(i−1) * τ(p; x_i, y_i)

    To make the descriptor rotation aware, the patch orientation θ and the corresponding rotation matrix R_θ are used to construct a “rotated” version of M:

    M_θ = R_θ M

    Now, the rotated BRIEF operator becomes:

    g_n(p, θ) = f_n(p) | (x_i, y_i) ∈ M_θ

    Finally, the method computes the BRIEF descriptor for the keypoint using the rotated set of pixel pairs from M_θ.
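    A toy sketch of the BRIEF test: random pixel pairs, each contributing one bit. Real BRIEF uses n = 256 pairs on a smoothed patch; the pair coordinates here are randomly generated purely for illustration.

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """One bit per pair: 1 if patch[a] < patch[b], else 0."""
    return np.array([1 if patch[a] < patch[b] else 0 for a, b in pairs],
                    dtype=np.uint8)

rng = np.random.default_rng(0)
patch = rng.random((31, 31))              # stand-in for a smoothed patch
coords = rng.integers(0, 31, size=(8, 2, 2))  # n = 8 random pixel pairs
pairs = [(tuple(c[0]), tuple(c[1])) for c in coords]
desc = brief_descriptor(patch, pairs)
print(desc.shape)  # (8,): one bit per pair, concatenated into a vector
```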

    Bag of visual words (BoVW)

    BoVW uses the concept of visual words, which are similar to the words in natural language. Each visual word represents a cluster of similar features, such as SIFT or ORB descriptors, that occur frequently in the images.

    The BoVW representation is computed as follows:

    1. Vocabulary construction: Let X be the set of images in the dataset, and let F(x) be the set of local features extracted from image x. Let K be the number of visual words, obtained by clustering all extracted features with K-means. Given a set of N feature points x1, x2, …, xN in d-dimensional space, the K-means algorithm partitions the feature points into K clusters C1, …, CK. Each cluster is represented by its centroid or mean, denoted uk. At each iteration, the algorithm minimizes the within-cluster sum of squared distances (WCSS), defined as:

    WCSS = sum over k = 1..K of sum over xi ∈ Ck of ||xi − uk||^2

    where xi is a data point and uk is the centroid of the cluster to which xi is assigned. The centroid uk is the mean of the data points assigned to cluster k:

    uk = (1 / |Ck|) * sum over xi ∈ Ck of xi

    where |Ck| is the number of data points assigned to cluster k.

    2. Histogram generation: Once the visual words are obtained, a histogram is generated for each image in the dataset. For each image x in X, we count the number of features in F(x) assigned to each visual word and store the counts in a histogram H(x) = (h1, …, hK), where hi is the number of features assigned to visual word i. This yields a fixed-length vector representation of each image, whose dimensions correspond to the visual words.

    3. Normalization: The final step is to normalize the histogram vectors so that they do not depend on the total number of features detected in each image. This is done by applying L2 normalization, which scales each histogram vector to unit length:

    hi ← hi / sqrt(h1^2 + h2^2 + … + hK^2), for i = 1 to K

      By comparing the histograms of visual words between images, the system can recognize when the camera revisits a previously seen place; this is the basis of loop-closure detection and re-localization in visual SLAM.
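      The histogram and normalization steps can be sketched as follows, with two hypothetical 2-D visual words standing in for real descriptor clusters:

```python
import numpy as np

def bovw_histogram(descriptors, words):
    """Assign each descriptor to its nearest visual word, count the
    occurrences, and L2-normalize the resulting histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
    assignments = np.argmin(dists, axis=1)          # nearest word per feature
    hist = np.bincount(assignments, minlength=len(words)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

words = np.array([[0.0, 0.0], [10.0, 10.0]])             # K = 2 visual words
descs = np.array([[0.1, 0.2], [9.8, 10.1], [10.2, 9.9]])  # 3 image features
h = bovw_histogram(descs, words)
print(h)  # counts (1, 2), L2-normalized to unit length
```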


    The feature extraction pipeline is a critical component of visual SLAM systems, as it enables the extraction of distinctive and robust visual features from sensor data. The pipeline typically involves a series of steps such as image acquisition, pre-processing, feature detection and feature description. The quality and efficiency of the pipeline directly affect the accuracy and robustness of the SLAM system, as well as its ability to operate in real-time.

    To design an effective feature extraction pipeline for visual SLAM, it is important to carefully select and optimize each step of the pipeline based on the specific requirements of the application. This includes selecting appropriate feature detection and description algorithms, optimizing feature matching strategies, and addressing potential issues such as occlusion, dynamic scenes, and lighting variations.

    In conclusion, a well-designed and optimized feature extraction pipeline is essential for the success of visual SLAM systems, as it directly influences accuracy, robustness, and real-time performance. By carefully selecting and tuning each step of the pipeline to address specific application requirements and challenges, visual SLAM can achieve reliable and efficient localization and mapping in a wide range of scenarios.
