Autonomous Mobile Robots: not a distant future but a current reality

Humans have long wanted a mechanical replica of themselves to handle monotonous, repetitive tasks, and that desire was the genesis of robots. Robotics has evolved by leaps and bounds since. Our journey at Ignitarium started with a simple Ackermann-steered rover, which we then evolved into an intelligent, feature-rich platform. Today we have built expertise in ROS, sensor fusion, path planning, navigation, dynamic obstacle detection and avoidance, and perception engineering.

Ignitarium Robotics Highlights

Sensor Fusion

Sensors are key components of any autonomous machine, and each sensor has unique strengths or works well only under certain conditions. Combining the inputs of multiple sensors into unified, processed data yields a model that is more accurate and more reliable than any single sensor alone. The most common sensors in the industry are cameras, lidars and radars.
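When two sensors measure the same quantity, the simplest fusion is an inverse-variance weighted average: the noisier reading gets proportionally less weight, and the fused estimate has lower variance than either input. A minimal sketch (the sensor values and variances below are illustrative, not from real hardware):

```python
import numpy as np

def fuse_measurements(values, variances):
    """Fuse independent readings of the same quantity by
    inverse-variance weighting: noisier sensors count for less."""
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * values) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)          # always <= smallest input variance
    return fused, fused_var

# Illustrative numbers: camera range estimate 10.4 m (variance 0.25),
# lidar range estimate 10.1 m (variance 0.01)
fused, var = fuse_measurements([10.4, 10.1], [0.25, 0.01])
```

The fused value lands close to the lidar reading, as expected from its much smaller variance.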

For State Estimation (Position & Velocity)

Odometry - Wheel Odometry, IMU & 2D Lidar

Visual Odometry - IMU & 3D Vision Camera

Velocity Estimation - Wheel Odometry, IMU & GPS
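As a toy illustration of how such sources can be combined, the 1-D Kalman-filter sketch below uses IMU acceleration to drive the predict step and wheel-odometry velocity to correct it in the update step. The noise parameters are placeholder values, not tuned for any real robot:

```python
class VelocityKF1D:
    """Minimal 1-D Kalman filter: IMU acceleration drives the predict
    step; wheel-odometry velocity corrects it in the update step."""

    def __init__(self, q=0.05, r=0.2):
        self.v = 0.0   # velocity estimate (m/s)
        self.p = 1.0   # estimate variance
        self.q = q     # process noise (IMU integration drift)
        self.r = r     # measurement noise (wheel slip, encoder quantization)

    def predict(self, accel, dt):
        self.v += accel * dt
        self.p += self.q * dt

    def update(self, wheel_velocity):
        k = self.p / (self.p + self.r)          # Kalman gain
        self.v += k * (wheel_velocity - self.v)
        self.p *= (1.0 - k)
        return self.v

# Simulate 5 s of constant 1 m/s^2 acceleration at 10 Hz
kf, true_v = VelocityKF1D(), 0.0
for _ in range(50):
    true_v += 1.0 * 0.1
    kf.predict(accel=1.0, dt=0.1)
    kf.update(wheel_velocity=true_v)   # in practice: the encoder-derived speed
```

In a real stack this role is typically played by an EKF/UKF node (e.g. robot_localization in ROS) fusing full 3D state rather than a single scalar.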

For Object Classification


2D Lidar with 3D Vision Camera

3D Lidar with camera

Radar with camera
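A common building block behind all three pairings is projecting the range sensor's 3D points into the camera image so detections and points can be associated. A minimal pinhole-projection sketch, with illustrative (not calibrated) intrinsics and extrinsics:

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3-D lidar points (N, 3) into pixel coordinates.
    T_cam_lidar: 4x4 extrinsic transform, lidar frame -> camera frame.
    K: 3x3 camera intrinsic matrix."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0                          # drop points behind camera
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                           # perspective divide
    return uv, in_front

# Illustrative intrinsics; a real K comes from camera calibration
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)   # assume lidar and camera frames coincide for this sketch
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 10.0]])
uv, mask = project_lidar_to_image(pts, T, K)
```

A point on the optical axis lands at the principal point (320, 240); off-axis points shift in proportion to their depth.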

Path Planning & Navigation

Autonomous Mobile Robots (AMRs) must not only detect objects, both static and dynamic, but also be intelligent enough to avoid them by recalculating an optimal route between the current location and the destination. Path planning requires a map of the environment, and the robot must know its location with respect to that map. Robots capable of Simultaneous Localization And Mapping (SLAM) can achieve optimized coverage of the entire navigable space.

Path Planning

Self-localization – Where am I?

Path planning – How do I get to my destination?


Map building and interpretation – Geometric representation of the robot's environment
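As a concrete example of the "how do I get there" step, the sketch below runs A* over a small occupancy grid. It is a simplified illustration, not the planner of any particular navigation stack:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])

    def h(c):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_set = [(h(start), 0, start)]
    came_from, g_cost, closed = {start: None}, {start: 0}, set()
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:                       # reconstruct path via parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nbr, float("inf")):
                    g_cost[nbr] = ng
                    came_from[nbr] = cell
                    heapq.heappush(open_set, (ng + h(nbr), ng, nbr))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # must detour around the obstacle row
```

When a dynamic obstacle appears, the same search is simply rerun on the updated grid from the robot's current cell.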

SLAM Algorithms
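The mapping half of SLAM is often built on a log-odds occupancy grid. The sketch below updates a grid from a single lidar beam, assuming the robot's pose is already known; a full SLAM system estimates that pose simultaneously. The cell size and log-odds increments are illustrative values:

```python
import math

def update_occupancy(grid, pose, angle, rng, cell_size=0.1,
                     l_free=-0.4, l_occ=0.85):
    """Log-odds update for one lidar beam: cells along the beam become
    more likely free; the endpoint cell becomes more likely occupied.
    grid is a dict mapping (cx, cy) cell indices to log-odds values."""
    x, y, theta = pose
    beam = theta + angle
    for i in range(int(rng / cell_size)):      # march along the beam
        d = i * cell_size
        cx = round((x + d * math.cos(beam)) / cell_size)
        cy = round((y + d * math.sin(beam)) / cell_size)
        grid[(cx, cy)] = grid.get((cx, cy), 0.0) + l_free
    ex = round((x + rng * math.cos(beam)) / cell_size)
    ey = round((y + rng * math.sin(beam)) / cell_size)
    grid[(ex, ey)] = grid.get((ex, ey), 0.0) + l_occ
    return grid

grid = {}
# One beam straight ahead from the origin, returning a 1.0 m range
update_occupancy(grid, pose=(0.0, 0.0, 0.0), angle=0.0, rng=1.0)
```

Repeating this over every beam of every scan, while an estimator corrects the pose, is the essence of grid-based SLAM implementations such as those in the ROS ecosystem.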

Perception

Perception is one of the key capabilities a robot needs to make decisions, plan and operate in real-world environments. Examples of robotic perception include obstacle detection, object recognition, semantic place classification, 3D environment representation, terrain classification, pedestrian and vehicle detection, and object tracking.

Sensors: Camera, Lidar, Radar, RGBD

Sensor Data Processing: Mapping and extraction of data from the sensors

AI/ML Inference: Data Analysis, Inference, Prediction

Outcome: Planning, Execution, Navigation
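The four stages above can be sketched as a pipeline of small functions. Everything below (the `Detection` type, the confidence threshold, and the decision rule) is a hypothetical stand-in for real perception components, not an actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def process_sensor_data(raw_frame):
    """Stage 2 stub: map/extract features from raw sensor output."""
    return {"features": raw_frame}

def run_inference(features, threshold=0.5):
    """Stage 3 stub: keep only detections above a confidence threshold."""
    return [d for d in features["features"] if d.confidence >= threshold]

def plan(detections):
    """Stage 4 stub: a placeholder decision rule for the sketch."""
    if any(d.label == "pedestrian" for d in detections):
        return "avoid"
    return "proceed"

# A frame with one confident pedestrian and one low-confidence artifact
frame = [Detection("pedestrian", 0.9), Detection("shadow", 0.2)]
action = plan(run_inference(process_sensor_data(frame)))
```

The value of the staged structure is that each stage can be swapped independently: a different sensor front-end, a different inference model, or a different planner.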


Capabilities Demo Video

Robotics capabilities

One Software Stack Demo Video

One software stack, many applications

Case Studies