AI-based surface vegetation encroachment detection around rail tracks 


Railway operators must conduct routine inspections and maintenance of tracks, trains, and other equipment to ensure safe operation of railways. Through these inspection and maintenance activities, railway operators prevent service interruptions and, most importantly, reduce the chances of catastrophic railway accidents by resolving common causes such as equipment failures and track defects. 

Along with track scheduling conflicts, signaling failures and inadequately controlled grade crossings are among the major causes of rail mishaps. While the trains themselves require more maintenance than any other piece of railroad infrastructure, the tracks and the infrastructure around them are usually the biggest culprits. The track “right-of-way” - as the region in the immediate vicinity of the railway track is often called - requires constant surveillance to ensure that safety standards are met. This surveillance is performed in two ways: 
a. Offline mode where a human or an automated inspection vehicle maps the right-of-way and flags abnormalities which can then be addressed by a maintenance crew, and  
b. Real-time inspection of track and right-of-way conditions from sensors mounted on the service cars themselves.  

Abnormalities include defects in the rail track and its constituent components (e.g., plates, bolts, ties, ballast) as well as rocks, trees, and livestock encroaching upon the track.  

Advances in Computer Vision and AI offer an opportunity to provide accurate anomaly detection options to railway operators, allowing them to run their operations in a cost-effective manner without compromising the safety profile of their vast track networks. Ignitarium’s TYQ-i(TM) solution is a comprehensive platform that addresses numerous classes of automatic defect detection use cases related to civil infrastructure – rail being one of the primary assets of focus.  

In this article, we introduce one component within the TYQ-i software algorithm library that specifically targets surface vegetation encroachment detection and analytics along the track right-of-way. The problem statement includes detecting close-to-the-ground vegetation within a specified distance on both sides of the track and defining thresholds for vegetation density around the track and between the ties.  

The steps involved in implementing the component are described in the subsequent sections. 


Multiple sets of video from real-world railway tracks are used as the base dataset. The video corpus contains varied footage of track under different scenarios – differential footage of the left and right sides of the track, different levels of encroachment, different levels of surface thickness, sparse/dense vegetation on the ballast region, etc. The videos are decomposed into frames, and these are segregated into carefully selected bins. Binning is achieved by a combination of automatic image-processing techniques and manual segregation.  
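The automatic side of binning can be illustrated with a minimal sketch. The exact image-processing checks used in the real pipeline are not described in this article, so the green-dominance heuristic, the bin names, and the threshold values below are all illustrative assumptions; frames are assumed to be H x W x 3 RGB numpy arrays.

```python
import numpy as np

def green_ratio(frame: np.ndarray) -> float:
    """Fraction of pixels where the green channel dominates red and blue.

    A crude stand-in for the image-processing checks used during binning;
    `frame` is an H x W x 3 RGB uint8 array.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    greenish = (g > r + 10) & (g > b + 10)
    return float(greenish.mean())

def bin_frame(frame: np.ndarray,
              sparse_thr: float = 0.05,
              dense_thr: float = 0.30) -> str:
    """Assign a frame to an illustrative bin; thresholds are placeholders."""
    ratio = green_ratio(frame)
    if ratio < sparse_thr:
        return "clear"
    if ratio < dense_thr:
        return "sparse_vegetation"
    return "dense_vegetation"
```

In practice such automatic bins would only be a first pass, with manual segregation correcting frames the heuristic misjudges (e.g., green-tinted lighting or painted equipment).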


The binned images are labelled using our custom LabelMe tool, which has enhanced features compared to traditional versions, such as client-server based support for multiple labelers, improved semantic labeling, support for contour hierarchies and parent-child relationships, and the ability to add custom image-level flags. Images are initially labelled manually, accounting for density thresholds, and the corpus in each relevant bin is then expanded through ‘model-in-the-loop’ automatic labelling. 
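For training a segmentation model, polygon annotations of the kind LabelMe produces must be rasterized into per-class masks. The sketch below shows one way to do this with an even-odd (ray-casting) scanline fill; it is a self-contained illustration, not the conversion code used in the actual tool.

```python
import numpy as np

def polygon_to_mask(points, height, width):
    """Rasterize a LabelMe-style polygon (list of (x, y) vertices) into a
    binary mask using the even-odd rule, evaluated at pixel centres."""
    mask = np.zeros((height, width), dtype=np.uint8)
    n = len(points)
    for y in range(height):
        py = y + 0.5
        xs = []
        for i in range(n):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % n]
            # Does this polygon edge cross the current scanline?
            if (y1 <= py < y2) or (y2 <= py < y1):
                xs.append(x1 + (py - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        # Fill between successive pairs of crossings.
        for j in range(0, len(xs) - 1, 2):
            x_start = max(0, int(np.ceil(xs[j] - 0.5)))
            x_end = min(width, int(np.floor(xs[j + 1] - 0.5)) + 1)
            mask[y, x_start:x_end] = 1
    return mask
```

Per-class masks produced this way can be stacked into the one-hot ground truth consumed by a semantic segmentation loss.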


The pipeline involves several components, ranging from image processors and AI models to pre- and post-processing filters. The key steps are briefly described below: 

  • Detect left and right rail within frame 
  • Track rails across frames 
  • Select ROI encompassing the left and right rail (representing right-of-way) 
  • Perform perspective crop of the ROI 
  • Image processing on crops 
  • Pre-processing filters on crops 
  • Apply AI models to infer on crops 
  • Apply post-processing filters on crops 
  • Remap contours to original frame 
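The perspective-crop step above can be sketched as a homography estimated from four point correspondences: the four corners of the rail ROI map to the corners of a rectangular crop, and the inverse of the same transform supports the final contour-remapping step. The direct linear transform below (with h33 fixed to 1) is a generic illustration, not the pipeline's actual implementation.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping four src (x, y) points to
    four dst points via the direct linear transform, with h33 fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply a homography to one point in homogeneous form and normalize."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Illustrative ROI: a trapezoidal right-of-way region in the frame
# mapped to an 80 x 50 rectangular crop (coordinates are made up).
roi_corners = [(10, 0), (90, 0), (100, 50), (0, 50)]
crop_corners = [(0, 0), (80, 0), (80, 50), (0, 50)]
H = homography_from_points(roi_corners, crop_corners)
```

Remapping inferred contours back to the original frame amounts to applying `np.linalg.inv(H)` to each contour point. A production pipeline would typically delegate this to `cv2.getPerspectiveTransform` and `cv2.warpPerspective`.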


We utilize a CNN for the task of vegetation detection. A model based on the MobileNet-UNet meta-architecture was used to semantically segment contours (rail, ties, vegetation, etc.). The model uses a modified MobileNet backbone, followed by conv-blocks and up-sampling layers to build up the segmentation map. Dataset richness is initially enhanced by applying numerous standard augmentation functions (flipping, translation, contrast, brightness, rotation, etc.). In addition, we use our in-house library of active augmentation functions to superimpose additional negative- and positive-class content on the image canvas. Generalization is improved by training with disparate threshold settings within the ROI crops. The Adam optimizer and categorical cross-entropy loss are used during training, which runs for several hundred epochs on GPU clusters in a server farm. 
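The categorical cross-entropy objective mentioned above can be written out explicitly for the per-pixel, multi-class case. The numpy sketch below assumes the network ends in a softmax head producing an H x W x C probability map and that ground truth is supplied as one-hot masks; it is a reference formula, not the training code.

```python
import numpy as np

def categorical_cross_entropy(probs, onehot, eps=1e-9):
    """Per-pixel categorical cross-entropy, averaged over all pixels.

    probs:  H x W x C predicted class probabilities (softmax output).
    onehot: H x W x C one-hot ground-truth masks (rail, ties, vegetation, ...).
    eps guards against log(0) for confident wrong predictions.
    """
    per_pixel = -(onehot * np.log(probs + eps)).sum(axis=-1)
    return float(per_pixel.mean())
```

During training, this scalar is what the Adam optimizer minimizes with respect to the network weights.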

Sample results from the inference pipeline are shown in Figure 1 for an example of a rural Indian rail track with a specific right-of-way definition. The software pipeline can easily be configured for right-of-way expansion or shrinkage, encroachment levels, and other thresholds defined by the network operator's SOP (standard operating procedure) manuals. 
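A threshold check of this kind reduces to comparing vegetation density inside the configured right-of-way against an operator-defined limit. The sketch below assumes binary vegetation and ROI masks from the segmentation stage; the default limit is an illustrative placeholder, since actual limits come from the operator's SOP manuals.

```python
import numpy as np

def encroachment_report(veg_mask, roi_mask, density_limit=0.15):
    """Compare vegetation density inside the right-of-way ROI against a limit.

    veg_mask, roi_mask: binary H x W arrays (1 = vegetation / inside ROI).
    density_limit: illustrative stand-in for an SOP-defined threshold.
    """
    roi_pixels = int(roi_mask.sum())
    if roi_pixels == 0:
        return {"density": 0.0, "violation": False}
    density = float((veg_mask & roi_mask).sum()) / roi_pixels
    return {"density": density, "violation": density > density_limit}
```

Per-region variants (e.g., separate limits for the ballast between ties versus the flanks of the right-of-way) follow the same pattern with different ROI masks.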

Fig 1: Vegetation detection inside the configured ROI 


The critical requirement of automated surface-level vegetation detection was met using customized detectors and image-processing elements. The software component can be tuned to accommodate specific requirements of rail operators – be it right-of-way distances, encroachment levels, priority areas of encroachment (e.g., on the rail with the larger banking angle), etc. When deployed along with the larger suite of defect detection components that form the TYQ-i library, this component allows comprehensive maintenance coverage of rail tracks. 

