
Solar Panel Defect Detection Using AI Techniques

1. Introduction 

Solar energy is a source of clean energy, naturally harnessing the power of the sun. When solar panels are deployed to generate electricity, greenhouse gases are not emitted into the atmosphere. Since the sun is an effectively infinite source of energy, solar-powered electricity can be considered near-inexhaustible. Solar panels are a great way to offset energy costs, reduce the environmental impact of your home, and provide a host of other benefits, such as supporting local businesses and contributing to energy independence.

For all the benefits that solar panels provide, they are not without their own problem areas. Solar modules are susceptible to various kinds of defect mechanisms: some are observable during the manufacturing process, while others develop over time as the panels are deployed in harsh environments. In addition to defects such as micro-cracks and cross-cracks, solar panels are prone to material deterioration, diode failures and hotspot formation. Fig 1 shows a few example anomaly types. These defects result in either reduced conversion efficiency or outright failure of a panel, wherein it no longer converts sunlight into electricity.

Fig 1: Various types of defects on a solar panel. [Source]

2. Problem Statement 

To guarantee efficient electricity generation, solar farm operators must inspect individual panels at regular intervals. This is a time- and effort-intensive process, since solar farms are often spread across tens of square miles of territory. Historically, these inspections were conducted entirely by manual labour, with human inspectors reaching each panel via rope-and-crane methods. This gave way to drone-based image capture using camera systems; however, analysis of the captured footage was still performed by human inspectors. Because of the vast amount of data the human operator had to process, the results were prone to error, and the timeliness of anomaly detection was often compromised. Recent advances in image processing and compute capability have enabled automatic detection techniques to find and localize defects. Ignitarium’s TYQ-i™ platform performs automated defect detection using sensor data (RGB or thermal camera data in the solar inspection use case) as input to a suite of complex computer-vision (CV) enhanced Deep-Learning (DL) algorithms. In the subsequent sections, we describe the workflow for the AI component of the solar panel anomaly detection software pipeline.

3. Defect detection development flow 

Fig 2: Development workflow 

3.1 Data Collection 

At the PoC stage of the project, a small set of a few hundred images representative of the type of solar panels under consideration was acquired using a drone carrying a thermal camera payload. These images were used to experiment with various pre- and post-processors and AI models. Completing the PoC phase made it possible to fine-tune the data acquisition process in terms of parameters such as input resolution, drone angle-of-attack, drone flight path, and image overlap percentage. After this fine-tuning, the data collection phase proceeded to acquire a much larger data set for comprehensive training.

3.2 Data Annotation 

The training set was labelled semantically to cover all defect classes of interest. Fig 3 shows an example labelled frame capturing panel defects, annotated via our customized LabelMe tool.

Fig 3: Sample hotspot annotation via customized LabelMe tool

At this stage, labels are created and stored in JSON format. In this sample, two labels are shown in the Label List panel on the right. 

  1. solarplate

  2. defect

solarplate: The solarplate (labelled in red) is the parent of the damage node. 

defect: The defect (labelled in green) is the child of the solarplate. 
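Annotations in this JSON format are straightforward to consume in Python. The snippet below is a minimal sketch, assuming a hypothetical LabelMe-style file in which each shape carries a label (`solarplate` or `defect`) and a polygon; the field names and coordinates are illustrative, not the production schema.

```python
import json

# Hypothetical LabelMe-style annotation: each shape has a label and polygon points.
annotation = json.loads("""
{
  "shapes": [
    {"label": "solarplate", "points": [[0, 0], [100, 0], [100, 60], [0, 60]]},
    {"label": "defect",     "points": [[40, 20], [55, 20], [55, 35], [40, 35]]}
  ]
}
""")

def group_by_label(shapes):
    """Collect polygons per label so parent (solarplate) and child (defect)
    regions can be rasterized into separate masks later."""
    groups = {}
    for shape in shapes:
        groups.setdefault(shape["label"], []).append(shape["points"])
    return groups

groups = group_by_label(annotation["shapes"])
print(sorted(groups))  # ['defect', 'solarplate']
```

Grouping by label first keeps the parent-child relationship explicit: every `defect` polygon is interpreted within the extent of its enclosing `solarplate`.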

Annotation is done using tight contours; however, before moving to the training stage, the data must be pre-processed, an important step in the development of Deep Learning models.

3.3 Pre-Processing  

Image pre-processing transforms raw image data into clean image data, since most raw images contain noise and missing or inconsistent information. Missing information means certain attributes of interest are absent; inconsistent information means there are discrepancies within the image. The purpose of pre-processing is to enhance the image, reducing the chance of false feature identification and improving image characteristics vital for downstream processing.
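As an illustration of such operations, the sketch below applies two common steps, min-max intensity normalization and a 3x3 mean blur for noise reduction, to a toy grayscale patch. This is a simplified stand-in for the actual pipeline; the function names and the tiny patch are illustrative only.

```python
def minmax_normalize(img):
    """Rescale pixel intensities to the [0, 1] range."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    return [[(p - lo) / (hi - lo) for p in row] for row in img]

def mean_blur3(img):
    """3x3 mean filter: a simple noise-reduction step (border pixels left as-is)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9.0
    return out

patch = [[10, 10, 10],
         [10, 100, 10],   # a single noisy "hot" pixel
         [10, 10, 10]]
norm = minmax_normalize(patch)
smooth = mean_blur3(patch)
print(norm[1][1])    # 1.0
print(smooth[1][1])  # 20.0 -- the outlier is heavily damped
```

In practice a library such as OpenCV would perform these operations efficiently on full-resolution frames; the point here is only the effect of each step on pixel values.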

Fig 4: Various pre-processing operations 

3.4 Model architecture 

A variant of ResNet-UNet is used as the model architecture. In this architecture, ResNet is the backbone of the model and UNet is the head. Image data is first passed through the backbone (ResNet), after which the extracted features are passed to the head (UNet), which performs the segmentation task.
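The encoder/decoder symmetry of such an architecture can be illustrated with a small shape-tracing sketch (a simplification, not the actual TYQ-i model): each ResNet-style encoder stage halves the spatial resolution, and each UNet decoder stage doubles it back, pairing with the matching encoder feature map via a skip connection. The input resolution and stage count below are illustrative.

```python
def encoder_shapes(h, w, stages=4):
    """Each ResNet-style encoder stage halves the spatial resolution."""
    shapes = [(h, w)]
    for _ in range(stages):
        h, w = h // 2, w // 2
        shapes.append((h, w))
    return shapes

def decoder_shapes(bottom, stages=4):
    """Each UNet decoder stage doubles the resolution back up."""
    h, w = bottom
    shapes = [bottom]
    for _ in range(stages):
        h, w = h * 2, w * 2
        shapes.append((h, w))
    return shapes

enc = encoder_shapes(256, 320)   # hypothetical input resolution
dec = decoder_shapes(enc[-1])
# Skip connections pair encoder and decoder feature maps of equal size:
pairs = list(zip(reversed(enc), dec))
print(enc)                             # [(256, 320), (128, 160), (64, 80), (32, 40), (16, 20)]
print(all(a == b for a, b in pairs))   # True
```

This size matching is what lets the decoder concatenate fine-grained encoder features at every upsampling step, which is important for segmenting small defects such as micro-cracks.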

Fig 5: Model architecture 

3.5 Training 

The model was then trained on the dataset using the selected architecture. Initially, before fine-tuning, true damage detection was not good enough, and there were many false detections.
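Detection quality before and after fine-tuning can be quantified with a standard segmentation metric such as intersection-over-union (IoU). The sketch below computes IoU on toy binary masks; the metric choice is illustrative, as the post does not specify the evaluation criterion used.

```python
def iou(pred, gt):
    """Intersection-over-union of two binary masks (nested lists of 0/1)."""
    inter = union = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            inter += p & g
            union += p | g
    return inter / union if union else 1.0

# Toy ground-truth and predicted defect masks:
gt   = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
pred = [[0, 1, 0],
        [0, 1, 1],
        [0, 1, 0]]
print(round(iou(pred, gt), 3))  # intersection 3, union 5 -> 0.6
```

A rising IoU across fine-tuning iterations, together with a falling false-detection count, is a simple way to track the kind of improvement shown in Figs 6a and 6b.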

Before fine-tuning the model
Fig 6a: Model detections on dataset before fine-tuning the model
After fine-tuning the model