Lightweight Inception Networks for the Recognition and Detection of Rice Plant Diseases
Worldwide, researchers have attempted to identify rice diseases with machine learning techniques such as Support Vector Machines (SVM), Bayesian methods, and Artificial Neural Networks (ANN), based on features extracted from images of the plants. While these methods have shown some success, they rely on manual feature extraction and are limited in accuracy and complexity. More recent work has applied Convolutional Neural Networks (CNN) to recognize rice diseases accurately, but deploying these models on mobile devices remained challenging because of their large size.
In this study, the researchers developed an advanced deep learning model called MobInc-Net, designed to extract high-quality image features efficiently. In the methodology, they employed Depth-Wise Separable Convolution (DSC) to reduce model size and computational complexity. The traditional Inception module, known for its efficiency in extracting multi-scale image features, was modified by replacing some of its standard convolutions with DSC. This modified module, called M-Inception, was combined with MobileNet, a lightweight CNN, to form a new network architecture named MobInc-Net.
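The paper's implementation details are not reproduced here, but the idea of swapping a standard convolution for DSC inside an Inception-style branch can be sketched as follows, assuming a Keras-style API. The function names, branch layout, and filter counts are illustrative assumptions, not the authors' exact M-Inception configuration.

```python
# Minimal sketch: an Inception-style module in which the 3x3 standard
# convolution is replaced by a Depth-Wise Separable Convolution (DSC).
# Layer names and filter counts are assumptions for illustration.
from tensorflow.keras import layers

def dsc_branch(x, filters):
    # 1x1 "bottleneck" followed by a depth-wise separable 3x3 convolution:
    # one spatial filter per channel, then a 1x1 point-wise mix, which
    # cuts parameters and multiply-adds relative to a standard 3x3 conv.
    x = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def m_inception_like_module(x, filters):
    # Parallel branches concatenated along the channel axis, as in the
    # original Inception design.
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b2 = dsc_branch(x, filters)
    b3 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b3 = layers.Conv2D(filters, 1, padding="same", activation="relu")(b3)
    return layers.Concatenate()([b1, b2, b3])
```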
An improved loss function, Focal Loss (FL), was used instead of the traditional Cross-Entropy (CE) loss to enhance the model's ability to learn small lesion features. FL up-weights samples that are difficult to classify, improving the model's recognition of subtle disease symptoms. A two-stage transfer learning approach was used to train the model: in the first stage, only the new layers of the network were trained on the rice disease dataset, while the base layers were kept frozen with weights pre-trained on the large-scale ImageNet dataset; in the second stage, the entire network was fine-tuned starting from the weights obtained in the first stage.
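A minimal sketch of the Focal Loss and the two-stage training schedule, again assuming a Keras-style API, is given below. The backbone here is plain MobileNet for illustration (not the authors' MobInc-Net), and the gamma, alpha, learning-rate, epoch, and class-count values are placeholders rather than the paper's reported hyper-parameters.

```python
# Sketch of Focal Loss and two-stage transfer learning (placeholder values).
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of rice-disease categories

def focal_loss(gamma=2.0, alpha=0.25):
    def loss_fn(y_true, y_pred):
        # p_t: predicted probability assigned to the true class.
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        p_t = tf.reduce_sum(y_true * y_pred, axis=-1)
        # FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t): the (1 - p_t)^gamma
        # factor down-weights easy samples so training concentrates on hard
        # ones, e.g. small lesion features.
        return -alpha * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)
    return loss_fn

base_model = tf.keras.applications.MobileNet(
    include_top=False, pooling="avg", weights="imagenet")
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new head
])

# Stage 1: freeze the ImageNet-pretrained base and train only the new layers.
base_model.trainable = False
model.compile(optimizer="adam", loss=focal_loss(), metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets omitted

# Stage 2: unfreeze the whole network and fine-tune it at a lower learning
# rate, starting from the stage-1 weights.
base_model.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss=focal_loss(), metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets omitted
```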
The effectiveness of the proposed method was evaluated using a deep learning framework under Python 3.6, with some image preprocessing performed in Photoshop. The hardware setup consisted of an Intel Xeon processor, an NVIDIA RTX 2080 Ti GPU, and 64 GB of RAM running a Linux operating system. Two datasets were used: PlantVillage, which contains over 54,000 plant leaf images, and a locally collected set of approximately 1,000 rice disease images. The PlantVillage images were captured under controlled conditions with uniform lighting and simple backgrounds, whereas the rice disease images were gathered from real agricultural fields and online sources and feature more complex backgrounds.
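As an illustration of how such field-collected images might be loaded and augmented, the following sketch uses a Keras-style input pipeline. The directory path, image size, batch size, and the specific augmentation operations are assumptions for the sketch; the paper's exact preprocessing steps are not reproduced here.

```python
# Illustrative input pipeline for the locally collected rice-disease images.
import tensorflow as tf

IMG_SIZE = (224, 224)  # typical MobileNet input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_disease/train",   # hypothetical path: one subfolder per class
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="categorical",
)

# Simple augmentation to help the model cope with the complex, variable
# backgrounds of field-collected images.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.Rescaling(1.0 / 255),
])

train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```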
On the publicly available PlantVillage dataset, the proposed method outperformed other advanced techniques in both training and validation accuracy, reaching 99.21% on the validation set after 30 epochs of training. Experiments on the local dataset of rice disease images captured under real-world agricultural conditions further demonstrated its superior performance: after training, validation accuracy reached 95.62%, the highest among all algorithms tested.
Overall, the experimental analysis validated the efficacy of the proposed method in accurately detecting and classifying rice crop diseases. The combination of depth-wise separable convolutions, transfer learning, and data augmentation proved effective in enhancing model performance, making the approach a promising tool for aiding farmers in disease management and crop protection.