
  1. (Department of Computer Science & Engineering, Maharishi Markandeshwar Engineering College (Research Scholar), MMICT&BM, Maharishi Markandeshwar (Deemed to be University), Mullana, Ambala, Haryana 133203, India · rathi.ancester@gmail.com)
  2. (Department of Computer Science & Engineering, Maharishi Markandeshwar Engineering College, Maharishi Markandeshwar (Deemed to be University), Mullana, Ambala, Haryana 133203, India · sonaligoyal@mmumullana.org)



Keywords: CNN, Deep learning, Disease detection, Stone fruits, Image classification

1. Introduction

Convolutional neural networks (CNNs) have proven to be an outstanding tool for diagnosing and locating diseases in stone fruits, as in other crops, from images of their leaves. Plant diseases affect the leaves, fruits, stems, and roots, and thereby the overall quality and yield of the crop; as a consequence, there is a global deficit in fruit and vegetable supply. Each year, crop diseases result in a 16% reduction in agricultural yield. The use of CNN architectures for detecting plant diseases, including those affecting stone fruits, has been highlighted in various articles. These articles surveyed the most modern methods for crop identification, feature extraction and fusion, training, data augmentation, and image segmentation for various plant diseases. The constraint of small datasets is cited as a major issue in over 80% of research publications [1, 2]. Rather than focusing on a small number of important classes, a greater variety of stone fruit diseases should be covered [3-5]. Therefore, the goal of this research is to examine how well CNN architectures such as ResNet50 [6], InceptionV3 [7], MobileNetV2 [8], and DenseNet [9] classify stone fruit leaf diseases; the final aim is to find the model with the best accuracy rate for this classification task. The manual method of detecting plant diseases takes longer, is limited to certain areas, and is more prone to human error, so the need for automatic disease detection methods has grown. This paper's primary contributions are the experiments conducted on the collected data using four CNN architectures that employ transfer learning (MobileNetV2, DenseNet201, InceptionV3, and ResNet50), followed by a performance comparison showing which CNN model performs best.

2. Literature Review

This section reviews the relevant research: first, the common diseases that affect stone fruits; next, different architectural configurations of CNN models; and lastly, work by researchers on identifying plant diseases using CNN models.

2.1. Various Leaf Diseases

Stone fruits such as mango, olive, and peach experience a range of diseases during their lives. The following are the most common diseases harming mango, olive, and peach crops:

  • Mango Powdery Mildew: Pale yellow dots on leaves are the first sign. They spread rapidly to form massive blotches that can completely cover the surfaces of the petiole, stem, and leaves.

  • Mango Gall Midge: It especially targets young, sensitive leaves and shoots. Reduced photosynthesis, leaf drop, and leaf deformation brought on by infestations can all lower fruit output.

  • Olive Aculus Olearius: This disease produces dark green damaged patches and rust blots on buds, as well as yellowish-green blotches on the center and upper portions of mature leaves.

  • Olive Peacock Spot: Resembling a bird's eye, it manifests as a circular, yellow or brown spot, 2 to 10 mm in size, found primarily on the upper surface of the leaves, stems, or fruit.

  • Peach Bacterial Spot: The disease’s symptoms include angular purple to purple-brown patches on the foliage, which eventually fall out, giving the leaves a "shot hole" appearance.

  • Peach Rust: The orange, yellow, brown, or red spore masses on the exterior of the plant are the telltale indicator of rust.

2.2. State-of-the-art CNN Architectures

The convolutional neural network is a type of neural organization, inspired by biological neural networks in humans and animals, that excels at many tasks such as computer vision and the identification and classification of patterns. Four primary layers comprise a CNN: the convolutional layer, pooling layer, ReLU, and fully connected layer [10]. The following section briefly discusses various CNN architectures that have been used in agriculture to identify plant diseases. The type of dataset, the quantity and quality of images, the number of layers (whether convolution, pooling, or flattening layers), and the simulation batch size are some of the variables that influence the accuracy and training error of any CNN model [11].

  • MobileNet: Google's MobileNet is an open-source computer vision model developed for training classifiers. It is a lightweight deep neural network that uses depth-wise convolutions, which significantly reduce the number of parameters compared to earlier networks [12]. It combines two concepts, depth-wise convolution and point-wise convolution, which improves its ability to predict images and makes CNNs competitive on mobile platforms. It has 27 convolutional layers (including 13 depth-wise convolutional layers), plus 1 average pool layer, 1 fully connected layer, and 1 softmax layer.

  • DenseNet201: DenseNets are densely connected convolutional networks; the DenseNet-121 model has been reported to beat the NASNet design, ResNet50, and MobileNetV2. Essentially, DenseNet relies on robust inter-layer connections: to keep the system feed-forward, each layer receives extra inputs from all preceding layers and passes its own feature-maps to all subsequent layers. DenseNet-201 is a convolutional neural network with 201 layers [13]. It takes advantage of this condensed network to provide highly parameter-efficient, easily trainable models.

  • InceptionV3: Inception Version 3 is the third iteration in Google's series of deep learning architectures. The 42-layer InceptionV3 architecture takes input images at a resolution of 299×299 pixels and contains a Softmax function in the last layer [14]. It employs a number of improvements, such as label smoothing, factorized 7×7 convolutions, an auxiliary classifier to propagate label information deeper into the network, and batch normalization for the layers of the auxiliary head.

  • ResNet50: To deal with the problem of vanishing gradients in deep neural networks, ResNet50 introduced the notion of residual layers and skip/shortcut connections. Its main building block is the residual layer, which enables the network to learn residual mappings rather than striving to grasp the complete mapping from the input to the anticipated output [15]. Skip connections are incorporated into ResNet50 to improve gradient flow and solve the vanishing gradient issue: by bypassing the intermediary layers, these connections allow gradients to travel straight from the final layers to earlier ones.
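The skip/shortcut connection described above can be sketched in a few lines of Keras (the framework used in this study). This is an illustrative residual block only; the filter counts and input size are arbitrary assumptions, not the exact ResNet50 bottleneck configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Two conv layers plus a shortcut: the block learns a residual F(x),
    and the output is F(x) + x, so gradients can bypass the conv stack."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])   # the skip/shortcut connection
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(56, 56, 64))
model = tf.keras.Model(inputs, residual_block(inputs, 64))
```

Because the shortcut is an identity path, the addition node passes gradients straight back to earlier layers during backpropagation, which is the mechanism that mitigates the vanishing gradient problem.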

2.3. Recent Work Done by Researchers

Although deep learning is mostly used in medical image analysis and precision agriculture, its applications are becoming ever more widespread and include, among other things, attack detection and prevention [16], vehicle detection [17], image monitoring [18], and drone detection and tracking [19]. In this work, we primarily address the latest developments in CNN models for crop disease detection. Deep networks such as the convolutional neural networks in [5] have been trained to identify the presence or absence of diseases in mango leaves; with AlexNet, grape and mango leaves were detected with accuracy rates of 99% and 89%, respectively. The work in [9] aimed to use CNNs to categorize the diseases that affect mango leaves. To improve accuracy on the target dataset of 1500 photos of healthy and diseased mango leaves, DenseNet201, InceptionResNetV2, InceptionV3, ResNet50, ResNet152V2, and Xception were used; after analyzing the overall performance metrics, the findings revealed that DenseNet201 performed better than the competing models, achieving the greatest accuracy of 98.00%. [1] presented a preprocessing-based image technique for deep learning disease detection in mango leaves, compared against AlexNet, MobileNetV2, and InceptionV3. Transfer learning from additional datasets with comparable characteristics was used to train a deep residual neural network, which offers benefits during the learning process; the accuracy of 88.46% attained by the suggested method was superior to that of other pre-trained models. In [3], a deep convolutional neural network was proposed to classify Aculus olearius disease and olive peacock spot; transfer learning was applied on the VGG19 and VGG16 architectures, and the results demonstrated that both diseases can be detected with minimal error rates.
In [4], a deep learning-based model attained a 98.75% accuracy rate on an unseen test dataset that was used for neither training nor validation. The suggested model, with a weight size of 235 M, is suitable for intelligent peach agriculture; it was tested using 240 photographs of bacterial-spot and healthy peach leaves, a mix of field and lab photos. [20] investigated a CNN model that uses machine learning methods to identify flaws in mangos; the authors developed a computer vision-based model for mango fault identification using a CNN, and the findings showed an accuracy of 89.5%. In [21], 380 photos were chosen from the healthy and diseased categories, and a convolutional neural network with crossover-based Levy flight distribution was employed for enhanced feature selection; additionally, the pre-trained MobileNetV2 model was employed during the learning phase, and support vector machines were used to classify the diseases. [22] used a spatial embedding network for crop leaves with the help of instance segmentation, and [23] used SenseNet for image segmentation.

3. Methodology

3.1. Dataset

The dataset examined in this research came from three separate repositories: the CNN_olive_dataset from GitHub (sinanuguz), the MangoLeafBD dataset from Mendeley Data, and PlantVillage from Kaggle. In addition, some images were gathered from the internet to ensure that the data was nearly balanced. There are nine classes from the three stone fruits (mango, peach, and olive): two diseased classes and one healthy class per fruit. Each class, healthy or diseased, has 700 images, except peach rust, which has 500. A 70:30 train/test split is used; as a result, out of a total of 36,600 images (after augmentation), we utilized 25,620 images for the training set and 10,980 for the validation set.
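The reported counts are internally consistent; a quick sanity check of the arithmetic (the sixfold expansion is an inference from the six augmentation options used in this paper, not a figure the text states explicitly):

```python
# Sanity check of the dataset arithmetic reported above.
raw_images = 8 * 700 + 500            # eight 700-image classes + peach rust (500)
augmented_total = raw_images * 6      # assumed sixfold augmentation -> 36,600
train = augmented_total * 70 // 100   # 70:30 split (integer arithmetic)
val = augmented_total - train
print(raw_images, augmented_total, train, val)  # 6100 36600 25620 10980
```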

3.2. Procedure

First, images of plant leaves are taken. The color, shape, and pattern of the leaf are extracted automatically in deep learning, and on the basis of these attributes, deep learning algorithms are then used for categorization. The entire process is described below.

3.2.1 Raw data

RGB images are selected as input. Images that did not clearly depict disease symptoms were excluded from the collection. Images of stone fruit leaves with a variety of bacterial, fungal, and viral infections are displayed in Fig. 1.

Fig. 1. Sample images of stone fruits leaves.
../../Resources/ieie/IEIESPC.2026.15.2.227/fig1.png

3.2.2 Image augmentation

The objective of augmentation is to raise the dataset's variance while ensuring that the newly added data makes a significant contribution [24]. The Keras DL framework is used for image augmentation. The following six augmentation options are employed in training: rotation, which arbitrarily rotates a picture to different perspectives; brightness, in which the model is fed images of varying brightness during training, helping it adapt to changes in lighting; shear, which turns the shearing angle of the image clockwise or anticlockwise; zoom, in which the provided image is presented at multiple scales; and horizontal and vertical flips, in which the image's axes are freely reversed.
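The six options above map directly onto the Keras `ImageDataGenerator` API. The parameter values below are illustrative assumptions, since the paper does not report the exact ranges used:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# The six augmentation options described above, with assumed (not reported) ranges.
datagen = ImageDataGenerator(
    rotation_range=30,            # rotation: arbitrary perspectives
    brightness_range=(0.7, 1.3),  # brightness: varying lighting conditions
    shear_range=0.2,              # shear: clockwise/anticlockwise shearing angle
    zoom_range=0.2,               # zoom: multiple scales
    horizontal_flip=True,         # horizontal flip
    vertical_flip=True,           # vertical flip
)
# datagen.flow_from_directory(...) would then feed augmented batches to model.fit().
```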

3.2.3 Model training and classification

Based on the provided dataset, four pre-trained models (MobileNetV2, DenseNet201, InceptionV3, and ResNet50) were trained, and their classification accuracy was then evaluated. Fig. 2 describes the experimentation process. All of the models underwent 40 epochs of training because, beyond that point, there was no discernible improvement in validation loss. For DenseNet201, the fine-tuning time is about 38 s/epoch, while for ResNet50 and InceptionV3 it is 17 s/epoch, and for MobileNetV2, 14 s/epoch. Table 1 lists the hardware and software requirements for training the CNN models and displays the experimental parameter values used in this analysis.

Fig. 2. Procedure of experiments.
../../Resources/ieie/IEIESPC.2026.15.2.227/fig2.png
Table 1. Hardware/Software requirements and Parameters used in study.
Software & Hardware Requirements
Configuration Item Value
CPU AMD Ryzen 5 5625U with Radeon Graphics 2.30 GHz
GPU T4
Operating System Windows 11 (x64-based processor)
RAM 12.67 GB
Disk 78.19 GB
Development Environment TensorFlow with Keras (On Google Colab)
Programming Language Python 3
Setup used in Experiment
Parameters Value
Learning Rate 0.0001
Dropout 0.3
Optimizer Adam
Batch Size 32
Momentum 0.9
Epochs 40
Activation function ReLU
Loss function Categorical cross-entropy

4. Experimental Setup

This section presents the experimental findings based on transfer learning of the pre-trained individual networks. The obtained results are expected to address the following experimental questions.

  • Which CNN model is more accurate in identifying leaf diseases in stone fruits?

  • What makes transfer learning necessary?

To answer the above questions, the following performance metrics are taken into account:

4.1. Accuracy

Accuracy is one parameter to consider when assessing classification models: it is the proportion of correct predictions that our model produces. The accuracy formula is provided below (1):

(1)
$Accuracy = \frac{TP+TN}{TP+TN+FP+FN}.$

4.2. Precision

Precision shows the percentage of predicted positive instances that are actually positive. When a False Positive is more concerning than a False Negative, precision is a valuable statistic. The precision formula is provided below (2):

(2)
$Precision = \frac{TP}{TP+FP}.$

4.3. Recall

Recall is the percentage of actual positive instances that our model correctly predicted. When a False Negative is more concerning than a False Positive, recall is a valuable statistic. The recall formula is provided below (3):

(3)
$Recall = \frac{TP}{TP+FN}.$

4.4. F1-score

The F1-score, which is a harmonic mean of the precision and recall values, offers a unified comprehension of both of these concepts. It is maximal when Precision and Recall are equal.

(4)
$F1\text{-}Score = 2 \times \frac{Precision \times Recall}{Precision + Recall}.$
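Eqs. (1)-(4) can be verified on an illustrative confusion-matrix count. The TP/TN/FP/FN values below are made up for demonstration and are not taken from the experiments:

```python
# Worked example of Eqs. (1)-(4) on invented confusion-matrix counts.
TP, TN, FP, FN = 90, 85, 10, 15

accuracy  = (TP + TN) / (TP + TN + FP + FN)          # Eq. (1)
precision = TP / (TP + FP)                           # Eq. (2)
recall    = TP / (TP + FN)                           # Eq. (3)
f1        = 2 * precision * recall / (precision + recall)  # Eq. (4)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
```

Note that the F1-score (0.878 here) sits between precision (0.9) and recall (0.857) and equals both only when they coincide, as stated above.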

4.5. Training and Validation Loss

Training loss indicates how well a model fits the training data: it quantifies the difference between the model's predicted output and the actual target values in the training set. Validation loss measures the discrepancy between the predicted and actual outputs on a validation dataset.

4.6. Fine-tuning of Pre-trained Models

This approach is based on the finding that the earlier layers record more generic properties, such as edges and textures, which are relevant across various tasks, while the later layers capture more task-specific features. We therefore froze the first few layers of the pre-trained ResNet50, MobileNetV2, DenseNet201, and InceptionV3 models, which worked well for extracting broad characteristics from the input data, and then retrained the last fully connected layers to adapt the models to stone fruit leaf disease detection. The scheme for each model is as follows: for ResNet50, the first 140 layers up to 'conv4_block6' are frozen and the final 50 layers starting at 'conv5_block1' are retrained; for MobileNetV2, the first 155 layers up to 'block_13_expand' are frozen and the final 27 layers starting with 'block_14_project' are retrained; for DenseNet201, the first 706 layers up to 'conv5_block32_concat' are frozen and the final 12 layers starting with 'bn' are retrained; and for InceptionV3, the first 249 layers up to 'mixed7' are frozen and the final 22 layers starting with 'mixed8' are retrained.
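A minimal Keras sketch of the ResNet50 scheme above: freeze the first 140 layers and attach a new nine-class head. The dropout rate, optimizer, learning rate, and loss follow Table 1; the 224×224 input size is an assumption, and `weights=None` is used here for brevity, whereas the experiments start from pre-trained (ImageNet) weights.

```python
import tensorflow as tf

# Base network without its original classification head.
base = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                      input_shape=(224, 224, 3), pooling="avg")

for layer in base.layers[:140]:   # freeze the generic feature extractors
    layer.trainable = False       # layers after index 140 remain trainable

x = tf.keras.layers.Dropout(0.3)(base.output)             # dropout from Table 1
out = tf.keras.layers.Dense(9, activation="softmax")(x)   # nine stone fruit classes
model = tf.keras.Model(base.input, out)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

The same pattern applies to the other three backbones, changing only the freeze index (155 for MobileNetV2, 706 for DenseNet201, 249 for InceptionV3).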

5. Results

5.1. Answer to First Experimental Question “Which CNN Model Provides Better Accuracy in Detecting Stone Fruits Leaf Diseases?”

The performance of the four CNN models using transfer learning is shown in this section. Table 2 displays the models' class-wise classification performance. The results in Table 2 show that ResNet50 consistently achieves the highest accuracy, precision, recall, and F1-score across most classes, especially for diseases of mango and peach. For example, ResNet50 outperformed the other models with a noteworthy accuracy of 96% for classifying healthy mango leaves and 96.2% for bacterial spot identification in peach. ResNet50's superior metrics across a variety of classes demonstrate its efficacy in precisely diagnosing stone fruit crop diseases, even though all models performed well. Fig. 3 displays the training and validation accuracy and loss of the ResNet50 model versus epochs (y-axis: accuracy and loss percentages; x-axis: number of epochs), and also shows the recognition accuracy, precision, recall, and F1-score of each individual model. Validation loss did not decrease below 0.27053 after the 40th epoch; at this loss, training accuracy was 96.4% and validation accuracy was 93.11%. Considering each model's precision values on the test dataset, the ResNet50 and InceptionV3 models perform best.

Table 2. Class-wise performance of pre-trained CNN models.
Class wise performance of CNN models
Model Class Accuracy Precision Recall F1-Score
MobileNetV2 Mango_powdery_mildew 94.6% 94.0% 92.2% 93.2%
Mango_gall_midge 91.6% 93.0% 95.2% 94.3%
Mango_healthy 94% 95.3% 92.2% 92%
Olive_aculus 93.8% 91% 92.2% 92.2%
Olive_peacock 92% 92.8% 92.7% 91.7%
Olive_healthy 93% 94.2% 91.4% 92.2%
Peach_bacterial_spot 96.9% 96% 96.1% 96.2%
Peach_rust 90.3% 87.2% 89.3% 89.2%
Peach_healthy 92.3% 92.3% 92.4% 92.4%
DenseNet201 Mango_powdery_mildew 91.6% 92.6% 90.2% 89.4%
Mango_gall_midge 91.8% 92.0% 93.7% 90.2%
Mango_healthy 89% 88.2% 87.6% 89.7%
Olive_aculus 92.4% 87.1% 87.4% 88.0%
Olive_peacock 90.3% 90.8% 90% 90.8%
Olive_healthy 92% 91.6% 89.3% 88.3%
Peach_bacterial_spot 93% 94.6% 94.8% 93.9%
Peach_rust 86.3% 87.5% 88.4% 89.0%
Peach_healthy 91.2% 90.8% 91.9% 90.7%
InceptionV3 Mango_powdery_mildew 93.5% 94.0% 91.3% 90.2%
Mango_gall_midge 93.6% 93.0% 94.2% 92.2%
Mango_healthy 91% 89% 88.2% 89.2%
Olive_aculus 90.3% 92.3% 91.2% 94.0%
Olive_peacock 90.2% 92.8% 91% 90.3%
Olive_healthy 93% 90.6% 90.4% 89.2%
Peach_bacterial_spot 94% 95.5% 94.4% 95.3%
Peach_rust 83.2% 86.5% 89.3% 89%
Peach_healthy 90.2% 93.8% 92.9% 93.7%
ResNet50 Mango_powdery_mildew 95.6% 95.0% 94.2% 94.2%
Mango_gall_midge 93.6% 93.0% 94.2% 93.2%
Mango_healthy 96% 95% 95% 95%
Olive_aculus 94.3% 95% 94.6% 94.0%
Olive_peacock 93% 93.8% 92% 92.1%
Olive_healthy 92% 93% 92.3% 92.1%
Peach_bacterial_spot 96.2% 96.6% 96% 96%
Peach_rust 89.2% 88% 87.2% 87%
Peach_healthy 94.2% 93% 94.2% 94%
Fig. 3. (a) Accuracy vs epochs. (b) Loss vs epochs. (c) Performance metrics graphs of CNN models.
../../Resources/ieie/IEIESPC.2026.15.2.227/fig3.png

5.2. Answer to Second Experimental Question “Why Should We Use Transfer Learning?”

The selection of the four pre-trained models (MobileNetV2, DenseNet201, InceptionV3, and ResNet50) was based on their demonstrated capacity to extract reliable features from images, which is especially useful in agricultural applications where datasets may be small or difficult to annotate. The experimental findings show that, using a transfer learning technique, the CNN models were able to attain higher accuracy, precision, recall, and F1-score on smaller datasets. The accuracy, precision, recall, and F1-score results on the test sets produced by the CNN models using transfer learning are displayed in Fig. 3(c). At 93.11%, the ResNet50 model had the highest accuracy, while DenseNet201 had the lowest (89.01%). Since a CNN model continually needs more data to improve performance, it matters that these pre-trained models have previously been trained on millions of images from the ImageNet dataset. With limited data, resources, or time, transfer learning is a potent technique that leverages prior knowledge from massive datasets to speed up and enhance learning on new, related tasks. Due to the lack of labelled data, training a model from scratch might not have achieved the same level of generalization across various crops and diseases as this approach does; training from scratch also raises the issue of overfitting.

6. Discussion

In this study, we used transfer learning to conduct a thorough analysis of the outputs of four CNN models. We applied these models, viz. MobileNetV2, DenseNet201, InceptionV3, and ResNet50, to the nine classes of stone fruits and compared the outcomes as shown in Table 2. The dataset employed contains 36,600 leaf photos of three stone fruits (mango, olive, and peach); following the 70:30 split, we had 25,620 images for training and 10,980 for testing. Among the CNN models, ResNet50 achieved the best classification results for identifying stone fruit leaf diseases; ResNet50's strength is also supported by [25], where it was compared against SVM- and MobileNet-based approaches. This is because the fundamental concept of ResNet50 is the use of shortcut connections: by bypassing intermediary layers, information can move straight from one layer to another. ResNet addresses the vanishing gradient issue that frequently befalls deep neural networks by adding residual blocks, which allows very deep networks to be trained without compromising performance. Our research indicates that, for smaller datasets, transfer learning of deep learning models offers good accuracy; using pre-trained models for training and feature extraction served as the foundation of the transfer learning techniques in this study. Remarkably, the analysis shows that the ResNet50 model outperformed previous state-of-the-art efforts. Table 3 compares the performance of various deep learning models on crop disease detection across different datasets and fruit types. The authors of [5] and [1] focused on mango disease detection with smaller datasets, achieving accuracies of 89% and 88.46%, respectively, using AlexNet and a CNN-based model. Experiments in [21] utilized a larger dataset of olive leaves acquired by unmanned aerial vehicle (UAV), where MobileNet outperformed ResNet50 with accuracies of 94.63% versus 92.86%. The authors of [4] achieved a high accuracy of 98.75% on peach disease detection using a CNN model with the PlantVillage dataset. In contrast, our study employed a hybrid dataset of 36,600 images across nine classes of stone fruits and evaluated several pre-trained models, with ResNet50 emerging as the best at 93.11% accuracy. This demonstrates that larger and more diverse datasets, combined with robust models like ResNet50, significantly enhance performance in crop disease detection.

Table 3. Related literature on stone fruit leaf disease detection using deep learning models.
Reference Object Number of images/classes used Dataset used DL frameworks Accuracy (%)
[5] Mango 1216/03 classes Self-acquired AlexNet 89%
[1] Mango 394/03 classes Dataset collected from An Giang Province (Vietnam) CNN-based model 88.46%
[21] Olive 5400 Acquired by UAV ResNet50, MobileNet 92.86%, 94.63%
[4] Peach 2705/01 class PlantVillage CNN-based model 98.75%
This study Stone fruits (mango, olive, peach) 36600/09 classes GitHub + MangoLeafBD + PlantVillage (hybrid dataset) ResNet50 (best among MobileNetV2, DenseNet201, InceptionV3) 93.11%

7. Conclusion and Future Scope

The experiments in this study are restricted by the use of free resources (Google Colab). Since Google Colab only provides the server for a short period of time, a GPU is not available all the time; without one, it takes about 8 to 9 hours to train a CNN model on a dataset of 30,000-35,000 images for 30-40 epochs. Therefore, the experiment on customization of ResNet50 was not conducted in this study. An additional constraint is that the study relied on publicly accessible secondary data rather than primary data gathered from the field.

Since many authors have addressed the issue of data scarcity, as we indicated in the introduction, it is important to support data-gathering efforts in this area to make it simpler for researchers to conduct experiments. This study shows that ResNet50 beats the other three CNN models (MobileNetV2, DenseNet201, and InceptionV3). Since ResNet50 had the highest accuracy, we can modify the model to improve it further, resulting in an enhanced ResNet50 variant. The experiment concludes that ResNet50 can manage the vanishing gradient problem, can be parameterized to suit resource limits, and is fast compared with MobileNetV2, DenseNet201, and InceptionV3.

Our future goal is to further enhance the accuracy rate by creating a new deep-learning architecture inspired by ResNet50. Additionally, we aim to develop a localization method that applies image segmentation techniques to detect the diseased area on a leaf.

References

[1] Trang K., TonThat L., Thao N. G., Thi N. T., 2019, Mango diseases identification by a deep residual network with contrast enhancement and transfer learning, Proc. of IEEE Conference on Sustainable Utilization and Development in Engineering and Technologies, pp. 138-142.
[2] Pham T. N., Tran L. V., Dao S. V., 2020, Early disease classification of mango leaves using feed-forward neural network and hybrid metaheuristic feature selection, IEEE Access, Vol. 8, pp. 189960-189973.
[3] Uguz S., Uysal N., 2021, Classification of olive leaf diseases using deep convolutional neural networks, Neural Computing and Applications, Vol. 33, No. 9, pp. 4133-4149.
[4] Yadav S., Sengar N., Singh A., Singh A., Dutta M. K., 2021, Identification of disease using deep learning and evaluation of bacteriosis in peach leaf, Ecological Informatics, Vol. 61, 101247.
[5] Rao U. S., Swathi R., Sanjana V., Arpitha L., Chandrasekhar K., Naik P. K., 2021, Deep learning precision farming: grapes and mango leaf disease detection by transfer learning, Global Transitions Proceedings, Vol. 2, No. 2, pp. 535-544.
[6] Gulavnai S., Patil R., 2019, Deep learning for image-based mango leaf disease detection, International Journal of Recent Technology and Engineering, Vol. 8, No. 3S3, pp. 54-56.
[7] Turkoglu M., Hanbay D., 2019, Plant disease and pest detection using deep learning-based features, Turkish Journal of Electrical Engineering and Computer Sciences, Vol. 27, pp. 1636-1651.
[8] Bagga M., Goyal S., 2024, Image-based detection and classification of plant diseases using deep learning: State-of-the-art review, Urban Agriculture & Regional Food Systems, Vol. 9, No. 1, e20053.
[9] Rajbongshi A., Khan T., Pramanik M. M., Tanvir S. M., Siddiquee N. R., 2021, Recognition of mango leaf disease using convolutional neural network models: a transfer learning approach, Indonesian Journal of Electrical Engineering and Computer Science, Vol. 23, No. 3, pp. 1681-1688.
[10] Agarap A. F., 2018, Deep learning using rectified linear units (ReLU), arXiv preprint arXiv:1803.08375.
[11] Balderas L., Lastra M., Benítez J. M., 2023, Optimizing convolutional neural network architecture, arXiv preprint arXiv:2401.01361.
[12] Sinha D., El-Sharkawy M., 2019, Thin MobileNet: An enhanced MobileNet architecture, Proc. of IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference, pp. 280-285.
[13] Jaiswal A., Gianchandani N., Singh D., Kumar V., Kaur M., 2021, Classification of the COVID-19 infected patients using DenseNet201 based on deep transfer learning, Journal of Biomolecular Structure and Dynamics, Vol. 39, No. 15, pp. 5682-5689.
[14] Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojna Z., 2016, Rethinking the inception architecture for computer vision, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826.
[15] He K., Zhang X., Ren S., Sun J., 2016, Deep residual learning for image recognition, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778.
[16] Narayanan K. L., Naresh R., 2024, Detection and prevention of black hole attack using tree hierarchical deep convolutional neural network and enhanced identity-based encryption in vehicular ad hoc network, IEIE Transactions on Smart Processing & Computing, Vol. 13, No. 1, pp. 41-50.
[17] Kim M., 2023, Light-weight deep neural network for small vehicle detection using model-scale YOLOv4, IEIE Transactions on Smart Processing & Computing, Vol. 12, No. 5, pp. 369-378.
[18] Wang S., Cao H., Liu Y., 2023, Application of SIFT algorithm based on the Gabor features in multi-source information image monitoring, IEIE Transactions on Smart Processing & Computing, Vol. 12, No. 2, pp. 112-121.
[19] Allmamun M., 2024, Drone detection and tracking using deep convolutional neural networks from real-time CCTV footage, IEIE Transactions on Smart Processing & Computing, Vol. 13, No. 4, pp. 313-321.
[20] Arivazhagan S., Ligi S. V., 2018, Mango leaf diseases identification using convolutional neural network, International Journal of Pure and Applied Mathematics, Vol. 120, No. 6, pp. 11067-11079.
[21] Ksibi A., Ayadi M., O. S. B., Jamjoom M. M., Ullah Z., 2022, MobiRes-net: a hybrid deep learning model for detecting and classifying olive leaf diseases, Applied Sciences, Vol. 12, No. 20, 10278.
[22] Jung J.-Y., Lee S.-H., Kim J.-O., 2023, Knowledge transfer based spatial embedding network for plant leaf instance segmentation, IEIE Transactions on Smart Processing & Computing, Vol. 12, No. 2, pp. 162-170.
[23] Lodhi B. A., Ullah R., Imran S., Imran M., 2024, SenseNet: Densely connected, fully convolutional network with bottleneck skip connection for image segmentation, IEIE Transactions on Smart Processing & Computing, Vol. 13, No. 4, pp. 328-336.
[24] Sankupellay M., Konovalov D., 2018, Bird call recognition using deep convolutional neural network, ResNet-50, Proc. of the Australian Acoustical Society Conference, pp. 1-8.
[25] Haque I., Alim M., Alam M., Nawshin S., Noori S. R. H., Habib M. T., 2022, Analysis of recognition performance of plant leaf diseases based on machine vision techniques, Journal of Human, Earth, and Future, Vol. 3, No. 1, pp. 129-137.
Manju Bagga
../../Resources/ieie/IEIESPC.2026.15.2.227/au1.png

Manju Bagga received her M.Tech. degree in computer science & engineering from Punjab Technical University in 2014. Her research interests include object detection and image segmentation with deep learning.

Sonali Goyal
../../Resources/ieie/IEIESPC.2026.15.2.227/au2.png

Sonali Goyal received her Ph.D. degree in computer science & engineering from Maharishi Markandeshwar (Deemed to be University) in 2018. Her research interests include internet of things (IoT) and machine learning in smart systems.