Accurate and Efficient Real-Time Crop and Weed Identification Using YOLO-NAS: A Neural Architecture Search-Optimized Object Detection Model
© 2024 by IJETT Journal
Volume-72 Issue-10
Year of Publication : 2024
Author : Sanskruti Patel, Dharmendra Patel, Atul Patel, Jay Nanavati, Unnati Patel, Ankit Faldu, Amisha Shingala
DOI : 10.14445/22315381/IJETT-V72I10P104
How to Cite?
Sanskruti Patel, Dharmendra Patel, Atul Patel, Jay Nanavati, Unnati Patel, Ankit Faldu, Amisha Shingala, "Accurate and Efficient Real-Time Crop and Weed Identification Using YOLO-NAS: A Neural Architecture Search-Optimized Object Detection Model," International Journal of Engineering Trends and Technology, vol. 72, no. 10, pp. 35-45, 2024. Crossref, https://doi.org/10.14445/22315381/IJETT-V72I10P104
Abstract
Accurate identification of crops and weeds is essential for precision farming, since it enables targeted treatments, efficient use of resources, and reduced environmental impact. This research applies YOLO-NAS, an efficient object detection model whose architecture is optimized through Neural Architecture Search (NAS), to crop and weed detection tasks. Through cutting-edge techniques such as selective quantization, mixed-precision training, and knowledge distillation, YOLO-NAS can be deployed seamlessly across a range of computing resources. A large dataset of agricultural images was used to assess the model's performance against evaluation metrics including mean Average Precision (mAP), recall, and precision. The experimental results show that YOLO-NAS outperforms most object detection models, achieving a mAP of 86.11% while balancing crop identification accuracy against minimization of weed misdetection. With its high accuracy, real-time operation, and easy deployment, the proposed model is a viable option for automated crop and weed identification in support of precision agriculture.
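As a minimal sketch of the evaluation metrics named above (not code from the paper), the following illustrates how precision, recall, and average precision (AP) are derived from detections that have already been matched to ground truth at a fixed IoU threshold; mAP is then the mean AP over classes. The data format and function names here are illustrative assumptions.

```python
# Illustrative sketch, not the authors' implementation.
# Each detection is a (confidence, is_true_positive) pair, already matched
# to ground truth at a fixed IoU threshold (e.g. 0.5).

def precision_recall_curve(detections, num_gt):
    """Sort detections by descending confidence and accumulate
    running precision = TP/(TP+FP) and recall = TP/num_gt."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    return precisions, recalls

def average_precision(precisions, recalls):
    """Area under the precision-recall curve, summed over recall
    increments (no interpolation, for simplicity)."""
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap
```

For example, two true positives and one false positive against two ground-truth boxes yield a recall of 1.0 and an AP that reflects where the false positive falls in the confidence ranking.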
Keywords
Computer vision, Crop and weed object detection, Deep learning, YOLO, YOLO-NAS.
References
[1] V.M. Abdul Hakkim et al., “Precision Farming: The Future of Indian Agriculture,” Journal of Applied Biology & Biotechnology, vol. 4, no. 6, pp. 68-72, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[2] António Monteiro, Sérgio Santos, and Pedro Gonçalves, “Precision Agriculture for Crop and Livestock Farming—Brief Review,” Animals, vol. 11, no. 8, pp. 1-18, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Utilizing the Power of Data Analytics in Agriculture, Data Analytics, International Association of Business Analytics Certification, 2024. [Online]. Available: https://iabac.org/blog/utilizing-the-power-of-data-analytics-in-agriculture
[4] Bulbul Bamne et al., “Transfer Learning-Based Object Detection by Using Convolutional Neural Networks,” 2020 International Conference on Electronics and Sustainable Communication Systems, Coimbatore, India, pp. 328-332, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[5] Thomas A Dorfer, YOLO-NAS: How to Achieve the Best Performance on Object Detection Tasks, Medium, 2023. [Online]. Available: https://towardsdatascience.com/yolo-nas-how-to-achieve-the-best-performance-on-object-detection-tasks-6b95347908d4
[6] Ye Mu et al., “A Faster R-CNN-Based Model for the Identification of Weed Seedling,” Agronomy, vol. 12, no. 11, pp. 1-12, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Nafeesa Yousuf Murad et al., “Weed Detection Using Deep Learning: A Systematic Literature Review,” Sensors, vol. 23, no. 7, pp. 1-45, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Muhammad Hammad Saleem, Johan Potgieter, and Khalid Mahmood Arif, “Weed Detection by Faster RCNN Model: An Enhanced Anchor Box Approach,” Agronomy, vol. 12, no. 7, pp. 1-22, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[9] Vi Nguyen Thanh Le, Giang Truong, and Kamal Alameh, “Detecting Weeds from Crops under Complex Field Environments Based on Faster RCNN,” 2020 IEEE Eighth International Conference on Communications and Electronics, Phu Quoc Island, Vietnam, pp. 350-355, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Hongxing Peng et al., “Weed Detection in Paddy Field Using an Improved RetinaNet Network,” Computers and Electronics in Agriculture, vol. 199, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[11] J.M. López Correa et al., “Multi Species Weed Detection with Retinanet One-Step Network in a Maize Field,” Precision Agriculture '21, pp. 79-86, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Sudheer Kumar Nagothu et al., “Weed Detection in Agriculture Crop Using Unmanned Aerial Vehicle and Machine Learning,” Materials Today: Proceedings, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Oluibukun Gbenga Ajayi, John Ashi, and Blessed Guda, “Performance Evaluation of YOLO v5 Model for Automatic Crop and Weed Classification on UAV Images,” Smart Agricultural Technology, vol. 5, pp. 1-17, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[14] Jiqing Chen et al., “Weed Detection in Sesame Fields Using a YOLO Model with an Enhanced Attention Mechanism and Feature Fusion,” Computers and Electronics in Agriculture, vol. 202, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[15] Velpula Sekhara Babu, and Nidumolu Venkatram, “Weed Detection and Localization in Soybean Crops Using YOLOv4 Deep Learning Model,” Traitement du Signal, vol. 41, no. 2, pp. 1019-1025, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[16] Rohini Vaidya, Deci’s YOLO-NAS: Next-Generation Model for Object Detection, 2023. [Online]. Available: https://medium.com/codex/decis-yolo-nas-next-generation-model-for-object-detection-8ccb7f2013a7
[17] YOLO-NAS, Ultralytics, 2023. [Online]. Available: https://docs.ultralytics.com/models/yolo-nas/
[18] Xiangxiang Chu, Liang Li, and Bo Zhang, “Make RepVGG Greater Again: A Quantization-Aware Approach,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 10, pp. 11624-11632, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Juan Terven et al., “A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS,” Machine Learning and Knowledge Extraction, vol. 5, no. 4, pp. 1680-1716, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[20] Shuai Shao et al., “Objects365: A Large-Scale, High-Quality Dataset for Object Detection,” 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), pp. 8429-8438, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[21] YOLO for Object Detection, GUVI. [Online]. Available: https://forum.guvi.in/posts/6991/yolo-for-object-detection
[22] Kaspars Sudars et al., “Dataset of Annotated Food Crops and Weed Images for Robotic Computer Vision Control,” Data in Brief, vol. 31, pp. 1-6, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[23] Touqeer Abbas et al., “Deep Neural Networks for Automatic Flower Species Localization and Recognition,” Computational Intelligence and Neuroscience, vol. 2022, no. 1, pp. 1-9, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[24] Neeraj Kumar et al., “Leafsnap: A Computer Vision System for Automatic Plant Species Identification,” 12th European Conference on Computer Vision – ECCV 2012, Florence, Italy, pp. 502-516, 2012.
[CrossRef] [Google Scholar] [Publisher Link]
[25] Alexis Joly et al., “Interactive Plant Identification Based on Social Image Data,” Ecological Informatics, vol. 23, pp. 22-34, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[26] Deval Shah, Mean Average Precision (mAP) Explained: Everything You Need to Know, V7labs.com, 2022. [Online]. Available: https://www.v7labs.com/blog/mean-average-precision
[27] Pasi Fränti, and Radu Mariescu-Istodor, “Soft Precision and Recall,” Pattern Recognition Letters, vol. 167, pp. 115-121, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[28] Tanay Agrawal, Hyperparameter Optimization in Machine Learning: Make Your Machine Learning and Deep Learning Models More Efficient, Apress Berkeley, CA, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[29] Priyanka Kumari, Parking Space Detection Model Using PyTorch and Super Gradients, Labellerr, 2023. [Online]. Available: https://www.labellerr.com/blog/building-parking-space-detection-system-pytorch-super-gradients/
[30] Shang Jiang et al., “Optimized Loss Functions for Object Detection and Application on Nighttime Vehicle Detection,” Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, vol. 236, no. 7, pp. 1568-1578, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[31] Nhat-Duy Nguyen et al., “An Evaluation of Deep Learning Methods for Small Object Detection,” Journal of Electrical and Computer Engineering, vol. 2020, no. 1, pp. 1-18, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[32] Rafael Padilla et al., “A Comparative Analysis of Object Detection Metrics with a Companion Open-Source Toolkit,” Electronics, vol. 10, no. 3, pp. 1-28, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[33] Tiagrajah V. Janahiraman, and Mohamed Shahrul Mohamed Subuhan, “Traffic Light Detection Using Tensorflow Object Detection Framework,” 2019 IEEE 9th International Conference on System Engineering and Technology, Shah Alam, Malaysia, pp. 108-113, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[34] Mukaram Safaldin, Nizar Zaghden, and Mahmoud Mejdoub, “An Improved YOLOv8 to Detect Moving Objects,” IEEE Access, vol. 12, pp. 59782-59806, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[35] Tausif Diwan, G. Anirudh, and Jitendra V. Tembhurne, “Object Detection Using YOLO: Challenges, Architectural Successors, Datasets and Applications,” Multimedia Tools and Applications, vol. 82, no. 6, pp. 9243-9275, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[36] Rafael Padilla, Sergio L. Netto, and Eduardo A.B. da Silva, “A Survey on Performance Metrics for Object-Detection Algorithms,” 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, pp. 237-242, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[37] Ignazio Gallo et al., “Deep Object Detection of Crop Weeds: Performance of YOLOv7 on a Real Case Dataset from UAV Images,” Remote Sensing, vol. 15, no. 2, pp. 1-17, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[38] Aanis Ahmad et al., “Performance of Deep Learning Models for Classifying and Detecting Common Weeds in Corn and Soybean Production Systems,” Computers and Electronics in Agriculture, vol. 184, pp. 1-30, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[39] Junfeng Gao et al., “Deep Convolutional Neural Networks for Image-Based Convolvulus Sepium Detection in Sugar Beet Fields,” Plant Methods, vol. 16, pp. 1-12, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[40] Abdur Rahman, Yuzhen Lu, and Haifeng Wang, “Performance Evaluation of Deep Learning Object Detectors for Weed Detection for Cotton,” Smart Agricultural Technology, vol. 3, pp. 1-11, 2023.
[CrossRef] [Google Scholar] [Publisher Link]