Evaluating Soybean Root Health Using Residual Neural Network (ReNN)-Based Image Analysis
DOI: https://doi.org/10.54536/ajsts.v4i2.5382

Keywords: Convolutional Neural Network (CoNN), Data Preprocessing System (DPS), Internet of Things (IoT), Neural Network (NN), Residual Neural Network (ReNN)

Abstract
Accurate assessment of soybean root health is crucial for optimal crop growth and production. This study applies Residual Neural Network (ReNN)-based image evaluation to analyze soybean root development. Sensor-based IoT devices deployed in the field collect and integrate data across soybean crop stages. ReNN supports data preprocessing, layer-wise image segmentation, and efficient data processing, enabling accurate assessment of root health, plant vigor, flower fragmentation, and fruit formation. The approach predicts soybean crop yield and informs degradation detection and decision-making for optimal cultivation practices. The dataset, collected from farm fields through sensor-based devices, spans diverse weather and environmental conditions to ensure comprehensive coverage. Key factors considered during preprocessing include weather conditions (temperature, humidity, precipitation); environmental factors (soil type, moisture, sunlight); spatial variability (field location, soil heterogeneity); and temporal variability (growth stage, seasonality). By integrating these factors, ReNN's layer-wise image segmentation and robust data processing address the complexities of soybean root analysis, supporting root health assessment, plant growth monitoring, yield prediction, and optimized cultivation practices.
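The defining feature of a residual network is the skip connection, which adds a block's input back to its learned transformation before the next activation. A minimal sketch of that idea, not the authors' implementation, is shown below; the toy four-value feature vector (temperature, humidity, soil moisture, sunlight) is purely illustrative of the preprocessed sensor inputs described above.

```python
# Hypothetical sketch of one residual block: y = ReLU(F(x) + x),
# where F(x) is a small learned transformation and "+ x" is the
# identity skip connection that defines a ReNN/ResNet block.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two-layer transformation F(x) with an identity skip connection."""
    f = relu(x @ w1) @ w2        # F(x): the learned residual mapping
    return relu(f + x)           # skip connection adds the input back

# Toy preprocessed feature vector for one field sample (illustrative only):
# [temperature, humidity, soil moisture, sunlight], each scaled to [0, 1].
x = np.array([0.6, 0.3, 0.8, 0.5])
rng = np.random.default_rng(0)
w1 = rng.normal(scale=0.1, size=(4, 4))
w2 = rng.normal(scale=0.1, size=(4, 4))
y = residual_block(x, w1, w2)
print(y.shape)  # (4,)
```

Because the skip connection passes the input through unchanged, a block whose weights contribute nothing still outputs (the activation of) its input, which is what lets very deep residual stacks train without degrading.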
Copyright (c) 2025 Vivek Gupta, Jhankar Moolchadani, Harsh Singh Chouhan

This work is licensed under a Creative Commons Attribution 4.0 International License.