Performance Analysis of Deep Neural Network to Noisy Digit Dataset
DOI: https://doi.org/10.25170/jurnalelektro.v18i1.6613

Keywords: Neural Network, Noisy Dataset, System Robustness, Generalization Ability

Abstract
This work investigates the impact of noise on model performance by training a neural network on a digit dataset at varying Signal-to-Noise Ratios (SNR) to assess its resilience and generalization ability. The experimental setup involved training the model on datasets with noise levels ranging from clean images to highly distorted ones (SNR 5%–25%) and analyzing accuracy, mini-batch loss, and training time. Results indicate that while the model achieves high accuracy (96.88%) at a mild noise level (SNR 5%), performance declines significantly at higher noise levels, with accuracy dropping to 78.91% at SNR 25%. The analysis of mini-batch loss and training time shows that noise slows convergence and increases computational cost. The confusion matrix further confirms that while the model distinguishes between classes effectively, noise-induced misclassifications become more frequent as the noise level rises. These findings emphasize the importance of noise reduction techniques and data preprocessing for improving model robustness in real-world applications.
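The following is a minimal sketch of the kind of experiment the abstract describes, not the authors' actual pipeline. It assumes the noise is additive white Gaussian and interprets "SNR x%" as a noise standard deviation equal to x% of the image dynamic range (the paper's exact noise model may differ); it also substitutes scikit-learn's small 8x8 digit dataset and a simple MLP for the paper's dataset and network.

```python
# Minimal sketch: train a digit classifier under increasing additive Gaussian noise
# and report accuracy per noise level. Assumptions: "SNR x%" means noise std equal
# to x% of the image dynamic range; dataset and model are stand-ins.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

def add_gaussian_noise(images, noise_percent, rng):
    """Corrupt images with additive white Gaussian noise whose standard
    deviation is noise_percent of the image dynamic range."""
    dynamic_range = images.max() - images.min()
    sigma = (noise_percent / 100.0) * dynamic_range
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, images.min(), images.max())

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)  # 8x8 handwritten digits, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for noise_percent in (0, 5, 10, 15, 20, 25):  # clean images up to heavy distortion
    Xn_train = add_gaussian_noise(X_train, noise_percent, rng)
    Xn_test = add_gaussian_noise(X_test, noise_percent, rng)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(Xn_train, y_train)
    pred = clf.predict(Xn_test)
    print(f"noise {noise_percent:>2}%: accuracy = {accuracy_score(y_test, pred):.4f}")
    # confusion_matrix(y_test, pred) shows which digit pairs are confused as noise grows
```

Running the loop over the same noise grid as the abstract (0%–25%) makes the reported trend easy to reproduce in spirit: accuracy is highest on clean or mildly corrupted images and degrades as the noise fraction grows.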