International Journal of Engineering Trends and Technology

Research Article | Open Access
Volume 74 | Issue 2 | Year 2026 | Article Id. IJETT-V74I2P114 | DOI : https://doi.org/10.14445/22315381/IJETT-V74I2P114

Research on a Hybrid Security Model for Distributed Neural Networks


Timur V. Jamgharyan

Received: 22 Oct 2025 | Revised: 12 Jan 2026 | Accepted: 20 Jan 2026 | Published: 14 Feb 2026

Citation:

Timur V. Jamgharyan, "Research on a Hybrid Security Model for Distributed Neural Networks," International Journal of Engineering Trends and Technology (IJETT), vol. 74, no. 2, pp. 204-214, 2026. Crossref, https://doi.org/10.14445/22315381/IJETT-V74I2P114

Abstract

This research proposes a hybrid method for protecting distributed Machine Learning (ML) systems that combines neural network interconnection strengthening with model architecture obfuscation. Experimental results demonstrate that the combined approach maintains the accuracy of the global learning task while outperforming the isolated methods (obfuscation-only or strengthening-only). By preserving the consistency of feature transmission among neural networks, the proposed method substantially enhances the robustness of inter-network communication. Both the obfuscation and hybrid strategies markedly reduce the effectiveness of model-level attacks such as model stealing, model inversion, and membership inference, thereby lowering the fidelity score of surrogate models. The overall evaluation indicates that the proposed approach achieves a well-balanced trade-off between accuracy, confidentiality, and attack resistance within the defined system constraints.
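As a purely illustrative aside (not drawn from the paper itself), the "fidelity score" of a surrogate model mentioned above is commonly computed as the prediction-agreement rate between the victim model and the attacker's surrogate on a probe set; `victim_predict` and `surrogate_predict` below are hypothetical stand-ins for any black-box classifiers:

```python
# Illustrative sketch, not from the paper: the "fidelity score" of a
# surrogate (stolen) model, i.e. the fraction of probe inputs on which
# the surrogate's predictions agree with the victim's. The callables
# `victim_predict` and `surrogate_predict` are hypothetical stand-ins
# for any black-box classifiers.

def fidelity_score(victim_predict, surrogate_predict, probe_inputs):
    """Return the victim/surrogate prediction-agreement rate in [0, 1]."""
    matches = sum(
        victim_predict(x) == surrogate_predict(x) for x in probe_inputs
    )
    return matches / len(probe_inputs)

# Toy usage: the surrogate disagrees with the victim on one of five probes.
# A defense that forces such disagreements lowers the attacker's fidelity.
victim = lambda x: int(x > 0.0)
surrogate = lambda x: int(x > 0.5)
probes = [-1.0, -0.2, 0.3, 0.7, 1.5]
print(fidelity_score(victim, surrogate, probes))  # -> 0.8
```

A lower score under a defense indicates that the attacker's extracted model reproduces the victim's decisions less faithfully, which is the effect the obfuscation and hybrid strategies aim for.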

Keywords

Dolev-Yao threat model, Machine Learning Model Security, Model-level attack, Model Obfuscation, Hyperparameters.

References

[1] Rezak Aziz et al., “Exploring Homomorphic Encryption and Differential Privacy Techniques towards Secure Federated Learning Paradigm,” Future Internet, vol. 15, no. 9, pp. 1-25, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[2] Yugeng Liu et al., “ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models,” Proceedings of the 31st USENIX Security Symposium, pp. 4525-4542, 2022.
[Google Scholar] [Publisher Link]

[3] Daryna Oliynyk, Rudolf Mayer, and Andreas Rauber, “I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences,” ACM Computing Surveys, vol. 55, no. 14s, pp. 1-41, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[4] Han Cao et al., “dsLassoCov: A Federated Machine Learning Approach Incorporating Covariate Control,” arXiv Preprint, pp. 1-35, 2024.
[CrossRef] [Google Scholar] [Publisher Link]

[5] Alexander Wood, Kayvan Najarian, and Delaram Kahrobaei, “Homomorphic Encryption for Machine Learning in Medicine and Bioinformatics,” ACM Computing Surveys, vol. 53, no. 4, pp. 1-35, 2020.
[CrossRef] [Google Scholar] [Publisher Link]

[6] Jun Niu et al., “A Survey on Membership Inference Attacks and Defenses in Machine Learning,” Journal of Information and Intelligence, vol. 2, no. 5, pp. 404-454, 2024.
[CrossRef] [Google Scholar] [Publisher Link]

[7] Zecheng He, Tianwei Zhang, and Ruby B. Lee, “Model Inversion Attacks against Collaborative Inference,” Proceedings of the 35th Annual Computer Security Applications Conference (ACSAC), pp. 148-162, 2019.
[CrossRef] [Google Scholar] [Publisher Link]

[8] Soumia Zohra El Mestari, Gabriele Lenzini, and Huseyin Demirci, “Preserving Data Privacy in Machine Learning Systems,” Computers & Security, vol. 137, pp. 1-22, 2024.
[CrossRef] [Google Scholar] [Publisher Link]

[9] Yulian Sun et al., “Obfuscation for Deep Neural Networks against Model Extraction: Attack Taxonomy and Defense Optimization,” Applied Cryptography and Network Security, pp. 391-414, 2025.
[CrossRef] [Google Scholar] [Publisher Link]

[10] Shu Sun et al., “Investigation of Prediction Accuracy, Sensitivity, and Parameter Stability of Large-Scale Propagation Path Loss Models for 5G Wireless Communications,” IEEE Transactions on Vehicular Technology, vol. 65, no. 5, pp. 2843-2860, 2016.
[CrossRef] [Google Scholar] [Publisher Link]

[11] Runze Zhang et al., “FedCVG: A Two-Stage Robust Federated Learning Optimization Algorithm,” Scientific Reports, vol. 15, no. 1, pp. 1-13, 2025.
[Google Scholar] [Publisher Link]

[12] Jingdong Jiang, Yue Zheng, and Chip-Hong Chang, “PUF-Based Edge DNN Model IP Protection with Self-obfuscation and Publicly Verifiable Ownership,” IEEE International Symposium on Circuits and Systems (ISCAS), London, United Kingdom, pp. 1-5, 2025.
[CrossRef] [Google Scholar] [Publisher Link]

[13] Rodrigo Castillo Camargo et al., “DEFENDIFY: Defense Amplified with Transfer Learning for Obfuscated Malware Framework,” Cybersecurity, vol. 8, no. 1, pp. 1-23, 2025.
[CrossRef] [Google Scholar] [Publisher Link]

[14] Gwonsang Ryu, and Daeseon Choi, “A Hybrid Adversarial Training for Deep Learning Model and Denoising Network Resistant to Adversarial Examples,” Applied Intelligence, vol. 53, no. 8, pp. 9174-9187, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[15] Mingyi Zhou et al., “ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-Based Systems,” Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 1005-1017, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[16] Kacem Khaled et al., “Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks,” 2023 International Conference on Machine Learning and Applications (ICMLA), Jacksonville, FL, USA, pp. 45-52, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[17] Kaixiang Zhao et al., “A Survey on Model Extraction Attacks and Defenses for Large Language Models,” Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2, pp. 6227-6236, 2025.
[CrossRef] [Google Scholar] [Publisher Link]

[18] Marco Romanelli, Konstantinos Chatzikokolakis, and Catuscia Palamidessi, “Optimal Obfuscation Mechanisms via Machine Learning,” arXiv Preprint, pp. 1-16, 2019.
[CrossRef] [Google Scholar] [Publisher Link]

[19] Rickard Brännvall et al., “Technical Report for the Forgotten-by-Design Project: Targeted Obfuscation for Machine Learning,” arXiv Preprint, pp. 1-26, 2025.
[CrossRef] [Google Scholar] [Publisher Link]

[20] Jingtao Li et al., “NeurObfuscator: A Full-stack Obfuscation Tool to Mitigate Neural Architecture Stealing,” 2021 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), Tysons Corner, VA, USA, pp. 1-11, 2021.
[CrossRef] [Google Scholar] [Publisher Link]

[21] Mahya Morid Ahmadi et al., “DNN-Alias: Deep Neural Network Protection against Side-Channel Attacks via Layer Balancing,” arXiv Preprint, pp. 1-9, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[22] William Stallings, Cryptography and Network Security: Principles and Practice, Prentice Hall, 2010.
[Publisher Link]

[23] Danny Dolev, and Andrew Chi-Chih Yao, “On the Security of Public Key Protocols,” IEEE Transactions on Information Theory, vol. 29, no. 2, pp. 198-208, 1983.
[CrossRef] [Google Scholar] [Publisher Link]

[24] S. Kullback, and R.A. Leibler, “On Information and Sufficiency,” Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79-86, 1951.
[Google Scholar] [Publisher Link]

[25] Jin-Young Kim, and Sung-Bae Cho, “Obfuscated Malware Detection using Deep Generative Model based on Global/Local Features,” Computers & Security, vol. 112, pp. 1-20, 2022.
[CrossRef] [Google Scholar] [Publisher Link]

[26] Alexander Branitsky, “Detection of Anomalous Network Connections based on the Hybridization of Computational Intelligence Methods,” Ph.D. Thesis, Saint-Petersburg Institute of Informatics and Automation of the Russian Academy of Sciences, Saint-Petersburg, Russian Federation, 2018.
[Publisher Link]

[27] Microsoft Official Website. Windows Operating System Download Page, 2025. [Online]. Available: https://www.microsoft.com/en-us/evalcenter

[28] Mohammed Al-Ambusaidi et al., “RETRACTED ARTICLE: ML-IDS: An Efficient ML-Enabled Intrusion Detection System for Securing IoT Networks and Applications,” Soft Computing, vol. 28, no. 2, pp. 1765-1784, 2024.
[CrossRef] [Google Scholar] [Publisher Link]

[29] The Canadian Institute for Cybersecurity Datasets (CIC-IDS) Website, 2025. [Online]. Available: https://www.unb.ca/cic/datasets/index.html

[30] The Malware Bazaar Website. [Online]. Available: https://bazaar.abuse.ch/

[31] The Malware Database Website. [Online]. Available: http://vxvault.net/ViriList.php

[32] The Malware Download Webpage, 2025. [Online]. Available: https://github.com/vxunderground

[33] Chelsea Finn, Pieter Abbeel, and Sergey Levine, “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks,” Proceedings of the 34th International Conference on Machine Learning (PMLR), vol. 70, pp. 1126-1135, 2017. [Google Scholar] [Publisher Link]

[34] Zechun Liu et al., “ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization,” arXiv Preprint, pp. 1-19, 2025.
[CrossRef] [Google Scholar] [Publisher Link]

[35] Juan Terven et al., “A Comprehensive Survey of Loss Functions and Metrics in Deep Learning,” Artificial Intelligence Review, vol. 58, no. 7, pp. 1-172, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[36] Christos Louizos, Max Welling, and Diederik P. Kingma, “Learning Sparse Neural Networks through L0 Regularization,” arXiv Preprint, pp. 1-13, 2017.
[CrossRef] [Google Scholar] [Publisher Link]

[37] Amey Agrawal et al., “Etalon: Holistic Performance Evaluation Framework for LLM Inference Systems,” arXiv Preprint, pp. 1-12, 2024.
[CrossRef] [Google Scholar] [Publisher Link]

[38] Scott M. Lundberg, and Su-In Lee, “A Unified Approach to Interpreting Model Predictions,” Advances in Neural Information Processing Systems, CA, USA, vol. 30, 2017.
[Google Scholar] [Publisher Link]

[39] Daniel Nichols et al., “Performance-Aligned LLMs for Generating Fast Code,” arXiv Preprint, pp. 1-12, 2024.
[CrossRef] [Google Scholar] [Publisher Link]

[40] Boyang Zhang et al., “Lossless Model Compression via Joint Low-Rank Factorization Optimization,” arXiv Preprint, pp. 1-10, 2024.
[CrossRef] [Google Scholar] [Publisher Link]

[41] The Boofuzz Website, 2025. [Online]. Available: https://boofuzz.readthedocs.io/en/stable/#

[42] The Radamsa Project Webpage, 2025. [Online]. Available: https://gitlab.com/akihe/radamsa

[43] Milin Zhang et al., “Adversarial Attacks to Latent Representations of Distributed Neural Networks in Split Computing,” Computer Networks, vol. 273, 2023.
[CrossRef] [Google Scholar] [Publisher Link]

[44] Yuheng Zhang et al., “The Secret Revealer: Generative Model-Inversion Attacks against Deep Neural Networks,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 253-261, 2020.
[Google Scholar] [Publisher Link]