Identification of Performance Regression Causing Code Modifications Using Memetic Algorithm

© 2025 by IJETT Journal
Volume-73 Issue-1
Year of Publication : 2025
Author : Brindha Subburaj, Uma Maheswari J
DOI : 10.14445/22315381/IJETT-V73I1P131

How to Cite?
Brindha Subburaj, Uma Maheswari J, "Identification of Performance Regression Causing Code Modifications Using Memetic Algorithm," International Journal of Engineering Trends and Technology, vol. 73, no. 1, pp. 357-370, 2025. Crossref, https://doi.org/10.14445/22315381/IJETT-V73I1P131

Abstract
Regression testing is a vital and unavoidable part of software development, performed to ensure that code modifications do not degrade the overall quality of the software. Running performance regression tests after every code modification, however, is costly. It is therefore preferable to identify the code modifications likely to introduce performance regression and run regression tests only for those changes. Detecting such regression-introducing modifications is formulated here as a multi-objective optimization problem. This paper proposes a memetic algorithm named Memetic algorithm using NSGA-II and Local Search (MNSLS), which combines NSGA-II with a controlled elitism technique for global search and a new, improved and controlled local search method. Together, these global and local search techniques strengthen the exploration and exploitation properties of the algorithm and help it find fitter solutions. MNSLS optimizes identification rules that characterize and identify the code modifications threatening software quality, seeking solutions with a better trade-off between the hit rate and dismiss rate objectives. The performance of the proposed algorithm is evaluated on a set of around 8,000 Git project commits. The multi-objective optimization results are compared with those of other evolutionary algorithms using the Hypervolume metric and the Mann-Whitney U test, and the proposed method is further compared with PRICE, another evolutionary regression identification method. The results of this analysis show that the proposed MNSLS-based regression identification method is more efficient than the other methods.
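
The abstract's description of MNSLS implies a standard memetic loop: an NSGA-II-style global search produces offspring, a fraction of which are refined by local search before survivor selection. The Python sketch below illustrates that loop under loud assumptions: the rule encoding (a vector of thresholds in [0, 1]), the synthetic evaluate() surrogate, the variation operators, and all rates are illustrative placeholders rather than the authors' implementation, and survivor selection here keeps only the first non-dominated front instead of performing full non-dominated sorting with crowding distance and controlled elitism.

import random

RULE_LEN = 8       # a rule: vector of metric thresholds in [0, 1] (assumed encoding)
POP_SIZE = 40
GENERATIONS = 50
LS_RATE = 0.2      # fraction of offspring refined by local search (assumed)

def evaluate(rule):
    # Return (hit_rate, dismiss_rate). A real evaluation would apply the rule
    # to labelled commits; this synthetic surrogate keeps the sketch runnable.
    hit = sum(rule) / RULE_LEN
    dismiss = sum((g - 0.5) ** 2 for g in rule) / RULE_LEN
    return hit, dismiss

def dominates(a, b):
    # a dominates b when its hit rate is no worse, its dismiss rate is no worse,
    # and at least one objective is strictly better (maximize hit, minimize dismiss).
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def local_search(rule, steps=10, sigma=0.05):
    # Hill climbing on a single rule: accept a perturbed neighbour only if it
    # dominates the current point (a simple stand-in for the controlled local search).
    best, best_f = rule, evaluate(rule)
    for _ in range(steps):
        cand = [min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in best]
        cand_f = evaluate(cand)
        if dominates(cand_f, best_f):
            best, best_f = cand, cand_f
    return best

def make_offspring(pop):
    # Uniform crossover plus occasional Gaussian mutation.
    children = []
    while len(children) < len(pop):
        p1, p2 = random.sample(pop, 2)
        child = [g1 if random.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
        if random.random() < 0.3:
            i = random.randrange(RULE_LEN)
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
        children.append(child)
    return children

pop = [[random.random() for _ in range(RULE_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    offspring = make_offspring(pop)
    # Memetic step: refine a fraction of the offspring with local search.
    offspring = [local_search(c) if random.random() < LS_RATE else c for c in offspring]
    combined = pop + offspring
    fits = [evaluate(r) for r in combined]
    # Survivor selection: non-dominated rules first, the rest fill remaining slots.
    front = [r for r, f in zip(combined, fits) if not any(dominates(g, f) for g in fits)]
    rest = [r for r in combined if r not in front]
    pop = (front + rest)[:POP_SIZE]

for rule in pop[:5]:
    print(["%.2f" % g for g in rule], "->", evaluate(rule))

In MNSLS itself the local search is described as improved and controlled; the probability and step size above merely stand in for that mechanism.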

Keywords
Performance Regression, Evolutionary Algorithm, Local Search, Memetic Algorithm, MNSLS.
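
For the comparison the abstract describes, the following is a hedged sketch of the two measurements: the 2-D hypervolume of a Pareto front (hit rate maximized, dismiss rate minimized) and a Mann-Whitney U test over per-run hypervolumes via scipy.stats.mannwhitneyu. The reference point and the sample values are illustrative assumptions, not results from the paper.

from scipy.stats import mannwhitneyu

def hypervolume_2d(front, ref=(0.0, 1.0)):
    # Area dominated by the front relative to a reference point, with hit rate
    # (maximize) first and dismiss rate (minimize) second. Sort by hit rate
    # descending and accumulate disjoint rectangles down to the reference.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_dismiss = 0.0, ref[1]
    for hit, dismiss in pts:
        if dismiss < prev_dismiss:
            hv += (hit - ref[0]) * (prev_dismiss - dismiss)
            prev_dismiss = dismiss
    return hv

print(hypervolume_2d([(0.9, 0.5), (0.6, 0.2)]))  # 0.63 for this toy front

# Hypothetical hypervolume samples from 10 independent runs of two algorithms.
hv_mnsls = [0.82, 0.85, 0.81, 0.84, 0.86, 0.83, 0.85, 0.82, 0.84, 0.85]
hv_other = [0.78, 0.80, 0.77, 0.81, 0.79, 0.78, 0.80, 0.79, 0.77, 0.80]
stat, p = mannwhitneyu(hv_mnsls, hv_other, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # a small p-value indicates a significant difference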

References
[1] Jinfu Chen, and Weiyi Shang, “An Exploratory Study of Performance Regression Introducing Code Changes,” 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), Shanghai, China, pp. 341-352, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[2] King Chun Foo et al., “Mining Performance Regression Testing Repositories for Automated Performance Analysis,” Proceedings of the IEEE 10th International Conference on Quality Software, Zhangjiajie, China, pp. 32-41, 2010.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Peng Huang et al., “Performance Regression Testing Target Prioritization via Performance Risk Analysis,” Proceedings of the ACM 36th International Conference on Software Engineering, New York, United States, pp. 60-71, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[4] Michael Pradel, Markus Huggler, and Thomas R. Gross, “Performance Regression Testing of Concurrent Classes,” Proceedings of the International Symposium on Software Testing and Analysis, New York, United States, pp. 13-25, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[5] Shadi Ghaith et al., “Profile-Based, Load-Independent Anomaly Detection and Analysis in Performance Regression Testing of Software Systems,” 2013 17th European Conference on Software Maintenance and Reengineering, Genova, Italy, pp. 379-383, 2013.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Augusto Born De Oliveira et al., “Perphecy: Performance Regression Test Selection Made Simple but Effective,” IEEE International Conference on Software Testing, Verification and Validation (ICST), Tokyo, Japan, pp. 103-113, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Stefan Mühlbauer, Sven Apel, and Norbert Siegmund, “Accurate Modeling of Performance Histories for Evolving Software Systems,” 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), San Diego, CA, USA, pp. 640-652, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Juan Pablo Sandoval Alcocer, Alexandre Bergel, and Marco Tulio Valente, “Prioritizing Versions for Performance Regression Testing: The Pharo Case,” Science of Computer Programming, vol. 191, pp. 1-25, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[9] Mark Harman, “Making the Case for MORTO: Multi Objective Regression Test Optimization,” IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops, Berlin, Germany, pp. 111-114, 2011.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Deema Alshoaibi et al., “Search-Based Detection of Code Changes Introducing Performance Regression,” Swarm and Evolutionary Computation, vol. 73, pp. 1-19, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[11] Mohamed Wiem Mkaouer et al., “High Dimensional Search-Based Software Engineering: Finding Tradeoffs Among 15 Objectives for Automating Software Refactoring Using NSGA-III,” Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, New York, United States, pp. 1263-1270, 2014.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Kalyanmoy Deb et al., “A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Eckart Zitzler, Marco Laumanns, and Lothar Thiele, “SPEA2: Improving the Strength Pareto Evolutionary Algorithm,” ETH Zurich, Computer Engineering and Networks Laboratory, vol. 103, pp. 1-22, 2001.
[CrossRef] [Google Scholar] [Publisher Link]
[14] S. Brindha, and S. Miruna Joe Amali, “A Robust and Adaptive Fuzzy Logic Based Differential Evolution Algorithm using Population Diversity Tuning for Multi-Objective Optimization,” Engineering Applications of Artificial Intelligence, vol. 102, pp. 1-14, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[15] David W. Corne et al., “PESA-II: Region-Based Selection in Evolutionary Multiobjective Optimization,” Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation, San Francisco, CA, United States, pp. 283-290, 2001.
[Google Scholar] [Publisher Link]
[16] Vimal Savsani, and Mohamed A. Tawhid, “Non-Dominated Sorting Moth Flame Optimization (NS-MFO) for Multi-Objective Problems,” Engineering Applications of Artificial Intelligence, vol. 63, pp. 20-32, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[17] Qingfu Zhang, and Hui Li, “MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, 2007.
[CrossRef] [Google Scholar] [Publisher Link]
[18] William E. Hart, Natalio Krasnogor, and James E. Smith, Memetic Evolutionary Algorithms, Recent Advances in Memetic Algorithms, Springer, pp. 3-27, 2005.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Sezin Afsar et al., “Multi-Objective Enhanced Memetic Algorithm for Green Job Shop Scheduling with Uncertain Times,” Swarm and Evolutionary Computation, vol. 68, pp. 1-14, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[20] M. Manikandan, S. Sakthivel, and V. Vivekanandhan, “Efficient Clustering using Memetic Adaptive Hill Climbing Algorithm in WSN,” Intelligent Automation and Soft Computing, vol. 35, pp. 3169-3185, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[21] Tansel Dokeroglu, and Ender Sevinc, “Memetic Teaching-Learning-Based Optimization Algorithms for Large Graph Coloring Problems,” Engineering Applications of Artificial Intelligence, vol. 102, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[22] Ruchika Malhotra, Megha Khanna, and Rajeev R. Raje, “On the Application of Search-Based Techniques for Software Engineering Predictive Modeling: A Systematic Review and Future Directions,” Swarm and Evolutionary Computation, vol. 32, pp. 85-109, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[23] Annibale Panichella et al., “Improving Multi-Objective Test Case Selection by Injecting Diversity in Genetic Algorithms,” IEEE Transactions on Software Engineering, vol. 41, no. 4, pp. 358-383, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[24] Shuai Wang, Shaukat Ali, and Arnaud Gotlieb, “Cost-Effective Test Suite Minimization in Product Lines Using Search Techniques,” Journal of Systems and Software, vol. 103, pp. 370-391, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[25] Wei Zheng et al., “Multi-Objective Optimisation for Regression Testing,” Information Sciences, vol. 334-335, pp. 1-16, 2016.
[CrossRef] [Google Scholar] [Publisher Link]
[26] Shweta Singhal et al., “Multi-Objective Fault-Coverage Based Regression Test Selection and Prioritization Using Enhanced ACO_TCSP,” Mathematics, vol. 11, no. 13, pp. 1-21, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[27] Arun Prakash Agrawal, Ankur Choudhary, and Parma Nand, “An Efficient Regression Test Suite Optimization Approach Using Hybrid Spider Monkey Optimization Algorithm,” International Journal of Swarm Intelligence Research, vol. 12, no. 4, pp. 57-80, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[28] Shweta Singhal et al., “Empirical Evaluation of Tetrad Optimization Methods for Test Case Selection and Prioritization,” Indian Journal of Science and Technology, vol. 16, pp. 1083-1044, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[29] Max Mendelson, “Identifying Performance Regression from The Commit Phase Utilizing Machine Learning Techniques,” Master Thesis, Rochester Institute of Technology, Rochester, USA, 2020.
[Google Scholar] [Publisher Link]
[30] Lizhi Liao et al., “Early Detection of Performance Regressions by Bridging Local Performance Data and Architecture Models,” arXiv, pp. 1-13, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[31] Jinfu Chen, Weiyi Shang, and Emad Shihab, “PerfJIT: Test-Level Just-in-Time Prediction for Performance Regression Introducing Commits,” IEEE Transactions on Software Engineering, vol. 48, no. 5, pp. 1529-1544, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[32] Kalyanmoy Deb, Salient Issues of Multi-Objective Evolutionary Algorithms, Multi-Objective Optimization using Evolutionary Algorithms, Wiley, pp. 414-417, 2001.
[Google Scholar]
[33] John A. Nelder, and Roger Mead, “A Simplex Method for Function Minimization,” The Computer Journal, vol. 7, no. 4, pp. 308-313, 1965.
[CrossRef] [Google Scholar] [Publisher Link]
[34] Eckart Zitzler, and Simon Künzli, “Indicator-Based Selection in Multiobjective Search,” Parallel Problem Solving from Nature - PPSN VIII, pp. 832-842, 2004.
[CrossRef] [Google Scholar] [Publisher Link]
[35] Johannes Bader, and Eckart Zitzler, “HypE: An Algorithm for Fast Hypervolume-Based Many-Objective Optimization,” Evolutionary Computation, vol. 19, no. 1, pp. 45-76, 2011.
[CrossRef] [Google Scholar] [Publisher Link]
[36] Gregory W. Corder, and Dale I. Foreman, Comparing Two Unrelated Samples: The Mann-Whitney U-Test and the Kolmogorov-Smirnov Two-Sample Test, Nonparametric Statistics: A Step-by-Step Approach, Wiley, pp. 71-80, 2014.
[Google Scholar]