Automated Support for Unit Test Generation

Part of the book series: Natural Computing Series (NCS)


Abstract

Unit testing is a stage of testing where the smallest segment of code that can be tested in isolation from the rest of the system—often a class—is tested. Unit tests are typically written as executable code, often in a format provided by a unit testing framework such as pytest for Python. Creating unit tests is a time- and effort-intensive process with many repetitive, manual elements. To illustrate how AI can support unit testing, this chapter introduces the concept of search-based unit test generation. This technique frames the selection of test input as an optimization problem—we seek a set of test cases that meets some measurable goal of a tester—and unleashes powerful metaheuristic search algorithms to identify the best possible test cases within a restricted timeframe. This chapter introduces two algorithms that can generate pytest-formatted unit tests, tuned towards coverage of source code statements. The chapter concludes by discussing more advanced concepts and giving pointers to further reading on how artificial intelligence can support developers and testers when unit testing software.

From “Optimising the Software Development Process with Artificial Intelligence” (Springer, 2023)
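To make the optimisation framing concrete, the sketch below shows one of the simplest possible searches: random sampling of inputs, scored by statement coverage. This is a minimal illustration, not the chapter's method; the bmi_category function is a hypothetical stand-in for the chapter's running example, and measuring coverage via sys.settrace is an illustrative shortcut.

import random
import sys

def bmi_category(weight_kg, height_m):
    # Hypothetical function under test (stand-in for the chapter's example).
    bmi = weight_kg / (height_m ** 2)
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25.0:
        return "normal"
    elif bmi < 30.0:
        return "overweight"
    return "obese"

def executed_lines(func, args):
    # Record the line numbers executed inside func for the given inputs.
    lines = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            lines.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines

def random_search(budget=1000):
    # Keep any randomly sampled input that covers a not-yet-covered statement.
    covered, suite = set(), []
    for _ in range(budget):
        args = (random.uniform(30.0, 200.0), random.uniform(1.0, 2.2))
        newly_covered = executed_lines(bmi_category, args) - covered
        if newly_covered:
            covered |= newly_covered
            suite.append(args)
    return suite

# Emit the selected inputs as a pytest-formatted test suite.
for n, (weight, height) in enumerate(random_search()):
    w, h = round(weight, 1), round(height, 2)
    print(f"def test_bmi_{n}():")
    print(f"    assert bmi_category({w}, {h}) == {bmi_category(w, h)!r}")

Because the assertions simply record the program's current output for each input, tests generated this way act as regression oracles: they detect future changes in behaviour rather than judging whether today's behaviour is correct.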


Notes

  1. For more information, see https://pytest.org.

  2. See https://pypi.org/project/pytest-cov/ for more information.

  3. Editor's note: the source code linked in this chapter could change after the book is published. A snapshot of the source code accompanying this chapter can be found at https://doi.org/10.5281/zenodo.6965479.

  4. This is a relatively simple program compared to what is typically developed and tested in the software industry. However, it allows clear presentation of the core concepts of this chapter. After reading this chapter, you should be able to apply these concepts to a more complex testing reality.

  5. Threshold values can also vary across continents and regions.

  6. An individual whose personal identity and gender correspond with their birth sex.

  7. See https://www.euro.who.int/en/health-topics/disease-prevention/nutrition/a-healthy-lifestyle/body-mass-index-bmi.

  8. See https://www.who.int/tools/growth-reference-data-for-5to19-years/indicators/bmi-for-age.

  9. Tests are considered flaky if their verdict (pass or fail) changes when no code changes are made. In other words, the tests seem to show random behaviour (a small sketch of a flaky test follows these notes).

  10. The full suite can be found at https://github.com/Greg4cr/PythonUnitTestGeneration/blob/main/src/example/test_bmi_calculator_automated_statement.py.

  11. For more information on this calculation, and on normalisation, see the explanations from McMinn, Lukasczyk, and Arcuri [4, 5, 6] (a brief sketch follows these notes).

  12. Although, of course, some values should be applied to catch common “null pointer” faults.

  13. The edit distance between two strings A and B is the minimum number of operations (adding, removing, or replacing a character) required to turn string A into string B (a sketch follows these notes).

  14. An overview of attempts to use machine learning to derive oracles is offered by Fontes and Gay [22].
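The sketches below illustrate three of the concepts mentioned in these notes. They are minimal, hypothetical Python examples for illustration, not the implementations used in the chapter.

A flaky test (note 9) is one whose verdict depends on something other than the code under test. Here the dependency is an explicit random value; in practice it is more often timing, concurrency, or an external service:

import random

def test_flaky_response_time():
    # Simulated response time. The assertion usually passes but
    # occasionally fails, with no change to any code under test.
    response_time_ms = random.gauss(80.0, 15.0)
    assert response_time_ms < 120.0

For note 11, a common branch distance for a predicate such as a == b is the absolute difference of its operands, and Arcuri's normalisation x / (x + 1) [6] maps any distance into [0, 1) so that distances from different branches can be compared and combined:

def branch_distance_eq(a, b):
    # 0 when `a == b` is satisfied; grows as the inputs move further away.
    return abs(a - b)

def normalise(distance):
    # Order-preserving map from [0, infinity) into [0, 1).
    return distance / (distance + 1.0)

For note 13, the edit (Levenshtein) distance can be computed with the standard dynamic-programming recurrence:

def edit_distance(a, b):
    # Minimum number of single-character insertions, deletions, or
    # substitutions needed to turn string `a` into string `b`.
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, start=1):
        current = [i]
        for j, char_b in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,                      # delete char_a
                current[j - 1] + 1,                   # insert char_b
                previous[j - 1] + (char_a != char_b)  # substitute char_a
            ))
        previous = current
    return previous[-1]

assert edit_distance("kitten", "sitting") == 3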

References

  1. Android Developers, Fundamentals of testing (2020). https://developer.android.com/training/testing/fundamentals

  2. Q. Luo, F. Hariri, L. Eloussi, D. Marinov, An empirical analysis of flaky tests, in Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014 (ACM, New York, NY, USA, 2014), pp. 643–653

  3. M. Eck, F. Palomba, M. Castelluccio, A. Bacchelli, Understanding flaky tests: the developer’s perspective, in Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2019), pp. 830–840


  4. P. McMinn, Search-based software test data generation: a survey. Softw. Test., Verif. Reliab. 14, 105–156 (2004)

  5. S. Lukasczyk, F. Kroiß, G. Fraser, Automated unit test generation for Python, in Search-Based Software Engineering, ed. by A. Aleti, A. Panichella (Springer International Publishing, Cham, 2020), pp. 9–24

  6. A. Arcuri, It really does matter how you normalize the branch distance in search-based software testing. Softw. Test., Verif. Reliab. 23(2), 119–147 (2013)

  7. K. Mao, M. Harman, Y. Jia, Sapienz: multi-objective automated testing for Android applications, in Proceedings of the 25th International Symposium on Software Testing and Analysis (ACM, 2016), pp. 94–105

  8. R.M. Hierons, Comparing test sets and criteria in the presence of test hypotheses and fault domains. ACM Trans. Softw. Eng. Methodol. (TOSEM) 11(4), 448 (2002)


  9. S. Poulding, R. Feldt, The automated generation of human-comprehensible XML test sets, in Proceedings of the 1st North American Search Based Software Engineering Symposium (NasBASE) (2015)

  10. D. Silver, A. Huang, C.J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016)

  11. P. McMinn, M. Stevenson, M. Harman, Reducing qualitative human oracle costs associated with automatically generated test data, in Proceedings of the First International Workshop on Software Test Output Validation, STOV ’10 (ACM, New York, NY, USA, 2010), pp. 1–4


  12. A. Alsharif, G.M. Kapfhammer, P. McMinn, What factors make SQL test cases understandable for testers? A human study of automatic test data generation techniques, in International Conference on Software Maintenance and Evolution (ICSME 2019), pp. 437–448

  13. M. Chen, J. Tworek, H. Jun, Q. Yuan, H. Ponde, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al., Evaluating large language models trained on code (2021). arXiv:2107.03374

  14. H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, R. Karri, An empirical cybersecurity evaluation of GitHub Copilot’s code contributions (2021). arXiv:2108.09293

  15. R. Feldt, F. Dobslaw, Towards automated boundary value testing with program derivatives and search, in Search-Based Software Engineering, ed. by S. Nejati, G. Gay (Springer International Publishing, Cham, 2019), pp. 155–163


  16. F. Dobslaw, F.G. de Oliveira Neto, R. Feldt, Boundary value exploration for software analysis, in 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW) (IEEE, 2020), pp. 346–353

  17. H. Almulla, G. Gay, Learning how to search: generating effective test cases through adaptive fitness function selection. CoRR (2021). arXiv:2102.04822

  18. C. Henard, M. Papadakis, M. Harman, Y. Jia, Y.L. Traon, Comparing white-box and black-box test prioritization, in 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE) (IEEE, 2016), pp. 523–534


  19. R. Feldt, S. Poulding, D. Clark, S. Yoo, Test set diameter: quantifying the diversity of sets of test cases, in 2016 IEEE International Conference on Software Testing, Verification and Validation (ICST) (IEEE, 2016), pp. 223–233


  20. B. Miranda, E. Cruciani, R. Verdecchia, A. Bertolino, Fast approaches to scalable similarity-based test case prioritization, in 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE) (IEEE, 2018), pp. 222–232


  21. F.G.D.O. Neto, R. Feldt, L. Erlenhov, J.B.D.S. Nunes, Visualizing test diversity to support test optimisation, in 2018 25th Asia-Pacific Software Engineering Conference (APSEC) (IEEE, 2018), pp. 149–158


  22. A. Fontes, G. Gay, Using machine learning to generate test oracles: a systematic literature review, in Proceedings of the 1st International Workshop on Test Oracles, TORACLE 2021 (Association for Computing Machinery, New York, NY, USA, 2021), pp. 1–10


  23. W.B. Langdon, S. Yoo, M. Harman, Inferring automatic test oracles, in 2017 IEEE/ACM 10th International Workshop on Search-Based Software Testing (SBST) (IEEE, 2017), pp. 5–6


  24. F. Tsimpourlas, G. Rooijackers, A. Rajan, M. Allamanis, Embedding and classifying test execution traces using neural networks. IET Softw. (2021)


  25. B. Marculescu, R. Feldt, R. Torkar, S. Poulding, An initial industrial evaluation of interactive search-based testing for embedded software. Appl. Soft Comput. 29, 26–39 (2015)

  26. S. Huurman, X. Bai, T. Hirtz, Generating API test data using deep reinforcement learning, in Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops (2020), pp. 541–544


  27. R. Feldt, S. Poulding, Finding test data with specific properties via metaheuristic search, in 2013 IEEE 24th International Symposium on Software Reliability Engineering (ISSRE) (IEEE, 2013), pp. 350–359


  28. J. Kim, M. Kwon, S. Yoo, Generating test input with deep reinforcement learning, in 2018 IEEE/ACM 11th International Workshop on Search-Based Software Testing (SBST) (IEEE, 2018), pp. 51–58


  29. Y. Jia, M.B. Cohen, M. Harman, J. Petke, Learning combinatorial interaction test generation strategies using hyperheuristic search, in Proceedings of the 37th International Conference on Software Engineering, vol. 1, ICSE ’15 (IEEE Press, 2015), pp. 540–550


  30. W. He, R. Zhao, Q. Zhu, Integrating evolutionary testing with reinforcement learning for automated test generation of object-oriented software. Chin. J. Electron. 24(1), 38–45 (2015)


  31. C. Budnik, M. Gario, G. Markov, Z. Wang, Guided test case generation through AI enabled output space exploration, in Proceedings of the 13th International Workshop on Automation of Software Test, AST ’18 (Association for Computing Machinery, New York, NY, USA, 2018), pp. 53–56


  32. N. Walkinshaw, G. Fraser, Uncertainty-driven black-box test data generation, in 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST) (2017), pp. 253–263



Author information

Correspondence to Gregory Gay or Robert Feldt.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Fontes, A., Gay, G., de Oliveira Neto, F.G., Feldt, R. (2023). Automated Support for Unit Test Generation. In: Romero, J.R., Medina-Bulo, I., Chicano, F. (eds) Optimising the Software Development Process with Artificial Intelligence. Natural Computing Series. Springer, Singapore. https://doi.org/10.1007/978-981-19-9948-2_7


  • DOI: https://doi.org/10.1007/978-981-19-9948-2_7


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-9947-5

  • Online ISBN: 978-981-19-9948-2

  • eBook Packages: Computer Science, Computer Science (R0)

