Prompting Patterns for Code Understanding: A Controlled Experiment with Professional Developers


Hasanain Hazim Azeez
Ahmed Alaa Mohsin
Hind Ali Abdul-Hassan

Abstract

Program comprehension occupies a sizable fraction of a developer's working hours, yet most developers still struggle to understand poorly documented or unfamiliar code. Natural‑language interfaces for exploring source code, built on recent advances in large language models (LLMs), are now available, but the quality of the resulting conversation depends heavily on how questions are structured. In this paper, we investigate whether structured prompt patterns benefit developers as they seek to understand code through LLM interaction. In a controlled experiment, we recruited thirty professional developers who attempted to understand Python functions they had never seen before. In a within‑subjects design, participants first answered comprehension questions without assistance and then queried an LLM using either an unstructured natural‑language query (the baseline) or one of two structured prompt patterns. Comprehension was assessed by accuracy, completion time, and subjective confidence. The results show that structured patterns significantly improved comprehension accuracy and reduced completion time relative to the baseline. We distill a set of guidelines for effective prompt design and indicate how prompt patterns can complement existing documentation and code‑understanding tools. The findings highlight the potential of prompt engineering to augment developers' comprehension while cautioning against over‑reliance on automated suggestions.
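The abstract does not reproduce the two structured prompt patterns evaluated in the experiment. As a minimal sketch of the general idea, assuming a pattern that wraps a comprehension question in explicit role, context, task, and output-format sections (the section names and wording below are illustrative assumptions, not the authors' patterns):

```python
# Hypothetical illustration: the paper's actual prompt patterns are not shown
# on this page, so this template is an assumed example of a structured pattern.

def structured_prompt(code: str, question: str) -> str:
    """Wrap a comprehension question in explicit role, context,
    task, and output-format sections before sending it to an LLM."""
    return (
        "Role: You are a senior Python reviewer.\n"
        "Context: the code under study is:\n"
        f"```python\n{code}\n```\n"
        f"Task: {question}\n"
        "Output: answer in at most three sentences."
    )

def baseline_prompt(code: str, question: str) -> str:
    """Unstructured baseline: question and code simply concatenated."""
    return f"{question}\n{code}"

snippet = "def squares(xs):\n    return [x * x for x in xs if x % 2 == 0]"
print(structured_prompt(snippet, "What does squares return for [1, 2, 3, 4]?"))
```

The contrast with `baseline_prompt` mirrors the experiment's comparison: the same question and code, with or without an explicit structure guiding the model's response.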


Article Details

How to Cite
Azeez, H. H., Mohsin, A. A., & Abdul-Hassan, H. A. (2026). Prompting Patterns for Code Understanding: A Controlled Experiment with Professional Developers. Technium: Romanian Journal of Applied Sciences and Technology, 31, 186–195. https://doi.org/10.47577/technium.v31i.13549
Section
Articles

