Abstract
The rapid growth of IoT adoption brings efficiency and convenience to many areas, including daily life, business, and information technology. However, large-scale IoT systems carry significant security risks, so vulnerability detection plays a central role in IoT deployment. Traditional approaches such as routine security audits are expensive, and lower-cost alternatives are therefore needed for IoT vulnerability detection. Large language models (LLMs) offer exceptional natural language processing capabilities, while static code analysis provides a low-cost avenue for software analysis. This paper combines LLMs with static code analysis through prompt engineering, which both broadens the application of LLMs and opens a path toward cost-effective detection of vulnerabilities in IoT software.
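The abstract describes the approach only at a high level. As an illustration of how a static scan and an LLM prompt could be combined, the following minimal Python sketch pairs a naive pattern-based check of C source with a prompt-engineered LLM query. All names here (the regex-based scan, the prompt wording, the `triage` helper, the model name, and the OpenAI client usage) are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: combine a lightweight static scan with an LLM prompt
# for vulnerability triage of IoT firmware code. Illustrative only.
import re
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

# Naive static check: flag calls to C functions commonly linked to
# buffer-overflow vulnerabilities.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def static_scan(source: str) -> list[str]:
    """Return the risky call names found by the regex-based scan."""
    return [m.group(1) for m in RISKY_CALLS.finditer(source)]

def build_prompt(source: str, findings: list[str]) -> str:
    """Prompt engineering step: give the LLM the code plus the
    static-analysis findings and ask for a structured judgement."""
    return (
        "You are a security auditor for IoT firmware.\n"
        f"A static scan flagged these calls: {', '.join(findings) or 'none'}.\n"
        "Review the code below, confirm or reject each finding, explain the "
        "risk step by step, and suggest a safer replacement.\n\n"
        f"```c\n{source}\n```"
    )

def triage(source: str, model: str = "gpt-4") -> str:
    """Run the scan, build the prompt, and query the LLM."""
    findings = static_scan(source)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(source, findings)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    sample = 'void read_cfg(char *in){ char buf[16]; strcpy(buf, in); }'
    print(triage(sample))
```

In this sketch the static scan supplies cheap, deterministic hints, while the LLM contributes the reasoning and remediation advice; the cost advantage claimed in the abstract would come from replacing manual audit effort with this kind of automated pipeline.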
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Yang, Y. (2023). IoT Software Vulnerability Detection Techniques through Large Language Model. In: Li, Y., Tahar, S. (eds) Formal Methods and Software Engineering. ICFEM 2023. Lecture Notes in Computer Science, vol 14308. Springer, Singapore. https://doi.org/10.1007/978-981-99-7584-6_21
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-7583-9
Online ISBN: 978-981-99-7584-6