• Sep 11, 2025 News! A full waiver of Article Processing Charges (APC) for articles accepted until December 31, 2026.
  • Jan 07, 2026 News! JAAI has opened its online OJS submission system; please submit your paper through it.
  • Dec 12, 2025 News! JAAI Volume 3, Number 4 is available now.
General Information
    • Abbreviated Title: J. Adv. Artif. Intell.
    • E-ISSN: 2972-4503
    • Frequency: Quarterly
    • DOI: 10.18178/JAAI
    • Editor-in-Chief: Prof. Dr.-Ing. Hao Luo
    • Managing Editor: Ms. Jennifer X. Zeng
    • E-mail: editor@jaai.net
Editor-in-chief
Prof. Dr.-Ing. Hao Luo
Harbin Institute of Technology, Harbin, China
 
It is my honor to serve as Editor-in-Chief of JAAI. The journal publishes high-quality papers in the field of artificial intelligence, and I hope JAAI will become a recognized venue among readers in this field.


 
JAAI 2026 Vol.4(1):1-10
DOI: 10.18178/JAAI.2026.4.1.1-10

Empirical Analysis of Prompt Engineering Strategies for Smart Contract Vulnerability Detection: A Multi‐model Comparison

Aditya Shankar
Computer Science and Engineering Dept., Indian Institute of Information Technology, Design and Manufacturing, Kurnool, Andhra Pradesh, India.
E-mail: aditya.10393@gmail.com

Manuscript submitted November 16, 2025; accepted December 2, 2025; published January 27, 2026.


Abstract—Smart contract vulnerabilities have resulted in billions of dollars in losses across Decentralized Finance (DeFi) ecosystems. While recent work explores fine-tuned Large Language Models (LLMs) for vulnerability detection, little research systematically examines prompt engineering strategies with pre-trained models. This paper presents the first comprehensive empirical study comparing five code-understanding LLMs (CodeLlama, CodeBERT, InCoder, DeepSeek-Coder, StarCoder) for smart contract security analysis without fine-tuning. Through 15 experimental iterations testing different prompting approaches across 21 vulnerability types using real DeFi exploit patterns, we discover a strong inverse correlation between prompt complexity and detection success: simple prompts (200–400 characters) achieve 100% response reliability while complex structured prompts (1500+ characters) result in complete failure. Our multi-model comparison reveals dramatic architectural differences: CodeBERT and InCoder achieve 92% accuracy but 0% recall (classifying everything as safe), while CodeLlama demonstrates superior detection with 66.67% recall using few-shot learning. DeepSeek-Coder offers optimal balance with 33.33% recall at 6.1 s inference time. These findings establish baseline performance metrics for prompt-based approaches and provide practical deployment guidelines for security practitioners.

Keywords—Smart contracts, vulnerability detection, large language models, prompt engineering, DeFi security
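
As a rough, non-authoritative illustration of the prompting strategies contrasted in the abstract, the sketch below queries a code LLM with a short zero-shot prompt and with a few-shot prompt that prepends one labeled example before the target function. The checkpoint name, prompt wording, and generation settings are assumptions made for illustration (using the Hugging Face transformers library); they are not the paper's exact prompts or inference pipeline.

# Hypothetical sketch: simple zero-shot vs. few-shot prompting of a code LLM
# for Solidity vulnerability detection. Model choice, prompt wording, and
# generation settings are illustrative assumptions, not the paper's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "codellama/CodeLlama-7b-Instruct-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

def ask(prompt: str, max_new_tokens: int = 128) -> str:
    # Tokenize the prompt, generate greedily, and return only the new text.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Target function with a classic reentrancy pattern:
# the external call happens before the balance update.
contract_snippet = """
function withdraw(uint amount) public {
    require(balances[msg.sender] >= amount);
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;
}
"""

# Simple zero-shot prompt, kept short (roughly the 200-400 character range
# the paper associates with reliable responses).
simple_prompt = (
    "Is the following Solidity function vulnerable? "
    "Answer VULNERABLE or SAFE and name the issue.\n" + contract_snippet
)

# Few-shot prompt: one labeled example is prepended before the target function.
few_shot_prompt = (
    "Example:\n"
    "function deposit() public payable { balances[msg.sender] += msg.value; }\n"
    "Answer: SAFE\n\n"
    "Now classify this function. Answer VULNERABLE or SAFE and name the issue.\n"
    + contract_snippet
)

print("zero-shot:", ask(simple_prompt))
print("few-shot:", ask(few_shot_prompt))

Keeping the query short, as in simple_prompt above, mirrors the paper's observation that brief prompts were answered far more reliably than long, heavily structured ones, while the few-shot variant corresponds to the setting in which CodeLlama reached its highest recall.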

Cite: Aditya Shankar, "Empirical Analysis of Prompt Engineering Strategies for Smart Contract Vulnerability Detection: A Multi‐model Comparison," Journal of Advances in Artificial Intelligence, vol. 4, no. 1, pp. 1-10, 2026. doi: 10.18178/JAAI.2026.4.1.1-10

Copyright © 2026 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
