LLM4Sec 2025: Workshop on the Use of Large Language Models for Cybersecurity
Large Language Models (LLMs) are widely used for their exceptional ability to perform natural language processing tasks such as question answering, text completion, and text translation. These capabilities enable their use in many domains, including customer support and interaction, content creation, editing and proofreading, and sentiment analysis. Beyond natural language, LLMs can generate and manipulate sequences of tokens of any kind, acting as containers into which human knowledge can be compressed and from which it can be extracted when needed. As a result, LLMs can be applied to a wide range of problems and are increasingly being incorporated into software frameworks. In particular, their adoption in the field of cyber security is gaining momentum: LLMs have been employed to expose and remediate security flaws, generate secure code and test cases, detect vulnerable or malicious code, and verify the integrity, confidentiality, and reliability of data. Promising results have been reported so far, but research in this area is still in its early stages and has the potential to yield further significant findings.
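As a concrete illustration of one of the applications just mentioned (detecting vulnerable code), the minimal sketch below prompts an off-the-shelf model to review a short code snippet for security flaws. The use of the OpenAI Python SDK and the gpt-4o model name are illustrative assumptions only, not a prescribed approach.

# Minimal sketch: asking an LLM to flag potential vulnerabilities in a code snippet.
# Assumes the OpenAI Python SDK (pip install openai) and an API key available in
# the OPENAI_API_KEY environment variable; the model name is an example choice.
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
def get_user(conn, username):
    cur = conn.cursor()
    cur.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cur.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. Report any vulnerabilities "
                    "in the code you are given, with a short explanation."},
        {"role": "user", "content": SNIPPET},
    ],
)

# The model is expected to point out the SQL injection caused by string concatenation.
print(response.choices[0].message.content)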
This workshop aims to stimulate research on LLM-based solutions for security and privacy. We invite both academic and industrial researchers to submit research papers in the form of original works, discussion papers, or excerpts of published articles.
Topics of interest include, but are not limited to:
Secure code generation
Test case generation
Vulnerable code detection
Malicious code detection
Vulnerable code fixing
Software deobfuscation and repair
Anomaly-based detection
Signature-based detection
Network security
Computer forensics
Spam detection
Phishing detection and prevention
Vulnerability discovery
Malware identification and analysis
Data anonymization/de-anonymization
Big data analytics for security
Data integrity
Data confidentiality
Data reliability
Data traceability
Zero-day attack detection
Automated security policy generation
Predictive analytics
Decision support