Hard Topic Theme: Security Applications of Generative AI

Background

Since 2013, ACSAC has had a hard topic theme that focuses the conference on tackling a hard, cutting-edge cybersecurity problem requiring cooperation from government, industry, and academia. This year, ACSAC especially encourages contributions in the area of Security Applications of Generative AI.

ACSAC welcomes contributions on the Security Applications of Generative AI topic not only as technical papers, but also as panels, workshops, posters, and works-in-progress, as well as other "out-of-the-box" ideas. ACSAC also welcomes specific suggestions for invited speakers and presenters on this topic. Refer to the Call for Submissions for specific instructions for each submission type.

During the conference, a number of orchestrated sessions will include government and industry speakers to frame the hard topic theme, industry and academic speakers to discuss issues and challenges related to the topic, and academic speakers to introduce promising security research. The primary goal of these special sessions will be to foster discussion that can expose opportunities for further collaboration and highlight promising research directions.

Hard Topic Description

This year's hard topic theme solicits research results and technologies that advance our understanding of the applicability of Generative AI and Large Language Models to computer security. These transformative new technologies can have a significant impact on traditional security problems, such as vulnerability detection, fraud detection, reverse engineering, threat intelligence, and many others. At the same time, the complexity of these systems exposes new potential vulnerabilities that are still not well understood by the research community.

The "Security Applications of Generative AI" hard topic is broad and includes, but is not limited to, research that investigates and showcases the application of LLMs and Generative AI to security problems. We are also interested in research that studies the reliability and robustness of these models, especially in the presence of adversaries both at training and inference time. We particularly welcome research that proposes approaches aimed at improving the robustness of these models, improve the explainability of their decisions, and mitigate the effect of adversarial behaviors against them.