Jeju, Korea | Workshop date TBA during August 9-13, 2026
Large language models are increasingly used in search, software engineering, scientific discovery, customer support, education, healthcare, and agentic systems. As these systems are deployed in more consequential settings, concerns about security, privacy, robustness, faithfulness, and safety become substantially more important. Failures such as prompt injection, jailbreaks, privacy leakage, hallucination, insecure tool use, and unreliable long-horizon behavior can create real downstream harm.
The SeT-LLM workshop focuses on the foundations and practice of building secure and trustworthy large language models. We aim to bring together researchers and practitioners from data mining, machine learning, natural language processing, systems, security, human-computer interaction, and applied domains to examine how LLMs can be evaluated, stress-tested, aligned, and deployed more responsibly.
The 2nd SeT-LLM Workshop will provide a venue for discussion of rigorous benchmarks, red teaming, privacy-preserving methods, trustworthy inference, robust agent design, and responsible applications of LLMs in high-stakes environments. Our goal is to advance technically grounded and practically useful approaches to dependable generative AI.
The workshop aims to foster a shared research agenda around secure and trustworthy LLMs. We focus on:
We invite submissions on secure and trustworthy large language models from data mining, machine learning, NLP, security, systems, HCI, and interdisciplinary application domains. The workshop welcomes both technical innovations and deployment-centered insights across the following research areas:
Submissions may present new results, benchmarks, system designs, negative findings, or position papers.
We expect to accept submissions in several formats. Final submission instructions will be posted once the workshop call for papers is finalized.
Formatting requirements:
Archival status and presentation format will be announced once finalized. Questions? Contact us at setllm.workshop@gmail.com.
Program committee members will be listed here once confirmed.
Contact: setllm.workshop@gmail.com
Last updated March 23, 2026