The 2nd SeT-LLM Workshop on Secure and Trustworthy Large Language Models

Workshop @ KDD 2026


Jeju, Korea   |   Workshop date to be announced; KDD 2026 runs August 9-13, 2026


Large language models are increasingly used in search, software engineering, scientific discovery, customer support, education, healthcare, and agentic systems. As these systems are deployed in more consequential settings, concerns about security, privacy, robustness, faithfulness, and safety become substantially more important. Failures such as prompt injection, jailbreaks, privacy leakage, hallucination, insecure tool use, and unreliable long-horizon behavior can create real downstream harm.

The SeT-LLM workshop focuses on the foundations and practice of building secure and trustworthy large language models. We aim to bring together researchers and practitioners from data mining, machine learning, natural language processing, systems, security, human-computer interaction, and applied domains to examine how LLMs can be evaluated, stress-tested, aligned, and deployed more responsibly.

The 2nd SeT-LLM Workshop will provide a venue for discussion on rigorous benchmarks, red teaming, privacy-preserving methods, trustworthy inference, robust agent design, and responsible applications of LLMs in high-stakes environments. Our goal is to advance technically grounded and practically useful approaches to dependable generative AI.

Workshop Goals

The workshop aims to foster a shared research agenda around secure and trustworthy LLMs. We focus on:

  • Understanding failure modes in modern LLM systems, including security, safety, and privacy risks
  • Developing rigorous evaluations for reliability, faithfulness, robustness, and trustworthiness
  • Connecting deployment practice with research across alignment, governance, and high-stakes applications

Call for Papers

We invite submissions on secure and trustworthy large language models from data mining, machine learning, NLP, security, systems, HCI, and interdisciplinary application domains. The workshop welcomes both technical innovations and deployment-centered insights.

Submissions may report new results, benchmarks, system designs, negative findings, or position papers.

Submission Guidelines

We expect to welcome submissions in several formats. Formatting requirements and final submission instructions will be posted once the workshop CFP is finalized.

Important Dates

Important dates, archival status, and presentation format will be announced once finalized. Questions? Contact us at setllm.workshop@gmail.com

Invited Speakers

Speakers to be announced.

Organizers

Jinghui Chen, Penn State
Michael Johnston, Amazon
Jian Kang, MBZUAI
Lu Lin, Penn State
Ting Wang, Stony Brook University
Chaowei Xiao, JHU & Nvidia
Jieyu Zhao, USC

Program Committee

Program committee members will be listed here once confirmed.


Contact: setllm.workshop@gmail.com

Last updated March 23, 2026