I am a tenured faculty member at CISPA Helmholtz Center for Information Security. I also occasionally chime in at iDRAMA Lab for the memes. I am a member of ELLIS.

Research Areas

  • Trustworthy Machine Learning (Privacy, Security, and Safety)
  • Social Network Analysis
  • Misinformation, Hate Speech, and Memes

I’m always looking for motivated students and postdocs to join my group. If you are interested, please send me an email (zhang@cispa.de).

Awards

  • Best paper award honorable mention at CCS 2022
  • Busy Beaver teaching award nomination for the advanced lecture “Machine Learning Privacy” at Saarland University (Summer 2022)
  • Busy Beaver teaching award for the seminar “Privacy of Machine Learning” at Saarland University (Winter 2021)
  • Distinguished reviewer award at TrustML Workshop 2020 (co-located with ICLR 2020)
  • Distinguished paper award at NDSS 2019
  • Best paper award at ARES 2014

What’s New

  • [March 2023] I will join the editorial board of ACM TOPS!
  • [March 2023] We released MLHospital, a Python package for evaluating the security and privacy risks of machine learning models. MLHospital is under active development, and we welcome contributors!
  • [March 2023] I successfully passed my tenure-track evaluation and became a tenured faculty member at CISPA!
  • [February 2023] One paper titled “Can’t Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders” got accepted at CVPR 2023!
  • [February 2023] One paper titled “A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots” got accepted at USENIX Security 2023!
  • [February 2023] I will join the TPC of Oakland 2024!
  • [January 2023] One paper titled “Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?” got accepted at ICLR 2023!
  • [December 2022] One paper titled “Backdoor Attacks Against Dataset Distillation” got accepted at NDSS 2023!
  • [December 2022] Our research on the toxic behavior of chatbots got covered by Chris Stokel-Walker in Fast Company!
  • [December 2022] I will join the TPC of CCS 2023!
  • [November 2022] One paper titled “On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning” got accepted at Oakland 2023!
  • [November 2022] One paper titled “Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network” got accepted at AAAI 2023!
  • [November 2022] Our paper “Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots” received a best paper award honorable mention at CCS 2022!
  • [November 2022] Junjie Chu and Yukun Jiang joined the team to start their PhD!
  • [October 2022] Our advanced lecture “Machine Learning Privacy” (Summer 2022) got nominated for the Busy Beaver teaching award at Saarland University!
  • [September 2022] One paper titled “Amplifying Membership Exposure via Data Poisoning” got accepted at NeurIPS 2022!
  • [September 2022] One paper titled “UnGANable: Defending Against GAN-based Face Manipulation” got accepted at USENIX Security 2023!
  • [September 2022] One paper titled “PrivTrace: Differentially Private Trajectory Synthesis by Adaptive Markov Model” got accepted at USENIX Security 2023!
  • [August 2022] One paper titled “Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots” got accepted at CCS 2022!
  • [August 2022] One paper titled “On the Privacy Risks of Cell-Based NAS Architectures” got accepted at CCS 2022!
  • [August 2022] One paper titled “Membership Inference Attacks by Exploiting Loss Trajectory” got accepted at CCS 2022!
  • [July 2022] One paper titled “Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning” got accepted at ECCV 2022!
  • [May 2022] One talk titled “All Your GNN Models and Data Belong to Me” got accepted at Black Hat USA 2022!