5th Deep Learning and Security Workshop
co-located with the 43rd IEEE Symposium on Security and Privacy
May 26, 2022

Keynotes

A collection of things you can (and can not do) with training data poisoning
Nicholas Carlini, Google Brain

Abstract:
Sometimes life gives you lemons. Other times you're asked if you want to give a keynote talk a few hours before it's scheduled to start. And because you make poor life decisions, you say yes, because why not? Then you frantically start screenshooting figures from whatever recent papers you've written and hope things don't go terribly. So, in this talk I will, with the visual appeal of a 2003 PowerPoint presentation, talk about three things you can do with training data poisoning (1. backdoor contrastive learning, 2. audit DP-SGD, 3. increase privacy vulnerability) and one thing you can't (4. prevent face recognition from working).
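For readers unfamiliar with the attack family the talk surveys, the sketch below is not taken from the keynote; it is a minimal illustration of the basic backdoor-poisoning idea in a plain supervised setting, with hypothetical names and parameters (poison_dataset, a 5% poison rate, a fixed corner trigger).

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=None):
    """Illustrative backdoor poisoning (not the speaker's method).

    Assumes images is a float array of shape (N, H, W, C) with values in
    [0, 1] and labels is an integer array of shape (N,). A small trigger
    patch is stamped onto a random subset of images, which are relabeled
    as target_class; a model trained on the result may learn to predict
    target_class whenever the trigger appears at test time.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:, :] = 1.0   # 3x3 white trigger in the bottom-right corner
    labels[idx] = target_class       # attacker-chosen label
    return images, labels
```

The settings discussed in the talk (backdooring contrastive learning, auditing DP-SGD, amplifying privacy leakage) build on this same "control a small fraction of the training data" premise, but in more involved training pipelines.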

Bio: Nicholas Carlini is a research scientist at Google Brain working at the intersection of machine learning and computer security. His most recent line of work studies the properties of neural networks from an adversarial perspective. He received his Ph.D. from UC Berkeley in 2018, and his B.A. in computer science and mathematics (also from UC Berkeley) in 2013. Generally, Nicholas is interested in developing attacks on machine learning systems; most of his work develops attacks demonstrating security and privacy risks of these systems. He has received best paper awards at ICML and IEEE S&P, and his work has been featured in the New York Times, the BBC, Nature Magazine, Science Magazine, Wired, and Popular Science. Previously he interned at Google Brain, evaluating the privacy of machine learning; Intel, evaluating Control-Flow Enforcement Technology (CET); and Matasano Security, doing security testing and designing an embedded security CTF.

Resilient Collaborative AI for Cyber Defense
Alina Oprea, Northeastern University

Abstract: Modern cyber attacks have become sophisticated and coordinated, and operate at global scale. It is challenging to detect these attacks in their early stages, as adversaries utilize common network services, evolve their techniques, and can evade existing detection mechanisms. I will discuss two AI-based systems for threat detection designed to address some of these challenges. First, I will talk about PORTFILER, a new machine learning system applied to network traffic for detecting self-propagating malware attacks. PORTFILER introduces a novel ensemble methodology for aggregating unsupervised models that increases resilience against adversarial evasion. Second, I will discuss CELEST, a collaborative threat detection system using federated learning designed to train global models for cyber defense among multiple participating organizations. CELEST uses a novel word embedding model for semantic representation of HTTP logs and an active learning component to enhance the detection of new attacks. I will describe our experience in deploying these systems on two university networks as part of the DARPA CHASE program. Finally, I will mention a number of challenges and open problems in designing resilient AI for cyber security.
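The abstract describes CELEST's federated training only at a high level. As a point of reference for readers new to federated learning, the sketch below shows plain federated averaging (FedAvg), in which each organization trains locally and only model parameters, never raw HTTP logs, are shared and combined. All names, including the local_train callback, are illustrative and not taken from CELEST.

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of locally trained parameters (plain FedAvg).

    client_weights: list of per-client parameter lists (e.g. NumPy arrays),
    client_sizes: number of local training examples per client, used as
    the averaging weight.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (size / total)
            for w, size in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

def federated_round(global_weights, clients, local_train):
    """One collaboration round: each organization starts from the shared
    global weights and trains on its own logs via local_train (a
    caller-supplied function returning (updated_weights, n_examples));
    only the resulting parameters are sent back and averaged."""
    results = [local_train(c, [w.copy() for w in global_weights]) for c in clients]
    weights, sizes = zip(*results)
    return federated_average(list(weights), list(sizes))
```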

Bio: Alina Oprea is an Associate Professor at Northeastern University in the Khoury College of Computer Sciences. She joined Northeastern University in Fall 2016 after spending 9 years as a research scientist at RSA Laboratories. Her research interests in cyber security are broad, with a focus on machine learning security and privacy, threat detection, cloud security, and applied cryptography. She is the recipient of the Technology Review TR35 award for her research in cloud security in 2011, the Google Security and Privacy Award in 2019, and the Ruth and Joel Spira Award for Excellence in Teaching in 2020. Alina served as Program Committee co-chair of the IEEE Security and Privacy Symposium in 2020 and 2021, and she is currently a steering committee member for the IEEE Security and Privacy Symposium and NDSS. She also serves as Associate Editor of the ACM Transactions on Privacy and Security (TOPS) journal and the IEEE Security and Privacy Magazine. Her work was recognized with Best Paper Awards at NDSS in 2005, AISec in 2017, and GameSec in 2019.

Programme (Tentative) - May 26, 2022

The following times are in the Pacific time zone (PT). Proceedings are available after the workshop (with credentials) here.
08:20–08:30 Opening and Welcome
08:30–09:30 Keynote I (Chair: Lorenzo Cavallaro)
A collection of things you can (and can not do) with training data poisoning
Nicholas Carlini (Google Brain)
09:30–11:00 Session I (Chair: Fabio Pierazzi)
09:30: Misleading Deep-Fake Detection with GAN Fingerprints
Vera Wesselkamp (TU Braunschweig), Konrad Rieck (TU Braunschweig), Daniel Arp (TU Berlin), Erwin Quiring (TU Braunschweig)
10:00–10:30 Coffee Break
10:30: Concept-based Adversarial Attacks: Tricking Humans and Classifiers alike
Johannes Schneider (University of Liechtenstein), Giovanni Apruzzese (University of Liechtenstein)
11:00–12:00 Security Panel (Chair: Yizheng Chen)
Promises and challenges of Security in Trustworthy AI
A panel discussion with Scott Coull (Mandiant; remote), Brendan Dolan-Gavitt (NYU), Fabio Pierazzi (King's College London), David Wagner (UC Berkeley), and Gang Wang (UIUC; remote)
12:00–13:00 Lunch Break
13:00–14:00 Keynote II (Chair: Yizheng Chen)
Resilient Collaborative AI for Cyber Defense
Alina Oprea (Northeastern University)
14:00–15:30 Session II (Chair: Nikolaos Vasiloglou)
14:00: Ares: A System-Oriented Wargame Framework for Adversarial ML
Farhan Ahmed (Stony Brook University), Pratik Vaishnavi (Stony Brook University), Kevin Eykholt (IBM Research), Amir Rahmati (Stony Brook University)
14:30–15:00 Refreshment Break
15:00: Parameterizing Activation Functions for Adversarial Robustness
Sihui Dai (Princeton University), Saeed Mahloujifar (Princeton University), Prateek Mittal (Princeton University)
15:30–16:30 Privacy Panel (Chair: Yuan Tian)
Promises and challenges of Privacy in Trustworthy AI
A panel discussion with Nicholas Carlini (Google Brain), Neil Gong (Duke University; remote), Yuan Tian (University of Virginia), Florian Tramer (Google Brain/ETH Zurich; remote)
16:30 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline: Feb 8, 2022, 11:59 PM (AoE, UTC-12) (extended from Feb 1, 2022)
  • Acceptance notification: Mar 7, 2022 (previously Mar 1, 2022)
  • Camera-ready due: Mar 17, 2022
  • Workshop: May 26, 2022

Overview

Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a promising tool for security in academia and industry. On the other hand, the security of deep learning has gained focus in research, and the robustness of neural networks has recently been called into question.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security as well as (b) investigating the security of deep learning.

Topics of Interest

DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Deep learning for program embedding and similarity
  • Deep program learning
  • Modern deep NLP
  • Recurrent network architectures
  • Neural networks for graphs
  • Neural Turing machines
  • Semantic knowledge-bases
  • Generative adversarial networks
  • Relational modeling and prediction
  • Deep reinforcement learning
  • Attacks against deep learning
  • Resilient and explainable deep learning

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis, and similarity
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

You are invited to submit original research papers of up to six pages, plus additional references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.

Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly recommended to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.

For any questions, contact the workshop organizers at dls2022@ieee-security.org

Presentation Form

All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review score and novelty. Nonetheless, all accepted papers should be considered of equal importance.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Submission Site

https://hotcrp.dls2022.ieee-security.org/

Committee

Workshop Chair

Program Chair

Program Co-Chair

Steering Committee

Program Committee

  • Ambra Demontis, University of Cagliari
  • Andrew Ilyas, Massachusetts Institute of Technology
  • Battista Biggio, University of Cagliari
  • Brendan Dolan-Gavitt, New York University
  • Chao Zhang, Tsinghua University
  • Christian Wressnegger, Karlsruhe Institute of Technology (KIT)
  • Daniel Arp, TU Berlin
  • Davide Maiorca, University of Cagliari
  • Erwin Quiring, TU Braunschweig
  • Evan Downing, Georgia Institute of Technology
  • Feargus Pendlebury, Facebook
  • Giorgio Giacinto, University of Cagliari
  • Giovanni Apruzzese, University of Liechtenstein
  • Heng Yin, University of California, Riverside
  • Ivan Evtimov, Meta AI
  • Kevin Roundy, NortonLifeLock
  • Kexin Pei, Columbia University
  • Konrad Rieck, TU Braunschweig
  • Liang Tong, NEC Labs
  • Matthew Jagielski, Google
  • Min Du, Palo Alto Networks
  • Mohammadreza (Reza) Ebrahimi, University of South Florida
  • Mu Zhang, University of Utah
  • Nicholas Carlini, Google
  • Philip Tully, Mandiant
  • Reza Shokri, National University of Singapore
  • Sagar Samtani, Indiana University
  • Sanghyun Hong, Oregon State University
  • Scott Coull, Mandiant
  • Shruti Tople, Microsoft Research
  • Teodora Baluta, National University of Singapore
  • Tianhao Wang, University of Virginia
  • Tummalapalli S Reddy, University of Texas at Arlington
  • Varun Chandrasekaran, University of Wisconsin-Madison
  • Weilin Xu, Intel Labs
  • Yacin Nadji, Corelight Inc
  • Yang Zhang, CISPA Helmholtz Center for Information Security
  • Yinzhi Cao, Johns Hopkins University
  • Ziqi Yang, Zhejiang University