08:20–08:30 | Opening and Welcome

08:30–09:30 | Keynote I (Chair: Lorenzo Cavallaro)
  A collection of things you can (and can not) do with training data poisoning
  Nicholas Carlini (Google Brain)

09:30–11:00 | Session I (Chair: Fabio Pierazzi)
  09:30: Misleading Deep-Fake Detection with GAN Fingerprints
  Vera Wesselkamp (TU Braunschweig), Konrad Rieck (TU Braunschweig), Daniel Arp (TU Berlin), Erwin Quiring (TU Braunschweig)
10:00–10:30 | Coffee Break
  10:30: Concept-based Adversarial Attacks: Tricking Humans and Classifiers alike
  Johannes Schneider (University of Liechtenstein), Giovanni Apruzzese (University of Liechtenstein)

11:00–12:00 | Security Panel (Chair: Yizheng Chen)
  Promises and Challenges of Security in Trustworthy AI
  A panel discussion with Scott Coull (Mandiant; remote), Brendan Dolan-Gavitt (NYU), Fabio Pierazzi (King's College London), David Wagner (UC Berkeley), and Gang Wang (UIUC; remote)

12:00–13:00 | Lunch Break

13:00–14:00 | Keynote II (Chair: Yizheng Chen)
  Resilient Collaborative AI for Cyber Defense
  Alina Oprea (Northeastern University)

14:00–15:30 | Session II (Chair: Nikolaos Vasiloglou)
  14:00: Ares: A System-Oriented Wargame Framework for Adversarial ML
  Farhan Ahmed (Stony Brook University), Pratik Vaishnavi (Stony Brook University), Kevin Eykholt (IBM Research), Amir Rahmati (Stony Brook University)
14:30–15:00 | Refreshment Break
  15:00: Parameterizing Activation Functions for Adversarial Robustness
  Sihui Dai (Princeton University), Saeed Mahloujifar (Princeton University), Prateek Mittal (Princeton University)

15:30–16:30 | Privacy Panel (Chair: Yuan Tian)
  Promises and Challenges of Privacy in Trustworthy AI
  A panel discussion with Nicholas Carlini (Google Brain), Neil Gong (Duke University; remote), Yuan Tian (University of Virginia), and Florian Tramer (Google Brain/ETH Zurich; remote)

16:30 | Closing Remarks
Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a promising tool for security in academia and industry. On the other hand, the security of deep learning itself has come into focus, as the robustness of neural networks has recently been called into question.
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning.
DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
Deep Learning
Computer Security
You are invited to submit original research papers of up to six pages, plus additional references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.
Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
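For reference, a minimal LaTeX skeleton along these lines might look like the following sketch. It assumes the IEEEtran conference class (whose conference option defaults to US letter, two-column, 10-point Times) and uses placeholder title, section, and bibliography names:

  \documentclass[conference]{IEEEtran}
  % The conference option defaults to US letter paper, a two-column
  % layout, and 10-point Times text, matching the requirements above.
  \usepackage{cite}

  \begin{document}

  \title{Placeholder Title}
  % Submissions must be anonymized: do not list author names or affiliations.
  \author{\IEEEauthorblockN{Anonymous Author(s)}}
  \maketitle

  \begin{abstract}
  Placeholder abstract.
  \end{abstract}

  \section{Introduction}
  Body text, limited to six pages plus references.

  \bibliographystyle{IEEEtran}
  \bibliography{references}  % placeholder bibliography file

  \end{document}

This skeleton is only an illustration of the required layout; consult the latest IEEE conference proceedings templates for the authoritative settings.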
For any questions, contact the workshop organizers at dls2022@ieee-security.org
All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review score and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.