October 30, 2020
Can I Trust my AI?
Abstract: Artificial intelligence (AI) and machine learning (ML) techniques are being increasingly deployed in safety- and security-critical settings. Examples include autonomous driving, biometric authentication, and medical diagnostics. At the same time, however, ML systems are susceptible to new attacks. These include “adversarial input perturbations,” which have been shown to be particularly pernicious for deep neural networks, and ML “backdooring” attacks that compromise the training process. In this talk, I will explore the emerging landscape of “adversarial ML” with the goal of answering basic questions about the trustworthiness and reliability of modern machine learning systems.
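To give a flavor of the “adversarial input perturbation” idea mentioned above, the sketch below applies a fast-gradient-sign-style perturbation (in the spirit of FGSM) to a toy linear classifier. The model, weights, and inputs are all made up for illustration and are not from the talk; the same principle underlies attacks on deep networks.

```python
import numpy as np

# Toy linear "model": p(y=1 | x) = sigmoid(w . x). Weights are illustrative.
w = np.array([2.0, -1.0, 0.5])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Model's confidence that x belongs to the positive class."""
    return sigmoid(w @ x)

# A clean input the model confidently classifies as positive.
x = np.array([1.0, 0.0, 1.0])          # score = w . x = 2.5

# FGSM-style perturbation against the true label y = 1:
# for cross-entropy loss, d(loss)/dx = (p - y) * w,
# so each input feature is nudged by eps in the loss-increasing direction.
y = 1.0
eps = 0.8
grad = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad)        # score drops by eps * ||w||_1

print(predict(x))      # confidently positive (> 0.5)
print(predict(x_adv))  # small per-feature change flips the decision (< 0.5)
```

The point of the sketch: a perturbation bounded by `eps` per feature shifts the score by up to `eps` times the L1 norm of the weights, which is why high-dimensional models (such as deep networks) can be flipped by perturbations that are imperceptibly small per feature.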
Siddharth is currently an Associate Professor of ECE at NYU. He received his Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University in 2009, and a B.Tech. degree in Electrical Engineering from the Indian Institute of Technology Madras. He joined NYU in Fall 2014 as an Assistant Professor, and prior to that was an Assistant Professor at the University of Waterloo from 2010 to 2014. His research interests are in machine learning, cyber-security, and computer hardware design.
In 2016, Siddharth was listed in Popular Science Magazine’s annual list of “Brilliant 10” researchers. Siddharth has received the NSF CAREER Award (2015) and best paper awards at the IEEE Symposium on Security and Privacy (S&P) 2016 and the USENIX Security Symposium 2013. His NDSS 2015 paper was selected as a “Top Pick” in Hardware Security in 2019. Siddharth also received the Angel G. Jordan Award from the ECE department of Carnegie Mellon University for outstanding thesis contributions and service to the community. He serves on the technical program committees of several top conferences in computer engineering and computer hardware, and has served as a reviewer for several IEEE and ACM journals.