Explanations for query results have been the subject of extensive research. The advantages of such explanations are evident, as they allow users to validate and justify the results of the query and deepen their knowledge about the data. However, when the query is proprietary and needs to remain confidential or when the data is cloaked by privacy restrictions, such explanations may be detrimental to the privacy desiderata. This tradeoff raises the question “can we provide useful explanations while maintaining the privacy requirements of the query and data?”

In this talk, I will present two recent works that attempt to reconcile this tension. I will begin by discussing our work on providing provenance-based explanations for query results while ensuring that a proprietary query remains hidden, using a privacy model inspired by k-anonymity. I will then present our work on providing predicate-based explanations for aggregate query results while guaranteeing differential privacy.

Co-hosted by the NYU Tandon Center for Responsible AI (NYU R/AI).
Sept 30th, 12:00 pm to 1:00 pm ET  |  370 Jay St, Room 1201 (in-person only)


Amir Gilad is a postdoctoral researcher in the Database Group at Duke University. He received his Ph.D. in Computer Science from Tel Aviv University. His work focuses on developing tools and algorithms that assist users in understanding and gaining insights into data and the systems that manipulate it. His research relates to classic database tools such as data provenance, as well as natural language processing, causal inference, and privacy.

Amir is a recipient of the VLDB Best Paper Award, the SIGMOD Research Highlight Award, and the Google Ph.D. Fellowship for Structured Data and Database Management.