The next event in the "IncontrinRETe" series, organized by the Research and Technology (RET) group of AREA Science Park, will take place on November 25.


Speaker: Ginevra Carbone, PhD Student in Applied Data Science and Artificial Intelligence (UniTS)

Webinar: "Robustness and interpretability of neural networks' predictions under adversarial attacks"


Abstract: Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of Deep Neural Networks (DNNs) in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. Another major drawback of DNNs is their black-box nature: they empirically deliver very strong predictive power, but in general provide no intuition about the possible explanations underlying their decisions. We experimentally show how random projections of the original inputs into lower-dimensional spaces can be used to improve the robustness of deterministic NNs to adversarial attacks. We describe the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs) and prove that they are immune to gradient-based adversarial attacks in this limit. Finally, we consider the problem of the stability of saliency-based explanations of Neural Network predictions under adversarial attacks. In this setting, we show that Bayesian interpretations are considerably more stable than those provided by deterministic and adversarially trained networks.
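As background to the first result mentioned in the abstract, the idea of projecting inputs into a lower-dimensional space can be illustrated with a standard Gaussian random projection. This is a generic, minimal sketch of the technique, not the speaker's actual implementation; the function name, dimensions, and scaling choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, k, rng):
    # Project the rows of X (shape (n, d)) into k dimensions using a
    # Gaussian matrix scaled by 1/sqrt(k), which approximately preserves
    # pairwise distances (Johnson-Lindenstrauss style).
    d = X.shape[1]
    P = rng.normal(size=(d, k)) / np.sqrt(k)
    return X @ P

# Hypothetical example: 100 flattened 28x28 images projected to 64 dims,
# e.g. before being fed to a classifier.
X = rng.normal(size=(100, 784))
Z = random_projection(X, 64, rng)
print(Z.shape)  # (100, 64)
```

The intuition is that an attacker crafts perturbations in the full input space, while the network only sees the projected version, which can suppress part of the adversarial signal.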


The seminar will be in hybrid mode: in presence at the Conference Hall C1 S39 at AREA Science Park, and online via Zoom.


**Occupancy of the Conference Hall is currently limited and a Green Pass is strictly required for attendance. If you plan to attend in person, please send an email to alberto.cazzaniga@areasciencepark.it**


Zoom link: Zoom Link


For further information, please contact alberto.cazzaniga@areasciencepark.it


Organized by:
Area Science Park