Venue
Centre Inria d'Université Côte d'Azur
Language:
Catuscia Palamidessi (Speaker)
Image credit: Centre Inria d'Université Côte d'Azur
Rights holder
Centre Inria d'Université Côte d'Azur
Terms of use
General intellectual property law
DOI : 10.60527/xz6k-gc44
Cite this resource:
Catuscia Palamidessi. Inria. (2022, December 15). Information Structures for Privacy and Fairness. [Video]. Canal-U. (Accessed June 12, 2024)

Information Structures for Privacy and Fairness

Recorded: December 15, 2022 - Published online: January 17, 2023

The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. 
In this talk, I will discuss some frameworks to understand and mitigate these issues, focusing on iterative methods coming from information theory and statistics.
In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the Iterative Bayesian Update (IBU), an instance of the famous Expectation-Maximization method from Statistics. I will show that the IBU, combined with the metric version of DP, outperforms the state of the art, which is based on algebraic methods combined with the Randomized Response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, …). Furthermore, I will discuss a surprising duality between the IBU and one of the methods used to enhance metric DP, namely the Blahut-Arimoto algorithm from Rate-Distortion Theory. Finally, I will discuss the issue of biased decisions in machine learning, and will show that the IBU can also be applied in this domain to ensure a fairer treatment of disadvantaged groups.
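To make the two mechanisms named in the abstract concrete, here is a minimal sketch (parameters and helper names are illustrative, not from the talk) of k-ary Randomized Response perturbing a dataset, and the Iterative Bayesian Update recovering an estimate of the original distribution from the noisy reports via its EM-style fixed-point iteration.

```python
import numpy as np

def krr_channel(k, eps):
    """k-ary Randomized Response channel: C[x, y] = P(report y | true value x)."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    C = np.full((k, k), (1.0 - p) / (k - 1))
    np.fill_diagonal(C, p)
    return C

def krr_sample(x, k, eps, rng):
    """Apply k-RR to an array of true values x: keep with prob p, else
    report one of the other k-1 values uniformly at random."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    flip = rng.random(x.size) >= p
    r = rng.integers(0, k - 1, size=x.size)
    other = r + (r >= x)                       # uniform over the k-1 values != x
    return np.where(flip, other, x)

def ibu(noisy_counts, C, iters=300):
    """Iterative Bayesian Update (an instance of EM): estimate the original
    distribution theta from the empirical distribution of noisy reports."""
    k = C.shape[0]
    q = noisy_counts / noisy_counts.sum()      # empirical noisy distribution
    theta = np.full(k, 1.0 / k)                # uniform starting point
    for _ in range(iters):
        joint = theta[:, None] * C             # joint[x, y] = theta(x) * C[x, y]
        posterior = joint / joint.sum(axis=0, keepdims=True)  # P(x | y)
        theta = posterior @ q                  # average posterior over reports
    return theta

# Demo with assumed parameters: 4 categories, epsilon = 1.
rng = np.random.default_rng(42)
k, eps, n = 4, 1.0, 100_000
true_dist = np.array([0.5, 0.25, 0.15, 0.10])
x = rng.choice(k, size=n, p=true_dist)
y = krr_sample(x, k, eps, rng)
est = ibu(np.bincount(y, minlength=k), krr_channel(k, eps))
```

On this synthetic data, `est` lands much closer to `true_dist` than the raw histogram of noisy reports, illustrating the utility recovery the abstract attributes to the IBU.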

