Membership inference privacy (MIP) is a privacy notion aimed at protecting sensitive data used in machine learning models, particularly in applications involving finance and healthcare. While traditional approaches like differential privacy (DP) offer strong theoretical guarantees, they often lead to a significant drop in utility for machine learning tasks, making them less practical. MIP addresses these challenges by providing a different privacy framework that offers interpretable guarantees and potentially requires less randomness than DP.


Provable Membership Inference Privacy

In applications involving sensitive data, such as finance and healthcare, the necessity of preserving data privacy can be a significant barrier to machine learning model development. Differential privacy (DP) has emerged as one canonical standard for provable privacy. However, DP's strong theoretical guarantees often come at the cost of a large drop in utility for machine learning, and DP guarantees themselves are difficult to interpret. In this work, we propose a novel privacy notion, membership inference privacy (MIP), as a step towards addressing these challenges. We give a precise characterization of the relationship between MIP and DP, and show that in some cases MIP can be achieved using less randomness than is required for guaranteeing DP, leading to a smaller drop in utility. MIP guarantees are also easily interpretable in terms of the success rate of membership inference attacks in a simple random subsampling setting. As a proof of concept, we also provide a simple algorithm for guaranteeing MIP without needing to guarantee DP.
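To make the quantity MIP controls concrete, here is a toy sketch of a membership inference attack in a random subsampling setting. The setup is illustrative only, not the paper's construction: the "model" releases (possibly noised) copies of its training records, an extreme form of memorization chosen to make the attack visible, and the attacker guesses "member" when a candidate record lies close to a released one. Adding randomness to the release drives the attack's success rate back toward chance, which is exactly the kind of tradeoff an MIP guarantee quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population of candidate records; a random subsample is used for "training".
n = 1000
population = rng.normal(size=(n, 2))
member = rng.random(n) < 0.5  # ground-truth membership, hidden from the attacker

def attack_accuracy(noise_scale: float) -> float:
    """Success rate of a distance-threshold membership inference attack
    against a 'model' that releases noised copies of its training records."""
    released = population[member] + rng.normal(scale=noise_scale,
                                               size=(int(member.sum()), 2))
    # Attacker's score: distance from each candidate to the nearest release.
    dists = np.linalg.norm(population[:, None, :] - released[None, :, :],
                           axis=-1).min(axis=1)
    # Guess "member" for the candidates closest to a released record.
    guess = dists <= np.median(dists)
    return float((guess == member).mean())

print(attack_accuracy(0.0))  # no randomness: memorization leaks membership
print(attack_accuracy(5.0))  # heavy noise: attacker falls back toward chance (0.5)
```

With no noise the attack succeeds almost perfectly, since every member sits at distance zero from its own released copy; with large noise the membership signal washes out. An MIP guarantee bounds this attack success rate directly, which is what makes it easier to interpret than an epsilon in DP.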