Privacy impacts every stakeholder in the AI solution ecosystem, including consumers, operators, solution providers, and regulators. This is especially true for applications such as healthcare, safety, and finance, which require collecting and analyzing highly sensitive data. We develop AI solutions that assure customers that private information is not leaked at any stage of the data lifecycle. Our differential privacy method provides a provable guarantee that an adversary cannot recover training data from model outputs, while using significantly less data than competing approaches. We also develop federated training methods that securely combine private data from multiple users or enterprises at orders of magnitude lower communication cost than competing methods, while bounding leakage through differential privacy.
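The specific method is not detailed above, but as a generic illustration of how differential privacy bounds what an adversary can learn, a standard building block is the Gaussian mechanism: add noise calibrated to a query's sensitivity before releasing its result. This sketch (all names and parameter choices are illustrative, not the team's implementation) privately releases the mean of a bounded dataset:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise scaled to the query's L2 sensitivity. Uses the standard
    calibration sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Illustrative example: privately release the mean of values clipped to [0, 1].
data = np.clip(np.array([0.2, 0.9, 0.4, 0.7]), 0.0, 1.0)
# Changing one record moves the mean by at most 1/n, so sensitivity = 1/n.
sensitivity = 1.0 / len(data)
private_mean = gaussian_mechanism(data.mean(), sensitivity,
                                  epsilon=1.0, delta=1e-5)
```

Smaller `epsilon` means stronger privacy but more noise; in training, the same idea applied to clipped per-example gradients (as in DP-SGD) yields the kind of provable guarantee described above.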
Team Members: Francesco Pittaluga