Privacy impacts every stakeholder in the AI solution ecosystem, including consumers, operators, solution providers, and regulators. This is especially true for applications such as healthcare, safety, and finance, which require collecting and analyzing highly sensitive data. We develop AI solutions that assure customers that private information is not leaked at any stage of the data lifecycle. Our differentially private training method provides a provable guarantee that an adversary cannot recover training data from model outputs, while using significantly less data than competing approaches. We also develop federated training methods that securely combine private data from multiple users or enterprises at orders-of-magnitude lower communication cost than competitors, while bounding leakage through differential privacy.
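The mechanism behind such a guarantee is not specified above; a standard way to realize one is DP-SGD-style training, i.e., per-example gradient clipping plus calibrated Gaussian noise. The PyTorch sketch below is a minimal illustration under that assumption, not the team's actual method; `clip_norm` and `noise_multiplier` are placeholder hyperparameters.

```python
import torch

# Illustrative DP-SGD step (an assumption about the general form of the
# guarantee, not the team's actual implementation).
def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients, each clipped to L2 norm <= clip_norm,
    # so no single training example can dominate the update.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Add Gaussian noise calibrated to the clipping bound (the per-example
    # sensitivity), then apply the averaged, privatized update.
    n = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / n) * (s + noise))
```

In the federated setting, the same clip-and-noise step can be applied to each client's model update before aggregation, and combined with update compression such as sparsification or quantization to reduce communication.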
Besides privacy-aware learning, we also develop methods for privacy-aware sensing. In particular, we build novel computational cameras that enable computer vision analysis even in sensitive environments such as hospitals and smart homes. Our key innovation is a camera that optically removes private information at capture time. Our adversarial training approach achieves high task accuracy and strong privacy simultaneously through learned phase masks inserted in the focal plane of the camera. Our hardware prototypes support tasks such as depth estimation and action recognition while ensuring that private face information remains hidden. We also build prototype cameras that encrypt images so they cannot be deciphered by humans without a secret key, while still allowing computer vision analysis, such as face recognition, to be performed directly on the encrypted image.
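To make the adversarial objective concrete, here is a minimal sketch of the min-max training loop, with a plain learnable convolution standing in for the differentiable wave-optics simulation of the phase mask that the real camera pipeline would use; all module names, sizes, and losses are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Proxy for the learned optical element: a learnable convolution stands in
# for the phase-mask PSF (an assumption; the real system simulates optics).
optics = nn.Conv2d(1, 1, kernel_size=15, padding=7, bias=False)
task_net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))  # e.g., utility task
adversary = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))  # e.g., private attribute

task_opt = torch.optim.Adam(
    list(optics.parameters()) + list(task_net.parameters()), lr=1e-3)
adv_opt = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(img, task_label, private_label, privacy_weight=1.0):
    coded = optics(img)  # simulated sensor measurement through the mask

    # 1) Adversary learns to extract private attributes from the coded image.
    adv_opt.zero_grad()
    adv_loss = ce(adversary(coded.detach()), private_label)
    adv_loss.backward()
    adv_opt.step()

    # 2) Optics + task network: succeed at the task, fool the adversary.
    task_opt.zero_grad()
    loss = (ce(task_net(coded), task_label)
            - privacy_weight * ce(adversary(coded), private_label))
    loss.backward()
    task_opt.step()
```

The key design choice is the sign of the adversary term: the optics and task network are rewarded when the adversary fails, which drives the learned mask to destroy private attributes in the measurement while preserving task-relevant structure.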
Team Members: Francesco Pittaluga, Bingbing Zhuang