Safe and Trustworthy AI
By leveraging big data and deep learning, AI technologies have made significant progress in recent years and have been adopted in many applications, including malware detection, image classification, and stock market prediction. As our society becomes more automated, more and more systems will rely on AI techniques, and instead of merely augmenting human decisions, some AI systems will make their own decisions and execute them autonomously. This is especially risky in mission-critical fields such as homeland security, medical diagnosis, and self-driving vehicles. In fact, advances in AI have outpaced efforts to curb their potential hazards, because AI systems remain vulnerable to various attacks and biases.
This project aims to develop innovative system testing and data governance engines that identify AI system vulnerabilities, defend against advanced attacks, improve technical robustness, and mitigate unfairness and bias. Our engines leverage fine-grained reliability assessment, generalized robustness enhancement, and hardware-based privacy protection techniques to secure both AI models and data throughout the system development life cycle.
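To make the notion of an AI system vulnerability concrete, the following is a minimal, self-contained sketch of a fast-gradient-sign-style adversarial perturbation against a toy linear classifier. The model, weights, and inputs are all hypothetical illustrations, not part of the project's actual engines.

```python
import numpy as np

def predict(w, b, x):
    """Probability that input x belongs to the positive class
    under a simple logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(w, x, epsilon):
    """Fast-gradient-sign-style perturbation: for a linear model the
    gradient of the logit with respect to x is just w, so stepping
    each feature against sign(w) lowers the score fastest."""
    return x - epsilon * np.sign(w)

w = np.array([2.0, -1.5, 0.5])   # hypothetical model weights
b = 0.1                          # hypothetical bias
x = np.array([1.0, -1.0, 1.0])   # input the model classifies as positive

clean = predict(w, b, x)                                # confidently positive
adv = predict(w, b, fgsm_perturb(w, x, epsilon=1.2))    # same input, slightly
                                                        # perturbed, now negative
```

A small per-feature shift is enough to flip the predicted label, which is the kind of brittleness a reliability assessment engine would systematically search for.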
As AI is applied to mission-critical tasks across more domains, our engines can offer "AI System Testing and Data Governance" as a service, making businesses in these domains more reliable, trustworthy, transparent, and secure. We are developing new solutions that support a variety of AI systems with different types of data and can therefore be applied to an enormous range of businesses, including autonomous driving, biometric authentication, finance, healthcare, IoT devices, smart factories, and more.