[Lecture] Some personal perspectives and experiences on trustworthy AI

Topic: Some Personal Perspectives and Experiences on Trustworthy AI

Lecturer: Dr. Leo Zhang, Griffith University (Australia)

Time: November 14, 2024 (Thursday), 9:00-11:00, UTC+8

Venue: Room 510, Lecture Hall, Jianhu Xuehai Building


Biography:

Leo Zhang is currently a Senior Lecturer with the School of Information and Communication Technology, Griffith University, Australia. Prior to this, he was a Lecturer and then a Senior Lecturer with the School of Information Technology, Deakin University, from 2018 to 2022. He received his Ph.D. degree from City University of Hong Kong in 2016. Leo’s research interests lie mainly in cybersecurity, with a particular focus on trustworthy AI (adversarial/backdoor/poisoning/privacy attacks and defenses) and applied cryptography (privacy-preserving computation, authentication in emerging areas). He has published over 100 papers in these fields in top-tier journals and conferences (cited over 3,600 times, h-index: 32), including IEEE S&P, ACSAC, ESORICS, AsiaCCS, ICML, NeurIPS, CVPR, ICCV, AAAI, IJCAI, etc. He is a regular Program Committee member of these conferences. Leo’s research has been sponsored by the Australian Research Council, the Department of Industry, Science and Resources of the Australian Government, the Australian Cyber Security Cooperative Research Centre, the Department of Environment, Science and Innovation of the Queensland Government, the Bosch Group, NVIDIA, etc. He was one of the recipients of the 2021 Australian Information Security Association Researcher of the Year Award, and he is an Associate Editor for IEEE Transactions on Dependable and Secure Computing and IEEE Transactions on Multimedia.

Abstract:

In today's world, Artificial Intelligence (AI) permeates every facet of our lives, underscoring the critical importance of trustworthy AI. Trustworthy AI encompasses several dimensions, including safety and robustness, privacy, generalizability, fairness, and explainability. At the core of many challenges in achieving trustworthy AI lie distribution shifts in data. For instance, adversarial/evasion attacks stem directly from distribution shifts between training and test datasets. This talk delves into the intersection of safety and robustness, and generalizability in AI through the lens of our recent research. Focusing primarily on safety and robustness, we explore the susceptibility of AI models to poisoning attacks, which can stealthily introduce trojans or backdoors that compromise model integrity. Furthermore, we investigate the underlying reasons for the notable generalizability of adversarial attacks across diverse data samples and neural network architectures. By unraveling these complexities, we aim to shed light on crucial aspects of building trustworthy AI systems in an era where their reliability is paramount.
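To make the two attack classes in the abstract concrete, the following are minimal, textbook-style sketches in PyTorch; they illustrate the general techniques only and are not Dr. Zhang's specific methods. The first sketch assumes a BadNets-style data-poisoning step: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen class. All function names, the patch size, and the poisoning fraction are illustrative assumptions.

```python
import torch

def add_backdoor_trigger(images, labels, target_class=0, poison_frac=0.05):
    """Plant a BadNets-style backdoor in a training set (illustrative sketch).

    images: float tensor of shape (N, C, H, W) with values in [0, 1].
    A model trained on the returned data behaves normally on clean inputs,
    but tends to predict target_class whenever the trigger patch is present.
    """
    n_poison = int(poison_frac * len(images))
    poisoned, poisoned_labels = images.clone(), labels.clone()
    poisoned[:n_poison, :, -4:, -4:] = 1.0      # 4x4 white trigger in a corner
    poisoned_labels[:n_poison] = target_class   # attacker-chosen label
    return poisoned, poisoned_labels
```

The second sketch illustrates the adversarial/evasion side: a single FGSM-style gradient-sign step that induces exactly the kind of test-time distribution shift the abstract describes. The model, inputs, and the perturbation budget epsilon are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x within an L-infinity ball of radius
    epsilon: one gradient-sign step that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()  # step up the loss gradient
    return x_adv.clamp(0.0, 1.0).detach()        # stay in valid pixel range
```

Notably, perturbations computed this way on one model often transfer to other architectures and samples; the reasons behind this generalizability are among the questions the talk examines.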


Rewritten by: Mei Mengqi

Edited by: Li Tiantian, Liang Muwei

Source: School of Computer Science and Artificial Intelligence