Time: July 2, 2025 (Wednesday), 9:00-11:00
Venue: Room 925, School of Management, Hefei University of Technology
Speaker: Prof. Xiaobai Li
Affiliation: University of Massachusetts Lowell, USA
Host: School of Management, Hefei University of Technology
Abstract:
Machine learning (ML) and artificial intelligence (AI) models excel at making accurate predictions for hard problems. However, these models often function as black boxes, and their lack of transparency presents significant challenges. Interpretable machine learning (IML) and explainable AI (XAI) aim to tackle these challenges by developing methods that provide clear, meaningful explanations understandable to humans. While black-box models can make both correct and incorrect predictions, existing XAI/IML approaches focus exclusively on explaining predictions under the assumption that they are correct. Moreover, the explanations provided by XAI/IML methods may be inadequate or misleading, causing mistrust of and a lack of confidence in machine learning and AI technologies. How to prevent misleading explanations has not been explored in the XAI/IML literature. A closely related and confounding problem is evaluating the credibility of uncertain predictions. We propose a novel and practical approach to providing credible and risk-sensitive explanations for the predictions of machine learning and AI models. The proposed method takes a counterfactual explanation approach. To address the prediction uncertainty issue, we introduce new measures that evaluate the credibility of model predictions and identify unreliable ones. To minimize the impact of misleading explanations, our method provides, on the one hand, robust counterfactuals that mitigate the risk of weak explanations and, on the other, vigilant counterfactuals that are sensitive to detecting undesirable changes. We validate the effectiveness of the proposed method through an empirical evaluation on real-world data.
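To make the counterfactual explanation idea concrete, the following is a minimal, hypothetical sketch, not the speaker's proposed method: for a plain logistic-regression classifier, the smallest L2 change that flips a prediction has a closed form, obtained by projecting the input onto the decision boundary. The synthetic data, function name, and margin parameter below are all illustrative assumptions.

# Minimal sketch of a counterfactual explanation for a linear model.
# Illustrates only the basic idea (the smallest change to an input that
# flips the model's prediction); the robust and vigilant counterfactuals
# in the talk address prediction uncertainty, which this sketch does not.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

def linear_counterfactual(x, clf, margin=1e-3):
    # For a linear classifier with decision function f(x) = w.x + b, the
    # nearest point on the boundary (in L2) is x - (f(x)/||w||^2) * w.
    w, b = clf.coef_[0], clf.intercept_[0]
    f = w @ x + b                      # signed (unnormalized) distance
    step = -(f / (w @ w)) * w          # projection onto the boundary
    return x + step * (1.0 + margin)   # nudge slightly past the boundary

x = X[0]
x_cf = linear_counterfactual(x, clf)
print("original prediction:      ", clf.predict(x.reshape(1, -1))[0])
print("counterfactual prediction:", clf.predict(x_cf.reshape(1, -1))[0])
print("change applied:", np.round(x_cf - x, 3))

For nonlinear black-box models no such closed form exists, and counterfactuals are typically found by optimization or search; evaluating whether the underlying prediction is credible at all is the gap the talk addresses.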
About the Speaker:
Dr. Xiaobai Li is a Professor of Information Systems in the Department of Operations and Information Systems at the University of Massachusetts Lowell, USA. He received his Ph.D. in management science from the University of South Carolina. Dr. Li's research focuses on machine learning, data science, data privacy, and business analytics. He has received research funding from the U.S. National Institutes of Health (NIH) and the National Science Foundation (NSF). His work has appeared in Management Science, Information Systems Research, MIS Quarterly, Operations Research, INFORMS Journal on Computing, Journal of the Association for Information Systems, IEEE Transactions (TKDE, TSMC, TAC), Decision Sciences, Decision Support Systems, Communications of the ACM, and European Journal of Operational Research, among others. He currently serves as an associate editor for Information Systems Research and several other journals.