Trustworthy Foundation Models
Principled understanding, alignment, and trust evaluation of generative foundation models — from interpretation to safety.
University of Notre Dame · Computer Science & Engineering
Directed by Prof. Xiangliang Zhang — researching trustworthy foundation models, AI for science, and graph learning.
We design machine learning systems that learn from complex, large-scale data — with a focus on trustworthy foundation models, AI for chemistry & science, and graph-based learning.
Welcome to the MINE lab, directed by Prof. Xiangliang Zhang in the Department of Computer Science and Engineering at the University of Notre Dame.
In our lab, “MINE” represents the ownership of innovation, the discovery of hidden insights, and the pursuit of untapped potential — much like mining precious resources. Our mission is to enable machines to learn from experience and data, fostering breakthroughs in artificial intelligence that advance knowledge across diverse fields.
Trustworthy Foundation Models: Principled understanding, alignment, and trust evaluation of generative foundation models, from interpretation to safety.
AI for Science: Generative and predictive models for chemistry, biology, and physics, and large-scale simulation for the social sciences.
Graph Learning: Graph methods for drug discovery, recommendation, and knowledge graphs, leveraging structure to improve real-world decisions.
Call for papers! We are co-organizing several workshops: the KDD 2026 Workshop on Agentic AI for Scientific and Societal Advances, the KDD 2026 Workshop on Reliable Scientific Foundation Models (RelSciFM), AI for Education Day at KDD 2026, the CVPR 2026 2nd Workshop on Knowledge-Intensive Multimodal Reasoning, and the upcoming ICML 2026 Workshop on AI for Physics (AI4Physics). Submissions are welcome!
Congratulations to Yue Huang on his papers accepted at ICLR 2026! Check out our recent contributions on Benchmarking Generative Foundation Models (TrustGen) and Guardrail for General Agentic Systems. Our work Benchmarking Large Language Models on Safety Issues in Scientific Labs was highlighted by New Scientist and Science News.
Congratulations to Yue Huang, Xiaonan Luo, and Xiangqi Wang on their papers accepted at AAAI 2026! Congratulations also to Yujun Zhou on the paper Benchmarking Large Language Models on Safety Issues in Scientific Labs, to appear in Nature Machine Intelligence.
Congratulations to Yue Huang and Xiangqi Wang on their papers accepted at NeurIPS 2025! We are also organizing the AI for Scientific Research Workshop at AAAI 2026; submissions are welcome!
Congratulations to Yue Huang and our co-authors on winning the Best Paper Award at the KDD 2025 SciSoc LLM Workshop for Evaluating Large Language Models with Psychometrics, and the Best Paper Award at the ICML 2025 Workshop on Data in Generative Models (DIG-BUG) for Preference Leakage: A Contamination Problem in LLM-as-a-Judge. Many thanks to all our collaborators! Congratulations also to Yue Huang, Kehan, and Yili on their papers accepted at CIKM 2025.