University of Notre Dame · Computer Science & Engineering

Welcome.

Directed by Prof. Xiangliang Zhang — researching trustworthy foundation models, AI for science, and graph learning.

We design machine learning systems that learn from complex, large-scale data — with a focus on trustworthy foundation models, AI for chemistry & science, and graph-based learning.

305+
Peer-reviewed papers
17
PhDs graduated
9
Current PhD students
3
Research thrusts
About the lab

A home for curious data miners.

Welcome to the MINE lab, directed by Prof. Xiangliang Zhang in the Department of Computer Science and Engineering at the University of Notre Dame.

In our lab, “MINE” represents the ownership of innovation, the discovery of hidden insights, and the pursuit of untapped potential — much like mining precious resources. Our mission is to enable machines to learn from experience and data, fostering breakthroughs in artificial intelligence that advance knowledge across diverse fields.

Meet the team

What we work on

Three research thrusts.

01

Trustworthy Foundation Models

Principled understanding, alignment, and trust evaluation of generative foundation models — from interpretation to safety.

02

AI for Science

Generative and predictive models for chemistry, biology, physics — and large-scale simulation for the social sciences.

03

Graph-Based Learning

Graph methods for drug discovery, recommendation, and knowledge graphs, leveraging structure to improve real-world decisions.

Latest

News.

All news

Mar. 2026

Our work Benchmarking Large Language Models on Safety Issues in Scientific Labs was highlighted by New Scientist and Science News.

Jan. 2026

Congratulations to Yue Huang on his papers accepted to ICLR 2026! See our recent contributions on Benchmarking Generative Foundation Models (TrustGen) and a Guardrail for General Agentic Systems.

Nov. 2025

Congratulations to Yue Huang, Xiaonan Luo, and Xiangqi Wang on their papers accepted to AAAI 2026! Congratulations to Yujun Zhou on the paper Benchmarking Large Language Models on Safety Issues in Scientific Labs, to appear in Nature Machine Intelligence.

Sep. 2025

Congratulations to Yue Huang and Xiangqi Wang on their papers accepted to NeurIPS 2025! We are also organizing the AI for Scientific Research Workshop at AAAI 2026; submissions are welcome!

Aug. 2025

Congratulations to Yue Huang and our co-authors on winning two Best Paper Awards: at the KDD 2025 SciSoc LLM Workshop for Evaluating Large Language Models with Psychometrics, and at the ICML 2025 Workshop on Data in Generative Models (DIG-BUG) for Preference Leakage: A Contamination Problem in LLM-as-a-Judge. Many thanks to all our collaborators! Congratulations also to Yue Huang, Kehan, and Yili on their papers accepted at CIKM 2025.

Selected

Recent publications.

All 305 papers

2026

ProbeLLM: Automating Principled Diagnosis of LLM Failures

Yue Huang, Zhengzhe Jiang, Yuchen Ma, Yu Jiang, Xiangqi Wang, Yujun Zhou, Yuexing Hao, Kehan Guo, Pin-Yu Chen, Marzyeh Ghassemi, Stefan Feuerriegel, Xiangliang Zhang

ICML 2026

2026

My Favorite Streamer is an LLM: Discovering, Bonding, and Co-Creating in AI VTuber Fandom

Jiayi Ye, Chaoran Chen, Yue Huang, Yanfang Ye, Toby Jia-Jun Li, Xiangliang Zhang

CHI 2026

2026

TrustGen: A Platform of Dynamic Benchmarking on the Trustworthiness of Generative Foundation Models

Yue Huang, Chujie Gao, Siyuan Wu, Haoran Wang, Xiangqi Wang, Jiayi Ye, Yujun Zhou, Yanbo Wang, Jiawen Shi, Qihui Zhang, Han Bao, Zhaoyi Liu, Yuan Li, Tianrui Guan, Peiran Wang, Haomin Zhuang, Dongping Chen, Kehan Guo, Andy Zou, Bryan Hooi, Caiming Xiong, Elias Stengel-Eskin, Hongyang Zhang, Hongzhi Yin, Huan Zhang, Huaxiu Yao, Jieyu Zhang, Jaehong Yoon, Kai Shu, Ranjay Krishna, Swabha Swayamdipta, Weijia Shi, Xiang Li, Yuexing Hao, Zhihao Jia, Zhize Li, Xiuying Chen, Zhengzhong Tu, Xiyang Hu, Tianyi Zhou, Jieyu Zhao, Lichao Sun, Furong Huang, Or Cohen-Sasson, Prasanna Sattigeri, Anka Reuel, Max Lamparth, Yue Zhao, Nouha Dziri, Yu Su, Huan Sun, Heng Ji, Chaowei Xiao, Mohit Bansal, Nitesh V Chawla, Jian Pei, Jianfeng Gao, Michael Backes, Philip S. Yu, Neil Zhenqiang Gong, Pin-Yu Chen, Bo Li, Dawn Song, Xiangliang Zhang

ICLR 2026

2026

Building a Foundational Guardrail for General Agentic Systems via Synthetic Data

Yue Huang, Hang Hua, Yujun Zhou, Pengcheng Jing, Manish Nagireddy, Inkit Padhi, Greta Dolcetti, Zhangchen Xu, Subhajit Chaudhury, Ambrish Rawat, Liubov Nedoshivina, Pin-Yu Chen, Prasanna Sattigeri, Xiangliang Zhang

ICLR 2026

2026

Preference Leakage: A Contamination Problem in LLM-as-a-Judge

Dawei Li, Renliang Sun, Yue Huang, Ming Zhong, Bohan Jiang, Jiawei Han, Xiangliang Zhang, Wei Wang, Huan Liu

ICLR 2026

Tutorials & workshops

Events we're organizing.

Tutorial · AAAI 2026

Towards Trustworthy and Socially Responsible Generative Foundation Models

Learn more
Workshop · KDD 2026

Reliable Scientific Foundation Models (RelSciFM)

Learn more
Workshop · CVPR 2026

Knowledge-Intensive Multimodal Reasoning (2nd edition)

Learn more
Tutorial · CIKM 2025

Socially Responsible & Trustworthy Generative Foundation Models

Learn more
Tutorial · CIKM 2025

Generative Models for Synthetic Data in the GenAI Era

Learn more
Tutorial · CVPR 2025

Multimodal Mathematical Reasoning

Learn more

We are looking for motivated PhD students.

Open positions for PhDs, postdocs, and visiting researchers.

See openings
University of Notre Dame · South Bend, Indiana