🎓 About Me

🏫 I am currently a fifth-year Ph.D. student in the Department of Electrical and Computer Engineering at the University of Virginia, where I am advised by Prof. Jundong Li. Before joining UVA, I earned my B.E. in Electronic Engineering from Tsinghua University in 2020, where my diploma thesis was advised by Prof. Ji Wu.

📝 My research focuses on Generalizable Machine Learning (GML) and its applications to trustworthy AI. My work spans diverse data modalities, including graphs, knowledge bases, and text, with the aim of ensuring fairness, interpretability, and efficiency in AI systems.

🤝 I am always open to collaborations on impactful projects!

💼 I am actively seeking faculty positions! I would greatly appreciate it if you could share any opportunities. Thank you!

🔍 Research Interests

My research aims to tackle real-world challenges in Generalizable Machine Learning (GML) and Trustworthy AI, focusing on the following:

  • Generalization with Minimal Labeled Data: Designing frameworks for few-shot learning, meta-learning, and task-adaptive model generalization.
  • Robust Learning with Noisy and Unlabeled Data: Leveraging weak supervision to enhance model robustness across tasks.
  • Fairness and Interpretability in AI: Developing fairness-aware GML frameworks and interpretable AI methods for socially impactful applications.

🔥 News and Updates

  • 2024.10:  🎉 Two papers are accepted at EMNLP 2024 Main!
  • 2024.10:  🎉 Two papers on Fairness in Large Language Models are accepted at NeurIPS SoLaR (One Spotlight)!
  • 2024.10:  🎉 Our paper, “Mixture of Demonstrations for In-Context Learning,” is accepted at NeurIPS 2024!
  • 2024.10:  🎉 One paper is accepted at IEEE BigData 2024!
  • 2024.10:  🎉 One paper is accepted at WSDM 2024!
  • 2024.10:  🎉 Our paper, “Federated Graph Learning with Graphless Clients,” is accepted at TMLR!
  • 2024.09:  🎉 Our paper, “Enhancing Distribution and Label Consistency for Graph Out-of-Distribution Generalization,” is accepted at ICDM 2024!
  • 2024.09:  🎉 Our survey, “Knowledge Editing for Large Language Models: A Survey,” is accepted at ACM Computing Surveys!
  • 2024.07:  🎉 Our paper, “Understanding and Modeling Job Marketplace with Pretrained Language Models,” is accepted at CIKM 2024 Applied Research Track!
  • 2024.05:  🎉 Two papers are accepted at ACL 2024 Findings!
  • 2024.02:  🎉 Our paper, “Interpreting Pretrained Language Models via Concept Bottlenecks,” is accepted at PAKDD 2024 and wins the Best Paper Award!

📜 Publications

2024

  • NeurIPS: Mixture of Demonstrations for In-Context Learning
    Song Wang*, Zihan Chen*, Chengshuai Shi, Cong Shen, Jundong Li.

  • ACM Computing Surveys: Knowledge Editing for Large Language Models: A Survey
    Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, Jundong Li.

  • ICDM: Enhancing Distribution and Label Consistency for Graph Out-of-Distribution Generalization
    Song Wang, Xiaodong Yang, Rashidul Islam, Huiyuan Chen, Minghua Xu, Jundong Li, Yiwei Cai.

  • NeurIPS SoLaR Spotlight: On Demonstration Selection for Improving Fairness in Language Models
    Song Wang, Peng Wang, Yushun Dong, Tong Zhou, Lu Cheng, Yangfeng Ji, Jundong Li.

  • ACL Findings: FastGAS: Fast Graph-based Annotation Selection for In-Context Learning
    Zihan Chen, Song Wang, Cong Shen, Jundong Li.

Full Publication List (Google Scholar)

📖 Education

  • University of Virginia: Ph.D. in Electrical and Computer Engineering (2020-Present)
  • Tsinghua University: B.E. in Electronic Engineering (2016-2020)

💁 Service and Volunteering

  • Conference Reviewer: NeurIPS, ICML, ACL, SIGKDD, NAACL, EMNLP.
  • Mentorship: Directly supervised 6 undergraduate and graduate students, many of whom have published at top venues like NAACL and NeurIPS.

🐱 Hobbies and Interests

Beyond research, I enjoy:

  • 🎤 Singing and playing music (piano, guitar).
  • 🎮 Esports and gaming, especially competitive League of Legends (Top 0.5% on the NA server).
  • 🌟 Exploring innovative applications of AI for social good.

Feel free to contact me for collaborations, research discussions, or just to connect!