Percy Liang
Percy Liang is an American computer scientist whose research focuses on machine learning, natural language processing, and foundation models. He is an Associate Professor of Computer Science at Stanford University and is the Director of the Center for Research on Foundation Models (CRFM).[1][2]
Percy Liang | |
|---|---|
| Occupation | Associate Professor of Computer Science |
| Employer | Stanford University |
| Title | Director |
| Academic background | |
| Education | Massachusetts Institute of Technology (BS, MEng), University of California, Berkeley (PhD) |
| Doctoral advisor | Michael I. Jordan, Dan Klein |
| Academic work | |
| Discipline | Computer science |
| Sub-discipline | Machine learning, Natural language processing, Foundation models |
| Institutions | Stanford University |
Education
Liang received a Bachelor of Science degree in 2004 and a Master of Engineering degree in 2005 from the Massachusetts Institute of Technology. As a student, he won bronze and silver medals at the International Olympiad in Informatics (IOI).[3] He earned his Ph.D. in Computer Science from the University of California, Berkeley in 2011, where his doctoral advisors were Michael I. Jordan and Dan Klein.[4]
Academic career
After completing his doctorate, Liang held a postdoctoral position at Google. He then joined the faculty at Stanford University, where he conducts research and teaches courses on artificial intelligence, machine learning, statistical learning theory, and language modeling.
Liang is known for his work on semantic parsing, weak and indirect supervision, robustness and generalization in machine learning, and the study of large-scale foundation models.[5][6] He has also been an advocate for efficient and reproducible research, and is one of the developers of CodaLab Worksheets, a platform for managing computational experiments.[5][7][8]
Center for Research on Foundation Models
Liang is the founding Director of the Center for Research on Foundation Models (CRFM) at Stanford. The center focuses on the development, evaluation, and governance of foundation models, including technical, social, and policy considerations. CRFM operates as an interdisciplinary research initiative within Stanford HAI.[1][9]
Through CRFM, Liang has supported the development of open-source large language models.[10]
Research
Liang has authored peer-reviewed publications in artificial intelligence and machine learning venues, including ACL, EMNLP, ICML, and COLT. His work has influenced both theoretical and applied research in natural language understanding and machine learning systems.[11][12]
Recognition
Liang has received awards for his research contributions, including the National Science Foundation CAREER Award,[13] the Presidential Early Career Award for Scientists and Engineers,[6] the IJCAI Computers and Thought Award,[14] and the Sloan Research Fellowship.[2]
References
- ^ a b "Percy Liang". Computer Science Department, Stanford University. Retrieved 2026-01-26.
- ^ a b "Percy Liang's Profile | Stanford Profiles". Faculty, Stanford University. Retrieved 2026-01-26.
- ^ "USACO". The International Olympiad in Informatics. Retrieved 2026-01-26.
- ^ Liang, Percy; Jordan, Michael I.; Klein, Dan (2013). "Learning Dependency-Based Compositional Semantics". Computational Linguistics. 39 (2): 389–446. doi:10.1162/COLI_a_00127. ISSN 0891-2017.
- ^ a b Liu, Nelson F.; Lin, Kevin; Hewitt, John; Paranjape, Ashwin; Bevilacqua, Michele; Petroni, Fabio; Liang, Percy (2024-02-23). "Lost in the Middle: How Language Models Use Long Contexts". Transactions of the Association for Computational Linguistics. 12: 157–173. doi:10.1162/tacl_a_00638. ISSN 2307-387X.
- ^ a b "PECASE Recipient". U.S. National Science Foundation. Retrieved 2026-01-26.
- ^ Liang, Percy; Potts, Christopher (2015-01-01). "Bringing Machine Learning and Compositional Semantics Together". Annual Review of Linguistics. 1 (1): 355–376. doi:10.1146/annurev-linguist-030514-125312. ISSN 2333-9683.
- ^ Sherman, Erik. "New Research Shows LLMs Face A Big Copyright Risk". Forbes. Retrieved 2026-01-26.
- ^ Kurenkov, Andrey (2022-01-27). "Percy Liang on Machine Learning Robustness, Foundation Models, and Reproducibility". The Gradient. Retrieved 2026-01-26.
- ^ Knight, Will. "The US Needs an Open Source AI Intervention to Beat China". Wired. ISSN 1059-1028. Retrieved 2026-01-26.
- ^ "Open-Source and Science in the Era of Foundation Models". AI at Princeton. Retrieved 2026-01-26.
- ^ "Percy Liang". The Alan Turing Institute. Archived from the original on 2025-05-12. Retrieved 2026-01-26.
- ^ "White House Press Release - President Donald J. Trump Announces Recipients of the Presidential Early Career Award for Scientists and Engineers". The American Presidency Project. Retrieved 2026-01-26.
- ^ "IJCAI-16 Awards Announcement". 25th International Joint Conference on Artificial Intelligence. 2016. Retrieved 2026-01-30.
Further reading
- Quach, Katyanna (23 Aug 2021). "We spoke to a Stanford prof on the tech and social impact of AI's powerful, emerging 'foundation models'". The Register. Archived from the original on 2025-11-17. Retrieved 2026-01-26.