About me
I am a fourth-year Ph.D. candidate in the Department of Electrical Engineering and Computer Science at UC Berkeley. I am fortunate to be advised by Professor Jiantao Jiao and Professor Stuart Russell. Previously, I graduated from the Yao Class at Tsinghua University.
My research centers on understanding and improving the reasoning capabilities of large language models (LLMs). I approach this by theoretically analyzing their foundations and limitations and by designing more effective training, inference, and evaluation methods. My work spans different regimes of reasoning: implicit reasoning, where models produce answers without explicit thinking steps (e.g., the reversal curse, two-hop reasoning, out-of-context reasoning); inference-time reasoning with intermediate outputs (e.g., chain of continuous thought, token assorted); and agentic reasoning, where models proactively use tools and gather information to solve complex problems.
I am also broadly interested in AI safety, model identifiability, decision-making, and reinforcement learning, especially where these areas intersect with the development and evaluation of more reliable, efficient, and interpretable AI systems.