Kaya Stechly

I am a Computer Science PhD student at Yale interested in learning, computational cognitive science, and language-mediated reasoning in both brains and machines. Previously, I completed a Linguistics M.A. at Arizona State University, where I was advised by Kathryn Pruitt and worked in the Yochan lab under Subbarao Kambhampati.

My research is guided by the dual goals of teaching concepts to and distilling symbols from AI systems in human-interpretable form. Much of my work has focused on investigating the much-hyped in-context learning abilities of large language models, especially as applied to classical reasoning and planning tasks. I also work on leveraging the strengths of neural networks in representing tacit knowledge to improve the expressivity and efficiency of symbolic systems.

In my free time, I enjoy swimming, reading and writing fiction, and baking.

news

Jun 15, 2025 Started my Computer Science PhD at Yale.
Sep 25, 2024 “Chain of Thoughtlessness? An Analysis of CoT in Planning” has been accepted to the main track of NeurIPS 2024! And “LLMs Still Can’t Plan; Can LRMs? A Preliminary Evaluation of OpenAI’s o1 on PlanBench” has been accepted to the Open World Agents Workshop.
Sep 22, 2024 New preprint. We extend PlanBench to OpenAI’s o1-preview and o1-mini, and provide a preliminary analysis of the models’ capabilities: LLMs Still Can’t Plan; Can LRMs? A Preliminary Evaluation of OpenAI’s o1 on PlanBench.
May 31, 2024 New preprint analyzing how chain of thought approaches break down out-of-distribution: Chain of Thoughtlessness? An Analysis of CoT in Planning.
May 01, 2024 LLMs Can’t Plan, But Can Help Planning in LLM-Modulo Frameworks accepted into ICML 2024 and awarded a spotlight distinction.

latest posts