Kaya Stechly


I am a Linguistics M.A. student at Arizona State University interested in theoretical models of language learning, computational cognitive science, and language-mediated reasoning in both brains and machines. I am currently applying to PhD programs, targeting a Fall 2025 start.

My current research is split into two major threads:

On the AI side, I currently work at the Yochan lab under Subbarao Kambhampati. My research is guided by the dual goals of teaching concepts to, and distilling symbols from, AI systems in human-interpretable form. Much of my work has focused on investigating the much-hyped in-context learning abilities of large language models, especially as applied to classical reasoning and planning tasks. I also work on leveraging the strengths of neural networks in representing tacit knowledge to improve the expressivity and efficiency of symbolic systems.

In linguistics, I am studying models of phonological acquisition and evolution in the context of optimality theory and its extensions. I am particularly interested in the question of how linguistic typology is shaped not just by binary (possible vs. impossible) representational factors but by biases induced by the resource-efficiency of the learning algorithms that the human brain implements. My thesis advisor is Kathryn Pruitt.

In my free time, I enjoy swimming, reading and writing fiction, and baking.

news

Sep 25, 2024 “Chain of Thoughtlessness? An Analysis of CoT in Planning” has been accepted to the main track of NeurIPS 2024! And “LLMs Still Can’t Plan; Can LRMs? A Preliminary Evaluation of OpenAI’s o1 on PlanBench” has been accepted to the Open World Agents Workshop.
Sep 22, 2024 New preprint. We extend PlanBench to OpenAI’s o1-preview and o1-mini, and provide a preliminary analysis of the models’ capabilities: LLMs Still Can’t Plan; Can LRMs? A Preliminary Evaluation of OpenAI’s o1 on PlanBench.
May 31, 2024 New preprint analyzing how chain of thought approaches break down out-of-distribution: Chain of Thoughtlessness? An Analysis of CoT in Planning.
May 01, 2024 “LLMs Can’t Plan, But Can Help Planning in LLM-Modulo Frameworks” has been accepted to ICML 2024 and awarded a spotlight distinction.
Feb 29, 2024 New preprint, extending our work from last year on the efficacy of LLM self-verification: On the Self-Verification Limitations of Large Language Models on Reasoning and Planning Tasks.

latest posts