Jackson Petty
@jacksonpetty.org
200 followers 240 following 340 posts
the passionate shepherd, to his love • ἀρετῇ • מנא הני מילי
Pinned
jacksonpetty.org
Pingali and Bilardi (2015) just get me
jacksonpetty.org
lmao were you also on the 12:17 to GC?
jacksonpetty.org
you’re telling me a star spangled this banner??
Reposted by Jackson Petty
tallinzen.bsky.social
I'm hiring at least one post-doc! We're interested in creating language models that process language more like humans than mainstream LLMs do, through architectural modifications and interpretability-style steering. Express interest here: docs.google.com/forms/d/e/1F...
jacksonpetty.org
Kauaʻi is amazing
jacksonpetty.org
E.g.: general instruction following, or translation of *natural* languages based only on non-formal reference grammars. Our results here show that there is no a priori roadblock to success, but that there are overhangs between what models can do and what they actually do.
jacksonpetty.org
2. It’s natural to ask “well, why not just break out to tool use? Parsers can solve this task trivially.” That’s true! But I think it’s valuable to understand how formally-verifiable tasks can shed light on model behavior on tasks that aren’t formally verifiable.
jacksonpetty.org
This is contrary to the view that failure means “LLMs can’t reason”—failure here is likely correctable, and hopefully will make models more robust!
jacksonpetty.org
Why is this important? Well, two main reasons:
1. The overhang between models’ knowledge of *how* to solve the task and their ability to follow through gives me hope that we can produce models that are better at following complex instructions in-context.
jacksonpetty.org
So, what did we learn?
1. LLMs *do* know how to follow instructions, but they often don’t
2. The complexity of instructions and examples reliably predicts whether (current) models can solve the task
3. On hard tasks, models (and people, tbh) like to fall back to heuristics
jacksonpetty.org
But models often get distracted by irrelevant info, or “get lazy” and fall back on heuristics rather than actually verifying against the instructions. We use o4-mini as an LLM judge to classify model strategies: as examples get more complex, models shift from applying the rules to relying on heuristics:
jacksonpetty.org
So, how can LLMs succeed at this task, and why do they fail when grammars and examples get complex? Well, models do broadly understand the correct approach: even small models recognize they can build a CYK table or do an exhaustive top-down search of the derivation tree:
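(For reference, the CYK procedure the models describe is just a dynamic-programming table over spans. A minimal sketch for a toy grammar in Chomsky normal form; the grammar below is illustrative, not drawn from RELIC.)

```python
# Minimal CYK recognizer for a CFG in Chomsky normal form.
# Toy grammar, purely for illustration.
from itertools import product

# CNF rules: binary rules map (B, C) -> {A}; lexical rules map terminal -> {A}.
BINARY = {('NP', 'VP'): {'S'}, ('Det', 'N'): {'NP'}}
LEXICAL = {'the': {'Det'}, 'dog': {'N'}, 'barks': {'VP'}}

def cyk_recognize(tokens, start='S'):
    n = len(tokens)
    # table[i][j] holds the nonterminals deriving tokens[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][0] = set(LEXICAL.get(tok, set()))
    for span in range(2, n + 1):          # span length
        for i in range(n - span + 1):     # span start
            for split in range(1, span):  # split point
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for b, c in product(left, right):
                    table[i][span - 1] |= BINARY.get((b, c), set())
    return start in table[0][n - 1]

print(cyk_recognize(['the', 'dog', 'barks']))   # True
print(cyk_recognize(['dog', 'the', 'barks']))   # False
```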
jacksonpetty.org
In general, we find that models tend to agree with one another on which grammars (left) and which examples (right) are hard, though again 4.1-nano and 4.1-mini pattern with each other against others. These correlations increase with complexity!
jacksonpetty.org
Interestingly, models’ accuracies reflect divergent class biases: 4.1-nano and 4.1-mini love to predict strings as being positive, while all other models have the opposite bias; these biases also change with example complexity!
jacksonpetty.org
What do we find? All models struggle on complex instruction sets (grammars) and tasks (strings); the best reasoning models are better than the rest, but still approach ~chance accuracy when grammars (top) have ~500 rules, or when strings (bottom) have >25 symbols.
jacksonpetty.org
We release the static dataset used in our evals as RELIC-500, where the grammar complexity is capped at 500 rules.
jacksonpetty.org
We introduce RELIC as an LLM evaluation:
1. generate a CFG of a given complexity;
2. sample positive (parses) and negative (doesn’t parse) strings from the grammar’s terminal symbols;
3. prompt the LLM with a (grammar, string) pair and ask it to classify whether the grammar generates the given string.
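(In Python, the loop looks roughly like the sketch below. The toy grammar, sampling helpers, and stand-in classifier are illustrative assumptions, not the released RELIC code.)

```python
# Rough sketch of the RELIC-style evaluation loop with a toy grammar and
# a dummy classifier standing in for the prompted LLM.
import random
import nltk
from nltk.parse.generate import generate

GRAMMAR = nltk.CFG.fromstring("""
    S -> 'a' S 'b' | 'a' 'b'
""")
PARSER = nltk.ChartParser(GRAMMAR)

def positive_sample(max_depth=6):
    """Draw a string the grammar generates."""
    return random.choice(list(generate(GRAMMAR, depth=max_depth)))

def negative_sample(max_depth=6):
    """Shuffle a positive sample until the parser rejects it."""
    while True:
        tokens = positive_sample(max_depth)
        random.shuffle(tokens)
        if not list(PARSER.parse(tokens)):
            return tokens

def evaluate(classify, n_items=20):
    """classify(grammar_text, tokens) -> bool stands in for the prompted LLM."""
    correct = 0
    for _ in range(n_items):
        label = random.choice([True, False])
        tokens = positive_sample() if label else negative_sample()
        correct += int(classify(str(GRAMMAR), tokens) == label)
    return correct / n_items

# A trivial "always yes" baseline lands near 50% on balanced labels.
print(evaluate(lambda grammar_text, tokens: True))
```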
jacksonpetty.org
As an analogue for instruction sets and tasks, formal grammars have some really nice properties: they can be made arbitrarily complex, we can sample new ones easily (avoiding problems with dataset contamination), and we can verify a model’s accuracy using formal tools (parsers).
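(For concreteness, verification with a parser can be as simple as the sketch below; NLTK’s chart parser is an illustrative choice of formal tool here, not necessarily the paper’s tooling.)

```python
# Minimal sketch of verifying string membership with an off-the-shelf parser.
import nltk

grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> 'a' N
    N -> 'c' | 'c' N
    VP -> 'b'
""")
parser = nltk.ChartParser(grammar)

def grammar_generates(tokens):
    """True iff the grammar derives the token sequence."""
    try:
        return any(True for _ in parser.parse(tokens))
    except ValueError:
        # Raised when a token isn't one of the grammar's terminals.
        return False

print(grammar_generates(['a', 'c', 'c', 'b']))  # True
print(grammar_generates(['a', 'b', 'c']))       # False
```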
jacksonpetty.org
LLMs are increasingly used to solve tasks “zero-shot,” with only a specification of the task given in a prompt. To evaluate LLMs on increasingly complex instructions, we turn to a classic problem in computer science and linguistics: recognizing if a formal grammar generates a given string.
jacksonpetty.org
How well can LLMs understand tasks with complex sets of instructions? We investigate through the lens of RELIC: REcognizing (formal) Languages In-Context, finding a significant overhang between what LLMs are able to do theoretically and how well they put this into practice.
jacksonpetty.org
Such a shame that Apple doesn’t have much cash on hand for such expenditures
jacksonpetty.org
(not that this would _replace_ the scraped data in the near or medium term, but it probably would curry favor with public sentiment)