Sian Gooding
@siangooding.bsky.social
Senior Research Scientist @GoogleDeepMind working on Autonomous Assistants ✍️🤖
Reposted by Sian Gooding
jevakallio.dev
👋 We're building a new type of word processor at Marker, and we're hiring for React/ProseMirror engineers and full-stack AI engineers to join the team in London.

Are you an engineer who cares about writing? Or do you know someone who does?

See: writewithmarker.com/jobs

More details below 👇
siangooding.bsky.social
You’ll collaborate with a kind, curious, research-driven team—including the brilliant @joao.omg.lol & @martinklissarov.bsky.social—and get to shape work at the frontier of multi-agent learning.

If that sounds like you, apply!

DM me if you're curious or have questions
siangooding.bsky.social
Some big questions we’re thinking about:
1️⃣ How do communication protocols emerge?
2️⃣ What inductive biases help coordination?
3️⃣ How can language improve generalisation and transfer?
siangooding.bsky.social
We’re interested in:
🤖🤖 Multi-agent RL
🔠 Emergent language
🎲 Communication games
🧠 Social & cognitive modelling
📈 Scaling laws for coordination
siangooding.bsky.social
The project explores how agents can learn to communicate and coordinate in complex, open-ended environments—through emergent protocols, not hand-coded rules.
siangooding.bsky.social
🚨 I’m hosting a Student Researcher @GoogleDeepMind!

Join us on the Autonomous Assistants team (led by @egrefen.bsky.social) to explore multi-agent communication—how agents learn to interact, coordinate, and solve tasks together.

DM me for details!
siangooding.bsky.social
Our work highlights the need for LLMs to improve in areas like action selection, self-evaluation, and goal alignment to perform robustly in open-ended tasks.

The implications extend beyond writing assistance to autonomous LLM workflows in open-ended use cases more generally.
siangooding.bsky.social
Finding: LLMs can lose track of the original goal during iterative refinement, leading to "semantic drift" - a divergence from the author's intent. This is a key challenge for autonomous revision. ✍️
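(Illustrative sketch only, not the paper's method: one way to quantify this kind of drift is to embed the original draft and each successive revision, then watch similarity to the original decay. The embedding model chosen here is an arbitrary assumption.)

```python
# Hypothetical drift measure: cosine similarity of each revision to the
# original draft. A steadily falling curve would signal the reviser
# losing track of the author's intent.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def drift_curve(original: str, revisions: list[str]) -> list[float]:
    """Similarity of each revision to the original draft (1.0 = identical meaning)."""
    anchor = model.encode(original, convert_to_tensor=True)
    return [
        util.cos_sim(anchor, model.encode(rev, convert_to_tensor=True)).item()
        for rev in revisions
    ]
```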
siangooding.bsky.social
Finding: LLMs struggle to reliably filter their own suggestions. They need better self-evaluation to work effectively in autonomous revision workflows. ⚖️
siangooding.bsky.social
Finding: Gemini 1.5 Pro produced the highest quality editing suggestions, according to human evaluators, outperforming Claude 3.5 Sonnet and GPT-4o 🦾
siangooding.bsky.social
Finding: LLMs tend to favour adding content, whereas human editors remove or restructure more. This suggests LLMs are sycophantic, reinforcing existing text rather than critically evaluating it. ➕
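(Again illustrative, not the paper's taxonomy: a crude way to tally add-vs-remove behaviour is to count word-level diff operations between draft versions.)

```python
# Hypothetical sketch: classify edits by their word-level diff opcodes.
# An editor that mostly emits "insert" ops favours adding content;
# one that emits "delete"/"replace" ops removes or restructures.
from collections import Counter
from difflib import SequenceMatcher

def edit_actions(before: str, after: str) -> Counter:
    """Tally word-level edit operations between two versions of a text."""
    ops = SequenceMatcher(None, before.split(), after.split()).get_opcodes()
    return Counter(tag for tag, *_ in ops if tag != "equal")

print(edit_actions("the quick brown fox", "the very quick fox jumps"))
# Counter({'insert': 2, 'delete': 1})
```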
siangooding.bsky.social
Why? There are many possible solutions and no single 'right' answer. Success is difficult to gauge!

We examine how LLMs generate and select text revisions, comparing their actions to those of human editors. We focus on action diversity, alignment with human preferences, and iterative improvement
siangooding.bsky.social
Our paper explores this by analysing LLMs as autonomous co-writers. Work done with Lucia Lopez Rivilla and @egrefen.bsky.social 🫶

Open-ended tasks like writing are a real challenge for LLMs (even powerful ones like Gemini 1.5 Pro, Claude 3.5 Sonnet, and GPT-4o).
siangooding.bsky.social
New paper from our team @GoogleDeepMind!

🚨 We've put LLMs to the test as writing co-pilots – how good are they really at helping us write? LLMs are increasingly used for open-ended tasks like writing assistance, but how do we assess their effectiveness? 🤔

arxiv.org/pdf/2503.19711
Reposted by Sian Gooding
tomerullman.bsky.social
you're telling me a cherry picked this example?
Reposted by Sian Gooding
rritunnano.bsky.social
Instead of listing my publications as the year draws to an end, I want to shine a spotlight on the commonplace assumption that productivity must always increase. Good research is disruptive, and the thinking time central to high-quality scholarship is exactly what disruptive work requires.