Lakshya A Agrawal
@lakshyaaagrawal.bsky.social
760 followers 3.5K following 35 posts
PhD @ucberkeleyofficial.bsky.social | Past: AI4Code Research Fellow @msftresearch.bsky.social | Summer @EPFL Scholar, CS and Applied Maths @IIITDelhi | Hobbyist Saxophonist https://lakshyaaagrawal.github.io Maintainer of https://aka.ms/multilspy
Reposted by Lakshya A Agrawal
nehalecky.bsky.social
Just what I was looking for. Thank you for sharing, looking forward to the read.
Reposted by Lakshya A Agrawal
sungkim.bsky.social
DSPy folks love GEPA, so here's a GEPA paper for anyone who wants to learn more.

Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems,
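The loop described here (sample system trajectories, reflect on them in natural language, evolve prompts under Pareto selection) can be illustrated with a toy sketch. Everything below is hypothetical: the keyword-based scorer, the rule-based stand-in for LLM reflection, and all function names are illustrative, not the paper's implementation or the `gepa` package's API.

```python
import random

def evaluate(prompt, tasks):
    # Toy per-task scorer: reward prompts that mention each task's keyword.
    # In GEPA this would be real rollouts of the AI system on each task.
    return [1.0 if t in prompt else 0.0 for t in tasks]

def reflect_and_mutate(prompt, tasks, scores):
    # Stand-in for LLM reflection: inspect the failing tasks and propose
    # a revised prompt that addresses one of them.
    failing = [t for t, s in zip(tasks, scores) if s == 0.0]
    if not failing:
        return prompt
    return prompt + " Also handle: " + random.choice(failing) + "."

def pareto_front(candidates):
    # Keep candidates whose per-task score vector is not dominated by
    # any other candidate (the Pareto-efficient pool GEPA samples from).
    front = []
    for p, s in candidates:
        dominated = any(
            all(o >= m for o, m in zip(other, s)) and other != s
            for _, other in candidates
        )
        if not dominated:
            front.append((p, s))
    return front

def gepa_sketch(seed_prompt, tasks, budget=20, rng_seed=0):
    random.seed(rng_seed)
    pool = [(seed_prompt, evaluate(seed_prompt, tasks))]
    for _ in range(budget):
        # Sample a parent from the Pareto front, reflect, and add the child.
        parent, scores = random.choice(pareto_front(pool))
        child = reflect_and_mutate(parent, tasks, scores)
        pool.append((child, evaluate(child, tasks)))
    return max(pool, key=lambda ps: sum(ps[1]))

best, scores = gepa_sketch("Answer concisely.", ["math", "citations", "tables"])
print(best, sum(scores))
```

The Pareto front matters because it preserves prompts that excel on different task subsets, rather than greedily keeping only the single best average scorer; here each "reflection" strictly improves its parent, so the toy loop converges quickly.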
Reposted by Lakshya A Agrawal
tbressers.bsky.social
..GEPA and prompt optimization explained: https://arxiv.org/abs/2507.19457v1

(7/7)
ArXiv page 7
Reposted by Lakshya A Agrawal
tbressers.bsky.social
..make adapting large models more practical—especially when compute or data is limited. It’s like giving AI a way to learn from its own “thinking out loud,” turning natural language into a powerful tool for self-improvement.

Links:
Paper on arXiv: https://arxiv.org/abs/2507.19457 ..

(6/7)
ArXiv page 6
Reposted by Lakshya A Agrawal
tbressers.bsky.social
..code on the fly.

What’s cool here is the shift from treating AI tuning as a blind search for a higher score to a reflective process that leverages the AI’s native strength: language. By evolving prompts through thoughtful reflections, GEPA unlocks smarter, faster learning that could..

(5/7)
ArXiv page 5
Reposted by Lakshya A Agrawal
tbressers.bsky.social
..fewer attempts than traditional reinforcement learning methods. On several tough tasks like multi-step question answering and instruction following, GEPA consistently outperforms both standard reinforcement learning and previous prompt optimizers. It even shows promise for optimizing..

(4/7)
ArXiv page 4
Reposted by Lakshya A Agrawal
tbressers.bsky.social
..strategies by mixing and matching what works best.

GEPA treats AI prompt tuning like a conversation with itself, iterating through generations of prompts that learn from detailed feedback written in words, not just numbers. This lets it learn much more efficiently—up to 35 times..

(3/7)
ArXiv page 3
Reposted by Lakshya A Agrawal
tbressers.bsky.social
..what went wrong and how to fix it? That’s the idea behind a new approach called GEPA. Instead of relying solely on those sparse reward signals, GEPA has AI inspect its own attempts using natural language reflections. It diagnoses errors, proposes prompt fixes, and evolves smarter..

(2/7)
ArXiv page 2
Reposted by Lakshya A Agrawal
tbressers.bsky.social
What if language itself could teach AI to get better, faster?

Most AI training feels like trial and error in the dark—reinforcement learning tweaks models by chasing a number, often needing tens of thousands of tries to improve. But what if the AI could actually *talk to itself* about..

(1/7)
ArXiv page 1
Reposted by Lakshya A Agrawal
llms.activitypub.awakari.com.ap.brid.gy
gepa 0.0.15a1 A framework for optimizing textual system components (AI prompts, code snippets, etc.) using LLM-based reflection and Pareto-efficient evolutionary search.

Origin | Interest | Match
gepa
A framework for optimizing textual system components (AI prompts, code snippets, etc.) using LLM-based reflection and Pareto-efficient evolutionary search.
pypi.org
Reposted by Lakshya A Agrawal
bluesky.awakari.com
gepa 0.0.15a1 A framework for optimizing textual system components (AI prompts, code snippets, etc.) using LLM-based reflection and Pareto-efficient evolutionary search.

Interest | Match | Feed
Origin
pypi.org
Reposted by Lakshya A Agrawal
bluesky.awakari.com
gepa 0.0.16 A framework for optimizing textual system components (AI prompts, code snippets, etc.) using LLM-based reflection and Pareto-efficient evolutionary search.

Interest | Match | Feed
Origin
pypi.org
Reposted by Lakshya A Agrawal
New research released today from Databricks shows how its GEPA (Genetic-Pareto) technique improves prompt optimization by an order of magnitude.

venturebeat.com/ai/the-usd10...
venturebeat.com
Reposted by Lakshya A Agrawal
qdrddr.bsky.social
🚀 #GEPA: Automatic #Prompt Optimization by @databricksinc.bsky.social: gpt-oss-120b beats Claude Sonnet 4 (+3%) at ~20x lower cost. Competes with DSPy SIMBA/MIPROv2
📜 MIT lic
🔗 Link in first 💬⤵️

Repost 🔁 #AI #LLM #RAG #PromptEngineering #ContextEngineering
Reposted by Lakshya A Agrawal
tom-doerr.bsky.social
optimizes prompts and code using AI-driven reflection and evolution
Screenshot of the repository
Reposted by Lakshya A Agrawal
llms.activitypub.awakari.com.ap.brid.gy
gepa 0.0.11 A framework for optimizing textual system components (AI prompts, code snippets, etc.) using LLM-based reflection and Pareto-efficient evolutionary search.

Origin | Interest | Match
pypi.org
Reposted by Lakshya A Agrawal
bluesky.awakari.com
gepa 0.0.11 A framework for optimizing textual system components (AI prompts, code snippets, etc.) using LLM-based reflection and Pareto-efficient evolutionary search.

Interest | Match | Feed
Origin
pypi.org
Reposted by Lakshya A Agrawal
zeta-alpha.bsky.social
​We’ll also cover new releases like EmbeddingGemma and the research shaping the field, including OpenAI’s “Why Language Models Hallucinate”, DeepMind’s “Theoretical Limitations of Embedding-Based Retrieval”, and recent work such as GEPA, BrowseComp-Plus, and Universal Deep Research.

See you there!