Having agents run tests so the test output lands in their context can greatly improve results. It effectively gives the tool additional attempts with additional information, without you having to prompt multiple times.
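That feedback loop can be sketched roughly like this — `ask_model` is a hypothetical stand-in for whatever LLM call the tool makes, `run_tests` for whatever test command it shells out to (pytest is just one example), and the loop shape is an assumption about how such tools work:

```python
import subprocess


def run_pytest() -> tuple[bool, str]:
    """One possible run_tests: shell out to pytest and capture its output."""
    result = subprocess.run(
        ["pytest", "-x", "--tb=short"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr


def agent_fix_loop(ask_model, run_tests, prompt: str, max_attempts: int = 3) -> bool:
    """Retry until the tests pass, feeding each failure's output back
    into the model's context for the next attempt."""
    context = prompt
    for attempt in range(1, max_attempts + 1):
        ask_model(context)  # the model edits files in the working tree
        passed, output = run_tests()
        if passed:
            return True
        # The failure output becomes extra information for the next try.
        context = f"{prompt}\n\nTests failed (attempt {attempt}):\n{output}"
    return False
```

Each failed run is, in effect, a free extra prompt: the model gets another attempt plus the concrete error output, with no human in the loop.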
- humans come up with rather simple, ad-hoc, stream-of-consciousness prompts, already having a clear mental model of the shape of the change they are trying to generate
- good automatic test coverage to keep the LLM from breaking what is there
- brownfield setting where there is already a lot of prior art for the LLM agent to draw "inspiration" from
What about when engineers at the top of their game use AI tools responsibly to accelerate their work?
I propose "vibe engineering"!
simonwillison.net/2025/Oct/7/v...