github.com/kodroi/block
1. Update .editorconfig to treat all analyzer issues as errors (sketch below)
2. Add a stop hook to build the project
This way Claude will work until all the code matches your standards.
What analyzer are you missing to get this done?
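A minimal sketch of that .editorconfig change for a .NET project (the category and rule ID below are only examples, swap in your own analyzers):

  # Treat every analyzer diagnostic as a build error by default
  [*.cs]
  dotnet_analyzer_diagnostic.severity = error

  # Or escalate a whole category, or a single rule
  dotnet_analyzer_diagnostic.category-Style.severity = error
  dotnet_diagnostic.CA1062.severity = error

Note that IDE code-style rules only fail the build when EnforceCodeStyleInBuild is enabled in the project file.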
Use the Claude safety net and have a peaceful lunch while Claude gets your feature done!
github.com/kenryu42/cla...
Create deterministic feedback loops.
Add a stop hook that runs your build and tests (sketch below). That's just the start: turn up your static analysis and get the exact patterns you want, every time.
#ClaudeCode #AgenticCoding
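A minimal sketch of such a stop hook in .claude/settings.json, assuming a .NET project (check the hooks docs for the exact schema in your Claude Code version):

  {
    "hooks": {
      "Stop": [
        {
          "hooks": [
            {
              "type": "command",
              "command": "dotnet build --warnaserror && dotnet test || exit 2"
            }
          ]
        }
      ]
    }
  }

The "|| exit 2" part matters: per the hooks docs, exit code 2 is the blocking one, so Claude is told to keep working instead of stopping when the build or tests fail.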
Can’t we just define what should be tested with pattern analysis and architectural practices and monitor that?
#UnitTesting #TestAutomation
Add scaffolding.
Scripts that run to verify the code whenever it changes. Guards that prevent editing certain files (see the sketch below).
With Claude Code, you can achieve this with hooks code.claude.com/docs/en/hook...
#ClaudeCode #AgenticCoding
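For the guard part, a PreToolUse hook can reject edits to protected paths before they happen. A rough sketch for .claude/settings.json, assuming jq is available; the protected patterns are made up, and the stdin field names should be checked against the hooks docs:

  {
    "hooks": {
      "PreToolUse": [
        {
          "matcher": "Edit|Write",
          "hooks": [
            {
              "type": "command",
              "command": "jq -r '.tool_input.file_path // empty' | grep -qE 'Migrations/|editorconfig' && exit 2 || exit 0"
            }
          ]
        }
      ]
    }
  }

The hook gets the pending tool call as JSON on stdin; if the target file matches a protected pattern, exit code 2 blocks the edit.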
Feedback loops. Do the tests pass, and does the code meet our coding standards?
Those are deterministic questions: the same input gives the same output. AI models are not deterministic but statistical; they give a different answer every time.
Feedback loops change that.
Create a custom code analyzer that fails if the pattern is still used.
I've been creating them in .NET (with Claude). The patterns are treated as build errors --> Claude runs until all the patterns are fixed.
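A minimal Roslyn analyzer sketch of the idea in C#. The diagnostic ID, message, and the banned pattern (DateTime.Now) are illustrative assumptions, not a specific analyzer from the thread:

  using System.Collections.Immutable;
  using Microsoft.CodeAnalysis;
  using Microsoft.CodeAnalysis.CSharp;
  using Microsoft.CodeAnalysis.CSharp.Syntax;
  using Microsoft.CodeAnalysis.Diagnostics;

  [DiagnosticAnalyzer(LanguageNames.CSharp)]
  public sealed class NoDateTimeNowAnalyzer : DiagnosticAnalyzer
  {
      // Severity Error means every remaining occurrence fails the build.
      private static readonly DiagnosticDescriptor Rule = new(
          id: "TEAM0001",
          title: "Do not use DateTime.Now",
          messageFormat: "Use the injected clock instead of DateTime.Now",
          category: "TeamConventions",
          defaultSeverity: DiagnosticSeverity.Error,
          isEnabledByDefault: true);

      public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
          => ImmutableArray.Create(Rule);

      public override void Initialize(AnalysisContext context)
      {
          context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
          context.EnableConcurrentExecution();
          context.RegisterSyntaxNodeAction(Analyze, SyntaxKind.SimpleMemberAccessExpression);
      }

      private static void Analyze(SyntaxNodeAnalysisContext context)
      {
          // Flag member accesses ending in .Now on a type named DateTime (good enough for a sketch).
          var access = (MemberAccessExpressionSyntax)context.Node;
          if (access.Name.Identifier.ValueText == "Now" &&
              context.SemanticModel.GetSymbolInfo(access).Symbol?.ContainingType?.Name == "DateTime")
          {
              context.ReportDiagnostic(Diagnostic.Create(Rule, access.GetLocation()));
          }
      }
  }

Because the severity is Error, the build fails on every remaining occurrence, and with the stop hook in place Claude keeps editing until the build is clean.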
The principle of least astonishment, or the principle of least WTFs. en.wikipedia.org/wiki/Princip...
Prompt: "Review my changes for POLA violations"
Result: "Code that's easy to follow"
#claudecode
1. Use plan mode
2. Plan your feature
3. Select: Yes, clear context
1. Create architecture tests (see the sketch after this list)
2. Add a stop hook that runs those tests
3. Claude Code won't stop until the implementation matches your architecture
4. Less micro-managing, more value
#claudecode
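One way to write that kind of architecture test in .NET is NetArchTest plus xUnit. The layer and assembly names here (MyApp.Domain, MyApp.Infrastructure) are assumptions, a sketch rather than the exact tests from the thread:

  using System.Reflection;
  using NetArchTest.Rules;
  using Xunit;

  public class ArchitectureTests
  {
      // Rule: domain code must not reference the infrastructure layer.
      // The stop hook runs this via "dotnet test", so Claude keeps iterating until it holds.
      [Fact]
      public void Domain_does_not_depend_on_infrastructure()
      {
          var result = Types.InAssembly(Assembly.Load("MyApp.Domain"))
              .That().ResideInNamespace("MyApp.Domain")
              .ShouldNot().HaveDependencyOn("MyApp.Infrastructure")
              .GetResult();

          Assert.True(result.IsSuccessful);
      }
  }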
Agentic coding with feedback loops.
1) Add a Claude Code stop hook that builds, runs the tests, checks code quality, and tests the architecture
2) Tell Claude Code to refactor a class. It will keep running until the hooks pass.
But the bigger gap? Business and technical teams who can't speak the same language.
AI fails when strategy and execution live in different worlds.
Poor quality. Unstructured. Inaccessible.
Before you buy another AI tool, ask: do we actually have the data foundation this needs to work?
Not because AI doesn't work. Because we're solving the wrong problems.
The question isn't "what can AI do?" It's "what problem are we actually trying to solve?"