📓 Building instructa.ai Academy
Curated AI Prompts: instructa.ai/en/ai-prompts
🇺🇦 Raised 20k for UA
👨‍💻 coder & designer
Website: kevinkern.dev
📍 Vienna, Austria
- Quick install "npm install -g codex-1up"
- Add custom sounds when a task is finished
- Guided setup (easy for beginners)
- Lots of cleanup and improvements
- New, updated profiles to get started quickly
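Minimal install sketch (the codex-1up command name after install is an assumption; check the package README):

```sh
# install globally, then launch the guided setup
npm install -g codex-1up
codex-1up   # assumed binary name; the setup walks you through profiles and sounds
```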
The less context you have, the more specific your prompt should be.
In my case, I use a slash command with GPT-5. It usually surfaces a few flaws in the final implementation.
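For illustration, here's roughly what such a review command could look like as a prompt file. The ~/.codex/prompts path assumes Codex CLI custom prompts (other tools have similar command folders), and the wording is my own, not the exact command from the post:

```sh
# each markdown file in this folder becomes a slash command, e.g. /review
mkdir -p ~/.codex/prompts
cat > ~/.codex/prompts/review.md <<'EOF'
Review the latest implementation for flaws: missed edge cases,
inconsistencies with the spec, dead code, and weak error handling.
List concrete fixes, ordered by impact.
EOF
```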
You can now view all changes from Agent across multiple files in one place, without switching between files.
We've seen this in Codex before: Cursor now generates multiple versions of a task so you can pick the best one.
This is a good one for designers and frontend devs.
- You can now open an inline browser and directly select the elements you want to edit.
- The @browser command lets you access and chat with your site/app and test your changes live.
It's specifically designed to work with agents and optimized for multi-agent setups, like running agents in separate worktrees (see the sketch below) or controlling browsers.
What I've experienced so far:
- It's unbelievably FAST (similar to cheetah).
- The model is tightly bound to the codebase; questions outside it are likely dropped, while codebase-specific info takes priority.
- Works great for non-coding tasks too (e.g. docs, specs).
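For the worktree setup mentioned above, a minimal sketch (branch and folder names are placeholders): each agent gets its own checkout, so parallel runs don't step on each other's files.

```sh
# one checkout per agent, each on its own branch
git worktree add ../app-agent-a -b agent-a
git worktree add ../app-agent-b -b agent-b
git worktree list                  # show all active worktrees
# once a result wins, merge its branch and clean up the rest
git worktree remove ../app-agent-b
```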
I've been testing it for a few days and have put together a full summary in the thread below.
Click to see what's new 🧵
It's probably the only MCP I'm currently using.
I fetch all stories for a given epic in Markdown, including the specification stored in Confluence, and export the Figma screens.
Link in the comments
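For context, this is roughly how an MCP server gets wired into Cursor (the server name and package below are placeholders, not the actual MCP from the post; adjust the path for your tool):

```sh
# register an MCP server in ~/.cursor/mcp.json
# note: this overwrites the file; merge by hand if you already have servers
cat > ~/.cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "your-mcp-server-package"]
    }
  }
}
EOF
```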
It builds on the existing task lists, which shows they iterate on what's already there and bring features together instead of shipping them separately. I like that approach.
It seems that the configs for model_reasoning_effort (low, medium, high) are still available.
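A quick sketch, assuming this refers to Codex CLI's config at ~/.codex/config.toml:

```sh
# set the reasoning effort for the model
cat >> ~/.codex/config.toml <<'EOF'
model_reasoning_effort = "high"   # low | medium | high
EOF
```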