📓 Building instructa.ai Academy
Curated AI Prompts: instructa.ai/en/ai-prompts
🇺🇦 Raised 20k for UA
👨‍💻 coder & designer
Website: kevinkern.dev
📍 Vienna, Austria
- Quick install: `npm install -g codex-1up`
- Add custom sounds when a task is finished
- Guided setup (easy for beginners)
- Lots of cleanup and improvements
- New, updated profiles to get started quickly
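One of the bullets above, custom sounds when a task finishes, can also be approximated by hand. A minimal sketch, assuming a macOS `afplay` sound player (swap in `paplay`/`aplay` on Linux); none of this reflects codex-1up's actual implementation:

```shell
# Hypothetical wrapper: play a chime after a wrapped command finishes.
# Assumption: 'afplay' exists (macOS); it fails silently elsewhere.
run_with_chime() {
  "$@"                                   # run the wrapped command as-is
  status=$?                              # remember its exit code
  afplay /System/Library/Sounds/Glass.aiff >/dev/null 2>&1 || true
  return $status                         # propagate the original result
}
```

Usage: `run_with_chime npm test` would chime when the test run ends, pass or fail.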
In my case, I use a slash command with gpt-5. You'll usually find some flaws in the final implementation.
I've been testing it for a few days and have put together a full summary in the thread below.
Click to see what's new 🧵
It's probably the only MCP I'm currently using.
I fetch all stories for a given epic in Markdown, including the specification stored in Confluence, and export the Figma screens.
Link in the comments
It builds on the existing task lists, which shows they iterate on what's already there and bring features together instead of shipping them separately. I like that approach.
It seems the `model_reasoning_effort` config options (low, medium, high) are still available.
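If the option still behaves as before, it would sit in the Codex CLI config file, roughly like this (a sketch; check the current docs before relying on it):

```toml
# ~/.codex/config.toml — assuming the pre-existing Codex CLI option still applies
model_reasoning_effort = "high"   # one of: low, medium, high
```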
It took 5 iterations to get the green light from Codex to implement the plan.
But first impressions are positive. It's worth testing further in my workflow as a fast Q&A, planning, and peer-review model alongside GPT Codex.
Which probably means it comes from one of the major AI labs. If we follow the usual model-release cycle, it fits Gemini 3. And since it's pretty fast, it feels more like a Flash model than a Pro one.
1/ Agent Autocomplete for prompts
2/ Hooks to control & extend the Agent loop
3/ Team Rules: set global rules in the dashboard for all projects. Bugbot now follows team rules too.
4/ Share reusable prompts via links for docs
5/ Check agent status with the Menubar Monitor
It's really cool that it finally made it into the new Cursor version.
1. First, I'm using Iconify, a developer-friendly icon framework.
🧵👇
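As a minimal sketch of how Iconify is commonly wired into a page via its web component (the `mdi:home` icon name is just an example):

```html
<!-- Load the iconify-icon web component, e.g. after `npm install iconify-icon` -->
<script type="module">
  import "iconify-icon"; // registers the <iconify-icon> custom element
</script>

<!-- Reference any icon from Iconify's sets by "prefix:name" -->
<iconify-icon icon="mdi:home"></iconify-icon>
```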
Add Terraform, Ansible, and Dokku, and manage them with your own CLI.
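A rough sketch of that idea is a thin dispatcher over the three tools. The `infra` name, directories, and playbook paths below are made up for illustration, not from the post:

```shell
# Hypothetical "infra" wrapper: one entry point for provision/configure/deploy.
# Tool invocations are illustrative; adjust dirs, playbooks, and remotes to your repo.
infra() {
  case "${1:-}" in
    provision) shift; terraform -chdir=infra apply "$@" ;;  # create/update servers
    configure) ansible-playbook playbooks/site.yml ;;       # configure them
    deploy)    git push dokku main ;;                       # ship the app via Dokku
    *) echo "usage: infra {provision|configure|deploy}"; return 1 ;;
  esac
}
```

Running `infra` with no (or an unknown) subcommand prints the usage line and fails, so it composes cleanly in scripts.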
But Anthropic has never released a stealth model, and xAI recently published grok-code-fast-1 (sonic).