Kentaromiura (@kentaromiura.bsky.social)
As a demo I showed what my AI built all on its own, on the first try, from a single small prompt:

cristian.tokyo/img/blockgam... but if you're on mobile you might want to use cristian.tokyo/img/blockgam..., which was also vibe engineered the same way, using the above as a starting point.
Tetris Game
cristian.tokyo
Some Tuesdays at work we have a “vibe engineering” session where we discuss stuff; this Tuesday I spoke about how to make AI agents that use local LLMs to reproduce @roocode.bsky.social's “boomerang” mode on the CLI with multiple local LLMs, using github.com/kentaromiura... and @lmstudio-ai.bsky.social
GitHub - kentaromiura/ODM: On demand model proxy server
On demand model proxy server. Contribute to kentaromiura/ODM development by creating an account on GitHub.
github.com
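For a rough idea of what a “boomerang” flow looks like, here is a minimal sketch: an orchestrator model splits a task into subtasks, each subtask boomerangs out to a specialist model, and the results come back to be stitched together. It assumes an OpenAI-compatible local server (LM Studio serves one on localhost:1234 by default, with something like ODM proxying model loads in front); the model names are placeholders, not the author's actual setup.

```python
# Sketch of a boomerang-style flow over a local OpenAI-compatible endpoint.
# Endpoint and model names are assumptions, not the author's exact config.
import requests

BASE = "http://localhost:1234/v1/chat/completions"

def ask(model: str, prompt: str) -> str:
    """Send one chat completion request to the local server."""
    resp = requests.post(BASE, json={
        "model": model,  # an on-demand proxy would load this model here
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=600)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The orchestrator model breaks the task down, one subtask per line.
plan = ask("qwen/qwen3-4b-thinking-2507",
           "Split this task into small independent subtasks, one per line: "
           "write rule 110 in Odin and print 32 generations.")

# Each subtask "boomerangs" out to a coder model and comes back with a result.
results = [ask("openai/gpt-oss-20b", f"Complete this subtask:\n{sub}")
           for sub in plan.splitlines() if sub.strip()]

# The orchestrator stitches the pieces back together.
print(ask("qwen/qwen3-4b-thinking-2507",
          "Combine these results into one coherent answer:\n" + "\n---\n".join(results)))
```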
Since then I've released two local tools.
The first is for Jp/En translation (and back); the latest lets you load and unload different models on demand, allowing a better agentic flow on a local development machine.
ODM: github.com/kentaromiura...
ShallowT: github.com/kentaromiura...
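The load/unload-on-demand idea can be sketched in a few lines; ODM's actual mechanism may differ, and this just illustrates the concept by shelling out to LM Studio's `lms` command-line tool (model keys are placeholders):

```python
# Sketch: hold a model in memory only while it's needed, so a planner and a
# coder model can alternate on a machine that can't fit both at once.
import subprocess
from contextlib import contextmanager

@contextmanager
def on_demand(model_key: str):
    """Load a model for the duration of a block, then free the memory again."""
    subprocess.run(["lms", "load", model_key], check=True)
    try:
        yield model_key
    finally:
        subprocess.run(["lms", "unload", model_key], check=True)

with on_demand("qwen3-4b-thinking-2507"):
    ...  # planning requests go here
with on_demand("gpt-oss-20b"):
    ...  # coding requests go here
```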
Hey guys, I’ve found a way to make my cocktail sauce recipe hotter and pinker!
NeoSoft NeoPaint, early '90s. I first used it on a 286 in middle school, together with NeoBook.
My guess: someone had to repeat the same transition start properties many times and was bothered enough to create a spec.
I've fully entered the local LLM rabbit hole. Please send help (or an AMD AI MAX if you hate me).
This is the best advertisement for grok tbqh.
THE GIRLS ARE FIGHTINGGG
A few years ago (2011) I made a script that downloaded lots of news from different RSS feeds, put it all together in a page, ran it through Calibre to convert to MOBI, and sent it to my Kindle every morning; I think that's now a prompt away.
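The core of that pipeline is small; a minimal sketch, with placeholder feed URLs and the send-to-Kindle step (e-mail or USB) left out for brevity:

```python
# Pull RSS items, build one HTML page, convert with Calibre's ebook-convert CLI.
import subprocess
import feedparser  # pip install feedparser

FEEDS = ["https://example.com/rss"]  # placeholder feed list

items = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries[:20]:
        items.append(f"<h2>{entry.title}</h2><p>{entry.get('summary', '')}</p>")

with open("morning.html", "w", encoding="utf-8") as f:
    f.write("<html><body>" + "\n".join(items) + "</body></html>")

# Calibre ships an `ebook-convert` command that handles HTML -> MOBI.
subprocess.run(["ebook-convert", "morning.html", "morning.mobi"], check=True)
```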
So yesterday Jan V1 was released, based on this model. I tried it in the Jan app; after configuring the DuckDuckGo and fetcher MCPs I managed to get a little assistant to summarize news etc., although it got Wikipedia winning instead of losing.
Despite this, for the first time I've got two local LLMs finishing my test: both the stubborn but small Qwen/Qwen3-4B-Thinking-2507 and gpt-oss-20b; the latter, with its huge context, could even complete the task on the first try via aider.
The test is asking it to write rule 110 in Odin.
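The actual test targets Odin, but the automaton itself is tiny; here is the same rule sketched in Python for illustration, with the bit-ordering spelled out (it's exactly where models trip up, as the exchange further down shows):

```python
# Rule 110 sketched in Python (the author's test asks for Odin).
# The neighborhood (left, center, right) indexes one bit of the rule number:
# index = left*4 + center*2 + right, with (1,1,1) at the MSB end of the
# truth table -- mixing up that ordering is the rule 110 vs 118 trap.
RULE = 110
WIDTH = 64

cells = [0] * WIDTH
cells[-1] = 1  # single live cell on the right edge

for _ in range(32):
    print("".join("#" if c else "." for c in cells))
    # cells[i - 1] wraps around at i == 0 thanks to Python's negative indexing
    cells = [
        (RULE >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```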
I later tried the Odin challenge with search available; it ended up copying some gists. I asked it to iterate on that, and again it searched for premade stuff before doing it, so real engineer behavior lol. It always gave attribution though, so maybe asking it to check for an MIT license might be fine.
I think AI is remarkably similar to human engineer behavior. Maybe a bit too much.
Anyway, it's the first model I tried that passed that test, with gpt-oss being the second; gpt-oss could even output the correct program in one go when fed the Odin documentation and the rule 110 wiki page with lots of context (the Odin doc is, I think, around 15K).
I managed to get this model to produce the right result for my special test (write rule 110 in the Odin programming language). After I corrected the wrong syntax it remembered the corrections and eventually got it right, though not before getting the order of bits wrong and telling me they were right.
In your opinion. If it works and it solves a problem, then it has real worth.
I disagree: if the barrier to entry is lower using .NET, Java, Electron/Tauri, or Lisp, Lua, BASIC, or Python, and it solves a problem, then having something done now, versus maybe never, is worth infinitely more.
We've reached a point where the dead internet theory is true not because there are more bots than humans, but because the slop being created and the insane engagement make social media basically useless. Manual curation is the only way forward now.
Expedition 33 has one of the worst platforming parts of any game. If I were to demo the game and score it solely on that, it would get a 0.
This is a difficult task because no local LLMs are trained on Odin code and the syntax is alien to them. My only requirement is that I don't have to write a line of code myself.
gpt-oss is very good at failing fast with a small context, so you can get many iterations in relatively little time.
>Hey AI, I think instead of rule 110 you implemented rule 118; you've got the MSB and LSB backwards.
AI: I see, that's a common mistake when implementing cellular automata; I'm actually correct.
>No you're not; you are reading the truth table backwards. The MSB is bit 7, which you think is bit 0.
AI: Acthually..
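The 110 vs 118 mix-up is exactly a bit-reversal of the rule number: read the eight-entry truth table in the opposite order and rule 110 becomes rule 118. A quick check:

```python
# Reading rule 110's truth table backwards reverses its 8 bits:
# 110 = 0b01101110  ->  reversed  ->  0b01110110 = 118.
rule = 110
reversed_rule = int(f"{rule:08b}"[::-1], 2)
print(reversed_rule)  # 118
```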
TIL if you zoom in enough you can read “Here's to the crazy ones.” in the document emoji on a Mac.
So you all know Serial Experiments Lain, but do you know