Horacio Pérez Sánchez
@horaciops.bsky.social
🧬 Researcher | 🔍 Molecular Discovery & AI | 🎙️ Host of "Investig la I." (350+ ep) | ❌ Don’t follow the algorithm | 🧪 Model the system | ❓ Ask better questions | https://horacio-ps.com
Before concluding, I am curious: what do you think works best today for this kind of AI-assisted, exploratory coding? Are you seeing... #AI #ProgrammingLanguages #VibeCoding #DeclarativeProgramming #LLMs #HumanComputerInteraction #SoftwareDesign #FutureOfCoding #PWL 🧵 14/14
January 20, 2026 at 11:11 PM
Or will the real change happen in tools and environments, not in languages themselves? 🧵 13/14
January 20, 2026 at 11:11 PM
This opens more exploratory questions. Are we moving towards "languages of intention" rather than programming languages? Will future developers mostly design constraints and let AI fill the gaps? 🧵 12/14
January 20, 2026 at 11:11 PM
Specification-first approaches (schemas, APIs, policies), where code becomes a by-product. This shifts the creative act from writing code to designing clear rules and intentions, which AI can then implement automatically. 🧵 11/14
January 20, 2026 at 11:11 PM
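A minimal sketch of what specification-first could look like, assuming a plain TypeScript object as the spec; `RefundPolicy`, `canRefund`, and the field names are all illustrative, not from any real system:

```typescript
// Illustrative: the policy object is the artifact a human designs;
// the check below is the kind of code an AI could derive from it.
interface RefundPolicy {
  maxDays: number;       // refund window, in days
  maxAmount: number;     // cap per refund, in cents
  requiresReceipt: boolean;
}

const policy: RefundPolicy = { maxDays: 30, maxAmount: 50_00, requiresReceipt: true };

// The by-product: mechanical to generate once the spec is clear.
function canRefund(daysSincePurchase: number, amount: number, hasReceipt: boolean): boolean {
  return (
    daysSincePurchase <= policy.maxDays &&
    amount <= policy.maxAmount &&
    (!policy.requiresReceipt || hasReceipt)
  );
}

console.log(canRefund(10, 25_00, true)); // true
console.log(canRefund(45, 25_00, true)); // false: outside the window
```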
Intermediate or target representations (like WebAssembly), where humans disappear from the loop. These formats could allow AI systems to exchange and optimize code directly, without needing human-readable syntax. 🧵 10/14
January 20, 2026 at 11:11 PM
Strongly typed ecosystems (TypeScript, Rust), where errors are visible early. This helps AI learn from structure and constraints, reducing confusion and producing more reliable code. 🧵 9/14
January 20, 2026 at 11:11 PM
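A small TypeScript sketch of the "errors visible early" idea: a discriminated union makes every case explicit, so a missed case surfaces at compile time (under strict settings) rather than at runtime. The `Shape` type is my example, not the author's:

```typescript
// The compiler knows exactly which variants exist.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
    // Adding a new variant to Shape without a case here becomes a
    // compile-time error, not a silent runtime bug.
  }
}

console.log(area({ kind: "circle", radius: 2 })); // ~12.566
```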
Declarative languages (like SQL or GraphQL), where intent is explicit. This means AI can focus on understanding what we want instead of how to do it, making collaboration smoother and faster. 🧵 8/14
January 20, 2026 at 11:11 PM
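To make the what-versus-how contrast concrete, a sketch in TypeScript; the SQL string states the intent directly (table and column names are made up), while the function spells out every step:

```typescript
// Declarative: state WHAT you want; the engine decides how to execute it.
const query = `
  SELECT name, total
  FROM orders
  WHERE total > 100
  ORDER BY total DESC
`;

// Imperative equivalent: spell out HOW, step by step.
type Order = { name: string; total: number };
function topOrders(orders: Order[]): Order[] {
  const result: Order[] = [];
  for (const o of orders) {
    if (o.total > 100) result.push(o);
  }
  result.sort((a, b) => b.total - a.total);
  return result;
}

console.log(topOrders([{ name: "a", total: 150 }, { name: "b", total: 50 }]));
```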
Some languages are simply easier for AI to "reason with", even if they were never designed for that purpose. I looked around a bit and found some interesting options and patterns: 🧵 7/14
January 20, 2026 at 11:11 PM
AI tends to do better when the task is clear and the rules are easy to follow. There are also existing languages that seem to work better than others in this context. Strong typing, explicit semantics, and limited ambiguity matter more than syntax. 🧵 6/14
January 20, 2026 at 11:11 PM
Many things are already moving in this direction, even if no one calls them "languages for AI." We now use simple, structured ways to tell computers what we want — like clear instructions, templates, or examples — instead of explaining every step. 🧵 5/14
January 20, 2026 at 11:11 PM
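One way to read "templates or examples" is few-shot prompting. A hypothetical sketch of a structured prompt builder; the function and its format are mine, not a real API:

```typescript
// Hypothetical: structure plus examples, instead of explaining every step.
function buildPrompt(task: string, examples: Array<[string, string]>, input: string): string {
  const shots = examples
    .map(([inp, out]) => `Input: ${inp}\nOutput: ${out}`)
    .join("\n\n");
  return `Task: ${task}\n\n${shots}\n\nInput: ${input}\nOutput:`;
}

console.log(
  buildPrompt(
    "Classify the sentiment as positive or negative.",
    [
      ["I love this library", "positive"],
      ["The build keeps failing", "negative"],
    ],
    "The docs are excellent",
  ),
);
```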
If programming becomes more conversational and less procedural, then the friction is no longer computation — it is expression. 🧵 4/14
January 20, 2026 at 11:11 PM
The relevant question is not whether AI can already write code (it clearly can), but whether our current languages are the right interface for expressing intention, uncertainty, and exploration. 🧵 3/14
January 20, 2026 at 11:11 PM
This question is not only technical. It is about how humans and machines negotiate meaning. Programming languages were originally designed to be precise for computers and tolerable for humans. With AI in the loop, that balance may be shifting again. 🧵 2/14
January 20, 2026 at 11:11 PM
So the question remains open: is it worth implementing a hierarchical Ralph system, or is it better to directly design a proper deep-agent orchestration from the start? #AI #LLM #VibeCoding #RalphWiggum #AutonomousAgents #Optimization #MultiAgentSystems #DeepAgents 🧵 12/12
January 12, 2026 at 7:36 PM
I honestly do not know the answer yet. The idea is attractive, but the cost, complexity, and risk of false convergence are real. 🧵 11/12
January 12, 2026 at 7:35 PM
Maybe a disciplined deep-agent architecture with explicit roles, metrics, and stopping criteria is simply a cleaner and more robust solution. Or maybe a lightweight hierarchical Ralph system could be enough for fast exploration and prototyping. 🧵 10/12
January 12, 2026 at 7:35 PM
That makes it powerful, but also dangerous if it is not controlled. At this point, the obvious question appears: is this still just "vibe coding with extra steps", or are we basically reinventing an orchestration of deep agents? 🧵 9/12
January 12, 2026 at 7:35 PM
models, or even genetic and Monte Carlo-like searches, but now with a semantic engine (LLMs) instead of blind random mutations. The "mutation" is no longer random; it is guided by language, context, and partial understanding. 🧵 8/12
January 12, 2026 at 7:35 PM
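To make that contrast concrete, a tiny hypothetical sketch: `randomMutate` is the blind operator from classical genetic search, while `semanticMutate` delegates to a stubbed LLM call. All names are mine, not an existing API:

```typescript
type Candidate = { code: string; score: number };

// Blind: flip a character with no understanding of the program.
function randomMutate(c: Candidate): string {
  const i = Math.floor(Math.random() * c.code.length);
  return c.code.slice(0, i) + "x" + c.code.slice(i + 1);
}

// Hypothetical stand-in for a language-model call that rewrites `code`
// in light of feedback (failing tests, error messages, partial results).
async function llmPropose(code: string, feedback: string): Promise<string> {
  return code; // stub
}

// Guided: the "mutation" uses language, context, and partial understanding.
async function semanticMutate(c: Candidate, feedback: string): Promise<string> {
  return llmPropose(c.code, feedback);
}
```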
Each sub-Ralph would run its own iterative loop, fixing errors, refining solutions, and stopping when local criteria are met. In the background, this is not very different from classical ideas we already know: hierarchical optimization, master–worker 🧵 7/12
January 12, 2026 at 7:35 PM
At a high level, the master Ralph would not write code itself, but would decompose the problem, launch sub-Ralphs with clear local objectives, evaluate their outputs, and decide what to keep or discard. 🧵 6/12
January 12, 2026 at 7:35 PM
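Putting the two posts above together, a minimal control-flow sketch of the master/sub-Ralph split. Every function here (`decompose`, `attempt`, `evaluate`, the stopping checks) is a hypothetical stub; this shows the shape of the loop, not a working agent:

```typescript
type Subtask = { goal: string };
type Result = { subtask: Subtask; output: string; ok: boolean };

// Stubs standing in for LLM-backed steps.
async function decompose(problem: string): Promise<Subtask[]> {
  return [{ goal: `part 1 of: ${problem}` }, { goal: `part 2 of: ${problem}` }];
}
async function attempt(goal: string, previous?: string): Promise<string> {
  return `draft solution for "${goal}"`; // an LLM call in a real system
}
async function meetsLocalCriteria(output: string): Promise<boolean> {
  return true; // e.g. tests pass, linter is clean
}
async function evaluate(r: Result): Promise<boolean> {
  return r.ok; // master-level check: keep or discard
}

// Sub-Ralph: iterate, fix, retry until local criteria are met or budget runs out.
async function runSubRalph(subtask: Subtask, maxIters = 5): Promise<Result> {
  let output = "";
  for (let i = 0; i < maxIters; i++) {
    output = await attempt(subtask.goal, output);
    if (await meetsLocalCriteria(output)) return { subtask, output, ok: true };
  }
  return { subtask, output, ok: false }; // explicit stop, no endless looping
}

// Master Ralph: never writes code itself; decomposes, launches, evaluates.
async function masterRalph(problem: string): Promise<Result[]> {
  const subtasks = await decompose(problem);
  const results = await Promise.all(subtasks.map((s) => runSubRalph(s)));
  const kept: Result[] = [];
  for (const r of results) if (await evaluate(r)) kept.push(r);
  return kept;
}

masterRalph("build a small CLI tool").then((kept) =>
  console.log(`kept ${kept.length} sub-result(s)`),
);
```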
Conceptually, this starts to look less like a meme and more like a real optimization architecture. 🧵 5/12
January 12, 2026 at 7:35 PM
What if we put a Ralph inside another Ralph? Or even better, a "master Ralph" that launches several smaller Ralphs, each one exploring a limited part of the problem? 🧵 4/12
January 12, 2026 at 7:35 PM
The question is simple: if we already have this so-called "Ralph Wiggum" approach — an iterative, almost naive loop where an AI keeps trying, fixing, and retrying until something works — does it make sense to push it one step further? 🧵 3/12
January 12, 2026 at 7:35 PM
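For readers who have not seen the pattern, a minimal sketch of that naive loop, assuming a stubbed `propose` (the LLM call) and `check` (tests or a compiler); both names are illustrative:

```typescript
// Try, check, feed the error back, retry until something works or budget is spent.
async function propose(goal: string, lastError?: string): Promise<string> {
  return `attempt at: ${goal}`; // stub for an LLM call
}
async function check(solution: string): Promise<string | null> {
  return null; // null = success; otherwise an error message to feed back in
}

async function ralph(goal: string, maxTries = 10): Promise<string | null> {
  let error: string | undefined;
  for (let i = 0; i < maxTries; i++) {
    const solution = await propose(goal, error);
    const failure = await check(solution);
    if (failure === null) return solution; // something works: stop
    error = failure; // retry with the failure as context
  }
  return null; // gave up
}
```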
I have been thinking about this idea recently, and I am curious whether it really makes sense, so I am sharing it here for people interested in AI, coding, and optimization. 🧵 2/12
January 12, 2026 at 7:35 PM