Hrishi (@olickel.com)
Previously CTO, Greywing (YC W21). Building something new at the moment.

Writes at https://olickel.com
Disclaimer: I *know* model X and harness Y have been able to do these things before, but this feels like a genuine upgrade in all my testing - like this is the first time the model can actually see.

YMMV though - happy Thanksgiving!
December 1, 2025 at 6:22 AM
I think we've just scratched the surface on what's possible.

This might be the start of us actually being able to talk to models with images, conveying a lot more than what's been possible before.
December 1, 2025 at 6:22 AM
Opus 4.5 still amazes me - that Anthropic, in a single release, moved from models that could sort-of understand pictures to something that actually knows what it's looking at, and (from my testing) is the best model for visual understanding by far.
December 1, 2025 at 6:22 AM
What's also amazing is that these specs can now be collaborated on and version-controlled. You probably don't need this for a comic, but it's useful to have for other kinds of design.
December 1, 2025 at 6:22 AM
4. Process - more specific breakdown of the actual task (in this case that's outlining each specific strip)
5. Ideas - in this case that would be the characters themselves
6. Guidelines - for us that's style guidelines
December 1, 2025 at 6:22 AM
Obviously this is all new, but currently my design specs are structured as:
1. Background - writing, reasoning, definitions, etc.
2. Primary Task - what's the overarching objective?
3. Audience - who is this for? What is the intended outcome?

then comes more specific parts:
December 1, 2025 at 6:22 AM
Before rushing to hook up Opus and Nano in an endless loop that burns tokens, it's worth functioning as the ferry-agent in this loop manually.

Go to Opus with the results, ask for updated specs (or addendums), and go back to Nano with the specs.
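
If you do end up wiring it together later, the loop itself is small - roughly this shape. A minimal sketch; the three helpers are placeholders for whatever Opus / Nano Banana calls you're actually making, not real SDK functions:

```python
# Sketch of the Opus <-> Nano Banana ferry loop you'd eventually automate.
# The helpers below are placeholders, not real SDK functions.

def generate_image(spec: str) -> bytes:
    raise NotImplementedError("call Nano Banana with the current spec")

def get_critique(spec: str, image: bytes) -> str:
    raise NotImplementedError("show Opus the render plus the spec, ask for a critique")

def revise_spec(spec: str, critique: str) -> str:
    raise NotImplementedError("ask Opus for an updated spec or an addendum")

def ferry_loop(spec: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):               # cap the rounds so it can't burn tokens forever
        image = generate_image(spec)          # Nano Banana renders the current spec
        critique = get_critique(spec, image)  # Opus compares the render against the spec
        if not critique.strip():              # nothing left to fix
            break
        spec = revise_spec(spec, critique)    # updated spec goes back to Nano next round
    return spec
```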
December 1, 2025 at 6:22 AM
The same thing applies to frontend design. The loop of Generate from specs ↠ Render ↠ Critique ↠ Edit specs ↠ Regenerate works extremely well with Opus. For fun, you can also throw in Nano to generate out-there-undesignable-but-cool frontends to remix from.
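
The Render step is just getting a screenshot of the generated frontend in front of Opus. One way to do it (a sketch assuming Playwright and a local dev server - the URL and filenames are just illustrative):

```python
# Minimal "Render" step: screenshot the generated frontend so Opus can critique it.
# Assumes the generated frontend is already running on a local dev server.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 800})
    page.goto("http://localhost:5173")                    # wherever your dev server is
    page.screenshot(path="render.png", full_page=True)    # this file goes to Opus
    browser.close()
```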
December 1, 2025 at 6:22 AM
Text as the intermediate makes the designs so much more editable. The same specs produce the same results, and changing something - at least for me - has a predictable effect.
December 1, 2025 at 6:22 AM
It's also amazing at writing specs. The same plan->spec->build->review->spec workflow we've been using for code works *perfectly* for design, with Opus as the planning model and Nano Banana as the executor.

Sorry - next strip coming up!
December 1, 2025 at 6:22 AM
This entire strip (and others like it) was made from an Opus + Nano Banana collaboration.

Turns out Opus is now miles ahead of even Gemini at visual understanding. This is a model that can pick out and critique emotional impact, while noticing elements 10 pixels out of place.
December 1, 2025 at 6:22 AM
Runner-up:
• Gemini 3.0 in Antigravity is really good at one-shotting complex (in terms of functionality) frontends!

Goes without saying, I haven't tested everything, and anything not mentioned is either unknown or had weird results *for me*! Your mileage may vary :)
November 29, 2025 at 2:21 AM
Harnesses today (even Chat UIs) matter as much as the underlying models. The prompt, the tools, the nature of the loop - all make a massive difference for deep agentic work. While a lot of harness+model combos *can* do something, they're a long way away from doing it reliably, every single time.
November 29, 2025 at 2:21 AM
10) Claude Deep Research for research - the most steerable and deep research tool so far
11) Nano Banana Pro for image generation and diagrams - Flux 2 as a distant second for photo work
12) Firecrawl MCP connected to almost all of these for bulk web searching
November 29, 2025 at 2:21 AM
6) Claude Code if you want to have fun for the rest of the day and vibe code all your side projects
7) Codex for placebo testing
8) Claude UI with Sonnet (or GPT-5 on chat[dot]com) as a Google/Perplexity replacement
9) V0 for frontend if you already have assets in Figma
November 29, 2025 at 2:21 AM
3) Gemini 3 in AI Studio with Mandark for proper code review and creating solid plans for large changes
4) Gemini CLI with Gemini 3.0 (only Gemini 3) for long refactors and edits
5) Sonnet 4.5 (1M context) in Cline for long, complex tasks where you already have a good plan
November 29, 2025 at 2:21 AM