Manas Garg
@fuzzysparkle.com
Applied AI at Adobe. Jack of many trades. I write about whatever amuses me.
In a circus, everyone is a clown - some by profession and the rest by association.
June 5, 2025 at 2:08 AM
Finally tried Gemini for deep research. Very impressed.
May 11, 2025 at 4:41 PM
Raindrops on the roof
Thundering through the rainspout
Nature’s dance
February 13, 2025 at 2:35 PM
Someday, I’ll take LLM’s code generation capabilities for granted. For now, I am in admiration.
February 9, 2025 at 9:52 PM
Thinking is intelligence externalized?
February 7, 2025 at 3:52 PM
I go to DeepSeek to read its reasoning, not to see the output. It's usually when I don't think the LLM will give me an answer, but I still want to use it to stimulate my thinking.

Need an LLM that generates only thinking tokens.
February 2, 2025 at 4:23 PM
Good summary.
Explainer: What's R1 and Everything Else

This is an attempt to consolidate the dizzying rate of AI developments since Christmas. If you're into AI but not deep enough, this should get you oriented again.

timkellogg.me/blog/2025/01...
January 26, 2025 at 2:32 PM
Reminder: SF Bay Area is a very beautiful place.
January 18, 2025 at 7:20 PM
My MacBook Pro is good enough to run an LLM but not PowerPoint.
January 15, 2025 at 12:42 AM
uv is impressively fast. Good reminder that you can disrupt an existing product/tool simply (but not trivially) by bringing interaction lag close to zero.

docs.astral.sh/uv/
uv
docs.astral.sh
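In case you haven't tried it, a minimal sketch of uv standing in for venv and pip (commands as in uv's docs; timings will vary by machine):

```
# drop-in replacements for venv and pip
uv venv
uv pip install requests pandas
```

The speed comes from a resolver written in Rust plus aggressive caching, so repeat installs feel near-instant.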
January 11, 2025 at 5:09 PM
Benchmarks are worthless but benchmarking is everything...?
January 3, 2025 at 10:36 PM
Raindrops on patio
A warm blanket on the couch
My dog Yuko snores
January 3, 2025 at 6:01 PM
God, grant me the serenity to accept LLM's limitations,
Courage to use it where it (mostly) works,
And wisdom to know the one from the other.
January 3, 2025 at 5:38 PM
LLMs are not backward compatible. We are yet to realize the operational implications of taking a dependency on LLMs in business-critical use cases.
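A concrete operational hedge, sketched against the OpenAI chat completions API (the model IDs here are just examples): pin a dated snapshot instead of a floating alias, so an upstream model swap can't silently change behavior in production.

```
# "gpt-4o" is an alias that moves between snapshots;
# "gpt-4o-2024-08-06" stays fixed until it is deprecated
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-2024-08-06", "messages": [{"role": "user", "content": "ping"}]}'
```

Pinning doesn't make migrations go away (snapshots are eventually retired), but it keeps the change on your schedule.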
January 2, 2025 at 7:42 PM
Claude seems to be much better than ChatGPT, at least for programming. It generates more comprehensive code and provides better explanations. Is there an area where ChatGPT does better?
December 30, 2024 at 11:42 PM
Premature abstraction is the root cause of all development and operational complexity.

fhur.me/posts/2024/t...
December 28, 2024 at 5:21 PM
+100. My own approach is to build large projects in concentric circles of increasing scope and finesse - be it personal weekend projects or large multi-team projects at work. It not only works well for motivation but also helps course-correct on requirements and implementation choices.
Btw, speaking of Mitch, he wrote one of my favorite posts on how to motivate yourself to build things: get yourself to a demo.

mitchellh.com/writing/buil...
My Approach to Building Large Technical Projects
mitchellh.com
December 28, 2024 at 5:18 PM
It seems that what GPT-3 was for creative writing, GPT-o1 is for reasoning. I wonder if o3 would represent a GPT-3.5-like leap but for reasoning...
December 27, 2024 at 9:28 PM
Design by committee.
December 12, 2024 at 9:22 PM
Interesting. I didn't know that mirror bacteria was a thing.
“Although we were initially skeptical that mirror bacteria could pose major risks, we have become deeply concerned. We were uncertain about the feasibility of synthesizing mirror bacteria but have concluded that technological progress will likely make this possible”
🧪
www.science.org/doi/10.1126/...
Confronting risks of mirror life
Broad discussion is needed to chart a path forward.
www.science.org
December 12, 2024 at 9:11 PM
This was such a delightful read. An excellent start to the day, with a lot to reflect on. Ref: link.springer.com/article/10.1...
December 9, 2024 at 2:56 PM
While I can see why someone could feel this way, I am at the opposite end of the spectrum. I believe that a lot of innovation opportunity exists in pushing the art of the possible in the ~7B and ~32B parameter range. These models can run on consumer hardware and punch well above their weight(s) - pun intended.
Great post that captures the tension between classic ML approaches and modern deep learning while acknowledging the nuances of both.

“Working with LLMs doesn’t feel the same. It’s like fitting pieces into a pre-defined puzzle instead of building the puzzle itself.”

www.reddit.com/r/MachineLea...
From the MachineLearning community on Reddit
www.reddit.com
December 6, 2024 at 5:08 AM
Fall! One should be forgiven for believing that the world is on fire.
December 5, 2024 at 3:07 PM
LLMs have made programming exciting again. More people writing code just for the sheer fun of it.
December 5, 2024 at 4:53 AM
QwQ is fun (and even borderline useful). Since I get to see the raw reasoning output (unlike o1-preview, where I get to see only a summary of the reasoning), I find `ollama run qwq` to be a better thinking partner than ChatGPT.
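For anyone who wants to reproduce this, a minimal sketch with the Ollama CLI (the prompt is just an illustration):

```
# pull the model once (QwQ is a 32B model, so it's a sizable download), then chat
ollama pull qwq
ollama run qwq "Why does a mirror flip left-right but not up-down?"
```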
December 2, 2024 at 1:01 AM