nateberkopec.bsky.social
@nateberkopec.bsky.social
If you are using Concurrent.physical_processor_count or Concurrent.processor_count to set your Puma/Unicorn worker counts, that is wrong.

Use Concurrent.available_processor_count. It takes into account cpu quotas in envs like k8s/docker.
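A minimal `config/puma.rb` sketch of the fix (the thread counts here are placeholders, not recommendations):

```ruby
# config/puma.rb — minimal sketch; Concurrent comes from the
# concurrent-ruby gem, which Rails already depends on.
require "concurrent"

# available_processor_count respects cgroup CPU quotas (Docker/K8s),
# so a pod limited to 2 CPUs on a 64-core node gets 2 workers, not 64.
workers [Concurrent.available_processor_count.floor, 1].max

# Threads per worker are workload-dependent; 3 is just a placeholder.
threads 3, 3
```

On hosts without a quota, `available_processor_count` should fall back to the same value as `Concurrent.processor_count`.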
November 25, 2025 at 5:02 PM
Tired: Repo github stars
Wired: Repo contributor count
November 24, 2025 at 9:03 PM
have you, too, been to so many conferences that you feel nostalgia for this?
November 24, 2025 at 5:01 PM
Example of the kind of thing I'm doing more of now in the age of LLMs:

A ~1000 line Ruby project for creating Datadog scorecards. A client had ~30+ microservices, but adherence to "platform" standards like "use jemalloc" was spotty. The LLM did it in something like ~1hr.
November 21, 2025 at 5:02 PM
Architecture astronauts create 1+ service layers, which are ~internal views. They end up having to serialize big objects before crossing layers, but only use tiny parts of the object on the other side. Since they didn't use lazy accessors, they waste tons of db queries/work.
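A toy Ruby sketch of the lazy-accessor alternative (all names are illustrative; `Database` stands in for a real store):

```ruby
# Toy sketch: instead of serializing a full record before crossing a
# service layer, hand over a thin view that fetches only the columns
# the other side actually reads, and memoizes each one.

# Stand-in for a real data store, so the sketch is runnable.
module Database
  QUERIES = []
  ROWS = { 1 => { email: "a@example.com", name: "Ada", bio: "..." } }

  def self.pick(id, column)
    QUERIES << column          # record every column fetch
    ROWS.fetch(id)[column]
  end
end

class LazyUserView
  def initialize(user_id)
    @user_id = user_id
    @cache = {}
  end

  def email
    @cache[:email] ||= Database.pick(@user_id, :email)
  end

  def name
    @cache[:name] ||= Database.pick(@user_id, :name)
  end
end

view = LazyUserView.new(1)
view.email                      # one query, for one column
puts Database::QUERIES.inspect  # => [:email] — :name was never fetched
```

The view crosses the layer boundary cheaply; the queries only happen if, and when, the other side reads a field.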
November 20, 2025 at 5:00 PM
If you too have a Rails app and would like to see your page load times improve like this, you should try our retainer service 😉
November 17, 2025 at 5:03 PM
I'm thinking about the future of the content side of my business. I think where we're headed, intellectual property isn't coming with us.

In a world without copyright, how do I capture value while still teaching people the skill of performance? Building their taste?
November 16, 2025 at 9:53 PM
After a deep dive on LLM agent security, here are things I think we should all be doing but aren't:

1. Running agents on remote containers only.
2. Doing internet research in a separate cleanroom env.
3. Having LLMs read logs daily for signs of exfiltration/promptjacking.
November 14, 2025 at 4:56 PM
As a result of running LLM agents daily:

1. My GitHub is now in vigilant mode. I'm signing all commits from now on.
2. My gpg signing key is on a yubikey which requires touch for confirmation.

Verified on GH == Nate reviewed it and pushed a button with his meaty finger.
November 13, 2025 at 5:00 PM
November 13, 2025 at 6:12 AM
In the age of LLM agents, the intruder is already inside the house. If you're running Claude Code in dangerous mode, a prompt injection attack is one bash call away from exfiltrating almost anything that's unlocked on your machine, or using your (auth'd) gh CLI to steal your repos.
November 12, 2025 at 5:04 PM
I'm constantly thinking about and auditing my own moral life. I think it's our duty as human beings, but particularly as technologists and wielders of capital.

But I don't think the solution is ever to pull back; to withdraw. Exclusion is ineffective as a means of creating change. It admits defeat.
November 9, 2025 at 11:01 PM
Welcome aboard Joshua Young to the Puma maintainers team 🫡
November 6, 2025 at 4:59 PM
What's the state of the art for OSS merch? We want to print some Puma stuff to hand out at conferences.
November 5, 2025 at 5:03 PM
Useful OSS work that doesn't show up as a GH contribution:

This person quickly prototyped a pure-Ruby HTTP parser, just to give us some ballpark of performance. We said "interesting, the results show this isn't great, thank you!"

github.com/puma/puma/p...
[Experiment] Pure ruby parser by swebb · Pull Request #3660 · puma/puma
Description As an experiment/learning exercise I decided to try implementing a HTTP parser in pure ruby. See #1889. I did this purely as an exercise and not in an attempt to get it merged. I...
github.com
October 30, 2025 at 4:58 PM
I need a macOS desktop app for monitoring GH Actions runs across multiple repositories, with system notifications when they either pass or fail. Anyone else have this workflow?
October 29, 2025 at 10:26 PM
October 29, 2025 at 8:56 PM
Prosopite + LLMs create a powerful automated workflow where you can get a PR fixing an N+1 with zero humans writing code
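A sketch of the Prosopite half of that loop (test-environment config in a Rails app; assumes the prosopite gem):

```ruby
# config/environments/test.rb sketch — requires the prosopite gem.
# Raising on every detected N+1 turns each one into a red test,
# which is exactly the pass/fail signal an LLM agent can churn against.
config.after_initialize do
  Prosopite.rails_logger = true
  Prosopite.raise = true
end

# test/test_helper.rb sketch: scan every test for N+1 queries.
class ActiveSupport::TestCase
  setup    { Prosopite.scan }
  teardown { Prosopite.finish }
end
```

With that in place, "fix the N+1" becomes "make the failing test green," which an agent can iterate on unattended.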
October 29, 2025 at 5:03 PM
Reposted
If you want to make a change or add a new feature to Ruby, I suggest reading www.a-k-r.org/pub/howto-pe...
Ruby's decision-making process isn't democratic or based on voting. It's more like a game of persuading Matz and module maintainers.
www.a-k-r.org
October 28, 2025 at 9:56 PM
Vibe check: I use LLMs for about 80% of my coding work now. That work is done with Claude Code, in the terminal. I run 1 to 2 agents at a time, dangerous mode. I rarely edit the code directly, but instead talk to the LLM until I get what I want, or write code/rules to corral it.
October 28, 2025 at 5:02 PM
If you have any control over it, you should be running your CI with NVMe/attached storage. 10-30% faster tests vs network-attached block storage (e.g. EBS) for instances that cost only ~2% more. Test suites are highly bottlenecked on disk.
October 27, 2025 at 4:58 PM
Bloomberg terminal, but for AWS spot instance pricing in multiple regions
October 24, 2025 at 4:58 PM
Using coding agents effectively is about brute-forcing.

Your task is to turn tedious coding tasks into something that can be brute-forced by $5 worth of GPU time.

You are setting up a loop which an agent can pass/fail itself against, and churn the loop until it passes.
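That loop, as a hypothetical Ruby sketch (the `check`/`fix` lambdas stand in for a real test command and a real agent; nothing here is a Claude Code API):

```ruby
# Brute-force loop: run the check, feed the failure back to the agent,
# repeat until it passes or the budget is spent.
def churn(check:, fix:, budget: 20)
  budget.times do
    result = check.call
    return true if result == :pass    # the loop's exit condition
    fix.call(result)                  # hand the failure to the "agent"
  end
  false                               # budget exhausted, still red
end

# Toy stand-in: this "agent" fixes the bug on its third attempt.
attempts = 0
check = -> { attempts >= 3 ? :pass : :fail }
fix   = ->(_failure) { attempts += 1 }

puts churn(check: check, fix: fix)  # => true, after 3 fix iterations
```

The human work is all in designing `check` — a loop with a weak or flaky pass condition just burns the budget.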
October 23, 2025 at 4:57 PM
Unsurprisingly, every "AI" product I've ever used is extremely buggy. Because, of course, they're building it "with AI".
October 22, 2025 at 4:57 PM
Rails "DIYstack" (kamal, hetzner, etc) hasn't yet solved reliability and trust for the data layer.

People running a business on Rails want backups, push-button recovery, zero-config HA (like Heroku PG premium). You can use a managed DB, but then you're stuck on hyperscalers.
October 21, 2025 at 5:02 PM