Computational Cosmetologist
@dferrer.bsky.social
ML Scientist (derogatory), Ex-Cosmologist, Post Large Scale Structuralist

I worked on the Great Problems in AI: Useless Facts about Dark Energy, the difference between a bed and a sofa, and now facilitating bank-on-bank violence. Frequentists DNI.
I think this is true for an elite core group, but, like WSB and noncredibledefense, the hyperbole gets taken as literally true by less savvy newcomers.
December 3, 2025 at 2:01 AM
Later someone lost a bet and had to put it in a paper.
December 3, 2025 at 1:47 AM
In a sane world, this was made by a group of applied math grad students circa 2022 playing a drinking game in lockdown. They went in a circle and had 3 seconds to name something in a random zone, or they had to drink. They were drunk when they started. The board was kept to laugh at later.
December 3, 2025 at 1:44 AM
There’s also a completely plausible story about environmental harm here—by the ag industry. Linking the two feels like trying to blame De Beers for US school shootings. It isn’t denying that kids are dying here and abroad to say the link isn’t real.
December 2, 2025 at 3:32 AM
And, like, microchip production considered end-to-end is a dirty industry. There is a completely plausible story to tell about data center environmental damage. It’s just mostly abroad, so it has less emotional resonance.
December 2, 2025 at 3:32 AM
Of course you have to give as much leeway to the argument as possible to make your point. It just bugs me that this is going to be taken as industry shilling when it’s already going out on a limb for the AI water cohort.
December 2, 2025 at 3:32 AM
This also assumes the evaporated water is just gone, instead of the likely outcome of recirculating and slightly diluting the ag runoff. The “concentration is worse than pollution” theory is a tough sell. You need the local air moisture to be completely separate from the local hydrological system.
December 2, 2025 at 3:32 AM
Also why math proof RL is the strongest performing of the bunch—you have far more fine-grained attribution from the proving language
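Toy sketch of what that buys you (`verify_step` here is a hypothetical stand-in for a proof-checker call, not a real API):

```python
# Per-step reward from a proof checker vs. a single terminal reward.
# `verify_step` is a placeholder for a kernel check (Lean/Coq-style);
# the point is that the checker tells you exactly which step failed.

def stepwise_rewards(proof_steps, verify_step):
    """Reward each step the checker accepts; stop at the first
    rejection. With only a terminal reward, a 40-step proof that
    dies at step 39 gives you nothing to attribute to any step."""
    rewards = []
    for step in proof_steps:
        ok = verify_step(step)
        rewards.append(1.0 if ok else -1.0)
        if not ok:
            break  # credit assignment stops exactly here
    return rewards
```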
December 1, 2025 at 1:32 PM
Last speaker at the orientation for my program was from uni mental health. She gave out cards and said, “When the thoughts of self-harm come—and they will come—you’ll want this handy.”

I thought it was an awkward faux pas until year 2. It was not.
December 1, 2025 at 12:03 AM
Damn, I would say you could even swap out “good” for “legitimate” and this would still be true outside the arts. A sci/eng PhD is still effectively robbery even when they *do* pay you.
November 30, 2025 at 11:56 PM
CNNs in particular have so much continuity with classical CV that it’s hard to draw a boundary. Higher-level hand-designed convolutional features were already being tried in the 2000s.
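For anyone who hasn’t seen the classical side: the Sobel edge filter is exactly a hand-designed convolutional feature, a fixed kernel where a CNN would learn one. A minimal sketch:

```python
# Sobel edge detection: a fixed, hand-designed 3x3 convolution kernel,
# the kind of feature a CNN's early layers end up learning on their own.
import numpy as np
from scipy.signal import convolve2d

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def edge_response(image):
    # Horizontal and vertical gradients from the fixed kernels;
    # the gradient magnitude is the classic hand-crafted edge map.
    gx = convolve2d(image, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(image, SOBEL_X.T, mode="same", boundary="symm")
    return np.hypot(gx, gy)
```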
November 30, 2025 at 10:33 PM
No, but you see this is merely plagiarism of existing comics from the training corpus, extrapolated to match the prompt. This isn’t generalization, though, because we’ve proven that’s impossible with a thought experiment and analysis of a different system.
November 30, 2025 at 10:08 PM
"Great point! You're not just juicing KPI's, you're disrupting the entire industry. Brilliant, poignantly insightful questions like 'how to spend less on office heating' are the kind of queries your employees are too timid to ask..."
November 30, 2025 at 4:08 AM
You have to be an Enterprise Leader to understand. Lesser mortals cannot comprehend the realm of pure Decision, immaculately cleansed of all data and expertise.

In reality, I'm imagining it's just a system prompt asking for maximally sycophantic replies
November 30, 2025 at 4:08 AM
Love to have my pretensions to Taste and Aesthetics nourished by following the guidance of someone with the time to read random new things
November 30, 2025 at 2:07 AM
This but only semi-ironically. Found so many good books this way.
November 30, 2025 at 1:59 AM
I have been searching for a year now and would love to find something.

It’s such a tightly written little novel. An absolute joy. Most of the stuff that’s supposedly like it is leaden with lore and sprawling mystery that swallows the plot.

I wish I had better news.
November 30, 2025 at 1:52 AM
Usually doing some variation of log-Sinkhorn, both discrete for set prediction and on 2-D grids.

I suspect the initial problem is a lot softer / less sparse and so converges faster. I’ve never done a real investigation because it was painful but still feasible after some tweaks
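For reference, a minimal NumPy sketch of the log-domain variant I mean (my own toy version, variable names and all, not any library’s API):

```python
import numpy as np
from scipy.special import logsumexp

def log_sinkhorn(C, a, b, eps=0.1, max_iters=500, tol=1e-6):
    # Log-domain Sinkhorn for entropic OT: cost C (n x m), marginals
    # a (n,), b (m,). Plan is P_ij = exp((f_i + g_j - C_ij) / eps).
    # Returns the plan and the iteration count, which is worth
    # watching -- it's what blows up when the problem gets hard.
    log_a, log_b = np.log(a), np.log(b)
    f, g = np.zeros_like(a), np.zeros_like(b)
    for it in range(max_iters):
        f = eps * (log_a - logsumexp((g[None, :] - C) / eps, axis=1))
        g = eps * (log_b - logsumexp((f[:, None] - C) / eps, axis=0))
        P = np.exp((f[:, None] + g[None, :] - C) / eps)
        if np.abs(P.sum(axis=1) - a).max() < tol:  # row marginal error
            return P, it + 1
    return P, max_iters
```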
November 27, 2025 at 12:45 PM
Profiling the initial steps always ends up giving me a very optimistic picture of how compute-intensive it will be.
November 27, 2025 at 4:00 AM
Whenever I've used OT distances as losses, the real killer in compute has ended up being conditioning, especially for EMD due to non-uniqueness. There's some point in early-to-mid training where the number of iterations required for convergence blows up by 10-20x.
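A toy way to see the effect with the log_sinkhorn sketch above: shrink eps so the problem gets sharper and more EMD-like, and watch the iteration count climb. Illustrative only, not a benchmark; in training the blow-up comes from the learned cost matrix itself rather than eps:

```python
# Count iterations to a fixed marginal tolerance as eps shrinks.
import numpy as np

rng = np.random.default_rng(0)
n = 64
a = b = np.full(n, 1.0 / n)                         # uniform marginals
x, y = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared distances

for eps in (1.0, 0.1, 0.01):
    _, iters = log_sinkhorn(C, a, b, eps=eps, max_iters=5000)
    print(f"eps={eps}: {iters} iterations")
```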
November 27, 2025 at 4:00 AM
Old scanners in particular tend to have some nasty hand-coded approximations to make them fast enough to work: you need to be scanning quickly enough that you can treat the target as stationary, but slowly enough to get decent SNR on the sensor and do the computations in real time.
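Roughly the back-of-envelope, with completely made-up numbers (shot-noise-limited SNR grows like the square root of dwell time, but the full frame has to finish well before the target moves):

```python
# All numbers are illustrative assumptions, not real scanner specs.
photon_rate = 1e6        # detected photons per second
snr_min = 20.0           # required per-pixel SNR; SNR = sqrt(rate * t)
n_pixels = 512 * 512     # pixels per frame
tau_motion = 5.0         # timescale on which the target moves, seconds

t_dwell_min = snr_min**2 / photon_rate       # SNR floor on dwell time
t_dwell_max = 0.1 * tau_motion / n_pixels    # keep frame time << motion

print(f"dwell window: [{t_dwell_min:.1e}, {t_dwell_max:.1e}] s")
if t_dwell_min > t_dwell_max:
    print("no feasible window -> hence the nasty approximations")
```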
November 25, 2025 at 1:45 AM