Calvin McCarter
@calvinmccarter.bsky.social
220 followers 450 following 59 posts
calvinmccarter.com
calvinmccarter.bsky.social
i first read that as:

"power washing" but for the feet. idea to fill out later
calvinmccarter.bsky.social
Slightly off-topic, but one of my biggest pet peeves is when people say about baby sadness, "Don't worry: they won't remember this." In the long run, we are all memoryless!
calvinmccarter.bsky.social
what are the reasons for this from your perspective?
calvinmccarter.bsky.social
maybe not more expensive to manufacture in an absolute sense. but if chinese internal demand rises, and it no longer needs us exports, then china may no longer see the value in exporting to the us. in which case, the cost of consumer goods *in the us* would rise.
calvinmccarter.bsky.social
also, immigrants will do less of our low-wage / low-status work for us (declining LatAm TFR). meanwhile, AI will (initially) compete with high-wage / high-status workers. so we're still looking at a combustible political situation.
calvinmccarter.bsky.social
tbf, while AI is going to start doing more of our work for us, East Asia will start doing less of our work for us (their trade surpluses will decline due to their aging demographics and the end of the USD as reserve currency)
calvinmccarter.bsky.social
In principle I agree, but federal lands are relatively poorly managed wrt fire prevention and management. Obviously it would be preferable to just fix that, but I'm not sure whether that's realistic.
calvinmccarter.bsky.social
Related to your earlier assessment of the probability of the US striking Iran, how would you assess the probability of an Israeli strike? Is it harder to predict, because Israel doesn't need to bring assets into the region?
calvinmccarter.bsky.social
"autoregression for training, diffusion for inference"
calvinmccarter.bsky.social
alphaxiv does this for all arxiv papers -- just s/ar/alpha/ in the url -- and i've been told that it's coming soon to {bio,chem,med}rxiv as well.
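A hypothetical one-liner illustrating the s/ar/alpha/ URL trick above (the paper ID is a placeholder, not a real reference):

```python
# Swap the first "ar" in an arXiv URL for "alpha" to get the
# corresponding alphaXiv page. The paper ID below is a placeholder.
url = "https://arxiv.org/abs/1234.56789"
alphaxiv_url = url.replace("ar", "alpha", 1)  # replace first match only
print(alphaxiv_url)  # https://alphaxiv.org/abs/1234.56789
```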
calvinmccarter.bsky.social
I don't disagree with you exactly, but if an institution's natural defenders are AWOL (or even siding with its enemies) due to a litany of grievances, then it's probably already too late to save that institution.
calvinmccarter.bsky.social
(not that there's anything wrong with that)
calvinmccarter.bsky.social
I just delete my old repo, then give myself a new username...
calvinmccarter.bsky.social
Has anyone tried far-UVC in their home? It's now dropped into the ~$300 price range where I'm interested in trying it for myself. substack.com/home/post/p-...
Flipping the switch on far-UVC
We’ve known about far-UVC’s promise for a decade. Why isn't it everywhere?
calvinmccarter.bsky.social
it's definitely a Michigan thing
calvinmccarter.bsky.social
I am slightly cynical about the Clean Label Project, given that it seems to be "pay to play". Also, afaict, it was started by the founder of ByHeart formula, and theirs was like the first thing to get certification. Not that this is necessarily bad -- ByHeart and Kendamil are the best formula IMO.
calvinmccarter.bsky.social
the website has a list: cleanlabelproject.org/product-cate... which is specific and helpful, though oddly it slightly differs from the brands in the report:
calvinmccarter.bsky.social
Here's a link to the report: cleanlabelproject.org/wp-content/u... (TLDR heuristics: whey is better than plant-based, non-organic is better than organic, unflavored is better than chocolate-flavored)
calvinmccarter.bsky.social
Ultra exciting! And it's gratifying to see that this method uses the kernel density integral preprocessing method that I published in @tmlr-pub.bsky.social (2023). (One takeaway: even if your ML research focus isn't deep learning, pursue directions that complement rather than compete with it.)
sammuller.bsky.social
This might be the first time after 10 years that boosted trees are not the best default choice when working with data in tables.
Instead, a pre-trained neural network is: the new TabPFN, which we just published in Nature 🎉
calvinmccarter.bsky.social
This creates a train-inference gap if one is training at any fixed masking rate. Of course, with MLMs people don't even try that, and instead use pseudo-likelihood as an approximation to the likelihood. Besides being approximate, pseudo-likelihood takes L separate forward passes.
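The L-forward-pass cost mentioned above can be sketched with a toy stand-in for the MLM (`toy_mlm`, `MASK`, and the vocabulary are all made up for illustration, not any real model's API):

```python
import math

MASK = "[MASK]"
VOCAB = ["a", "b", "c", MASK]

def toy_mlm(tokens, i):
    # Stand-in for one model forward pass: returns a (uniform)
    # distribution over non-mask tokens for the masked position i.
    return {tok: 1.0 / (len(VOCAB) - 1) for tok in VOCAB if tok != MASK}

def pseudo_log_likelihood(tokens):
    # Mask each position in turn: scoring one sequence of length L
    # costs L separate forward passes.
    total = 0.0
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        probs = toy_mlm(masked, i)
        total += math.log(probs[tok])  # log p(x_i | x_{-i})
    return total

seq = ["a", "b", "c", "a"]
print(pseudo_log_likelihood(seq))  # 4 * log(1/3) for the uniform stand-in
```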
calvinmccarter.bsky.social
When one evaluates the log-likelihood of a sequence of length L via the chain rule of probability, the first term has a missingness fraction of 1, the second has (L-1)/L, etc. So the inference-time masking rate is ~ Uniform[0, 1].
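The arithmetic above can be sketched in a few lines (L = 8 is just an illustrative choice; the 15% rate is the common BERT-style fixed masking rate):

```python
# Masking fractions seen when scoring a length-L sequence via the chain
# rule: the i-th term (1-indexed) still has (L - i + 1) of L positions
# masked, so the rates sweep from 1.0 down to 1/L.
L = 8
inference_rates = [(L - i) / L for i in range(L)]
print(inference_rates)  # [1.0, 0.875, ..., 0.125] -- roughly Uniform(0, 1]

# By contrast, an MLM trained at one fixed masking rate only ever sees
# that single missingness level, hence the train-inference gap.
train_rate = 0.15
print(train_rate in inference_rates)  # False
```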