gradient-hiking.bsky.social
@gradient-hiking.bsky.social
140k is dead wrong, but analyzing an individual's experience instead of comparing a weighted basket of goods to nominal average wages is the important part.

1) No single real person has the median wage + the average CPI-U basket.
2) In high wage/inflation regimes, a larger share of people end up as winners or losers due to variance (toy simulation sketched below).
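As a toy illustration of point 2 (every number below is a made-up assumption, not measured wage or CPI data), a quick Monte Carlo shows how a small average real-wage gap can coexist with a large share of individual losers:

```python
# Toy Monte Carlo: individual real-wage outcomes vs. the aggregate statistic.
# All parameters are hypothetical illustrations, not BLS/CPI data.
import random

random.seed(0)
N = 100_000
WAGE_MEAN, WAGE_SPREAD = 0.045, 0.06   # assumed nominal wage growth across people
INFL_MEAN, INFL_SPREAD = 0.040, 0.03   # assumed personal-basket inflation across people

real_changes = [
    random.gauss(WAGE_MEAN, WAGE_SPREAD) - random.gauss(INFL_MEAN, INFL_SPREAD)
    for _ in range(N)
]

avg_real = sum(real_changes) / N
losers = sum(r < 0 for r in real_changes) / N
print(f"average real wage change: {avg_real:+.2%}")   # small positive number
print(f"share losing ground:      {losers:.0%}")      # large minority despite the average
```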
November 26, 2025 at 3:07 PM
I think it’s closer to the “living wage” that MIT publishes, not the poverty line. The Clarke County, GA living wage has to cover $93k in expenses for 2 parents, 2 children, both working.

livingwage.mit.edu/counties/13059
Living Wage Calculator - Living Wage Calculation for Clarke County, Georgia
November 25, 2025 at 11:59 PM
Read Blinder's history of monetary + fiscal policy and was surprised how similar the 1960s/70s/80s sounded to today. Turn the fiscal crank no matter what, from JFK to Reagan.
October 14, 2025 at 10:48 PM
@conorsen.bsky.social why so dismissive of the debasement trade (sans cranks)? IMO it stems from looking at historical fiscal + monetary policy + debts since the 1950s and noticing how few episodes of actual deflation we've had. The Fed + Congress will “fail” on the right tail of inflation, never the left tail.
October 14, 2025 at 10:45 PM
Podcasters are less critical of guests: most questions are softballs w/ minimal pushback, and they're fuzzier on facts (e.g. Odd Lots, Derek Thompson). Writers at the WSJ and NYT are much tougher in print, leading to more trust. The recent Plain English ep on the DC buildout is an example of weak questioning.
September 26, 2025 at 11:02 PM
Put yourself in their shoes. If you have *already lost* game A, which was buying a home before the 2021 price shock, why not play game B (recession/blowup), even if you know the odds are low?
September 22, 2025 at 2:05 PM
Slightly related, but this Jason Furman piece was a great retrospective on the legislative side of the post-pandemic period. Curious what your take is. IMO hindsight says Biden went *way* too far with fiscal spending when the data said the economy was already on a huge upward swing. www.foreignaffairs.com/united-state...
The Post-Neoliberal Delusion
The tragedy of Bidenomics.
February 15, 2025 at 7:52 PM
I've wondered whether the prudent strategy, à la Taleb, is to use ~0.5% of SPY exposure to buy 10-25% OTM puts, which reduce the pain of a large decline by providing the psychological benefit of one holding showing “a large green number” plus a cash infusion worth ~2-4% of your portfolio, i.e. similar to SWR.
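Rough arithmetic for that idea; the premium, contract count, and crash payoff below are hypothetical placeholders, not real option prices:

```python
# Back-of-envelope math for a Taleb-style tail hedge.
# Every number is an assumed illustration, not market data.
portfolio = 1_000_000
hedge_budget = 0.005 * portfolio     # ~0.5% of SPY exposure spent on OTM puts
premium_per_contract = 500           # assumed cost of a 10-25% OTM put
contracts = hedge_budget / premium_per_contract

crash_multiple = 6                   # assumed payoff multiple in a large decline
payout = contracts * premium_per_contract * crash_multiple

print(f"annual hedge drag: {hedge_budget / portfolio:.1%}")
print(f"payout in a crash: ${payout:,.0f} ({payout / portfolio:.0%} of portfolio)")
# With these assumptions the crash payout lands around 2-4% of the portfolio,
# i.e. roughly one year of safe-withdrawal-rate cash, for ~0.5%/yr of drag.
```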
February 14, 2025 at 9:05 PM
They are unlikely to compete with OAI, Goog, or Meta in AI long term because 1) MSFT proper pays engineers 40% less, and 2) their core biz didn't lever ML, unlike search, ads, and social media, which put enormous engineering effort into ML for ~14 years.
January 30, 2025 at 8:51 PM
They invested in OpenAI because they fundamentally lack internal talent. MSFT is an integrator; OpenAI is an innovator. Thus if the future is market-shifting AI breakthroughs, what % of market share goes to MSFT? If it's a ho-hum status quo, they do well IMO. x.com/TechEmails/s...
January 30, 2025 at 8:51 PM
- Let's guess ~60% of MSFT/FAANG capex is for inference demand, meaning serving customer requests. MSFT has trillions of (inference) queries to DeepSeek's millions, thus inflating capex (rough arithmetic sketched below).
- More GPUs means faster researcher experimentation -> faster improvements, i.e. not all GPUs go to “one training run”.
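A rough sketch of the inference-capex point; the query volumes and per-query costs are made-up round numbers, not disclosed figures:

```python
# Back-of-envelope: why serving volume, not a single training run, can dominate capex.
# All figures are hypothetical round numbers for illustration.
SECONDS_PER_GPU_YEAR = 3600 * 24 * 365 * 0.7   # one GPU at ~70% utilization
GPU_SECONDS_PER_QUERY = 0.5                    # assumed average inference cost per request

def inference_gpus(queries_per_year: float) -> float:
    return queries_per_year * GPU_SECONDS_PER_QUERY / SECONDS_PER_GPU_YEAR

print(f"hyperscaler (2 trillion queries/yr): ~{inference_gpus(2e12):,.0f} GPUs")
print(f"small lab (50 million queries/yr):   ~{inference_gpus(5e7):,.1f} GPUs")
# The fleet size scales linearly with query volume, so serving demand alone
# can dwarf whatever one training run costs.
```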
January 24, 2025 at 9:08 PM
- The reasoning model (R1) is genuinely impressive, but it's algorithmically impressive. Its novelty didn't come from scaling compute.
January 24, 2025 at 9:08 PM
- The v3 (non-reasoning) model is at gpt-4o performance level, which is 8+ months old.
- The v3 model likely used “knowledge distillation”, i.e. training a small model from a larger model (a minimal sketch follows below). If they used OAI's model to do KD, it means you need less compute (but somebody has to train the frontier!)
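For readers unfamiliar with the term, a minimal sketch of what knowledge distillation looks like in code (PyTorch-style; the random logits are stand-ins, and nothing here is DeepSeek's actual training setup):

```python
# Minimal knowledge distillation: a small "student" is trained to match the
# output distribution of a larger, frozen "teacher". Purely illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then pull the student
    # toward the teacher via KL divergence (scaled by T^2, the usual convention).
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage with random logits standing in for real model outputs.
vocab_size = 32_000
student_logits = torch.randn(4, vocab_size, requires_grad=True)
with torch.no_grad():
    teacher_logits = torch.randn(4, vocab_size)   # the larger model's predictions
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()   # gradients flow only into the student
```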
January 24, 2025 at 9:08 PM
You're missing a few key puzzle pieces on DeepSeek:

- The quoted $5M training cost is likely wrong. They also likely have 50k+ H100 GPUs which they must lie about due to export controls. www.interconnects.ai/p/deepseek-v...
www.reddit.com/r/singularit...
DeepSeek V3 and the cost of frontier AI models
The $5M figure for the last training run should not be your basis for how much frontier AI models cost.
January 24, 2025 at 9:08 PM
If 100% of Gary Marcus posts are a negative reading of ML-related news, and if P(negative_reading) = 1, then entropy goes to zero, meaning the information gain is zero... was there any point in reading?
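The entropy math behind the quip, with the probabilities being the joke's hypotheticals rather than any measured base rate:

```python
# Shannon entropy of a binary "will the post be negative?" variable.
import math

def entropy(p: float) -> float:
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), with the 0*log(0) = 0 convention.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(entropy(0.5))   # 1.0 bit  -> a genuinely uncertain take carries information
print(entropy(0.99))  # ~0.08    -> almost nothing left to learn
print(entropy(1.0))   # 0.0      -> P(negative_reading) = 1: zero information gain
```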
December 22, 2024 at 1:10 AM
Take a look at OpenAI's o3 model announcement today. Extremely impressive. I'd wager inference-time compute scaling (as opposed to pre-training) would put a floor on any capex cooling. It's a different scaling vector and needs a different datacenter/network/GPU architecture buildout.
December 20, 2024 at 10:46 PM