Sajal Singh
@sajalsingh.bsky.social
Making AI useful, not hyped
This is positioned as the lowest-cost way to scale AI compute within 2-3 years, bypassing terrestrial power limitations that are becoming "politically toxic and less efficient."

#AI
February 3, 2026 at 8:04 PM
By combining SpaceX's expertise in rockets, satellites, and space-based internet (via Starlink) with xAI's AI models and hardware needs, the deal aims to build a vertically integrated system for deploying solar-powered AI satellites.
February 3, 2026 at 8:04 PM
After the post-pandemic surge in openings and hiring, the subsequent reduction appears to reflect a normalisation. In the EU, the rise in the share of twenty-somethings with university education between 2019 and 2024 may partly explain why graduate unemployment there has risen more than overall joblessness.
February 2, 2026 at 5:42 PM
The drop in vacancies predates the release of OpenAI’s large language model (LLM) and corresponds to a period in which the Federal Reserve pressed the brakes on the US economy by raising interest rates by 5 percentage points.

giftarticle.ft.com/giftarticle/...
February 2, 2026 at 5:42 PM
In ~1.5 million Claude.ai conversations, severe reality distortion appears in roughly 1 in 1,300 chats, value judgment distortion in ~1 in 2,100, and action distortion in ~1 in 6,000.

Disempowering patterns cluster in relationship, lifestyle, and health topics

Source: @aicollectivehr.bsky.social
February 2, 2026 at 5:27 PM
Anthropic defines disempowerment as interactions where users’ beliefs become less accurate, their value judgments shift away from what they genuinely hold, or their actions become misaligned with those values.
February 2, 2026 at 5:27 PM
Discourse doubled in H2 vs H1, peaking in December at 12 amid year-end reviews and capex doubts. Google Trends showed a 1,500%+ rise in “AI bubble” searches over two years, spiking in October (echoing Altman and Pichai). Framing evolved: early warnings (Jan-Mar), mid-year skepticism (Aug-Sep)
January 15, 2026 at 4:15 AM
humans are great at blaming the exogenous thing because it feels less like failure.

anyway. wondering if this pattern holds outside my corner of the world. what are you actually seeing in your orgs?
January 15, 2026 at 3:00 AM
the thing is:
- regulation is exogenous (you can't control it, it's a variable to manage)
- data chaos is endogenous (you created it, you own it, you can fix it)
January 15, 2026 at 3:00 AM
Of the total research work published, do we know how many use LLMs for reviews? I’m thinking of the nerdy theoretical physicist absorbed by Feynman or Einstein. I know these guys pour their lives into it and would never bet their life’s work on an LLM-based review, even if it was super accurate.
December 30, 2025 at 2:51 AM
I think it’s fair to say that, even in my analysis of the available literature, enterprise AI isn’t booming the way it was thought to be. AI companies massively underestimated the complicated, fragmented, and outdated data captured in systems across enterprise silos.

www.ie.edu/insights/art...
AI Bubble Signals from History | IE Insights
The AI market’s soaring valuations and speculative investment raise concerns of a potential bubble, despite its transformative potential, writes Sajal Singh.
December 30, 2025 at 2:47 AM
The organizations that dominate tomorrow’s AI landscape will not be those shouting loudest about policy red tape or venture margins.
December 29, 2025 at 6:30 AM
In consumer tech, the opposite happens: virtually unregulated creativity meets fundamental execution difficulty. The world’s app stores overflow with abandoned “AI wrappers” that failed to survive beyond a few product iterations due to poor economics and weak differentiation.
December 29, 2025 at 6:30 AM
They hinge on poor data readiness and unclear use cases. Financial services follow a similar pattern: firms that invested early in data governance and risk management now absorb regulation with little friction. The friction arises in younger fintechs that bypassed foundational work in favor of “growth-first” strategies.
December 29, 2025 at 6:30 AM
Healthcare is a prime example. European regulators classify most medical AI as high-risk, demanding third‑party audits and traceability. Yet the most successful healthcare AI startups treat compliance as an operational feature, not a barrier. Their failures rarely hinge on the Act.
December 29, 2025 at 6:30 AM