Vincent Carchidi
@vcarchidi.bsky.social
460 followers 580 following 1.5K posts
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.) https://vincentcarchidi.substack.com/ All opinions entirely my own.
vcarchidi.bsky.social
holy shit that was actually a text
carlquintanilla.bsky.social
“.. On Sept. 20, Trump meant to send a private message to Attorney General Pam Bondi urging her to prosecute” Comey.

“.. Trump believed he had sent Bondi the message directly .. and was surprised to learn it was public, the officials said.” 🤡

@wsj.com
www.wsj.com/politics/pol...
vcarchidi.bsky.social
Good time, given the discussions, to re-up this piece from last year arguing that the most impactful work in the field has been done during AI downturns, work that later underpins the AI booms (often after switching back to "AI" from other, less tarnished names).

cacm.acm.org/opinion/betw...
Between the Booms: AI in Winter – Communications of the ACM
vcarchidi.bsky.social
into the repurposing of unrealized infrastructure.

...which is a lot of unknowns tbf.
vcarchidi.bsky.social
Yeah it doesn't seem like there's room to think there's no impact, potential or actual.

My extremely lay impression is that the outstanding questions are: how expansive the risks of a burst would be; how deeply it would affect the rest of the economy and for how long; how quickly depreciation factors
vcarchidi.bsky.social
I'm glad you liked it!

And yeah, even though it's a continuous thing, once I've done this in any given area for a long enough period, a challenge is communicating the experience. I was thinking about the use of the term "understanding" before writing this...a can of worms but still indispensable.
vcarchidi.bsky.social
Surely there's a less anthropomorphic way to study this/interpret the data? They go into it very freely using *extremely* loaded terms like "scheming," "realize," "lying," etc.
vcarchidi.bsky.social
Really don't like dunking on people under the guise of knowing more than they do, even if maybe I don't. Just not my thing.

So disagreement IMO can be great but not loudly educating the dummies who disagree with me :)
vcarchidi.bsky.social
Really just to say: I like to get my thoughts out there, but I try to stay in my lane. There's a lot I'm interested in but don't know enough about to talk confidently on. And I don't have reach on here, but if I did, I would try to keep my wits about me. I like to think I've gotten better at it.
vcarchidi.bsky.social
I've stopped following the ones I have in mind but they're always popping up
vcarchidi.bsky.social
It's also causing some 'pro' people to pontificate on everything, since AI seems to touch on everything these days. And as a professional pontificator, I don't think some people know what they're talking about when they venture outside of the tech, but you wouldn't know it from their confidence.
vcarchidi.bsky.social
The loudest of the anti-AI crowd here has always been what it is, and that's a shame. Recently some bigger accounts have allowed themselves to become so negatively polarized by them that they basically exist to dunk on the lowest-hanging fruit, and that's a shame too.
vcarchidi.bsky.social
And this is the shit coffee, mind you (Folgers)
vcarchidi.bsky.social
So a normal-sized thing of coffee grounds costs $20 now
Reposted by Vincent Carchidi
tedunderwood.com
Unless the article is fully generated (not likely) this is a symptom of a much older problem, which is that people aren’t reading many of the sources they cite — just gesturing at them.
o.simardcasanova.net
It's not the first time I've seen this

I'm afraid that hallucinated citations are an issue that scientists and experts will have to deal with from now on
rikefranke.bsky.social
And here we go. I never wrote this article, and yet it is cited here.

www.liberalbriefs.com/geopolitics/...

And of course, it sounds so plausible that I seriously checked whether I had forgotten it or the footnote was slightly wrong.

#AIisnotresearch
vcarchidi.bsky.social
Half-baked thought, but I'm wondering if some of the model providers are seeing people privately using their chatbots for work, and maybe (wrongly?) inferring that work is already being automated even though it hasn't clearly shown up in public data...hence the continued momentum
vcarchidi.bsky.social
What I would really focus on, in this context, is the expectations set by companies, the cajoling of everyone to use it for everything, employment worries ("learn AI or get left behind, oops you got replaced anyway"), and the real possibility there's a downturn because of the investment hubris.
vcarchidi.bsky.social
I don't think it's at all unreasonable! There's lots to debate about where certain perceptions begin and end, but people can simultaneously use AI (say as a Google search replacement) and still not sense an overall improvement in quality of life, including a sense that it's damaging society.
vcarchidi.bsky.social
Yes. See other comments in the thread, but run-of-the-mill smartphones have far more adoption than that and aren't exactly the apple of the public's eye right now.
vcarchidi.bsky.social
I guess, on the other side of this, some people apparently very much did distinguish individual models from OpenAI, re: the backlash for deprecating 4o.

How representative that is, I really can't say.
vcarchidi.bsky.social
Agree with this, but do ordinary people make that distinction? Do the companies and the tech have the same identities?
vcarchidi.bsky.social
other tech has afforded them and the effects it seems to have on society.

Again, not rational necessarily, but both feelings exist simultaneously and I don't think most people ultimately are motivated by questions about whether they're intelligent or whatever.
vcarchidi.bsky.social
This has been basically my experience as well. I'm not (like you) saying the personal uses are great, but people are turning to them for daily tasks.

But the other side to this is that, at least in my experience and just a general observation, people actively dislike the kinds of lives this and
vcarchidi.bsky.social
I don't think I disagree, but not sure what you're getting at. Say more?
vcarchidi.bsky.social
No worries!

Agreed on the back and forth. And one example that I think also applies to how AI products are used is social media, especially toxic brands like Facebook: people might *regularly use it* and still hate it and think things would be better without it, without a sense of contradiction.