Currently at NVIDIA, formerly FB and LANL. Opinions mine as always.
🏡: Denver, CO
🔗 : https://www.ajdecon.org
First up is Cielo, a Cray XE6 I worked on at LANL! Which might actually be the prettiest supercomputer I’ve worked on.
I’m also a sucker for fluid dynamics analogies, and for highlighting the complexity of path-dependent processes.
A whopping 0.2” at DIA = we’re on the board. First measurable snow since April 18 (225 days).
❄️ 2nd-latest first snow of the season (12/10/21).
❄️ 225 snowless days = tied for the 3rd-longest snowless streak on record for Denver.
#COwx
Posit a world where high-power compute is highly regulated internationally due to AI concerns.
Put the players in as international inspectors going into a rogue nation that has stopped doing mandatory reporting.
and, tbh: jira-mcp is right there, and I developed a claude skill for interacting with jira via jira-cli (to save context space) in 45 minutes. these aren't tools; they're toolkits. to use them effectively, you have to let them help you build tools.
The problem of identifying great engineers in hiring is real. In addition to Lorin’s mentions of online presence and Github activity, this is also why so many orgs rely *heavily* on referrals.
The LLMs mostly do pretty well on computing questions, actually! But that’s not exactly representative of their performance across the board.
The results will make it clear just how much they get wrong, and how brazen they are about it.
Now, apply that to every subject in which you are NOT an expert.
The shit just doesn’t work.
But also… I really don’t think you could transplant the old Google search algorithm to today’s Internet and have it do nearly as well. The information environment itself has gotten a lot harder to search.
I don’t even have a problem with criticism coming from outside the field it’s aimed at!
I just roll my eyes at critics who do not *also* participate in some field in a constructive fashion.
LLMs should not "connect with" users. they are plumbing for words
OpenAI has since made the chatbot safer, but that comes with a tradeoff: less usage.
Gmail reads your inbox to provide features like spam filtering, filtering promotions, and providing summary cards.
It does not, apparently, use this to train GenAI models.
If you read the original, I highly recommend checking out the new version too; it’s really well-done.