the ideal networking event is 5-7pm i am now home (hotel) in my jammies chilling after eating free food and talking about aliens for two hours because that's what people ask about when you say you do astro
November 20, 2025 at 1:45 AM
unfortunately my incredible performance as "the bad boy of SC" in the orientation skits impressed the committee so much that i got dragged into doing promo interviews for the conference
November 17, 2025 at 3:11 PM
nothing feeds the old 'i'm in purgatory and nothing is real' delusions like this paper that has been done 4 times and is still not published and i have to redo some of the figures again.
November 7, 2025 at 5:12 PM
red + orange; red + purple: too similar
yellow: hard to see
blue: assigned elsewhere
green + purple: joker
red + green: colorblind
orange + green: ew
orange + purple: ewww
we need shrimp colors in matplotlib because i can't figure out a way to make these two lines different colors without it being bad or ugly
November 6, 2025 at 5:28 PM
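A minimal sketch of one way out of the color deadlock above, assuming matplotlib's built-in CSS named colors ('coral' standing in for shrimp, 'teal' kept clear of the banned pairs); the two curves are made-up placeholders, not the actual figure.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)   # placeholder x-axis

fig, ax = plt.subplots()
# 'coral' is about as shrimp as the named colors get; 'teal' is dark enough to read
# and far from red/orange/green, so the colorblind and "ew" cases are avoided
ax.plot(x, np.sin(x), color="coral", linestyle="-", label="model A")
ax.plot(x, np.cos(x), color="teal", linestyle="--", label="model B")
ax.legend()
plt.show()

Doubling up with different linestyles (or markers) also keeps the two lines tellable-apart for colorblind readers and in grayscale prints.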
When a chatbot gets something wrong, it’s not because it made an error. It’s because on that roll of the dice, it happened to string together a group of words that, when read by a human, represents something false. But it was working entirely as designed. It was supposed to make a sentence & it did.
June 19, 2025 at 11:28 AM
Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
June 19, 2025 at 11:21 AM
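A toy illustration of the "roll of the dice" point (not how any real LLM is implemented, just frequency-weighted word sampling over a made-up corpus): a sampler like this is "right" only as often as the correct continuation happens to dominate the counts, and it is working as designed either way.

import random
from collections import Counter

# made-up corpus: the true statement is written down more often, but not always
corpus = [
    "the sky is blue",
    "the sky is blue",
    "the sky is blue",
    "the sky is green",
]

# count how often each word follows "the sky is"
continuations = Counter(line.split()[-1] for line in corpus)

# sample the next word in proportion to its frequency -- the roll of the dice
words, counts = zip(*continuations.items())
next_word = random.choices(words, weights=counts)[0]

print(f"the sky is {next_word}")   # usually "blue", sometimes "green"; neither is an "error" to the sampler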
discovered that my beautifully edited down research statement that perfectly fits into the page limit is a font size too small. send thoughts and prayers.
October 22, 2025 at 7:08 PM